
Introduction
In the previous article, we covered Kubernetes fundamentals — Pods, Deployments, Services, scaling, and the control plane. Now we put that knowledge to practical use.
One of the biggest gaps in local development is the quality of your backing services. SQLite-in-memory is nothing like PostgreSQL HA. A single Redis node is nothing like a sentinel cluster. Spinning up fragile docker-compose stacks that die whenever you restart your laptop is nothing like a managed cloud service.
This guide sets up a persistent, on-demand local development infrastructure on your k3s cluster — PostgreSQL HA, MongoDB with transactions, Redis, RabbitMQ, Mailpit, and SigNoz APM — all reachable directly from your host machine as if they were managed cloud services. Your applications run outside Kubernetes, exactly as they do in a real dev workflow. The cluster handles the services.

Architecture Overview
The core insight that makes this work: MetalLB.
In a cloud environment, a LoadBalancer Service gets a real public IP from your cloud provider automatically. On a bare-metal or Multipass cluster, nothing assigns those IPs — you have to do it yourself. MetalLB is the answer. It watches for LoadBalancer Services and assigns IPs from a pool you define, making those IPs reachable from your host machine via L2 ARP.
Your Host Machine (running Node.js / NestJS / etc.)
     │
     │  connects to real IPs (10.46.7.100-108)
     ▼
MetalLB (L2 mode) — IP pool: 10.46.7.100–10.46.7.120
     │
┌────┴──────────────────────────────────────────────────────┐
│  k3s Cluster (Multipass)                                  │
│                                                           │
│  dev-infra namespace:                                     │
│    PostgreSQL HA    ──►  LoadBalancer 10.46.7.100:5432    │
│    MongoDB RS       ──►  LoadBalancer 10.46.7.101-103     │
│    Redis + Sentinel ──►  LoadBalancer 10.46.7.104:6379    │
│    RabbitMQ         ──►  LoadBalancer 10.46.7.105:5672    │
│    Mailpit          ──►  LoadBalancer 10.46.7.106:1025    │
│                                                           │
│  signoz namespace:                                        │
│    SigNoz UI        ──►  LoadBalancer 10.46.7.107:3301    │
│    OTLP Collector   ──►  LoadBalancer 10.46.7.108:4318    │
└───────────────────────────────────────────────────────────┘
Every service gets a dedicated IP from the pool. Your host apps use those IPs directly — no port-forwarding, no tunnels, no docker network magic. It just works like a remote managed service.
Project Structure
Clone or create the following layout in your repo:
dev-infra/
├── bootstrap.sh            # One-time setup — run once per machine
├── dev-infra.sh            # Daily CLI — start/stop/restart/status/logs
├── .env.example            # Copy to your projects
├── helm-values/
│   ├── postgresql-ha.yaml  # Bitnami PostgreSQL HA config
│   ├── mongodb.yaml        # Bitnami MongoDB ReplicaSet config
│   ├── redis.yaml          # Bitnami Redis + Sentinel config
│   ├── rabbitmq.yaml       # Bitnami RabbitMQ config
│   └── signoz.yaml         # SigNoz APM config
└── manifests/
    ├── metallb-pool.yaml   # MetalLB IP pool + L2Advertisement
    └── mailpit.yaml        # Mailpit deployment + service
Prerequisites
Before running the bootstrap, make sure you have:
- k3s cluster running (3-node Multipass setup from the previous article)
- kubectl configured and pointing to your cluster: kubectl cluster-info
- helm installed on your host: brew install helm or helm.sh
- Your Multipass subnet confirmed — this guide assumes 10.46.7.x (adjust if yours differs)
Check your subnet:
multipass list
# NAME          STATE    IPV4          IMAGE
# k3s-master    Running  10.46.7.61    Ubuntu 24.04 LTS
# k3s-worker-1  Running  10.46.7.183   Ubuntu 24.04 LTS
# k3s-worker-2  Running  10.46.7.206   Ubuntu 24.04 LTS
The MetalLB pool (10.46.7.100–10.46.7.120) must be on the same subnet and those IPs must be free. If your subnet is different, update manifests/metallb-pool.yaml and all loadBalancerIP fields in the Helm values files.
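If you do need to retarget to a different subnet, the change is mechanical — every 10.46.7. prefix becomes yours. A throwaway sketch (the retargetSubnet helper is hypothetical, not part of the repo):

```typescript
// Hypothetical helper (not part of the repo): rewrite the guide's assumed
// 10.46.7.x prefix in a config file's contents to match your own subnet.
function retargetSubnet(contents: string, newPrefix: string): string {
  // e.g. "10.46.7.100" -> "192.168.64.100"
  return contents.replace(/10\.46\.7\./g, `${newPrefix}.`);
}

const pool = "addresses:\n  - 10.46.7.100-10.46.7.120";
console.log(retargetSubnet(pool, "192.168.64"));
// addresses:
//   - 192.168.64.100-192.168.64.120
```

Run it over metallb-pool.yaml and every Helm values file, then double-check the result before applying.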
Step 1 — MetalLB (The Foundation)
MetalLB is what makes all the LoadBalancer IPs reachable from your host. Without it, LoadBalancer Services would sit in <pending> forever.
# manifests/metallb-pool.yaml
---
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: dev-infra-pool
  namespace: metallb-system
spec:
  addresses:
    - 10.46.7.100-10.46.7.120
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: dev-infra-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - dev-infra-pool
The L2Advertisement resource tells MetalLB to announce IPs via ARP on the local network — this is what makes 10.46.7.100 reachable from your Mac or Linux host.
Step 2 — PostgreSQL HA
We use Bitnami’s postgresql-ha chart, which gives us a proper HA setup: two PostgreSQL nodes managed by repmgr (replication manager), behind a pgpool connection pooler.
Your apps connect to pgpool — it handles routing reads/writes, connection pooling, and transparent failover.
# helm-values/postgresql-ha.yaml
global:
  postgresql:
    username: devuser
    password: devpassword
    repmgrPassword: repmgrpassword
    postgresPassword: adminpassword
    database: devdb
postgresql:
  replicaCount: 2
pgpool:
  replicaCount: 1
service:
  type: LoadBalancer
  loadBalancerIP: "10.46.7.100"
  ports:
    postgresql: 5432
persistence:
  size: 8Gi
  storageClass: "local-path"
Connection string for your host apps:
postgresql://devuser:devpassword@10.46.7.100:5432/devdb
This connects to pgpool — from the outside it looks exactly like a standard PostgreSQL server.
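Some client libraries want discrete host/port/database fields instead of a URI. Node's built-in WHATWG URL parser handles postgresql:// URLs, so the string can be decomposed without extra dependencies — a minimal sketch:

```typescript
// Decompose the connection string using Node's built-in URL parser —
// no extra dependencies needed for non-http schemes like postgresql://.
const url = new URL("postgresql://devuser:devpassword@10.46.7.100:5432/devdb");

const config = {
  host: url.hostname,              // "10.46.7.100"
  port: Number(url.port),          // 5432
  user: url.username,              // "devuser"
  password: url.password,          // "devpassword"
  database: url.pathname.slice(1), // "devdb"
};
console.log(config);
```

The resulting object has the shape most PostgreSQL clients (e.g. the pg package) accept directly.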
Step 3 — MongoDB ReplicaSet
MongoDB’s transaction support requires a ReplicaSet: against a standalone instance, session.startTransaction() fails outright ("Transaction numbers are only allowed on a replica set member or mongos"), so transaction code can't be exercised at all. This setup gives you a proper 3-node RS — your transaction code is tested against the same architecture it'll run on in production.
# helm-values/mongodb.yaml
architecture: replicaset
replicaSetName: rs0
replicaCount: 3
auth:
  enabled: true
  rootUser: root
  rootPassword: rootpassword
  usernames: [devuser]
  passwords: [devpassword]
  databases: [devdb]
externalAccess:
  enabled: true
  service:
    type: LoadBalancer
    loadBalancerIPs:
      - "10.46.7.101" # primary
      - "10.46.7.102" # secondary-0
      - "10.46.7.103" # secondary-1
    ports:
      mongodb: 27017
persistence:
  size: 8Gi
  storageClass: "local-path"
The full ReplicaSet URI for your host apps — use this, not just the primary IP, so your driver can handle failover:
mongodb://devuser:devpassword@10.46.7.101:27017,10.46.7.102:27017,10.46.7.103:27017/devdb?authSource=devdb&replicaSet=rs0
With Mongoose, set this in your connection options:
await mongoose.connect(process.env.MONGODB_URI, {
  replicaSet: "rs0",
});
Sessions and transactions now work exactly as they would on MongoDB Atlas.
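If you'd rather not hand-maintain that long URI, it can be assembled from the three LoadBalancer IPs so the member list lives in one place — buildMongoUri below is a hypothetical helper, not part of the repo:

```typescript
// Hypothetical helper (not part of the repo): assemble the replica-set URI
// from the three LoadBalancer IPs assigned by MetalLB.
function buildMongoUri(hosts: string[], user: string, pass: string, db: string): string {
  const members = hosts.map((h) => `${h}:27017`).join(",");
  return `mongodb://${user}:${pass}@${members}/${db}?authSource=${db}&replicaSet=rs0`;
}

const uri = buildMongoUri(
  ["10.46.7.101", "10.46.7.102", "10.46.7.103"],
  "devuser",
  "devpassword",
  "devdb",
);
console.log(uri); // prints the same replica-set URI shown above
```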
Step 4 — Redis with Sentinel
A single Redis node is fine for caching. But if you’re building queue workers, pub/sub, or anything that depends on Redis being alive across a restart, you want Sentinel — automatic failover without code changes in your app.
# helm-values/redis.yaml
architecture: replication
auth:
  enabled: true
  password: "redispassword"
master:
  service:
    type: LoadBalancer
    loadBalancerIP: "10.46.7.104"
    ports:
      redis: 6379
replica:
  replicaCount: 1
sentinel:
  enabled: true
  masterSet: mymaster
  quorum: 1
For simple connections (ioredis, node-redis):
redis://:redispassword@10.46.7.104:6379
For sentinel-aware clients, which handle master failover automatically:
// ioredis sentinel connection
const redis = new Redis({
  sentinels: [{ host: "10.46.7.104", port: 26379 }],
  name: "mymaster",
  password: "redispassword",
});
Step 5 — RabbitMQ
RabbitMQ with the management plugin — you get AMQP messaging and a browser-based UI to inspect queues, exchanges, bindings, and message rates.
# helm-values/rabbitmq.yaml
replicaCount: 1
auth:
  username: devuser
  password: devpassword
  erlangCookie: "a-very-secret-erlang-cookie-change-me"
plugins: "rabbitmq_management rabbitmq_peer_discovery_k8s rabbitmq_shovel rabbitmq_shovel_management"
service:
  type: LoadBalancer
  loadBalancerIP: "10.46.7.105"
  ports:
    amqp: 5672
    manager: 15672
Your apps connect via AMQP:
amqp://devuser:devpassword@10.46.7.105:5672
Open the management UI at http://10.46.7.105:15672 — invaluable for debugging message flows during development.
Step 6 — Mailpit (SMTP Trap)
Mailpit is a lightweight SMTP server that catches all outgoing email and displays it in a clean web UI. No emails escape to the internet. No SendGrid test accounts. No accidental emails to real users.
# manifests/mailpit.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mailpit
  namespace: dev-infra
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mailpit
  template:
    metadata:
      labels:
        app: mailpit
    spec:
      containers:
        - name: mailpit
          image: axllent/mailpit:latest
          ports:
            - containerPort: 1025 # SMTP
            - containerPort: 8025 # Web UI
          env:
            - name: MP_SMTP_AUTH_ACCEPT_ANY
              value: "true"
            - name: MP_SMTP_AUTH_ALLOW_INSECURE
              value: "true"
---
apiVersion: v1
kind: Service
metadata:
  name: mailpit
  namespace: dev-infra
spec:
  type: LoadBalancer
  loadBalancerIP: "10.46.7.106"
  selector:
    app: mailpit
  ports:
    - name: smtp
      port: 1025
      targetPort: 1025
    - name: webui
      port: 8025
      targetPort: 8025
Point your mailer at:
MAIL_HOST=10.46.7.106
MAIL_PORT=1025
View all captured emails at http://10.46.7.106:8025. It renders HTML emails, shows headers, and even has a spam score check — far more useful than console log output.
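Pointing a Node.js mailer at Mailpit is just host and port. A minimal sketch of transport options for a library like nodemailer (assumed here, commented out; Mailpit accepts any or no credentials in this setup, so no auth block is needed):

```typescript
// SMTP transport options for the local Mailpit trap. No auth block —
// MP_SMTP_AUTH_ACCEPT_ANY is enabled in the deployment above.
const transportOptions = {
  host: "10.46.7.106", // MAIL_HOST
  port: 1025,          // MAIL_PORT (SMTP — the web UI is on 8025)
  secure: false,       // plain SMTP is fine: nothing leaves the cluster
};

// Assumed usage with nodemailer (not prescribed by this guide):
// import nodemailer from "nodemailer";
// const transporter = nodemailer.createTransport(transportOptions);
// await transporter.sendMail({ from: "noreply@dev.local", to: "user@dev.local", subject: "Test", text: "Hi" });
console.log(transportOptions);
```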
Step 7 — SigNoz APM
SigNoz gives you the full observability stack: distributed tracing, metrics, and logs — open source, self-hosted, with a Datadog-like UI. Critically, it receives traces from your host machine apps via the OTLP collector exposed through MetalLB.
# helm-values/signoz.yaml
clickhouse:
  enabled: true
  persistence:
    size: 20Gi
    storageClass: "local-path"
frontend:
  service:
    type: LoadBalancer
    loadBalancerIP: "10.46.7.107"
    port: 3301
otelCollector:
  service:
    type: LoadBalancer
    loadBalancerIP: "10.46.7.108"
In your Node.js/NestJS app, instrument with OpenTelemetry:
// tracing.ts — load before anything else
import { NodeSDK } from "@opentelemetry/sdk-node";
import { OTLPTraceExporter } from "@opentelemetry/exporter-trace-otlp-http";
import { getNodeAutoInstrumentations } from "@opentelemetry/auto-instrumentations-node";

const sdk = new NodeSDK({
  traceExporter: new OTLPTraceExporter({
    url: "http://10.46.7.108:4318/v1/traces",
  }),
  instrumentations: [getNodeAutoInstrumentations()],
});
sdk.start();
Set environment variables in your .env:
OTEL_SERVICE_NAME=my-api
OTEL_EXPORTER_OTLP_ENDPOINT=http://10.46.7.108:4318
Open http://10.46.7.107:3301 — you’ll see traces from your host app flowing in as if it were deployed in the cluster.
Step 8 — Bootstrap (First-Time Setup)
With all the config in place, a single script installs everything:
chmod +x bootstrap.sh dev-infra.sh
./bootstrap.sh
The bootstrap script does the following in order:
- Creates the dev-infra namespace
- Installs MetalLB and applies the IP pool
- Adds Bitnami and SigNoz Helm repos
- Installs PostgreSQL HA, MongoDB, Redis, RabbitMQ via Helm
- Applies Mailpit manifests
- Installs SigNoz via Helm into the signoz namespace
It waits for each component before moving on, so you always know what’s failing if something goes wrong.
The Dev CLI — dev-infra.sh
This is the main tool you’ll use day-to-day. On stop, it scales Deployments and StatefulSets to zero replicas — data on PVCs is always preserved, so when you start again your data is exactly where you left it.
# Start everything (e.g., after multipass start)
./dev-infra.sh start all
# Check what's running
./dev-infra.sh status
# Stop just SigNoz when you don't need tracing (saves ~1.5GB RAM)
./dev-infra.sh stop signoz
# Restart postgres after a config change
./dev-infra.sh restart postgres
# Tail RabbitMQ logs while debugging a queue issue
./dev-infra.sh logs rabbitmq
# Print all connection strings
./dev-infra.sh urls
The status command shows pod phase, readiness, node placement, and the external IP for each service at a glance.
Multipass Lifecycle Integration
When you stop Multipass VMs, the k3s cluster goes away. When you start them back up, the cluster recovers — but services that were running may need to come back up too.
Always stop Multipass cleanly (Flannel networking and etcd quorum break on suspend):
# End of day — stop cleanly
./dev-infra.sh stop all
multipass stop --all
# Start of day — bring everything back
multipass start --all
# Wait ~30 seconds for k3s to stabilize, then:
./dev-infra.sh start all
To verify the cluster is ready before starting services:
kubectl get nodes
# NAME           STATUS   ROLES                  AGE   VERSION
# k3s-master     Ready    control-plane,master   5d    v1.34.4+k3s1
# k3s-worker-1   Ready    <none>                 5d    v1.34.4+k3s1
# k3s-worker-2   Ready    <none>                 5d    v1.34.4+k3s1
Connection Strings Reference
Copy .env.example to any project and you’re connected:
# PostgreSQL HA
DATABASE_URL=postgresql://devuser:devpassword@10.46.7.100:5432/devdb
# MongoDB ReplicaSet (transactions-capable)
MONGODB_URI=mongodb://devuser:devpassword@10.46.7.101:27017,10.46.7.102:27017,10.46.7.103:27017/devdb?authSource=devdb&replicaSet=rs0
# Redis
REDIS_URL=redis://:redispassword@10.46.7.104:6379
# RabbitMQ
RABBITMQ_URL=amqp://devuser:devpassword@10.46.7.105:5672
# Mailpit SMTP
MAIL_HOST=10.46.7.106
MAIL_PORT=1025
# SigNoz / OpenTelemetry
OTEL_EXPORTER_OTLP_ENDPOINT=http://10.46.7.108:4318
OTEL_SERVICE_NAME=your-service-name
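Since every service is just an env var away, a fail-fast guard at boot beats a cryptic driver timeout later. requireEnv below is a hypothetical helper, not part of the repo — in a real app you'd pass process.env:

```typescript
// Hypothetical fail-fast guard (not part of the repo): verify every required
// connection string is present before the app starts talking to drivers.
function requireEnv(
  names: string[],
  env: Record<string, string | undefined>,
): Record<string, string> {
  const missing = names.filter((n) => !env[n]);
  if (missing.length > 0) {
    throw new Error(`Missing required env vars: ${missing.join(", ")}`);
  }
  const found: Record<string, string> = {};
  for (const n of names) found[n] = env[n] as string;
  return found;
}

// In a real app: requireEnv([...], process.env)
const cfg = requireEnv(["DATABASE_URL", "REDIS_URL"], {
  DATABASE_URL: "postgresql://devuser:devpassword@10.46.7.100:5432/devdb",
  REDIS_URL: "redis://:redispassword@10.46.7.104:6379",
});
console.log(cfg.DATABASE_URL);
```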
How This All Fits Together
This setup is an evolution of the local dev mental model:
- docker-compose era: Quick, fragile, resets on restart, different from prod networking.
- k3s + LoadBalancer era: Persistent PVC data, real HA topology, stable IPs, stopped/started on demand.
The key shift is that your applications don’t need to know or care that these services are running inside Kubernetes. They just see IP addresses and ports — the same way they’d talk to RDS, Atlas, ElastiCache, or CloudAMQP in staging or production. Dev/prod parity becomes dramatically higher.
When SigNoz captures traces from a Node.js process running on your host, routed through an OpenTelemetry collector running in a Pod, displayed in a UI served by another Pod — that’s the same architecture you’d use in production. The only difference is the scale.
Conclusion
Running production-grade backing services locally used to mean either accepting poor parity (SQLite, in-memory Redis) or spinning up expensive cloud environments just to test. With a k3s cluster on Multipass and MetalLB providing real IPs, you get the best of both worlds: everything runs locally, costs nothing, starts and stops in seconds, and behaves exactly like the real thing. Ship with more confidence — because you’ve been testing against the real thing all along.