# Local Development

k3d creates a lightweight Kubernetes cluster inside Docker, giving you a production-like environment locally with full observability (Prometheus, Grafana, Tempo).

## Prerequisites

Install the required tools:

```bash
# macOS
brew install k3d kubectl helm docker

# Linux (k3d)
curl -s https://raw.githubusercontent.com/k3d-io/k3d/main/install.sh | bash
# Also install kubectl and helm from their official sources
```

Make sure Docker is running with sufficient resources (8GB RAM recommended).
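To confirm how much memory the Docker daemon actually has available, you can query it directly (works for Docker Desktop and native Linux engines alike):

```bash
# Print total memory available to the Docker daemon, converted to GiB
docker info --format '{{.MemTotal}}' \
  | awk '{printf "%.1f GiB\n", $1 / 1024 / 1024 / 1024}'
```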

## Cluster Setup

```bash
cd apps/backend/kubernetes
./scripts/k3d-setup.sh
```

This creates a complete local Kubernetes environment in about 5 minutes.
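If you prefer to see roughly what the script does, a minimal hand-rolled equivalent might look like the sketch below. The cluster name, registry syntax, and port mapping are assumptions based on the component table that follows; the script itself is the source of truth:

```bash
# Sketch only: a 1-server / 2-agent cluster with a local registry on port 5111.
# The NodePort value 30080 is a placeholder for whatever the chart exposes.
k3d cluster create flow-like \
  --servers 1 \
  --agents 2 \
  --registry-create flow-like-registry:0.0.0.0:5111 \
  --port "8080:30080@loadbalancer"
```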

## Deployed Components

| Component | Description |
| --- | --- |
| k3d cluster | 1 server + 2 agents |
| Local registry | localhost:5111 (external) / flow-like-registry:5000 (internal) |
| CockroachDB | 3-node distributed database |
| Redis | Job queue and execution state |
| API | Flow-Like API service (port 8080) |
| Executor Pool | Workflow execution workers |
| Prometheus | Metrics collection |
| Grafana | Dashboards and visualization |
| Tempo | Distributed tracing |
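
Once the script finishes, you can sanity-check that everything came up:

```bash
# One server and two agents should be listed
k3d cluster list
kubectl get nodes

# All Flow-Like pods should eventually reach Running/Ready
kubectl get pods -n flow-like
```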

## Accessing Services

After deployment, the API and Grafana are reachable directly via NodePort; other services can be accessed with port-forwarding:

```bash
# API (main endpoint) - exposed via NodePort, no port-forward needed
# Access at http://localhost:8080

# Grafana (monitoring dashboards) - exposed via NodePort at 30002
# Access at http://localhost:30002

# Prometheus (raw metrics)
kubectl port-forward -n flow-like svc/flow-like-prometheus 9090:9090 &
```
| Service | Access Method | URL |
| --- | --- | --- |
| API | NodePort (automatic) | http://localhost:8080 |
| Grafana | NodePort (automatic) | http://localhost:30002 |
| Prometheus | `kubectl port-forward svc/flow-like-prometheus 9090:9090` | http://localhost:9090 |
| CockroachDB | `kubectl port-forward svc/flow-like-cockroachdb-public 26257:26257` | localhost:26257 |

Default Grafana credentials:

- Username: `admin`
- Password: retrieved from the Grafana secret:

```bash
kubectl get secret -n flow-like flow-like-grafana \
  -o jsonpath='{.data.admin-password}' | base64 -d && echo
```
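
As a quick check that the credentials work, you can probe an authenticated Grafana API endpoint from the shell (this assumes the NodePort mapping at localhost:30002 from the table above):

```bash
# Fetch the admin password and hit an endpoint that requires authentication
GRAFANA_PASS=$(kubectl get secret -n flow-like flow-like-grafana \
  -o jsonpath='{.data.admin-password}' | base64 -d)
curl -s -u "admin:${GRAFANA_PASS}" http://localhost:30002/api/org
```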

## Dashboards

Grafana comes pre-configured with these dashboards:

| Dashboard | Description |
| --- | --- |
| System Overview | CPU, memory, network across all pods |
| API Service | Request rates, latencies, error rates |
| Executor Pool | Job queue depth, execution times, worker status |
| CockroachDB | Query performance, replication lag, storage |
| Redis | Commands/sec, memory, connected clients |
| Tracing | Request traces via Tempo integration |
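
The data behind these dashboards can also be queried from Prometheus directly. For example, with the Prometheus port-forward from the previous section running:

```bash
# List all scrape targets that are currently up (value 1 = healthy)
curl -s 'http://localhost:9090/api/v1/query?query=up' | python3 -m json.tool
```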
## Viewing Logs

```bash
# API logs
kubectl logs -f deployment/flow-like-api -n flow-like

# Executor logs
kubectl logs -f deployment/flow-like-executor-pool -n flow-like

# All pods
kubectl logs -f -l app.kubernetes.io/instance=flow-like -n flow-like
```
## Rebuilding Images

```bash
./scripts/k3d-setup.sh rebuild
```

This rebuilds the Docker images, pushes them to the local registry, and triggers a rolling restart.
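
If you only changed one component, the manual equivalent is roughly the following sketch. The image name, tag, and build context are illustrative; check the script for the names it actually builds and pushes:

```bash
# Sketch: rebuild and push a single image, then restart its deployment
docker build -t localhost:5111/flow-like-api:dev apps/backend
docker push localhost:5111/flow-like-api:dev
kubectl rollout restart deployment/flow-like-api -n flow-like
kubectl rollout status deployment/flow-like-api -n flow-like
```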

## Common Operations

```bash
# Show status
./scripts/k3d-setup.sh status

# Delete cluster
./scripts/k3d-setup.sh delete

# Shell into API pod
kubectl exec -it deployment/flow-like-api -n flow-like -- /bin/sh
```
## Helm Operations

```bash
# Check current values
helm get values flow-like -n flow-like

# Upgrade with new values
helm upgrade flow-like ./helm -n flow-like --set api.replicas=2

# View release history
helm history flow-like -n flow-like
```
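
For more than one or two overrides, a local values file is easier to maintain than repeated `--set` flags. The keys below just mirror the `api.replicas` example above; anything beyond that is an assumption, so check `./helm/values.yaml` for the real schema:

```bash
# Write a local override file and upgrade the release with it
cat > local-values.yaml <<'EOF'
api:
  replicas: 2
EOF
helm upgrade flow-like ./helm -n flow-like -f local-values.yaml
```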
## Troubleshooting

### Pods Not Starting

```bash
# Check pod status
kubectl get pods -n flow-like

# Describe failing pod
kubectl describe pod <pod-name> -n flow-like

# Check events
kubectl get events -n flow-like --sort-by='.lastTimestamp'
```
### Database Issues

```bash
# Check CockroachDB logs
kubectl logs -f statefulset/flow-like-cockroachdb -n flow-like

# Verify database is ready
kubectl exec -it flow-like-cockroachdb-0 -n flow-like -- cockroach sql --insecure \
  -e "SHOW DATABASES;"
```
### Image Pull Issues

```bash
# Verify local registry
curl http://localhost:5111/v2/_catalog

# Rebuild and push images
./scripts/k3d-setup.sh rebuild
```
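
To confirm a specific image actually made it into the registry, list its tags via the standard registry v2 API (the repository name here is a placeholder; use one returned by the `_catalog` call above):

```bash
# Placeholder repository name: substitute one from the _catalog output
curl http://localhost:5111/v2/flow-like-api/tags/list
```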

### Network Connectivity

If the API can't reach external services (such as authentication providers), check the network policy:

```bash
# View network policies
kubectl get networkpolicy -n flow-like

# Test external connectivity from API pod
kubectl exec -it deployment/flow-like-api -n flow-like -- \
  wget -qO- --timeout=5 https://httpbin.org/ip || echo "Failed"
```

The network policy allows egress to external HTTPS (port 443) by default. If you need additional ports, update the `networkPolicy` section in your Helm values.
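As an example, an override adding an extra egress port might look like the sketch below. The exact key layout under `networkPolicy` is an assumption, so match it against the chart's `values.yaml` before using it:

```bash
# Sketch only: allow egress on one additional TCP port via Helm values
cat > netpol-values.yaml <<'EOF'
networkPolicy:
  egress:
    extraPorts:
      - port: 5432        # hypothetical example: an external Postgres
        protocol: TCP
EOF
helm upgrade flow-like ./helm -n flow-like -f netpol-values.yaml
```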

### Executor Authentication Errors

If executions fail with authentication errors in the executor:

```bash
# Check executor logs
kubectl logs -f deployment/flow-like-executor-pool -n flow-like

# Verify BACKEND_PUB secret is set
kubectl get secret flow-like-api-keys -n flow-like \
  -o jsonpath='{.data.BACKEND_PUB}' | base64 -d
```

The executor needs the `BACKEND_PUB` and `BACKEND_KID` environment variables from the API keys secret to verify execution JWTs.
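
To confirm those variables are actually wired into the executor pods:

```bash
# Both BACKEND_PUB and BACKEND_KID should be present and non-empty
kubectl exec deployment/flow-like-executor-pool -n flow-like -- \
  sh -c 'env | grep ^BACKEND_'
```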