# Docker Compose
This deployment lives in apps/backend/docker-compose/ and provides a simple way to run Flow-Like on a single machine.
## Architecture
```
┌──────────────────────────────────────────────────────────────────────┐
│                        Docker Compose Network                        │
├──────────────────────────────────────────────────────────────────────┤
│ Core Services:                                                       │
│  ┌─────────────┐      ┌──────────────────────────────────────┐       │
│  │     API     │─────▶│          Execution Runtime           │       │
│  │  Container  │      │ (Server Mode - handles multiple jobs)│       │
│  │    :8080    │◀─────│                :9000                 │       │
│  └─────────────┘      └──────────────────────────────────────┘       │
│         │                                                            │
│         ▼                                                            │
│  ┌─────────────┐                                                     │
│  │ PostgreSQL  │                                                     │
│  │    :5432    │                                                     │
│  └─────────────┘                                                     │
├──────────────────────────────────────────────────────────────────────┤
│ Monitoring (optional):                                               │
│  ┌─────────────┐      ┌─────────────┐                                │
│  │ Prometheus  │      │   Grafana   │                                │
│  │    :9091    │      │    :3002    │                                │
│  └─────────────┘      └─────────────┘                                │
└──────────────────────────────────────────────────────────────────────┘
                                   │
                                   ▼
                         ┌─────────────────────┐
                         │  External Storage   │
                         │  (S3/Azure/GCP/R2)  │
                         └─────────────────────┘
```

## Services
| Service | Description | Port |
|---|---|---|
| api | Main Flow-Like API service | 8080 |
| runtime | Shared execution environment | 9000 |
| postgres | PostgreSQL database | 5432 |
| db-init | One-time migration job | — |
| prometheus | Metrics collection (optional) | 9091 |
| grafana | Dashboards (optional) | 3002 |
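As a rough orientation, the services above map onto a Compose file along the lines of the sketch below. The image names, the db-init command, and the dependency wiring are assumptions made for illustration; the actual `docker-compose.yml` in `apps/backend/docker-compose/` is authoritative.

```yaml
# Illustrative sketch only - image names, commands, and wiring are assumptions;
# refer to the real docker-compose.yml in apps/backend/docker-compose/.
services:
  api:
    image: flow-like/api              # hypothetical image name
    ports: ["8080:8080"]
    depends_on: [postgres, runtime]

  runtime:
    image: flow-like/runtime          # hypothetical image name
    ports: ["9000:9000"]

  postgres:
    image: postgres:16
    ports: ["5432:5432"]

  db-init:
    image: flow-like/api              # hypothetical one-shot migration job
    command: ["migrate"]              # hypothetical command
    depends_on: [postgres]
    restart: "no"

  prometheus:
    image: prom/prometheus
    profiles: [monitoring]
    ports: ["9091:9090"]              # host 9091 -> container 9090 (Prometheus default)

  grafana:
    image: grafana/grafana
    profiles: [monitoring]
    ports: ["3002:3000"]              # host 3002 -> container 3000 (Grafana default)
```

With a layout like this, the monitoring services only start when the `monitoring` profile is enabled, and the one-time migration job can be re-run by hand with `docker compose run --rm db-init`.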
## Quick Start
```sh
cd apps/backend/docker-compose
cp .env.example .env
# Edit .env with your storage credentials

# Generate JWT keypair for execution trust
../../tools/gen-execution-keys.sh

# Start core services
docker compose up -d
```
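For orientation, the values `.env` needs look roughly like the sketch below. The variable names here are placeholders rather than the real keys; `.env.example` in the same directory defines the actual names.

```sh
# Placeholder names only - copy the real keys from .env.example.
# Object storage credentials (S3 / Azure / GCP / R2, per the architecture above)
STORAGE_ENDPOINT=https://<your-object-storage-endpoint>
STORAGE_BUCKET=flow-like
STORAGE_ACCESS_KEY_ID=<access-key-id>
STORAGE_SECRET_ACCESS_KEY=<secret-access-key>

# PostgreSQL credentials used by the api and db-init services
POSTGRES_USER=flowlike
POSTGRES_PASSWORD=<choose-a-strong-password>
POSTGRES_DB=flowlike
```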
To include the optional monitoring stack as well:

```sh
docker compose --profile monitoring up -d
```

## Monitoring
Enable optional Prometheus + Grafana monitoring:
```sh
docker compose --profile monitoring up -d
```

Access Grafana at http://localhost:3002 (default: admin/admin).
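To confirm the stack came up, the standard Prometheus and Grafana health endpoints can be probed. This assumes the host ports 9091 and 3002 from the table above map to the stock Prometheus and Grafana container ports:

```sh
# List the monitoring containers
docker compose --profile monitoring ps

# Prometheus readiness probe (built-in Prometheus endpoint)
curl http://localhost:9091/-/ready

# Grafana health check (built-in Grafana endpoint)
curl http://localhost:3002/api/health
```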
## Execution Model
This Docker Compose setup uses shared execution: a single runtime container handles multiple jobs concurrently (a sizing sketch follows the list below). This is suitable for:
- Development and testing
- Trusted workloads
- High-throughput scenarios with controlled input
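Because every job shares the one runtime container, it can be worth giving that container explicit resource limits so a burst of jobs cannot starve the API or the database. A sketch using a Compose override file; the `deploy.resources` keys are standard Compose syntax, while the concrete limits are illustrative:

```yaml
# docker-compose.override.yml - read automatically when placed next to docker-compose.yml.
# The limits below are illustrative; size them to your workload.
services:
  runtime:
    deploy:
      resources:
        limits:
          cpus: "4"
          memory: 4G
```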
For stronger isolation (one container per execution), consider:
- Kubernetes deployment with Kata containers
- AWS Lambda (per-invocation isolation)