Storage Providers

Flow-Like requires object storage for storing workflow data, execution logs, and content. The following providers are supported natively: AWS S3 (and S3-compatible services such as MinIO), Cloudflare R2, Azure Blob Storage, and Google Cloud Storage.

AWS S3

STORAGE_PROVIDER=aws
# Credentials
AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
# Region and endpoint
AWS_REGION=us-east-1
AWS_ENDPOINT= # Leave empty for AWS S3
# Bucket names
META_BUCKET=flow-like-meta
CONTENT_BUCKET=flow-like-content
LOG_BUCKET=flow-like-logs
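If the buckets do not exist yet, they can be created with the AWS CLI. A sketch, reusing the bucket names and region from the configuration above; adjust to your environment:

Terminal window
aws s3 mb s3://flow-like-meta --region us-east-1
aws s3 mb s3://flow-like-content --region us-east-1
aws s3 mb s3://flow-like-logs --region us-east-1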
S3 Express One Zone (Recommended for Meta Bucket)

S3 Express One Zone is a high-performance, single-AZ storage class ideal for the meta bucket:

  • Up to 10x faster data access than standard S3 (consistent single-digit millisecond latency)
  • 50% lower request costs than standard S3
  • Consistent performance for metadata-heavy workloads

Express One Zone bucket names end with --<az-id>--x-s3 (e.g., flow-like-meta--usw2-az1--x-s3).
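Express One Zone directory buckets are created through the lower-level s3api command with a bucket configuration naming the Availability Zone. A sketch, assuming the us-west-2 region and zone usw2-az1:

Terminal window
aws s3api create-bucket \
  --bucket flow-like-meta--usw2-az1--x-s3 \
  --region us-west-2 \
  --create-bucket-configuration 'Location={Type=AvailabilityZone,Name=usw2-az1},Bucket={DataRedundancy=SingleAvailabilityZone,Type=Directory}'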

# Enable S3 Express for meta bucket
META_BUCKET=flow-like-meta--usw2-az1--x-s3
META_BUCKET_EXPRESS_ZONE=true
# Content bucket can also use Express if in same AZ
CONTENT_BUCKET=flow-like-content
CONTENT_BUCKET_EXPRESS_ZONE=false
# Logs bucket (standard S3 is usually sufficient)
LOG_BUCKET=flow-like-logs
LOGS_BUCKET_EXPRESS_ZONE=false

The credentials need the following permissions on your buckets:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::flow-like-*",
        "arn:aws:s3:::flow-like-*/*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3express:CreateSession"
      ],
      "Resource": [
        "arn:aws:s3express:*:*:bucket/flow-like-*--*--x-s3"
      ]
    }
  ]
}

Scoped Runtime Credentials (STS AssumeRole)

Flow-Like generates scoped credentials for every execution using STS AssumeRole. This ensures users can only access their own prefix-isolated storage paths, providing strict isolation between users and apps.

# Role to assume for runtime credentials
RUNTIME_ROLE_ARN=arn:aws:iam::123456789012:role/FlowLikeRuntimeRole

The runtime role needs a trust policy allowing the API to assume it:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::123456789012:role/FlowLikeApiRole"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}

When RUNTIME_ROLE_ARN is set, each execution receives temporary credentials scoped to:

  • Read/write the specific app’s data (apps/{app_id}/*)
  • Read/write the user’s app data (users/{user_id}/apps/{app_id}/*)
  • Write execution logs (runs/{app_id}/*)
  • Access temporary storage (tmp/user/{user_id}/apps/{app_id}/*)
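Prefix scoping of this kind is typically implemented by passing an inline session policy to AssumeRole, so the temporary credentials are limited to the intersection of the role's permissions and that policy. A minimal sketch of such a call (the exact policy Flow-Like generates may differ; my-app stands in for a real app_id):

Terminal window
aws sts assume-role \
  --role-arn arn:aws:iam::123456789012:role/FlowLikeRuntimeRole \
  --role-session-name flow-like-exec \
  --policy '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::flow-like-content/apps/my-app/*"
    }]
  }'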

Cloudflare R2

R2 is S3-compatible and supports prefix-scoped temporary credentials through Cloudflare’s proprietary API:

STORAGE_PROVIDER=r2
# R2 credentials for S3 API access (from R2 API tokens)
R2_ACCESS_KEY_ID=your-r2-access-key-id
R2_SECRET_ACCESS_KEY=your-r2-secret-access-key
# R2 endpoint (replace with your account ID)
R2_ENDPOINT=https://<account-id>.r2.cloudflarestorage.com
# R2 Temp Credentials API (required for scoped credentials)
R2_ACCOUNT_ID=your-cloudflare-account-id
R2_API_TOKEN=your-cloudflare-api-token
# Bucket names
META_BUCKET=flow-like-meta
CONTENT_BUCKET=flow-like-content
LOG_BUCKET=flow-like-logs

The API token needs the Workers R2 Storage:Edit permission for the temp credentials API:

  1. Go to your Cloudflare Dashboard → Manage Account → API Tokens
  2. Create a custom token with:
    • Permissions: Account → Workers R2 Storage → Edit
    • Account Resources: Include your account

Unlike AWS STS, R2 uses Cloudflare’s proprietary temp credentials API, which:

  • Creates temporary S3-compatible credentials (access key, secret key, session token)
  • Supports prefix-scoping via the prefixes parameter
  • Returns credentials with configurable TTL (default: 1 hour)

Each execution receives temporary credentials scoped to access only:

  • The specific app’s data prefixes
  • The user’s app data prefixes
  • Execution log paths
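A sketch of the kind of request involved, using Cloudflare's temp-access-credentials endpoint (the bucket and prefix values are illustrative; consult Cloudflare's API docs for the full request schema):

Terminal window
curl -X POST \
  "https://api.cloudflare.com/client/v4/accounts/$R2_ACCOUNT_ID/r2/temp-access-credentials" \
  -H "Authorization: Bearer $R2_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "bucket": "flow-like-content",
    "parentAccessKeyId": "'"$R2_ACCESS_KEY_ID"'",
    "permission": "object-read-write",
    "ttlSeconds": 3600,
    "prefixes": ["apps/my-app/"]
  }'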

MinIO

For local development or air-gapped environments, MinIO can stand in for S3 using the aws provider:

STORAGE_PROVIDER=aws
# MinIO credentials
AWS_ACCESS_KEY_ID=minioadmin
AWS_SECRET_ACCESS_KEY=minioadmin
# MinIO endpoint
AWS_ENDPOINT=http://minio:9000
AWS_REGION=us-east-1
AWS_USE_PATH_STYLE=true
# Bucket names
META_BUCKET=flow-like-meta
CONTENT_BUCKET=flow-like-content
LOG_BUCKET=flow-like-logs
# STS AssumeRole for scoped credentials
# MinIO requires STS to be enabled: https://min.io/docs/minio/linux/developers/security-token-service.html
RUNTIME_ROLE_ARN=arn:minio:iam:::role/FlowLikeRuntimeRole

To add MinIO to your Docker Compose stack, add this service:

services:
  minio:
    image: minio/minio
    command: server /data --console-address ":9001"
    ports:
      - "9000:9000"
      - "9001:9001" # Console
    environment:
      MINIO_ROOT_USER: minioadmin
      MINIO_ROOT_PASSWORD: minioadmin
    volumes:
      - minio_data:/data
    networks:
      - flowlike

volumes:
  minio_data:
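Once MinIO is up, the three buckets can be created with the MinIO client (mc); the alias name local is arbitrary:

Terminal window
mc alias set local http://localhost:9000 minioadmin minioadmin
mc mb local/flow-like-meta
mc mb local/flow-like-content
mc mb local/flow-like-logs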
Azure Blob Storage

STORAGE_PROVIDER=azure
# Azure credentials
AZURE_STORAGE_ACCOUNT_NAME=yourstorageaccount
AZURE_STORAGE_ACCOUNT_KEY=your-access-key
# Container names
AZURE_META_CONTAINER=flow-like-meta
AZURE_CONTENT_CONTAINER=flow-like-content
AZURE_LOG_CONTAINER=flow-like-logs

Create the containers with the Azure CLI:

Terminal window
az storage container create --name flow-like-meta --account-name yourstorageaccount
az storage container create --name flow-like-content --account-name yourstorageaccount
az storage container create --name flow-like-logs --account-name yourstorageaccount
Google Cloud Storage

STORAGE_PROVIDER=gcp
# GCP project
GCS_PROJECT_ID=your-project-id
# Service account JSON (base64 encoded or raw)
GOOGLE_APPLICATION_CREDENTIALS_JSON={"type":"service_account","project_id":"..."}
# Bucket names
GCP_META_BUCKET=flow-like-meta
GCP_CONTENT_BUCKET=flow-like-content
GCP_LOG_BUCKET=flow-like-logs

The service account needs the Storage Object Admin role on your buckets:

Terminal window
gsutil iam ch serviceAccount:your-sa@project.iam.gserviceaccount.com:objectAdmin gs://flow-like-meta
gsutil iam ch serviceAccount:your-sa@project.iam.gserviceaccount.com:objectAdmin gs://flow-like-content
gsutil iam ch serviceAccount:your-sa@project.iam.gserviceaccount.com:objectAdmin gs://flow-like-logs
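If the buckets do not exist yet, they can be created with gsutil (the us-east1 location is illustrative):

Terminal window
gsutil mb -l us-east1 gs://flow-like-meta
gsutil mb -l us-east1 gs://flow-like-content
gsutil mb -l us-east1 gs://flow-like-logs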

Path-Style URLs

Some S3-compatible providers (MinIO, R2) require path-style URLs:

AWS_USE_PATH_STYLE=true

This switches request URLs from virtual-hosted style to path style:

  • Virtual-hosted style: https://bucket.endpoint.com/key
  • Path style: https://endpoint.com/bucket/key
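For example, with AWS_ENDPOINT=http://minio:9000 and AWS_USE_PATH_STYLE=true, a request for the key apps/demo/flow.json in the flow-like-meta bucket would go to http://minio:9000/flow-like-meta/apps/demo/flow.json (the key name is illustrative).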
Environment Variable Reference

| Variable                    | Description                                              | Default  |
| --------------------------- | -------------------------------------------------------- | -------- |
| STORAGE_PROVIDER            | Storage backend (aws, r2, azure, gcp)                    | aws      |
| META_BUCKET                 | Bucket for app metadata and execution state              | Required |
| CONTENT_BUCKET              | Bucket for user content and workflow data                | Required |
| LOG_BUCKET                  | Bucket for execution logs                                | Required |
| META_BUCKET_EXPRESS_ZONE    | Enable S3 Express for meta bucket                        | false    |
| CONTENT_BUCKET_EXPRESS_ZONE | Enable S3 Express for content bucket                     | false    |
| LOGS_BUCKET_EXPRESS_ZONE    | Enable S3 Express for logs bucket                        | false    |
| RUNTIME_ROLE_ARN            | IAM role ARN for scoped runtime credentials (AWS/MinIO)  | Optional |
| R2_ACCESS_KEY_ID            | R2 S3-compatible access key                              | R2 only  |
| R2_SECRET_ACCESS_KEY        | R2 S3-compatible secret key                              | R2 only  |
| R2_ENDPOINT                 | R2 S3-compatible endpoint URL                            | R2 only  |
| R2_ACCOUNT_ID               | Cloudflare account ID for R2 temp credentials            | R2 only  |
| R2_API_TOKEN                | Cloudflare API token for R2 temp credentials             | R2 only  |
| EXECUTION_STATE_BACKEND     | State store backend (postgres, redis, s3)                | postgres |
| AWS_USE_PATH_STYLE          | Use path-style URLs (for MinIO/R2)                       | false    |