Preview Environments for Actix-Web (Rust): Automated Per-PR Deployments with Bunnyshell
Guide • March 20, 2026 • 11 min read

Why Preview Environments for Actix-Web?

Every Rust team has been here: a PR looks solid in code review, cargo test goes green, but once it hits staging something unexpected breaks. Maybe a new handler panics on a particular query shape, or the sqlx migration conflicts with a migration on another branch, or the DATABASE_URL on the staging machine points at the wrong schema.

Preview environments solve this. Every pull request gets its own isolated deployment — Actix-Web binary, PostgreSQL database, the works — running in Kubernetes with production-like configuration. Reviewers click a link and see the actual running service responding to real HTTP requests, not just a diff.

With Bunnyshell, you get:

  • Automatic deployment — A new environment spins up for every PR
  • Production parity — Same multi-stage Docker image, same database engine, same infrastructure
  • Isolation — Each PR environment lives in its own Kubernetes namespace, no shared staging conflicts
  • Automatic cleanup — Environments are destroyed when the PR is merged or closed

One extra perk for Rust services: the final Docker image is tiny — typically 50 MB or less when built against debian:bookworm-slim, or even smaller when targeting a statically-linked musl binary. That means fast image pulls and minimal attack surface in every ephemeral environment.

Actix-Web compiles to a single self-contained binary. The runtime container needs no Rust toolchain, no package manager, and no interpreter — just the binary plus shared libraries. This makes it an ideal fit for ephemeral environments where fast cold-start matters.

Choose Your Approach

Bunnyshell supports three ways to set up preview environments for Actix-Web. Pick the one that fits your workflow:

  • Approach A: Bunnyshell UI. Best for teams that want the fastest setup with zero pipeline maintenance. Complexity: easiest. CI/CD maintenance: none; Bunnyshell manages webhooks automatically.
  • Approach B: Docker Compose Import. Best for teams already using docker-compose.yml for local development. Complexity: easy. CI/CD maintenance: none; the import converts to Bunnyshell config automatically.
  • Approach C: Helm Charts. Best for teams with existing Helm infrastructure or complex K8s needs. Complexity: advanced. CI/CD maintenance: optional; can use the CLI or the Bunnyshell UI.

All three approaches end the same way: a toggle in Bunnyshell Settings that enables automatic preview environments for every PR. No GitHub Actions, no GitLab CI pipelines to maintain — Bunnyshell adds webhooks to your Git provider and listens for PR events.

Prerequisites: Prepare Your Actix-Web App

Regardless of which approach you choose, your Actix-Web app needs two things: a multi-stage Dockerfile and the right configuration.

1. Create a Production-Ready Multi-Stage Dockerfile

Rust compile times are the main thing to plan around. The first cargo build --release in CI can take 5–10 minutes for a mid-sized project. After that, Docker layer caching keeps rebuilds fast: the dependency layer is recompiled only when Cargo.toml or Cargo.lock changes, so routine source edits rebuild just your own crate.

Dockerfile
# ── Stage 1: Build ──────────────────────────────────────────────────────────
FROM rust:1.77-slim AS builder

WORKDIR /app

# Install build dependencies (for linking against libpq if using sqlx/diesel)
RUN apt-get update && apt-get install -y --no-install-recommends \
    libpq-dev pkg-config && \
    rm -rf /var/lib/apt/lists/*

# Cache dependencies separately from source code.
# Copy manifests first so the dependency layer is only rebuilt when deps change.
COPY Cargo.toml Cargo.lock ./

# Create a dummy main so cargo can compile dependencies
RUN mkdir src && echo 'fn main() {}' > src/main.rs

# Remove the dummy crate's artifacts so the real source is recompiled.
# Note: cargo replaces hyphens in the crate name with underscores here.
RUN cargo build --release && rm -f target/release/deps/myapp*

# Now copy the real source and build the final binary
COPY src ./src
RUN cargo build --release

# ── Stage 2: Runtime ─────────────────────────────────────────────────────────
FROM debian:bookworm-slim AS runtime

RUN apt-get update && apt-get install -y --no-install-recommends \
    libpq5 ca-certificates && \
    rm -rf /var/lib/apt/lists/*

WORKDIR /app

# Copy only the compiled binary
COPY --from=builder /app/target/release/myapp /app/myapp

EXPOSE 8080
ENV RUST_LOG=info
CMD ["/app/myapp"]

Important: Replace myapp with your actual binary name from Cargo.toml. The app must bind to 0.0.0.0, not 127.0.0.1 — otherwise Kubernetes probes and ingress traffic will not reach it.
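The bind-address distinction is easy to verify with a stdlib-only sketch (no Actix-Web needed): a socket bound to 0.0.0.0 reports an unspecified address, meaning it accepts connections on every interface, while 127.0.0.1 is loopback-only.

```rust
use std::net::TcpListener;

fn main() {
    // 0.0.0.0 binds to all interfaces; port 0 asks the OS for any free port.
    let all = TcpListener::bind(("0.0.0.0", 0)).expect("bind failed");
    // is_unspecified() is true exactly for the 0.0.0.0 wildcard address.
    assert!(all.local_addr().unwrap().ip().is_unspecified());

    // 127.0.0.1 only accepts connections from the same host —
    // inside a pod, ingress and probe traffic never reaches it.
    let local = TcpListener::bind(("127.0.0.1", 0)).expect("bind failed");
    assert!(local.local_addr().unwrap().ip().is_loopback());

    println!("bound to {}", all.local_addr().unwrap());
}
```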

If you use sqlx with compile-time query checks (the query! macro), you need either DATABASE_URL set at build time or SQLX_OFFLINE=true with a pre-generated query cache (sqlx-data.json on sqlx 0.6 and earlier, a .sqlx/ directory on 0.7+, both produced by cargo sqlx prepare). The Dockerfile above assumes offline mode. Add ENV SQLX_OFFLINE=true before the cargo build step, and commit the query cache to the repo.
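The builder-stage change for offline mode is small. A sketch, assuming the query cache was generated with cargo sqlx prepare and committed at the repo root (file name depends on your sqlx version):

```dockerfile
# Enable sqlx offline mode so query! macros compile without a live database
ENV SQLX_OFFLINE=true

# Copy the prepared query cache (sqlx-data.json on sqlx <= 0.6;
# use `COPY .sqlx ./.sqlx` instead on sqlx >= 0.7)
COPY sqlx-data.json ./

COPY src ./src
RUN cargo build --release
```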

2. Configure Actix-Web for Kubernetes

Actix-Web needs these settings to work correctly behind the Kubernetes ingress (which terminates TLS and forwards X-Forwarded-* headers):

Rust
// src/main.rs
use actix_web::{middleware, web, App, HttpServer};
use std::env;

mod routes;

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    // Initialize logger — respects RUST_LOG env var
    env_logger::init_from_env(env_logger::Env::default().default_filter_or("info"));

    let database_url = env::var("DATABASE_URL")
        .expect("DATABASE_URL must be set");
    let port: u16 = env::var("PORT")
        .unwrap_or_else(|_| "8080".to_string())
        .parse()
        .expect("PORT must be a valid number");

    // Build a connection pool (example with sqlx)
    let pool = sqlx::PgPool::connect(&database_url)
        .await
        .expect("Failed to connect to database");

    log::info!("Starting server on 0.0.0.0:{}", port);

    HttpServer::new(move || {
        App::new()
            // Request logging (use the %{r}a format token with Logger::new
            // to log the client IP from X-Forwarded-* behind a proxy)
            .wrap(middleware::Logger::default())
            // Compress responses
            .wrap(middleware::Compress::default())
            // Shared DB pool
            .app_data(web::Data::new(pool.clone()))
            // Your routes
            .service(web::scope("/api").configure(routes::configure))
    })
    // Bind to all interfaces — required for Kubernetes container networking
    .bind(("0.0.0.0", port))?
    .run()
    .await
}
Rust
// src/routes.rs (example)
use actix_web::{get, web, HttpResponse, Responder};

pub fn configure(cfg: &mut web::ServiceConfig) {
    cfg.service(health_check);
}

#[get("/health")]
async fn health_check() -> impl Responder {
    HttpResponse::Ok().json(serde_json::json!({"status": "ok"}))
}

For database migrations, use sqlx migrate run at container startup or as a separate init step:

Rust
// Run migrations on startup (add to main before HttpServer::new)
sqlx::migrate!("./migrations")
    .run(&pool)
    .await
    .expect("Failed to run database migrations");

Actix-Web Deployment Checklist

  • App binds to 0.0.0.0:8080 (not 127.0.0.1)
  • DATABASE_URL loaded from environment variable
  • APP_SECRET and other secrets loaded from environment variables
  • RUST_LOG=info set for structured logging
  • sqlx-data.json committed (if using compile-time query checks)
  • Multi-stage Dockerfile with debian:bookworm-slim runtime stage
  • Health check endpoint (/api/health or similar) for Kubernetes probes
  • env_logger or tracing initialized at startup
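The health endpoint on the checklist is what the Kubernetes probes should target. A hedged sketch of the container probe configuration, using this guide's example path and port (adjust to your routes):

```yaml
# Kubernetes probe sketch for the Actix-Web container
livenessProbe:
  httpGet:
    path: /api/health
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /api/health
    port: 8080
  initialDelaySeconds: 2
  periodSeconds: 5
```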

Approach A: Bunnyshell UI — Zero CI/CD Maintenance

This is the easiest approach. You connect your repo, paste a YAML config, deploy, and flip a toggle. No CI/CD pipelines to write or maintain — Bunnyshell automatically adds webhooks to your Git provider and creates/destroys preview environments when PRs are opened/closed.

Step 1: Create a Project and Environment

  1. Log into Bunnyshell
  2. Click Create project and name it (e.g., "Actix App")
  3. Inside the project, click Create environment and name it (e.g., "actix-main")

Step 2: Define the Environment Configuration

Click Configuration in your environment view and paste this bunnyshell.yaml:

YAML
kind: Environment
name: actix-preview
type: primary

environmentVariables:
  APP_SECRET: SECRET["your-app-secret-here"]
  DB_PASSWORD: SECRET["your-db-password"]

components:
  # ── Actix-Web Application ──
  - kind: Application
    name: actix-app
    gitRepo: 'https://github.com/your-org/your-actix-repo.git'
    gitBranch: main
    gitApplicationPath: /
    dockerCompose:
      build:
        context: .
        dockerfile: Dockerfile
      environment:
        DATABASE_URL: 'postgres://actix:{{ env.vars.DB_PASSWORD }}@postgres:5432/actix_db'
        APP_SECRET: '{{ env.vars.APP_SECRET }}'
        PORT: '8080'
        RUST_LOG: 'info'
      ports:
        - '8080:8080'
    hosts:
      - hostname: 'app-{{ env.base_domain }}'
        path: /
        servicePort: 8080
    dependsOn:
      - postgres

  # ── PostgreSQL Database ──
  - kind: Database
    name: postgres
    dockerCompose:
      image: 'postgres:16-alpine'
      environment:
        POSTGRES_DB: actix_db
        POSTGRES_USER: actix
        POSTGRES_PASSWORD: '{{ env.vars.DB_PASSWORD }}'
      ports:
        - '5432:5432'

volumes:
  - name: postgres-data
    mount:
      component: postgres
      containerPath: /var/lib/postgresql/data
    size: 1Gi

Replace your-org/your-actix-repo with your actual repository. Save the configuration.

The DATABASE_URL is assembled inline using Bunnyshell template syntax. The hostname postgres refers to the component name: Bunnyshell creates a Kubernetes Service named after each component, and cluster DNS resolves that name within the environment's namespace, so service discovery is automatic.

Step 3: Deploy

Click the Deploy button, select your Kubernetes cluster, and click Deploy Environment. Bunnyshell will:

  1. Build your Actix-Web Docker image from the multi-stage Dockerfile (first build takes 5–10 min; subsequent builds use cache)
  2. Pull the PostgreSQL image
  3. Deploy everything into an isolated Kubernetes namespace
  4. Generate HTTPS URLs automatically with DNS

Monitor the deployment in the environment detail page. When status shows Running, click Endpoints to access your live Actix-Web service.

Step 4: Run Migrations

If you run migrations as a separate step (rather than at application startup), you can trigger them via the component terminal in the Bunnyshell UI, or via the CLI:

Bash
export BUNNYSHELL_TOKEN=your-api-token
bns components list --environment ENV_ID --output json | jq '._embedded.item[] | {id, name}'
# For sqlx migrations:
bns exec COMPONENT_ID -- sqlx migrate run
# Or, if using diesel:
bns exec COMPONENT_ID -- diesel migration run

Step 5: Enable Automatic Preview Environments

This is the magic step — no CI/CD configuration needed:

  1. In your environment, go to Settings
  2. Find the Ephemeral environments section
  3. Toggle "Create ephemeral environments on pull request" to ON
  4. Toggle "Destroy environment after merge or close pull request" to ON
  5. Select the Kubernetes cluster for ephemeral environments

That's it. Bunnyshell automatically adds a webhook to your Git provider (GitHub, GitLab, or Bitbucket). From now on:

  • Open a PR → Bunnyshell creates an ephemeral environment with the PR's branch
  • Push to PR → The environment redeploys with the latest changes
  • Bunnyshell posts a comment on the PR with a link to the live deployment
  • Merge or close the PR → The ephemeral environment is automatically destroyed

Note: The primary environment must be in Running or Stopped status before ephemeral environments can be created from it.


Approach B: Docker Compose Import

Already have a docker-compose.yml for local development? Bunnyshell can import it directly and convert it to its environment format. No manual YAML writing required.

Step 1: Add a docker-compose.yml to Your Repo

If you don't already have one, create docker-compose.yml in your repo root:

YAML
version: '3.8'

services:
  actix-app:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - '8080:8080'
    environment:
      DATABASE_URL: 'postgres://actix:actix@postgres:5432/actix_db'
      APP_SECRET: 'dev-secret-key-change-in-prod'
      PORT: '8080'
      RUST_LOG: 'info,actix_web=debug'
    depends_on:
      postgres:
        condition: service_healthy

  postgres:
    image: postgres:16-alpine
    environment:
      POSTGRES_DB: actix_db
      POSTGRES_USER: actix
      POSTGRES_PASSWORD: actix
    volumes:
      - postgres-data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U actix -d actix_db"]
      interval: 5s
      timeout: 5s
      retries: 5

volumes:
  postgres-data:

Step 2: Import into Bunnyshell

  1. Create a Project and Environment in Bunnyshell (same as Approach A, Step 1)
  2. Click Define environment
  3. Select your Git account and repository
  4. Set the branch (e.g., main) and the path to docker-compose.yml (use / if it's in the root)
  5. Click Continue — Bunnyshell parses and validates your Docker Compose file

Bunnyshell automatically detects:

  • All services (actix-app, postgres)
  • Exposed ports
  • Build configurations (multi-stage Dockerfile)
  • Volumes
  • Environment variables

It converts everything into a bunnyshell.yaml environment definition.

Important: The docker-compose.yml is only read during the initial import. Subsequent changes to the file won't auto-propagate — edit the environment configuration in Bunnyshell instead.

Step 3: Adjust the Configuration

After import, go to Configuration in the environment view and update:

  • Replace hardcoded secrets with SECRET["..."] syntax
  • Replace the hardcoded DATABASE_URL with Bunnyshell interpolation so it uses the correct password secret:
YAML
DATABASE_URL: 'postgres://actix:{{ env.vars.DB_PASSWORD }}@postgres:5432/actix_db'
APP_SECRET: '{{ env.vars.APP_SECRET }}'
  • Add a hosts block so your app gets an HTTPS URL:
YAML
hosts:
  - hostname: 'app-{{ env.base_domain }}'
    path: /
    servicePort: 8080

Step 4: Deploy and Enable Preview Environments

Same as Approach A — click Deploy, then go to Settings and toggle on ephemeral environments.

Best Practices for Docker Compose with Bunnyshell

  • Design for startup resilience — Kubernetes doesn't guarantee depends_on ordering like Docker Compose does. Make your Actix-Web app retry database connections on startup. With sqlx, use a retry loop or a startup probe:
Rust
1// Retry database connection with exponential backoff
2let pool = {
3    let mut retries = 5u32;
4    loop {
5        match sqlx::PgPool::connect(&database_url).await {
6            Ok(p) => break p,
7            Err(e) if retries > 0 => {
8                log::warn!("DB not ready, retrying in 2s: {}", e);
9                tokio::time::sleep(std::time::Duration::from_secs(2)).await;
10                retries -= 1;
11            }
12            Err(e) => panic!("Could not connect to database: {}", e),
13        }
14    }
15};
  • Use Bunnyshell interpolation for dynamic values like URLs:
YAML
# Bunnyshell environment config (after import)
DATABASE_URL: 'postgres://actix:{{ env.vars.DB_PASSWORD }}@postgres:5432/actix_db'

Approach C: Helm Charts

For teams with existing Helm infrastructure or complex Kubernetes requirements (custom ingress, service mesh, advanced scaling). Helm gives you full control over every Kubernetes resource.

Step 1: Create a Helm Chart

Structure your Actix-Web Helm chart in your repo:

Text
helm/actix/
├── Chart.yaml
├── values.yaml
└── templates/
    ├── deployment.yaml
    ├── service.yaml
    ├── ingress.yaml
    └── configmap.yaml

A minimal values.yaml:

YAML
replicaCount: 1
image:
  repository: ""
  tag: latest
service:
  port: 8080
ingress:
  enabled: true
  className: bns-nginx
  host: ""
env:
  DATABASE_URL: ""
  APP_SECRET: ""
  PORT: "8080"
  RUST_LOG: "info"
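The chart's templates aren't shown in full here. As a reference point, a minimal templates/deployment.yaml consistent with the values.yaml above might look like this sketch (names and labels are illustrative, not prescribed by Bunnyshell):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ .Release.Name }}
  template:
    metadata:
      labels:
        app: {{ .Release.Name }}
    spec:
      containers:
        - name: actix
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          ports:
            - containerPort: {{ .Values.service.port }}
          env:
            {{- range $key, $value := .Values.env }}
            - name: {{ $key }}
              value: {{ $value | quote }}
            {{- end }}
```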

Step 2: Define the Bunnyshell Configuration

Create a bunnyshell.yaml using Helm components:

YAML
kind: Environment
name: actix-helm
type: primary

environmentVariables:
  APP_SECRET: SECRET["your-app-secret"]
  DB_PASSWORD: SECRET["your-db-password"]
  POSTGRES_DB: actix_db
  POSTGRES_USER: actix

components:
  # ── Docker Image Build ──
  - kind: DockerImage
    name: actix-image
    context: /
    dockerfile: Dockerfile
    gitRepo: 'https://github.com/your-org/your-actix-repo.git'
    gitBranch: main
    gitApplicationPath: /

  # ── PostgreSQL via Helm ──
  - kind: Helm
    name: postgres
    runnerImage: 'dtzar/helm-kubectl:3.8.2'
    deploy:
      - |
        cat << EOF > pg_values.yaml
          global:
            storageClass: bns-network-sc
          auth:
            postgresPassword: {{ env.vars.DB_PASSWORD }}
            database: {{ env.vars.POSTGRES_DB }}
            username: {{ env.vars.POSTGRES_USER }}
        EOF
      - 'helm repo add bitnami https://charts.bitnami.com/bitnami'
      - 'helm upgrade --install --namespace {{ env.k8s.namespace }}
        --post-renderer /bns/helpers/helm/bns_post_renderer
        -f pg_values.yaml postgres bitnami/postgresql --version 11.9.11'
      - |
        POSTGRES_HOST="postgres-postgresql.{{ env.k8s.namespace }}.svc.cluster.local"
    destroy:
      - 'helm uninstall postgres --namespace {{ env.k8s.namespace }}'
    start:
      - 'kubectl scale --replicas=1 --namespace {{ env.k8s.namespace }}
        statefulset/postgres-postgresql'
    stop:
      - 'kubectl scale --replicas=0 --namespace {{ env.k8s.namespace }}
        statefulset/postgres-postgresql'
    exportVariables:
      - POSTGRES_HOST

  # ── Actix-Web App via Helm ──
  - kind: Helm
    name: actix-app
    runnerImage: 'dtzar/helm-kubectl:3.8.2'
    deploy:
      - |
        cat << EOF > actix_values.yaml
          replicaCount: 1
          image:
            repository: {{ components.actix-image.image }}
          service:
            port: 8080
          ingress:
            enabled: true
            className: bns-nginx
            host: app-{{ env.base_domain }}
          env:
            DATABASE_URL: 'postgres://{{ env.vars.POSTGRES_USER }}:{{ env.vars.DB_PASSWORD }}@{{ components.postgres.exported.POSTGRES_HOST }}:5432/{{ env.vars.POSTGRES_DB }}'
            APP_SECRET: '{{ env.vars.APP_SECRET }}'
            PORT: '8080'
            RUST_LOG: 'info'
        EOF
      - 'helm upgrade --install --namespace {{ env.k8s.namespace }}
        --post-renderer /bns/helpers/helm/bns_post_renderer
        -f actix_values.yaml actix-{{ env.unique }} ./helm/actix'
    destroy:
      - 'helm uninstall actix-{{ env.unique }} --namespace {{ env.k8s.namespace }}'
    start:
      - 'helm upgrade --namespace {{ env.k8s.namespace }}
        --post-renderer /bns/helpers/helm/bns_post_renderer
        --reuse-values --set replicaCount=1 actix-{{ env.unique }} ./helm/actix'
    stop:
      - 'helm upgrade --namespace {{ env.k8s.namespace }}
        --post-renderer /bns/helpers/helm/bns_post_renderer
        --reuse-values --set replicaCount=0 actix-{{ env.unique }} ./helm/actix'
    gitRepo: 'https://github.com/your-org/your-actix-repo.git'
    gitBranch: main
    gitApplicationPath: /helm/actix
    dependsOn:
      - postgres
      - actix-image

Key: Always include --post-renderer /bns/helpers/helm/bns_post_renderer in your helm commands. This adds labels so Bunnyshell can track resources, show logs, and manage component lifecycle.

Step 3: Deploy and Enable Preview Environments

Same flow: paste the config in Configuration, hit Deploy, then enable ephemeral environments in Settings.


Enabling Preview Environments (All Approaches)

Regardless of which approach you used, enabling automatic preview environments is the same:

  1. Ensure your primary environment has been deployed at least once (Running or Stopped status)
  2. Go to Settings in your environment
  3. Toggle "Create ephemeral environments on pull request" → ON
  4. Toggle "Destroy environment after merge or close pull request" → ON
  5. Select the target Kubernetes cluster

What happens next:

  • Bunnyshell adds a webhook to your Git provider automatically
  • When a developer opens a PR, Bunnyshell creates an ephemeral environment cloned from the primary, using the PR's branch
  • Bunnyshell posts a comment on the PR with a direct link to the running deployment
  • When the PR is merged or closed, the ephemeral environment is automatically destroyed

No GitHub Actions. No GitLab CI pipelines. No maintenance. It just works.

Optional: CI/CD Integration via CLI

If you prefer to control preview environments from your CI/CD pipeline (e.g., for custom migration scripts or load testing), you can use the Bunnyshell CLI:

Bash
# Install
brew install bunnyshell/tap/bunnyshell-cli

# Authenticate
export BUNNYSHELL_TOKEN=your-api-token

# Create, deploy, and run migrations in one flow
bns environments create --from-path bunnyshell.yaml --name "pr-123" --project PROJECT_ID --k8s CLUSTER_ID
bns environments deploy --id ENV_ID --wait
bns exec COMPONENT_ID -- sqlx migrate run

Remote Development and Debugging

Bunnyshell makes it easy to develop and debug directly against any environment — primary or ephemeral:

Port Forwarding

Connect your local tools to the remote database:

Bash
# Forward PostgreSQL to local port 15432
bns port-forward 15432:5432 --component POSTGRES_COMPONENT_ID

# Connect with psql or any DB tool
psql -h localhost -p 15432 -U actix actix_db

Execute Commands in the Container

Bash
# Run sqlx migrations manually
bns exec COMPONENT_ID -- sqlx migrate run

# Check migration status
bns exec COMPONENT_ID -- sqlx migrate info

# Open a psql session (if psql is in the runtime image)
bns exec COMPONENT_ID -- psql "$DATABASE_URL"

Because the Actix-Web runtime image is debian:bookworm-slim, you can install debugging tools on the fly in the running container via bns exec COMPONENT_ID -- sh -c 'apt-get update && apt-get install -y ...'. For a minimal scratch-based musl image, use port-forwarding to the database instead.

Live Logs

Bash
# Stream logs in real time
bns logs --component COMPONENT_ID -f

# Last 200 lines
bns logs --component COMPONENT_ID --tail 200

# Logs from the last 5 minutes
bns logs --component COMPONENT_ID --since 5m

Actix-Web with env_logger emits structured lines like:

Text
[2026-03-20T10:04:55Z INFO  actix_web::middleware::logger] 127.0.0.1 "GET /api/health HTTP/1.1" 200 15 "-" "kube-probe/1.28" 0.000234

The RUST_LOG environment variable controls verbosity. Set RUST_LOG=actix_web=debug,myapp=trace for maximum detail during debugging.

Live Code Sync

For active development, sync your local code changes to the remote container in real time:

Bash
bns remote-development up --component COMPONENT_ID
# Edit files locally — changes sync automatically
# When done:
bns remote-development down

Remote development sync is most useful for interpreted files (templates, static assets, config). For Rust source changes, you still need to trigger a rebuild — either re-deploy the component or use a development image with cargo-watch installed.
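If you go the development-image route, one approach (a sketch, separate from the production Dockerfile above) is a dedicated dev image that rebuilds and restarts on every synced change via cargo-watch:

```dockerfile
# Development-only image: not for production use
FROM rust:1.77-slim AS dev

RUN apt-get update && apt-get install -y --no-install-recommends \
    libpq-dev pkg-config && \
    rm -rf /var/lib/apt/lists/* && \
    cargo install cargo-watch

WORKDIR /app
COPY . .

EXPOSE 8080
# Re-run `cargo run` whenever synced source files change
CMD ["cargo", "watch", "-x", "run"]
```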


Troubleshooting

  • 502 Bad Gateway: Actix-Web isn't listening on 0.0.0.0:8080. Check your bind call: it must be ("0.0.0.0", port), not ("127.0.0.1", port).
  • Build times out (10+ min): the first build compiles all dependencies. Add a cargo fetch layer before your source copy to cache deps, and ensure Docker layer cache is enabled in your cluster.
  • SQLX_OFFLINE error at build time: either set ENV SQLX_OFFLINE=true in the Dockerfile and commit sqlx-data.json, or provide DATABASE_URL as a build argument.
  • Migration fails on startup: ensure the postgres component is listed in dependsOn. Add a retry loop or use Kubernetes initContainers to wait for the DB to be ready.
  • Connection refused to PostgreSQL: verify DATABASE_URL uses postgres (the component name) as the host, not localhost.
  • Panic differs between debug and release builds: integer overflow panics in debug but wraps in release. Use checked_add / saturating_add or audit your arithmetic.
  • 522 Connection timed out: the cluster may be behind a firewall. Verify Cloudflare IPs are whitelisted on the ingress controller.
  • cargo build fails with linker error: the builder image needs libpq-dev (for sqlx/diesel) and pkg-config. Add them to the apt-get install step in the builder stage.
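The overflow row above deserves a concrete illustration. In debug builds integer overflow panics; in release builds it wraps silently. The explicit stdlib methods below make the intent unambiguous regardless of build profile:

```rust
fn main() {
    let a: u8 = 250;

    // checked_add returns None on overflow instead of panicking or wrapping
    assert_eq!(a.checked_add(10), None);
    assert_eq!(a.checked_add(5), Some(255));

    // saturating_add clamps at the type's bounds
    assert_eq!(a.saturating_add(10), 255);

    // wrapping_add makes wraparound explicit: (250 + 10) mod 256 = 4
    assert_eq!(a.wrapping_add(10), 4);

    println!("overflow handled explicitly");
}
```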

What's Next?

  • Add background workers — Spawn a Tokio task or use a crate like tokio-cron-scheduler for scheduled jobs; add a separate component for long-running workers
  • Seed test data — Build a separate seed binary into the runtime image and run it post-deploy (e.g., bns exec <ID> -- /app/seed); note that cargo run won't work in the slim runtime image, which contains no Rust toolchain
  • Add Redis for caching — Add a redis:7-alpine Service component and pass REDIS_URL to your app
  • Monitor with Sentry — The sentry crate integrates with Actix-Web middleware; pass SENTRY_DSN as an environment variable
  • Switch to musl for fully static binaries — Build with rust:1.77-alpine targeting x86_64-unknown-linux-musl for a scratch-based runtime image under 20 MB
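For the Redis bullet, a hedged sketch of the extra component, mirroring the shape of the postgres component from Approach A (the exact kind value may differ in your Bunnyshell version):

```yaml
  - kind: Service
    name: redis
    dockerCompose:
      image: 'redis:7-alpine'
      ports:
        - '6379:6379'
```

Then add REDIS_URL: 'redis://redis:6379' to the actix-app environment block; as with postgres, the hostname is the component name.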

Ship faster starting today.

14-day full-feature trial. No credit card required. Pay-as-you-go from $0.007/min per environment.