How to Get Your Supabase Project on Docker (Complete Guide)
- Leanware Editorial Team
- Feb 25
- 10 min read
Getting Supabase working with Docker can mean different things depending on your goal. Some people want a quick local development environment. Others want to self-host Supabase in production. Still others simply want to put their application that uses Supabase into Docker containers.
This guide covers all three scenarios: running Supabase locally using Docker for development, self-hosting Supabase on your own infrastructure with Docker Compose, and containerizing an application that connects to Supabase. By the end, you'll know exactly which path fits your situation and how to execute it.
What Does "Getting Supabase on Docker" Actually Mean?

When you say “run Supabase in Docker,” it usually means one of two architectural patterns:
Running a full Supabase stack locally for development and testing.
Dockerizing your own application (frontend or backend) that uses Supabase services.
These concepts are related but not the same. The first is about standing up Supabase services (like Postgres, Auth, Storage) inside Docker containers, whether locally for development or self-hosted in production. The second is about packaging your own app code into a container that can talk to Supabase, wherever that instance runs.
If you mix these up, you may end up troubleshooting the wrong layer or confusing development setup with production architecture.
Option 1: Running Supabase Locally with Docker
The Supabase CLI relies on Docker internally to spin up a full local Supabase stack on your machine. You get Postgres, Auth, Storage, Realtime, Studio, and the API gateway running locally, which mirrors the Supabase Cloud environment closely enough for reliable development and testing.
This is primarily a development and testing workflow. The CLI manages the Docker Compose orchestration for you, so you don't have to write any Docker configuration yourself.
Option 2: Dockerizing Your App That Uses Supabase
This is a completely different thing. Here, Supabase remains external (either on Supabase Cloud or a self-hosted instance), and you're containerizing your own frontend or backend application. You write a Dockerfile for your app, pass in the Supabase connection credentials as environment variables, and deploy the container wherever you need it.
These two scenarios are easy to conflate but architecturally very different. The sections below walk through each one in detail.
When Should You Run Supabase in Docker?
Not every situation calls for Docker. It makes sense for:
Local development parity (matching your local environment to staging)
Offline or air-gapped work
CI/CD pipelines that run migrations without a remote Supabase instance
Compliance requirements that prohibit third-party data storage
Cost control at high usage volumes where self-hosting becomes economical
If none of these apply and you're just building an app, Supabase Cloud is the faster and lower-maintenance choice.
How to Run Supabase Locally with Docker (Step-by-Step)
This is the standard local development workflow. You'll need the Supabase CLI and Docker installed. The CLI starts and manages the local stack, while Docker runs the underlying services.
Installing Supabase CLI
Make sure Docker Desktop is installed and running first. The CLI depends on it to pull and start the service images. Install the CLI using your preferred package manager:
# macOS (Homebrew)
brew install supabase/tap/supabase
# npm
npm install -g supabase
# Windows (Scoop)
scoop bucket add supabase https://github.com/supabase/scoop-bucket.git
scoop install supabase

Verify the installation with supabase --version before proceeding.
Initializing a Supabase Project
Navigate to your project directory and run:
supabase init

This creates a new supabase folder in your project. It's safe to commit this folder to version control. The folder contains your local configuration, migration files, seed data, and function definitions. The structure keeps your Supabase-related files organized alongside your application code.
Starting the Local Supabase Stack
supabase start

This may take some time on the first run since Docker needs to download the images. The CLI includes the full Supabase toolset along with additional images for local development, such as a local SMTP server and a database diff tool.
Once it finishes, the CLI outputs your local credentials: the API URL, the anon key, the service role key, the database URL, and the Studio URL. Save these for your local environment configuration.
Understanding the Docker Services Supabase Spins Up
When you run supabase start, the CLI orchestrates multiple Docker containers via Docker Compose. Here's what each one does:
| Service | Description |
| --- | --- |
| Postgres | The primary database; runs with logical replication enabled for Realtime |
| GoTrue (Auth) | Handles authentication, user management, and JWT issuance |
| PostgREST | Auto-generates a REST API from your Postgres schema |
| Realtime | Manages WebSocket connections for live database change events |
| Storage | File storage API backed by local disk in development |
| Studio | The local dashboard UI, equivalent to the Supabase Cloud dashboard |
| Kong | API gateway that routes traffic to the appropriate services |
| Inbucket | Local SMTP server for testing email flows without a real email provider |
You can run supabase status at any time to see which containers are running and what URLs they're serving on.
Accessing Supabase Studio Locally
Studio is accessible at http://localhost:54323 in the local environment. It gives you the same table editor, SQL editor, auth management, and storage management interface you'd find on Supabase Cloud. The API runs at http://localhost:54321.
When you're done working, stop the stack cleanly with:
supabase stop

If you want to wipe your local database and start fresh, use supabase stop --no-backup.
How to Self-Host Supabase Using Docker Compose
Self-hosting is a step up in complexity. You're no longer just running a local dev environment; you're responsible for infrastructure, security, persistence, and updates. Self-hosting is a good fit if you need full control over your data, have compliance requirements that prevent using managed services, or want to run Supabase in an isolated environment.
Using the Official Supabase Docker Repository
Supabase maintains an official Docker Compose setup in their GitHub repository. Clone it and use the provided docker-compose.yml as the base for your deployment:
git clone --depth 1 https://github.com/supabase/supabase
cd supabase/docker
cp .env.example .env

While placeholder passwords and keys are provided in .env.example, you should never start your self-hosted Supabase using these defaults. Review the configuration options and ensure you set all secrets before starting the services.
Supabase publishes stable releases of the Docker Compose setup approximately once a month. Keep an eye on the self-hosted changelog to stay current.
Environment Variables and Secrets
The .env file controls everything: database passwords, JWT secrets, API keys, and service configuration. At minimum, you need to set:
POSTGRES_PASSWORD - your database password
JWT_SECRET - used to sign and verify JWTs
ANON_KEY - the public-facing API key for client-side requests
SERVICE_ROLE_KEY - the admin-level key that bypasses Row Level Security
The anon key is safe to expose in frontend code. The service role key is not. Treat it the same way you would a root database password. Use a secrets manager like AWS Secrets Manager, HashiCorp Vault, Doppler, or Infisical to inject secrets at runtime, rather than storing them in plain .env files on your server.
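As a minimal sketch (assuming openssl is available), you can generate strong random values for POSTGRES_PASSWORD and JWT_SECRET from the command line. Note that ANON_KEY and SERVICE_ROLE_KEY are not random strings but JWTs signed with your JWT_SECRET; the Supabase self-hosting docs provide a generator for those.

```shell
# Sketch: generate strong random values for the self-hosted .env file.
# ANON_KEY and SERVICE_ROLE_KEY must instead be JWTs signed with JWT_SECRET.
JWT_SECRET=$(openssl rand -base64 48)        # comfortably over the 32-char minimum
POSTGRES_PASSWORD=$(openssl rand -base64 24)
echo "JWT_SECRET=${JWT_SECRET}"
echo "POSTGRES_PASSWORD=${POSTGRES_PASSWORD}"
```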
Persistent Volumes and Database Storage
One of the most common mistakes in Docker-based deployments is running Postgres without persistent volumes. If a container is removed without volumes configured, all your data disappears.
Your docker-compose.yml should define a named volume for Postgres:
volumes:
  db-data:

services:
  db:
    image: supabase/postgres:15.1.0.147
    volumes:
      - db-data:/var/lib/postgresql/data

Test this early. Stop your containers, restart them, and confirm your data persists. Don't discover this gap in a production incident.
Reverse Proxy and HTTPS Setup
You cannot run a production Supabase instance over plain HTTP. You need a reverse proxy that handles TLS termination and routes traffic to the right internal services. Nginx and Traefik are both commonly used options.
The internal services communicate on a Docker network and should not be directly exposed to the internet. Only your reverse proxy should have public-facing ports. Kong (the API gateway in the Supabase stack) listens internally and the proxy forwards public traffic to it.
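As one illustrative pattern (not the official configuration), a Traefik container can be the only service with published ports, forwarding HTTPS traffic to Kong over the internal Docker network. The hostname and service names below are assumptions; a real setup also needs Traefik's provider and certificate configuration.

```yaml
# Hypothetical fragment: Traefik terminates TLS and is the only service
# with public ports; Kong stays on the internal Docker network.
services:
  traefik:
    image: traefik:v3.0
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
  kong:
    labels:
      - "traefik.http.routers.supabase.rule=Host(`api.example.com`)"
      - "traefik.http.routers.supabase.tls=true"
      - "traefik.http.services.supabase.loadbalancer.server.port=8000"
    # no "ports:" section here - Kong is reachable only through Traefik
```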
If you don't need specific services like Logflare, Realtime, Storage, or Edge Runtime, you can remove the corresponding sections from docker-compose.yml to reduce resource requirements.
How to Dockerize Your App That Uses Supabase
This section is specifically about containerizing your own application, not Supabase itself. Supabase stays external here, whether that's Supabase Cloud or a self-hosted instance you've already set up.
Creating a Dockerfile for a Node.js App
Use a multi-stage build to keep your production image small:
# Stage 1: Build
FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
# Stage 2: Production
FROM node:20-alpine AS runner
WORKDIR /app
ENV NODE_ENV=production
COPY --from=builder /app/.next ./.next
COPY --from=builder /app/public ./public
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/package.json ./package.json
EXPOSE 3000
CMD ["npm", "start"]

The builder stage handles compilation and dependency installation. The runner stage copies only what's needed at runtime, keeping the final image lean.
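It also helps to add a .dockerignore next to the Dockerfile so secrets and local build artifacts never enter the build context in the first place. A minimal sketch:

```
# .dockerignore - keep local artifacts and secrets out of the build context
node_modules
.next
.env*
.git
```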
Passing Supabase Environment Variables Securely
Never hardcode your Supabase URL or keys into the Docker image. That data gets baked into the image layers and can leak through image registries.
Pass variables at runtime:
docker run -e SUPABASE_URL=https://your-project.supabase.co \
-e SUPABASE_ANON_KEY=your-anon-key \
your-app-image

In a Docker Compose setup, reference an .env file:

env_file:
  - .env.production

For production, use your platform's native secret management instead of .env files on disk. Most cloud providers (AWS, GCP, Azure, Render, Railway, Fly.io) offer first-class secret injection for containers.
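Putting it together, a compose service for the image built above might look like this sketch (the image name, env file, and port are assumptions from this guide):

```yaml
# Sketch: the app container receives Supabase credentials at startup,
# not at build time.
services:
  app:
    image: your-app-image
    env_file:
      - .env.production
    ports:
      - "3000:3000"
```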
Connecting to Supabase Cloud vs Local Supabase
The configuration difference between local and cloud is just the environment variables:
| Environment | SUPABASE_URL | SUPABASE_ANON_KEY |
| --- | --- | --- |
| Local dev | Printed by supabase start | Printed by supabase start |
| Supabase Cloud | From your project's API settings | From your project's API settings |
| Self-hosted | From your .env config | From your .env config |
Manage these with environment-specific .env files (.env.development, .env.production) and make sure your application loads the right one based on NODE_ENV.
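A small sketch of that selection logic, assuming the file names above (your framework or entrypoint script may handle this differently):

```shell
# Hypothetical helper: map NODE_ENV to the matching env file, and fail fast
# if the required Supabase variables are missing after loading it.
env_file_for() {
  case "${1:-development}" in
    production) echo ".env.production" ;;
    *)          echo ".env.development" ;;
  esac
}

require_supabase_vars() {
  [ -n "$SUPABASE_URL" ] && [ -n "$SUPABASE_ANON_KEY" ]
}
```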
Local Development vs Production: What Changes?
As you move from a local Docker environment to a production deployment, the operational requirements shift from developer experience to system reliability.
Dev Environment Setup
In local development, the priority is speed and flexibility. Use supabase db reset to replay migrations against a clean database, and supabase diff to generate migration files from schema changes made through Studio. You don't need HTTPS, persistent volumes, or production-grade secrets management locally.
Production Deployment Patterns
In production, the concerns shift. You need HTTPS via a reverse proxy, persistent volumes with automated backups, runtime secrets injection, container monitoring, and CI/CD that runs migrations before deploying application code.
If you're running the full self-hosted Supabase stack in production, budget time for operational overhead. Updates, backups, and incident response are now your responsibility.
Common Mistakes When Using Supabase with Docker
Most issues people run into are not caused by Supabase itself. They usually come from misunderstanding the architecture, skipping persistence setup, or mishandling secrets.
Confusing Supabase Cloud with Self-Hosted Supabase
The most common source of confusion is treating these two as interchangeable. Supabase Cloud is fully managed. Self-hosted is not.
If you try to apply Supabase Cloud dashboard features or update mechanisms to a self-hosted instance incorrectly, things break in unpredictable ways. Always read the self-hosting documentation separately from the general Supabase docs.
Losing Database Data Due to Missing Volumes
This one is painful when it happens. Running docker compose down -v or removing containers without named volumes wipes your database.
For a complete teardown including data removal, run docker compose down -v followed by rm -rf volumes/db/data/ to delete Postgres data. Know what each command does before running it in any environment that has data you care about.
Exposing Service Role Keys in Containers
The service role key bypasses Row Level Security entirely. If this key ends up in a Docker image, a public environment variable, or client-side code, you have a serious security problem. It should only exist in server-side processes and secret management systems. Audit your containers and CI pipelines specifically for this.
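One low-tech audit, sketched here with an assumed search pattern, is to scan the build context for inline service role keys before every image build:

```shell
# Hypothetical audit helper: list files under a directory that contain an
# inline service role key, so they can be removed before an image build.
audit_for_keys() {
  grep -RIl --exclude-dir=node_modules "SERVICE_ROLE_KEY=" "$1" 2>/dev/null
}
```

Wiring this into CI so the build fails whenever the function prints any matches catches accidental commits early.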
Security Best Practices
Once your containers are running, security becomes an operational concern rather than just a configuration detail. At this stage, focus on how secrets flow into your system, how services communicate internally, and how access is enforced across roles.
Managing Secrets Properly
Your containers should receive secrets through environment injection at startup, not from files baked into the image.
For Kubernetes, use Secrets or an external secrets operator. For Docker Compose on a VPS, Doppler or Infisical are lightweight options that sync secrets at runtime. Rotate your JWT secret and service role key periodically and document the process before you need it.
Restricting Network Exposure
Docker's internal networking is an important security layer. Services that don't need to be publicly accessible (Postgres, GoTrue, Storage internals) should only communicate on a Docker internal network. Only the API gateway or reverse proxy should bind to public-facing ports.
Define your networks explicitly in docker-compose.yml:
networks:
  supabase-internal:
    internal: true
  public:
    driver: bridge

Each service definition must also list the networks it joins; keep Postgres, GoTrue, and the other internal services off the public network.

Role-Based Access and API Protection
Backend services that use the service role key should never expose it to the client. All client-side calls should use the anon key and rely on Row Level Security in Postgres. With correctly written RLS policies, the anon key is safe to use in browser code.
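As an illustrative example (the profiles table and user_id column here are hypothetical), an RLS policy that limits reads to the row's owner looks like this:

```sql
-- Hypothetical schema: each profiles row belongs to one auth user.
alter table profiles enable row level security;

create policy "Users read own profile"
  on profiles for select
  using (auth.uid() = user_id);
```

With RLS enabled and a policy like this in place, a request made with the anon key can only see rows belonging to the authenticated user.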
Should You Even Run Supabase on Docker?
Supabase Cloud handles scaling, backups, updates, monitoring, and infrastructure management for you.
For most projects, especially early-stage ones, this removes a significant operational burden. You pay for the managed service instead of paying in engineering time. Unless you have a specific reason to self-host, the Cloud platform is the pragmatic default.
When Self-Hosting Is the Right Fit
Self-hosting becomes a viable option when you have concrete requirements that Supabase Cloud cannot satisfy:
Data residency requirements that mandate specific geographic control
Internal security policies that prohibit third-party data storage
Very high usage volumes where the economics of self-hosting are favorable
Full infrastructure ownership as an organizational requirement
Final Recommendations and Architecture Patterns
Here's a quick summary to help you pick the right approach:
| Goal | Recommended Approach |
| --- | --- |
| Local dev and testing | Supabase CLI + supabase start |
| Containerizing your app | Dockerfile + runtime env vars |
| Full control, compliance | Self-hosted Docker Compose |
| Simplicity and speed | Supabase Cloud |
Regardless of approach, these principles hold: never hardcode secrets, always configure persistent volumes in production, use Docker internal networking to limit exposure, and keep your service role key server-side only.
If you're just getting started, use Supabase Cloud for your backend and containerize your application layer. Move to self-hosting only when you have a concrete reason to do so.
Connect with our engineers at Leanware to review your Supabase architecture and choose the setup that fits your technical and operational needs.
Frequently Asked Questions
Can I run Supabase entirely on Docker?
Yes. You can run Supabase locally using the Supabase CLI, which uses Docker to spin up Postgres, Auth, Storage, Realtime, and Studio. You can also self-host Supabase in production using Docker Compose, but that requires infrastructure management and security configuration.
What is the difference between running Supabase on Docker and using Supabase Cloud?
Supabase Cloud is fully managed by Supabase. When you run Supabase on Docker, you're responsible for hosting, scaling, security, backups, and updates. Docker-based setups give you more control but require hands-on DevOps work.
Do I need Docker to use Supabase?
No. If you're using Supabase Cloud, you don't need Docker at all. Docker is only required if you want to run Supabase locally for development or self-host it in your own infrastructure.
How do I start Supabase locally with Docker?
Install the Supabase CLI, initialize your project with supabase init, and run supabase start. The CLI handles the Docker Compose orchestration and launches all required services.
Does Supabase CLI use Docker Compose?
Yes. The Supabase CLI uses Docker Compose to orchestrate Postgres, GoTrue (Auth), Storage, Realtime, and Studio.
How do I connect my Dockerized app to Supabase?
Set SUPABASE_URL and SUPABASE_ANON_KEY (or SUPABASE_SERVICE_ROLE_KEY for server-side use) as environment variables. Your container connects to whichever Supabase instance those values point to.
What environment variables are required for Supabase in Docker?
At minimum, the project URL and API key (anon or service role). For self-hosted setups, you also need POSTGRES_PASSWORD, JWT_SECRET, and additional service-specific variables in .env.
Can I use the Supabase Docker setup in production?
Yes, but it is not plug-and-play. Configure persistent volumes, backups, HTTPS via a reverse proxy, and proper secret management before going live.
How do I prevent losing database data in Docker?
Use named Docker volumes for Postgres in your docker-compose.yml. Without them, removing containers deletes your data permanently.
Is it secure to store Supabase service role keys in a Docker image?
No. Never bake service role keys into images. Inject them at runtime through environment variables or a secrets manager.




