How to run three brands on one nSelf server
A question we get frequently: can a single nSelf server run multiple products without them interfering with each other? The answer is yes, and the architecture is simpler than you might expect.
This post documents the exact pattern using a real example: running three separate brands on a single CX23 Hetzner instance.
The problem with naive multi-tenancy
The naive approach is to run one database and use a tenant ID column to separate data. That works, but it has problems:
- A bug in one app can corrupt another app's data
- Auth tokens from one app can leak to another
- Deploying one app requires restarting services that affect all apps
- Compliance is harder — data is physically co-located
nSelf solves this differently. Each application gets its own isolated services: its own Postgres database, its own Hasura instance, its own auth service, its own storage bucket. The only shared resources are the host machine and nginx.
How nSelf isolates apps
nSelf uses the source_account_id column pattern for multi-app isolation within a single deploy context. But for true brand isolation — different frontends, different schemas, different auth configs — the pattern is separate nSelf projects sharing a host via nginx.
Each app gets its own .env file and its own set of Docker services. nSelf's nginx config merges all apps into a single nginx instance that routes by subdomain.
Here is the directory structure:
```
/opt/nself/
├── brand-a/                 # first brand
│   ├── .env
│   └── docker-compose.yml   # generated by nself build
├── brand-b/                 # second brand
│   ├── .env
│   └── docker-compose.yml
├── brand-c/                 # third brand
│   ├── .env
│   └── docker-compose.yml
└── nginx/                   # shared nginx (managed by nself)
    └── sites/               # one .conf per brand, generated
```
Configuring each brand
Each brand is a standard nSelf project. The key is that each gets unique ports in .env:
Brand A .env:

```
NSELF_PROJECT_NAME=brand-a
NSELF_DOMAIN=brand-a.example.com
POSTGRES_PORT=5432
HASURA_PORT=8080
AUTH_PORT=4000
STORAGE_PORT=5000
```
Brand B .env:

```
NSELF_PROJECT_NAME=brand-b
NSELF_DOMAIN=brand-b.example.com
POSTGRES_PORT=5433
HASURA_PORT=8081
AUTH_PORT=4001
STORAGE_PORT=5001
```
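Brand C follows the same pattern. The ports below are the natural next increments, not values mandated by nSelf — pick any free ports:

```
NSELF_PROJECT_NAME=brand-c
NSELF_DOMAIN=brand-c.example.com
POSTGRES_PORT=5434
HASURA_PORT=8082
AUTH_PORT=4002
STORAGE_PORT=5002
```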
All services bind to 127.0.0.1. External traffic goes through nginx, which proxies to the correct internal port based on the subdomain.
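To make the routing concrete, here is a hypothetical per-brand site file. The real files are generated by nSelf; the paths, locations, and ports shown are illustrative assumptions, not nSelf's actual output:

```nginx
# sites/brand-a.conf — illustrative sketch; nself build generates the real file
server {
    listen 443 ssl;
    server_name brand-a.example.com;

    # Hasura GraphQL endpoint, reachable only on localhost
    location /v1/graphql {
        proxy_pass http://127.0.0.1:8080;
    }

    # Auth service
    location /auth/ {
        proxy_pass http://127.0.0.1:4000/;
    }

    # Storage (MinIO)
    location /storage/ {
        proxy_pass http://127.0.0.1:5000/;
    }
}
```

Brand B's file is identical except for the server_name and the upstream ports, which is why unique ports per brand are the only coordination the pattern needs.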
```sh
# In each project directory
nself build
nself start
```
nSelf auto-detects that multiple projects share the host and merges the nginx configs. Each subdomain routes to its own Hasura, auth, and storage endpoints.
Isolation guarantees
Database isolation. Each brand gets a dedicated Postgres instance on its own port. There is no shared schema, no shared connection pool. A runaway query in brand A cannot starve brand B.
Auth isolation. Each brand has its own Hasura Auth instance with its own JWT secret. A token issued by brand A is invalid at brand B. There is no cross-brand session risk.
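The cross-brand token guarantee reduces to distinct signing keys. This stdlib-only Python sketch (not nSelf code; the secrets are hypothetical) shows why an HS256 token minted with brand A's secret fails verification against brand B's:

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as JWT requires."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign(payload: dict, secret: str) -> str:
    """Build a minimal HS256 JWT: header.payload.signature."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    msg = f"{header}.{body}".encode()
    sig = b64url(hmac.new(secret.encode(), msg, hashlib.sha256).digest())
    return f"{header}.{body}.{sig}"

def verify(token: str, secret: str) -> bool:
    """Recompute the signature with our secret and compare."""
    header, body, sig = token.split(".")
    msg = f"{header}.{body}".encode()
    expected = b64url(hmac.new(secret.encode(), msg, hashlib.sha256).digest())
    return hmac.compare_digest(sig, expected)

token = sign({"sub": "user-1", "brand": "brand-a"}, "brand-a-secret")
print(verify(token, "brand-a-secret"))  # True
print(verify(token, "brand-b-secret"))  # False — brand B rejects brand A's token
```

Because each brand's auth service holds a different secret, the only way a token validates at brand B is if brand B signed it.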
Storage isolation. Each brand's MinIO instance has its own data directory. Files from brand A are not accessible to brand B's API.
Deploy isolation. Restarting brand A's services does not affect brand B. You can deploy updates to one brand without touching the others.
Plugin sharing
Some plugins can be shared across brands via a shared install, but most should be installed per brand to maintain isolation:
```sh
cd /opt/nself/brand-a && nself plugin install notify cron
cd /opt/nself/brand-b && nself plugin install notify ai
cd /opt/nself/brand-c && nself plugin install chat livekit
```
Each plugin install adds to that brand's docker-compose.yml. Plugins from different brands do not share containers.
Resource allocation on a CX23
A Hetzner CX23 has 4 vCPUs and 8 GB RAM. Three full nSelf stacks with moderate traffic fit comfortably within those limits:
| Service | Per brand | Three brands total |
|---|---|---|
| Postgres | 512 MB | 1.5 GB |
| Hasura | 256 MB | 768 MB |
| Auth | 128 MB | 384 MB |
| Nginx | shared | ~128 MB |
| OS + overhead | — | ~1 GB |
| Available headroom | — | ~4 GB |
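The headroom figure follows from simple arithmetic over the table. A quick sanity check in Python (all figures in MB, taken from the per-brand assumptions above):

```python
# Back-of-the-envelope memory budget for three brands on an 8 GB host.
per_brand = {"postgres": 512, "hasura": 256, "auth": 128}  # MB, from the table
brands = 3

stacks = brands * sum(per_brand.values())  # 3 × 896 = 2688 MB
shared = 128 + 1024                        # shared nginx + OS overhead
headroom = 8 * 1024 - stacks - shared      # what's left for spikes and caches

print(f"stacks: {stacks} MB, headroom: {headroom} MB")
# stacks: 2688 MB, headroom: 4352 MB (~4 GB)
```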
For high-traffic workloads, add Redis per brand for caching. That adds ~64 MB per brand.
Monitoring all three
nSelf's monitoring stack (Prometheus, Grafana, Loki) can be deployed once and configured to scrape all three brands:
```sh
# Deploy monitoring in a shared location
cd /opt/nself/monitoring
nself plugin install prometheus grafana loki
nself build
nself start
```
The Prometheus config auto-discovers all nSelf services on the host and labels metrics by project name. One Grafana dashboard shows all three brands.
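The per-project labeling might look like the following scrape-config fragment. This is an illustrative sketch, not nSelf's generated output — the job names, metrics ports, and label key are assumptions:

```yaml
# prometheus.yml fragment — illustrative; nself generates the real config
scrape_configs:
  - job_name: brand-a
    static_configs:
      - targets: ["127.0.0.1:8080"]   # assumed metrics port for brand A
        labels:
          project: brand-a
  - job_name: brand-b
    static_configs:
      - targets: ["127.0.0.1:8081"]   # assumed metrics port for brand B
        labels:
          project: brand-b
```

With a project label on every series, a single Grafana dashboard can use a template variable to switch between brands or overlay all three.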
When to upgrade from one server
This pattern works well for three to five brands with moderate traffic. Signs that you need to separate:
- Any brand consistently exceeds 50 concurrent users
- One brand's traffic spikes cause latency in others
- You need per-brand compliance (GDPR data residency by region)
- Development and production share a host (they should not)
At that point, each brand gets its own VPS. The migration is straightforward: nself export on the old server, nself import on the new one.
The nSelf way
This pattern demonstrates the nSelf philosophy: own the whole stack, know exactly what is running, and scale by adding servers rather than by paying per-seat SaaS fees. Three brands on a $15/month VPS costs the same as one team member's Supabase Pro subscription. The math compounds as you grow.
The CLI reference for multi-project setups: nself help multi-project.