The "should we use Postgres for multi-tenant SaaS" question was settled years ago — yes, for almost any team that doesn't have specifically-pathological data needs. The real question is how. There are three workable patterns, and most of the architectural pain in scaling SaaS comes from picking the wrong one early or migrating between them late.

This is a field-tested guide based on building Vero (5,000+ tenant orgs) and several other multi-tenant SaaS platforms, plus a Postgres migration of 110M rows we wrote about separately.

The three patterns

Pattern A — Shared tables, tenant_id column

One database. One schema. Every business table has a tenant_id column, and every query filters on it.

Pros: Simplest to reason about. Cheapest to host. Migrations are atomic across the whole product.

Cons: A single missed WHERE tenant_id = ? is a data leak. Performance tuning is global — one heavy tenant can hurt everyone. Bulk operations on a single tenant (export, delete) require careful index design.

Use when: You have <10,000 tenants, similar usage profiles, and you can enforce tenant filtering at the application or RLS layer.

Pattern B — Schema-per-tenant

One database. One schema per tenant. Same tables, different schemas.

Pros: Strong isolation. Per-tenant backups and migrations are trivial. Postgres role-based access maps cleanly to schemas.

Cons: Migrations now have to run across N schemas. Postgres metadata grows linearly with tenants — at some point this hurts. Cross-tenant queries (analytics, billing) are harder.

Use when: You have 10–1,000 tenants, regulated workloads, or compliance constraints that require provable isolation.
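
The mechanics are two statements (the schema name is illustrative):

-- Provision a tenant
CREATE SCHEMA tenant_acme;

-- On each connection, after the app has authenticated the tenant
SET search_path TO tenant_acme;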

Pattern C — Database-per-tenant

A separate database (or even a separate Postgres instance) per tenant.

Pros: Maximum isolation. Per-tenant scaling. Per-tenant data residency (which is a big deal in the GCC and EU).

Cons: Operational complexity. Connection pooling becomes a serious problem. Migrations are a fleet operation.

Use when: Enterprise tier with data residency requirements, or one tenant's load could swamp the others.

The pattern that quietly wins for most products

Pattern A with Postgres Row-Level Security (RLS) as the safety net.

"The first index you build that doesn't lead with tenant_id will haunt you." — Internal Vero architecture review, 2024

Why we keep returning to it:

  1. Single-table queries scale further than people assume. Postgres handles billion-row tables fine with the right partitioning and indexing. We've yet to hit a multi-tenant ceiling that wasn't really a missing index.
  2. RLS catches the bug your code didn't. A tenant_id filter you forgot in the WHERE clause is a data leak. RLS blocks it at the database. Belt + suspenders.
  3. Migrations are atomic. Fleet migrations across schema-per-tenant are a special kind of pain. Pattern A skips it entirely.
  4. Per-tenant exports and deletes are a SQL filter, not an operational dance.
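
Point 4, concretely: a full-tenant export is one filtered COPY (the tenant id is the same placeholder used later in this piece):

-- Stream one tenant's invoices as CSV; psql's \copy does the same client-side
COPY (SELECT * FROM invoices WHERE tenant_id = '8a7c…')
    TO STDOUT WITH (FORMAT csv, HEADER);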

The tax: you have to be disciplined about the tenant_id column on every table, every index, every foreign key; hence the warning above.

RLS is not free. Row-Level Security adds planner overhead and forces you to think carefully about role-based connections. It's worth every cycle when it stops the data leak you didn't know you'd shipped — but you have to test it, not just enable it.

A minimal RLS setup looks something like this:

-- Enable RLS and pin the policy to the session tenant_id
ALTER TABLE invoices ENABLE ROW LEVEL SECURITY;
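-- Note: the table owner bypasses RLS unless it is forced (superusers
-- always do); if the app connects as the owner, also run:
-- ALTER TABLE invoices FORCE ROW LEVEL SECURITY;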

CREATE POLICY tenant_isolation ON invoices
    USING (tenant_id = current_setting('app.tenant_id')::uuid);

-- At transaction start, app sets the tenant on the connection
SET LOCAL app.tenant_id = '8a7c…';
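
Enabling it is not testing it. A minimal smoke test, run as the app role (the second tenant id is a throwaway placeholder):

BEGIN;
SET LOCAL app.tenant_id = '8a7c…';
SELECT count(*) FROM invoices;   -- this tenant's rows only

-- Switch to a tenant that owns no data; isolation holds only if this is 0
SET LOCAL app.tenant_id = '00000000-0000-0000-0000-000000000000';
SELECT count(*) FROM invoices;
ROLLBACK;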

Patterns within Pattern A that pay off

Once you've committed to shared-table multi-tenancy, six things consistently pay back the time spent:

1. Composite indexes leading with tenant_id

Every query is filtered on tenant_id. Every multi-column index should start with it. We've taken queries from 4 seconds to 8 milliseconds by reordering a single index.

-- Wrong: tenant_id buried in the middle
CREATE INDEX ON invoices (status, created_at, tenant_id);

-- Right: tenant_id leads
CREATE INDEX ON invoices (tenant_id, status, created_at);

2. Per-tenant connection pooling via SET LOCAL

Use SET LOCAL app.tenant_id = ? at transaction start. Your RLS policies read it. Your application can't accidentally leak across tenants. PgBouncer in transaction-pool mode plays nicely with this.
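
The shape of every unit of work, then (tenant id abbreviated as elsewhere in this piece):

BEGIN;
-- Scoped to this transaction; resets at COMMIT/ROLLBACK, which is exactly
-- why it's safe under transaction pooling
SET LOCAL app.tenant_id = '8a7c…';
SELECT * FROM invoices WHERE status = 'open';   -- RLS adds the tenant filter
COMMIT;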

3. Partitioning the heavy tables by tenant_id

Past a certain point, the noisy-neighbour problem is real. Hash-partitioning the largest tables by tenant_id (8–32 partitions) lets the planner prune each tenant-scoped query down to a single partition, with smaller indexes and less contention, without the operational pain of full schema-per-tenant.

CREATE TABLE events (
    id          uuid NOT NULL,
    tenant_id   uuid NOT NULL,
    payload     jsonb,
    created_at  timestamptz DEFAULT now(),
    -- on a partitioned table, the primary key must include the partition key
    PRIMARY KEY (tenant_id, id)
) PARTITION BY HASH (tenant_id);

-- 16 partitions, created once
CREATE TABLE events_p00 PARTITION OF events
    FOR VALUES WITH (modulus 16, remainder 0);
-- … repeat for p01 … p15

4. Per-tenant rate limits at the application layer

A single bad tenant can monopolise your DB. Token-bucket per tenant_id at the app gateway is the cheapest insurance you'll ever buy.
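
The bucket itself belongs in the gateway or a shared cache, but the mechanics fit in one atomic statement if you want to prototype it in Postgres first. A sketch, against an invented rate_limits table:

-- One row per tenant; capacity and refill_rate are per-tenant policy
CREATE TABLE rate_limits (
    tenant_id    uuid PRIMARY KEY,
    tokens       numeric NOT NULL,
    capacity     numeric NOT NULL,
    refill_rate  numeric NOT NULL,   -- tokens added per second
    updated_at   timestamptz NOT NULL DEFAULT now()
);

-- Refill for elapsed time, then try to take one token; a returned row
-- means the request is allowed, no row means it's throttled
UPDATE rate_limits
SET tokens = LEAST(capacity,
                   tokens + refill_rate * EXTRACT(EPOCH FROM now() - updated_at)) - 1,
    updated_at = now()
WHERE tenant_id = '8a7c…'
  AND LEAST(capacity,
            tokens + refill_rate * EXTRACT(EPOCH FROM now() - updated_at)) >= 1
RETURNING tokens;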

5. Soft delete at the row level, hard delete in batches

A tenant cancellation should not lock your DB while it deletes 40M rows. Mark deleted, batch-purge later, off-hours.
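
A sketch of both halves (the tenants table, deleted_at column, and batch size are illustrative):

-- Cancellation: one cheap write, no long-held locks
UPDATE tenants SET deleted_at = now() WHERE id = '8a7c…';

-- Off-hours purge: small batches, one transaction each;
-- loop until DELETE reports 0 rows
DELETE FROM events
WHERE id IN (
    SELECT id FROM events
    WHERE tenant_id = '8a7c…'
    LIMIT 10000
);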

6. Materialised analytics tables

Real-time analytics across all tenants on the live OLTP tables will eventually hurt you. Build a nightly (or hourly) materialised view per tenant or per cohort, and serve dashboards from there.
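
A sketch (view name and grain are illustrative):

-- Nightly rollup of event volume per tenant per day
CREATE MATERIALIZED VIEW tenant_daily_events AS
SELECT tenant_id,
       date_trunc('day', created_at) AS day,
       count(*) AS event_count
FROM events
GROUP BY tenant_id, date_trunc('day', created_at);

-- A unique index is what allows CONCURRENTLY below
CREATE UNIQUE INDEX ON tenant_daily_events (tenant_id, day);

-- Run from the scheduler; CONCURRENTLY keeps dashboard reads unblocked
REFRESH MATERIALIZED VIEW CONCURRENTLY tenant_daily_events;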

The migration story (that you really don't want)

The most expensive thing we've watched teams do is migrate from Pattern A to Pattern B (or vice versa) at scale. We did it once for a fintech client moving 110M rows across 200+ tenants, over a 19-day window with zero downtime. The takeaways:

  1. You will be doing it online. No customer accepts a maintenance window past a few minutes anymore.
  2. Triple-write before you switch reads. Old store, new store, validation table. Compare for at least a week before flipping any read traffic.
  3. Plan rollback before you plan rollforward. The first customer-facing bug after the cutover is when rollback matters. Have it scripted, tested, and one-command.
  4. Use logical replication, not pg_dump. pg_dump on a hot table at scale is a non-starter (a minimal setup, and the drift check from point 6, are sketched after this list).
  5. Backfill in shards, not in one big transaction. A single 100M-row transaction will hurt you in ways you don't expect.
  6. Compare row counts daily. Drift is real. Catch it before customers do.
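
A minimal shape for points 4 and 6 (publication, subscription, and connection details are illustrative):

-- On the source database
CREATE PUBLICATION tenant_migration FOR TABLE invoices, events;

-- On the target database
CREATE SUBSCRIPTION tenant_migration_sub
    CONNECTION 'host=old-db dbname=app user=replicator'
    PUBLICATION tenant_migration;

-- The daily drift check: run on both sides and diff the output
SELECT tenant_id, count(*) AS row_count
FROM invoices
GROUP BY tenant_id
ORDER BY tenant_id;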

The full play-by-play of that migration is its own piece — what we'd say here is: pick the right pattern early, because the migration cost grows superlinearly with your row count.

How to choose, today, on a green field

A flowchart that's served us well:

  • Are you building a regulated, data-residency-sensitive product (healthcare, finance, public sector)? → Pattern C, accept the operational cost.
  • Will you have <100 tenants but each is a serious enterprise contract? → Pattern B.
  • Anything else? → Pattern A with RLS, partitioning the heavy tables, and discipline on indexes.

Most products belong in the third bucket. Most products don't ship there because Pattern A "feels too simple to be right." It is right. Lean into it. The patterns that survive production are usually the boring ones.

We've written about that mindset more broadly elsewhere — boring stack, predictable surfaces, fewer pages on call. There's a reason we still ship Postgres in 2026.