# Database Optimization

Postgres performance auditing, RLS optimization, index management, pgTAP testing, and benchmarking.

How to audit, optimize, and test the platform's Postgres database for performance and security.
## Audit Process

### 1. Extract Current Schema

```bash
# Full schema dump (tables, indexes, policies, functions)
npx supabase db dump --local > /tmp/schema.sql

# Extract specific components
npx supabase db dump --local | grep "CREATE INDEX" | sort
npx supabase db dump --local | grep "CREATE POLICY" | sort
npx supabase db dump --local | grep "ENABLE ROW LEVEL SECURITY" | sort
```

### 2. Identify RLS Performance Issues
**`auth.uid()` wrapping** — the single most impactful optimization. Bare `auth.uid()` is re-evaluated per row; `(SELECT auth.uid())` caches the result for the entire query.
```bash
# Find policies with bare auth.uid()
npx supabase db dump --local | awk '/^CREATE POLICY/,/;/' > /tmp/policies.sql
# Then search for auth.uid() not preceded by SELECT
```

Pattern: every RLS policy that references `auth.uid()` must wrap it:

```sql
-- BAD: re-evaluated per row
WHERE user_id = auth.uid()

-- GOOD: cached for the entire query
WHERE user_id = (SELECT auth.uid())
```

### 3. Find Duplicate Indexes
```bash
# Look for identical index definitions
npx supabase db dump --local | grep "CREATE INDEX" | sort -t'"' -k4 | uniq -d
```

Common duplication patterns:

- Single-column index superseded by a composite starting with the same column
- Index created in the baseline, then re-created in a later migration
- Different names, identical definitions
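These patterns can also be caught in a script rather than by eye. A minimal TypeScript sketch, assuming the schema dump text is in hand (the `findDuplicateIndexes` helper and its parsing regex are illustrative, not platform code):

```typescript
// Parse CREATE INDEX statements out of a schema dump and group
// indexes that share the same table and the same definition body.
type IndexDef = { name: string; table: string; definition: string };

function parseCreateIndexes(dump: string): IndexDef[] {
  const re =
    /CREATE (?:UNIQUE )?INDEX (?:IF NOT EXISTS )?"?(\w+)"? ON (?:\w+\.)?"?(\w+)"?\s*(.+?);/gs;
  const out: IndexDef[] = [];
  for (const m of dump.matchAll(re)) {
    // Normalize whitespace so formatting differences don't hide duplicates
    out.push({ name: m[1], table: m[2], definition: m[3].replace(/\s+/g, ' ').trim() });
  }
  return out;
}

function findDuplicateIndexes(dump: string): string[][] {
  const byKey = new Map<string, string[]>();
  for (const idx of parseCreateIndexes(dump)) {
    // Same table + same definition = duplicate, regardless of index name
    const key = `${idx.table}::${idx.definition}`;
    byKey.set(key, [...(byKey.get(key) ?? []), idx.name]);
  }
  return [...byKey.values()].filter((names) => names.length > 1);
}
```

Note this only catches exact duplicates ("different names, identical definitions"); the composite-prefix case still needs manual review.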
### 4. Check RLS Coverage

```bash
# Tables with RLS enabled
npx supabase db dump --local | grep "ENABLE ROW LEVEL SECURITY" | wc -l

# Tables WITHOUT RLS: compare the RLS list against the full table list
```

Any table accessible via PostgREST without RLS is fully open.

## RLS Performance Patterns
### Tenant Membership Check (Most Common Pattern)

Nearly every RLS policy checks tenant membership. The optimized pattern:

```sql
CREATE POLICY "tenant_read" ON some_table
  FOR SELECT
  USING (tenant_id IN (
    SELECT user_tenants.tenant_id FROM user_tenants
    WHERE user_tenants.user_id = (SELECT auth.uid())
  ));
```

Key elements:

- `(SELECT auth.uid())` — cached subquery, not evaluated per row
- `IN (SELECT ...)` — a semi-join, efficient with an index
- The `idx_user_tenants_user` index covers the subquery
- The `idx_user_tenants_tenant` index covers reverse lookups
### authorize() Function

The `authorize()` function checks the materialized `user_permissions` table. It's scoped by tenant:

```sql
-- Uses idx_user_permissions_lookup (user_id, tenant_id, permission)
RETURN EXISTS (
  SELECT 1 FROM user_permissions up
  WHERE up.user_id = (SELECT auth.uid())
    AND up.tenant_id = v_tenant_id
    AND up.permission = requested_permission
);
```

### get_active_tenant_id() Optimization

The function tries `current_setting('app.tenant_id')` first (set by the application layer), falling back to a profile table lookup. The session variable avoids a query per RLS check.
## Index Management

### Index Principles

- **Always lead with `tenant_id`** — every query goes through RLS, which filters by tenant
- **Composite indexes cover prefix queries** — `(tenant_id, entity_type_id, created_at)` also covers `(tenant_id, entity_type_id)` and `(tenant_id)`
- **Partial indexes for status fields** — `WHERE status IN (...)` reduces index size
- **GIN indexes for arrays and full-text** — `tags text[]`, `fts tsvector`
- **Don't index product-specific JSONB fields in platform code** — add them per product via seed scripts
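The prefix-coverage rule in the second principle can be checked mechanically: a B-tree composite index serves any query whose columns form a leading prefix of the index's column list, in order. A small sketch (the `coversPrefix` helper is hypothetical, for illustration only):

```typescript
// Returns true if the query's columns are a leading prefix of the
// composite index's column list, in order.
function coversPrefix(indexColumns: string[], queryColumns: string[]): boolean {
  return queryColumns.every((col, i) => indexColumns[i] === col);
}
```

For example, `(tenant_id, entity_type_id, created_at)` covers a `(tenant_id, entity_type_id)` lookup, but not a lookup on `entity_type_id` alone, which is why single-column indexes on leading columns are redundant while those on non-leading columns are not.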
### Current Index Coverage (entities table)

| Index | Columns | Purpose |
|---|---|---|
| `idx_entities_tenant_type` | `(tenant_id, entity_type_id)` | List by type |
| `idx_entities_tenant_type_created` | `(tenant_id, entity_type_id, created_at DESC)` | Sort by created |
| `idx_entities_tenant_type_updated` | `(tenant_id, entity_type_id, updated_at DESC)` | Sort by updated |
| `idx_entities_tenant_type_slug` | `(tenant_id, entity_type_slug)` | Filter by slug |
| `idx_entities_tenant_created` | `(tenant_id, created_at DESC)` | Activity feed |
| `idx_entities_owner_created_at` | `(owner_id, created_at DESC)` | My items |
| `idx_entities_fts` | GIN `(fts)` | Full-text search |
| `idx_entities_tags` | GIN `(tags)` | Tag filter |
| `idx_entities_status_partial` | `(entity_type_id, tenant_id, content->>'status')` | Status filter |
| `idx_entities_parent_type` | `(parent_id, entity_type_id)` | Hierarchy |
### Adding Product-Specific Indexes

For product-specific JSONB fields, add indexes in seed scripts:

```sql
-- Example: PE product needs ticker lookup
CREATE INDEX IF NOT EXISTS idx_entities_content_ticker
  ON entities (lower((content ->> 'ticker')::text))
  WHERE (content ->> 'ticker') IS NOT NULL;
```

## pgTAP Testing
### Setup

pgTAP is built into Supabase's local dev environment. Tests live in `supabase/tests/`.

```bash
# Run all tests
npx supabase test db

# Create a new test file
npx supabase test new my_test
```

### Test Structure
```sql
BEGIN;
SELECT plan(N); -- N = number of assertions

-- Assertions
SELECT ok(expression, 'description');
SELECT is(actual, expected, 'description');
SELECT has_function('schema', 'name', ARRAY['arg_types'], 'description');
SELECT function_returns('schema', 'name', ARRAY['args'], 'return_type', 'desc');

SELECT * FROM finish();
ROLLBACK; -- Always roll back to keep tests isolated
```

### What to Test
| Area | Test Functions | Example |
|---|---|---|
| RLS enabled | `ok((SELECT relrowsecurity FROM pg_class WHERE relname = 'X'))` | Verify all tables |
| Indexes exist | `ok(EXISTS (SELECT 1 FROM pg_indexes WHERE indexname = 'X'))` | After migration |
| Functions exist | `has_function('public', 'name', ARRAY['types'])` | Schema validation |
| Function behavior | `is(authorize('perm'), false, 'denied without auth')` | Logic testing |
| Policy presence | Check the `pg_policies` catalog | After RLS changes |
### Running Specific Tests

```bash
# All tests
npx supabase test db

# Tests run in alphabetical order
# Prefix with numbers: 00_rls.sql, 01_indexes.sql, 02_functions.sql
```

## Benchmarking
### Local Benchmarking with EXPLAIN ANALYZE

Connect to the local Supabase instance and run query analysis:

```bash
# Verify the connection works
npx supabase db dump --local

# Or use the REST API to measure query timing
curl -X POST 'http://127.0.0.1:54321/rest/v1/rpc/get_entity_counts_by_type' \
  -H 'apikey: YOUR_ANON_KEY' \
  -H 'Content-Type: application/json' \
  -d '{"p_tenant_id": "00000000-0000-0000-0000-000000000000"}'
```

### pg_stat_statements
Enabled by default. Access via Supabase Dashboard > Query Performance to find:
- Most time-consuming queries — Total execution time
- Most frequently called — Call count
- Slowest average — Mean time per call
- Index recommendations — Built-in Index Advisor
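The dashboard's ranking can also be reproduced from raw rows when triaging offline. An illustrative sketch (the `StatRow` shape mirrors a subset of the view's columns; `rankByTotalTime` is a hypothetical helper, not platform code):

```typescript
// Subset of pg_stat_statements columns relevant to prioritization
type StatRow = { query: string; calls: number; total_exec_time: number };

// Rank queries for optimization: highest cumulative execution time first,
// with mean time per call derived from the totals.
function rankByTotalTime(rows: StatRow[], limit = 20) {
  return [...rows]
    .sort((a, b) => b.total_exec_time - a.total_exec_time)
    .slice(0, limit)
    .map((r) => ({ ...r, mean_exec_time: r.total_exec_time / r.calls }));
}
```

Total time surfaces the best optimization targets; a query with a modest mean but millions of calls can outweigh a slow but rare one.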
### Sentry Supabase Integration

The platform has `supabaseIntegration()` enabled in both the client and server Sentry configs:

- Automatic `db.select`/`db.insert`/`db.update`/`db.delete` spans for every Supabase query
- RLS error capture
- Query Insights dashboard in Sentry
### Before/After Comparison

When making database optimizations, measure before and after:

1. **Before**: record the query plan with `EXPLAIN (ANALYZE, BUFFERS, FORMAT JSON)`
2. **Apply the migration**: `npx supabase db reset --local`
3. **After**: re-run the same query plan
4. **Compare**: planning time, execution time, buffer hits
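The comparison step can be scripted over the parsed JSON. A minimal sketch (the `comparePlans` helper is illustrative; the quoted keys are Postgres's own field names in `EXPLAIN (ANALYZE, BUFFERS, FORMAT JSON)` output):

```typescript
// EXPLAIN (FORMAT JSON) returns a one-element array; with ANALYZE and
// BUFFERS the element carries timing fields and per-node buffer counters.
type ExplainOutput = Array<{
  'Planning Time': number;
  'Execution Time': number;
  Plan: { 'Shared Hit Blocks'?: number };
}>;

function comparePlans(before: ExplainOutput, after: ExplainOutput) {
  const [b, a] = [before[0], after[0]];
  return {
    planningTimeDelta: a['Planning Time'] - b['Planning Time'],
    executionTimeDelta: a['Execution Time'] - b['Execution Time'],
    // Root-node buffer hits only; child nodes have their own counters
    bufferHitsDelta: (a.Plan['Shared Hit Blocks'] ?? 0) - (b.Plan['Shared Hit Blocks'] ?? 0),
  };
}
```

Negative deltas for planning and execution time indicate an improvement; run each plan a few times first so caching effects don't skew the numbers.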
## Migration Checklist

Before submitting a database optimization migration:

- `npx supabase db reset --local` succeeds
- `npx supabase test db` — all pgTAP tests pass
- `pnpm db:types` — regenerate TypeScript types
- `pnpm typecheck` — no type errors
- `pnpm test` — application tests pass
- `pnpm build` — build succeeds
- Document what was changed and why in migration comments
- For production: use `CREATE INDEX CONCURRENTLY` (not inside a transaction)
## Production Deployment Notes

### Index Changes

- Use `CREATE INDEX CONCURRENTLY` outside transactions for production
- Drop indexes only after verifying they're truly redundant via `pg_stat_user_indexes`
- Monitor `idx_scan` counts before dropping — zero scans means safe to remove

### RLS Policy Changes

- `DROP POLICY` + `CREATE POLICY` is atomic within a transaction
- Test with real data volumes — RLS performance varies with table size
- Monitor query times after deployment via `pg_stat_statements`

### Function Changes

- `CREATE OR REPLACE FUNCTION` is safe — no downtime
- Test with both authenticated and anonymous contexts
- Verify `SECURITY DEFINER` functions have `SET search_path`
## Supabase CLI Inspection Tools

The `supabase inspect db` command provides built-in diagnostics:

```bash
npx supabase inspect db cache-hit       # Cache hit rate (target >99%)
npx supabase inspect db unused-indexes  # Indexes with low scan activity
npx supabase inspect db index-usage     # Index efficiency per table
npx supabase inspect db seq-scans       # Sequential scan counts (index candidates)
npx supabase inspect db outliers        # High execution time, low frequency queries
npx supabase inspect db bloat           # Wasted disk space estimates
npx supabase inspect db table-sizes     # Storage analysis
npx supabase inspect db locks           # Lock contention
```

### Key pg_stat_statements Queries
```sql
-- Most time-consuming queries (optimization priority)
SELECT query, calls, total_exec_time, mean_exec_time, rows
FROM pg_stat_statements ORDER BY total_exec_time DESC LIMIT 20;

-- Cache hit rate (should be >99%)
SELECT sum(heap_blks_hit) / nullif(sum(heap_blks_hit) + sum(heap_blks_read), 0) AS ratio
FROM pg_statio_user_tables;

-- Index hit rate (should be >99%)
SELECT sum(idx_blks_hit) / nullif(sum(idx_blks_hit) + sum(idx_blks_read), 0) AS ratio
FROM pg_statio_user_indexes;
```

### Index Advisor
The `index_advisor` extension recommends indexes for specific queries:

```sql
CREATE EXTENSION IF NOT EXISTS index_advisor;

SELECT * FROM index_advisor('
  SELECT * FROM entities
  WHERE tenant_id = $1::uuid AND entity_type_slug = $2::text
  ORDER BY created_at DESC LIMIT 50
');
```

Access it via Dashboard > Database > Query Performance > click a query > Indexes tab.

**Limitation**: it only recommends single-column B-tree indexes. Manual analysis is still needed for composite, partial, GIN, and BRIN indexes.
### Row Count Estimation

Use an estimated count for UI (fast, approximate) and an exact count only when precision matters:

```typescript
// Fast: uses pg_class.reltuples for large tables, exact for small
const { count } = await supabase
  .from('entities')
  .select('*', { count: 'estimated', head: true })
  .eq('tenant_id', tenantId);
```

## Application Query Patterns to Watch
Findings from the codebase audit (see the query-pattern-auditor report):

- **Sequential relation updates** — `syncEntityRelationsFromConfig` loops with a sequential INSERT/DELETE per key. Batch with `Promise.all()`.
- **Nested joins returning full entities** — `entity_relations` queries select `*, entity_types(*)` on both sides. Project only the needed columns.
- **Unbounded `.in()` arrays** — `listEntitiesKeyed` accepts unlimited ID arrays. Chunk to at most 100 items.
- **Graph API fetches dangling relations** — relations reference entities outside the result set. Filter to only connected entities.
- **Feed selects all columns** — pagination queries fetch the full `content` JSONB. Project only display columns.
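The chunking and batching fixes above can be sketched in TypeScript. A minimal sketch, using a generic batch-query callback to stand in for a call like `listEntitiesKeyed` (the `chunk` and `fetchInBatches` helpers are illustrative, not platform code):

```typescript
// Split a list into batches of at most `size` items
function chunk<T>(items: T[], size: number): T[][] {
  const out: T[][] = [];
  for (let i = 0; i < items.length; i += size) out.push(items.slice(i, i + size));
  return out;
}

// Issue one query per batch concurrently and flatten the results.
// Caps each .in() filter at 100 IDs instead of passing an unbounded array.
async function fetchInBatches<T>(
  ids: string[],
  queryBatch: (batch: string[]) => Promise<T[]>,
): Promise<T[]> {
  const results = await Promise.all(chunk(ids, 100).map(queryBatch));
  return results.flat();
}
```

This keeps each `.in()` filter to a bounded size (PostgREST encodes it into the request URL), and `Promise.all` runs the batches concurrently rather than sequentially, addressing the first and third findings with the same pattern.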