Supabase
How the Sprinter Platform uses Supabase for authentication, Postgres with row-level security, file storage, and realtime subscriptions.
Supabase provides the entire backend for the Sprinter Platform: authentication, Postgres database with row-level security, file storage, and realtime subscriptions. The hosted project ID is mhfzqccnyqxyteedsrdi.
Authentication
Supabase Auth handles user signup, login, and session management. A database trigger auto-provisions new users on signup:
- Creates a `profiles` row with the user's email and display name
- Creates a `user_tenants` row linking the user to the default public tenant (`00000000-0000-0000-0000-000000000000`) with the `guest` role (UUID `00000000-0000-0000-0000-000000000014`)
- Rebuilds the user's materialized permissions via `rebuild_user_permissions()`
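For reference, the rows the trigger creates can be modeled as plain data. The sketch below is illustrative only: the table and column names come from the list above, the exact row shapes are assumptions, and the real work happens in a Postgres trigger, not application code.

```typescript
// Illustrative model of the rows the signup trigger creates.
// Table/column names follow the list above; exact shapes are assumptions.
const PUBLIC_TENANT_ID = "00000000-0000-0000-0000-000000000000";
const GUEST_ROLE_ID = "00000000-0000-0000-0000-000000000014";

function provisioningRows(userId: string, email: string, displayName: string) {
  return {
    // profiles row with the user's email and display name
    profile: { id: userId, email, display_name: displayName },
    // user_tenants row linking the user to the public tenant with the guest role
    userTenant: { user_id: userId, tenant_id: PUBLIC_TENANT_ID, role_id: GUEST_ROLE_ID },
  };
}
```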
The platform never calls `supabase.auth.getUser()` in API routes. Instead, all auth checks go through the auth adapter at `features/tenant/auth.ts`.
getClaims() vs getSession() vs getUser()
| Method | Where | What it does | Network calls |
|---|---|---|---|
| `getClaims()` | Middleware (`proxy.ts`) | Calls `getSession()` internally to refresh the JWT if within 90s of expiry, then verifies the JWT signature against the project's JWKS. Recommended for middleware. | 0 (refresh only when needed) |
| `getUserId()` | API routes, server components | Reads the JWT from cookies, validates locally. | 0 |
| `getUser()` | Auth provisioning only | Makes a network call to Supabase Auth to validate the session server-side. ONLY use in `ensureUserProvisioned()`. | 1 |
Use getClaims() in proxy.ts middleware — it is the method the Supabase SSR guide prescribes for this layer. Do not run any code between createServerClient and the getClaims() call; the SSR client captures the incoming cookie state at construction time and inserting logic between these two calls can cause it to operate on a stale snapshot.
Middleware cookie handling on rewrite paths
When the middleware builds a `NextResponse.rewrite()` (e.g., for tenant-scoped `/t/[tenantSlug]/...` URLs), auth cookies from `supabaseResponse` must be copied with all attributes intact:
```typescript
// Correct — preserves httpOnly, secure, path, sameSite, maxAge
supabaseResponse.cookies.getAll().forEach((cookie) => {
  rewritten.cookies.set(cookie);
});

// Wrong — strips security attributes, causes intermittent "Not authenticated" errors
supabaseResponse.cookies.getAll().forEach(({ name, value }) => {
  rewritten.cookies.set(name, value);
});
```

A rewrite response is a fresh `NextResponse` — it does not inherit cookies from `supabaseResponse` automatically. If a token refresh happened during the request (which `getClaims()` / `getSession()` may trigger), the refreshed cookie must be forwarded to the browser with the original `httpOnly`, `secure`, `sameSite`, and `maxAge` attributes. Passing only name + value causes the browser to store the token with incorrect attributes, leading to stale or mis-attributed cookies on subsequent requests. See the Supabase SSR guide for the canonical pattern.
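The failure mode is easy to see with plain objects. The sketch below uses a simplified cookie shape (an assumption for illustration, not the actual Next.js type) to show what survives each copying style:

```typescript
// Simplified cookie shape for illustration; the real Next.js type has more fields.
interface CookieLike {
  name: string;
  value: string;
  httpOnly?: boolean;
  secure?: boolean;
  sameSite?: "lax" | "strict" | "none";
  maxAge?: number;
}

// Correct style: copy the whole object, so every attribute survives
function copyFull(cookies: CookieLike[]): CookieLike[] {
  return cookies.map((c) => ({ ...c }));
}

// Wrong style: destructure name/value only; all other attributes are silently dropped
function copyNameValue(cookies: CookieLike[]): CookieLike[] {
  return cookies.map(({ name, value }) => ({ name, value }));
}
```

Running both over a cookie with `httpOnly: true` shows the first copy keeping the flag while the second returns an object with no attributes at all, which is exactly the intermittent-auth bug described above.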
All auth checks go through the auth adapter at `features/tenant/auth.ts`:
| Function | Purpose | DB calls |
|---|---|---|
| `getUserId()` | Get user ID from JWT (optional auth) | 0 |
| `requireAuth()` | Get user + tenant context (required auth) | 1 (cached) |
| `requireAdmin()` | Require admin role | 1 (cached) |
| `hasPermission(perm)` | Check a specific permission | 1-2 (cached) |
| `requirePermission(perm)` | Enforce a specific permission | 1-2 (cached) |
Under the hood, `getUserId()` calls `getClaims()`, which validates the JWT locally with zero network calls. `getUser()` is reserved exclusively for `ensureUserProvisioned()`.
Database (Postgres + RLS)
Every table is tenant-scoped. All queries must include a `tenant_id` filter, either through RLS policies or explicit WHERE clauses.
RLS patterns
RLS policies use `(SELECT auth.uid())` (wrapped in a subquery) rather than bare `auth.uid()`. PostgreSQL evaluates the subquery once and caches it, while bare `auth.uid()` is re-evaluated per row.
Tenant membership checks go through the `user_tenants` table (not `tenant_members`, which does not exist):
```sql
CREATE POLICY "Tenant members can read"
ON some_table FOR SELECT
USING (
  tenant_id IN (
    SELECT tenant_id FROM user_tenants
    WHERE user_id = (SELECT auth.uid())
  )
);
```

Key tables
The full schema is documented in `documents/DATABASE.md`. Notable gotchas:
- `user_tenants` -- not `tenant_members`
- `messages` -- not `chat_messages` (supports the AI SDK v6 `parts` column)
- `entity_types.id` has no default -- must generate via `crypto.randomUUID()`
- `entity_type_slug` on `entities` is denormalized and auto-synced via trigger -- use it for filtering instead of joining through `entity_types`
Audit tracking
New tables can opt into audit logging:
```sql
SELECT enable_audit_tracking('table_name');
```

This creates a trigger that logs all INSERT/UPDATE/DELETE operations to the `audit_logs` table with `old_data` and `new_data` JSONB columns.
Client selection
The platform provides two Supabase client factories:
Authenticated client (`createClient`)
Located at `lib/supabase/server.ts`. Created from the user's session cookies via `@supabase/ssr`. RLS policies apply -- the user can only access rows their tenant membership allows.
Use this for all user-scoped reads and writes.
```typescript
import { createClient } from "@/lib/supabase/server";

const supabase = await createClient();
const { data } = await supabase
  .from("entities")
  .select("*")
  .eq("tenant_id", tenantId);
```

The factory is wrapped in React `cache()`, so multiple calls within the same server request return the same instance.
Admin client (`createAdminClient`)
Located at `lib/supabase/admin.ts`. Uses the service role key to bypass RLS entirely. Use this only for:
- Cross-user or system operations (tenant management, user provisioning)
- AI enrichment and extraction (agent-sourced operations)
- Background jobs (Inngest functions, heartbeat execution)
```typescript
import { createAdminClient } from "@/lib/supabase/admin";

const admin = createAdminClient();
```

Query safety rules
- Never use `.single()` on UPDATE or DELETE queries -- use the `.select()` array pattern and check `data.length === 0`
- Use `.maybeSingle()` for SELECT lookups that may return no rows
- `.single()` is safe for SELECT by primary key and INSERT (always returns exactly one row)
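The first rule can be made mechanical with a small wrapper around the `.select()` array pattern. The helper below is a hypothetical sketch, not part of the codebase; it assumes the `{ data, error }` result shape that supabase-js queries resolve to:

```typescript
// Hypothetical helper for the .select() array pattern on UPDATE/DELETE.
// Assumes the { data, error } shape returned by supabase-js queries.
interface QueryResult<T> {
  data: T[] | null;
  error: { message: string } | null;
}

function expectAffected<T>(res: QueryResult<T>): T[] {
  if (res.error) throw new Error(res.error.message);
  // An empty array means no row matched: wrong id, wrong tenant, or RLS denied.
  if (!res.data || res.data.length === 0) {
    throw new Error("No rows affected");
  }
  return res.data;
}

// Usage sketch (illustrative):
// const res = await supabase.from("entities").update({ name }).eq("id", id).select();
// const updated = expectAffected(res);
```

Unlike `.single()`, this surfaces the zero-row case as an explicit error instead of a cryptic PostgREST coercion failure.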
Storage
Document files are stored in a Supabase Storage bucket named `documents`. The document system (`features/documents/`) handles:
- Upload via server actions with file validation
- Signed URL generation for secure, time-limited downloads
- File metadata tracked in the `documents` table (`title`, `file_name`, `file_path`, `file_size`, `mime_type`, `status`)
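Based on the columns listed above, the metadata row can be typed roughly as follows. The field types are assumptions; the generated `database.types.ts` is the source of truth:

```typescript
// Rough shape of a documents-table row; field types are assumptions,
// and the generated database.types.ts is authoritative.
interface DocumentRow {
  title: string;
  file_name: string;
  file_path: string; // path within the documents storage bucket
  file_size: number; // bytes
  mime_type: string;
  status: string; // processing state after upload
}

const example: DocumentRow = {
  title: "Q3 Report",
  file_name: "q3-report.pdf",
  file_path: "tenant-a/q3-report.pdf",
  file_size: 1024,
  mime_type: "application/pdf",
  status: "uploaded",
};
```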
After upload, the `document/uploaded` Inngest event triggers background processing: parsing, chunking, embedding, and extraction for linked entities.
Realtime
Supabase Realtime powers live updates across the platform:
- View realtime (`features/views/hooks/use-view-realtime.ts`) -- `useViewRealtime(viewId)` subscribes to UPDATE events on a specific view record. When an agent iterates on a view via the `manageView` tool, the UI re-renders automatically by invalidating the React Query cache.
- Entity presence -- tracks which users are viewing which entities (on the `feature/realtime-presence-messaging` branch)
Realtime subscriptions use the authenticated client so RLS applies to the channel.
Environment variables
| Variable | Purpose |
|---|---|
| `NEXT_PUBLIC_SUPABASE_URL` | Supabase project URL (always required) |
| `NEXT_PUBLIC_SUPABASE_PUBLISHABLE_KEY` | Anon/publishable key for client-side (fallback: `NEXT_PUBLIC_SUPABASE_ANON_KEY`) |
| `SUPABASE_SECRET_KEY` | Service role key for admin operations (fallback: `SUPABASE_SERVICE_ROLE_KEY`) |
Set these in your `.env.local` for development and in Vercel environment variables for production.
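The fallback behavior in the table can be sketched as a small resolver. This is illustrative only, assuming simple `??` fallbacks; the actual lookup lives inside the client factories:

```typescript
// Illustrative sketch of the env-var fallbacks described in the table above.
// The real resolution happens inside the Supabase client factories.
function resolveSupabaseEnv(env: Record<string, string | undefined>) {
  return {
    url: env.NEXT_PUBLIC_SUPABASE_URL,
    publishableKey:
      env.NEXT_PUBLIC_SUPABASE_PUBLISHABLE_KEY ?? env.NEXT_PUBLIC_SUPABASE_ANON_KEY,
    secretKey: env.SUPABASE_SECRET_KEY ?? env.SUPABASE_SERVICE_ROLE_KEY,
  };
}
```

With only the legacy names set, the resolver falls back cleanly, so either naming scheme works in `.env.local`.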
Local development
Start a local Supabase instance:
```bash
pnpm db:start   # Start local Supabase (Postgres, Auth, Storage, Realtime)
pnpm db:reset   # Reset and re-run all migrations from scratch
pnpm db:types   # Regenerate TypeScript types from the live schema
pnpm db:push    # Push migrations to the remote project
pnpm db:diff    # Generate a migration from schema changes
pnpm db:dump    # Dump the current schema to baseline SQL
```

After any migration, always run `pnpm db:types` to regenerate `lib/supabase/database.types.ts`. The build will fail if types are out of sync.
Migration file naming
Migrations live in `supabase/migrations/` and follow the naming convention:

```
YYYYMMDD_NNN_descriptive_name.sql
```

The baseline migration at `00000000000000_baseline.sql` contains the full initial schema. All subsequent migrations are forward-only additions.
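The convention can be checked mechanically. The validator below is a hypothetical illustration of the pattern, not a project script; it treats the baseline file as a special case and assumes descriptive names are lowercase snake_case:

```typescript
// Hypothetical validator for the YYYYMMDD_NNN_descriptive_name.sql convention.
// Assumes lowercase snake_case names; the baseline file is a special case.
const MIGRATION_NAME = /^\d{8}_\d{3}_[a-z0-9_]+\.sql$/;

function isValidMigrationName(filename: string): boolean {
  if (filename === "00000000000000_baseline.sql") return true;
  return MIGRATION_NAME.test(filename);
}
```

A check like this could run in CI to reject misnamed migration files before `pnpm db:push`.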