# Architecture

System design, request flow, service boundaries, module organization, registry patterns, and key architectural decisions.

This document covers the high-level architecture of the Sprinter Platform -- how requests flow through the system, how services interact, how code is organized, and why certain design decisions were made.
## System diagram

```
User Browser
  |
  v
proxy.ts (middleware)
  | - JWT validation via getClaims() (zero network calls)
  | - Tenant slug extraction from /t/[slug]/... URLs
  | - Sets x-tenant-slug header, rewrites URL
  |
  v
Next.js App Router
  |
  +-- Server Components (SSR)
  |     |-- getTenantContext() (cached per request)
  |     |-- Server Actions (features/*/server/)
  |     |-- resolveView() for view data
  |     v
  |   Supabase Postgres (RLS enforced)
  |
  +-- API Routes (app/api/)
  |     |-- Zod validation at boundary
  |     |-- Auth adapter (requireAuth, hasPermission)
  |     v
  |   Supabase Postgres / External APIs
  |
  +-- Client Components
  |     |-- React Query for data fetching
  |     |-- DefaultChatTransport for AI streaming
  |     v
  |   API Routes / Supabase Realtime
  |
  +-- AI Chat Route (app/api/chat/)
  |     |-- Agent resolution (code registry -> DB fallback)
  |     |-- resolveAgentTools(config, permissions)
  |     |-- streamText() with tool execution
  |     v
  |   AI Providers (OpenAI, Anthropic, etc.)
  |
  +-- Inngest (background jobs)
        |-- entity/created -> extraction
        |-- agent/heartbeat -> autonomous execution
        |-- document/uploaded -> processing
        |-- extraction/result-rejected -> feedback rerun
        v
      Supabase Postgres / AI Providers
```

## Request flow
A typical user request follows this path:
1. Browser sends the request to the Next.js server
2. `proxy.ts` middleware intercepts the request:
   - Validates the JWT via `supabase.auth.getClaims()` -- local JWKS validation, zero network calls
   - If the URL starts with `/t/[slug]/`, extracts the tenant slug, sets the `x-tenant-slug` header, and rewrites the URL to strip the prefix
   - Redirects unauthenticated users to `/login` for protected pages (API routes return 401)
3. The App Router resolves the route:
   - Server Components call `getTenantContext()` (cached per request) to resolve the active tenant, then execute server actions or direct Supabase queries
   - API Routes validate input with Zod, check auth via the adapter, and return JSON responses
4. Supabase processes the query with RLS policies enforcing tenant isolation. All queries are scoped by `tenant_id`.
5. The response flows back through Next.js to the browser
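The tenant-rewrite step above can be sketched as a pure function. The name `extractTenant` and the return shape are illustrative, not the real middleware API -- the actual `proxy.ts` also validates the JWT and sets response headers:

```typescript
// Illustrative sketch of the /t/[slug] rewrite performed by proxy.ts.

interface TenantRewrite {
  slug: string | null; // value for the x-tenant-slug header (null = no tenant prefix)
  pathname: string;    // URL path with the /t/[slug] prefix stripped
}

function extractTenant(pathname: string): TenantRewrite {
  const match = pathname.match(/^\/t\/([^/]+)(\/.*)?$/);
  if (!match) return { slug: null, pathname };
  return { slug: match[1], pathname: match[2] ?? "/" };
}
```

For example, `extractTenant("/t/acme/entities/42")` yields slug `"acme"` and the rewritten path `/entities/42`.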
## AI chat flow

1. Client sends a message via `DefaultChatTransport` to `/api/chat`
2. The chat route resolves the agent (code registry first, DB fallback)
3. `resolveAgentTools(config, permissions)` builds the agent's ToolSet, filtering out tools the caller lacks permission to use
4. `streamText()` executes with the agent's system prompt, tools, and message history
5. Tool calls execute inline (entity CRUD, web search, delegation)
6. The streamed response renders in the chat panel with auto-expanded tool results
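The permission filtering in step 3 can be sketched as follows. The `ToolDef` shape and the permission strings are assumptions; the real `resolveAgentTools` also expands tool groups from the agent config:

```typescript
// Hedged sketch of permission-gated ToolSet resolution.

interface ToolDef {
  slug: string;
  requiredPermission?: string; // undefined = available to every caller
  execute: (input: unknown) => Promise<unknown>;
}

function filterToolSet(
  tools: ToolDef[],
  grantedPermissions: Set<string>,
): Record<string, ToolDef> {
  const toolSet: Record<string, ToolDef> = {};
  for (const tool of tools) {
    // Tools the caller cannot use are simply absent from the set,
    // so the model never sees them in its context.
    if (tool.requiredPermission && !grantedPermissions.has(tool.requiredPermission)) {
      continue;
    }
    toolSet[tool.slug] = tool;
  }
  return toolSet;
}
```

The same filter serves both modes described later in this document: user permissions in chat, agent role permissions in autonomous runs.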
## Background job flow

1. Application code fires an Inngest event (e.g., `entity/created`)
2. Inngest dispatches to the matching function (e.g., `action-dispatch`, `session-executor`, or `document-processing`)
3. The function runs with admin-level Supabase access, using the agent's `role_id` permissions for tool gating
4. Results are written back to the database (`sessions`, `session_events`, `entity_responses`, processed documents, and related audit rows)
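The event-to-function mapping can be illustrated with a plain dispatch table. This is a simplification: the real platform uses Inngest's function/trigger API, which adds retries, step execution, and cron scheduling on top of this shape:

```typescript
// Simplified sketch of event -> handler dispatch for the events
// listed in the system diagram.

type JobHandler = (payload: Record<string, unknown>) => Promise<void>;

const handlers: Record<string, JobHandler> = {
  "entity/created": async () => {
    // kick off field-extraction sessions for the new entity
  },
  "agent/heartbeat": async () => {
    // scheduled autonomous agent execution
  },
  "document/uploaded": async () => {
    // chunking + embedding pipeline
  },
  "extraction/result-rejected": async () => {
    // feedback rerun with the rejection reason in context
  },
};

function dispatch(event: string, payload: Record<string, unknown>): Promise<void> {
  const handler = handlers[event];
  if (!handler) throw new Error(`no handler registered for ${event}`);
  return handler(payload);
}
```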
## Service boundaries
The platform relies on four external services:
| Service | Role | Connection |
|---|---|---|
| Supabase | Auth, Postgres, RLS, Storage, Realtime | Server-side client (cached), client-side client, admin client |
| Inngest | Background job orchestration | Event-driven functions, cron scheduling |
| AI providers | LLM inference (chat, extraction, capture) | AI SDK v6 via streamText, generateObject |
| External agents | OpenClaw, A2A, MCP connections | agent_connections table, connection-based routing |
Additional external integrations:

- Exa API -- Web search tool (`webSearch`)
- Sentry -- Error monitoring (client, server, edge configs)
## Module organization

### Directory structure

```
app/                  # Thin route handlers, delegate to features
  (app)/              # Authenticated app routes
  api/                # API endpoints
  (auth)/             # Login/signup pages
features/             # Platform modules (domain-agnostic)
  entities/           # Entity CRUD, views, scoring, extraction, tags
  agents/             # Agent registry, delegation, heartbeat, org chart
  tools/              # Tool registry, execution, AI bridge, sessions
  blocks/             # 20 block types, BlockGrid, bridge functions
  views/              # Config-driven layouts, regions, tabs
  chat/               # Chat persistence, history, agent selection
  tenant/             # Multi-tenant context, auth adapter, routing
  navigation/         # Sidebar customization
  documents/          # Document storage, processing, chunking
  capture/            # Natural language -> entity creation
  context/            # Shared corrections and learnings
  comments/           # Threaded comments
  analytics/          # Event tracking
  api-keys/           # API key management
  charts/             # Recharts wrappers
  inngest/            # Background job functions
features/custom/      # Product-specific (replaced per fork)
  components/         # Custom entity type UI
  tools/              # Custom tool definitions + UI
  lib/                # Product utilities
  server/             # Product server actions
components/
  ui/                 # shadcn/ui primitives (60+)
  app-shell/          # Sidebar, agent sidebar, command palette
lib/
  utils.ts            # cn(), slugify(), humanize()
  api-utils.ts        # apiErrorResponse()
  chart-colors.ts     # CHART_COLORS palette
  supabase/           # Server, client, admin Supabase clients
supabase/
  migrations/         # SQL migration files
  seed.sql            # Dev seed data
scripts/              # Seed scripts for entity types and demo data
```

### Module conventions
Each feature module follows a consistent internal structure:

- `types.ts` -- Shared types, Zod schemas, constants
- `server/actions.ts` -- Server actions (database operations)
- `components/` -- React components
- `lib/` -- Pure functions, utilities, transforms
- `hooks/` -- React hooks (client-side)
The `app/` directory contains thin route handlers that delegate all logic to feature modules. Route files should contain minimal code -- just imports and wiring.
## Registry pattern
Four systems use runtime registries to enable extensibility without code coupling:
### Agent registry

```ts
// features/agents/default-agents.ts
registerAgent({
  slug: "amble",
  name: "Amble",
  systemPrompt: "...",
  model: "gpt-4o",
  config: { toolGroups: ["entity", "web"] },
});
```

Code-defined agents are registered at import time. The chat route checks the code registry first, then falls back to DB-managed agents (the `agents` table). Both sources produce the same `AgentDefinition` shape via `dbAgentToDefinition()`.
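A minimal sketch of this code-registry-first, DB-fallback lookup. The `AgentDefinition` shape is simplified, and the DB lookup is stubbed as a caller-supplied function rather than a real `agents`-table query:

```typescript
// Sketch of agent resolution: code registry wins, DB is the fallback.

interface AgentDefinition {
  slug: string;
  name: string;
  systemPrompt: string;
  model: string;
  config?: { toolGroups?: string[] };
}

const codeRegistry = new Map<string, AgentDefinition>();

function registerAgent(def: AgentDefinition): void {
  codeRegistry.set(def.slug, def);
}

async function resolveAgent(
  slug: string,
  loadFromDb: (slug: string) => Promise<AgentDefinition | null>,
): Promise<AgentDefinition | null> {
  // Code-defined agents shadow DB-managed agents with the same slug.
  return codeRegistry.get(slug) ?? (await loadFromDb(slug));
}
```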
### Tool registry

```ts
// features/custom/tools/roi-calculator/definition.ts
registerTool({
  slug: "roi-calculator",
  name: "ROI Calculator",
  inputSchema: roiInputSchema,
  execute: async (input) => { /* ... */ },
});
```

Tools register server-side definitions with Zod schemas and execute functions. A separate client-side UI registry (`registerToolUI()`) maps tool slugs to custom input/output components. Unregistered tools get auto-generated generic UI from their JSON Schema.
### Block registry

```ts
// features/blocks/components/stat-cards-block.tsx
registerBlock("stat-cards", {
  component: StatCardsBlock,
  editComponent: StatCardsBlockEditor,
});
```

All 20 block types register their view and edit components. `BlockRenderer` dispatches to the correct component based on `block.type`. A barrel import (`@/features/blocks/components`) ensures all blocks are registered before rendering.
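The dispatch at the heart of `BlockRenderer` can be sketched like this. Component types are reduced to plain string-returning render functions so the shape is visible without React; the fallback branch is an assumption about how unknown types are handled:

```typescript
// Sketch of registry-based block dispatch.

interface BlockEntry {
  component: (props: { data: unknown }) => string;
  editComponent?: (props: { data: unknown }) => string;
}

const blockRegistry = new Map<string, BlockEntry>();

function registerBlock(type: string, entry: BlockEntry): void {
  blockRegistry.set(type, entry);
}

// BlockRenderer's core: look up the block type and dispatch to its component.
function renderBlock(block: { type: string; data: unknown }): string {
  const entry = blockRegistry.get(block.type);
  if (!entry) return `Unknown block type: ${block.type}`;
  return entry.component({ data: block.data });
}
```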
### Skill registry

Skills are reusable instruction modules stored in the `skills` table. Agents reference skills via the `agent_skills` junction table. At runtime, skill instructions are composed into the agent's system prompt.
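The composition step might look roughly like this. The `Skill` field names and the section-header format are assumptions about the `skills` rows, not the actual schema:

```typescript
// Sketch of composing skill instructions into a system prompt.

interface Skill {
  title: string;
  instructions: string;
}

function composeSystemPrompt(basePrompt: string, skills: Skill[]): string {
  if (skills.length === 0) return basePrompt;
  // Append each skill as its own labeled section after the base prompt.
  const sections = skills.map((s) => `## Skill: ${s.title}\n${s.instructions}`);
  return [basePrompt, ...sections].join("\n\n");
}
```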
## Data flow: entity lifecycle
An entity moves through a well-defined lifecycle:
```
1. CREATE
   User creates via UI, quick capture, agent tool, or API
   -> entities row inserted
   -> activity logged
   -> entity/created Inngest event fired

2. EXTRACT
   Inngest picks up entity/created event
   -> System actions/tasks resolve field dependencies and priorities
   -> Field-population runs execute in sessions with ordered session_events
   -> Each field gets its own agent loop with tools and provenance
   -> Values are submitted as entity_responses for review/promotion

3. REVIEW
   User sees submitted responses in the review surface
   -> Approve: value committed to entity.content
   -> Reject with reason: feedback rerun starts a new session
   -> Lock: field added to entity.metadata.lockedFields (skipped on re-extraction)

4. CONNECT
   Relations created between entities
   -> entity_relations row inserted
   -> Connection fields auto-resolve related entities
   -> Cascade extraction creates sub-entities from list results

5. RENDER
   Entity detail page loads
   -> View blocks resolved against entity data
   -> BlockGrid renders the layout (bento, stack, grid)
   -> Connection fields show linked entities inline
   -> Scoring radar in sidebar
```
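The lock behavior in the REVIEW step can be sketched as a filter over the type's fields. The shapes here are assumptions; only `metadata.lockedFields` is taken from the lifecycle above:

```typescript
// Sketch of the re-extraction filter: locked fields are skipped.

interface EntityLike {
  metadata?: { lockedFields?: string[] };
}

function fieldsToExtract(schemaFields: string[], entity: EntityLike): string[] {
  const locked = new Set(entity.metadata?.lockedFields ?? []);
  return schemaFields.filter((field) => !locked.has(field));
}
```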
## Key architectural decisions

### Entity-centric data model

Everything revolves around entities. Rather than building separate tables for each domain concept (opportunities, companies, contacts), all structured data lives in the `entities` table with a `content` JSONB column. The shape of each entity is defined by its `entity_type` and `json_schema`.

Why: This makes the platform genuinely reusable. Adding a new data type is a database insert, not a code change. The UI renders any entity type generically from its schema.
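A sketch of the row shape this implies. Column names beyond `tenant_id`, `content`, and the denormalized `entity_type_slug` mentioned elsewhere in this document are assumptions:

```typescript
// Illustrative shape of the entity-centric model: one table,
// typed per entity_type by a JSON schema.

interface EntityRow {
  id: string;
  tenant_id: string;
  entity_type_slug: string;          // denormalized for direct filtering
  content: Record<string, unknown>;  // JSONB, shaped by the type's json_schema
}

// Adding a "company" type is a data change, not a code change:
const company: EntityRow = {
  id: "e_1",
  tenant_id: "t_1",
  entity_type_slug: "company",
  content: { name: "Acme", employees: 42 },
};
```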
### Async-first with Inngest
All non-trivial background work runs through Inngest: action dispatch, session execution, heartbeat, document processing, webhook delivery, cascade creation, and feedback reruns. The user-facing request returns immediately; results appear asynchronously.
Why: Agent field population can take 30+ seconds per field. Heartbeats run on schedules. Document processing involves chunking and embedding. None of these should block the UI. Inngest provides reliable execution with retries, observability, and cron scheduling.
### Config-driven rendering
Views, blocks, navigation, and entity type schemas are all stored as configuration in the database. The platform interprets this configuration at runtime to render the appropriate UI.
Why: This enables AI agents to modify the UI without code deployments. An agent can create a new view, rearrange blocks, or update a field schema -- all through tool calls that write to the database.
### Blocks as the universal rendering primitive

Every visual surface renders through the same `BlockConfig` -> resolve -> render pipeline. Entity details, dashboards, tool outputs, chat messages, and standalone pages all use the same block system.

Why: One rendering pipeline means one set of components to maintain. Bridge functions translate from different data sources (entities, tools, chat) into the same `ResolvedBlock` shape. New block types automatically work everywhere.
### Tenant isolation via RLS

Rather than application-level tenant filtering, all isolation is enforced at the database level via Supabase Row Level Security. Every table has a `tenant_id` column, and RLS policies ensure queries only return rows matching the authenticated user's tenant membership.
Why: Defense in depth. Even if application code has a bug that omits a tenant filter, the database rejects the query. This is critical for a multi-tenant SaaS platform handling sensitive business data.
### Permission-gated tool access

Agents never see tools they lack permission to use. The ToolSet is filtered at resolution time based on the caller's permissions (user permissions in chat, agent role permissions in autonomous mode).
Why: Security by exclusion. Rather than checking permissions at execution time and returning errors, tools that the agent cannot use are simply absent from its context. The agent cannot even attempt to call them.
## Performance patterns

- `getEntityTypes()` cached per request via React `cache()` -- safe to call from multiple server components without duplicate DB hits
- Entity counts via a DB function -- `get_entity_counts_by_type(tenant_id)` runs a single `GROUP BY` instead of N+1 per-type COUNT queries
- Composite indexes on `entities(tenant_id, entity_type_id)`, `entities(tenant_id, entity_type_slug)`, `entity_relations(from/to_entity_id)`, and `chats(user_id, tenant_id)`
- Denormalized `entity_type_slug` on entities -- filter directly instead of joining to resolve the slug to a UUID
- React Query `staleTime` set to 5 minutes for entity counts (not real-time critical)
- `(SELECT auth.uid())` in RLS policies -- PostgreSQL evaluates the subquery once and caches it; bare `auth.uid()` re-evaluates per row
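As an illustration of the last point, a tenant-isolation policy using the cached-subquery pattern might look like the sketch below. The `tenant_members` table, its columns, and the policy name are assumptions; only the `entities` table and the `(SELECT auth.uid())` pattern come from this document:

```sql
-- Hypothetical read policy: members may see only their tenant's rows.
-- Wrapping auth.uid() in a SELECT lets Postgres evaluate it once per
-- statement (as an init plan) instead of once per row.
create policy "tenant members can read entities"
  on entities for select
  using (
    tenant_id in (
      select tenant_id
      from tenant_members
      where user_id = (select auth.uid())
    )
  );
```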