Verticla

Each realtor runs in a private agent container with its own row-level-security scope in the database. Buyer PII is redacted before any data leaves the container for embedding generation on the platform's local data layer. Anthropic Claude Haiku, the agent reasoning model, receives only the structured prompts a skill explicitly assembles, never raw buyer financial information. Every tool call is recorded in agent_audit_log, and the realtor can export the full trace on request. The architecture is designed so the realtor owns their data, their learned weights, and their outreach voice, and so cancelling a subscription doesn't change that.

TL;DR

  • Tenant isolation: per-tenant container, per-tenant Supabase row-level-security scope
  • PII handling: redacted before any cross-host call (including the local data layer)
  • Audit trail: every tool call recorded in agent_audit_log, exportable
  • Data ownership: realtor owns the learning loop, the trace data, the voice fine-tune
  • Data export: full export in standard formats (CSV, JSON, PDF) on request
  • Data deletion: 90 days after account closure unless extended

How is one realtor's data isolated from another's?

Three layers of isolation, each enforced independently:

  1. Container-level: each realtor runs in a separate agent container provisioned by the platform's container scheduler. Containers do not share volumes, network namespaces, or process trees.
  2. Database-level: every domain table in Supabase has a tenant_id foreign key and a row-level-security policy that filters on auth.jwt()->>'tenant_id'. Even with database credentials, no query can read across tenants — the database enforces the filter at the row level, not the application.
  3. Tool-call level: every MCP tool invocation goes through requireTenantId() middleware that pulls the tenant claim from the JWT and refuses calls without it. Tools cannot accidentally make cross-tenant calls.

A failure in any one layer is caught by the others.
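The tool-call layer can be sketched as a small guard, assuming an MCP-style call object that carries a decoded JWT with a tenant_id claim. Everything except the requireTenantId name is illustrative:

```typescript
// Illustrative shape of a tool invocation after JWT verification.
type ToolCall = {
  tool: string;
  params: Record<string, unknown>;
  jwt: { tenant_id?: string };
};

// Pulls the tenant claim from the JWT and refuses calls without it,
// so a tool can never run unscoped.
function requireTenantId(call: ToolCall): string {
  const tenantId = call.jwt.tenant_id;
  if (!tenantId) {
    // Refuse outright rather than falling back to any default tenant.
    throw new Error(`tool ${call.tool} rejected: missing tenant_id claim`);
  }
  return tenantId;
}
```

Every downstream query then filters on the returned tenant ID, and the database's row-level-security policy re-checks the same claim independently.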

What happens to PII before it leaves the realtor's container?

The platform's pipeline-kit ships a PII redactor that runs on every payload going to either the agent reasoning model (Anthropic Claude Haiku) or the local data layer (gemma4-realty:v1 running on a private host). The redactor strips Social Security numbers, full credit card numbers, full bank account numbers, and any patterns on the vertical's configured redaction list.

Names and contact info are not redacted — they're operational data the realtor needs in drafted outreach. But financial PII never leaves the container in raw form. When the agent needs to reason about a buyer's budget, it receives a normalized range, not the underlying financial document.
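A minimal sketch of the redaction step, assuming regex-based rules. The patterns, rule names, and the budgetBand helper are illustrative, not pipeline-kit's actual implementation:

```typescript
// Illustrative redaction rules; a real list would be configured per vertical.
const REDACTION_RULES: Array<{ name: string; pattern: RegExp }> = [
  { name: "SSN", pattern: /\b\d{3}-\d{2}-\d{4}\b/g },
  { name: "CARD", pattern: /\b(?:\d[ -]?){13,16}\b/g },
  { name: "BANK_ACCT", pattern: /\b\d{8,12}\b/g },
];

// Replaces each matched pattern with a labeled placeholder before the
// payload leaves the container.
function redactPII(payload: string): string {
  let out = payload;
  for (const rule of REDACTION_RULES) {
    out = out.replace(rule.pattern, `[REDACTED-${rule.name}]`);
  }
  return out;
}

// The agent never sees a raw figure; it gets a coarse band instead.
function budgetBand(amount: number, width = 50_000): string {
  const lo = Math.floor(amount / width) * width;
  return `$${lo / 1000}k-$${(lo + width) / 1000}k`;
}
```

So a buyer with a $437,500 pre-approval reaches the reasoning model only as the band "$400k-$450k".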

Which third parties see the realtor's data?

| Third party | What they see | Why |
| --- | --- | --- |
| Anthropic (Claude Haiku) | Structured prompts assembled by skills — buyer first names, listing addresses, criteria, normalized budget bands. Never raw PII or financial documents. | Agent reasoning |
| Mazopen-2 (gemma4 host) | Embedding inputs and extraction targets — listing text, anonymized criteria. PII redacted before send. | Data layer (embeddings, classification, extraction); runs on the platform's private infrastructure, not a third-party API |
| Vercel (frontend hosting) | Page renders, no agent data. The dashboard fetches from Supabase via authenticated server components; Vercel doesn't store the data. | Frontend |
| Supabase (data layer) | All structured data, encrypted at rest, RLS-enforced | Database |
| Resend (transactional email) | Outbound email metadata when the realtor sends platform-generated outreach via the email channel | Email delivery |
| Stripe (billing, post-pilot) | Payment information only; no operational data | Billing |

No data is shared with advertising networks, analytics vendors, or any party not on this list. The list is the list.

What's in the audit trail?

agent_audit_log records every tool call with:

  • Tenant ID
  • Tool name and parameters (PII-redacted in logs)
  • Skill that initiated the call
  • Timestamp
  • Result hash and outcome
  • Cost (Anthropic tokens used or gemma4 request count)
  • Error or stop-point trigger if any
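One plausible shape for a row, assuming the fields listed above. Every column name other than agent_audit_log itself is an assumption, and resultHash stands in for a real content hash:

```typescript
// Hypothetical row shape mirroring the fields the audit trail records.
interface AuditLogRow {
  tenant_id: string;
  tool_name: string;
  params_redacted: Record<string, unknown>; // PII-redacted before logging
  skill: string;            // skill that initiated the call
  called_at: string;        // ISO-8601 timestamp
  result_hash: string;
  outcome: "ok" | "error" | "stopped";
  cost: { anthropic_tokens?: number; gemma4_requests?: number };
  stop_trigger?: string;    // error or stop-point trigger, if any
}

// Toy stand-in for a real content hash (e.g. SHA-256 in production).
function resultHash(result: unknown): string {
  const s = JSON.stringify(result);
  let h = 0;
  for (let i = 0; i < s.length; i++) h = (h * 31 + s.charCodeAt(i)) >>> 0;
  return h.toString(16);
}

// Fills in the timestamp and result hash at write time.
function buildAuditRow(
  base: Omit<AuditLogRow, "called_at" | "result_hash">,
  result: unknown,
): AuditLogRow {
  return { ...base, called_at: new Date().toISOString(), result_hash: resultHash(result) };
}
```

Hashing the result rather than storing it keeps the trail verifiable without duplicating payloads into the log.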

The trail is exportable on request and is never deleted while the account is active. After account closure, the trail follows the same 90-day deletion policy as the rest of the realtor's data.

How does the cost-guard prevent runaway agent behavior?

Two independent circuit breakers run on every tool call:

  • Anthropic guard: per-tenant daily dollar cap and per-minute tool-call rate limit. Trips on the platform_cost_ledger threshold and kills the active session before further work runs.
  • Gemma4 guard: per-tenant requests-per-minute cap (the constraint is GPU contention on the local data layer host, not dollar cost). Trips on gemma4_usage_ledger threshold.

Both circuit breakers write a structured incident row when they fire. The realtor's dashboard surfaces the incident immediately. McKinsey's trust scaffolding principle applies: clear review points for higher-risk actions, concise summaries of what the system did and what it touched.
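The Anthropic-side breaker can be sketched as follows, with an in-memory map standing in for platform_cost_ledger; the gemma4 guard is the same shape with a requests-per-minute counter instead of dollars. Class and method names are illustrative:

```typescript
// Sketch of a per-tenant daily-dollar circuit breaker. A real ledger would
// live in the database (platform_cost_ledger) and reset on a daily boundary.
class CostGuard {
  private ledger = new Map<string, number>(); // tenant_id -> dollars spent today

  constructor(private dailyCapUsd: number) {}

  // Record spend attributed to a tenant's session.
  record(tenantId: string, usd: number): void {
    this.ledger.set(tenantId, (this.ledger.get(tenantId) ?? 0) + usd);
  }

  // True once the cap is reached; the caller kills the active session
  // and writes a structured incident row.
  tripped(tenantId: string): boolean {
    return (this.ledger.get(tenantId) ?? 0) >= this.dailyCapUsd;
  }
}
```

Because the cap is per tenant, one realtor's runaway session can never consume another tenant's budget or GPU time.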

What if the realtor wants to leave?

The realtor can export their data on request — match history, outcome verdicts, scoring weights, voice fine-tune samples, contacts, deals, audit log — in standard formats (CSV and JSON, with PDF for the audit log). Export is available at any time during the subscription and runs automatically 30 days before any account closure.

The platform deletes all per-tenant data 90 days after account closure unless the realtor extends the retention window. Aggregate, anonymized model evaluation metrics may persist longer for platform-level model improvement but are never tied to the individual realtor.

McKinsey's strategic frame applies here verbatim: the realtor owns the learning loop, the trace data, and the voice fine-tune. Cancelling a subscription doesn't change that.

How does the platform handle a security incident?

A security incident is defined as any unauthorized access to a tenant's data, any cross-tenant data exposure, any breach of the third-party services in the table above, or any compromise of the platform's signing keys.

Response procedure:

  1. Affected realtors notified within 72 hours, with a description of what was exposed and what remediation is in progress
  2. Audit log of the incident's blast radius published to affected realtors within 7 days
  3. Public post-mortem published within 30 days with root cause and prevention plan

No security incidents have been recorded as of the most recent revision of this page. Updates will be posted to a /status page once the platform reaches general availability.

Is the platform SOC 2 compliant?

SOC 2 Type 1 audit is targeted within 12 months of general availability. Type 2 follows after 12 months of operating-period evidence. During the pilot phase, the platform operates to SOC 2-aligned controls (RLS, audit trail, encrypted storage, access controls) but is not formally audited.

Realtors handling regulated data (mortgage information, regulated investment advice) should treat the pilot as appropriate for non-regulated buyer-pipeline workflows only.

Where can a security researcher report a vulnerability?

A coordinated-disclosure email channel and PGP key will be published once the platform reaches general availability. Pilot-phase reports go through the realtor's onboarding contact.

Start a pilot

Free 14-day pilot. Full per-tenant container architecture from day one. Request access — no credit card.