Privacy First AI.

Sociail is built for shared AI work — without invisible memory, hidden actions, or uncontrolled automation.

Read the AIX backbone
02 · The trust ledger

What Sociail stores —
and what it doesn't.

What we store

Sociail does store

  • Your room content: messages, artifacts, decisions inside rooms you own.
  • Your AI prompts and responses: so you can scroll back, audit, and reuse.
  • Receipts on AI actions: what happened, when, under whose authority.
  • Account & workspace info: name, email, billing — what's needed to run the service.
  • Browser Control approvals: which page, which action, when you said yes.

What we don't

Sociail does not

  • Train external models on your content: provider contracts prohibit it. Your work doesn't leak into someone else's foundation model.
  • Sell your data: to anyone, ever. There's no business model behind us doing this.
  • Mix data across workspaces: one team's context never informs another's AI.
  • Run AI in the background unprompted: if it's consequential, it's an approval card — not a silent action.
  • Read browser pages you didn't approve: the extension acts on the page you said yes to. Nothing else.

03 · How memory works

Memory is room-scoped by default.

AI teammates know the room they're in — its messages, its artifacts, the decisions that happened there. They don't automatically know other rooms, even in the same workspace. They never know other workspaces.

If you want a teammate that spans rooms — for example, a research assistant working across multiple project rooms — you grant that broader access explicitly. Broader access is visible, not implicit. You'll see what each teammate can see in the teammate's settings.

There's no hidden “AI memory” silently collecting context outside the rooms it's in. If we're remembering something, you can find where it came from.
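Room-scoped memory amounts to an access check before any context is assembled. A minimal sketch, assuming hypothetical names — `Room`, `Teammate`, `grants` are illustrative, not Sociail's actual API:

```typescript
// Sketch: a teammate sees its home room plus explicitly granted rooms
// in the same workspace -- never another workspace. (Illustrative types.)
interface Room { id: string; workspaceId: string; messages: string[] }
interface Teammate { id: string; homeRoomId: string; grants: Set<string> }

function visibleRooms(teammate: Teammate, rooms: Room[]): Room[] {
  const home = rooms.find(r => r.id === teammate.homeRoomId);
  if (!home) return [];
  return rooms.filter(r =>
    r.workspaceId === home.workspaceId &&            // never cross workspaces
    (r.id === home.id || teammate.grants.has(r.id))  // home room or explicit grant
  );
}
```

In this sketch, broader access is literally an entry in `grants` — which is exactly why it can be shown in a teammate's settings rather than existing as implicit state.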

04 · How approvals work

Every consequential action waits for a human.

AI proposes; you approve, edit, or reject. The approval card is visible in the room — the people doing the work see it, the decision is recorded, the AI cannot route around it.

Approval requirements are configurable per teammate, per room, per category of action. Low-stakes actions can run without approval if you choose. Consequential actions cannot.

We don't think of approvals as friction. We think of them as the contract that makes AI as a teammate possible.
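The contract described above can be sketched as a gate. The category names and policy shape below are assumptions for illustration, not Sociail's schema:

```typescript
// Sketch: per-teammate, per-room approval policy. Low-stakes actions may be
// auto-approved by configuration; consequential actions always wait for a human.
type Category = "low-stakes" | "consequential";
interface Policy { autoApprove: Set<Category> }
interface Decision { run: boolean; needsApprovalCard: boolean }

function gate(category: Category, policy: Policy): Decision {
  if (category === "consequential") {
    // Not configurable: consequential actions always produce an approval card,
    // regardless of what the policy says.
    return { run: false, needsApprovalCard: true };
  }
  const auto = policy.autoApprove.has(category);
  return { run: auto, needsApprovalCard: !auto };
}
```

The point of structuring it this way is that "consequential actions cannot run unapproved" is a branch the configuration never reaches, not a default someone can flip.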

See Trust & Approvals
05 · How Browser Control is bound

The most concrete trust commitment in the product.

Browser Control is bounded by three rules — together, they're what makes a browser-acting AI teammate something you can actually trust:

1. User-enabled. The extension is installed and turned on by you. We don't silently activate.
2. Room-scoped. Browser Control activates from a specific Sociail room — not from the browser tab.
3. Per-action approved. Every action gets its own approval card. Every time. There is no “approve once, run forever” mode.

Sociail does not read browser pages in the background. Sociail does not use the extension to monitor your browsing. The extension only acts when you've approved a specific action — and you can see exactly what it did, after.
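The three rules compose into a single check that must pass for every action. A sketch under assumed names — `ControlRequest` and its fields are not the extension's real interface:

```typescript
// Sketch: a browser action runs only if all three bindings hold --
// extension enabled by the user, request originating from a Sociail room,
// and a fresh approval for this specific action (no blanket approvals).
interface BrowserAction { id: string; url: string; description: string }
interface ControlRequest {
  extensionEnabled: boolean;       // rule 1: user-enabled
  originRoomId: string | null;     // rule 2: room-scoped
  approvedActionId: string | null; // rule 3: per-action approved
  action: BrowserAction;
}

function mayRun(req: ControlRequest): boolean {
  return (
    req.extensionEnabled &&
    req.originRoomId !== null &&
    req.approvedActionId === req.action.id // approval is bound to this action's id
  );
}
```

Binding the approval to a single action id is what rules out an "approve once, run forever" mode: a previous approval simply fails the equality check for the next action.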

See Browser Control
06 · How AI providers are used

We don't pretend the AI runs in our basement.

Sociail uses third-party AI model providers — like Anthropic — to power AI teammates. We're transparent about this because anyone reading carefully would figure it out anyway.

When you direct a prompt to an AI teammate:

1. The necessary inputs are sent to the provider for that request, and only that request.
2. The provider processes the request and returns a response.
3. The provider is contractually prohibited from using your content to train their general models.

We don't send more than the request needs.

We don't pretend providers see nothing. We do make sure providers can't keep what they see — for their own models, or anyone else's.
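In practice, "only what the request needs" means the payload for each call is assembled fresh and scoped to the originating room. A minimal sketch with illustrative field names (this is not a real provider payload format):

```typescript
// Sketch: build the minimal payload for one provider call. Context is
// filtered to the originating room; account data and unrelated history
// are never part of the request.
interface RoomMessage { roomId: string; text: string }

function buildProviderPayload(prompt: string, history: RoomMessage[], roomId: string) {
  const scopedContext = history
    .filter(m => m.roomId === roomId) // only this room's context
    .map(m => m.text);
  return { prompt, context: scopedContext };
}
```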

07 · How you delete, revoke, or correct

You stay in control of what AI knows.

Trust without an exit is a trap. The controls below are designed so you can change your mind — at any layer — without having to ask permission.

  • Delete a room: its content is removed from AI memory. Teammates that operated in that room lose the context. Receipts of past actions remain in audit, scrubbed of room content.
  • Delete an artifact: it's removed. Subsequent AI runs no longer have it as context.
  • Revoke a teammate: the teammate is gone. Action history stays for audit. The teammate cannot run again unless re-invited.
  • Correct a receipt: edit or annotate. Corrections are tracked alongside the original — no rewriting history.
  • Export your data: request via workspace settings (or through Help during Early Access). You get your content in a portable format.
  • Delete your workspace: all content removed within a reasonable period, except where retention is required by law.
08 · Early Access limitations

Honest about what's still being built.

Heads up

A few trust controls aren't fully self-serve yet.

We'd rather tell you now than have your team discover the gap. If any of these are dealbreakers for you today, ask us directly — or wait until they ship.

  • Self-serve data export is in development. For now, request it through Help and we'll deliver it manually.
  • Provider list in workspace settings is finalized but not yet displayed in-product. Ask if you need the current list before signing up.
  • Per-action audit logs are being expanded. Current logs are room-scoped and visible in the room itself.
  • Per-room retention controls are on the roadmap. Workspace-level retention is in place; granular controls are coming.


/backbone · The technical foundation

Trust grammar that's enforced, not promised.

Everything on this page is operational because the architecture beneath it makes it operational. Eight layers — SOUL, Cortex, Context Firewall, Policy & Grants, Memory & Data Fabric, Tool Broker, Receipts, Work Graph — that turn each commitment above into something the system can verify, not just something the marketing page claims.

Try it
AI you can actually bring into the work.

Sociail is in invite-only Early Access. We're working with small teams ready for AI as a teammate — with the trust grammar in place from day one.

Read the Privacy Policy