Inside the quiet engine that stands in.
No magical inference. No ambient surveillance. Just structured context that people publish on purpose — and a small set of rules about what to do with it.
Declared, not observed
Most productivity AI tries to reverse-engineer context by watching everything: keystrokes, DMs, screen time, mouse movement. That creates noise, anxiety, and a steady background rate of hallucination.
StandIn refuses that bargain. It only knows what someone deliberately publishes in a wrap, plus what a linked system of record like Jira or GitHub already contains.
Observer model
Ingests everything. Guesses intent. High noise, low trust.
StandIn model
Pulls work from connected tools into a draft. The author reviews and approves before it publishes. Nothing enters the record without a signature. Zero guessing.
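The draft-then-sign flow above can be sketched as a tiny state machine. This is a minimal sketch, not StandIn's actual API: the `Wrap` class, `draft_from_tools`, and `publish` names are illustrative assumptions.

```python
from dataclasses import dataclass


@dataclass
class Wrap:
    """A draft assembled from connected tools; inert until the author signs it."""
    author: str
    body: str
    published: bool = False


def draft_from_tools(author: str, items: list[str]) -> Wrap:
    # Pulls happen automatically, but the result is only ever a draft.
    return Wrap(author=author, body="\n".join(items))


def publish(wrap: Wrap, signer: str) -> Wrap:
    # Nothing enters the record without the author's own signature.
    if signer != wrap.author:
        raise PermissionError("only the author can sign their wrap")
    wrap.published = True
    return wrap
```

The design choice the sketch encodes is the important part: the automated pull and the human signature are separate steps, and only the second one writes to the record.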
Infrastructure, not a destination
A standalone app is just another silo to check. StandIn is built as a layer that sits underneath the tools your team already lives in.
Where your team already is
No separate app, no second inbox. StandIn lives in Slack, in your code host, in your ticket tracker. The surfaces you already check are the surfaces it answers on.
One canonical record
Drafts pull from Jira, Linear, GitHub, and your calendars. The published wrap becomes the source of truth — not a duplicate to maintain, but the record teammates get quoted back when they ask.
Three representatives, three jobs
One model for everything is how you get confident nonsense. StandIn runs scoped representatives, each answering only for what it actually knows.
Personal representative
Built from one person's published wraps and the tickets they own. Answers questions the way they would — or stays quiet.
Project representative
Scoped to a squad, service, or initiative. Pulls from every wrap on that surface and cites each contributor individually.
Routing representative
Sits above the others. Decides which representative should take a question, or points to a human when no rep has coverage.
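Under these scopes, routing reduces to a coverage check followed by a human fallback. A minimal sketch, where the `scope` field and the rep names are assumptions for illustration, not StandIn's data model:

```python
from dataclasses import dataclass


@dataclass
class Representative:
    name: str
    scope: frozenset[str]  # surfaces this rep has published wraps for


def route(topic: str, reps: list[Representative]) -> str:
    """Hand the question to the first rep with coverage; otherwise name a human."""
    for rep in reps:
        if topic in rep.scope:
            return rep.name
    return "ask-a-human"  # no coverage: never guess, point to a person instead
```

The fallback branch is what makes the three-representative split safe: a question outside every scope produces a referral, not a confident answer.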
The constraints are the product
Most AI adoption stalls on privacy. We didn't fix that with a policy page. We fixed it by making the tool incapable of doing the things people worry about.
Context has a half-life
Wraps decay. A note from three weeks ago carries less weight than one from this morning, so stale context doesn't masquerade as current.
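One way to implement that decay is an exponential half-life weight on each wrap. A sketch under assumptions: the seven-day constant and the `wrap_weight` function are illustrative, not StandIn's actual tuning.

```python
import math
from datetime import datetime, timedelta

HALF_LIFE_DAYS = 7.0  # assumed tuning constant: a wrap loses half its weight per week


def wrap_weight(published_at: datetime, now: datetime) -> float:
    """Exponential decay: weight halves every HALF_LIFE_DAYS since publication."""
    age_days = (now - published_at).total_seconds() / 86400.0
    return 0.5 ** (age_days / HALF_LIFE_DAYS)


now = datetime(2024, 6, 15)
fresh = wrap_weight(now - timedelta(hours=2), now)   # this morning: near full weight
stale = wrap_weight(now - timedelta(days=21), now)   # three weeks: three half-lives
```

With a seven-day half-life, the three-week-old note in the example above carries an eighth of a fresh note's weight, so it can inform an answer without masquerading as current.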
Your words stay yours
No wrap is sent to train a foundation model. Retrieval is in-context, per question. When the conversation ends, nothing persists in anyone's weights.
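The per-question shape is roughly: select relevant wraps, carry them along in the prompt, answer, and keep nothing. A sketch where the keyword match stands in for real retrieval and `ask_model` stands in for any stateless completion call — both are assumptions, not StandIn's implementation:

```python
from typing import Callable


def answer(question: str, wraps: list[str], ask_model: Callable[[str], str]) -> str:
    """Retrieve per question: wraps ride along in the prompt only, never in weights."""
    tokens = [t.strip("?.,!") for t in question.lower().split()]
    # Naive keyword overlap as a placeholder for a proper retrieval step.
    relevant = [w for w in wraps if any(t and t in w.lower() for t in tokens)]
    prompt = "Context:\n" + "\n".join(relevant) + f"\n\nQuestion: {question}"
    # No fine-tuning, no stored state: the context dies with this call.
    return ask_model(prompt)
```

Because retrieval happens inside each call, deleting a wrap removes it from every future answer; there is no trained artifact to unlearn.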
What changes when presence isn't the point
Decouple context from the person who created it and some things get quietly better.
A question at 2am in Berlin gets the same answer as a question at 3pm in San Francisco. It's the wrap answering, not the author.
A teammate logs off for a week and their last wrap keeps answering. No one checks Slack from the beach.
Every published wrap joins a longer record. New hires read what senior engineers read. Context ages well when it's written down.