Silence over speculation.
A system that guesses is a liability. A system that stays silent is infrastructure. Refusal is not a gap in what StandIn can do — it's the reason every other answer is worth trusting.
What it costs when a tool decides to guess.
A San Francisco engineer asks whether Sarah's API migration is complete. The AI skims her last Slack messages and replies, "Looks on track." The engineer deploys. The migration wasn't done. The build fails at 02:00 JST and pages half the team.
The engineer asks Sarah's Personal Representative the same question. It checks Sarah's published wrap. There's nothing about the migration finishing, so it says so, and names Dave as the handoff owner this week. The engineer waits, checks with Dave, and ships clean in the morning.
"Silence is not a failure mode. Confident wrong answers are."
Most AI is built to fill the silence.
Almost every assistant you've used is optimized to answer at all costs. If the data isn't there, it synthesizes. If intent is unclear, it infers. It would rather bluff than say, "I don't know."
That's a fine trade for trivia. It's a catastrophic trade for running a team across time zones. Every confident guess your coworkers accept becomes a decision they thought was grounded. It wasn't.
If an AI speculates that "Sarah is probably done" because she hasn't posted in four hours, and you act on that speculation, trust breaks down quietly. You ping Sarah. She wakes up. The cycle of interruption continues — only now, with a tool sitting in the middle of it.
Every question resolves one of three ways.
There is no fourth category for guessing, fudging, or rounding up. A sketch of the routing follows the list.
Answer
The data is explicit. A wrap was published, a ticket changed status, a commit landed — and the source is within the asker's scope.
Only when the source is explicit, published, and in scope. If any of those are missing, it won't answer.
Redirect
No one has written the answer, but the ownership is known. The Representative points you at the person listed in the record.
Redirection never exposes private context. It only surfaces an owner already named in a published wrap.
Refuse
The question asks for something the system is not allowed to produce. No source, no owner, no speculation — a clean decline.
Refusal is enforced at the system level. You can't rephrase, escalate, or prompt-engineer past it.
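To make the routing concrete, here is a minimal sketch of how a resolution like this might be structured. The names (PublishedRecord, Resolution, resolve) are hypothetical and this is not StandIn's actual API; the shape of the decision is the point.

```typescript
// Hypothetical sketch of the three-outcome resolution. Types and names are
// illustrative only, not StandIn's real interfaces.

interface PublishedRecord {
  content: string;      // a published wrap, a ticket status change, a commit
  owner?: string;       // handoff owner named in the record, if any
  visibleTo: string[];  // who is allowed to read this record
}

type Resolution =
  | { kind: "answer"; source: PublishedRecord }  // explicit, published, in scope
  | { kind: "redirect"; owner: string }          // no answer written, but an owner is named
  | { kind: "refuse" };                          // no source, no owner, no speculation

function resolve(
  askerId: string,
  records: PublishedRecord[],
  answersQuestion: (r: PublishedRecord) => boolean,
): Resolution {
  // Only sources inside the asker's scope are ever considered.
  const inScope = records.filter(r => r.visibleTo.includes(askerId));

  // Answer: the data is explicit, published, and in scope.
  const source = inScope.find(answersQuestion);
  if (source) return { kind: "answer", source };

  // Redirect: no one wrote the answer, but a published record names an owner.
  const owned = inScope.find(r => r.owner !== undefined);
  if (owned && owned.owner) return { kind: "redirect", owner: owned.owner };

  // Refuse: nothing explicit, no owner on record, so a clean decline.
  return { kind: "refuse" };
}
```

The order matters: an explicit source wins, a named owner comes next, and everything else falls through to a refusal rather than a guess.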
Three questions it will never answer.
Some refusals aren't about missing data. They're about questions the system was designed to reject, no matter what's in the record. A sketch of that check follows the list.
Surveillance queries
Anything that treats StandIn as a way to watch people instead of read their work.
Private channels
Direct messages, personal Slack threads, private notes — they aren't in scope for any Representative, ever.
Inner state
Mood, motivation, sentiment, or what someone meant versus what they wrote.
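Here is one way those categorical refusals might sit in front of everything else. The category names match the list above; the keyword heuristics and function names are purely illustrative, since the real classification isn't described here.

```typescript
// Hypothetical sketch: categorical refusals run before any record lookup,
// so no amount of in-scope data changes the outcome. Patterns are illustrative.

type RejectedCategory = "surveillance" | "private-channel" | "inner-state";

const REJECTION_RULES: Array<{ category: RejectedCategory; pattern: RegExp }> = [
  { category: "surveillance",    pattern: /\b(online|active|last seen|working right now)\b/i },
  { category: "private-channel", pattern: /\b(dm|direct message|private (channel|note|thread))\b/i },
  { category: "inner-state",     pattern: /\b(mood|feeling|motivated|sentiment|really meant)\b/i },
];

// Returns the category that forces a refusal, or null if the question may
// proceed to the answer / redirect / refuse resolution.
function categoricalRefusal(question: string): RejectedCategory | null {
  for (const rule of REJECTION_RULES) {
    if (rule.pattern.test(question)) return rule.category;
  }
  return null;
}
```

Because a check like this runs before any lookup, adding more data to the record never changes the outcome.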
Every Representative inherits the same silence.
Personal, Team, and Project Representatives all route to the same three outcomes. A Personal Rep that would rather guess than admit it doesn't know would be pretending to be its owner — a betrayal of the person and their team.
So the refusal lives below the Representatives. It's part of the engine itself. When a Rep says "I don't know," that's the architecture, not a failed call to a model.
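One way to picture that architecture: a single engine owns the resolution, and every Representative type delegates to it. This is a minimal sketch with illustrative names only, not StandIn's real components.

```typescript
// Hypothetical sketch: one shared engine decides the outcome, below the Reps.

type Outcome = "answer" | "redirect" | "refuse";

class RefusalEngine {
  resolve(question: string, publishedScope: string[]): Outcome {
    // Minimal stand-in for the full answer/redirect/refuse resolution:
    // if nothing published answers the question, decline.
    return publishedScope.some(record => record.includes(question)) ? "answer" : "refuse";
  }
}

abstract class Representative {
  constructor(private readonly engine: RefusalEngine) {}

  // Every Representative type answers through the same engine; in this sketch
  // none of them carries its own resolution logic, so none can guess.
  answer(question: string, publishedScope: string[]): Outcome {
    return this.engine.resolve(question, publishedScope);
  }
}

class PersonalRepresentative extends Representative {}
class TeamRepresentative extends Representative {}
class ProjectRepresentative extends Representative {}
```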
In a distributed team, a system that would rather be quiet than wrong is the difference between trust and more meetings to fix its mistakes.
You can't train an ethical model if its access is unlimited.
StandIn's refusals aren't a prompt asking the model to behave. They're scope rules at the data layer — what goes in is what can ever come out. See how that's structured.
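Here is a sketch of what scope rules at the data layer could look like, under the assumption that only published artifacts are ever ingested into the store a Representative reads from. All names (PublishedArtifact, RepReadableStore, ingest) are illustrative, not StandIn's actual schema.

```typescript
// Hypothetical sketch of scope enforcement at the data layer, not the prompt.
// The store a Representative reads from only ever receives published artifacts.

type SourceKind = "wrap" | "ticket-status" | "commit";  // published, shareable sources

interface PublishedArtifact {
  kind: SourceKind;
  author: string;
  visibleTo: string[];  // explicit audience; defines whom a Rep may answer
  body: string;
}

class RepReadableStore {
  private artifacts: PublishedArtifact[] = [];

  // The only write path. DMs, private notes, and sentiment have no shape that
  // fits here, so they can never come back out of a Representative either.
  ingest(artifact: PublishedArtifact): void {
    this.artifacts.push(artifact);
  }

  // Reads are narrowed to the asker at the data layer, not by prompt instructions.
  readableBy(askerId: string): PublishedArtifact[] {
    return this.artifacts.filter(a => a.visibleTo.includes(askerId));
  }
}
```

In a structure like this, what the model never holds, it can never leak; the refusal is a property of the data layer, not of how politely the model was asked.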
See how it's built
Refusal is the feature.
If that sounds like the posture your team actually needs, the rest of the product follows from it.