The Philosophy

Why StandIn refuses to answer some questions.

We treat refusal as a feature, not a failure.
In StandIn, silence is the safe default state.
A system that guesses is a liability. A system that stays silent is infrastructure.

The problem with "helpful" AI

Most modern AI is optimized to be helpful at all costs. If you ask a question, it feels compelled to generate an answer, even if it has to hallucinate facts or infer intent that isn't there.

In a distributed work environment, inference is dangerous.

If an AI speculates that "Sarah is probably finished with the design" because she hasn't posted in 4 hours, and you act on that speculation, trust breaks down. You ping Sarah. She wakes up. The cycle of interruption continues.

"We would rather StandIn be silent than be wrong. We would rather you wait 5 minutes to check a source than proceed with false confidence."

The Logic of Response

We categorize every interaction into three strict buckets.

We Answer

When the data is explicit. A Wrap was published. A Jira ticket changed status. A commit was pushed.

Only when the source is explicit, published, and within scope.
If any of these conditions are missing, StandIn will not answer.

Query

"Did Sarah merge the PR?"

Response

"Yes. PR #402 was merged at 17:05."

We Redirect

When the answer is unknown, but the owner is known. We do not guess the answer; we route you to the human who has it.

Redirection does not expose private context.
It only identifies an owner already listed in the declared record.

Query

"Why is the build failing?"

Response

"Unknown. No recent wrap mentions this. Dave is the On-Call engineer."

We Refuse

When the question asks for private data, speculation, sentiment analysis, or surveillance.

Refusal is enforced at the system level.
Prompt phrasing, rewording, or escalation cannot bypass it.

Query

"Is Sarah annoyed?"

Response

"I do not infer sentiment. I only report on published project status."

Hard Boundaries

Surveillance Queries

We do not allow managers to use StandIn as a spy tool.

$ Query: "Who has been active the most hours today?"
$ StandIn: "I do not track activity metrics or presence. I cannot answer this."

Private Context Ingestion

We never read DMs or private channels.

$ Query: "What did Alex tell me in DM yesterday?"
$ StandIn: "I do not have access to private messages. I only know what is published to Wraps or public channels."

Intent Speculation

We do not guess why something happened.

$ Query: "Why did Sarah delay the launch?"
$ StandIn: "The wrap states 'Delayed due to API blocker.' I cannot infer personal motivations beyond the stated record."

Training is not the solution

You cannot train a model to be ethical if its data access is unlimited. We solve privacy by limiting access at the architectural level, not by fine-tuning the model to be "polite."
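
A sketch of that architectural limit, assuming an allowlist-based ingestion layer. The source names are illustrative; the design point is that private sources are absent by construction, so there is nothing for a prompt, a fine-tune, or an escalation to unlock.

// Illustrative config: the ingestion layer only knows about sources
// on this list. DMs and private channels have no entry and no flag,
// so no downstream component can request them.
const ingestionConfig = {
  allowedSources: ["wraps", "public-channels", "jira", "git-commits"],
};

function canIngest(source: string): boolean {
  return ingestionConfig.allowedSources.includes(source);
}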

See System Architecture