The failure mode of "helpful" AI
Most AI is optimized to answer at all costs. If the data doesn't exist, it invents. If intent is unclear, it infers.
If an AI speculates that "Sarah is probably done" because she hasn't posted in 4 hours, and you act on that speculation, trust breaks down. You ping Sarah. She wakes up. The cycle of interruption continues.
How it decides what to say
Every question falls into one of three categories.
Answer
When the data is explicit. A wrap was published. A Jira ticket changed status. A commit was pushed.
Redirect
When the answer belongs to a person, not the data. Intent, feelings, and plans live with people, so StandIn points you to them instead of guessing.
Refuse
When the data doesn't exist. No record means no answer: StandIn will not invent one, and it will not infer one.
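The three-way decision above can be sketched as a simple routing function. This is an illustrative sketch, not StandIn's actual implementation; the flag names and the `Response` type are assumptions made for the example.

```python
from enum import Enum


class Response(Enum):
    ANSWER = "answer"
    REDIRECT = "redirect"
    REFUSE = "refuse"


def route(has_explicit_data: bool, belongs_to_person: bool) -> Response:
    # Explicit data (a published wrap, a ticket status change,
    # a pushed commit): answer directly from the record.
    if has_explicit_data:
        return Response.ANSWER
    # No record, but a human owns the answer (intent, plans,
    # availability): point the asker to that person.
    if belongs_to_person:
        return Response.REDIRECT
    # Anything else would require invention or inference: refuse.
    return Response.REFUSE
```

The key property is the fall-through: refusal is the default, and answering requires an affirmative condition rather than the absence of one.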
Hard Boundaries
Surveillance Queries
StandIn cannot be used as a monitoring tool.
In a distributed work environment, inference is dangerous.
You cannot train a model to be ethical if its data access is unlimited.
Refusal is the feature.
See how it's built