Engineering
How we build public-facing AI. Architecture decisions, trust boundaries, risk scoring, and the principles behind the code.
You Can't Trust What You Can't Trace
Knowledge sources passed as document blocks return citations tied to the specific text they support. The portal renders cited claims with inline source badges, so owners can see exactly where every answer came from.
Risk Scoring for Public-Facing AI: Eight Dimensions, Compound Scores, Hard Stops
A scoring engine that evaluates AI deployments on eight independent risk dimensions, combines them with a weighted geometric mean, and hard-stops dangerous combinations. Runs inside MCP write tools so every configuration change is evaluated before it ships.
Progressive Disclosure as Data Labeling: A Different Kind of AI Safety Loop
When a configuration change shifts a deployment's risk profile, the trust layer doesn't block -- it generates a contextual interface that explains what changed, captures the owner's response as labeling signal, and applies the change with the guardrails they just configured.
Why We Don't Sanitize User Messages in Our AI Agent
The correct boundary for prompt injection defense is between system content and user content -- not between safe and unsafe words. Here's why regex filters on user input do more harm than good.
How Customer-Facing AI Agents Determine Trust
Phone numbers, portal sessions, and API keys -- three ways an AI agent decides who it's talking to and what actions it can take. The architecture behind trust boundaries.
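The three identity channels above resolve to different capability sets. A minimal sketch of that mapping, with hypothetical channel names and action labels (the real schema will differ):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Caller:
    channel: str   # "phone" | "portal_session" | "api_key" (illustrative)
    verified: bool

# Hypothetical capability tiers per (channel, verified) pair.
CAPABILITIES = {
    # Weakest signal: caller ID can be spoofed, so unverified
    # phone callers get read-only public access.
    ("phone", False): {"read_public"},
    ("phone", True): {"read_public", "read_account"},
    # A live portal session proves an authenticated login.
    ("portal_session", True): {"read_public", "read_account", "update_account"},
    # API keys are scoped credentials issued for programmatic writes.
    ("api_key", True): {"read_public", "read_account",
                        "update_account", "admin_write"},
}

def allowed_actions(caller: Caller) -> set[str]:
    """Map how the agent identified the caller to what it may do.

    Anything unrecognized falls back to the least-privileged tier.
    """
    return CAPABILITIES.get((caller.channel, caller.verified), {"read_public"})
```

The design choice worth noting: trust is attached to the channel that established identity, not to any claim inside the conversation, so the model can never talk its way into a higher tier.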
Efficiency Is a Design Decision, Not an Optimization
The biggest efficiency gains in AI systems aren't in the code. They're in what you decide not to build. Every scaffolding layer costs energy, money, and human effort. Less machine is the goal.
Trust the Model, Save the Energy
Every layer between the user and the model costs energy. Sierra, Decagon, and Intercom build scaffolding because they don't trust the model. The scaffolding depreciates every 2 months. The model improves.

