
Thoughts on slowing the fuck down

Simon Willison's Weblog

Mar 25, 2026


Cap AI-Generated Code Deployment by Team Review Capacity to Preserve Thoroughness


Science, Technology & Innovation · Mar 25, 2026

Cap deployment of AI-generated code at the team's actual review and reasoning capacity: use explicit throttling and deliberate pauses so that speed does not outpace understanding, and reframe slowdowns as productivity safeguards rather than lost velocity.
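The post does not prescribe an implementation, but the throttling idea above can be sketched as a simple capacity gate. Everything here (the `ReviewThrottle` class, its method names, and the capacity numbers) is a hypothetical illustration, not anything from the article: admit a new AI-generated change only while the number of unreviewed changes in flight stays below what the team can review thoroughly.

```python
# Hypothetical sketch: gate deployment of AI-generated changes behind a
# reviewer-capacity budget, so merges cannot outpace human review.
from dataclasses import dataclass


@dataclass
class ReviewThrottle:
    """Allow at most `capacity` unreviewed AI-generated changes in flight."""
    capacity: int       # changes the team can review thoroughly at once
    in_flight: int = 0  # changes deployed but not yet reviewed

    def try_deploy(self) -> bool:
        """Admit a change only if review capacity remains; otherwise pause."""
        if self.in_flight < self.capacity:
            self.in_flight += 1
            return True
        return False  # deliberate pause: speed waits for understanding

    def review_done(self) -> None:
        """A reviewer finished a change; free one slot."""
        if self.in_flight > 0:
            self.in_flight -= 1


throttle = ReviewThrottle(capacity=2)
results = [throttle.try_deploy() for _ in range(3)]  # third attempt is throttled
throttle.review_done()                               # a review completes
results.append(throttle.try_deploy())                # slot freed, admitted again
print(results)  # → [True, True, False, True]
```

The point of the sketch is the inversion the summary describes: the limit is set by review capacity, not by how fast the agent can generate code.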



Excessive Delegation to Automated Agents Undermines Human Agency and System Understanding


The main risk is loss of human agency when teams delegate decision-making to agents for the sake of speed. The result is epistemic failure: slower debugging, weaker architecture, and poorer strategic decisions. Governance should therefore preserve decision rights and system comprehension rather than only accelerating delivery with autonomous tooling.



Humans Should Control Architecture and APIs While Low-Risk Code Is Automated


Keep system-defining decisions (architecture, APIs, and other gestalt-defining elements) under deliberate human authorship rather than delegating them to automated agents, because foundational mistakes propagate widely. Separate low-risk generated code from high-consequence design choices.



Agent-Assisted Software Development Shifts the Failure Mode from Individual Mistakes to Systemic Complexity as Review Capacity and Architectural Understanding Become the Constraint


Agent-assisted development can transform isolated human errors into compounded system-level failures: agent-driven code generation outpaces human review, letting many small defects accumulate and interact until the codebase becomes hard to reason about. Throughput looks good while review capacity and architectural understanding become the real constraints.