Thoughts on slowing the fuck down · Simon Willison's Weblog
Science, Technology & Innovation · Mar 25, 2026
Cap deployment of AI-generated code at the team's actual review and reasoning capacity: use explicit throttling and deliberate pauses so that speed doesn't outpace understanding, and reframe those slowdowns as productivity safeguards rather than overhead.
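One way to make such a cap concrete is a merge-queue gate that admits no more AI-generated code into review than the team has declared capacity for in the current period. The sketch below is purely illustrative, not from the post; the `ReviewBudget` class, its line-count policy, and the numbers are all assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class ReviewBudget:
    """Hypothetical throttle: cap AI-generated lines admitted per review period.

    `capacity` is the team's declared review capacity in lines per period.
    Both the name and the policy are illustrative, not from the source post.
    """
    capacity: int                 # lines the team can genuinely review this period
    admitted: int = 0             # lines already admitted for review
    deferred: list = field(default_factory=list)

    def try_admit(self, change_id: str, lines: int) -> bool:
        """Admit a change only if it fits within the remaining review budget."""
        if self.admitted + lines > self.capacity:
            self.deferred.append(change_id)   # a deliberate pause, not a failure
            return False
        self.admitted += lines
        return True

budget = ReviewBudget(capacity=500)
print(budget.try_admit("agent-pr-1", 300))   # True: within capacity
print(budget.try_admit("agent-pr-2", 300))   # False: would exceed capacity
print(budget.deferred)                       # ['agent-pr-2']
```

Deferred changes aren't dropped; they simply wait for the next period's budget, which is the whole point: the queue makes the review bottleneck visible instead of letting unreviewed code pile up silently.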
The main risk is loss of human agency: when teams delegate decision-making to agents for the sake of speed, the result is epistemic failure, slower debugging, weaker architecture, and poorer strategic decisions. Governance should therefore preserve decision rights and system comprehension rather than only accelerating delivery with autonomous tooling.
Keep system-defining decisions, such as architecture, APIs, and other gestalt-defining elements, under deliberate human authorship rather than automated agents, because foundational mistakes propagate widely. Separate low-risk generated code from high-consequence design choices.
Agent-assisted development can turn isolated human errors into compounded system-level failures: agent-driven code generation can outpace human review, letting many small defects accumulate and interact until the codebase becomes hard to reason about. Throughput looks good, but review capacity and architectural understanding become the real constraints.
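The dynamic can be shown with a toy queueing model (entirely an illustration; the function name and rates are assumptions, not measurements from the post): changes arrive at the generation rate and are cleared at the review rate, so any sustained gap between the two compounds into an ever-growing backlog of unreviewed code.

```python
def unreviewed_backlog(gen_rate: float, review_rate: float, periods: int) -> list[float]:
    """Toy model of the backlog of unreviewed changes over time.

    Purely illustrative: each period, `gen_rate` units of agent output arrive
    and humans clear at most `review_rate` units. The rates are assumptions.
    """
    backlog = 0.0
    history = []
    for _ in range(periods):
        backlog += gen_rate                    # agent output arrives
        backlog -= min(backlog, review_rate)   # humans clear what they can
        history.append(backlog)
    return history

# When generation outpaces review, the backlog grows without bound:
print(unreviewed_backlog(gen_rate=120, review_rate=100, periods=5))
# When review keeps up, the backlog stays at zero:
print(unreviewed_backlog(gen_rate=80, review_rate=100, periods=5))
```

Even a modest 20% gap between generation and review grows linearly forever, which is why the post's framing treats review capacity, not generation speed, as the binding constraint.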