martinfowler.com
Kief Morris at Thoughtworks, writing on Martin Fowler’s site, lays out a framework for where developers should actually sit when AI agents write their code. The core tension is real: agents generate output faster than humans can review it, which makes line-by-line inspection a bottleneck rather than a safeguard. His answer is “humans on the loop”: engineers build the harness of tests, specs, and quality gates that constrains agent behavior, instead of approving individual outputs. The piece coins “harness engineering” as an emerging practice, and the logic tracks with how teams at Vercel, Shopify, and other engineering orgs have started restructuring their workflows around agent-driven development. What stands out is the argument that code quality still matters, not for craft reasons, but because messy codebases make agents slower and more expensive to run.
