AI Sauce & Digital Asbestos: Creating the New Legacy
It was a sunny but cold day in Connecticut. I was running errands and listening to a podcast in which Cory Doctorow discussed "Software as a Liability, not an Asset." Recommending anything from a site called craphound.com is a tough sell, but he offers a strong contrarian view of what much of the tech industry considers the best path forward.
Many senior IT leaders are excited about using AI to dramatically increase productivity and reduce development costs by supplementing or ultimately replacing coders. The promise is appealing, but recent studies show that AI-generated code tends to be repetitive, to include unused constructs, and to carry more security and quality issues than human-written code.
Doctorow’s central thesis is that every line of code is a liability that must be managed. The asset it creates must be more valuable than the technical debt it introduces, which makes "lines of code produced" a potentially dangerous metric. He pairs this with the colorful observation that tech leadership aiming for scale often dislikes its developers: developers become more expensive as they grow more effective, and they often push back on risky, illegal, or impossible ideas. This explains why senior leaders prefer scalable, often sycophantic AI and "vibe coding" over their human teams. On my "AI Sauce scale," they are deep into "AI Unicorns and Rainbows."
Having spent my career writing, managing, and critiquing software in the insurance industry, I agree that legacy insurance application software is a risky liability that needs careful, ongoing management and replacement. But we don’t want to replace it with more bad code. For many years, the worst code liability in the insurance industry wasn't developer-written; it was user-created "mutant spreadsheets" and code generated by CASE tools like Texas Instruments' IEF in the 1980s. IEF generated functional but indecipherable code from business process models and required full regeneration for every change, which is highly impractical for modern, connected application architectures.
Doctorow compares this bloated, low-quality code in our systems to "digital asbestos." Like physical asbestos sealed safely in a building's walls, this digital mess remains long after it should be removed, often because removing it is as expensive and delicate as hazardous-waste abatement.
I believe that while the current state of AI software development is useful when applied correctly, it’s not the dramatic solution tech leaders envision. My quick research (using AI tools, but with my critical thinking hat on) suggests AI coding operates at the level of a high-throughput junior developer: excellent for generating drafts and routine code, but requiring human architecture, testing, and review for production-grade quality and reliability. In the insurance industry, most applications are part of a larger ecosystem connected through APIs. A regenerate-and-replace model may sound efficient, but it will create the same problems we had with CASE tools in defining, testing, and managing a complex enterprise architecture.
The sad irony for tech leaders is that AI replaces the cheapest programmers while demanding more time from the most expensive software professionals. Using AI for the entry-level work is fine, except that it constrains the pipeline of experienced resources without providing an entry-level path for developing the experts.
Ultimately, I believe we must use AI to create better solutions in the insurance industry, but we need to craft new, sustainable development models. These models must take both a short-term and long-term view to reduce code liability and ensure we partner people with AI, evolving that partnership over time.