Code Is a Liability, Not an Asset
I listened to a really interesting podcast by Cory Doctorow today, based on his essay “Code is a Liability.” There are a lot of points that I find extremely interesting, and I agree with many of them.
First, the core idea: code is a liability, not an asset. He explains that executives today, the ones pushing for massive AI integration, are “peeing green” when they hear how many lines of code were generated. In my own work, I’ve seen leaderboards ranking employees by how many lines of code they generate. It’s awful, because every new line expands the attack surface: another fracture, another tiny hole in your ship that you don’t know about.
His second point is that AI is the asbestos we’re putting in our world’s walls. For those who don’t know, asbestos can cause cancer. The material was widely used in construction in Spain about a century ago, and some people are still forced to live in those old, unhealthy buildings. AI, he argues, is a similar hidden danger.
Third, there’s a faulty idea that code becomes stable and unbreakable after its initial release and stabilization phase. The assumption is that it just works, with no moving parts, which is obviously not the case. Code is a brutal machine that requires heroic efforts to make it work and keep it running.
This leads to another crucial point: writing code and software engineering are two different things. When writing code, you care about the immediate concerns: the language, the syntax, memory usage, performance, getting the thing to run. With software engineering, you care about the long-term things: the operation of the system, the downstream and adjacent systems, everything that runs in parallel. You are a systems thinker, optimizing a complex machine with many integrated pieces, external systems, and, most importantly, humans. Software engineering is much more difficult. You know your system has to fail well. It has to be understandable and maintainable by newcomers, because people don’t stay at companies for long these days. Imagine every line of AI-generated code becoming an orphan in a year when its creator leaves. New employees will have to apologize for all this shitty work that was done.
The longer a piece of code is in operation, the bigger the issues. Doctorow gives the example of the Bloomberg Terminal. Their systems run on a specific RISC architecture. Now, they have to pay for special hardware, hosting, maintenance, and engineers who understand that architecture. Everything they do has to be backward compatible with both older and newer hardware. Keeping such a system safe, performant, and backward compatible is nearly impossible.
This brings up the assumptions we make as engineers, which often come from experience, what the Germans call Fingerspitzengefühl, a “fingertip feeling.” The more experience you have, the more you know what to touch and what to avoid. You can’t always explain it, but this intuition for production, software, and architecture is invaluable. The problem is that junior and mid-level engineers today won’t have the incentive or the time to dive deep into Python, Go, or any other language. There will be an army of people using AI to generate something impromptu, without caring about the long-term consequences. This is a huge problem, and it means that at some point, planes will go down and cars will break down on the interstate because mistakes will happen. They are already happening.
Some issues have to be solved again and again. He mentioned a house in the US that, for some reason, a default geolocation setting points to whenever coordinates can’t be resolved. People constantly show up there trying to find their lost devices, all because of inaccurate data. This kind of problem requires continuous attention.
Ultimately, the best code is the code you never wrote. You don’t have to maintain it or make it backward compatible. The question a good engineer should ask is not just can we write this, but should it exist at all? Sometimes, the answer is no.
When you write something yourself, you get that specific fingertip feeling for the issues. You can predict them, you can understand them in the logs. Without writing it yourself, you don’t develop that muscle memory and you won’t be able to solve much because you won’t understand what happened in the code.
Microsoft’s idea of having AI agents for every little task, managed by a master agent, is basically an admission that these agents aren’t very workable on their own. Someone at Microsoft even said they want to rewrite their entire codebase with AI. This is not possible. They promise 95% reliability for these agents, but when you multiply the probabilities of failure across a swarm of them, things are bound to go wrong.
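The compounding argument is easy to check with a bit of arithmetic. A minimal sketch, assuming the quoted 95% per-agent reliability and that a task only succeeds if every agent in the chain succeeds independently (the function name and the agent counts are my own illustration, not anything from the podcast):

```python
# If each agent succeeds 95% of the time and a task needs every
# agent in the chain to succeed, overall reliability decays
# exponentially with the number of agents involved.
def chain_reliability(per_agent: float, n_agents: int) -> float:
    """Probability that all n independent agents succeed."""
    return per_agent ** n_agents

for n in (1, 5, 10, 20):
    print(f"{n:2d} agents: {chain_reliability(0.95, n):.1%}")
# →  1 agents: 95.0%
# →  5 agents: 77.4%
# → 10 agents: 59.9%
# → 20 agents: 35.8%
```

By twenty cooperating agents, a “95% reliable” swarm fails more often than it succeeds, which is exactly why things are bound to go wrong.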
I was extremely surprised by the depth of Doctorow’s explanations. He’s a really bright guy. I’ve bought a couple of his books, and they seem special. An exciting, and perhaps dangerous, time is ahead of us. The more liability we create today, the more work there will be for future generations, who will have to figure out what’s happening with just a textbook and a mountain of unmaintainable code. As someone said on another podcast, your responsibility with AI grows exponentially, because you now ship more lines of code than ever, and you can’t guarantee how any of it communicates.