AI Too Human
There’s a tricky dynamic we, as engineers, must navigate: organisational expectations, especially in non-tech firms, about AI’s role in scaled software teams.
The problem is that AI is deceptive in its mannerisms of humanity. It’s built to appear self-assured, trustworthy—human. And both leaders and developers risk being led astray by that confidence.
It’s tempting (but fraught) to project human traits like judgement, creativity, and taste onto AI and accept it as such. These are, incidentally, the very qualities that make human developers uniquely valuable.
The hype around AI as a solution to human limitations has led some organisations to pressure teams toward using it—and to measure its adoption—adding to the already complex priorities engineers must juggle.
In my experience, this creates a growing internal tension, one Sean Goedecke has also written about. It places developers in a difficult position: safeguarding the shared mental model of the system while continuously mediating the influx of AI-generated code suggestions.
We are often the last line of defence against the sprawl of a codebase and the erosion of the intentionality that makes it cohesive and changeable. Companies should be diligent in asking whether new hires have experienced the consequences of poor design decisions—even their own—and whether they can recognise how that rot creeps in.
I say this half in jest, but it feels true: it’s only by shooting yourself in the foot, learning the hard way, and growing from it that you learn when The Genie needs to stay in the bottle.
Code is liability—and cost. Quality software lies in its design: building the right things in an intentional way. AI struggles with this, especially given its narrow context window. Yet the hype can imply it’s an elixir for delivering quality—just doing the same thing faster.
This is a kind of false equivalence: AI merely performs the latter part of the process, writing the code (the easy part), faster. But to the outsider, it appears equally capable of the former: the design.
AI might help get a greenfield project off the ground—the first 90%. But it’s the final 10% where good design decisions matter most for long-term cost and sustainability—and where AI struggles.
Even with common CRUD apps, AI is effective at accelerating rote, well-scoped tasks. But building quality software means accounting for unique dependencies and user intent. Creative solutions to the kinds of problems competitive businesses face require human factors: ourselves, our users, and their context—things AI cannot access… yet.
Leaders are right to prioritise AI—but to protect long-term value, especially in mature products, we must acknowledge its limits alongside its promise.
The appearance of productivity that AI offers is not the same as the cohesive design, optionality, and intent that developers focus on. In fact, my sense is that this apparent value often doesn’t reflect what engineers actually gain from it. Much of the illusion stems from AI’s human-like interaction mode.
We risk normalising its use in ways that reduce our optionality, our ability to change the system, which, as Kent Beck describes in Tidy First?, is where the true cost of software lies.
As developers, then, we must lean on our professionalism: be responsible, and be outspoken in the face of pressure, if we want our codebases to remain pleasant places to work and the driver of our product’s success, at least until a deeper understanding of AI’s limitations on novel, domain-specific problems becomes widespread in tech leadership discourse.
Nevertheless, learning to be effective with AI will be a superpower. The opportunity is in adopting it smartly—to build leverage, focus on second-order challenges, and uplift our teams at large. I’m here for that.