Mantra
Make it work, make it right, make it fast is a nice way of thinking about software development. It’s not perfect, because these are not clear-cut activities, but it’s still a good mental model for how one should sort their priorities when approaching a programming task.
It’s also mostly self-explanatory. First, get your basic version going. Then refactor it: improve readability, follow some architecture pattern, etc. Finally, optimize it.
One problem with AI-assisted coding is that we’ve emphasized make it work and are paying less attention to the other two steps.
AI coding tools have crushed the cost of that first step. You describe what you want, maybe iterate a few times, and minutes later you’ve got something that runs. Maybe it even passes tests. Ship it.
Make it work, but also make it right and fast
Without LLMs, when you were forced to raw-dog code with nothing but your ingenuity and Stack Overflow, the process of making it work also involved making it right and fast to some extent: the first functional version that solved the task was of higher quality than what we get in the new paradigm.
That doesn’t mean it didn’t need refactors and improvements. That’s why we’ve always had tech debt. However, it would generally pass code review without large modifications unless your reviewer was having a bad day.
Make it work used to mean more than “it runs.”
It meant code that fit naturally into the system, used the right abstractions, matched the project’s tone, didn’t surprise the next person who touched it. Not perfect, some code smells. But good enough.
Now the code works but it often feels off.
The vibe era
The tests are green (sometimes because the model wrote a test that asserts a mocked value equals that same value), and even the PR message looks professional (because it’s also generated, and mostly slop).
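To make that failure mode concrete, here’s a contrived sketch (all names invented) of a test that is green by construction: the mock’s return value is asserted against the very value it was configured with, so no production logic runs at all.

```python
from unittest.mock import MagicMock

# Hypothetical pricing service; the real implementation never executes here.
pricing_service = MagicMock()
pricing_service.get_price.return_value = 9.99

def test_get_price():
    # Green by construction: we assert the mocked value against itself.
    # The test would stay green even if the real get_price were broken.
    assert pricing_service.get_price("sku-123") == 9.99

test_get_price()
```

A test like this exercises the mocking library, not the code under test.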
But you can generally tell when the work is heavily vibe coded by the obvious cues:
- Comments explaining obvious code, or even stuff like `# Delete the load_dataset function since we moved it to its own module` (it’s not every day you see comments for code that isn’t there).
- Methods and functions living in the wrong file.
- Over-defensive code. There’s no Pydantic model validation or type hint that can convince Claude that some function argument cannot be `None` and doesn’t need an `if` nor a `try/except`.
- New dependencies that weren’t actually needed.
- Abstractions and layers of indirection. LLMs tend to treat contiguous lines of related code as code that should be extracted into a function, even if it means I now have to scroll 200 lines down the file to find what it does.
- Markdown files littered with emojis.
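As an illustration of the over-defensive pattern above, here’s a contrived sketch: the type hint already rules out `None`, yet the guard and the blanket `try/except` appear anyway.

```python
def total(items: list[float]) -> float:
    # Over-defensive: the signature already promises a list of floats,
    # so this branch is unreachable in well-typed code.
    if items is None:
        return 0.0
    try:
        return sum(items)
    except Exception:
        # Blanket catch that silently hides real bugs.
        return 0.0

def total_clean(items: list[float]) -> float:
    # Trusts the contract the type hint states.
    return sum(items)
```

The defensive version is longer, harder to read, and worse: the `except Exception` turns genuine errors into a quiet `0.0`.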
It’s all working code, but you can tell no one really wrote it. Someone asked for it.¹
The above points are mostly harmless. But they’re the canary in the coal mine that makes you go “okay, let me look at the rest of the code more carefully.” Because hidden among the AI slop are more insidious modifications that progressively make the code more difficult to follow and slower.
Maybe it will go away on its own
There’s a tweet that’s been making the rounds:
Maintaining vibe-coded apps is a job for GPT-7, not a human.
Tech debt inflates away with every better coding model. The notion that you need to personally understand the code is hilariously short sighted, and clearly from someone who has never worked a large co with turnover.
It’s a clever way of looking at it, and in principle it makes sense. If every new model gets better at writing and refactoring code, the next one will clean up after this generation.
Here’s where I disagree:
- The chances of someone having to clean up the last mess before the better model drops are pretty high.
- I can’t tell my boss that we broke production because our current LLM made a mistake, but the next one will fix it.
My opinion comes from someone who works at a place (Mercado Libre) where people regularly rotate between teams and find themselves working on new codebases, and where the microservice architecture often means that getting something done requires submitting a pull request to someone else’s application. You’re not going to get far with AI slop, and if you do, it will cost someone else in the near future (probably just yourself).
The missing step
The irony is that AI actually gave us more time by making building that first version orders of magnitude faster.
But instead of spending that time making things right or fast, we just make more things that work, but introduce more tech debt in the process.
Those second and third steps are where readability, maintainability, and long-term performance live. They are the parts that keep your future self or your teammates from hating you for making their lives miserable.
I’m not saying we shouldn’t ship faster. We definitely should deliver more value to shareholders. But maybe we can ship fast, just not that fast, and use the time we save to avoid shooting ourselves in the foot with poor-quality code.
---
1. Your first reaction might be “skill issue”. Fair enough: you can steer the models away from these practices with good prompting, good context management, and good tool or MCP use, but in my experience that’s not how people generally use their coding environments. ↩︎