Vibe coding
Mar 18, 2025 7:46 am
Hi,
"Vibe coding" - using AI to generate code without understanding what it actually does - isn't a new phenomenon. I've encountered countless developers throughout my career who add code that seems to work but can't explain why.
The difference today? The barrier to entry is lower. All you need to do is prompt an AI and code appears. But the fundamental problem remains unchanged: If you don't understand why code works, how can you possibly diagnose it when something inevitably breaks?
In the past, developers copy-pasted code from forums without truly grasping how it worked. Today's AI tools have simply accelerated this process. The risk is creating a generation of developers who can prompt effectively but struggle with debugging and architectural decisions.
Recently, I reviewed a project with several instances of AI-generated code implemented verbatim. The code worked in isolation but failed to align with the broader architecture, resulting in integration issues and performance bottlenecks.
As senior developers, we must share our knowledge about architecture and business domains to help mitigate these risks. We should:
- Create architecture decision records (ADRs) that explain not just what decisions were made, but why (see the sketch after this list)
- Invest time in mentoring to help juniors understand the reasoning behind patterns
- Implement thorough code reviews focused on understanding, not just correctness
- Document domain concepts to create a shared language
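To make the first bullet concrete, here is a minimal sketch of what such a record might look like. The project, decision, and wording are entirely hypothetical; use whatever ADR template your team already has:

    ADR-007: Use optimistic locking for order updates
    Status: Accepted
    Context: Two services write to the same order rows, and we have seen lost updates under load.
    Decision: Add a version column and retry on conflict instead of taking row-level locks.
    Why: Conflicts are rare in our traffic, so occasional retries are cheaper than holding
         locks, and this keeps the two services independent of each other's transactions.
    Consequences: Clients need a retry path, and our load tests must include a conflict scenario.

Notice that the "Why" and "Consequences" lines carry most of the value: they capture exactly the reasoning a future maintainer - or an AI prompt - cannot recover from the code itself.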
When we use AI without understanding the code, we're creating technical debt in the form of increased cognitive load for future maintainers - including ourselves!
Fixing broken code takes many times longer than writing it correctly in the first place. Whatever time you gain from quick generation, you lose later in debugging.
In a recent project, debugging an AI-generated component took roughly five times longer than writing it from scratch with proper understanding would have. That's valuable time wasted.
Use AI for isolated, well-defined tasks and review the output carefully. Treat AI as a junior developer whose work requires thorough code review, not as a magic solution.
Some practical guidelines:
- Use AI for boilerplate code, but review it line by line
- Let AI suggest approaches, but make architectural decisions yourself
- Ask AI to explain its generated code to ensure your understanding
- Use AI for learning, not just for production code
Remember what I shared in my "Brain-Friendly Programming" article - every line of code should justify its cognitive cost. When we use AI without understanding, we're creating a cognitive debt that will eventually come due.
Let's commit to using these tools to enhance our understanding, not replace it. Our future selves - and our colleagues - will thank us when they need to maintain that code six months from now.
What strategies have you found effective for balancing AI productivity with understanding? I'd love to hear your experiences.
Enjoy,
Markus