Stop Calling It 'Vibe Coding'
Disclaimer: These are my views, but the article was written with AI assistance. I own the content.
TL;DR
- “Vibe coding” sounds cool but it’s misleading—serious AI programming requires more thinking, not less
- Trust AI, but verify everything. The people who understand AI best are the slowest to trust it blindly
- Context engineering and iterative feedback matter more than the model itself
- DHH can refuse AI. Most of us can’t afford to
- Don’t cheap out on models—the debugging time you waste isn’t worth it
My Journey
Looking back, my relationship with AI went through stages.
First came ChatGPT. Basically replaced Stack Overflow. No more digging through forum threads—just ask.
Then Copilot. AI entered the IDE, started helping inside the codebase. Still pretty basic—mostly in-file completions. A smart autocomplete.
The real shift was Claude Code.
Started with isolated features. Then I noticed it could work in complex existing projects—understanding context, following code styles, finding the right entry points. I began trusting it on details. Then discussing architecture.
When I realized how much had changed, I felt my ten years of experience suddenly fragile. FOMO hit hard. For a while, I tried doing everything with Claude Code.
Until I burned out.
After stepping back, I realized: my experience wasn’t useless. The opposite, actually—it’s what let me use AI well. Knowing which questions to ask. Knowing how to verify answers. Knowing where to stay alert. AI freed me to focus on architecture instead of drowning in implementation details.
Why “Vibe Coding” Is Wrong
Time to get something off my chest.
Andrej Karpathy coined “vibe coding” in early 2025—describing a state of fully giving in to the vibes, embracing exponentials, and forgetting that the code even exists.
Sounds cool. In practice? Completely misleading.
Using AI seriously for programming is not a “vibe.” You think about when and how to involve AI. You interact, correct, debug repeatedly before getting what you want.
Calling it a “vibe” makes it sound casual. It’s not. The thinking required isn’t less than writing code by hand—it’s just different. Instead of “how to implement,” it’s “how to describe requirements, how to verify results, how to catch the AI’s logical gaps.”
Simon Willison nailed it: “I won’t commit any code to my repository if I couldn’t explain exactly what it does to somebody else.” Doesn’t matter who wrote it.
Trust, but Verify
If vibe coding isn’t the answer, what is?
Anthropic’s blog introduced “Trust, but Verify.” That’s the right mindset.
Here’s the interesting part: according to reports, more than half of Anthropic’s engineers say they can only “fully delegate” less than 20% of their work to Claude. They still feel the need to check outputs. The people who understand AI’s limits best are the ones least willing to trust it blindly.
Anthropic’s blog has a good analogy—AI programming is like using Google Maps:
- Early stage: Watch every turn, make sure GPS isn’t leading you into a ditch. Review every line of generated code.
- Mature stage: Trust navigation for route planning, only check at key junctions.
- But never follow blindly: If GPS says “drive into the lake,” you override it.
AI-generated code has a quirk: it often looks perfect. Nice variable names, clean indentation, even comments. But the logic might be completely wrong. That’s “hallucination” in the code domain.
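A hypothetical illustration of what that looks like (the function and its bugs are invented, not from any real AI transcript): good naming, a helpful comment, tidy formatting, and two silent logic errors.

```typescript
// Looks polished: clean names, a comment, consistent style.
// But the loop drops the last element, and an empty array yields NaN.
function average(values: number[]): number {
  let sum = 0;
  for (let i = 0; i < values.length - 1; i++) { // bug: should be i < values.length
    sum += values[i];
  }
  return sum / values.length; // bug: 0 / 0 === NaN for an empty array
}
```

`average([2, 4, 6])` quietly returns 2 instead of 4. Nothing on the code’s surface signals that.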
Verification isn’t optional. It’s core. Some methods that work:
1. Dual-Agent Verification
One AI writes, another reviews. After writing, open a new chat and have the AI play “senior security auditor” to find issues. AI is often sharper at reviewing code than at generating it.
2. Test First
Before implementation, have AI generate test cases. Review the test logic, confirm tests are reasonable, then generate implementation. Tests don’t pass? Code has no value.
3. Static Analysis as Safety Net
Set up TypeScript and ESLint. If AI-generated code can’t pass type checking, reject it immediately.
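One way to set up that gate (the flag choices here are my suggestion, not the article’s; all are standard tsconfig options):

```jsonc
// tsconfig.json — reject AI output that doesn't survive strict type checking
{
  "compilerOptions": {
    "strict": true,                   // enables noImplicitAny, strictNullChecks, etc.
    "noUncheckedIndexedAccess": true, // arr[i] is T | undefined, surfacing missing bounds checks
    "noEmitOnError": true             // refuse to produce output if type checking fails
  }
}
```

Run `tsc --noEmit` and `eslint .` in CI so code that fails the check never lands.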
4. Human Review Last
Even if code runs, check readability and maintainability. Machines ensure “works.” Humans ensure “maintainable.”
How to Use AI Seriously
Verification is the last line. Better approach: improve AI output at the source. Two areas matter—interaction and correction.
Interaction: Context Engineering
Don’t just ask “how do I write a login page.”
Do this instead:
- Gather context: Feed relevant schemas, component docs, API definitions
- Set a role: “You’re a senior frontend architect focused on security and accessibility”
- Define constraints: “Must use React Hook Form, must use Zod for validation”
- Step-by-step: “First explain design approach, then list file structure, finally generate core code”
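Put together, a prompt following those four steps might look like this (the project details are invented for illustration):

```text
Context: [paste the user table schema, the existing form components, the POST /api/login spec]

You're a senior frontend architect focused on security and accessibility.

Constraints: use React Hook Form for form state and Zod for validation;
follow the error-handling conventions in the pasted code.

First explain your design approach, then list the file structure,
and only then generate the core code.
```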
Correction: Iterative Feedback
When AI output is wrong, don’t rush to fix manually. Don’t just regenerate either.
- Feed compilation errors or stack traces directly to AI
- Ask why: “Why this library? Pros and cons vs XX?”
- Guide fixes: “Code throws on empty arrays, add boundary checks”
This iterative loop fixes immediate problems and helps you understand how AI thinks—building experience for next time.
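The “guide fixes” bullet, as a before/after sketch (`maxOf` is an invented example):

```typescript
// First AI attempt: reduce with no initial value throws a TypeError on [].
// function maxOf(values: number[]): number {
//   return values.reduce((a, b) => Math.max(a, b));
// }

// After feeding back "this throws on empty arrays, add boundary checks",
// the revision makes the edge case explicit instead of accidental:
function maxOf(values: number[]): number | undefined {
  if (values.length === 0) return undefined; // the boundary check the first draft lacked
  return values.reduce((a, b) => Math.max(a, b));
}
```

Because you named the failure rather than silently patching it, the model’s next draft in the same session tends to handle similar edge cases unprompted.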
DHH Can Refuse. You Probably Can’t
After all this about using AI well, some might ask: what if I just don’t want to?
DHH recently said even with AI being powerful, he wants to preserve the right to write code by hand. It’s something he enjoys.
I respect that. But being able to refuse something means you have the privilege to refuse.
DHH has massive industry reputation and wealth. He codes for artistic expression and personal satisfaction, not survival or deadlines. He can afford to choose less efficient but more enjoyable approaches.
Most developers face a different reality:
- Job market competition intensifies, efficiency demands keep rising
- Junior-level tasks are getting automated
- For indie developers, AI is survival leverage—one person can do what used to need a team
DHH’s view is craftsman romanticism. Beautiful. But regular people face industrialization. Learning AI isn’t about giving up programming joy—it’s about gaining footing in this era.
AI is a great lever for regular people. Amplifies individual capability. One person becomes a team.
Don’t Skimp on the Subscription
If you’re using AI, don’t cut corners on tools.
Many try AI programming with free models, get disappointed, conclude “AI is useless.” Classic cognitive bias from wrong tool choice.
Problems with free or lightweight models:
- Weak at complex reasoning, code full of subtle logic errors
- Small context windows, can’t understand project structure
- High hallucination rate, fabricates non-existent APIs
Inferior models create massive “error correction costs.” You spend hours debugging AI mistakes and end up feeling you should’ve just written it yourself.
Invest in AI. Use the best model available. For professional developers, a few dozen dollars monthly is a fraction of your hourly rate. What you get: a 24/7 senior pair programmer with massive knowledge base who generates code in seconds.
Not buying a tool. Buying time and cognitive bandwidth.
I use Claude Code daily, always prioritizing the most powerful model (currently Opus 4.5). Costs more, worth it. Only with the best model can you practice “trust and verify”—if the model isn’t strong enough, you don’t even have a foundation for trust. Just endless debugging.
The Bottom Line
World’s chaotic. Old order crumbling, new order not established.
In this transition, neither blindly proclaim “programming is dead” nor stubbornly refuse change. Using AI with a pragmatic, rigorous, critical attitude is probably the best survival strategy for regular people in the AI era.
The future belongs to those who master AI, not those who depend on it.
So stop calling it “vibe coding.” Call it whatever. But this thing? Anything but casual.