Like many others, I’ve used AI tools like Claude Code and Cursor to build software. The biggest unlock is the speed of exploration. Going from an idea to a working app in hours is amazing!
Vibe coding works because it lowers the cost of testing and learning from ideas. You don’t need to look at the code. You may even use an AI voice session to brainstorm and build a spec. You don’t overthink architecture or edge cases. You just build, see something real, and iterate. That’s incredibly powerful, especially for PMs, designers, and PMMs who historically couldn’t get to working code that quickly.
The problem is what comes next.
Most prototypes are disposable. You learn something, then rebuild the “real” version later with proper engineering. In that process, you lose the components you built, the interaction patterns you discovered, and even bugs you already fixed. Each iteration resets instead of building on the last one.
Viable coding means building prototypes so that they create reusable, promotable assets, letting the best one ship after normal review instead of being rebuilt.
In practice, this means a project contains:
- A shared design system that is always importable
- A consistent data model, even if it starts simple
- Reusable components and conventions, extracted early once they prove useful (e.g. into CLAUDE.md files, skills, rules, and guardrails)
- A running log of plans, decisions, bugs, learnings, and tradeoffs that the AI can reference
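To make the first two items concrete, here is a minimal sketch of what an “always importable” design system can look like: a single tokens module that every prototype pulls from, so styling decisions carry over from one prototype to the next. All names and values below are illustrative, not from any real codebase.

```typescript
// Hypothetical shared design tokens: one importable source of truth
// that every prototype reuses instead of re-inventing styles.
export const tokens = {
  color: { primary: "#2563eb", surface: "#ffffff", text: "#111827" },
  spacing: { sm: 8, md: 16, lg: 24 }, // px
  radius: { card: 12 },
} as const;

// A reusable component style references tokens rather than hard-coded
// values, which makes it promotable later without a visual rewrite.
export function cardStyle(): Record<string, string | number> {
  return {
    background: tokens.color.surface,
    color: tokens.color.text,
    borderRadius: tokens.radius.card,
    padding: tokens.spacing.md,
  };
}
```

Because each prototype imports `tokens` rather than copying hex values, a design tweak in one place propagates to every experiment, and the surviving prototype already matches the system it would ship into.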
This changes a few behaviors. You are slightly more intentional about where logic lives. You name things so they can be reused (e.g. data models). You think a bit more about architecture, security, and scale even in early versions. You also introduce a lightweight review step. A builder skill proposes a plan but does not code yet. A reviewer skill, acting like a principal engineer, challenges that plan. Only after that do you generate code. This catches obvious issues early without killing momentum.
Occasionally, something interesting happens. A prototype stops feeling like a prototype. The design fits. The components are clean. The logic holds up. At that point, the question is no longer “should we build this into our product?” but “can we promote this to production?”
If that work exists outside your main codebase, you almost always rebuild. But if it already lives inside your main codebase, you can open a pull request (PR) with feature flags and test cases, run it through normal review by linters, bots, and engineers, and refine and ship without starting over. For users, this is responsible AI; for the team, it is agentic AI engineering.
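As a sketch of the feature-flag part of that PR (flag name and helper are hypothetical): the promoted prototype ships dark behind a flag, with the existing path left intact as the fallback until review and rollout complete.

```typescript
// Minimal feature-flag sketch. In a real codebase this would likely be
// backed by a config service; a plain object keeps the idea visible.
type FlagName = "newOnboardingFlow";

const flags: Record<FlagName, boolean> = {
  newOnboardingFlow: false, // off by default; flipped per environment
};

export function isEnabled(flag: FlagName): boolean {
  return flags[flag] ?? false;
}

// At the call site, the legacy path stays the default, so merging the
// promoted prototype carries no user-facing risk until the flag flips.
export function renderOnboarding(): string {
  return isEnabled("newOnboardingFlow")
    ? "prototype onboarding (promoted)"
    : "existing onboarding";
}
```

The point is that the prototype’s code can merge early and be reviewed in place, while the flag controls when users actually see it.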
For non-engineering product people, the goal is to preserve product learning in a form that can withstand engineering rigor and address a real user or business need, rather than remaining demo-ware.
I’m still learning, this approach is evolving, and I’m always curious to hear other perspectives in this fast-moving field.
PS: I write about building products to pay it forward and to sharpen my thinking.