On Learning How to Build Software in the AI Era, With and Without Coding
I worked at software companies for over a decade, but in roles where I often looked at code and rarely touched it. I never felt comfortable making changes, or even reviewing others’ pull requests.
This was especially true for complex systems, like those I experienced at Lyft while working on new rider products. While I felt at home in a product or business review, or even an engineering retrospective, as a product generalist with no formal coding background, I was afraid of getting into the code itself, let alone modifying it.
Now, as a startup founder working with an extremely technical co-founder (a Pivotal Labs alum and repeat founder), I find myself increasingly drawn to understanding not just what software does or what value it adds, but how it works and how we build it.

In the past, when I tried to learn to code, the biggest roadblock was time. I never felt I had enough of it to deeply understand concepts, debug issues, or go beyond simple examples. I tried low-code and no-code tools as well, but the results were underwhelming. They worked well for basic prototypes or simple websites, but as soon as I wanted to create something unique or complex, I hit a wall.
Curiously, AI has helped me on both these fronts:
With AI, it’s easier to learn how to code and debug issues than with previous resources;
With agentic experiences and “vibe coding” tools, it’s easier to go from natural language to software application.
More recently, I’ve been pushing myself to learn how to code (traditionally), how to build software (agentically), and how to find the best of both worlds with AI-native software development and the emerging wave of new tools, editors, and agents.
While AI has made software development more accessible, a big gap still exists between prototyping and production.
AI Rubber Duck: Learning How to Code and How to Think About Coding
At the end of last year, I completed Harvard’s (in)famous CS50: Introduction to Computer Science class through the free online version. One of their newer tools is an “AI rubber duck debugger” with five hearts that decrease as you ask for help. When you run out, you get a “zzzzz” response until they replenish. It’s part video game, part token-meter, part rubber ducky. It was a godsend.
It helped me when I got the most stuck, but it also encouraged me to pause and think critically, rather than just offloading the problem to the AI.
Throughout this experience, I didn’t want the AI to write my code. I wanted it to help me think through problems, debug my logic, and occasionally show me a missing concept or better structure.
When I was an undergrad at Harvard years ago, the emphasis was on learning how to think, not just what to know. AI enables this style of learning more efficiently, if used intentionally.
But AI is a double-edged sword. It can also help you skip learning and do more with less foundational knowledge. Feed some buggy code or a spec into Claude and see what happens; odds are, you’ll get (re)written code back. That’s powerful, but risky. In software, ignorance isn’t bliss. It’s bugs, inefficiencies, and security vulnerabilities.
This brings me to learning to “vibe code”.
Vibe Coding: A New Way to Build Software Applications
ChatGPT defined vibe coding as follows:
“Vibe coding is intuitive, flow-based programming driven by creativity and mood—not structure or strict plans.”
This can take the form of agentic experiences like Devin, or AI-enhanced editors such as Zed, Windsurf, Cursor, and GitHub Copilot. For me, it often means using natural language-to-app tools like v0 by Vercel, Lovable, bolt.new, or Firebase Studio.
These tools are a big improvement over the low and no-code platform experiences I tried in the past. But they still have limitations.
You can go from nothing to something quickly, often with a decent UI and some functional logic. But getting from an initial build to something production-grade is much harder.
I’ve been building low to moderately complex web apps with these tools, such as calendars and multi-page to-do apps. Here’s the workflow I’ve found most effective:
Prompt the AI to generate a product spec first, not the app itself.
Iterate on the spec until it looks solid.
Once it’s ready, ask the AI to build it. Usually this requires connecting to a service like Supabase first (see the sketch after this list).
Iterate repeatedly, using both the preview window and a live URL to test.
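For a sense of what “connecting to Supabase” ends up meaning in practice, here is a minimal sketch of the kind of client wiring these tools tend to generate for a to-do app. The environment variable names and the “todos” table are illustrative assumptions, not what any particular tool actually emits:

```ts
// Minimal sketch of typical generated Supabase wiring (illustrative only).
// Env variable names and the "todos" table are assumptions for this example.
import { createClient } from "@supabase/supabase-js";

const supabase = createClient(
  process.env.NEXT_PUBLIC_SUPABASE_URL!,
  process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY!
);

// Fetch all to-do items.
export async function listTodos() {
  const { data, error } = await supabase.from("todos").select("*");
  if (error) throw error;
  return data;
}

// Add a new to-do item and return the inserted row.
export async function addTodo(title: string) {
  const { data, error } = await supabase
    .from("todos")
    .insert({ title, done: false })
    .select()
    .single();
  if (error) throw error;
  return data;
}
```

Seeing even this much of the generated code made it easier for me to understand what the tool had actually built, and where to look when something broke.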
There are quirks. For example, v0 sometimes gave me human-oriented multi-week roadmaps instead of just building what I asked. I had to be very clear that the tool was supposed to do the work.
Other experiences were frustrating rabbit holes. For instance, I couldn’t figure out how to switch my Supabase integration from a free to a pro account in v0. As a temporary hack, I’ve been reusing two free databases. That’s not ideal for anything beyond disposable side projects.
With Lovable, a deployed site showed a broken sign-in page that didn’t match the preview. It took five or more prompts to fix. Eventually it worked, but I still don’t know why.
Still, it’s been fun, and sometimes intensely annoying. Even something as “simple” as building a to-do app feels wildly different across tools.
I suspect my background as a product manager helped. Years of working with engineers trained me to think clearly about software, write strong specs, and communicate ideas well. All of that improves prompt quality. And with vibe coding tools, prompting well is the most important skill.
AI-Enabled Feedback Loops Are Critical, No Matter How You Build
The more I experiment, the more convinced I am that we need new tools to help people interact with the software they’re building through AI. Prompting alone isn’t enough.
That’s what we’re exploring at CodeYam. We’re building a simulation-based IDE to make navigating, understanding, and debugging AI-generated code easier. This is something I have wanted on nearly every project I’ve worked on.

Just as the rubber duck debugger saved me hours while learning to code, CodeYam could have saved me hours dealing with strange bugs in my current projects. We’re still in R&D, but I’m excited about what’s coming.
Right now, both experienced developers and newer builders like me can feel that AI is transforming how we build software. But building software has never just been about writing code. It’s about creating user value, iterating fast, testing, debugging, maintaining, and evolving your product.
The future I imagine is one where more people can participate meaningfully in software creation. It’s also a future where professionals can work with more power, precision, and creativity than ever before.