Getting paid to vibe code: Inside the new AI-era job | Lazar Jovanovic (Professional Vibe Coder)
Episode: 102 min
Read time: 3 min
Topics: Artificial Intelligence
AI-Generated Summary
Key Takeaways
- ✓Parallel Prototyping Method: When beginning any build, start five simultaneous projects: one from a brain-dump voice prompt, one from a typed detailed prompt, one with Mobbin or Dribbble design references, one with actual code snippets from libraries like Twenty First Dev, and one with template files. Establishing design clarity upfront saves hundreds of credits in the long run compared with iterating endlessly on a single flawed direction. Switch between tabs while agents process to maximize productivity.
- ✓Context Window Management: Spend 80% of time planning in chat mode and only 20% executing. Create four core markdown files before building: masterplan.md for high-level intent, implementation-plan.md for build sequence, design-guidelines.md with CSS specifications, and tasks.md breaking work into discrete steps. Reference these files in every prompt so the AI agent allocates tokens to execution rather than re-reading entire codebases, preventing the obedient-but-wrong responses that occur when context runs out.
- ✓Three Wishes Rule: AI tools have token limitations, like a genie granting only three wishes: a vague wish to "be taller" might leave you thirteen feet tall. Specificity matters, so provide exact references, file names, and context with each request. When a codebase exceeds twenty files, the agent spends 80% of its tokens reading and only 20% thinking and executing, leading to surface-level fixes rather than actual solutions.
- ✓Four-by-Four Debugging Framework: When blocked, try each of four methods once: click the tool's auto-fix button; add console logs to increase visibility, then share the output with the agent; import the code into Codex or an external LLM for diagnosis while keeping fixes in the primary tool; or revert three steps back and retry with a clearer prompt after a break. After resolving an issue, ask the agent how to prompt better next time and add the learnings to rules.md to prevent recurrence.
- ✓Design Over Code Optimization: In a world where everyone produces good-enough output with AI, magic-level design and taste become the differentiator. A simple gradient can require fifty layers with varying opacity levels. Expose yourself to world-class design through newsletters, follow elite designers building publicly, and study UI style libraries. Fonts alone constitute 60% of output quality perception. Technical stack choices like HTML versus React no longer matter to end users who only experience the interface.
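The context-file workflow above can be sketched in code. This is an illustrative sketch, not anything shown in the episode: the four file names come from the takeaway, but the prompt-assembly function and its format are hypothetical, standing in for whatever mechanism a tool like Lovable uses to inject context.

```python
"""Sketch: prepend the four planning files to every prompt so the
agent spends tokens executing against the plan rather than re-reading
the codebase. File names follow the episode; the prompt layout is a
hypothetical example."""
from pathlib import Path

CONTEXT_FILES = [
    "masterplan.md",           # high-level intent
    "implementation-plan.md",  # build sequence
    "design-guidelines.md",    # CSS specifications
    "tasks.md",                # work broken into discrete steps
]

def build_prompt(request: str, root: str = ".") -> str:
    """Prefix the user's request with each context file that exists."""
    sections = []
    for name in CONTEXT_FILES:
        path = Path(root) / name
        if path.exists():
            sections.append(f"## {name}\n{path.read_text()}")
    sections.append(f"## Request\n{request}")
    return "\n\n".join(sections)
```

The point of the pattern is the ratio the episode describes: with the plans supplied up front, the agent's token budget goes to execution instead of discovery, avoiding the obedient-but-wrong responses that appear when context runs out.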
What It Covers
Lazar Jovanovic, the first professional Vibe Coding Engineer at Lovable, explains how he builds production software using AI tools without traditional coding skills. He shares frameworks for maximizing AI tool effectiveness, including the parallel prototyping method, context management through PRD files, and the four-by-four debugging approach. The conversation explores emerging career paths and skill requirements in AI-assisted development.
Key Questions Answered
- •Agent Output as Education: Read agent explanations religiously, not code syntax. The agent describes what it did, why, and what to test next. This output teaches what's possible with tools and reveals thinking patterns. When models like Claude show reasoning streams, they reveal token allocation to tasks like managing user anxiety versus solving problems. Learning from agent communication builds judgment about feasible requests and appropriate context provision without traditional computer science education.
- •Career Path Through Public Building: Professional vibe coding roles emerge from building publicly and sharing all knowledge without gatekeeping. Create YouTube tutorials, post projects on LinkedIn with detailed explanations, participate in hackathons, and apply to roles by submitting Lovable apps instead of resumes. Companies including S&P 500 firms now list Lovable skills in job descriptions and hire dedicated vibe coders to migrate entire legacy systems. The role exists because speed and judgment matter more than traditional engineering credentials.
Notable Moment
Jovanovic reveals he spent an entire week trying to build image generation into a Lovable app immediately after OpenAI announced the feature in ChatGPT, failing because the API did not exist yet. One week later when OpenAI released the API, he built the same app in thirty seconds. This failure taught him the boundary between productive delusion and impossible tasks, demonstrating how non-technical backgrounds create both advantages and blind spots.