What Is Vibe Coding?
Hey there!
Vibe coding is the latest trend, and it is already spawning its own meme culture. Andrej Karpathy, the ex-Tesla AI lead and OpenAI founding member, recently coined the term to describe how large language models (LLMs) are getting so good that developers can simply "give in to the vibes" and let LLMs handle most of the coding.

The Rick Rubin Wearing Headphones meme is making a comeback, now embraced as a classic illustration of vibe coding. Originally popularized in January 2025, this image macro format pairs Rubin with captions, both ironic and sincere, depicting him as a master of programming, producing, engineering, or other creative fields.
Vibe coding has already made its way into real businesses. Y Combinator noted that for nearly a quarter of the startups in its latest batch, the codebase is almost entirely AI-generated. Founders are witnessing dramatic speed improvements: one founder from Train Loop reported a leap from a 10× to a staggering 100× increase in code generation speed within just one month.
A particularly striking example comes from @levelsio, who broke new ground by creating a fully AI-generated game using only two tools: Cursor and Anthropic's Claude. The project soared from $0 to $1 million in annual recurring revenue (ARR) in only 17 days, selling ad placements and amassing a user base of 320,000 players.
What Exactly Is "Vibe Coding"?
The roots of vibe coding trace back to early advancements in AI coding assistants. By late 2022, tools such as OpenAI's Codex and GitHub Copilot had already begun generating useful code snippets from natural language prompts. In 2023, Karpathy famously predicted that "the hottest new programming language is English," hinting at a future where high-level prompts might replace traditional, low-level coding.
In essence, vibe coding means using AI as your co-pilot, or rather as the primary coder, while you provide the guiding vision. As Karpathy puts it, "It's not really coding—I just see stuff, say stuff, run stuff, and copy paste stuff, and it mostly works."
Andrej Karpathy's tweet that coined the term "vibe coding" and sparked a new trend in AI-assisted development.
In his own words, he describes a workflow where he talks to a coding assistant, Cursor Composer (powered by Anthropic's Claude Sonnet), through the SuperWhisper voice tool, issuing simple commands such as "decrease the padding on the sidebar by half," letting the AI make those code changes, and simply clicking "Accept All" without even reading the diffs or understanding every line.
Karpathy's candid reflections on this workflow can be seen in his tweet here and in a follow-up tweet detailing his process here, which garnered an ambiguous "Hmm" reaction from Elon Musk.
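To picture what "Accept All without reading the diffs" looks like in practice, here is a deliberately simplified sketch of that loop, not how Cursor Composer actually works under the hood. It assumes the official OpenAI Python SDK with an OPENAI_API_KEY set; the model name, file path, and prompt are hypothetical stand-ins.

```python
# Illustrative caricature of the "vibe coding" loop: turn a natural-language
# instruction into a file edit and accept it without reviewing the diff.
# All specifics (model id, file path, prompt) are hypothetical; real tools
# like Cursor Composer handle this inside the editor.
from pathlib import Path
from openai import OpenAI  # assumes the official OpenAI Python SDK and an OPENAI_API_KEY

client = OpenAI()
target = Path("sidebar.css")  # hypothetical file being "vibe coded"

instruction = "Decrease the padding on the sidebar by half."
source = target.read_text()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative choice; any capable code model works
    messages=[
        {"role": "system", "content": "Rewrite the given file to satisfy the instruction. Return only the full updated file."},
        {"role": "user", "content": f"Instruction: {instruction}\n\nFile:\n{source}"},
    ],
)

# The "Accept All" step: write the model's output back without reading the diff.
target.write_text(response.choices[0].message.content)
```

The point of the sketch is the last line: the change is written back without anyone reading it, which is exactly where vibe coding diverges from ordinary AI-assisted coding.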
The important nuance here is how much understanding the human maintains. Benj Edwards, Ars Technica's Senior AI Reporter, made a great point about a distinction, originally highlighted by AI researcher Simon Willison, between using AI as an assistant and vibe coding.
Here's the gist:
- If you carefully review, test, and understand all the AI-written code, that's just using an LLM as an assistant, not vibe coding.
- True vibe coding implies a level of trust (or risk) where you accept code without fully understanding it. It's coding on autopilot: extremely fast and flexible, but a bit "loose" on the rigorous comprehension side.
Vibe Coding Impact: Shifting Roles and Business Dynamics
The concept of vibe coding is becoming increasingly intertwined with broader trends in AI, startup culture, and modern software development. We analyzed insights from Andrew Chen, General Partner at a16z (read his take), along with key takeaways from a recent YC panel discussion (watch here).
Here's a condensed look at the biggest shifts impacting businesses, developers, and users:
- For business: As AI-generated code becomes more reliable, startups can drastically reduce their time-to-market and shorten product iteration cycles.
- For users: With more accessible and intuitive AI tools, even non-traditional developers, like young enthusiasts or self-taught programmers, can now build functional applications. Robert Keus, Social Entrepreneur at Brthrs AI and GreenPT, shared that both of his daughters (8 and 11) have built their own webshops using Lovable, selling their handmade jewelry online.
- For developers: You are now expected to focus more on product sense than on writing code line by line. The traditional coding interview is losing relevance. Instead, companies are likely to prioritize a candidate's ability to use AI tools effectively, assess the quality of AI-generated code, and demonstrate strong product-thinking skills.
Something to keep in mind: While the promise of exponential productivity gains is enticing, current AI tools still fall short when it comes to debugging complex issues. Developers often find themselves "spoon-feeding" these systems explicit instructions to iron out errors, a process impossible without human oversight and deep technical expertise.

Vibe coding is fun until you have to debug.
The development process is likely to bifurcate into two phases:
- Zero-to-one: Rapid feature shipping enabled by AI-generated code.
- One-to-n: Scaling, refining, and ensuring system robustness.
Striking the right balance between embracing AI's potential and maintaining rigorous quality standards will be key to navigating AI integration in software development and other industries.
Now, let's get to the news!
Generative AI Tools Updates
Google DeepMind Introduces Gemma 3
What is it? A new multimodal open model designed to run efficiently on consumer hardware while offering advanced AI capabilities like vision processing and extended context handling. A minimal usage sketch follows the feature list below.
Key Features
- Multimodal Processing with custom SigLIP vision encoder for image handling
- Extended Context support up to 128K tokens (32K for 1B model)
- Enhanced multilingual support for 140+ languages
- Models ranging from 1B to 27B parameters
- Training innovations including knowledge distillation
- Optimized for deployment across phones, laptops, workstations, and cloud
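Since the pitch is that Gemma 3 runs on consumer hardware, here is a minimal local inference sketch using Hugging Face transformers. It assumes the instruction-tuned 1B checkpoint is published as google/gemma-3-1b-it, that your transformers release is recent enough to include the Gemma 3 architecture, and that you have accepted the model license and logged in with a Hugging Face token.

```python
# Minimal local text-generation sketch for Gemma 3.
# Assumptions: checkpoint id "google/gemma-3-1b-it" is available on Hugging Face,
# a recent `transformers` release with Gemma 3 support is installed, and the
# model license has been accepted (gated repo, requires an HF token).
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="google/gemma-3-1b-it",  # 1B variant: text-only, 32K context
    device_map="auto",             # use a GPU if available, otherwise CPU
)

prompt = "Summarize what vibe coding is in two sentences."
output = generator(prompt, max_new_tokens=128)
print(output[0]["generated_text"])
```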
Tavus Introduces Emotionally Intelligent CVI
What is it? An updated Conversational Video Interface that leverages emotional intelligence to create more natural, human-like video interactions.
Key Features
- Phoenix-3 for advanced full-face animation and emotion control
- Raven-0 for real-time visual cue interpretation
- Sparrow-0 for optimized conversational flow timing
- Integrated system for digital twins and AI-powered interviews
Chinese Startup Introduces Autonomous Agent Manus
What is it? A new autonomous AI agent that uses a multi-agent architecture to execute complex tasks from start to finish without continuous user input.
Key Features
- Autonomous task decomposition and execution
- Versatile capabilities for various use cases
- Integration with existing LLMs (Claude 3.5 Sonnet, Qwen)
- Complete workflow automation
Hedra Unveils Character-3 and Studio
What is it? The first omnimodal AI model capable of processing image, text, and audio simultaneously, alongside a unified platform for AI-driven video creation.
Key Features
- Simultaneous image, text, and audio processing
- Dynamic video generation with natural expressions
- Synchronized lip movements with speech
- Integrated workflow with intuitive controls
- Real-time previews and API integration
Read More About Hedra Studio's Character-3
OpenAI Updates ChatGPT for macOS
What is it? A significant update to the ChatGPT macOS app enabling direct code editing within popular development environments.
Key Features
- In-context code editing in Xcode, VS Code, and JetBrains
- Auto-Apply Mode for automatic code edits
- Streamlined workflow without copy-paste
- Real-time code modification
Learn More About ChatGPT on macOS Direct Code Editing
Sakana AI Introduces AI Scientist-v2
What is it? An AI system that achieved a breakthrough by independently generating a peer-reviewed scientific paper accepted at an ICLR workshop.
Key Features
- Complete autonomous research generation
- End-to-end scientific paper writing
- Successful peer-review process
- Ethical oversight and transparency
- Competitive scoring in review
Read More About the AI Scientist's Peer-Reviewed Publication
Other News
- AI21 Labs introduces Maestro to enhance enterprise AI application reliability. Read More
- Meta shares insights on AI business tools and agents development strategy. Read More
- Mistral AI releases new OCR capabilities for document processing. Read More
- Google's Chirp 3 adds eight new voices for 31 languages, expanding text-to-speech capabilities. Read More
- Google announces Gemini will replace Google Assistant on mobile devices by end of year. Read More
- OpenAI redesigns Chat Playground as Prompts Playground with enhanced testing and iteration tools. Read More
- Google updates NotebookLM with Gemini 2.0 Thinking and improved citation features. Read More
- Anthropic upgrades Console for streamlined Claude-powered application development. Read More
- Alibaba's Qwen Team open-sources QwQ-32B, a 32B parameter reasoning model. Read More
- OpenAI releases Agents SDK for production-ready AI applications with integrated orchestration. Read More
Curated Gems: 5 Chain-of-Thought-Inspired Prompting Techniques You've Probably Missed
We've just published a new set of docs in our Prompt Engineering Guide for 5 chain-of-thought-inspired prompting techniques you've probably missed:
- Chain-of-Code: combines code execution and language-based reasoning, merging the strengths of Chain-of-Thought (CoT) and Program of Thoughts (PoT).
- Chain-of-Density: enhances text summarization by iteratively refining summaries and integrating missing details while maintaining a fixed length (see the sketch after this list).
- Chain-of-Dictionary: enhances multilingual machine translation by incorporating external dictionary entries into the prompt, helping LLMs translate rare or low-frequency words more accurately, especially in low-resource languages.
- Chain-of-Draft: optimizes LLM reasoning by generating concise, information-dense outputs while being a more efficient alternative to Chain-of-Thought (CoT) prompting.
- Chain-of-Knowledge: improves reasoning in LLMs by structuring knowledge representation and verification, reducing hallucinations common in Chain-of-Thought (CoT) prompting.
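To make one of these concrete, here is a minimal Chain-of-Density-style loop: each round asks the model to fold a few missing entities into the summary without letting it grow. It is a simplified sketch of the technique covered in the guide, assuming the official OpenAI Python SDK and an OPENAI_API_KEY; the model name and round count are illustrative.

```python
# Simplified Chain-of-Density-style loop: iteratively densify a summary while
# keeping its length roughly fixed. Assumes the official OpenAI Python SDK and
# an OPENAI_API_KEY; model name and round count are illustrative choices.
from openai import OpenAI

client = OpenAI()

ARTICLE = "..."  # the source text you want to summarize

def densify(article: str, rounds: int = 3, length_words: int = 80) -> str:
    summary = ""
    for _ in range(rounds):
        prompt = (
            f"Article:\n{article}\n\n"
            f"Current summary (may be empty):\n{summary}\n\n"
            "Identify 1-3 informative entities from the article that are missing "
            "from the current summary, then rewrite the summary to include them. "
            f"Keep the summary to roughly {length_words} words: make it denser, not longer."
        )
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative choice
            messages=[{"role": "user", "content": prompt}],
        )
        summary = response.choices[0].message.content
    return summary

print(densify(ARTICLE))
```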
Check them out and let us know what you think!
From Learn Prompting Team: Our AI Security Masterclass Now Features 9 Top Experts

Time is running out. In just 3 days, we kick off our 6-week Masterclass on AI Security, where you'll learn from leading experts in Generative AI, Cybersecurity, and AI Red Teaming.
And there's more: we've added four new live guest speakers, bringing the total to nine AI security specialists who will share their cutting-edge insights and hands-on expertise with you.
Meet your instructors and guest speakers:
- Sander Schulhoff: CEO of Learn Prompting, creator of HackAPrompt, and leader of AI security workshops at Microsoft, OpenAI, Deloitte, Dropbox, and Stanford.
- Jason Haddix: Former CISO at Ubisoft, Head of Security at Bugcrowd, and a top-ranked bug bounty hacker, with extensive experience in penetration testing and AI security.
- Richard Lundeen: Principal Software Engineering Lead at Microsoft's AI Red Team, developing PyRit, a foundational AI security framework.
- Sandy Dunn: Cybersecurity leader with over 20 years of experience, project lead for the OWASP Top 10 Risks for LLM Applications, and an adjunct professor in cybersecurity.
- Joseph Thacker: Principal AI Engineer at AppOmni, top AI security researcher, and winner of Google Bard's LLM bug bounty competition.
- Donato Capitella: Offensive security expert and AI researcher with over 300,000 YouTube learners, teaching how to build and break AI systems.
- Akshat Parikh: Elite bug bounty hacker, ranked in the top 21 in JP Morgan's Bug Bounty Hall of Fame, and AI security researcher backed by OpenAI, Microsoft, and DeepMind researchers.
- Pliny the Prompter: Well-known AI jailbreaker, specializing in bypassing major AI model defenses.
- Johann Rehberger: Former Microsoft Azure Red Team leader, known for pioneering techniques like ASCII Smuggling and AI-powered C2 attacks.
Final spots are available. Sign up today!
Thanks for reading this week's newsletter!
If you enjoyed these insights about AI developments and would like to stay updated, you can subscribe below to get the latest news delivered straight to your inbox.
See you next week!
Valeriia Kuka
Valeriia Kuka, Head of Content at Learn Prompting, is passionate about making AI and ML accessible. Valeriia previously grew a 60K+ follower AI-focused social media account, earning reposts from Stanford NLP, Amazon Research, Hugging Face, and AI researchers. She has also worked with AI/ML newsletters and global communities with 100K+ members and authored clear and concise explainers and historical articles.