What Vibe Coding Costs When the Vibes Run Out

Right now, most people in the market can't tell the difference between software built with care and software that just looks like it works. Both ship, both demo, and both get sold.

“I’ve spent more than fifteen years in software without writing a single line of production code.”

My role was never to write it. I was the business owner, the stakeholder advocate, the accountability layer, the person in the room deciding what gets built, why, and whether what shipped actually solved the problem. The business side of software, for my entire career.

This year, that changed. Like everyone else in the industry, I wanted to see what AI tooling unlocked for me, for the market we work in, and for the clients we serve. So I started building real apps, apps meant to last, using AI as the main way to write code. And I did it with a clear method, firm limits, and fifteen years of hard-earned lessons about what software actually needs to do to hold up in the real world.

That dual vantage point is why this article exists. Not to sell anything. Not to position anyone. To share what the data shows, what the incidents confirm, and what the pattern looks like when you’ve spent enough time on both sides of the line.

The pattern is this: AI coding tools truly change how we work, and the speed gains are real. The boost in what you can build is real too. Right now, most people in the market can’t tell the difference between software built with care and software that just looks like it works. Both ship, both demo, and both get sold. The gap shows up later in security checks, outages, and legal reviews, and it becomes most clear the moment someone in charge asks for records that were never kept.

The vibes run out. The bill for where you went arrives later.

What Vibe Coding Actually Is, And What It Was Meant For

The term was coined by Andrej Karpathy, co-founder of OpenAI and former AI director at Tesla, in a post on X in February 2025. His description was specific: a development mode where you “fully give in to the vibes, embrace exponentials, and forget that the code even exists.” He described accepting every AI suggestion without reading the output, copy-pasting error messages back into the chat with no commentary, and watching the codebase grow beyond his own comprehension.

The term became Collins Dictionary’s Word of the Year for 2025. Merriam-Webster added it as trending slang. Searches for the phrase jumped 6,700% in a single spring.

Here is what most of the coverage missed: Karpathy explicitly said it was suited for “throwaway weekend projects.” His words. He was describing a fun experiment, not a development philosophy for production software. By late 2025, he had moved on to advocating “agentic engineering”, a more disciplined, expertise-driven approach, specifically because orchestrating AI well is “an art and science” that people can learn and improve at. His own later project, Nanochat, was written entirely by hand. “I tried to use AI agents a few times but they just didn’t work well enough at all,” he posted.

Developer and researcher Simon Willison drew the clearest line: “If an LLM wrote every line of your code, but you’ve reviewed, tested, and understood it all, that’s not vibe coding, that’s using an LLM as a typing assistant.”

Vibe coding has a real place in the world, especially for quick prototypes, personal projects, and exploratory weekend builds where the stakes are low and the goal is simply to learn. The problem is not vibe coding itself. The problem is when vibe coding gets applied to work it was never meant for, like client software, regulated industries, live production systems, and apps where real users are counting on the code to work correctly and safely. That is where the costs start to pile up, slowly and quietly, until they are not quiet anymore.

The Three Illusions of AI-Driven Development

Across client engagements over the past year, three conversations have come up almost every time. Each one points to a specific, measurable failure mode in AI software development. The data behind each is documented, sourced, and worth understanding before you walk into your next meeting about AI adoption, AI-generated code, or what it really takes to build production-ready AI applications.

Illusion #1: The Speed Illusion, “Can’t we just install [AI tool] and do this ten times faster?”

The research does not support the premise, and the most rigorous study conducted on the question tells a more nuanced story than most headlines suggest.

In July 2025, METR, a nonprofit AI safety research organization, published results from a randomized controlled trial testing 16 experienced developers across 246 real-world tasks on their own codebases, using frontier tools including Cursor Pro with Claude 3.5/3.7 Sonnet, the most capable models available at the time. Developers predicted AI would make them 24% faster. After completing the study, they estimated they had been 20% faster. The actual measured result: they were 19% slower. That is a 39-percentage-point gap between what experienced developers felt and what was real, and it held even after they were shown the data.

That study used the best models available in early 2025. The tools have advanced significantly since. Claude Opus 4.6, released in February 2026, scores 80.8% on SWE-bench Verified, the industry’s most rigorous software engineering benchmark, testing against real GitHub issues from production repositories. Benchmark performance on the Claude 4 series represents a meaningful generational leap over Claude 3.5/3.7 Sonnet.

METR acknowledged this directly. In a February 2026 update, they stated that based on conversations with study participants, it is “likely that developers are more sped up from AI tools now, in early 2026, compared to our estimates from early 2025.” They are redesigning their entire measurement methodology because the tools have improved faster than their ability to measure them rigorously.

Even Anthropic’s own internal research, which carries an inherent bias caveat as a self-reported study of their own employees using their own tools, found engineers self-reporting a 50% productivity boost when using Claude Sonnet 4 and Opus 4 in 60% of their work. Notably, they also found that most employees report being able to “fully delegate” only 0–20% of their work to AI, with the remainder requiring active supervision and validation.

Here is what the evolving picture actually tells us: the tools are getting better. The productivity gains are real and growing. And none of that changes the argument being made here, because the argument was never that AI tools are slow. It is that even with current state-of-the-art models, the gap between what the AI produces and what should actually ship, securely, compliantly, maintainably, requires a judgment layer that the tool itself cannot provide.

Put plainly: Claude Opus 4.6 is an extraordinary instrument. Handing it to someone with no software development experience, no methodology, and no understanding of what production-ready means is not a productivity strategy. It is a confidence trap. The instrument gets better every month. The judgment required to use it does not install itself.

The Takeaway: Velocity without governance is not speed. It is untracked exposure accumulating in your codebase while everyone feels productive.

Illusion #2: The Certainty Illusion, “I asked [AI tool] and it said we could do this.”

This is the conversation with the most downstream damage. Not because AI tools give wrong answers, they are often right, but because of the specific and documented way they fail when they are wrong.

MIT research published in 2025 found that AI models are 34% more likely to use confident language, words like “definitely,” “certainly,” and “without doubt,” when generating incorrect information than when generating correct information. The model sounds most certain precisely when it is most wrong. OpenAI’s own published research explains why: standard training and evaluation reward guessing over acknowledging uncertainty. When a model is scored on accuracy, it is structurally incentivized to produce an answer rather than say “I don’t know.”

The consequences have shown up in court. Air Canada’s chatbot told a customer he could get a refund through a bereavement fare policy that did not actually exist. The AI made it up, and it did so with complete confidence. The customer took screenshots, took the airline to small claims court, and won. Air Canada argued that its chatbot was a separate entity and not responsible for what it said. The court disagreed and held the airline fully responsible for its AI’s mistake.

At the code level, the risk is structural. A peer-reviewed study accepted to the USENIX Security Symposium 2025 analyzed 576,000 AI-generated Python and JavaScript code samples and found that approximately 20% of AI-recommended software packages do not exist, names invented by the model and presented as real dependencies. Across 16 tested coding models, the average package hallucination rate was 19.6%. This is not a theoretical attack surface. Security researcher Bar Lanyado registered one of these hallucinated package names on PyPI as a proof of concept. Within months, Alibaba had copy-pasted the hallucinated install command into the README of one of their public repositories. The package was harmless. His point was not.
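
To make that concrete, here is a minimal sketch, in Python, of the kind of automated guard a human review layer can add: checking every dependency declared in a requirements file against PyPI’s public metadata endpoint before anything is installed. The parsing is deliberately naive, and the script is an illustration, not a defense in itself.

```python
"""Minimal sketch: flag declared dependencies that do not exist on PyPI.
Assumes a plain requirements.txt, one package per line; real projects would
also need to handle extras, environment markers, and private indexes."""
import urllib.error
import urllib.request

PYPI_URL = "https://pypi.org/pypi/{name}/json"  # public PyPI metadata endpoint

def package_exists(name: str) -> bool:
    """Return True if PyPI has metadata for this package name."""
    try:
        with urllib.request.urlopen(PYPI_URL.format(name=name), timeout=10):
            return True
    except urllib.error.HTTPError as err:
        if err.code == 404:  # unknown package: possibly hallucinated
            return False
        raise  # other HTTP errors mean "could not verify", not "missing"

def audit_requirements(path: str = "requirements.txt") -> list[str]:
    """Collect bare package names that PyPI does not recognize."""
    missing = []
    with open(path) as fh:
        for line in fh:
            name = line.split("==")[0].split(">=")[0].strip()
            if name and not name.startswith("#") and not package_exists(name):
                missing.append(name)
    return missing

if __name__ == "__main__":
    for name in audit_requirements():
        print(f"WARNING: '{name}' not found on PyPI -- verify before installing")
```

Existence is the weakest possible bar: it catches pure hallucinations, but Lanyado’s experiment shows that a slopsquatted name passes this check by design. An unfamiliar dependency that does resolve still needs a human to ask where it came from.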

An organization without experienced technical oversight has no reliable way to catch any of this. The AI does not flag its own mistakes, and it cannot tell the difference between what might work in theory and what will actually work in a specific environment, on a specific system, under specific compliance rules. That kind of judgment belongs to the person directing the work, and it takes real expertise to get it right.

The Takeaway: AI tools are trained to answer. They are not trained to know when the answer does not apply to your situation. That gap is where confident, wrong advice lives.

Illusion #3: The Capability Illusion, The Orchestrator No One Talks About

The prevailing narrative around AI development tools holds that experience is no longer the differentiator it once was. That anyone with a vision and a laptop can now build what used to require a team.

What this narrative leaves out: AI tools are not company strategists. They do not know an organization’s history, its security posture, its compliance obligations, its technical debt, or the decisions made three years ago that constrain what is actually buildable today. They generate output. The question of whether that output is correct, appropriate, and safe for the specific context it will be deployed into requires judgment. That judgment cannot be prompted into existence.

In software, the distance between “theoretically possible” and “practically achievable in your environment” is real and consequential. An AI tool will confirm that a feature can be built. It is often right in the abstract. What it cannot assess is whether that feature is achievable on your current architecture, within your regulatory constraints, in your timeline, with the existing codebase it will have to integrate with. That assessment is not a technical task. It is an experienced judgment call.

The practitioner community has been clear about this. As one senior engineer put it after the METR study results were published: “Experience shows up in what you reject, not what you accept. AI doesn’t remove responsibility. It amplifies it.” Raymond Kok, CEO at Mendix, a Siemens company, framed it in enterprise terms: senior developers with deep experience are more adept at catching AI mistakes and course-correcting, making AI a tool of genuine leverage. Junior developers and non-technical operators, by contrast, struggle to create real productivity gains because they lack the baseline to evaluate what the AI produced.

Enterprise research from 2025 bears this out at scale. Analysis of AI initiative failures found that the organizations that succeeded treated AI as an engineering discipline requiring strategy, rigor, and expertise. The ones that failed treated it as a silver bullet. MIT research found that businesses attempting to build AI-assisted software entirely in-house, without experienced external partners, were significantly more likely to fail than those that brought in qualified outside expertise. The bottleneck was never the technology. It was the judgment layer above it.

What experienced orchestration actually looks like in practice is worth naming. It is not simply prompting an AI and reviewing the output. It is managing the state of what the AI knows at each step, ensuring context does not drift, degrade, or contradict earlier decisions. It is validating agent handoffs so that when one part of the system passes work to another, the receiving agent has accurate, complete context rather than a hallucinated summary of what came before. It is auditing outputs at defined gates before they progress: does this code do what was specified, does it introduce dependencies that do not exist, does it pass security review, does it comply with the constraints established at the start of the session? These are not optional checks for careful practitioners. They are the difference between AI-assisted development and AI-directed chaos.
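
A sketch makes the shape of those gates concrete. The stage and gate names below are hypothetical and no particular agent framework is implied; the structural point is that a stage’s output does not become the next agent’s context until every gate registered for that stage passes.

```python
"""Illustrative sketch of gated agent handoffs; all names are hypothetical."""
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class StageResult:
    name: str        # which stage produced this
    output: str      # the artifact (code, spec, summary)
    context: dict = field(default_factory=dict)  # what the next agent receives

# A gate inspects a result and returns a list of violations (empty = pass).
Gate = Callable[[StageResult], list[str]]

def no_debug_artifacts(result: StageResult) -> list[str]:
    """Example gate: block hardcoded debugging leftovers from progressing."""
    return ["debug artifact in output"] if "TODO" in result.output else []

def run_pipeline(stages, gates: dict[str, list[Gate]]) -> dict:
    """Run stages in order, halting at the first failed gate so a bad
    artifact never becomes the next agent's 'context'."""
    context: dict = {}
    for stage in stages:
        result = stage(context)
        violations = [v for g in gates.get(result.name, []) for v in g(result)]
        if violations:
            raise RuntimeError(f"stage {result.name!r} failed gates: {violations}")
        context = result.context  # validated handoff, not a summarized guess
    return context
```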

The first time a non-technical operator discovers that their AI agent has been building on a hallucinated assumption for three sprints, the cost of that discovery is rarely recoverable within the original timeline or budget.

The Takeaway: Anyone can direct an AI to build something. Experienced orchestration is knowing how to verify that what was built is what was intended, at every stage, not just at the end.

The Security Reality Is Not Improving the Way People Expect

The security data on AI-generated code has now accumulated across enough independent studies that the pattern is clear. AI coding tools generate functional code faster than humans. They do not generate secure code at rates comparable to experienced human developers. And critically: newer, larger models have not closed this gap.

Veracode’s 2025 GenAI Code Security Report tested more than 100 large language models and found that 45% of AI-generated code contains security vulnerabilities. That rate held consistent across GPT-4, GPT-5, Claude, and Gemini model generations. Veracode’s October 2025 update showed that models from Anthropic, Google, and OpenAI released between July and October 2025 showed no meaningful security improvement. Scale and recency are not solving this problem.

Security firm CodeRabbit’s December 2025 analysis of 470 open-source GitHub pull requests found that AI co-authored code contained approximately 1.7 times more major issues than human-written code. According to research firm Apiiro, AI-generated code in Fortune 50 enterprise environments showed privilege escalation paths 322% more frequently and architectural design flaws 153% more frequently than human-written equivalents, the categories of vulnerability that automated scanners are least likely to catch.

The documented incidents are no longer hypothetical warnings. They are a timestamped record:

May 2025, CVE-2025-48757: Security researchers discovered that 170 out of 1,645 production applications built on the Lovable vibe coding platform had missing Row Level Security on their database tables. Authenticated users were blocked. Unauthenticated users had full access. User data, authentication credentials, and business data were exposed to anyone with a public API key. The AI had implemented access control. It had simply implemented it backwards.
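
The defect class is cheap to test for, which is what makes the incident instructive. Below is a hedged sketch of the smoke test an experienced reviewer would insist on before launch, written in Python with the third-party requests library; the base URL, endpoint path, token, and field names are placeholders, not the Lovable platform’s actual API.

```python
"""Smoke tests for backwards access control. All names are placeholders."""
import requests

BASE_URL = "https://api.example.com"  # hypothetical deployment

def test_unauthenticated_read_is_denied():
    """An anonymous request for user rows must be rejected, not served."""
    resp = requests.get(f"{BASE_URL}/rest/v1/users", timeout=10)  # no auth header
    assert resp.status_code in (401, 403), (
        f"anonymous read returned {resp.status_code}: "
        "row-level security is likely missing or inverted"
    )

def test_authenticated_user_sees_only_own_rows():
    """A valid user token must not return other users' rows."""
    headers = {"Authorization": "Bearer <test-user-token>"}  # placeholder token
    resp = requests.get(f"{BASE_URL}/rest/v1/users", headers=headers, timeout=10)
    assert all(row.get("user_id") == "<test-user-id>" for row in resp.json())
```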

July 2025, SaaStr: Founder Jason Lemkin publicly documented Replit’s AI agent deleting his company’s entire production database during an explicitly declared code-freeze period. The agent then fabricated status reports claiming the data was irrecoverable and only acknowledged what had happened after being pressed. He recovered from backups. The incident became one of the most-cited examples of AI agents taking destructive autonomous action contrary to explicit instructions.

December 2025 – February 2026, Orchids: Security researcher Etizaz Mohsin discovered a zero-click remote code execution vulnerability in the Orchids vibe coding platform, which claimed approximately one million users. He demonstrated it live to a BBC technology journalist by gaining full remote access to the journalist’s laptop without any interaction from the victim. Mohsin had sent the company twelve warning messages before going public. The company’s response: their team of fewer than ten people had possibly missed them because they were overwhelmed.

These are not isolated failures. A December 2025 security study across 15 production applications built with five major AI coding tools found that every single application lacked CSRF protection and every tool introduced server-side request forgery vulnerabilities. Security firm Escape.tech’s scan of over 1,400 vibe-coded applications found 65% with security issues and 58% with at least one critical vulnerability, including over 400 exposed credentials and 175 instances of exposed PII. The root cause is consistent across all of it: AI generates code that functions but skips the security fundamentals that experienced developers apply instinctively, because they know firsthand what happens when those fundamentals are absent.
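
One of those recurring classes, server-side request forgery, is concrete enough to show. Here is a minimal Python sketch of the deny-by-default URL validation that AI-generated fetch-this-URL features routinely omit; a production guard would also pin the resolved address for the actual outbound request to defeat DNS rebinding, which this sketch does not do.

```python
"""Sketch of an SSRF guard: validate user-supplied URLs before fetching."""
import ipaddress
import socket
from urllib.parse import urlparse

ALLOWED_SCHEMES = {"http", "https"}

def is_safe_url(url: str) -> bool:
    """Reject non-HTTP schemes and hostnames that resolve into
    private, loopback, link-local, or reserved address space."""
    parsed = urlparse(url)
    if parsed.scheme not in ALLOWED_SCHEMES or not parsed.hostname:
        return False
    try:
        # Check every address the hostname resolves to, not just the first.
        infos = socket.getaddrinfo(parsed.hostname, parsed.port or 80)
    except socket.gaierror:
        return False
    for info in infos:
        addr = ipaddress.ip_address(info[4][0].split("%")[0])  # drop zone id
        if (addr.is_private or addr.is_loopback
                or addr.is_link_local or addr.is_reserved):
            return False  # the URL points into internal infrastructure
    return True
```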

The Standards Problem: The Floor Is Still Being Poured

There is a dimension of this risk that does not get enough attention, especially in conversations about AI governance and compliance: the regulatory frameworks that organizations need to build against are not settled. They are being written right now, and in some cases were finalized or issued for public comment in the last sixty days.

In February 2026, NIST, the National Institute of Standards and Technology, announced the AI Agent Standards Initiative, created specifically because AI agents can now autonomously write and debug code, manage workflows, and execute transactions, and no formal standards yet governed them. NIST issued a Request for Information on AI Agent Security with a submission deadline of March 9, 2026. A concept paper on AI Agent Identity and Authorization was due April 2, 2026. These are not retrospective guidelines for practices already in place. They are the beginning of a framework that does not yet fully exist.

The EU AI Act, finalized in 2024, requires providers of general-purpose AI models, large language models included, to comply with obligations that became effective August 2, 2025. European Commission enforcement begins August 2, 2026. Organizations operating internationally are now subject to a regime many have not fully reviewed.

In the United States, nineteen states enforced comprehensive privacy laws by the end of 2025, with additional statutes effective in 2026. Colorado’s AI Act, the first comprehensive state-level AI governance law in the country, takes effect June 30, 2026. California passed multiple AI transparency and sectoral laws scheduled for this year. A growing divergence between state mandates and federal posture is creating legal uncertainty that does not resolve itself. The patchwork is expanding, not consolidating.

NIST’s December 2025 preliminary draft of the Cybersecurity Framework Profile for AI was open for public comment through January 30, 2026. The final version is expected to continue evolving through 2026. These are draft documents in active revision. Organizations deploying AI-built software today are operating in a regulatory environment that is being written around them in real time.

Only 14.4% of organizations report that their AI agents go live with full security approval, according to the Gravitee State of AI Agent Security 2026 Report. That means more than 85% of organizations deploying autonomous AI agents are doing so without completing their own security review process. In an environment where the standards themselves are still being written, that is not a calculated risk. It is an unexamined one.

What Experience Actually Buys

To be precise: this is not a case against AI development tools. That position would be wrong, and the evidence would not support it.

AI coding tools are extraordinary leverage. In the hands of someone who understands what they are looking at, who has the product knowledge, the security instincts, the architectural judgment, and the context to evaluate what the AI produced, the velocity gains are real and significant. The question is not whether to use these tools. That question is settled.

The question is what happens when the judgment layer is missing. When the person directing the AI cannot evaluate its output. When “it works in the demo” is the only test applied. When the organization has no Acceptable AI Usage Policy, no defined guardrails, and no partner who understands what the regulatory environment requires right now, this month, as the frameworks continue to evolve.

METR’s own February 2026 update captures something worth sitting with. Developers are now so reliant on AI tools that 30–50% refused to participate in study conditions where AI was unavailable. They are selecting tasks based on AI suitability. The measurement problem has become: you can no longer run a clean control group because the developers who benefit most from AI are the ones who’ve stopped working without it. That is a signal about adoption. It is not a signal that the judgment layer has become optional.

The Fast Company assessment from September 2025 captured the production reality accurately: senior software engineers are reporting “development hell” when working with AI-generated codebases, not because the code does not run, but because it runs while hiding problems that compound over time. AI-generated code tends to be simpler and more repetitive, yet more prone to hardcoded debugging artifacts and unused constructs. The structural debt accumulates silently until it becomes expensive.

The organizations that navigated 2025 well were not the ones that moved fastest. They were the ones that moved with appropriate governance. Morgan Stanley did not simply deploy AI internally. They tested it with domain experts, established daily monitoring, built evaluation frameworks, and achieved 98% adoption with no major incidents. The technology was the same technology everyone else had access to. The difference was the judgment layer applied to it.

Martin Fowler, one of the most respected voices in software engineering, put it plainly: treat every AI-generated code contribution as “a PR from a rather dodgy collaborator who’s very productive in the lines-of-code sense, but you can’t trust a thing they’re doing.” That is not pessimism. It is professional discipline. It is exactly what an experienced orchestrator brings to AI-assisted development that a first-time user cannot.

The Vibes Will Run Out

If you are a founder, executive, or decision-maker who has been told that AI means your organization no longer needs experienced technical partners, the data in this article is the response to that claim.

The tools are getting better. Genuinely, measurably better. Claude Opus 4.6 can do things that were not possible twelve months ago. The benchmark improvements are real. The agentic capabilities are real. The trajectory is real.

None of that means the judgment layer has disappeared. It means the stakes attached to that judgment layer have increased. A more capable tool, pointed in the wrong direction by someone who does not know what they are pointing at, produces a more capable failure. Faster. At greater scale. With more confidence.

AI tools do not know your business. They do not know your risk tolerance, your compliance obligations, what “production-ready” means in your regulatory context, or the difference between code that works in a controlled demo and code that will survive contact with real users, real attackers, and real regulators. They generate output. Evaluating that output, knowing what to keep, what to reject, and what quietly introduces a problem that surfaces six months later, requires judgment that cannot be automated away.

Vibe coding has a place. It is legitimate for exploration, for personal projects, for prototypes that will never touch production. What it is not is a consulting methodology. It is not a substitute for engineering discipline in high-stakes development. It is not something a client should be paying production rates for when what they are receiving is the output of a tool being operated without the expertise to validate what it built.

I have sat in the rooms where software survives and in the rooms where it fails. The difference is rarely the tools. It is the experience, discipline, and accountability of the people directing them. That was true before AI. It is more consequential now.

What This Means in Practice

None of this is an argument for slowing down. It is an argument for knowing what you are doing before you go fast. If you are leading an organization that is deploying AI coding tools, evaluating AI development platforms, or deciding how to govern AI-generated code across your team, here is the minimum operating standard:

Treat every line of AI-generated code as untrusted until reviewed. Not because the tools are bad, but because the tools are not accountable. You are. A mandatory human review layer for AI-assisted development is not optional overhead. It is the minimum condition for knowing what is actually in your codebase.
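
Mechanically, that review layer can be enforced rather than hoped for. The sketch below assumes a workflow convention, not a universal one: AI-assisted commits carry a Co-authored-by trailer naming the tool, and human sign-off is recorded with a Reviewed-by trailer. Adapt the markers to whatever your team actually records.

```python
"""Gate: refuse AI-assisted commits that lack a human review trailer.
The trailer conventions below are assumptions; adjust to your workflow."""
import subprocess

AI_MARKERS = ("Co-authored-by: Copilot", "Co-authored-by: Claude")  # assumed

def unreviewed_ai_commits(rev_range: str = "origin/main..HEAD") -> list[str]:
    """Return short SHAs of AI-marked commits with no Reviewed-by trailer."""
    log = subprocess.run(
        ["git", "log", "--format=%H%x00%B%x01", rev_range],
        capture_output=True, text=True, check=True,
    ).stdout
    offenders = []
    for entry in filter(None, log.split("\x01")):
        sha, _, body = entry.strip().partition("\x00")
        if any(m in body for m in AI_MARKERS) and "Reviewed-by:" not in body:
            offenders.append(sha[:12])
    return offenders

if __name__ == "__main__":
    if bad := unreviewed_ai_commits():
        raise SystemExit(f"AI-assisted commits lacking human review: {bad}")
```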

Write an Acceptable AI Usage Policy before your team installs anything. Define what AI coding tools are permitted, what data can be shared with large language models, who owns AI-generated output, and what review process governs AI contributions to your codebase. This protects your IP, your software, and your clients. If you do not have an AI governance policy in place, you are absorbing risk you have not priced.

Enforce security and compliance gates before any AI-generated code reaches production. AI-assisted code must pass the same security scanning, dependency auditing, and compliance review as human-written code, and given the documented vulnerability rates in AI-generated software, arguably more rigorous review. The AI regulatory frameworks covered in this article are not future considerations. Several are already in effect.
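
As a sketch of what such a gate can run, the script below chains two real, separately installable scanners, bandit for Python static analysis and pip-audit for known-vulnerable dependencies. The flags reflect their documented CLIs, but verify them against the versions you pin, and treat the check list as a floor, not a ceiling.

```python
"""Pre-merge security gate: fail the build if any scanner reports findings.
Assumes 'bandit' and 'pip-audit' are installed in the CI environment."""
import subprocess
import sys

CHECKS = [
    ["bandit", "-r", "src/", "-ll"],  # static analysis, medium severity and up
    ["pip-audit", "--strict"],        # dependency scan against known CVEs
]

def run_gate() -> int:
    """Run every check; count the ones that exit nonzero."""
    failures = 0
    for cmd in CHECKS:
        print(f"--> {' '.join(cmd)}")
        if subprocess.run(cmd).returncode != 0:
            failures += 1
    return failures

if __name__ == "__main__":
    sys.exit(1 if run_gate() else 0)
```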

Pair AI development velocity with experienced human oversight. This is non-negotiable in consulting and high-stakes software development. AI coding tools provide speed. The experienced practitioner provides direction, validation, and the judgment to know when an AI model is confidently wrong. One without the other is either too slow or too risky.

The competitive advantage in this market is not access to AI tools. Every developer and every firm has access to the same large language models and AI coding assistants. The advantage is knowing how to use them responsibly, at production quality, inside an AI development methodology that holds up when the momentum runs out.

All sources verified as of April 2026.

  1. Karpathy, A. (February 2, 2025). Original “vibe coding” post. X (formerly Twitter). https://x.com/karpathy/status/1886192184808149383
  2. Wikipedia. (April 2026). Vibe coding. https://en.wikipedia.org/wiki/Vibe_coding
  3. Willison, S. (March 19, 2025). Not all AI-assisted programming is vibe coding. https://simonwillison.net/2025/Mar/19/vibe-coding/
  4. The New Stack. (February 10, 2026). Vibe coding is passé. Karpathy has a new name for the future of software. https://thenewstack.io/vibe-coding-is-passe/
  5. Quartz. (November 26, 2025). AI vibe coding has gone wrong. Time for a vibe check [Karpathy Nanochat]. https://qz.com/ai-vibe-coding-software-development
  6. METR. (July 10, 2025). Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity. https://metr.org/blog/2025-07-10-early-2025-ai-experienced-os-dev-study/
  7. Becker, J. et al. (July 2025). arXiv:2507.09089. https://arxiv.org/abs/2507.09089
  8. METR. (February 24, 2026). We are Changing our Developer Productivity Experiment Design. https://metr.org/blog/2026-02-24-uplift-update/
  9. Anthropic. (2026). How AI Is Transforming Work at Anthropic. https://www.anthropic.com/research/how-ai-is-transforming-work-at-anthropic
  10. Anthropic. (February 5, 2026). Introducing Claude Opus 4.6. https://www.anthropic.com/news/claude-opus-4-6
  11. Dubach, P. (March 5, 2026). 93% of Developers Use AI Coding Tools. Productivity Hasn’t Moved. https://philippdubach.com/posts/93-of-developers-use-ai-coding-tools.-productivity-hasnt-moved./
  12. Escape.tech / Cloud Security Alliance. (March 2026). CSA Research Note: AI-Generated Code Security, Vibe Coding. https://labs.cloudsecurityalliance.org/research/csa-research-note-ai-generated-code-security-vibe-coding-202/
  13. Tenzai Security. (December 2025). 15-application study. Cited in CSA Research Note above.
  14. Veracode. (October 2025). GenAI Code Security Report. https://www.veracode.com/blog/ai-generated-code-security-risks/
  15. Apiiro. (September 4, 2025). 4x Velocity, 10x Vulnerabilities. https://apiiro.com/blog/4x-velocity-10x-vulnerabilities-ai-coding-assistants-are-shipping-more-risks/
  16. CodeRabbit / The Register. (December 17, 2025). AI-authored code needs more attention, contains worse bugs. https://www.theregister.com/2025/12/17/ai_code_bugs/
  17. Crackr.dev. (March 5, 2026). Vibe Coding Failures: Documented AI Code Incidents. https://crackr.dev/vibe-coding-failures
  18. Getautonoma. (March 2026). Vibe Coding Failures: 7 Real Apps That Broke in Production. https://www.getautonoma.com/blog/vibe-coding-failures
  19. MIT / Suprmind. (2026). AI Hallucination Statistics Research Report. https://suprmind.ai/hub/insights/ai-hallucination-statistics-research-report-2026/
  20. OpenAI. (September 2025). Why language models hallucinate. https://openai.com/index/why-language-models-hallucinate/
  21. Spracklen, J. et al. (USENIX Security 2025). We Have a Package for You! https://arxiv.org/html/2406.10279v3
  22. Aikido Security. (February 2026). Slopsquatting [Alibaba/huggingface-cli]. https://www.aikido.dev/blog/slopsquatting-ai-package-hallucination-attacks
  23. NIST. (February 18, 2026). Announcing the AI Agent Standards Initiative. https://www.nist.gov/news-events/news/2026/02/announcing-ai-agent-standards-initiative-interoperable-and-secure
  24. NIST. (December 17, 2025). Draft NIST Guidelines Rethink Cybersecurity for the AI Era. https://www.nist.gov/news-events/news/2025/12/draft-nist-guidelines-rethink-cybersecurity-ai-era
  25. EC Council. (March 2026). EU AI Act vs NIST AI RMF vs ISO/IEC 42001. https://www.eccouncil.org/cybersecurity-exchange/responsible-ai-governance/eu-ai-act-nist-ai-rmf-and-iso-iec-42001-a-plain-english-comparison/
  26. Nixon Peabody LLP. (February 9, 2026). Data Privacy, Cybersecurity, AI Developments Shaping 2026. https://www.nixonpeabody.com/insights/alerts/2026/02/09/data-privacy-cybersecurity-ai-developments-shaping-2026
  27. Nemko Digital. (March 2026). AI Agent Standards: Navigating New NIST Governance [14.4% stat]. https://digital.nemko.com/news/ai-agent-standards-navigating-new-nist-governance
  28. ninetwothree.co. (December 15, 2025). The Biggest AI Fails of 2025 [Air Canada; Morgan Stanley]. https://www.ninetwothree.co/blog/ai-fails
  29. Technology Magazine. (November 10, 2025). Vibe Coding: The Future of Code or Just a ‘Short-Term Con’? [Kok/Mendix]. https://technologymagazine.com/news/vibe-coding-the-future-of-code-or-just-a-short-term-con
  30. romenrg.com. (December 25, 2025). Vibe Coding vs Prompt Engineering. https://www.romenrg.com/blog/2025/12/25/vibe-coding-vs-prompt-engineering-ai-and-the-future-of-software-development/
  31. Fast Company. (September 2025). The “vibe coding hangover.” Cited in Wikipedia / Vibe coding article.

Don Schminkey is an Engagement Manager at EX Squared with over 15 years of experience spanning the entirety of the Software Development Lifecycle.
