Vibe Coding + Cybersecurity: The Good, The Bad, and The Ugly
Stop treating AI-generated code like it's production-ready just because it runs
Pop Quiz!! Before we start parsing Vibe Coding’s cybersecurity implications, can you spot the fake package?
import requests
import fastxml
import numpy as np
import ujson
Answer: fastxml is a common AI hallucination. For real XML parsing, you'd reach for lxml or the standard library's xml.etree.ElementTree.
I'm not trying to be facetious, but if you looked at the code above and took it at face value, that's exactly why we need to be pragmatic and thorough when it comes to AI code generation.
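If you're tempted to trust an import list on sight, a quick sanity check is cheap. Here's a minimal sketch that asks PyPI's public JSON API whether a distribution actually exists (the names below are the ones from the quiz; note that an import name and a PyPI name don't always match, and a slopsquatted package will happily "exist," so treat this as a first filter, not a verdict):

import requests

def exists_on_pypi(name: str) -> bool:
    # PyPI's JSON API returns a 404 for distributions that were never published
    resp = requests.get(f"https://pypi.org/pypi/{name}/json", timeout=10)
    return resp.status_code == 200

for candidate in ["requests", "fastxml", "numpy", "ujson"]:
    print(candidate, "->", "found on PyPI" if exists_on_pypi(candidate) else "not on PyPI")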
Look, I get it. Vibe coding is awesome. There's something undeniably satisfying about throwing a problem at ChatGPT and watching it spit out a working Flask app in under 20 minutes. I've been there myself, copy-pasting my way to a proof of concept while my coffee's still hot, gleefully sending screenshots of front-end applications that look like they really work.
After a few months spinning up dozens of vibe coded apps, my verdict is that today vibe coding is way better for mockups than actual code. Like, significantly better. You want to show stakeholders what a feature might look like? Perfect. Do you need to validate an API structure before committing to it? Great use case.
I actually ditched my Balsamiq subscription because AI got so good at generating quick diagrams and wireframes. That said, I admit I still keep my Figma subscription. I'm not totally sold on AI for anything that needs real polish or will actually face customers.
And honestly, I think a lot of us have completely overrotated here. Everyone's so excited about the speed of development with AI that they've forgotten basic CI/CD hygiene. Teams can generate a microservice in 10 minutes, but can they tell you when they last updated their lockfiles?
The appeal is obvious. You fire a prompt at GitHub Copilot, get a function that compiles, tests pass, boom, you're done. But when you're trusting unverified code in environments that actually matter? That's when vibe coding stops being fun and starts being dangerous.
You know I love a good “Good, Bad and Ugly” analysis. While we’ve done a few of these, I’m particularly excited to dig into Vibe Coding this week. The truth is, vibe coding has its place, but knowing where that place is might save you from becoming the next security horror story.
The Good: When Vibe Coding Actually Rules
Prototyping Paradise
If you're still treating AI-assisted coding like some experimental side project, you're living in 2023. Recent data shows about half of developers have tried tools like Copilot in the past year. And honestly? Good for them.
Because when you're in prototype mode, and I mean actual prototyping, not "let's call this production code a prototype," vibe coding is genuinely transformative. Need to test an API integration? Want to mock up a quick UI? AI can knock out the boilerplate in minutes instead of hours.
In my opinion, AI shines at creating convincing mockups, not production-quality code. It's fantastic at showing you what something could look like, terrible at handling edge cases, error states, and the gnarly real-world stuff that makes software actually work.
Truth be told, in the prototyping phase, AI-generated code rocks. If the AI hallucinates a fake package or gives you wonky code, so what? You're exploring ideas, not shipping to customers. The occasional bad suggestion is just friction, not catastrophe. Teams report better sprint velocity, faster feedback loops, more time for the interesting problems. In competitive markets where time-to-demo matters, those efficiency gains can literally make or break your momentum.
TL;DR: Vibe coding excels when you're exploring, not when you're shipping.
The Bad: When "Move Fast" Meets Production
AI's Confidence Problem
Here's where things get uncomfortable. AI assistants are incredible at sounding confident about things that are completely wrong. Current research shows leading models hallucinate fake package names about 3-5% of the time. Open source models are closer to 20-30%.
So back to our pop quiz: you ask for an XML parser, the AI hands back import fastxml, and it looks totally legitimate. Except it doesn't exist. Yet.
Enter the Slopsquatters
This is where attackers get clever. They've figured out that if ChatGPT keeps suggesting the same fake package name, they can just... register it. With malware inside. That’s called slopsquatting. We discussed this a few weeks ago on this Substack, and what was once a proof of concept is becoming very real.
There's a documented case where malicious actors noticed ChatGPT suggesting @async-mutex/mutex for JavaScript concurrency. Within hours, they'd registered that exact name on npm with a trojanized version. Developers copied the snippet, installed the package, and boom. CI pipelines compromised, backdoors planted.
These "future-told package" attacks, where the AI basically tells attackers which fake packages to create. I wrote about this phenomenon in detail when I first started tracking slopsquatting attacks, and it's only gotten worse as more teams adopt AI coding tools.
It's not typosquatting anymore, where attackers hope you'll mistype requests as requets. Now they're proactively registering packages that AIs are likely to hallucinate. It's like they have a crystal ball, except the crystal ball is just monitoring what ChatGPT suggests most often.
And this isn't rare. Over 90% of organizations dealt with some kind of supply chain incident last year. About two-thirds had to clean up an unexpected open source vulnerability.
The CI/CD Amnesia Problem
And here's what's really grinding my gears: the same engineers who can prompt their way to a Flask app in 20 minutes are the ones who haven't touched their dependency management in months. We've gotten so good at generating code that we've forgotten how to secure it.
I'm seeing teams with blazing-fast development cycles and absolutely prehistoric CI/CD practices. They can spin up microservices faster than ever but couldn't tell you the last time they audited their package.json. It's like buying a Ferrari and forgetting to change the oil.
When you're prototyping, you can afford to trust first, verify later. Small blast radius, easy to roll back, no customers affected. But in production? You need to flip that script. And too many teams haven't made that mental shift.
I'm seeing developers copy AI snippets verbatim into production branches. No package verification, no security review, just "it compiles, ship it." And look, maybe most of the time this works out fine. But given what we know about slopsquatting attacks and how attackers are already exploiting AI hallucinations, this feels like we're setting ourselves up for the next wave of supply chain breaches.
The problem isn't the AI. It's applying prototype-level trust to production-level code.
The Ugly: When It Goes Wrong
Supply chain breaches average about $5 million and 13 days of downtime. And those aren't just numbers on a spreadsheet. Each hour means missed SLAs, angry customers, and trust that takes years to rebuild. When developers blindly install packages suggested by AI without verification, they create an attack vector that scales instantly across entire orgs.
Think about it. You wake up to security alerts, realize some AI-generated code introduced a rogue package three weeks ago, and by now? Attackers have had time to exfiltrate data, plant cryptominers, and move laterally through your network.
Supply chain compromises don't stay contained. One bad package propagates through microservices, container images, infrastructure code. Everything downstream gets infected.
When these attacks succeed, the cleanup is brutal. Security teams spend weeks digging through logs, finding secondary exploits hidden in container clusters. Meanwhile, your business is hemorrhaging trust, deals get paused, and you're explaining to investors why your "AI-accelerated development" became an "AI-accelerated breach."
And here's the real kicker for security leaders: you're caught between wanting your teams to use the best tools available and keeping the environment secure. Your developers are shipping features faster than ever with AI assistance, productivity is up, stakeholders are happy. But you're also watching them install dependencies you've never heard of, generated by a system that confidently suggests packages that don't exist.
Do you lock it down and slow everyone back to manual coding? Risk looking like the "Department of No" while competitors race ahead with AI-powered development? Or do you let the velocity continue and hope your detection tools catch the next slopsquatted package before it hits production?
It's an impossible position. The tools that make your team most productive also create the biggest blind spots in your security posture.
Traditional security tools struggle here because hallucinated packages don't have vulnerability histories. They're brand new, so signature-based (and next-gen) detection fails. You can't scan for threats that don't exist in any database yet.
Silver Lining? What You Can Actually Do
The purpose of this blog is not to disparage vibe coding (I vibe coded earlier today). But we need to stop pretending that development speed and security hygiene are mutually exclusive. They're not.
The solution isn't going back to writing everything by hand. It's remembering that good engineering practices don't disappear just because an AI wrote the first draft.
Start with lockfile validation. Make your CI pipeline reject any branch trying to install packages not declared in package-lock.json or requirements.txt. If someone wants a new dependency, the build fails until they manually verify it exists and is safe. This catches hallucinated packages before they ever reach production.
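Here's a minimal sketch of what that gate could look like for a Python project. It assumes a pip-tools-style layout where requirements.txt holds your direct dependencies and requirements.lock is the fully pinned output; the filenames and exit behavior are placeholders to adapt to your own pipeline:

import re
import sys
from pathlib import Path

def package_names(path: str) -> set[str]:
    # Pull bare package names out of a requirements-style file
    names = set()
    for line in Path(path).read_text().splitlines():
        line = line.split("#", 1)[0].strip()        # drop comments and whitespace
        if not line or line.startswith("-"):        # skip flags like -r, -e, --hash
            continue
        match = re.match(r"[A-Za-z0-9._-]+", line)  # name before ==, >=, extras, etc.
        if match:
            names.add(match.group(0).lower().replace("_", "-"))
    return names

declared = package_names("requirements.txt")
locked = package_names("requirements.lock")
unlocked = declared - locked

if unlocked:
    print(f"ERROR: dependencies missing from lockfile: {sorted(unlocked)}")
    print("Verify each one exists and is safe, then regenerate the lockfile.")
    sys.exit(1)
print("All declared dependencies are pinned in the lockfile.")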
Add basic package age checks. Any package published in the last 24 hours should trigger manual review. Attackers typically publish fake packages hours before expecting developers to install them. Check for GitHub repos, commit activity, reasonable download counts. Simple heuristics catch most slopsquatting attempts.
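A sketch of that check, again leaning on PyPI's JSON API (each file in the latest release carries an upload_time_iso_8601 timestamp). The 24-hour threshold and the package names are just placeholders; tune them to your own risk appetite:

from datetime import datetime, timedelta, timezone
import requests

MAX_AGE = timedelta(hours=24)   # placeholder threshold for "too new"

def latest_release_age(name: str):
    # "urls" in PyPI's JSON response lists the files of the latest release
    resp = requests.get(f"https://pypi.org/pypi/{name}/json", timeout=10)
    if resp.status_code != 200:
        return None                                  # not on PyPI at all
    files = resp.json().get("urls", [])
    if not files:
        return None                                  # no usable release artifacts
    uploaded = min(
        datetime.fromisoformat(f["upload_time_iso_8601"].replace("Z", "+00:00"))
        for f in files
    )
    return datetime.now(timezone.utc) - uploaded

for pkg in ["requests", "some-brand-new-package"]:   # hypothetical names for illustration
    age = latest_release_age(pkg)
    if age is None:
        print(f"{pkg}: nothing usable on PyPI, block the install")
    elif age < MAX_AGE:
        print(f"{pkg}: published {age} ago, flag for manual review")
    else:
        print(f"{pkg}: latest release is {age.days} days old")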
For teams ready to invest more, run an internal package mirror. Route all installs through your vetted whitelist. Integrate supply chain scanners like Snyk or Socket into your pipeline. They'll catch suspicious patterns even when signatures fail.
And don't skip the human element. Run a "Slopsquatting 101" session showing real examples of fake packages. Once people see how easy it is to be fooled, they develop better instincts.
What Vibe Coding’s Good At
Embrace vibe coding for what it's actually good at. Mockups, prototypes, exploring ideas quickly. But stop treating AI-generated code like it's production-ready just because it runs.
The difference between smart AI adoption and security disaster isn't the tool you're using. It's whether you've maintained basic engineering discipline while using it.
Because the fastest way to ship code isn't worth much if you're shipping vulnerabilities along with features.
Keep using AI to explore ideas and validate concepts. But before any AI-generated code touches production, apply the same rigor you'd use for any external code. Because that's what it is, external code you didn't write and don't fully understand.
Here's your homework (if you wish): audit one AI-generated component in your current codebase. Not just "does it work," but trace its dependencies, understand its permissions, check when packages were published. You might find some surprises.
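If you want a concrete starting point for that audit, Python's standard library will tell you what an installed package declares about itself. A tiny sketch, with requests standing in for whichever AI-suggested dependency you're actually tracing:

from importlib.metadata import distribution

dist = distribution("requests")            # stand-in: use the package you're auditing
print(dist.metadata["Name"], dist.version)
for req in dist.requires or []:            # the dependencies it quietly pulls in
    print("  depends on:", req)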
It’s a brave, exciting new world. Let's not forget the lessons of the past, and let's make sure our software supply chains and best practices remain resilient.
Stay curious, stay secure my friends.
Damien