Two Dudes and a GPU
Also known as: How AI-dependent operators have changed our threat model
I had a conversation recently with someone who has been at the bleeding edge of infosec since the Cult of the Dead Cow’s early-ish days. We talked about what happens to attribution when everyone can automate tailored access operations: when the threat isn’t just Fancy Bear, but two dudes and a GPU with similar offensive capability. How valuable is attribution as an exercise when the gap between state-sponsored and self-funded starts to collapse? I have been chewing on it since, and I think the answer is more uncomfortable than most people want to admit.
Once attackers get meaningful agency from machines, the economics of intrusion change, and a lot of the assumptions network defenders rely on start to wobble, especially the quiet ones we rarely bother to examine. That concern is one of the reasons I started Nebulock. I did not think the future of defense could continue to revolve around waiting for alerts, triaging noise, and hoping a human being eventually pieced together the story in time. That model was already straining. Now it looks positively dated.
Anthropic’s reporting from late 2025 is useful not because it introduces a brand-new idea, but because it turns a debated implication into a documented one. Anthropic described what it called the first reported AI-orchestrated cyber espionage campaign: an actor using AI to support vulnerability discovery, exploit development, malware modification, credential harvesting, network scanning, data analysis, and exfiltration against about 30 targets, where the system handled the majority of tactical operations with significantly less human involvement than earlier misuse cases. In a separate case, Anthropic described a criminal operation using Claude Code to automate reconnaissance, penetration activity, victim profiling, stolen-data analysis, and extortion messaging across at least 17 organizations, with ransom demands exceeding $500,000 in some cases. The model was not sitting on the sidelines helping somebody draft an email. It was participating in the operating loop.
Google’s Threat Intelligence Group has been tracing the same arc. In its reporting on threat actor use of AI tools, GTIG said adversaries were moving beyond generic productivity gains into operational use cases across the attack lifecycle, including AI-enabled malware behavior, code generation, reconnaissance, and social engineering, while a criminal market for AI-enabled tooling continued to mature. This is not a one-off story about a few unusually motivated actors. It is a story about a widening capability surface.
Two dudes and a GPU
For a long time, one of the rough heuristics in security was that sophisticated outcomes usually implied sophisticated operators. If an intrusion was disciplined, tailored, well sequenced, and technically clean, that told you something. It suggested resources, time, and a level of talent or sponsorship that narrowed the field of plausible actors. That heuristic no longer holds.
If a model can help write tooling, adapt exploits, localize lures, reason through roadblocks, sort stolen data, and suggest the next move, then the operator behind the keyboard does not need to possess the same depth of mastery those outcomes once implied. The capability still exists, but it is no longer entirely resident in the operator. Some of it is rented. Some of it is available on demand to whoever is persistent enough to keep the loop moving. That changes more than tactics. It changes interpretation.
TTPs give us a shared language. They help us cluster activity, compare cases, and build intelligence over time. None of that stops being valuable. But when the means of producing clean tradecraft become widely accessible, the techniques themselves start telling you less about who you are dealing with and more about what is cheaply available. Behavior and business context matter more. That is where the real signal lives.
If an attacker resets MFA, pivots into SaaS, enumerates high-value identities, stages unusual data pulls, and moves directly against the systems that matter to your revenue or crown-jewel data, that tells you something more durable than the mere fact that they used a polished script or a convincing lure. It tells you how they think, what they value, and where the business is soft in ways an ATT&CK label alone cannot capture. Sophistication stops being a reliable marker of who built the capability and becomes a marker of who had access to it.
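To make that concrete, here is a minimal sketch of what sequence-first detection can look like. This is illustrative, not product code: the Event shape, the action names, and the four-hour window are all assumptions standing in for whatever your identity provider and SaaS audit logs actually emit.

```python
from dataclasses import dataclass

# Hypothetical normalized event shape. Real telemetry (IdP logs, SaaS
# audit logs, DLP events) would need to be mapped into something like
# this first.
@dataclass
class Event:
    identity: str
    action: str       # e.g. "mfa_reset", "saas_login", "enum_admins", "bulk_read"
    timestamp: float  # epoch seconds

# The sequence from the paragraph above. Each later stage only counts
# if the earlier stages already happened for the same identity.
KILL_CHAIN = ["mfa_reset", "saas_login", "enum_admins", "bulk_read"]
WINDOW_SECONDS = 4 * 3600  # assumption: anything this compressed is worth a look

def compressed_sequences(events: list[Event]) -> list[str]:
    """Return identities that walked the full chain inside the window."""
    by_identity: dict[str, list[Event]] = {}
    for ev in sorted(events, key=lambda e: e.timestamp):
        by_identity.setdefault(ev.identity, []).append(ev)

    flagged = []
    for identity, evs in by_identity.items():
        stage, started = 0, None
        for ev in evs:
            if ev.action == KILL_CHAIN[stage]:
                started = ev.timestamp if started is None else started
                stage += 1
                if stage == len(KILL_CHAIN):
                    if ev.timestamp - started <= WINDOW_SECONDS:
                        flagged.append(identity)
                    break
    return flagged
```

No single action in that chain is alarming on its own; a help desk resets MFA all day. The order and the compression are the alert.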
Breaches used to take months. Now they take tokens.
When the cost of generating code, iterating malware, testing lures, translating content, prioritizing stolen information, and scripting the next action collapses, the distance between intent and impact collapses with it. The breach does not need to be assembled by a large, patient team the way it once did. It can be driven forward by one operator with enough persistence to keep steering the machine. Anthropic’s two public cases, taken together, point directly at that future.
The Bloomberg reporting on the Mexico incidents tells the same story. Bloomberg reported in February 2026 that a hacker used Anthropic’s Claude, and to a lesser extent ChatGPT, in attacks affecting Mexican government entities and sensitive public-sector data. One operator. Commercial models. Government-level impact.
We need to confront something uncomfortable. The barrier to launching sophisticated attacks is dropping faster than the barrier to detecting them. Attackers get initiative at machine speed. They do not need to stop because they are tired. They do not get bored halfway through reconnaissance. They do not need to manually rewrite every lure, every script, every variation of an exploit path. Meanwhile, many network defenders still must live inside workflows built for a slower world: alerts, queues, handoffs, triage, escalation, and after-the-fact reconstruction. That mismatch is dangerous.
The interpretive gap
The biggest risk is not simply that attackers become more capable. The bigger risk is that we will continue to interpret attacks through an outdated model of attacker effort. If you still assume that polished activity implies a polished operator, you will misread the field. If you still assume that serious attacks require serious time, you will be late to the story. If you still assume your alert stack will reconstruct intent before damage lands, you are betting on workflows that were already fraying before agentic offense started maturing.
This is why I keep returning to behavior and business context. They are harder to commoditize than TTP aesthetics. An attacker can borrow cleaner code, better language, more convincing malware, faster iteration. What they cannot avoid is behaving in the environment in pursuit of something that matters. They have to authenticate, enumerate, pivot, collect, and exfiltrate. They have to touch the business somewhere. And when they do, the question is not which technique they used, but what business reality they were exploiting and how fast they were compressing the kill chain. That is the signal I trust.
Think about what it actually looks like. A service account that has never touched your data warehouse starts running queries against customer PII tables at 2 AM (ok, this one’s a bit obvious). An identity that just completed a password reset begins enumerating admin group memberships across three SaaS platforms in sequence. A developer’s cloud credentials, freshly rotated, start provisioning infrastructure in a region your organization has never operated in. Most of this won’t trigger detection rules or fire as high-severity alerts. All of it tells you something about intent.
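Here is a minimal sketch of the check all three examples share, assuming you keep a per-principal access baseline and a map of business-critical asset classes. Every name below, the baselines, CRITICAL_ASSETS, the assess signature, is a hypothetical stand-in for whatever your own telemetry and asset inventory actually provide.

```python
# Per-principal baseline: (asset_class, scope) pairs observed during a
# learning period. In practice this lives in your detection pipeline,
# not in a dict; these entries are made up for illustration.
CRITICAL_ASSETS = {"pii_tables", "admin_groups", "prod_infra"}

baselines: dict[str, set[tuple[str, str]]] = {
    "svc-reporting": {("billing_tables", "us-east-1")},
    "dev-alice": {("prod_infra", "us-east-1"), ("ci_runners", "us-east-1")},
}

def assess(principal: str, asset_class: str, scope: str) -> str | None:
    """Flag first-ever touches of assets that matter to the business."""
    seen = baselines.get(principal, set())
    if (asset_class, scope) in seen:
        return None  # within baseline, nothing to say
    if asset_class in CRITICAL_ASSETS:
        return f"{principal}: first-ever touch of {asset_class} in {scope}"
    if all(s != scope for _, s in seen):
        return f"{principal}: activity in never-before-seen scope {scope}"
    return None

# The warehouse query and the never-used region both surface here
# (time-of-day is left out; a real version would layer it in):
print(assess("svc-reporting", "pii_tables", "us-east-1"))
print(assess("dev-alice", "prod_infra", "eu-central-2"))
```

The design point is that none of this keys on a technique. It keys on novelty relative to the principal’s own history, weighted by what the business actually cares about.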
I believe that network defenders still have the better terrain, but only if we are willing to use it. We have telemetry and institutional context. We know which identities, applications, data stores, and operational dependencies matter most. Attackers may get help with execution, but they still have to leave traces of intent in the environment.
One of cyber’s lazier assumptions is decaying right in front of us: that visible sophistication maps neatly to operator sophistication. Once you accept that, detection becomes less about waiting for obvious signals and more about understanding sequence and business consequence. The question stops being “how advanced does this look?” and becomes “what is actually happening here, how fast is it moving, and what part of the business is in play?”
The threat model changed, and the interpretive model needs to change with it.
Stay curious and stay secure, my friends,
Damien


