“Signal to noise is a usefulness ratio.”
Signal to Noise: An Interview with Natasha Eastman, Head of Threat Intelligence at CoreWeave
In this edition of Signal to Noise, I sat down with Natasha Eastman from CoreWeave. Natasha has one of those backgrounds that makes the usual security career advice feel too small. She has also held roles in government where operationalizing intelligence is the job, not collecting more information for its own sake.
Our conversation was operator to operator. We covered what signal to noise actually means when you are running real operations, why so much “intelligence” becomes wasted effort, how she thinks about operationalizing cyber threat intelligence, and what she believes teams must do when vulnerability exposure is real and time is not on your side.
Cave Dolphins and Nonlinear Careers
We started with an icebreaker that turned into a family legend. Natasha’s animal is a bat, but her husband once guessed dolphin, so now she is a “cave dolphin.” It is a perfect tell. She is serious about the work, but not precious about herself.
On the career side, she made a point I wish more people heard earlier. Do not copy someone else’s path. If anyone tells you your career must be a straight line, ignore them. Hers certainly was not.
Natasha began in counterterrorism work about twenty years ago, then moved into consulting. In her mid-twenties she took a complete left turn and joined a wine importer in California, helping run operations at a twenty-person company. Then she pivoted back to security with a more direct focus on cyber, though IT and cyber had been present in the background across everything she had done.
On the personal side, she is a New Yorker by origin, now in Northern Virginia with her husband and a daughter approaching two. For fun, she dives deep wrecks and maintains her own rebreather.
Signal to Noise as an Operator Problem
When I asked Natasha how she defines signal to noise, she did not default to the usual “false positives” conversation. Her definition is more basic and more useful.
Signal to noise is a usefulness ratio, and in Natasha’s view, the problem shows up in two places.
First, in intelligence products themselves. If you are producing intelligence for other people to act on, how much of what you deliver is actually usable by the consumer?
Second, in operations. Even if something is important, if you cannot action it, it is not useful. It becomes noise.
That second point is where her definition becomes unusually actionable, because it turns signal to noise into an operational design problem, not just a tooling complaint.
The Operational Priority Matrix, and the Real Reasons Things Don’t Get Done
Natasha described building an operational priority matrix at CISA, rooted in a familiar concept. Many teams use some version of an XY axis that maps risk against impacted systems. That part is straightforward.
What takes longer, and matters more, is understanding operationalization. Why can’t we action this? Can we change that?
She described a set of blockers that will sound painfully familiar to anyone who has tried to convert intelligence into operations.
Resource constraints. If it is truly high priority, pull resources from lower priority work, or force a leadership tradeoff.
Leadership clarity. If something matters but the team cannot take it on, make the choice explicit. Ask which other high priority operation should be removed.
Technical feasibility. Do we have the visibility? Do our hunting platforms allow us to search the indicators we are being asked to search? If we do not have the telemetry, searching is wasted man-hours.
Responsibility clarity. Who is on first? Are we actually the team that should be fielding this, or are we spinning wheels to look busy?
Signal is work that lands. Noise is work that keeps you busy without changing the outcome.
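To make the matrix concrete, here is a minimal sketch in Python. The scoring (risk times impacted systems) follows the XY-axis idea Natasha describes, and the blocker checks mirror her questions about telemetry and ownership; all field names, scales, and thresholds are my own illustrative assumptions, not her actual implementation.

```python
# Hypothetical sketch of an operational priority matrix: score each item by
# risk and impacted systems, then drop anything that fails an actionability
# check. All field names and scales are illustrative assumptions.
from dataclasses import dataclass
from typing import Optional

@dataclass
class IntelItem:
    name: str
    risk: int              # 1 (low) .. 5 (critical)
    impacted_systems: int  # 1 (few) .. 5 (widespread)
    have_telemetry: bool   # can we actually search for this?
    owner: Optional[str]   # which team fields it ("Who is on first?")

def prioritize(items):
    """Return actionable items sorted by priority, plus reasons for the rest."""
    actionable, blocked = [], []
    for item in items:
        if not item.have_telemetry:
            blocked.append(f"{item.name}: no telemetry, searching wastes hours")
        elif item.owner is None:
            blocked.append(f"{item.name}: no owning team, escalate to leadership")
        else:
            actionable.append(item)
    actionable.sort(key=lambda i: i.risk * i.impacted_systems, reverse=True)
    return actionable, blocked
```

The point of the sketch is the shape of the decision, not the numbers: items that cannot be actioned never reach the queue, and each one is removed with an explicit reason a leader can act on.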
How Operators Apply This in Their Own Environment
When I asked how an operator should apply these principles, Natasha went straight to the two prerequisites most teams skip.
First, role clarity. Early stage organizations often treat the SOC as responsible for everything. As the org grows, responsibilities fragment, sometimes without anyone noticing. She used a baseball analogy, then pushed it further. Define what each position actually does. Use a RACI-style chart, whiteboarding, tabletop exercises, anything that forces clarity.
Second, telemetry truth. Know what you collect. Know where the gaps are. Know what indicators you can operationalize.
If your threat intelligence says “here are network indicators” but you have no network telemetry, do not spend time searching for it. You will not find it. If you have gaps in one place, ask whether partial coverage exists somewhere else. The goal is not perfect visibility. The goal is not wasting human time on impossible hunts.
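That "telemetry truth" check can be expressed as a tiny pre-hunt filter. A sketch, assuming a hypothetical telemetry map and indicator types of my own invention:

```python
# A minimal sketch of "telemetry truth": before hunting, split indicators into
# those you can actually search and those you have no visibility for. The
# telemetry map and indicator types are illustrative assumptions.
TELEMETRY = {
    "endpoint": {"file_hash", "process_name"},
    "dns_logs": {"domain"},
    # note: no network sensor in this map, so raw "ip" indicators
    # are unsearchable in this environment
}

def huntable(indicators):
    """Split (type, value) indicator pairs into searchable vs. skipped."""
    searchable_types = set().union(*TELEMETRY.values())
    out = {"search": [], "skip": []}
    for itype, value in indicators:
        bucket = "search" if itype in searchable_types else "skip"
        out[bucket].append(value)
    return out
```

Everything in the "skip" bucket is a prompt to ask whether partial coverage exists elsewhere, not a reason to pretend the hunt happened.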
She also called out a painful version of this in incident response work. If an IR team knows the fire is in environment A, but the customer restricts access to environment B, everyone loses. Money is burned, teams spin their wheels, and nothing useful happens.
Threat Intelligence Routing, and Why a TIP Exists
Natasha then gave a concrete example from CoreWeave that ties the entire signal to noise idea together.
When building their program, one of the foundational decisions was to centralize threat intelligence in a TIP, a threat intelligence platform. Her reasoning was simple. A SIEM is not designed to adequately control and manage threat intelligence data feeds. That does not mean the SIEM is bad. It means managing intelligence feeds is not what it is for.
A TIP lets you manage intelligence properly. You can assign confidence to indicators and sources. You can decide what to send downstream, and for what purpose.
She described two different kinds of feeds that should not be confused.
A high confidence feed that goes to the incident response team or SOC. This must be clean, because the last thing you want is to send them a bunch of garbage and create “boy who cried wolf” syndrome. And we need to think through what the detection means to understand if it’s truly something the SOC needs to action or if it’s just noise.
A different feed that supports threat intelligence analysis. This feed can generate lower-confidence hits that need further investigation, but it should not be ignored.
Also essential to this process is how the team derives insights for leadership. This is where questions like “who is looking at us?” live. There might be a hit on a high confidence indicator in a firewall log, but the inbound traffic was blocked. That is valuable information for threat intelligence, but not something that should trigger an incident response motion.
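The routing logic behind those two feeds can be sketched in a few lines. This is a hedged illustration of the idea, not CoreWeave's TIP configuration; the threshold and field names are assumptions.

```python
# Hypothetical sketch of TIP-style routing: high-confidence, unblocked hits
# go to the SOC/IR feed; blocked or lower-confidence hits stay in the intel
# feed for further investigation. Threshold and names are illustrative.
def route_hit(confidence, blocked, threshold=0.8):
    """Decide which downstream feed a matched indicator belongs in."""
    if blocked:
        # Valuable "who is looking at us?" signal, but not an IR motion.
        return "intel_feed"
    if confidence >= threshold:
        # The SOC feed must stay clean to avoid "boy who cried wolf" syndrome.
        return "soc_feed"
    # Needs analyst investigation before it is worth anyone's pager.
    return "intel_feed"
```

The design choice worth noticing is that "blocked" outranks confidence: a perfect indicator match on traffic the firewall already stopped is intelligence, not an incident.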
The throughline was consistent. Who is on first? What is the purpose of the data? What does “action” mean for this audience?
Intelligence Is Bigger Than Cyber
Speaking with an intelligence pro, I couldn’t help but ask: “What is one intelligence source people overlook?”
Natasha’s perspective is that diversity of sources matters. Curated vendors cover what they cover. Broad OSINT can be noisy but necessary. Specialized vendors collect in domains others do not, like the dark web.
Then she widened the frame. Intelligence is not just cyber.
At CoreWeave, intelligence includes cyber, physical, geopolitical, and insider risk. Her team is multi-domain. When she looks at open source information, she is not only watching for the next vulnerability or proof of concept. She is also tracking threats against executives, supplier risk in countries approaching instability, and signals relevant to export controls.
The point is not that every team needs all of these domains overnight, but that many of the risks leaders care about are multi-domain by nature.
Prioritizing Time, Talent, and Treasure in the Age of AI
When we shifted to resourcing and prioritization, Natasha’s answer was pragmatic.
We live in the age of automation and AI. Some things can be done extremely well at scale, but only if you engineer systems to keep pace. At the same time, you still need humans who can think, produce quality analysis, and maintain the integrity of the output.
Her model is a balance.
Build infrastructure and engineered systems to scale.
Maintain analytic capacity and quality, with humans in the loop.
Hire people with skills you do not have, because intelligence is a team sport.
Not everything can be automated, and not everything should be done manually.
If You Do One Thing Today
When I asked Natasha what one thing a practitioner must do today, she went straight to threat hunting, but not in the way most vendors pitch it.
She described her prior role at CISA as chief of operations for threat hunting, where threat intelligence, incident response, and detection engineering worked in close concert, alongside vulnerability management. She has seen the impact that comes when those activities are connected.
Her core message was a myth she wants to kill. The idea that you have a vulnerability, you patch it, and all is well, is not true.
If you were vulnerable for any period of time, especially if the system was internet facing, patching and moving on is not sufficient. There is action required in concert with patching, including threat hunting and forensics. You also have to understand tradeoffs, including how patching can impact forensic capability, and plan accordingly.
Patching is not closure. Closure comes from validating whether exposure turned into compromise, and acting accordingly.
Vendor Trust, in One Word: Transparency
We ended with a question vendors should take personally, but rarely do. If a vendor wants Natasha to trust them, what should they do?
Be as truthful as possible.
Smart operators know when they are being sold a story. She also acknowledged the reality vendors face when things go wrong: lawyers, requirements, angry customers, and the fact that it is genuinely hard.
But the principle is simple. Transparency enables operators to take action in their environment. When operators have questions and cannot see what risk they are facing, distrust grows. When vendors reduce uncertainty, trust builds.
Finding Signal
My conversation with Natasha reinforced something that gets lost in a market obsessed with more data.
Signal is not volume; signal is usefulness.
It is intelligence that can be operationalized. It is work routed to the right owner, backed by the right telemetry, and designed for the right outcome. It is the discipline to stop doing what cannot be actioned, and the courage to make tradeoffs explicit.
And it is the recognition that closure does not come from patching alone. Closure comes from validating whether exposure turned into compromise, and acting accordingly.
Stay secure, and stay curious, my friends.
Damien