Join your hosts, Anton Chuvakin and Timothy Peacock, as they talk with industry experts about some of the most interesting areas of cloud security. If you like having threat models questioned and a few bad puns, please tune in!
Why is the "Secure-by-Design" movement gaining so much momentum now, and is it a response to the failure of "bolted-on" security, or just a natural evolution of cloud maturity?
In a future Secure-by-Design world, is identity the only perimeter that actually matters anymore? Or is this a cliche?
As we move toward a world of autonomous agents, how does our approach to machine identity need to change? Are we just talking about more complex Service Accounts, or do we need a fundamental shift in how we authorize "intent"?
What is your advice to people who want to move fast and cannot wait for Secure-by-Design / Secure-by-Default AI to be settled by consensus or by an IETF, NIST, or OASIS committee?
We love the argument that modern AI agents are effectively repeating the mistakes of 1960s payphones - mixing the data plane and the control plane. What is your rebuttal? How do we build "Agentic Security" that doesn't fall for 60-year-old traps?
Customers are torn between their Zero Trust implementations and their AI adoption. Is Zero Trust now "legacy," or is it the prerequisite for everything we’re trying to do with AI agents?
Is there Zero Trust for AI? Is this a fake buzzword or technical reality?
Is Network Detection and Response (NDR) making a comeback after being somewhat sidelined by EDR? Is this for real?
What's the value proposition of NDR in 2026? Some people still don't understand it. How does NDR apply to a world of WFH, cloud/SaaS, pervasive encryption, high bandwidth, etc?
Is the value of NDR the same, or different, when it comes to public (or private) cloud?
How does NDR fill visibility gaps that identity and agent-based solutions cannot?
What does NDR offer that built-in cloud security tooling (as of right now) does not? Would you call NDR a key cloud security control?
“10X SOC” sounds great. But for an organization stuck in "SIEM 1.0" with poor data quality and manual workflows, is “AI-native MDR” a "leapfrog" opportunity or a recipe for disaster?
We’ve seen the rise of "Decoupled SIEM" and security data lakes. Does a "Modern SIEM" even need to exist if an MDR platform has an agentic layer doing the heavy lifting?
You’ve argued for AI-native over AI-bolted-on. For an end user, what are the tangible differences of using "AI inside a legacy SIEM" versus using an "AI-native separate product"?
What is the one task you thought AI would handle by now that still requires a senior human analyst to step in?
If a CISO is using an AI MDR, "Mean Time to Detect" (MTTD) starts to look like a vanity metric because the machine is instant. What is the new golden metric for an AI-powered SOC? Is it "Time to Context," "Reduction in Human Toil," or something else?
How do you help a skeptical SOC Manager—who has been burned by false positives for a decade—trust an autonomous agent to perform a "containment" action at 3:00 AM?
We just saw a security tool (Trivy) get used to pop an AI infrastructure tool (LiteLLM) to eventually pop end users. Have we reached the point where our security tooling is actually our largest unmanaged attack surface?
Why now? Software supply chain security had the perennial vibe of “not top concern” for most organizations, right?
TeamPCP pushed malicious code to existing GitHub tags. We’ve been screaming about pinning versions to SHAs for years, but clearly, nobody is listening. Is it time to admit that 'convenience' is the primary enemy of supply chain security?
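For listeners who haven't seen the tag-rewriting trick in action, here is a minimal local demo (a hypothetical throwaway repo, no network, not the actual TeamPCP attack) of why pinning to a tag is not pinning at all: a git tag is a mutable pointer, while a commit SHA is not.

```shell
# Throwaway repo: publish a "release" tag, then silently re-point it.
set -e
dir=$(mktemp -d)
git init -q "$dir/repo"
cd "$dir/repo"
git -c user.email=a@b -c user.name=a commit -q --allow-empty -m "legit v1.0.0"
pinned=$(git rev-parse HEAD)   # what a SHA-pinned consumer records
git tag v1.0.0                 # what a tag-pinned consumer trusts
git -c user.email=a@b -c user.name=a commit -q --allow-empty -m "malicious"
git tag -f v1.0.0              # attacker force-moves the existing tag
# Tag-pinned consumers now fetch the malicious commit; SHA-pinned ones don't.
[ "$(git rev-parse v1.0.0)" != "$pinned" ] && echo "tag silently moved"
```

Anyone consuming `v1.0.0` by name gets the new commit on their next fetch; anyone pinned to `$pinned` is unaffected, which is exactly the convenience-versus-safety trade-off in the question above.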
The Axios incident showed a victim compromised in under two minutes. In a world of auto-updating dependencies, is the concept of a human-in-the-loop for software updates officially dead, or do we need to look very hard at version pinning and such?
With the XZ Utils case, we saw a long-game social engineering attack. Beyond just 'watching npm closely,' what are the realistic architectural safeguards for an org that knows it can't audit every line of an update?
We’ve spent the last three years talking about SBOMs (Software Bill of Materials) like they were a magic pill for supply chain health. But if the scanner producing the SBOM is the one that's compromised, isn't the SBOM just a signed receipt for your own house being on fire?
What is the one practical thing teams can do to ensure their CI/CD isn't a credential-exfiltration-as-a-service platform?
You argue that declaring the existing SIEM obsolete is a "marketing slogan" rather than a true thesis. What is the real pain point and the actual gap in traditional SIEMs, as opposed to the more sensational claims?
You highlight that "correlation, state, timelines, and real-time detection require locality," making centralization a necessary trade-off. Can a truly federated or decoupled SIEM architecture achieve the same fidelity and real-time performance for complex, stateful detections as a centralized one?
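For listeners who want the "locality" point made concrete, here is a minimal sketch (illustrative only, not any vendor's detection engine) of a stateful "N failed logins in W seconds" rule. The per-user window is state that must live wherever that user's events arrive; shard the same user's events across two disconnected query engines and neither side can fire the rule correctly.

```python
from collections import defaultdict, deque

def make_detector(threshold=5, window=60):
    """Stateful brute-force rule: alert on `threshold` failures in `window` seconds."""
    state = defaultdict(deque)  # user -> timestamps of recent failed logins

    def ingest(user, ts):
        q = state[user]
        q.append(ts)
        # Drop events that fell out of the sliding window.
        while q and ts - q[0] > window:
            q.popleft()
        return len(q) >= threshold  # True => alert

    return ingest

detect = make_detector()
events = [("alice", t) for t in (0, 10, 20, 30, 40)]
alerts = [detect(user, ts) for user, ts in events]
# Only the fifth failure inside the 60-second window trips the rule.
```

The deque is the whole argument in miniature: the detection is only as good as the completeness of the event stream feeding that one piece of state.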
You call the rise of independent security data pipelines the "SIEM Trojan Horse." How quickly is this abstraction layer turning SIEM into a “swappable” component, and what should SIEM vendors have done differently years ago to prevent this market from existing?
This "AI SOC" thing, is this even real? Is AI in a SOC a better label? Do you think major SIEM vendors will own this very soon, like they did with UEBA and SOAR?
If volume-based pricing is flawed because it penalizes good security hygiene, what is a better SIEM pricing model that fairly addresses compute, enrichment, and retention costs without just shifting the volume cost to unpredictable query charges?
You question the idea that startups can find a better way to release detection rules than large vendors with significant content teams. What metrics should security leaders use to evaluate the quality of a vendor's detection engineering (DE) output beyond just coverage numbers? Can AI fix DE?
Your book focuses on the US, China, and Russia. When you were planning the book did you also want to cover players like Israel, Iran, and North Korea?
Most of our listeners are migrating to or operating heavily in the cloud. As nations refine their “digital battlefield” strategies, does the "shared responsibility model" actually hold up against a nation-state actor?
How does a company’s detection strategy need to change when the adversary isn't a teenager looking for a ransom, but a state-funded group whose goal might be long-term persistence or subtle data manipulation? How should people allocate their resources to defending against both of these threats?
How afraid are you of "bad guy with AI" scenarios? Mild anxiety or apocalyptic fears?
Do you see AI primarily helping "Tier 2" nations close the capability gap with the "Big Three," or does it just further cement the dominance of the nations that own the underlying compute and models?
You’ve spent a lot of time as an analyst looking at how enterprises buy and run security tech. For a CISO at (say) a mid-tier logistics company, should 'nation-state cyberattacks' even be on their threat model? Or is worrying about the spies just a form of security theater when they haven’t even solved basic credential theft yet?
Harmonic Security focuses on securing generative AI in use. Can you walk us through a real, anonymized example of a data leak caused by employee AI usage that your platform has identified?
AI governance gets thrown around a lot. What does this mean in the context of Shadow AI? How should organizations be thinking about governing AI in light of upcoming AI regulations in the US and in the EU?
If we generally agree that employees are using AI tools before they are sanctioned, how can organizations control this? Network, API, endpoint?
Many organizations struggle with the "ban vs. embrace" debate for generative AI. Based on your experience, what's a compelling argument for moving from a blanket ban to a managed, secure adoption model? Can you share a success story where this approach demonstrably reduced risk?
The term "shadow AI" is often used interchangeably with "shadow IT" (but for AI-powered applications) but you've highlighted that AI is a different beast. What is the single biggest distinction between managing the risk of unsanctioned AI tools versus unsanctioned IT applications?
Looking forward, where do you see the biggest risks in the evolution of shadow AI? For instance, will the next threat be from highly specialized AI agents trained on proprietary data, or from the rapid proliferation of new, unmonitored open-source models?
Given the speed of change in this space, what's one piece of advice you'd give to a CISO today who is just beginning to get a handle on their organization's shadow AI problem?