
Where AI Actually Delivers in Cybersecurity (And Where It Doesn’t)
The cybersecurity industry has a hype problem. Vendors promise AI will revolutionize everything from threat detection to compliance, but security leaders are left wondering what’s real and what’s smoke and mirrors.
In a recent episode of the Down the Security Rabbit Hole podcast, Bytewhisper Security CEO John Dickson joined a panel of security experts to cut through the noise. Their conclusion? AI’s biggest wins won’t come from flashy features—they’ll come from solving the tedious problems nobody wants to talk about.
The AI Hype Cycle Isn’t New
Dickson has watched the AI hype cycle before. “My 2019 RSA session was how to vet vendor claims on AI,” he noted, “which at the time was really machine learning. It’s only gotten exponentially worse.”
The challenge for security leaders isn’t whether AI has potential—it’s figuring out which claims are legitimate. Most AI capabilities will arrive through product enhancements from vendors, not internal development. That means interpreting marketing claims remains one of the hardest skills in security procurement today.
Which AI Cybersecurity Use Cases Actually Work?
The panel identified several areas where AI delivers genuine value right now:
- Threat detection and SOC operations: Taking a tier-one analyst and “making them a super SOC analyst,” as Dickson put it
- Vulnerability scanning: Faster identification of security gaps across environments
- False positive reduction: Improving signal-to-noise in static analysis results
- Log analysis: Surfacing meaningful patterns from massive data volumes
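The signal-to-noise theme behind the last two bullets is easy to illustrate. The sketch below is a deliberately crude, non-AI stand-in for what ML-driven log platforms do at scale: it flags log "templates" that appear rarely relative to total volume, using only the Python standard library. The first-token template extraction is a hypothetical simplification for illustration, not how production tooling works.

```python
from collections import Counter

def rare_events(log_lines, threshold=0.01):
    """Flag log templates whose frequency falls below `threshold`.

    Rarity is a crude proxy for the anomaly-surfacing that an
    ML-driven log platform performs; a "template" here is just the
    first token of each line (a toy simplification).
    """
    templates = [line.split()[0] for line in log_lines if line.strip()]
    counts = Counter(templates)
    total = len(templates)
    return [t for t, c in counts.items() if c / total < threshold]

# Example: 500 routine auth lines drown out one kernel fault.
logs = ["sshd: accepted password"] * 500 + ["kernel: oops detected"]
print(rare_events(logs))  # the rare kernel template surfaces
```

Real systems cluster full log templates and score them statistically rather than counting first tokens, but the shape of the problem is the same: volume hides the interesting one percent, and automation, AI-driven or otherwise, is what pulls it out.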
But perhaps the most compelling insight came when the conversation turned to what makes AI genuinely useful. It’s not the impressive demos or the “card tricks of LLMs.” It’s the boring stuff.
“All those things that are tedious—configuration, documentation—that’s where AI can really help,” observed one panelist. Another pointed out that SOAR platforms never reached their potential because writing runbooks manually was too time-consuming. AI can change that equation.
Dickson also urged security leaders to look beyond their own field for inspiration: “Look outside of our industry for better metaphors on use cases.” He pointed to utilities using machine learning to predict water main breaks: unsexy, critical infrastructure problems that AI can actually solve. The same logic applies to security.
The Entry Point Problem: What Happens to Junior Security Roles?
The conversation took a sobering turn when discussing AI’s impact on entry-level security jobs. Dickson, who lives in San Antonio, has spent years advising aspiring security professionals that SOCs are the entry point into cyber careers.
“I was the AFSR guy in the nineties who did all this stuff manually. I was a political science major who got the chance to learn Unix and do it manually,” Dickson reflected. “That opportunity doesn’t exist anymore for somebody.”
If tier-one SOC positions get automated, where do newcomers learn the fundamentals? Application security assessments require coding and security knowledge from day one, so they can’t absorb that entry-level role. The industry hasn’t solved this pipeline problem.
Where Will Humans Add the Most Value Over the Next 36 Months?
When asked where humans will add the most value over the next three years, Dickson was direct: “Helping identify the architectural weaknesses, not the massive amounts of data… You inherently shouldn’t trust the outputs from an LLM as a starting point. Shouldn’t treat them as ground truth.”
The future role for security professionals centers on the “should” questions, architectural decisions, and determining what systems should connect to an LLM’s API in the first place. These judgment calls require human expertise that AI can’t replicate.
The Bottom Line for Security Leaders
AI won’t replace your security program. It will make the tedious parts less painful while creating new questions about architecture and trust that only humans can answer.
For security leaders evaluating AI-enabled tools, the advice is clear: ignore the flashy demos, focus on the boring use cases, and never stop asking whether vendor claims hold up to scrutiny.
Need help separating AI hype from reality in your security program? Reach out to contact@bytewhispersecurity.com for an honest assessment of where AI can—and can’t—strengthen your defenses.
This post is based on John Dickson’s appearance on the Down the Security Rabbit Hole podcast. Watch the full episode for the complete discussion.