Curious about the future of cybersecurity? You should be.
Cybersecurity advancements and AI innovation are on pace with each other, moving with a momentum few of us have ever seen. The unfortunate reality? Bad actors have access to the same tech and tools as the good guys—which means that these advancements, if not properly leveraged, could have catastrophic effects on small businesses and enterprises alike.
That’s why today, we’re recapping the latest interview between our Guardian and Head of AI Engineering, Íñigo, and J.D. Miller—founder of the James Dwayne Group and host of Suite Speak.
You can view the full Live here:
Why Does Agentic AI Matter Now?
Rather than simply regurgitating an output based on a given input, agentic AI has advanced to the point where it can conduct its own decision-making, which is informed by previously submitted data sets from its user.
Our current agentic language models reached new heights in 2025, opening doors to advanced cybersecurity visualization and mitigation opportunities for enterprises and small businesses alike.
How Would Agentic AI (Potentially) Enhance SOC?
Our Guardians believe that agentic AI enhances the SOC—but it won’t replace human expertise. AI can’t yet read nuance to the degree that a seasoned 20-year CISO can, and that capability remains a long way off.
Despite this reservation, Íñigo states that agentic AI, in this context, will empower cyber teams of all sizes to take on risks with renewed confidence and innovation, using the tool for validation, conceptualization, and support as they execute against a given strategy.
How Do Cyber Teams Strike the Balance Between Agentic AI Automation and Human Support?
The answer to this question will look different depending on the goals and constraints of a given organization. However, Íñigo notes that the best strategy for the current stage of development is a human-in-the-loop method. Agentic AI has a lot of data to pull from, and is a skilled executor for teams looking to expand their capabilities. However, it’s not infallible. Most models can really only refine and leverage what they have been exposed to, so the human collaboration step is critical both during and after deployment, especially if you want your system to self-improve.
Our Guardian also suggests that the teams see the tool as an aggregator, saving CISOs the data analysis step and automatically suggesting courses of action based on the data that the tool has “seen.” A hypothetical human-in-the-loop system would then incorporate the human subject-matter expert, as they would choose the next step in the cybersecurity response process.
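The aggregator-plus-human pattern described above can be sketched in a few lines. This is a minimal illustration, not any specific product's design: the alert fields, the `aggregate_and_suggest` helper, and the approval policy are all hypothetical names chosen for the example.

```python
from dataclasses import dataclass

@dataclass
class Suggestion:
    alert_id: str
    action: str      # course of action the model proposes
    rationale: str   # why it proposes it

def aggregate_and_suggest(alerts):
    """Stand-in for the agentic step: scan raw alerts and propose actions."""
    suggestions = []
    for alert in alerts:
        # Toy policy: disruptive response for high-severity alerts only.
        action = "isolate_host" if alert["severity"] >= 8 else "open_ticket"
        suggestions.append(
            Suggestion(alert["id"], action, f"severity={alert['severity']}")
        )
    return suggestions

def human_review(suggestions, approve):
    """Human-in-the-loop gate: only analyst-approved actions proceed."""
    return [s for s in suggestions if approve(s)]

alerts = [{"id": "A-1", "severity": 9}, {"id": "A-2", "severity": 3}]
suggestions = aggregate_and_suggest(alerts)
# The human subject-matter expert chooses the next step; here the
# policy auto-approves low-impact ticketing but gates host isolation.
approved = human_review(
    suggestions, approve=lambda s: s.action == "open_ticket"
)
```

The key design point is that the model never executes anything directly: it only emits `Suggestion` objects, and the `human_review` gate sits between suggestion and execution.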
How Can Agentic AI and Cybersecurity Be Used to Reduce Alert Fatigue?
Finally—the moment that most of our cybersecurity experts have been waiting for: “How can artificial intelligence and security tools reduce alert fatigue?”
Our Guardian recommends that teams leverage AI to focus on precision recommendations, which will be made possible by ongoing training and “teaching.” Your tool of choice, if using agentic AI, can then be prompted to place its recommendations in order of priority—cutting out the signal “noise” in the most efficient way possible.
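Priority ordering of that kind can be approximated with a simple scoring pass. This is a hedged sketch, assuming a hypothetical recommendation record with `confidence` and `impact` fields; real tools would use richer signals.

```python
def prioritize(recommendations, top_n=3):
    """Order model recommendations by a confidence-times-impact score,
    surfacing only the top few to cut signal noise."""
    scored = sorted(
        recommendations,
        key=lambda r: r["confidence"] * r["impact"],
        reverse=True,
    )
    return scored[:top_n]
```

Capping the list at `top_n` is what actually reduces fatigue: the analyst sees a short, ranked queue instead of every raw alert.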
How Important Is Trust in AI Cybersecurity Tools?
Trust is a critical consideration when it comes to artificial intelligence and cybersecurity—but how can stakeholders hedge their trust in a system that can, at times, hallucinate?
According to Íñigo, while trust is paramount, explainability is the ethical responsibility of the provider—and it should be the expectation of the consumer as well.
Explainability in AI ensures that users and stakeholders get the “why” and the overall context behind a suggested resolution, making it more compelling and better aligned with the actual need. It also shifts the equation entirely: security professionals don’t need to trust the AI itself. Rather, they need to trust the integrity of the data they feed it for the given outputs. It’s a verification-based approach that helps bridge the gap between the most skeptical AI specialists and innovation, and it will only become more important as the technology evolves.
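The verification-based approach can be made concrete by having every finding carry its rationale and cite the input records it rests on. This is an illustrative sketch only; `ExplainedFinding`, `evidence_ids`, and `verify` are hypothetical names, not a real tool's API.

```python
from dataclasses import dataclass

@dataclass
class ExplainedFinding:
    resolution: str        # suggested course of action
    rationale: str         # the "why" behind the suggestion
    evidence_ids: list     # input records the conclusion rests on

def verify(finding, input_log_ids):
    """Verification instead of blind trust: confirm every piece of
    cited evidence actually exists in the data the team supplied."""
    return all(eid in input_log_ids for eid in finding.evidence_ids)

finding = ExplainedFinding(
    resolution="block_source_ip",
    rationale="repeated failed logins from a single address",
    evidence_ids=["log-1", "log-2"],
)
trusted = verify(finding, input_log_ids={"log-1", "log-2", "log-3"})
```

Because the check runs against data the team controls, confidence in a finding reduces to confidence in the inputs—exactly the shift described above.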
What’s One Question Cyber Teams Should Be Asking as Cybersecurity and AI Evolve?
Changing systems and adopting new tools is no easy feat—so J.D. Miller closed the interview with one final question for our Guardian, Íñigo: “What’s one strategic question to ask teams this quarter about AI readiness?”
“How much info can you process without AI? Where does AI fit into that flow?” Íñigo concluded.
AUTHOR
Lars Letonoff
Lars Letonoff, Co-Founder of Guardare, is an internationally recognized strategic visionary and highly regarded technology executive with decades of leadership and go-to-market strategy experience. Lars has a proven track record of successfully building and scaling hyper-growth, global organizations.
The Guard Post is your go-to source for the latest cybersecurity news, industry events, and exclusive updates from Guardare. Stay informed and ahead of the curve with engaging, insightful content delivered straight to your inbox.