Bewitched by AI? From innovation to illusion
Monday, October 13, 2025
RightBrains is delighted to share news, blogs and learnings from partners and our RightBrains United Network. We asked Valentine Couture, co-founder of Women in Cybersecurity Community Association (WICCA), to unpack how the AI revolution is impacting the world of cybersecurity from the viewpoint of WICCA – and to share some tips for organisations to balance the gains of AI while mitigating potential risk.
Table of contents
The Hype vs. the Reality
When capitalism overrules quality
When AI is your Intern
A better path forward
Community matters
The bottom line
The hype vs. the reality
Artificial Intelligence is reshaping technology at a spellbinding pace. In cybersecurity, AI promises to spot anomalies faster, identify and remediate vulnerabilities, and even predict threats before they materialise. These are real advances and, used wisely, they can make security teams more effective.
But at WICCA, where our mission is to empower women in cybersecurity for a more secure tomorrow, we’ve also seen a darker side of the current AI revolution. It isn’t the technology itself that we fear. It’s the way organisations are racing to monetise it. The push to “have AI features” at any cost is leaving quality and security behind.
When capitalism overrules quality
We’ve watched companies deprioritise patching critical issues or improving security features to ship shiny new AI objects. In some cases, these AI “solutions” are worse replacements for existing, perfectly fine, non-AI functionality. Like the time we switched from returnable glass bottles to cheap single-use plastic. It looked innovative, lighter, and more convenient, but it created a pile of hidden (environmental) problems for everyone else to clean up.
The problem isn’t that “AI is bad.” The problem is an economic system rewarding speed and novelty over resilience and security. In that environment, the security community is left grappling with preventable cybersecurity threats.
When AI is your Intern
Inside WICCA, we often say AI today is like a brilliant, but untrained intern. It can draft a policy or generate a regex faster than any human, but you still need a senior engineer to review the output. Expecting AI to solve your highest-priority issues without supervision is like handing a baby the keys to a nuclear plant.
And yet, that’s exactly how many organisations behave. They market AI as autonomous, then staff down the humans because of budget cuts. Security teams are told to “let the model handle it” while funding for better security tooling and training shrink.
This false sense of security is itself a vulnerability. Attackers move quickly to exploit new AI features, like prompt-injection flaws, because we shipped too fast and stopped thinking about user input, and data-leakage pathways. If defenders lower their guard, the balance shifts in the adversary’s favour.
A better path forward
AI is revolutionary, but only when combined with robust processes, diverse teams, and a commitment to long-term safety. And the most crucial element is that we need to understand it before we use it.
Large language models aren’t designed for truth or security. They’re designed to predict the next word. Treating them as oracles instead of pattern machines is dangerous.
At WICCA, we see three principles that should guide its adoption:
- Security first, features second. New AI capabilities should never be launched while known vulnerabilities remain unpatched. Core resilience is not optional.
- Human oversight. Treat AI as an assistant, not an autonomous decision-maker. Keep skilled security professionals in the loop, and invest in their training rather than replacing them.
- Ethics and diversity. The teams designing and deploying AI should reflect a wide range of perspectives. Diversity in background and thinking reduces blind spots and makes it more likely that risks will be spotted early.
These principles aren’t anti-innovation. They’re pro-sustainable innovation. The companies that integrate security and robustness into their AI strategy now will be the ones still standing when the hype cycle ends.
Community matters
One reason we launched WICCON, our upcoming cybersecurity conference, was to create a space where cybersecurity professionals, especially women, can discuss hard topics openly. AI in cybersecurity is one of those topics. It’s not enough to applaud the technology -- or panic about it. We need nuanced conversations about how it’s being implemented, who benefits, and who bears the risks.
Our community is already talking about the responsible uses of AI for security as a whole. But we’re also documenting the pitfalls and sharing lessons learned. By doing so collectively, we can help steer the industry away from reckless adoption and toward thoughtful, secure deployment.
The bottom line
We are building a house of cards. AI will change cybersecurity, and if we’re not careful, we risk building a shaky tower of fancy alloy that will cost more to maintain in the long run. AI should not replace the fundamentals of cybersecurity. Strong architecture, clear processes, trained people, and ethical leadership remain non-negotiable. The real danger isn’t that AI will outsmart us...it’s that we’ll outsmart ourselves by handing over trust too quickly.
We believe in innovation. We also believe in boundaries. Treat AI like a powerful but immature tool, not a saviour. Understand what it means and what it will bring you. Build security into its DNA before release, not after a breach. And above all, keep people, diverse, skilled, empowered people, at the centre of cybersecurity.
That’s how we avoid creating the very risks we’re trying to prevent.


