
On 27 February, 400 cybersecurity leaders came together at TEISS, the European Information Security Summit, in London. The conference featured several insightful panel discussions, covering topics from budgets and leadership in cybersecurity to culture, education, and the current threat landscape. As you’d expect, many of these sessions were rooted in the industry’s ongoing struggle with AI.
Although conversations around the topic often fall into an echo chamber, it’s clear that AI will play a pivotal role in shaping the future of cybersecurity. But more than two years after the GenAI boom, we still lack consensus on whether AI will ultimately help or harm our profession. It feels like a double-edged sword – every benefit comes with drawbacks.
One of the conference sessions explored how AI is transforming Identity and Access Management (IAM). It can correlate and verify data far quicker than any human could, analysing user behaviour to provide insights into identity verification and potential security risks. However, AI’s role in IAM has sparked an arms race. Cybercriminals are leveraging AI to personalise phishing attacks, accelerate brute-force attempts, and predict passwords by analysing behavioural patterns or common credentials.
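To make the defensive side concrete, here is a minimal sketch of the kind of behavioural analysis the panel described, using scikit-learn’s IsolationForest to score a login attempt against a learned baseline. The features, values, and step-up response are illustrative assumptions, not a description of any vendor’s product.

```python
# Sketch: behaviour-based risk scoring for IAM (illustrative only).
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-login features:
# [hour_of_day, failed_attempts, new_device (0/1), km_from_last_login]
historical_logins = np.array([
    [9, 0, 0, 2], [10, 1, 0, 0], [14, 0, 0, 5],
    [9, 0, 0, 1], [16, 0, 0, 3], [11, 0, 1, 4],
])

model = IsolationForest(contamination=0.1, random_state=42)
model.fit(historical_logins)

# A 3 a.m. login from a new device, far from the usual location,
# after repeated failures: score it against the learned baseline.
suspicious = np.array([[3, 5, 1, 4200]])
if model.predict(suspicious)[0] == -1:  # -1 means outlier
    print("Flag for step-up authentication")  # e.g. require MFA
```

The point the panellists made is the scale: a model like this can score every login in real time, something no human team could do by hand.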
In another panel, on automating security operations, experts expressed both optimism and caution. It’s clear AI is fast, handling up to 80% of mundane security tasks rapidly and leaving the human experts to address the remaining 20% that require more time and thought. When it comes to threat hunting, AI can also dive deep into systems at speed, identifying patterns that humans may overlook. But AI isn’t perfect, and accuracy and accountability are still big concerns. For example, if AI deems the CEO’s laptop to be a risk and locks it in the middle of a big presentation, who takes responsibility? It might correctly flag a security risk, but by locking down a critical process it could bring the business to a standstill. Unlike humans, AI can’t explain its reasoning and lacks business context, which is a serious flaw. It was clear from the discussion that without careful oversight, AI could cause more problems than it solves.
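The oversight point lends itself to a simple pattern: let automation act freely on low-impact assets, but route anything business-critical to a human for the final call. Here is a minimal sketch of that idea; the asset tags, risk threshold, and response names are hypothetical.

```python
# Sketch: human-in-the-loop gating for automated response (illustrative).
CRITICAL_ASSETS = {"ceo-laptop", "payments-gateway"}

def respond_to_alert(asset_id: str, risk_score: float) -> str:
    if risk_score < 0.8:
        return "monitor"               # low confidence: just watch
    if asset_id in CRITICAL_ASSETS:
        return "escalate_to_human"     # AI recommends, a human decides
    return "isolate_host"              # low-impact asset: safe to automate

print(respond_to_alert("ceo-laptop", 0.95))  # escalate_to_human
print(respond_to_alert("dev-vm-17", 0.95))   # isolate_host
```

The design choice matters: the AI still does the detection at speed, but accountability for high-impact actions stays with a named person who has the business context the model lacks.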
One of the early panel debates, chaired by our CEO, Amanda Finch, covered robust GenAI governance frameworks. Clearly this is a subject that is still being worked out: regulators across the globe are at various stages of the legislative process, and the resulting rules are poised to shape company policies irrevocably. There’s consensus that organisations need to know where their data is, have appropriate access controls in place, and understand where data fed into AI is flowing. But currently we’re seeing two extremes in AI governance, “block everything” and “allow everything”, which makes it very difficult to have a global corporate policy. Overly rigid AI policies make it harder to balance security and productivity, leading users to circumvent the rules. Once regulations are in place (or today, if you operate in the EU under the EU AI Act), that circumvention could lead to compliance breaches.
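A workable middle ground between those extremes might pair an allow-list of approved AI tools with data-classification rules, so the policy says yes more often than “block everything” while still controlling where data flows. The sketch below is a toy illustration of that idea; the tool names and classification labels are assumptions, not a recommended policy.

```python
# Sketch: AI usage policy as tool allow-list + data classification (illustrative).
APPROVED_TOOLS = {
    "internal-copilot": {"public", "internal"},  # broader use permitted
    "external-chatbot": {"public"},              # public data only
}

def may_send(tool: str, data_classification: str) -> bool:
    """Allow a prompt only if the tool is approved for that data class."""
    allowed = APPROVED_TOOLS.get(tool)
    return allowed is not None and data_classification in allowed

print(may_send("external-chatbot", "internal"))  # False: blocked
print(may_send("internal-copilot", "internal"))  # True: allowed
```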
After a full day of hearing from cybersecurity experts, one thing the industry remains unanimous on is that AI is here to stay. While its full impact remains uncertain, we must focus on what we can control: education and training. AI is certainly a security issue that exists today, but the next generation of cybersecurity professionals will be on the real front lines as AI becomes more prevalent in both business and cybercrime. As a profession, we have a responsibility to mentor that next generation, passing on knowledge and providing skills training so that future cybersecurity leaders are equipped to combat both today’s threats and tomorrow’s. If the profession understands only one or the other, organisations will be left wide open to attack.