CEO Viewpoint – Who guards social media disinformation when regulators can’t?

By Amanda Finch

This year, the UK’s Online Safety Act (OSA) was rolled out, sparking debate over its impact on free speech, privacy and platform compliance. Amid the headlines, the Science, Innovation and Technology Committee (SITC) published a report warning that the Act may still fall short on tackling social media disinformation. The report argued that social media companies are responsible for the amplification of false and harmful content on their platforms, with the SITC urging stronger government action.

The debate over regulation is also becoming increasingly politicised. At the Liberal Democrat party conference, Ed Davey even suggested that Elon Musk should be arrested if he ever set foot in the UK, while other figures, such as Labour’s Emily Thornberry and the Greens’ Caroline Lucas, have spoken strongly against his influence. These kinds of remarks highlight how social media regulation is no longer just a technical or legal issue, but a political flashpoint.

As generative AI accelerates the creation of false content and deepfakes, disinformation on social media is becoming more convincing and spreading faster than ever. At the same time, major social media platforms like Meta and X are expanding their own AI capabilities.

“Grok”, for example, is X’s AI assistant, designed to answer questions, surface trending topics, and even generate images and posts in the platform’s tone and style. It enables users to automate content creation and personalisation while driving engagement at scale. But the same features designed to boost engagement, such as photorealistic visuals, also make AI a gift for malicious actors. Lifelike images and videos of real people can readily be used for identity theft, scams or disinformation.

In 2024, UK engineering firm Arup was duped into sending £20m to criminals using an AI-generated deepfake scam. Separately, attackers targeted the CEO of WPP, one of the world’s largest communications firms, cloning his likeness and voice in an attempt to trick clients into handing over money and sensitive information on WhatsApp. While the WPP attempt failed, the Arup scam succeeded. Together they serve as a stark reminder of how disinformation can exploit trust and damage brand value.

Although regulations like the OSA and the EU AI Act are steps in the right direction, governments and regulators are still searching for long-term solutions on how to regulate AI and social media. This gap has left UK companies to largely self-police, balancing reputational risk and market forces without a rulebook.

But concrete regulation will take time to finalise and enforce. In the meantime, the burden of defending against AI-driven scams will fall largely on the shoulders of security professionals, who understand the technology behind AI – and its risks – far better than the average person.

Because of this expertise, security professionals must not only defend against malicious use and AI-driven attacks, but also actively lead and promote awareness campaigns. A key part of the role will be educating users to better recognise AI-driven scams and misinformation. But technical expertise alone isn’t enough. Security professionals also need strong communication skills to get the message across – from helping employees spot deepfakes, to advising boards on AI risks, to giving colleagues clear, actionable guidance. After all, if a risk can’t be communicated simply, it usually can’t be managed effectively.

Understanding the dangers of AI-driven disinformation is only valuable if it’s shared. By passing on knowledge of deepfake detection, threat intelligence, and AI-enabled phishing and fraud attacks, cybersecurity professionals strengthen resilience across organisations and the wider business community. This collective effort will help organisations to prevent malicious, AI-driven content from taking root and enable them to act now, rather than waiting for regulation to catch up.



