From Doubt to Assurance: The Battle for Online Integrity

As the UK public digested the results of July’s General Election and adjusted to a new Labour government, conversation swelled around the legitimacy of the Reform UK candidate Mark Matlock.

On the social media platform X, users questioned whether Mr. Matlock was an AI-generated persona, owing to an airbrushed profile photo, a minimal digital footprint and his absence from the electoral count.

The conspiracy theories were swiftly debunked, but the rush to expose a potential falsehood reflects the troubling state of trust online. We are witnessing a real-time erosion of trust in what we see, read or hear online. In fact, research shows that nearly a quarter (23%) of Brits do not trust any political content on social media platforms. And this scepticism is not reserved for politicians; it is acutely felt by businesses, celebrities and influencers alike.

It’s fast becoming an issue for everyone who operates in the digital economy. As that economy has boomed, social networks have become a conduit for daily information exchange, but also for pervasive misinformation and false content. Because identities can be masked, impersonation has become common practice, and opportunities for identity fraud have increased in tandem. This has pushed many to a point of no return: unable to trust any form of online communication or content. It’s a backdrop that could jeopardise the future growth of online businesses, as well as the fabric of our digital society.

How did we get here?

Content bias has always existed in some form, and it’s not as though society is only now questioning the legitimacy of what is read or seen online. But we’ve gone beyond ‘taking it with a pinch of salt.’ Online trust is splintering and the cracks are getting worse, largely due to the growing accessibility of AI tools that produce convincing content at volume. This has fundamentally transformed how quickly disinformation can be produced and spread, reaching vast audiences in the blink of an eye.

Take the proliferation of deepfakes. Deceptive techniques once reserved for those with significant compute power and technical expertise have now been democratised by AI. Anyone can create a deepfake through a smartphone app, allowing bad actors to hijack the technology to propagate fraud online, to misinform and mislead, and to attack businesses. The technology’s increasing sophistication, both visual and auditory, has already caused significant damage, from deceiving an employee into wiring millions of dollars to fraudsters, to impersonating the UK’s Prime Minister, Sir Keir Starmer.

Unchecked, malicious AI can, and does, reach millions of people each day. But beyond manipulation, it poses another threat. A lone fraudster armed with deepfakes can assume someone’s identity and inflict huge fraud losses, costing a company dearly or ruining a victim’s credit rating and saddling them with crippling debt. This only compounds distrust and raises the stakes on guaranteeing authenticity.

Where do we go next?

Empowered by AI, criminals have made the game of cat-and-mouse far more intense. They deliver increasingly sophisticated attacks on systems and individuals, while the government plays catch-up with legislation and the private sector innovates alongside it. Touted solutions such as watermarking online content, or deepfake detectors, can deliver reassurance and authenticity to digital interactions. But they are not comprehensive and cannot protect against manipulation across all forms of media.

As AI spreads its influence, rebuilding trust online begins with identity verification, but this cannot arise from self-regulation. If companies are left to mark their own homework, standards will fluctuate and users will experience varying levels of protection.

It’s important to note that government legislation can only deliver structured rules up to a point, shaping how lawful users conduct themselves online. It must be a joint effort, with the onus on social media platforms to prevent disinformation at the point of sharing. They must embrace user verification and remove the shield of complete anonymity, which currently allows users to operate under the guise of another individual, company representative, leader or politician – all without fear of reprisal.

Instead, an independent verification service could prompt a demonstrable change in behaviour and limit the impact of disinformation on online trust. Whether on professional networking sites or dating apps, verification emboldens users to communicate online, trust that an individual is legitimate, and assess whether content is genuine. In the case of Mark Matlock, an independently verified account would have quickly dispelled doubts about his authenticity.

Is it too late to reverse the trend?

What has become increasingly clear as AI-generated media has become more realistic (and deployable at scale) is that it is counterproductive to tar all forms of content and use cases with the same regulatory brush. Doing so unnecessarily restricts innovation and unfairly discriminates against legitimate use cases, such as using deepfakes to train AI models or cybersecurity tools. In this respect, the new Labour government’s commitment to liaising with the private sector on the best approach is very welcome.

The issue of waning digital trust is not beyond repair. There is still time to correct the course. While no immediate, all-encompassing solution has presented itself, the technologies for verification exist and are already driving trust at the point of remote registration and digital onboarding, keeping out bad actors while providing seamless experiences for legitimate users. This approach can be replicated for social networks, delivering user authenticity and confidence online.


About the Author

Aled Lloyd Owen is Global Policy Director at Onfido. Onfido, an Entrust company, makes digital identity simple, enabling people to access services by verifying them digitally through its Real Identity Platform. The platform allows businesses to tailor verification methods to individual user and market needs in a no-code orchestration layer – combining the right mix of document and biometric verifications, trusted data sources, and passive fraud signals to meet their risk, friction and regulatory requirements.