With AI-Powered Cyber Threats on the Rise, Businesses Need Clarity

Whether we like it or not, AI now pervades our world. From the public and private sectors to personal life, this powerful technology undoubtedly poses risks, but it is also the catalyst for untold opportunities.

As the UK advances its AI regulatory framework, it is considering transforming voluntary agreements with AI developers into legally binding commitments and granting autonomy to the AI Security Institute. This strategic approach aims to address potential risks while fostering innovation in the rapidly evolving field of AI.

Businesses across industries will face a critical but often overlooked challenge – the lack of standardisation in AI regulation within a globalised economy. While much of the discussion around AI regulation focuses on how it can boost economic growth in the UK, the reality is that companies operating internationally must navigate an uneven regulatory environment, which creates uncertainty and compounds risk.

The dark side of AI

The Mimecast H2 2024 Threat Intelligence Report found a worrying trend: cyber threats are becoming increasingly sophisticated as threat actors blend malicious activities with legitimate operations. Attackers are increasingly exploiting trusted services, making it harder than ever for security controls to distinguish between authorised and unauthorised activity.

Malicious AI-powered tools like WormGPT and FraudGPT, which are rogue versions of legitimate generative AI (GenAI) models, are being used to craft highly convincing phishing emails, generate more effective malware and automate cyberattacks at scale. Mimecast’s detection engine has learned to identify specific characteristics that distinguish human-written emails from AI-generated ones. By analysing over 20,000 emails, along with synthetic data generated by models like GPT-4, we found that certain phrases, such as “delve deeper into this”, or overly casual greetings like “hello!” from senders who don’t typically write that way, often signal a phishing attempt.

It is important to note that the model was not looking to identify malicious AI-written emails, but rather to estimate the pervasiveness of AI. The real insight lies in the trend: a sharp rise in AI-generated content and a corresponding decline in human-written emails.
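By way of illustration only, and not a description of Mimecast’s actual detection engine, a minimal sketch of a phrase-based heuristic for flagging likely AI-generated text might look like the following. The phrase list, weights and threshold are entirely hypothetical and chosen for demonstration.

```python
# Illustrative sketch of a phrase-based heuristic for estimating how likely an
# email body is AI-generated. This is NOT Mimecast's detection engine; the
# phrase list, weights and threshold below are hypothetical.

AI_STYLE_MARKERS = {
    "delve deeper into this": 2.0,
    "delve into": 1.5,
    "i hope this email finds you well": 1.0,
    "hello!": 0.5,  # overly casual greeting from a sender who is normally formal
}

def ai_style_score(body: str) -> float:
    """Sum the weights of known AI-style phrases found in the email body."""
    text = body.lower()
    return sum(weight for phrase, weight in AI_STYLE_MARKERS.items() if phrase in text)

def looks_ai_generated(body: str, threshold: float = 1.5) -> bool:
    """Flag the email when the cumulative phrase score crosses the threshold."""
    return ai_style_score(body) >= threshold

if __name__ == "__main__":
    sample = "Hello! Let's delve deeper into this opportunity together."
    print(looks_ai_generated(sample))  # True for this sample
```

Run over a large mail corpus, a heuristic of this kind estimates how prevalent AI-assisted writing has become, rather than proving that any single message is malicious.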

Additionally, organisations must be aware of the rapid evolution of AI-driven deepfakes. These sophisticated tools enable the creation of hyper-realistic audio and video manipulations, opening new avenues for fraud, disinformation and identity theft. Already, these AI-powered threats are being deployed in workplace fraud schemes, financial scams and large-scale misinformation campaigns.

But there is a positive in all of this: while AI is enhancing the capabilities of cybercriminals, it is also playing an even greater role in strengthening cybersecurity. The latest AI and machine learning-powered security solutions provide organisations with a decisive edge, allowing them to detect and neutralise threats faster and more effectively than ever before.

Building a secure future for organisations

Organisations should prioritise integrating robust cybersecurity measures and adhering to the new AI Cyber Security Code of Practice. Some of the key recommendations include implementing secure development protocols, conducting regular risk assessments and deploying AI-driven defensive tools to safeguard systems against emerging threats like deepfake technology and AI-powered malware.

A vital first step is investing in human risk management. According to Mimecast’s 2025 State of Human Risk Report, 95% of all cyber breaches are caused by human error. Additionally, research from Elevate Security, acquired by Mimecast in 2024, found that just 8% of an organisation’s users are responsible for 80% of security incidents. Addressing these vulnerabilities through targeted risk management strategies is essential.
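To make the concentration point concrete, here is a minimal, hypothetical sketch, not based on Elevate Security’s or Mimecast’s tooling, of how a security team might identify the small group of users behind most incidents from a simple incident log.

```python
# Illustrative only: given a list of security incidents tagged by user, find the
# smallest group of users that accounts for a target share (e.g. 80%) of incidents.
# The data and threshold are hypothetical.

from collections import Counter

def riskiest_users(incidents: list[str], target_share: float = 0.8) -> list[str]:
    """Return users, most incident-prone first, covering `target_share` of incidents."""
    counts = Counter(incidents)
    total = sum(counts.values())
    covered, selected = 0, []
    for user, n in counts.most_common():
        selected.append(user)
        covered += n
        if covered / total >= target_share:
            break
    return selected

if __name__ == "__main__":
    log = ["alice", "bob", "alice", "carol", "alice",
           "bob", "dave", "alice", "alice", "bob"]
    print(riskiest_users(log))  # a small subset of users covers 80% of incidents
```

In practice, this kind of ranking is what lets targeted human risk management focus coaching and controls on the handful of users who drive most of the exposure.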

The battle against AI-driven cyber threats cannot be fought with outdated tools and outdated training. Modern threats demand modern security solutions and continuous learning. Organisations must adopt AI-powered security defences that match the speed, scale and sophistication of modern attacks. Mimecast’s acquisition of Aware, an AI-powered collaboration security platform, underscores this shift, enabling businesses to identify risks across workplace tools like Teams and Slack, prevent data loss and mitigate human risk factors in real time.

Cybersecurity must be a strategic priority

It has never been more important for businesses to treat cybersecurity as a core strategic priority, not just a precautionary measure. By embracing frameworks such as ISO certification, companies can mitigate legal risks and improve trust among the public, investors and customers, all of which are crucial to successfully scaling AI solutions globally.

A balanced approach, combining human expertise with AI’s capabilities, will create a safer online environment and ensure businesses are resilient in the face of future cyber challenges.


About the Author

Jeff Schumann is VP Collaboration Security & AI Strategist at Mimecast. Mimecast is transforming the way businesses manage and secure human risk. Its AI-powered, API-enabled connected human risk platform is purpose-built to protect organizations from the spectrum of cyber threats. Integrating cutting-edge technology with human-centric pathways, the platform enhances visibility and provides strategic insight, safeguarding critical data while actively engaging employees in reducing risk and enhancing productivity. More than 42,000 businesses worldwide trust Mimecast to help them keep ahead of the ever-evolving threat landscape. From insider risk to external threats, customers get more with Mimecast. More visibility. More agility. More control. More security.
