A series of recent high-profile breaches has demonstrated that the UK remains highly exposed to increasingly sophisticated cyber threats.
This vulnerability is growing as artificial intelligence becomes more deeply embedded in day-to-day business operations. From driving innovation to enabling faster decision-making, AI is now integral to how organisations deliver value and stay competitive. Yet its transformative potential comes with risks that too many organisations have yet to fully address.

CyberArk’s latest research shows that AI now presents a complex “triple threat”: it is being exploited as an attack vector, deployed as a defensive tool and, perhaps most concerning of all, is introducing critical new security gaps. This dynamic threat landscape demands that organisations place identity security at the centre of any AI strategy if they wish to build resilience for the future.
AI is expanding the threat landscape
AI has raised the bar for traditional attack methods. Phishing, which remains the most common entry point for identity breaches, has evolved beyond poorly worded emails to sophisticated scams that use AI-generated deepfakes, cloned voices and authentic-looking messages. Nearly 70% of UK organisations fell victim to successful phishing attacks last year, with more than a third reporting multiple incidents. This shows that even robust training and technical safeguards can be circumvented when attackers use AI to mimic trusted contacts and exploit human psychology.
It is no longer enough to assume that conventional perimeter defences can stop such threats. Organisations must adapt by layering in stronger identity verification processes and building a culture where suspicious activity is flagged and investigated without hesitation.
Using AI in defence
While AI is strengthening attackers’ capabilities, it is also transforming how defenders operate. Nearly nine in ten UK organisations now use AI and large language models to monitor network behaviour, identify emerging threats and automate repetitive tasks that previously consumed hours of manual effort. In many security operations centres, AI has become an essential force multiplier that allows small teams to handle a vast and growing workload.
Almost half of organisations expect AI to be the biggest driver of cybersecurity spending in the coming year. This reflects a growing recognition that human analysts alone cannot keep up with the scale and speed of modern attacks. However, AI-powered defence must be deployed responsibly. Over-reliance without sufficient human oversight can lead to blind spots and false confidence. Security teams must ensure AI tools are trained on high-quality data, tested rigorously and reviewed regularly to avoid drift or unexpected bias.
The widening AI attack surface
The third element of the triple threat is the rapid growth in machine identities and AI agents. As employees embrace new AI tools to boost productivity, the number of non-human accounts accessing critical data has surged, now outnumbering human users by a ratio of 100 to one. Many of these machine identities have elevated privileges but operate with minimal governance. Weak credentials, shared secrets and inconsistent lifecycle management create opportunities for attackers to compromise systems with little resistance.
Shadow AI is compounding this challenge. Research indicates that over a third of employees admit to using unauthorised AI applications, often to automate tasks or generate content quickly. While the productivity gains are real, the security consequences are significant. Unapproved tools can process confidential data without proper safeguards, leaving organisations exposed to data leaks, regulatory non-compliance and reputational damage.
Addressing this risk requires more than technical controls alone. Organisations should establish clear policies on acceptable AI use, educate staff on the risks of bypassing security, and provide approved, secure alternatives that meet business needs without creating hidden vulnerabilities.
Identity security comes first
Securing AI-driven businesses demands that identity security be embedded into every layer of the organisation’s digital strategy. This means achieving real-time visibility of all identities, whether human, machine or AI agent, applying least privilege principles consistently, and continuously monitoring for abnormal access behaviours that may indicate compromise.
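In practice, applying least privilege to every identity means checking each access request against an explicit allow-list and flagging anything outside it for investigation. The sketch below illustrates that idea only; the identity names, resource labels and policy store are hypothetical, not a real product API.

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical policy store: each identity (human, machine or AI agent) holds
# an explicit allow-list of actions; anything else is denied and flagged.
POLICY = {
    "svc-reporting": {"sales-db:read"},
    "agent-summariser": {"docs:read"},
}

@dataclass
class AccessEvent:
    identity: str
    action: str        # e.g. "sales-db:read"
    timestamp: datetime

def review_event(event: AccessEvent) -> str:
    """Least privilege: permit only actions on the identity's allow-list."""
    allowed = POLICY.get(event.identity, set())
    if event.action in allowed:
        return "allow"
    # Unknown identity, or an action it was never granted: deny and flag
    # as abnormal access behaviour for the security team to investigate.
    return "deny-and-flag"

# A machine identity reading the database it is entitled to:
print(review_event(AccessEvent("svc-reporting", "sales-db:read", datetime.now())))
# The same identity attempting a write it was never granted:
print(review_event(AccessEvent("svc-reporting", "sales-db:write", datetime.now())))
```

The key design choice is default-deny: an identity absent from the policy store, such as an unregistered AI agent, is treated as abnormal rather than silently permitted.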
Forward-looking organisations are already adapting their identity and access management frameworks to handle the unique demands of AI. This includes adopting just-in-time access for machine identities, implementing privilege escalation monitoring and ensuring that all AI agents are treated with the same rigour as human accounts.
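Just-in-time access replaces standing privileges with credentials minted on demand that expire automatically. A minimal sketch of the pattern, assuming an in-memory grant table and illustrative names (this is not a CyberArk API):

```python
import secrets
import time

# Hypothetical grant table: token -> (identity, resource, expiry time).
_grants = {}

def grant_jit_access(identity: str, resource: str, ttl_seconds: int = 300) -> str:
    """Mint a short-lived token for one identity and resource."""
    token = secrets.token_urlsafe(16)
    _grants[token] = (identity, resource, time.monotonic() + ttl_seconds)
    return token

def is_valid(token: str) -> bool:
    """A token is valid only if it exists and its TTL has not elapsed."""
    entry = _grants.get(token)
    return entry is not None and time.monotonic() < entry[2]

token = grant_jit_access("agent-summariser", "docs:read", ttl_seconds=1)
print(is_valid(token))   # valid while the grant is live
time.sleep(1.1)
print(is_valid(token))   # invalid once the TTL has elapsed
```

Because every grant carries its own expiry, a compromised machine identity yields at most a short-lived credential rather than permanent standing access.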
AI promises enormous value for organisations ready to embrace it responsibly. However, without strong identity security, that promise can quickly turn into a liability. The companies that succeed will be those that understand that building resilience is not optional, but foundational to long-term growth and innovation.
In an era where adversaries are equally empowered by AI, one principle holds true: securing AI begins and ends with securing identity.
About the Author
David Higgins is Senior Director, Field Technology Office at CyberArk. CyberArk is the global leader in Identity Security. Centered on privileged access management, CyberArk provides the most comprehensive security offering for any identity – human or machine – across business applications, distributed workforces, hybrid cloud workloads, and the DevOps lifecycle. The world’s leading organizations trust CyberArk to help secure their most critical assets. For over 25 years, CyberArk has led the market in securing enterprises against cyber attacks that take cover behind insider privileges and attack critical enterprise assets.
Featured image: Adobe Stock


