AI is reshaping cybersecurity unlike any technology to date.
Attacks are becoming more sophisticated, criminals are scaling operations at unprecedented rates using malicious AI services, and at the same time, organisations are fighting back with AI-powered defences.
But for all the technical complexity underpinning modern cyber operations, humans are still the biggest weakness. Most cyberattacks still begin with a moment of misplaced trust, whether it’s following instructions from a so-called CEO, or clicking a link after being blinded by the excitement of an award win.
In other words, phishing is still running rampant. But now, it’s a potent blend of automation and psychological manipulation.
Isn’t phishing small fry?
Unfortunately not. AI has changed the economics of phishing: it is incredibly lucrative, with a very low barrier to entry. Acronis data revealed that phishing now accounts for a quarter of all global cyberattacks. For managed service providers, which hold privileged access to numerous client environments, the figure rises to over half (52%), a 22% year-on-year increase.
Phishing is rarely the end goal. It’s merely the entry point that enables credential theft, lateral movement, and eventually disruption. Thanks to AI, attackers no longer need time, effort, or linguistic skills to be convincing. Realistic content can be generated instantly, adjusted endlessly, and delivered through the tools employees use every day.
Hiding in plain sight
Email is no longer the centre of workplace communication and attacks have evolved to reflect this. With the help of AI, we’re witnessing a shift in social engineering away from slow, inbox-based scams toward real-time manipulation embedded in everyday workflows.
A common example is collaboration platforms, which now carry approvals, document sharing, account changes, and informal problem-solving. They are designed for speed and familiarity, not friction. And that’s exactly what attackers exploit.
A short message asking someone to confirm a login, review a file, or approve a request feels routine in a chat environment, especially when it appears to come from inside the organisation. People respond quickly because that is how the tools are meant to be used. And AI improves the effectiveness of lures by making them clearer, more context-aware, and free from the mistakes that once raised suspicion.
When phishing blends into an organisation’s ways of working, detection becomes significantly harder, and the time it takes for someone to bite shrinks. This also explains why malicious links and malware delivery remain persistent problems. Links are still a primary delivery mechanism, and collaboration tools make sharing effortless. One click, taken in haste, can have consequences far beyond a single account.
‘See it to believe it’ no longer flies
AI has pushed phishing far beyond simple text. Deepfake technology has advanced rapidly over the past two years, and its growing accessibility is allowing attackers to manufacture familiarity using faces, voices, and identities people already trust. Last year, for example, scammers successfully used an AI-generated impersonation of Financial Times journalist Martin Wolf to promote fraudulent investment opportunities.
The technical quality of modern AI fakes is alarming. More worrying still is how easy they are to create. Attackers with little or no technical expertise can now produce convincing content in minutes, test what resonates, and scale successful campaigns at minimal cost.
For organisations, this undermines a long-standing assumption that what looks or sounds authentic can be trusted. Visual and audio realism are no longer reliable signals of legitimacy.
Defend the workflow, not just the perimeter
There is no single fix for AI-enhanced phishing, but organisations need to rethink what they are actually defending. Security controls must reflect how work really happens. Collaboration platforms should be treated as high-trust environments and subject to the same scrutiny as email, identity systems, and endpoints. Visibility into suspicious behaviour, particularly around authentication and link sharing, is essential.
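As an illustration of what visibility into link sharing might mean in practice, the sketch below flags chat messages that combine credential-related wording with links to unfamiliar domains. It is a minimal heuristic, not a product feature: the allowlisted domains, keywords, and function names are all invented placeholders, and real deployments would use proper URL parsing and threat intelligence rather than a regex.

```python
import re

# Hypothetical allowlist of domains this organisation routinely shares links from.
TRUSTED_DOMAINS = {"sharepoint.com", "atlassian.net", "example-corp.com"}

# Wording that often accompanies credential-harvesting lures.
CREDENTIAL_KEYWORDS = {"verify", "login", "password", "confirm", "mfa"}

URL_PATTERN = re.compile(r"https?://([^/\s]+)", re.IGNORECASE)

def flag_message(text: str) -> list[str]:
    """Return a list of reasons a chat message deserves human review."""
    reasons = []
    for match in URL_PATTERN.finditer(text):
        host = match.group(1).lower().split(":")[0]
        # Naive registered-domain check: keep the last two labels only.
        domain = ".".join(host.split(".")[-2:])
        if domain not in TRUSTED_DOMAINS:
            reasons.append(f"link to untrusted domain: {domain}")
    lowered = text.lower()
    hits = sorted(k for k in CREDENTIAL_KEYWORDS if k in lowered)
    if hits:
        reasons.append("credential-related wording: " + ", ".join(hits))
    return reasons

print(flag_message("Please verify your login here: https://portal.example-help.io/reset"))
```

A rule this crude would generate noise on its own; the point is that collaboration-platform traffic can be inspected with the same kind of logic long applied to email, rather than being treated as implicitly trusted.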
Training also needs to evolve. Awareness can no longer hinge on spotting poor grammar or obviously suspicious messages. Employees need practical guidance on when to pause, verify, and escalate concerns, especially in fast-moving chat environments where instinct often overrides caution.
Identity remains central. Phishing is usually a credential problem in disguise. Strong authentication, tighter access controls, and sensible segmentation can limit the blast radius when an account is compromised and shorten the path to containment.
About the Author
Santiago Pontiroli is TRU Lead Security Researcher at Acronis. Acronis unifies data protection and cybersecurity, delivering cyber protection that solves safety, accessibility, privacy, authenticity, and security (SAPAS) challenges. Acronis offers antivirus, backup, disaster recovery, and endpoint protection management solutions, along with award-winning AI-based antimalware and blockchain-based data authentication technologies, through service provider and IT professional deployment models. These solutions protect data, applications, and systems in any environment. Founded in Singapore in 2003 and incorporated in Switzerland in 2008, Acronis is trusted by over 5.5 million home users and 500,000 companies, including 100% of the Fortune 1000. Acronis products are available through 50,000 partners and service providers in over 150 countries and 40 languages.


