
How AI can alleviate alert fatigue

In a recent Reddit post, a security analyst asked for advice on how on earth he was expected to field more than 100 alerts during a 12-hour shift while also communicating with customers, writing lengthy reports and threat models, and creating threat advisory releases.

It’s a cry for help that will resonate with many in the cybersecurity industry. There’s a certain resignation to the fact that alert fatigue goes with the job, whoever the employer: the situation is so endemic in Security Operations Centres (SOCs) that many assume it will never change. In fact, there’s real evidence to suggest it’s getting worse.

Attackers are increasingly looking for ways to fly below the radar and avoid triggering security systems. This sees them favour techniques that leave little trace, using the operating system’s own resources to carry out Living off the Land (LotL) attacks, or passive attacks that intercept rather than invade. Both are far harder to detect, so while security analysts wade through alerts and false positives, the real attacks fall through the cracks.

Generative AI is also being harnessed by threat actors to write malware faster. The HP Threat Insights Report published in September revealed that AsyncRAT, an open source remote access management tool that is frequently manipulated to deliver malware, had been written by GenAI. Researchers were able to brute-force the password and access the code, which was easily readable, complete with comments left by the AI describing each function. The attack demonstrates that GenAI is now being used to craft malware rather than just phishing lures, which will lower the barrier to entry, driving up attack numbers and, in turn, alert volumes.

Detection is becoming more difficult

The tactics, techniques, and procedures (TTPs) attackers use are also becoming harder to detect. A sophisticated criminal marketplace is now thriving, allowing threat actors to freely trade TTPs and assemble multi-component attacks designed to avoid detection, dodge and pivot. This renders traditional means of defence largely useless, because they rely on historic, known TTPs: new combinations of existing TTPs essentially fool the system, further driving down the likelihood of true positive alerts being generated. Even if one or two events are detected, these systems are unable to join the dots and see that they are related.

With alert volumes set to soar even further and security solutions struggling to detect attacks, the outlook does not look good for the beleaguered security analyst. The Cyber security skills in the UK labour market 2024 report found a steady brain drain from the sector, with 11% leaving every year and 14% of those pursuing alternative careers. The ISACA State of Cybersecurity 2024 report also reveals that 46% of cybersecurity professionals cited high work stress levels as their main reason for leaving. And other surveys reveal a deep sense of disillusionment, with some reports claiming 64% plan to exit the sector.

It’s therefore imperative that steps are taken to boost retention, alleviate stress and make the job not just tolerable but rewarding. Nobody wants to go to work each day knowing they’ll feel exhausted and demoralised by the end of it. Indeed, working this way can prove counterproductive: the stress response triggered by dealing with so many alerts can desensitise the analyst, leading to poor decision-making and missed incidents. So how can the business improve the process?

How AI could improve TDIR

The ISC2 2024 Cybersecurity Workforce Study found that only 28% of professionals currently use AI for threat assessment, that is, to identify and prioritise potential threats and minimise false positives, suggesting the technology could be far more widely applied. It is also underutilised in terms of its capabilities: Generative AI could augment the security analyst and even assist in suggesting courses of action.

Threat detection, investigation and response (TDIR) already uses correlation to map detections against the MITRE ATT&CK framework. But by using hypergraphs, one of the techniques that underpins AI-driven correlation, it becomes possible to connect alerts and determine whether they fit the pattern of a specific kill chain before anything is handed to the analyst. In effect, it’s only when the chain of events becomes statistically unusual, and a feasible threat, that it is put in front of human eyeballs.
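To make the idea concrete, here is a minimal, hypothetical sketch, not any vendor’s actual implementation: alerts that share an entity (a host, user or process) are joined by a hyperedge, connected groups are found across those hyperedges, and a group is only escalated when it spans several distinct kill-chain stages. All names, stages and thresholds below are illustrative assumptions.

```python
from collections import defaultdict
from dataclasses import dataclass

# Simplified kill-chain ordering keyed by (illustrative) MITRE ATT&CK tactic.
KILL_CHAIN_ORDER = {
    "initial-access": 0,
    "execution": 1,
    "persistence": 2,
    "privilege-escalation": 3,
    "lateral-movement": 4,
    "exfiltration": 5,
}

@dataclass(frozen=True)
class Alert:
    alert_id: str
    tactic: str          # ATT&CK tactic of the detection
    entities: frozenset  # hosts/users/processes the alert touches

def build_hyperedges(alerts):
    """Each shared entity defines one hyperedge connecting every alert that
    references it -- the hypergraph generalises pairwise correlation."""
    edges = defaultdict(set)
    for alert in alerts:
        for entity in alert.entities:
            edges[entity].add(alert)
    return edges

def connected_groups(alerts):
    """Union-find over hyperedges: alerts linked by any shared entity
    end up in the same candidate attack group."""
    parent = {a.alert_id: a.alert_id for a in alerts}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    for members in build_hyperedges(alerts).values():
        ids = [a.alert_id for a in members]
        root = find(ids[0])
        for other in ids[1:]:
            parent[find(other)] = root
    groups = defaultdict(list)
    for a in alerts:
        groups[find(a.alert_id)].append(a)
    return list(groups.values())

def looks_like_kill_chain(group, min_stages=3):
    """Escalate only when the group covers several distinct kill-chain
    stages -- the 'feasible threat' worth putting in front of an analyst."""
    stages = {KILL_CHAIN_ORDER[a.tactic] for a in group if a.tactic in KILL_CHAIN_ORDER}
    return len(stages) >= min_stages

alerts = [
    Alert("a1", "initial-access", frozenset({"host-7", "user-alice"})),
    Alert("a2", "execution", frozenset({"host-7"})),
    Alert("a3", "lateral-movement", frozenset({"user-alice", "host-9"})),
    Alert("a4", "execution", frozenset({"host-42"})),  # isolated noise, never shown
]

for group in connected_groups(alerts):
    if looks_like_kill_chain(group):
        print("Escalate:", sorted(a.alert_id for a in group))
```

In this toy run, the three alerts linked through host-7 and user-alice surface as one escalated chain, while the isolated execution alert on host-42 is suppressed, which is the filtering effect described above.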

Generative AI then qualifies the threat by interpreting the hypergraph, explaining to the analyst what is being seen, how far the attack has progressed, and what the next steps in the investigation could be. Rather than every single alert being presented to the analyst, they see only highly probable threats, accompanied by recommended actions.
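As a hedged illustration of that hand-off, the sketch below serialises an escalated alert group (such as one produced by the hypergraph sketch above) into a prompt and asks an LLM for a plain-language narrative and suggested next steps. The OpenAI Python client is used purely for illustration; any model endpoint would serve, and the model name and prompt wording are assumptions, not a description of any vendor’s product.

```python
# Illustrative only: hand a correlated alert group to an LLM for a triage briefing.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarise_for_analyst(group):
    """Turn a correlated alert group into a plain-language briefing
    with recommended next investigation steps."""
    chain = "\n".join(
        f"- {a['id']}: tactic={a['tactic']}, entities={a['entities']}"
        for a in group
    )
    prompt = (
        "You are assisting a SOC analyst. The following correlated alerts "
        "appear to form a single attack chain:\n"
        f"{chain}\n\n"
        "Explain what is happening, how far the attack has progressed, "
        "and recommend the next three investigation steps."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

group = [
    {"id": "a1", "tactic": "initial-access", "entities": ["host-7", "user-alice"]},
    {"id": "a2", "tactic": "execution", "entities": ["host-7"]},
    {"id": "a3", "tactic": "lateral-movement", "entities": ["user-alice", "host-9"]},
]
print(summarise_for_analyst(group))
```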

Such an approach has the potential to significantly reduce the analyst’s workload and destress the role, while freeing them to do the things that humans do best, from using their intuition to focus on emerging threats to liaising with colleagues and customers. It will reduce the likelihood of alerts being misclassified, improve response times and counter the growth in alert volumes we can expect as threat actors adopt AI. But perhaps most importantly of all, it will help arrest the potential exodus of talent from the sector by restoring job satisfaction to the security analyst’s role, a vital consideration given that businesses are already struggling to staff the SOC.


About the Author

Christian Have is CTO of Logpoint. Logpoint provides a converged cybersecurity platform that empowers organizations to thrive in a world of evolving threats. Established in 2012, Logpoint has consistently championed the mission of fortifying the digital heart of organizations. With a foundation built on excellence, we’ve emerged as leaders in the ever-evolving world of cybersecurity. Our team, anchored by a unified vision, delivers advanced cybersecurity solutions that champion business growth.
