Three ways firms can protect themselves from AI cybersecurity risks

Across the world, we are seeing a rise in the number of cyber-attacks, with research finding an 8% spike in global cyber-attacks in the second quarter of this year – the most significant increase in the last two years.

Accelerated digitalisation across our professional and personal lives, increasingly sophisticated attack techniques used by cybercriminals as well as the easy accessibility of hacking tools, such as malware and services on the dark web, are all factors lowering the barrier to entry into cybercrime.

Now, generative AI is adding another layer to cyber threats, providing new attack methods for cybercriminals. AI in cybersecurity is a double-edged sword: it enables businesses to detect and respond to cyber threats more effectively, while also expanding the attack surface. Here we look at three key ways businesses can protect themselves against AI-related cyber threats: prioritising staff training, investing in research and development, and implementing controls on employees sharing sensitive information with AI chatbots.

Prioritising staff training

Firstly, businesses need to recognise that employees are the first and last line of defence. Employees interact daily with corporate IT systems, data and external entities, meaning they play a crucial role in identifying and responding to cyber threats. However, according to a UK Government and Ipsos report this year, 50% of UK businesses – some 739,000 in total – lack basic cyber security skills training. At the same time, demand for cyber security professionals is outstripping supply, leaving teams understaffed, and attackers are capitalising on the resulting vulnerabilities.

To address this, companies need to prioritise upskilling in cybersecurity at all levels. Businesses will need an effective strategy: assessing staff’s current knowledge and awareness of AI and cyber threats, then educating employees on AI-powered threats such as AI-driven phishing. Investing in employee cybersecurity training is essential to stay ahead of developments in the cyber landscape and to understand the impact of the latest technologies, such as AI.

Investing in research and development

Next, given the rapidly evolving threat landscape, businesses need to invest continuously in research and development to bolster their cybersecurity solutions. Across the world, attackers are constantly exploiting new vulnerabilities and developing new techniques. One example is the rise in prompt injection attacks, whereby an attacker crafts input that manipulates an AI chatbot into ignoring its intended instructions – often to enable scams and data theft. With the National Cyber Security Centre (NCSC) warning that malicious prompt injections will grow, the onus is on businesses to develop innovative security solutions to tackle potential AI-related threats.
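To make the mechanics concrete, the sketch below illustrates why naive prompt construction is vulnerable: trusted instructions and attacker-supplied text are concatenated into a single string, so the model has no reliable way to tell them apart, and a simple keyword screen can at best flag the most obvious override attempts. The prompt text and phrase list here are hypothetical, chosen purely for illustration.

```python
# Illustrative sketch of the prompt injection problem and a naive screen.
# The system prompt and phrase list are assumptions for this example,
# not a recommended or production defence.

SYSTEM_PROMPT = "You are a support bot. Only answer billing questions."

def build_prompt(user_input: str) -> str:
    # Naive concatenation: attacker text sits in the same string as the
    # trusted instructions, with nothing separating the two.
    return f"{SYSTEM_PROMPT}\nUser: {user_input}"

SUSPICIOUS_PHRASES = (
    "ignore previous instructions",
    "disregard the above",
    "reveal your system prompt",
)

def looks_like_injection(user_input: str) -> bool:
    """Coarse screen: flag well-known override phrasings."""
    lowered = user_input.lower()
    return any(phrase in lowered for phrase in SUSPICIOUS_PHRASES)
```

A keyword screen like this is trivially bypassed by rephrasing, which is precisely why these attacks are expected to grow and why ongoing research and development into more robust defences matters.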

So what should businesses prioritise when it comes to investing in research and development? Companies can look at exploring custom software, hardware and other security solutions, given that unique architectures are more challenging to exploit. Prioritising custom systems will allow companies to develop solutions that are tailored to their specific operational needs and individual risk profile, enabling them to be more successful in their cybersecurity efforts.

Controlling the sharing of sensitive data

Finally, across industries, employees are entering potentially sensitive business data into generative AI tools like ChatGPT to help them draft and edit documents. However, many are not fully aware of the security implications. According to research, 15% of employees regularly paste company data into ChatGPT, with over a quarter of that data classed as sensitive – putting employers at risk.

By sharing regulated and financial data, employees are giving third parties access to sensitive materials and compromising companies’ control over how information is handled. In order to strike the right balance between safeguarding data and productivity, organisations will need to impose access controls on who can read and share sensitive data, as well as provide comprehensive training on best practices.
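As one illustration of such a control, a minimal pre-submission filter might redact obviously sensitive tokens before text leaves the organisation for an external AI tool. The patterns and labels below are assumptions made for this sketch; production data loss prevention tooling covers far more cases and formats.

```python
import re

# Hypothetical pre-submission filter: mask obvious sensitive tokens
# (email addresses, card-like number runs) before text is shared with
# an external AI tool. Patterns are illustrative only.

PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text: str) -> str:
    """Replace each matched token with a labelled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text
```

A filter of this kind complements, rather than replaces, access controls and training: it reduces accidental leakage while leaving employees free to use the tools for routine drafting.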

As businesses enter uncharted territory with the increasing usage of generative AI tools, they need to focus on having appropriate data protection and security measures in place. Companies will also need to engage employees in regular training sessions on the importance of data security, and develop clear data security policies and procedures. Businesses can go a step further and restrict employees’ access to certain types of sensitive information. Ultimately, companies will need the right guardrails in place to reap the benefits of AI without compromising their security.


About the Author

Jesper Trolle is CEO of Exclusive Networks. Exclusive Networks is a global trusted cybersecurity specialist for digital infrastructure helping to drive the transition to a totally trusted digital future for all people and organisations. Our distinctive approach to distribution gives partners more opportunity and more customer relevance. Our specialism is their strength – equipping them to capitalise on rapidly evolving technologies and transformative business models.
