AI under siege: Protecting business-critical models from cyberattacks

The impact that AI is having on our daily lives only continues to intensify.

Just 10 years ago, we witnessed the debut of Amazon Alexa and the introduction of AI-powered Echo devices into our homes. Around the same time, Tesla rolled out its Autopilot software with autonomous steering, braking, and speed adjustments, showcasing the potential power of machine learning in our cars.

Fast forward to the present day, and AI’s influence has expanded dramatically.

More recently, generative AI platforms such as ChatGPT have captured global attention, driving adoption to new heights. Indeed, according to Microsoft, AI usage at work nearly doubled in the six-month period ending May 2024, with 75% of global knowledge workers utilising these solutions.

Consequently, businesses of all shapes and sizes are now leveraging AI for a variety of purposes, from inventory management and demand forecasting to personalised experiences and automated document processing. The potential for enterprises is significant: in the same Microsoft study, users say AI helps them save time (90%), focus on their most important work (85%), be more creative (84%), and enjoy their work more (83%).

From productivity enhancements and increased sales to cost cuts, improved customer relationships, streamlined processes, and empowered decision-making, the potential gains are wide-ranging. McKinsey Global Institute estimates that generative AI alone could add between $2.6 trillion and $4.4 trillion in value to the global economy annually.

Within this context, it is vital that companies embrace AI and work to stay ahead of the curve. They risk falling behind in an increasingly technologically savvy and competitive landscape if they don’t.

The double-edged sword of AI in security

However, these opportunities come with challenges. Despite AI's promise to make life easier for enterprises in many areas, it will undoubtedly complicate matters in others.

Cybersecurity is one function already feeling AI's impact, and that impact is increasingly described as a double-edged sword.

On the one hand, AI has already revolutionised protective capabilities, with new automated tools and systems enabling security professionals to better manage their workloads, more easily detect anomalies, and rapidly respond to potentially malicious behaviours. However, we’re also seeing cybercriminals leveraging AI to automate, scale, diversify, and advance their threat techniques.
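To illustrate the defensive side, below is a minimal sketch of the kind of unsupervised anomaly detection that underpins many AI-assisted security tools. It assumes scikit-learn's IsolationForest, and the login-event features are purely hypothetical:

```python
# Minimal sketch: flagging anomalous login events with an unsupervised model.
# Assumes scikit-learn; the feature set below is purely illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical features per event: hour of day, failed attempts, bytes transferred
normal_events = rng.normal(loc=[12, 1, 500], scale=[3, 1, 100], size=(500, 3))
suspicious = np.array([[3, 25, 9000], [2, 40, 12000]])  # off-hours, many failures
events = np.vstack([normal_events, suspicious])

# Train the detector and score every event (-1 = anomaly, 1 = normal)
model = IsolationForest(contamination=0.01, random_state=0).fit(events)
flags = model.predict(events)

print(f"Flagged {np.sum(flags == -1)} of {len(events)} events for analyst review")
```

In a real deployment, a tool like this would sit alongside rules-based detection and route flagged events to analysts for triage, rather than acting autonomously.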

Much of this activity centres on using AI to enhance attacks, from developing more sophisticated malware and advanced persistent threats (APTs) to exploiting deepfake technologies for highly convincing social engineering schemes. Increasingly, though, threat actors are also targeting companies' own AI programs, working to manipulate systems into producing incorrect or biased outcomes, or even generating offensive content.

Here, a variety of different attack methods have been observed, including:

  • Data poisoning: Attackers introduce malicious or biased data into the training set, causing AI models to make incorrect predictions or exhibit biased behaviours (a minimal illustration follows this list).
  • Model inversion: Threat actors seek to reconstruct sensitive training data, resulting in data breaches and privacy violations.
  • Model manipulation: Attackers attempt to embed malicious behaviours within AI models through methods such as Trojan attacks, which can then be triggered under specific conditions to cause unexpected and harmful actions.
  • Evasion attacks: Threat actors manipulate input data to evade detection by AI-based security systems.
  • Model theft: Attackers reverse-engineer AI models to create competing versions or understand their weaknesses.
  • Output manipulation: Attackers aim to generate offensive or harmful content, which is especially concerning for AI systems involved in content generation or moderation.
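To make the first of these threats concrete, here is a toy sketch of label-flipping data poisoning. It assumes scikit-learn and a synthetic dataset, and is illustrative rather than a depiction of any real attack: an adversary who can tamper with training data flips a fraction of the labels, degrading the model trained on them.

```python
# Minimal, illustrative sketch of label-flipping data poisoning.
# Assumes scikit-learn; the dataset is synthetic.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic binary classification task standing in for a business model
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Baseline: model trained on clean data
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Attack: flip 30% of the training labels before the model is (re)trained
rng = np.random.default_rng(0)
poisoned_labels = y_train.copy()
idx = rng.choice(len(poisoned_labels), size=int(0.3 * len(poisoned_labels)), replace=False)
poisoned_labels[idx] = 1 - poisoned_labels[idx]

poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, poisoned_labels)

print(f"Test accuracy, clean training data:    {clean_model.score(X_test, y_test):.2f}")
print(f"Test accuracy, poisoned training data: {poisoned_model.score(X_test, y_test):.2f}")
```

In practice, defences such as data provenance checks, outlier filtering, and monitoring for accuracy drift after retraining help mitigate this class of attack.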

ISO 42001 and ISO 27001: leveraging key guidance frameworks

With threat actors already working to exploit AI programs and systems in so many different ways, it is vital that enterprises build robust defences into their models, ensuring they can’t be compromised or misused maliciously.

Of course, knowing exactly how to do this is easier said than done. That said, there are two key standards enterprises can look to, both specifically designed to guide the development and management of AI models in a more secure and effective manner.

The first of these is ISO 42001, a standard that guides organisations in adopting best practices across risk management, roles and responsibilities, security controls, and ethical practices. It can also help them identify AI-specific risks, develop mitigation strategies, and continuously enhance AI security. By achieving certification, firms can demonstrate that they are managing and implementing AI systems responsibly.

Second, we have ISO 27001, a comprehensive framework for managing information security risks through regular assessments, controls, incident response plans, and compliance measures. Critically, certification can aid firms in protecting their sensitive data and AI models from unauthorised access, ensuring confidentiality and integrity with a focus on encryption, and generally fostering a security-conscious culture.

By leveraging the best practices advised by these two key standards, companies can develop and continually improve robust security frameworks specifically designed to protect their AI systems, while also ensuring compliance with key regulations.

With this in mind, there is also the EU Artificial Intelligence Act, published in the Official Journal of the EU on 12 July 2024. The Act prohibits certain uses of AI and sets out regulations for “high-risk” AI systems, certain AI systems that pose transparency risks, and general-purpose AI (GPAI) models. Similarly, in the recent King’s Speech, the UK Government pledged to “seek to establish the appropriate legislation to place requirements on those working to develop the most powerful artificial intelligence models”, so ensuring that businesses are compliant is key.

Cultivating a security-first culture built on best practices

While it is vital for security professionals to embrace the guidance these standards offer, responsibility should extend far beyond the Security Operations Centre (SOC). Individuals throughout the organisation need to be educated on specific cybersecurity best practices if defences are to be truly effective.

Continuous learning is essential. While training programmes should continue to encompass traditional aspects, such as identifying phishing emails and proper data handling practices, they should also evolve alongside AI advancements to address emerging risks and challenges. Ethical considerations, such as bias detection and mitigation, as well as training on the threat of deepfakes, are relevant examples that firms should incorporate into their cybersecurity training programmes – and new areas of relevance will undoubtedly continue to emerge as the threat landscape shifts and evolves.

By staying abreast of critical trends and regularly updating training programmes to include best practices and insights for mitigating risks, enterprises will be well-positioned to enhance their cybersecurity posture and better protect their key assets on an ongoing basis.

Keeping a finger on the pulse in this way can feel daunting. However, complying with key standards will always be an excellent place to start. These standards continually evolve, providing relevant and up-to-date guidance that directly addresses the most prominent risks facing companies. They enable firms to establish a strong foundational security posture, ensuring they are equipped to handle current threats and adapt to new ones as they emerge.

Conversely, neglecting to comply with these standards can lead to operational inefficiencies, higher costs, challenges in decision-making, and, ultimately, AI systems that are vulnerable to adversarial attacks. Additionally, with increasing regulations around AI ethics and data protection, non-compliance can result in legal penalties, fines, and a loss of customer trust.

Firms need to ensure they land on the right side of this fence – compliance must take priority if operations and reputations are to be successfully safeguarded.


About the Author

Sam Peters is Chief Product Officer at ISMS.online. ISMS.online helps hundreds of companies around the world with their information security, data privacy and other compliance needs. The powerful ISMS.online platform simplifies the process of getting compliant with a range of standards and regulations including ISO 27001, GDPR, ISO 27701 and many more.
