While the pace of technological progress we’ve seen in recent years is truly difficult to fathom, there’s little to suggest that things will slow down anytime soon.
Gartner’s Top 10 Strategic Technology Trends are a case in point. For 2025, the consultancy has predicted that agentic AI, post-quantum cryptography, spatial computing, polyfunctional robots and technologically enabled neurological enhancement will be the trends of the year – terms that, right now at least, are simply alien to many people.
It’s difficult to imagine these innovations entering the mainstream in the coming months. However, having witnessed the rapid trajectory of other technologies, it’s equally hard to argue that they won’t.
Take artificial intelligence as an example. While self-driving cars and autonomous robots were largely just ideas a few years ago – the poster children of Hollywood sci-fi – these technologies are now part of our daily lives, with a range of AI applications actively reshaping industries.
Generally, the private sector has led the charge with AI adoption, from finance firms employing automated fraud detection and algorithmic trading to manufacturing plants leveraging AI-enabled predictive maintenance and robotics techniques. But now, the public sector and governments are also beginning to follow suit.
In the UK, the Parliamentary Digital Service (PDS) is currently in the process of exploring AI tools like Microsoft’s Copilot to assist MPs and their staff with administrative and constituency duties.
It’s a notable headline – one that reflects the growing desire across all parts of society to leverage technology to enhance productivity, increase efficiency, optimise decision-making and improve service delivery. However, despite the undoubted and widespread benefits that AI offers, there are several key factors that public and private sector players alike need to consider when developing or deploying AI systems.
Yes, technology continues to improve our lives in many ways. However, the outcomes are not always positive. To mitigate the risks, good governance is just as important as the technologies themselves.
The role of regulation: A look at the EU AI Act
In the case of AI, data privacy concerns are gaining increasing attention as individuals and organisations seek assurances that sensitive and personal information won’t be misused by AI systems.
As a result, we’re increasingly seeing the introduction of new regulations specifically designed to drive ethical AI development and deployment, including by prioritising the safeguarding of sensitive information.
Here, the EU AI Act stands as a prime example. As one of the first and most comprehensive such regulations, it has placed the protection of digital rights and personal data front and centre, with focus areas including safety, fairness and the elimination of algorithmic bias.
Indeed, the Act has been praised for these emphases. Yet equally, it has its critics. Specifically, those opposed to the legislation argue that it is too strict, dissuading companies from investing in AI development owing to excessive and complicated compliance demands.
Interestingly, it appears that frictions have already begun to emerge. While more than 100 companies, including Amazon, Google and Microsoft, have signed the EU’s AI Pact, committing to responsible AI use, Apple and Meta’s refusal to do so highlights growing tensions with regulators. Further, in June 2024, Apple announced that it would delay the European release of three new features, including its Apple Intelligence AI capabilities, citing “regulatory uncertainties”.
ISO 42001: A guiding framework for effective AI development
Clearly, the challenge at hand is to strike a balance: sustaining public safety without impeding organisations’ ability to achieve compliance or pursue innovation.
For the UK, this makes the PDS’s decision to explore AI tools a particularly interesting development.
Since it is the government’s responsibility to create an environment that encourages AI development without compromising individual rights or societal wellbeing, all eyes will be on the PDS, with parliament’s own AI adoption likely to set a benchmark for others to follow.
Of course, the government – like many other organisations – will have much to consider in order to get this right. It must assess how to ensure the safety of important government and public data, as well as how to adhere to GDPR and other relevant regulations. It must also ensure that AI tools operate with transparent, auditable decision-making processes, both to build trust and to help constituents understand how AI is being used to serve them. And it must find ways to avoid algorithmic bias, which is critical in public-sector AI applications.
So, how exactly can all of this be achieved?
For both public and private sector entities embarking on a journey towards greater AI adoption throughout their operations, established frameworks designed to embed best practices can prove invaluable.
Enter ISO 42001 – a standard specifically created to guide organisations that design, develop and deploy AI systems on factors such as transparency, accountability, bias identification and mitigation, safety and privacy.
Its goal is to ensure that AI systems are built and implemented safely and ethically. By using its guiding framework, organisations can follow a roadmap that enables them to address data privacy, security, and ethical concerns without stifling the innovative benefits that AI offers.
Compliance should be viewed as a catalyst, not a roadblock
Though few organisations will relish the idea of more regulation and audits (albeit voluntary in this case), there are several good reasons to move forward with ISO 42001 certification sooner rather than later.
Not only does embracing ISO 42001 give companies a logical approach through which to identify, evaluate and mitigate the risks associated with AI, protecting themselves from potential harm or scandal; aligning with the standard also signals an organisation’s commitment to responsible AI usage, enhancing the stakeholder trust that can set firms apart from their competitors.
By incorporating the best practices that ISO 42001 promotes, enterprises will be well placed to streamline their AI processes, identify and rectify vulnerabilities earlier, and reduce the potential financial and reputational costs associated with AI failures. Further, given its emerging recognition, it is likely to become a key benchmark for AI management systems in the future.
So, how can compliance with ISO 42001 be achieved? There are several items on the compliance checklist that need to be ticked off, including:
- Conducting a gap analysis: comparing current practices against ISO 42001 requirements to understand where changes are needed (a simple sketch of this step follows below).
- Developing an AI management system (AIMS): integrating an AIMS with existing organisational processes to achieve continuous improvement and alignment with ISO standards.
- Performing risk and impact assessments: regularly assessing AI systems for potential risks and impacts on individuals and society.
- Implementing ethical AI practices: developing policies and procedures that address AI ethics, data protection and privacy.
- Preparing for certification: documenting all processes and preparing for the external audit.
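To make the first of these steps a little more concrete, the short Python sketch below shows one way a gap analysis might be tracked. It is purely illustrative: the Control structure, the clause references and the evidence paths are assumptions made for the example, not an authoritative mapping of ISO 42001’s requirements.

```python
from dataclasses import dataclass

@dataclass
class Control:
    clause: str        # illustrative clause reference, not the standard's text
    requirement: str   # what the clause asks for, paraphrased
    implemented: bool  # does a current practice already satisfy it?
    evidence: str      # where supporting documentation lives, if anywhere

# Hypothetical register: the clause numbers follow the harmonised
# management-system structure, but the wording is an illustrative paraphrase.
register = [
    Control("5.2", "AI policy defined and approved by leadership",
            True, "policies/ai-policy.md"),
    Control("6.1", "Process for assessing AI risks and opportunities",
            False, ""),
    Control("7.4", "Plan for communicating AI practices to stakeholders",
            False, ""),
]

# The "gap" in a gap analysis: requirements not yet met by current practice.
gaps = [c for c in register if not c.implemented]
print(f"{len(gaps)} of {len(register)} sampled controls need attention:")
for c in gaps:
    print(f"  Clause {c.clause}: {c.requirement}")
```

In practice, most organisations run this kind of exercise in dedicated compliance tooling or a structured spreadsheet; the point is simply that every requirement needs an owner, a status and supporting evidence before certification can be pursued.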
These steps may seem daunting, and it can be difficult to know where to start. However, with the right guidance and support from a specialist partner, firms will be well placed to navigate the process with confidence.
From leveraging pre-configured templates and frameworks that can facilitate rapid certification to dynamic risk management tools tailored for AI-specific risks and efficient documentation management solutions, the right kind of compliance-as-a-service (CaaS) support can go a long way in helping organisations to achieve ISO 42001 certification.
In doing so, companies will in turn be able to demonstrate their commitment to ethical AI use, instil confidence in stakeholders, and achieve competitive differentiation.
About the Author
Sam Peters is Chief Product Officer at ISMS.online. ISMS.online helps thousands of companies around the world with their information security, data privacy and other compliance needs. The powerful ISMS.online platform simplifies the process of getting compliant with a range of standards and regulations including ISO 27001, GDPR, ISO 27701 and many more. With ISMS.online you can make up to 81% progress from the moment you log in. Our Assured Results Method is there to guide you every step of the way and if you need any guidance then the Virtual Coach or our team of compliance experts are available to help you succeed. Our customers range from larger enterprises looking to improve their management systems, through to small businesses aiming to achieve standards like ISO 27001 for the first time. Whatever your goals, our platform is designed with all the tools you need and can grow alongside your business.
Featured image: Adobe Stock