The UK’s approach to introducing tech in education is a lesson in inclusive AI

The Government’s AI Opportunities Action Plan has highlighted the importance of improving public services with the support of artificial intelligence (AI).

This goal paves the way for more efficient public services and a better experience for all citizens. But there are many factors to weigh when we consider the role AI will play in our future. And one of those is ensuring AI is inclusive.

One important public service in need of more efficiency is education. The UK Government wants AI and other smart technologies to help modernise the UK’s education system. Speaking recently, Education Secretary Bridget Phillipson explained how AI could reduce the workload for teachers while helping to ease the recruitment and retention crisis within the profession.

But while the secretary of state is keen to promote the uptake of AI in the classroom, she also moved to reassure parents that any tech used in schools would be safe, announcing a ‘new set of big tech-backed AI safety expectations, outlining how AI tools can be used safely in schools’.

The guidance – laid out in Generative AI: Product Safety Expectations – is aimed at education technology developers and other suppliers to schools and colleges.

It sets out a series of guardrails to help ensure that AI in classrooms is designed with fairness and security in mind. It touches on everything from exposure to harmful content to issues such as privacy and governance. It also addresses the need for AI to avoid bias and discrimination.

As such, it’s an example of ‘inclusive AI’, a methodology or design principle aimed at ensuring AI systems are fair, unbiased and accessible to all users.

Inclusive AI must go hand in hand with advances in technology

As an approach to the development of AI, it’s something I feel strongly about – not least because it chimes with many of the themes spelt out in AI by Design, which encapsulates our own approach to AI development.

Made up of four foundational principles, AI by Design creates a framework to help our customers establish a secure, productive and enduring relationship with our AI-driven solutions.

It focuses on issues such as privacy and security, accountability and fairness, and transparency and trust.

Crucially, it’s not designed to be a static framework left to collect dust on the shelf. Instead, it’s a dynamic and evolving set of guidelines that will adapt as we learn more about AI.

What’s more, I can see many similarities between this approach and the guidelines set out by the Government for schools – not least that both share a common goal to ensure that AI systems are built with safety, security and trust at their core.

For example, the Government’s framework highlights the risk of bias in AI-generated educational content, which could lead to misinformation or unequal learning experiences. Likewise, AI by Design follows a similar fairness-first approach, ensuring that AI-driven IT solutions operate transparently, avoiding systemic biases that could impact decision-making.

The Government’s framework also calls for robust data protection in AI tools used in classrooms, preventing the misuse of student information.

The concerns about AI are real and must be addressed

It’s not just the tech community that is calling for the implementation of these guiding principles – or even government departments. The public has also voiced concerns.

In the latest Public attitudes to data and AI: Tracker survey (Wave 4) report, published in December 2024, it’s clear that the public wants greater reassurance about the impact AI is having on all our lives.

While the study found most adults can now explain AI to some degree, public perceptions are still dominated by concerns about issues such as bias, transparency and inclusivity.

If teachers are going to be using chatbots to assist with learning, automated systems to personalise education and generative AI to create lesson plans or carry out other administrative tasks, then these anxieties are likely to persist. To address them, any use of AI to raise educational standards must be safe, fair and beneficial to both students and teachers.


About the Author

Sascha Giese is Global Tech Evangelist at SolarWinds. SolarWinds began with two IT professionals trying to solve complex problems in the simplest way. Today, we still take pride in developing deep, real-world understanding of the challenges our customers face. That’s how we deliver intuitive, time-saving solutions and speed-to-value like nobody else.
