Across the UK, companies are racing to adopt AI strategies to avoid falling behind.
From the use of ‘distillation’ to produce cheaper models to OpenAI launching its largest-ever model, a flurry of news stories over the past month all point to AI’s rapid expansion. But one alarming case – a model trained on flawed code that ended up praising Hitler and promoting self-harm – highlights a deeper issue: AI is only as good as the data it’s trained on.
The real risk isn’t a lack of regulation; it’s the rush to deploy AI built on flawed, recycled data. This can lead to dangerous, unintended consequences that undermine AI’s reliability and credibility.
Garbage in, garbage out
AI follows a simple rule: “garbage in, garbage out”. If the data used to train a model is outdated, duplicated or incorrect, whatever results it yields will be equally unreliable, making it essentially useless to your business. Yet as AI adoption accelerates, a growing share of models are being trained on AI-generated data rather than original human input or data from the business itself. This creates a self-reinforcing loop in which AI consumes its own output, leading to an inevitable decay in accuracy and reliability – a phenomenon researchers have dubbed ‘model collapse’.
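To see why this loop is so corrosive, consider a toy simulation. This is a purely illustrative sketch, not how any production model is trained: it uses a simple Gaussian fit as a stand-in for a real model, with each “generation” trained only on samples produced by the generation before it.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Generation 0: "real-world" data drawn from a known distribution.
data = rng.normal(loc=0.0, scale=1.0, size=20)

# Each generation "trains" (fits a mean and standard deviation) on the
# previous generation's output, then emits fresh synthetic data from
# that fit. No new real-world data ever enters the loop.
for gen in range(1, 51):
    mu, sigma = data.mean(), data.std()    # train on current data
    data = rng.normal(mu, sigma, size=20)  # next gen sees only model output
    if gen % 10 == 0:
        print(f"generation {gen:2d}: mean={mu:+.3f}, std={sigma:.3f}")
```

Run it and the fitted spread tends to shrink toward zero while the mean wanders away from the original data: each pass loses a little information about the real distribution. Language models are vastly more complex, but the mechanism is the same: without a steady supply of genuine data, each training cycle amplifies the previous cycle’s errors and narrows its diversity.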
We’ve already seen warning signs. Generative AI tools have fabricated facts, creating false but convincing academic references that raise concerns about their reliability in professional settings. AI trained on historical recruitment data has reinforced gender and racial discrimination, as seen in Amazon’s AI-driven hiring tool. Meanwhile, as AI-generated text finds its way into news articles and research papers, future models risk training on misleading information, compounding errors even further.
Quality over speed
The rush to implement AI without addressing data quality is more than just a technical issue; it has real-world consequences. What happens when governments, banks, and healthcare providers make critical decisions based on unreliable AI predictions?
We’ve already seen AI hallucinate facts and reinforce biases. If public institutions and businesses blindly trust AI outputs, the long-term damage to trust and decision-making could be irreversible.
The UK is taking a strategic approach by delaying its planned AI safety legislation, choosing instead to focus on developing world-leading AI technology before introducing strict regulation. This is an opportunity not just to lead in AI innovation, but to set a global standard for data quality and governance. Rather than enforcing rigid rules on a technology that is still evolving, the UK is creating an AI-friendly environment: giving companies the freedom to experiment, investing in education and research, and designing regulation based on real experience rather than fear and speculation. Now it’s time for the UK to also focus on data quality.
But delaying regulation alone isn’t enough. If the UK wants to stay competitive while avoiding AI’s pitfalls, it must go further by enforcing data integrity standards. AI models should not be trained on low-quality, biased, or synthetic data. A national AI strategy must include transparency requirements for training datasets, ensuring accountability in AI-driven decisions.
Shared responsibility
The responsibility does not rest on government alone; tech companies must do their part. They need to ensure that their AI models are continuously updated with high-quality, real-world data, not just regurgitated AI-generated content.
Another challenge is the lack of diversity in AI development. Today, most AI models are built by a narrow demographic, increasing the risk of unintentional bias. According to Towards Data Science, women are severely underrepresented in AI education and careers, leading to skewed perspectives in AI training data. A more diverse AI workforce is essential to creating models that are truly representative and fair.
Have the right focus
The UK cannot expect to lead in AI without first addressing the critical issue of data quality. A rushed AI strategy focused purely on speed risks entrenching bias and misinformation and degrading accuracy.
Instead, the UK should focus on ensuring AI models are trained on verified, real-world data rather than AI-generated content. It should develop clear guidelines for AI governance and data integrity while encouraging innovation without compromising long-term AI reliability.
By focusing on quality over quantity, the UK has a chance to build an AI ecosystem that is not only innovative but also accurate, fair, and globally trusted. Rushing ahead without a strong foundation isn’t just inefficient—it’s a risk we can’t afford to take.
About the Author
Erik Wikander is founder and CEO of Wilgot.ai, which is reimagining high-quality content creation for a new era of AI search. The platform bridges the gap between content creation and SEO, enabling teams to generate exceptional, search-optimized content that amplifies human creativity. Wilgot is purpose-built for today’s multi-platform, AI-driven search landscape rather than constrained by outdated, Google-only strategies, giving marketers the tools they need to scale future-proofed content with confidence.