UK businesses are grappling with the challenge of moving generative AI from pilot initiatives to full-scale implementation, often hindered by concerns around privacy, quality, and cost.
As a result, many are shifting their focus towards AI agent systems – a trend that is set to accelerate as organisations increasingly recognise that ‘general intelligence’ alone is no longer enough. To meet their goals and ensure their data remains relevant, precise, and trustworthy, businesses are now prioritising ‘data intelligence’.
Understanding AI agent systems
With AI agent systems, it’s not about being ‘all-knowing’; it’s about ‘exactly knowing’.
General-purpose models like ChatGPT are extremely useful in many contexts, but when deployed on their own, they can struggle to deliver meaningful results for enterprises. Trained on broad datasets, these models are often too generic to fully grasp the unique language, processes, and data structures that define a specific business.
The real opportunity lies in using the right model for the right job. That might be an open source model fine-tuned for a particular domain, or a commercial model integrated into a broader framework of tools and components. What matters most is not the model itself, but how well it is applied to solve a clearly defined business problem.
AI agents enable this by coordinating specialised components – from retrieval and reasoning to validation and orchestration – each tailored to a specific function. For example, a customer support agent and a financial forecasting agent might operate within the same system, but each is optimised for its own domain and supported by relevant, private data.
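As a rough illustration, that coordination can be expressed as a simple routing layer. The sketch below is hypothetical – the agent fields and the route_request helper are stand-ins for whatever retrieval, reasoning, and validation components a real system would plug in, not any particular vendor's API.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Agent:
    """One domain-specific agent built from specialised components."""
    name: str
    retrieve: Callable[[str], List[str]]       # fetches relevant private data
    reason: Callable[[str, List[str]], str]    # drafts an answer from that data
    validate: Callable[[str], bool]            # checks the draft before release

def route_request(query: str, agents: Dict[str, Agent], domain: str) -> str:
    """Send the query to the agent optimised for its domain."""
    agent = agents[domain]
    context = agent.retrieve(query)        # retrieval component
    draft = agent.reason(query, context)   # reasoning component
    if not agent.validate(draft):          # validation component
        raise ValueError(f"{agent.name}: draft failed validation, escalate to a human")
    return draft
```

The point of the pattern is that a customer support agent and a forecasting agent can share the same orchestration code while each keeps its own data sources and checks.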
This targeted approach allows businesses to move beyond the limitations of general intelligence and towards data intelligence: the ability to reason over their own proprietary data and drive informed decisions.
Validation is essential for building trust
Trust is arguably one of the primary barriers for businesses looking to adopt AI. A recent survey indicated that workers in the UK and Ireland are 28% less likely to trust the use of AI than the global average. Concerns range from potential errors and bias to unpredictable outputs. AI agent systems address these concerns by integrating human oversight and validation processes, such as ‘human-in-the-loop’ grading systems and cross-checking tools. These layers of validation enhance trust in the system and encourage a more collaborative approach, enabling smoother adoption and improving overall outcomes.
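A minimal sketch of what such a ‘human-in-the-loop’ gate might look like in practice, assuming a hypothetical confidence score and cross-checking step: anything that fails the automated checks is held back and queued for a person to grade.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Review:
    query: str
    answer: str
    confidence: float            # e.g. a grader-assigned score in [0, 1]

@dataclass
class HumanInTheLoopGate:
    threshold: float = 0.8
    review_queue: List[Review] = field(default_factory=list)

    def release(self, review: Review, cross_check_passed: bool) -> Optional[str]:
        """Return the answer only if it clears the automated checks; otherwise hold it."""
        if cross_check_passed and review.confidence >= self.threshold:
            return review.answer
        self.review_queue.append(review)   # queued for a human grader to review
        return None
```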
For businesses, this employee trust translates into greater confidence in AI systems, which in turn improves both the user experience and the effectiveness of the solutions deployed.
Solid data is key to agentic success
At the heart of any successful AI agent system is a robust data foundation. Despite the growing pressure on businesses to adopt AI, many are still grappling with fragmented datasets and the complexities of unifying these assets. Furthermore, governance and security concerns are high on the agenda, as the increasing use of data can present new risks.
Despite these challenges, businesses are beginning to make progress, often starting with pilot projects that successfully demonstrate ROI before scaling further. This gradual, iterative approach allows companies to build the necessary infrastructure – people, processes, and technology – required for a successful, long-term AI transformation.
An essential aspect of AI transformation is bringing data intelligence to the forefront. This can be achieved through modern data architectures, such as data intelligence platforms, which unify, govern, and operationalise data in one place. These systems enable non-technical employees to engage with data through natural language interfaces and private data integrations, making AI more accessible and accelerating adoption across the organisation. According to a recent Economist Impact report, almost 60% of those surveyed predict that, within the next three years, natural language will become the primary or only method for non-technical staff to interact with complex datasets.
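As a loose illustration of the natural language idea, the interface can be reduced to translating a question into governed SQL and running it against unified data. The translate_to_sql function below is a hard-coded stand-in for the model a data intelligence platform would actually use; the table and question are invented for the example.

```python
import sqlite3

def translate_to_sql(question: str) -> str:
    """Stand-in for a natural-language-to-SQL model; only knows one question."""
    examples = {
        "how many orders shipped last quarter?":
            "SELECT COUNT(*) FROM orders WHERE shipped_quarter = '2025-Q1'",
    }
    return examples[question.strip().lower()]

def ask(question: str, conn: sqlite3.Connection) -> list:
    sql = translate_to_sql(question)       # natural language -> governed SQL
    return conn.execute(sql).fetchall()    # runs against the unified dataset
```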
The future: AI agent systems
The future of enterprise AI cannot rely exclusively on building larger, standalone models to achieve success; it’s crucial that businesses evaluate the best tool for their use case, and look to specialised AI agents that work together seamlessly. With a strong data platform, businesses can develop their own customised AI agent systems tailored to their specific objectives and industry. These systems can incorporate a variety of tools, such as vector databases for precise data retrieval, fine-tuning for domain-specific reasoning, and monitoring frameworks for safety and compliance.
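To make the retrieval piece concrete, here is a minimal sketch of vector-based lookup using cosine similarity over embeddings. The embed function is a deterministic random-projection placeholder for a real embedding model, and the class is illustrative rather than a production vector database.

```python
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    """Placeholder embedding: a deterministic random projection, not a real model."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.normal(size=dim)
    return v / np.linalg.norm(v)

class VectorStore:
    """Toy in-memory store ranking documents by cosine similarity to the query."""
    def __init__(self) -> None:
        self.docs: list = []
        self.vectors: list = []

    def add(self, doc: str) -> None:
        self.docs.append(doc)
        self.vectors.append(embed(doc))

    def search(self, query: str, k: int = 3) -> list:
        q = embed(query)
        scores = np.array([v @ q for v in self.vectors])   # cosine similarity of unit vectors
        top = scores.argsort()[::-1][:k]
        return [self.docs[i] for i in top]
```

In a real deployment the placeholder embedding would be replaced by the platform's embedding model, and fine-tuning and monitoring would sit around the same retrieval step.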
AI agent systems ultimately represent a transformative shift in the adoption of generative AI. These systems don’t just solve problems – they build trust, drive value, and push the boundaries of what AI can accomplish. For enterprises ready to embrace this next generation of AI, the future isn’t solely about “general intelligence” but about ushering in a new era of data intelligence.
About the Author
Courtney Bennett, Director of Field Engineering at Databricks. Databricks is the Data and AI company. More than 10,000 organizations worldwide — including Block, Comcast, Condé Nast, Rivian, Shell and over 60% of the Fortune 500 — rely on the Databricks Data Intelligence Platform to take control of their data and put it to work with AI. Databricks is headquartered in San Francisco, with offices around the globe, and was founded by the original creators of Lakehouse, Apache Spark™, Delta Lake and MLflow.
Featured Image: LeonKino