Do Search Engines Hold the Solution to Creating Trustworthy AI?

Generative AI’s problem in a business context is a simple one: trust.

Because of AI ‘hallucinations’, where chatbots can create false or misleading answers, business leaders are hesitant to adopt the technology, despite its benefits for efficiency, productivity and customer service. Since ChatGPT arrived in late 2022, business leaders have been grappling with the problem of creating generative AI applications that offer accurate answers. But to understand this issue, it helps to look at both the struggles and successes of another common technology: internet search engines.

Internet search underpins the way all of us use technology, from the way we shop to the way we discover new applications. Search engines sift through vast amounts of online data to deliver answers, with varying degrees of accuracy. By understanding the reasons behind these strengths and weaknesses, business leaders can adopt the best approaches from search engines and adapt them for gen AI in business, solving the technology’s ‘trust’ problem.

Finding accuracy

One area where search engines perform well is sifting through large volumes of information and identifying the highest-quality sources. For example, by looking at the number and quality of links to a web page, search engines return the web pages that are most likely to be trustworthy. Search engines also favour domains that are known to be trustworthy, such as government websites, or established news sources.

In business, generative AI apps can emulate these ranking techniques to return reliable results. They should favour the sources of company data that have been most frequently accessed, searched or shared. And they should strongly favour sources that are known to be trustworthy, such as corporate training manuals or a human resources database, while deprioritising less reliable sources.
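To make this concrete, here is a minimal sketch of source ranking along those lines. The source types, trust weights and scoring formula are illustrative assumptions, not a prescribed implementation; a real system would tune these against its own data.

```python
from dataclasses import dataclass

# Hypothetical trust weights per source type -- illustrative values only.
TRUST_WEIGHTS = {
    "training_manual": 1.0,   # vetted corporate documentation
    "hr_database": 0.9,       # maintained system of record
    "team_wiki": 0.6,         # useful but less rigorously reviewed
    "chat_export": 0.3,       # informal, often unverified
}

@dataclass
class Source:
    name: str
    source_type: str
    access_count: int  # how often the document was accessed, searched or shared

def rank_sources(sources, max_access):
    """Score each source by popularity scaled by a trust weight,
    mirroring how search engines combine link counts with domain authority."""
    def score(s):
        popularity = s.access_count / max_access if max_access else 0.0
        trust = TRUST_WEIGHTS.get(s.source_type, 0.1)
        return popularity * trust
    return sorted(sources, key=score, reverse=True)
```

With this scoring, a moderately accessed training manual outranks a heavily accessed chat export, which is the behaviour the ranking approach above is after.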

Ranking reliability

Many foundational large language models (LLMs) have been trained on the wider Internet, which as we all know contains both reliable and unreliable information. This means that they’re able to address questions on a wide variety of topics, but they have yet to develop the more mature, sophisticated ranking methods that search engines use to refine their results. That’s one reason why many reputable LLMs can hallucinate and provide incorrect answers.

One lesson here is that developers should think of an LLM as a language interlocutor, rather than a source of truth. In other words, LLMs are strong at understanding language and formulating responses, but they should not be used as a canonical source of knowledge. To address this problem, many businesses train their LLMs on their own corporate data and on vetted third-party data sets, minimising the presence of bad data. By adopting the ranking techniques of search engines and favouring high-quality data sources, AI-powered applications for businesses become far more reliable.

Conquering context

Search has become quite accomplished at understanding context to resolve ambiguous queries. For example, a search term like “swift” can have multiple meanings – the author, the programming language, the banking system, the pop sensation, and so on. Search engines look at factors like geographic location and other terms in the search query to determine the user’s intent and provide the most relevant answer.

However, when a search engine cannot provide the right answer, whether because it lacks sufficient context or because no page with the answer exists, it will often attempt one anyway. For example, if you ask a search engine, “What will the economy be like 100 years from now?” there may be no reliable answer available. But search engines are built on a philosophy of providing an answer in almost all cases, even when they lack a high degree of confidence.

This is unacceptable for many business use cases, and so generative AI applications need a layer between the search, or prompt, interface and the LLM that studies the possible contexts and determines if it can provide an accurate answer or not. If this layer finds that it cannot provide the answer with a high degree of confidence, it needs to disclose this to the user. This greatly reduces the likelihood of a wrong answer, helps to build trust with the user, and can provide them with an option to provide additional context so that the gen AI app can produce a confident result.
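Such a layer might be sketched as follows. The `retrieve` and `generate` callables are placeholders for whatever retrieval and model APIs a team actually uses, and the confidence threshold is an assumed tunable, not a recommended value.

```python
def answer_with_confidence(question, retrieve, generate, threshold=0.7):
    """Gate the LLM behind a retrieval-confidence check.

    `retrieve` returns (passages, confidence in [0, 1]); `generate` is the
    LLM call. If confidence falls below the threshold, disclose that to the
    user and invite additional context instead of guessing.
    """
    passages, confidence = retrieve(question)
    if confidence < threshold:
        return {
            "answer": None,
            "confident": False,
            "message": ("I can't answer this with high confidence. "
                        "Could you provide more context?"),
        }
    return {"answer": generate(question, passages), "confident": True}
```

The key design choice is that a low-confidence result returns a disclosure rather than a generated answer, trading coverage for trustworthiness, which is the opposite of the search-engine philosophy described above.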

A transparent approach

Explainability is another weak area for search engines, but one that generative AI apps must master to build greater trust. Just as secondary school teachers tell their students to show their work and cite sources, generative AI applications must do the same. By disclosing the sources of information, users can see where information came from and why they should trust it. Some of the public LLMs have started to provide this transparency, and it should be a foundational element of generative AI-powered tools used in business.

AI without errors

Creating AI applications that make few or no mistakes is challenging, but doing so unlocks enormous benefits for businesses. The right approach is to deal with AI tools with open eyes and be alert to potential problems. When we use the internet, we have all learned to have a healthy scepticism around online information and its sources. Business leaders should take the same approach to building trustworthy AI: demanding transparency, seeking explainability, and rooting out bias whenever it creeps in.

Truly accurate gen AI applications could revolutionise the way all of us work, and the way the world does business. But to deliver such applications, trustworthiness must be front of mind from the beginning. Search engine technology offers useful lessons in how to surface accurate responses from data and unlock the potential of AI.


About the Author

James Hall is VP & Country Manager, UK&I at Snowflake. Snowflake delivers the AI Data Cloud — a global network where thousands of organizations mobilize data with near-unlimited scale, concurrency, and performance. Inside the AI Data Cloud, organizations unite their siloed data, easily discover and securely share governed data, and execute diverse analytic workloads. Wherever data or users live, Snowflake delivers a single and seamless experience across multiple public clouds. Snowflake’s platform is the engine that powers and provides access to the AI Data Cloud, creating a solution for data warehousing, data lakes, data engineering, data science, data application development, and data sharing. Join Snowflake customers, partners, and data providers already taking their businesses to new frontiers in the AI Data Cloud.
