ChatGPT has changed how I, and probably most of you, look at AI. But let’s be clear: it’s not ‘intelligent’ in the biological sense.
It can complete writing tasks and some coding challenges, but in both cases human expertise is still required because its output is not always accurate. Its knowledge is broad but shallow: it lacks deep expertise in your domain.
As such, systems like ChatGPT have a propensity to generate results that sound excellent but are inaccurate. Worse, it’s hard to figure out why. Large language models (LLMs) are inherently black boxes and aren’t good at explaining how they arrived at an answer.
In turn, this limits the places where LLMs can be used in anger. They might be fine for search chatbots, but they are not the tool for writing safety-critical code. In fact, one of the creators of ChatGPT has added to a growing chorus of researchers warning of the potentially catastrophic consequences of artificial intelligence development.
Although (often amusing) stories about inaccuracies hinder deployments, the potential benefits of AI are enormous. The pharmaceutical industry is already deploying generative AI in genetics research, manufacturers are using it to design physical objects, and game studios are using it to create background music.
But for CIOs looking to harness the power of LLMs without encountering issues related to accuracy and explainability, what is the best way to move forward? There are a number of potential strategies to consider. One promising approach is to train the LLM not with a broad corpus of text, but with a domain-specific knowledge graph. With this approach the LLM isn’t a general-knowledge savant, but it can answer questions from a specific, smaller domain far more precisely.
Do we need ‘small’ large language models?
Using a knowledge graph to train AI models may provide a powerful means of mitigating the kinds of misrepresentations that ChatGPT produces. Large language models like ChatGPT can ‘hallucinate’, guessing answers with a staggering level of bluff, because they ingest vast quantities of text that they cannot fully understand. As a result, they may generate multiple conflicting, or even flat-out contradictory, explanations of the same information.
However, if you train an LLM on curated, high-quality, structured data, such as a knowledge graph, you could significantly improve its accuracy. With a knowledge graph supplying curated data for the LLM to learn from, its responses will be more precise and trustworthy. The range of possible answers will be narrower, because the model will typically have been trained on a much smaller dataset than one that simply hoovers up the Internet, but within its domain it will be far more dependable.
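To make that concrete, here is a minimal sketch of what ‘curated data from a knowledge graph’ could look like as training material. The triples, prompt templates and output file are all invented for illustration; a real project would generate examples from its own graph and feed them into whichever fine-tuning pipeline it uses.

```python
import json

# Minimal sketch: turn knowledge-graph triples into question/answer pairs
# that could serve as curated fine-tuning data for a small, domain-specific
# model. The triples and relations below are invented for illustration.
TRIPLES = [
    ("DrugA", "INHIBITS", "ProteinX"),
    ("ProteinX", "ENCODED_BY", "GeneY"),
    ("DrugA", "TREATS", "ConditionZ"),
]

# One natural-language template per relation type in the graph.
PROMPTS = {
    "INHIBITS": "What does {subject} inhibit?",
    "ENCODED_BY": "Which gene encodes {subject}?",
    "TREATS": "Which condition does {subject} treat?",
}

def triples_to_examples(triples):
    """Yield one prompt/completion pair per (subject, predicate, object) triple."""
    for subject, predicate, obj in triples:
        yield {
            "prompt": PROMPTS[predicate].format(subject=subject),
            "completion": obj,
        }

# Write the examples as JSONL, a common format for fine-tuning pipelines.
with open("domain_finetune.jsonl", "w") as f:
    for example in triples_to_examples(TRIPLES):
        f.write(json.dumps(example) + "\n")
```

Because every training example traces back to a vetted triple in the graph, the dataset is small but trustworthy, which is exactly the trade-off described above.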
Such “small” language models (SLMs) may soon be prevalent. As the concept of SLMs gains momentum in the business world, powered by advances in (mostly) open source, we can expect to see major enterprises leveraging this technology to enhance their customer service offerings.
More than a fun toy to write marketing copy
The converse is also true. Businesses are awash with data but can struggle to organise it in a way that is transparent and compliant. While LLMs are neither transparent nor suitable for compliance, they can be used to create a knowledge graph that is.
In such cases, large amounts of business data can be used to train the LLM, but the LLM itself then produces a knowledge graph from which line-of-business questions can be answered. The knowledge graph is transparent, so we can reason about the provenance of answers produced by querying it, and curate the graph itself for accuracy. In safety-critical use cases like medical research, this can be life-saving. Imagine how that could speed up R&D, or sharpen your problem identification or compliance procedures.
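As a rough sketch of that pattern, assume the LLM extraction step has already produced a list of facts, each tagged with the document it came from (stubbed out below as a hard-coded list, with invented names and placeholder connection details). Loading those facts into Neo4j with a source property means every answer returned by a query carries its provenance:

```python
from neo4j import GraphDatabase

# Sketch only: in practice these facts would come from an LLM extraction
# step run over business documents; here they are hard-coded and invented.
EXTRACTED_FACTS = [
    {"subject": "DrugA", "relation": "INHIBITS", "object": "ProteinX",
     "source": "trial_report_017.pdf"},
    {"subject": "DrugA", "relation": "TREATS", "object": "ConditionZ",
     "source": "clinical_summary_2023.docx"},
]

# Each relationship records the document it was extracted from.
LOAD_FACT = """
MERGE (s:Entity {name: $subject})
MERGE (o:Entity {name: $object})
MERGE (s)-[r:RELATES {type: $relation}]->(o)
SET r.source = $source
"""

# Every answer comes back with its provenance attached.
ANSWER_WITH_PROVENANCE = """
MATCH (s:Entity {name: $name})-[r:RELATES]->(o:Entity)
RETURN o.name AS answer, r.type AS relation, r.source AS provenance
"""

def main():
    # Placeholder connection details.
    driver = GraphDatabase.driver("bolt://localhost:7687",
                                  auth=("neo4j", "password"))
    with driver.session() as session:
        for fact in EXTRACTED_FACTS:
            session.run(LOAD_FACT, **fact)
        for record in session.run(ANSWER_WITH_PROVENANCE, name="DrugA"):
            print(record["answer"], record["relation"], record["provenance"])
    driver.close()

if __name__ == "__main__":
    main()
```

Because the graph, not the LLM, is the thing being queried, a domain expert can inspect and correct individual relationships before they are used to answer questions, which is what makes the approach auditable.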
ChatGPT is not the answer to heavy-duty enterprise AI use cases, but, complemented by graph data science, it is the right stepping stone towards them.
About the Author
Jim Webber is Chief Scientist at graph database and analytics leader Neo4j, and co-author of Graph Databases (1st and 2nd editions, O’Reilly) and Graph Databases for Dummies (Wiley). Neo4j helps organizations find hidden relationships and patterns across billions of data connections deeply, easily, and quickly. Customers leverage the structure of their connected data to reveal new ways of solving their most pressing business problems, with Neo4j’s full graph stack and a vibrant community of developers, data scientists and architects across hundreds of Fortune 500 companies.