Since the debut of ChatGPT nearly two years ago, most businesses have at least dabbled with generative AI.
Yet, for many, the initial excitement around large language models (LLMs) has given way to a more practical concern: how to prove their tangible value within the enterprise.
LLMs gained global recognition for their flexibility, producing content across a wide range of topics, from bedtime stories to in-depth blogs on technical subjects like sovereign cloud, or concise summaries of lengthy reports. When it comes to more specialised business needs, however, these models often fall short, not least because of hallucinations. That makes businesses more cautious about using genAI, but it doesn't mean they should broadly limit its use. By taking a more focused approach, companies can still harness its potential effectively: by thinking smaller.
LLMs: Just One Part of the Equation
While LLMs can be used for generating emails, summarising documents, and drafting content, these tasks usually occur within a broader context. In business environments, completing them often requires pulling information from several systems. An email summarising a customer's status, for example, might draw on the customer relationship management (CRM) system, a conversation intelligence tool, and previous correspondence. Workers cannot be expected to track down all of this data, copy it manually into an LLM dialog window, and then paste the output wherever it is needed.
Rather, the true potential of genAI comes when it’s embedded into existing workflows through specialised software solutions. For example, a tool designed to summarise sales activity would be most effective if integrated directly into a sales pipeline system, enabling teams to forecast revenue more accurately. Similarly, if the goal is to generate customer outreach emails, genAI should be linked to the organisation’s sales engagement tools to automatically surface the relevant information and generate optimal results.
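To make that concrete, here is a minimal sketch of what such an embedded workflow could look like in code. The CRM and conversation-intelligence fetchers are hypothetical placeholders, and the OpenAI client is just one illustrative choice of LLM backend; the point is that the software, not the salesperson, gathers the context before the model is asked to write.

```python
# Minimal sketch of genAI embedded in a sales workflow; not a production design.
# The two fetch_* functions are hypothetical stand-ins for real CRM and
# conversation-intelligence integrations.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def fetch_crm_record(account_id: str) -> str:
    """Hypothetical: return the account's stage, owner, and open opportunities."""
    return "Stage: negotiation. Owner: A. Rep. Open opportunity: annual renewal."


def fetch_recent_call_notes(account_id: str) -> str:
    """Hypothetical: return summaries of recent calls from a conversation tool."""
    return "Last call: customer asked for updated pricing on the annual plan."


def draft_outreach_email(account_id: str) -> str:
    # Assemble the context a rep would otherwise copy and paste by hand.
    context = (
        f"CRM record:\n{fetch_crm_record(account_id)}\n\n"
        f"Recent call notes:\n{fetch_recent_call_notes(account_id)}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system", "content": "Draft a concise follow-up email for the sales rep."},
            {"role": "user", "content": context},
        ],
    )
    return response.choices[0].message.content
```

In a real deployment, the draft would surface back inside the sales engagement tool for the rep to review and send, rather than in a separate chat window.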
Bigger Isn’t Always Better
Although well-known commercial models have dominated headlines, their large size and general-purpose nature often make them costly and inefficient. Smaller, domain-specific language models can be a better fit for businesses, especially when tailored to specific industries or use cases. Models designed to handle tasks like analysing sales calls, writing financial reports, or generating accurate legal citations are typically more effective: they require fewer computing resources and deliver better performance in specialised contexts, leading to faster results and reduced costs, sometimes as much as 90% lower than those of larger models.
These smaller models can deliver more accurate insights by being trained on industry-specific data. For instance, while a large model might be able to interpret general phrases, a domain-specific model will understand nuanced meanings within a particular context. In a sales scenario, the phrase “let’s go” might signal an intent to close a deal, whereas in a social setting, it could imply a desire to leave. This higher level of precision makes domain-specific models far more valuable for businesses looking to optimise their genAI investments.
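As a rough illustration of how lightweight this can be, a compact model fine-tuned on labelled sales-call transcripts can be called like any other classifier. The sketch below assumes the Hugging Face transformers library; the model identifier is hypothetical, standing in for any small checkpoint fine-tuned on domain data.

```python
# Sketch: using a small, domain-tuned classifier instead of a general-purpose LLM.
from transformers import pipeline

# A compact fine-tuned model is often enough for a narrow task like intent tagging.
intent_tagger = pipeline(
    "text-classification",
    model="your-org/sales-call-intent-classifier",  # hypothetical fine-tuned checkpoint
)

# In a sales transcript, "let's go" should map to buying intent, not leaving the room.
print(intent_tagger("Sounds good, let's go ahead with the annual plan."))
# e.g. [{'label': 'CLOSE_INTENT', 'score': 0.97}]  -- illustrative output
```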
The Pitfalls of DIY AI
For companies seeking specialised genAI models, the next question is whether to build a custom model or opt for an existing solution. While creating an in-house model may seem appealing, the effort is rarely worth the return, especially when compared to the well-established solutions available in the market.
Building a custom model presents three key challenges. First, the company must integrate various data sources to feed the LLM. These connections are necessary for the AI to access relevant information from enterprise systems.
Second, workflows must be designed to ensure employees can efficiently interact with the AI-generated content. If genAI is being used to create documents, there needs to be a process for teams to collaborate and refine the output. Or, if it’s summarising customer data, there must be a mechanism for distributing those insights to the appropriate stakeholders.
Lastly, companies need to train their specialised models, which is a complex task requiring vast amounts of data and highly skilled personnel. Even large companies with ample daily data may lack the breadth of information that established genAI providers possess. These providers can fine-tune models using vast and diverse datasets, making them far more effective than what most businesses could develop internally.
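To give a sense of what that third challenge involves, here is a sketch of the kind of supervised fine-tuning data an in-house effort would have to assemble. The records and file name are illustrative, and the chat-style JSONL layout is just one common format; the hard part is curating thousands of such examples with domain experts, which is exactly where established providers have the advantage.

```python
# Sketch of preparing domain-specific fine-tuning data; records are illustrative.
import json

examples = [
    {
        "messages": [
            {"role": "system", "content": "Summarise the sales call for the account team."},
            {"role": "user", "content": "<call transcript pulled from the conversation tool>"},
            {"role": "assistant", "content": "<summary written or approved by a domain expert>"},
        ]
    },
    # ...repeated for thousands of curated calls, contracts, or reports
]

with open("finetune_train.jsonl", "w") as f:
    for record in examples:
        f.write(json.dumps(record) + "\n")
```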
Choosing the Right genAI Solution
As domain-specific genAI solutions become more prevalent across industries, companies should start by identifying the specific inefficiencies or challenges they want to address. Tailoring genAI to a particular problem will provide the most value.
A smaller, specialised model fine-tuned for a specific domain will consistently outperform a general-purpose model whose training data is largely irrelevant to the task at hand. While general-purpose LLMs will remain useful for tasks like generating headlines or summarising long texts, the real advantage comes from using models designed to meet specific business needs.
Ultimately, the key to unlocking the full business potential of genAI lies in thinking smaller and more focused. Companies that adopt this approach will be more likely to see significant returns on their AI investments.
About the Author
Eilon Reshef is a co-founder and Chief Product Officer at Gong, the leading revenue intelligence platform. Before co-founding Gong, Eilon co-founded Webcollage, a SaaS solution in the e-commerce infrastructure space. Eilon holds an M.Sc. and B.Sc. in Computer Science (summa cum laude) from the Weizmann Institute of Science and the Technion.