To succeed with AI, look at your approach to data

Companies are spending big on AI.

IDC has estimated that investment in artificial intelligence software, services and hardware will reach $191 billion by 2026, growing at 25.5 percent year on year. According to research by McKinsey, more than half (56 percent) of companies have already adopted AI in some form within their businesses. The number of AI capabilities companies use has also increased, doubling on average between 2018 and 2022. These investments are aimed at improving efficiency around service delivery and supply chains, and at boosting revenue in areas like sales, marketing and corporate finance.

Looking ahead, companies of all sizes want to use AI in their business operations. However, while the potential of AI is huge, turning successful pilot projects into long-term production deployments is still problematic for companies outside the technology elite.

McKinsey found that only around a quarter of companies can attribute more than five percent of their organizations’ earnings before interest and taxes to AI, a figure comparable to previous years’ studies. Meanwhile, Accenture found that only 12 percent of companies have achieved significant growth from their AI projects.

The companies that are succeeding around AI are those with the most depth in the technology sector, such as Netflix, Amazon, Uber and FedEx. These companies have already invested in their data, analytics and AI strategies and they see returns from improved customer service and the ability to respond to demand instantly. For example, Uber’s Michelangelo provides reliable and scalable machine learning and AI pipelines for predicting demand and allowing the company to respond in real time.

So how can the rest of the world get up to speed around AI?

Getting the right approach to AI

In the rush to use more automation, machine learning and AI in processes, it’s easy to overlook the most important element in making these projects successful – picking the right approach in the first place. There is no single approach to deploying AI that is right for everything, but the two main categories are batch and real-time. In both cases, AI uses historical data to create models and then applies those models to new data coming in; what differs is how and when that data gets applied and used.

AI based on batch processing applies models built on historical data to a new set of data to find patterns and produce results. This happens on a regular schedule: a batch of data is fed into the AI/ML process and the results are then delivered to be used. This approach is well suited to areas like historical trend analysis and predictions about what might happen next.
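To make that concrete, here is a minimal sketch of a batch scoring job, assuming a generic scikit-learn-style model and hypothetical load_records/store_results helpers rather than any specific product API:

```python
# Hypothetical nightly batch job: score yesterday's records with a
# previously trained model and hand the results on for later use.
import pickle
from datetime import date, timedelta


def run_nightly_batch(load_records, store_results, model_path="model.pkl"):
    with open(model_path, "rb") as f:
        model = pickle.load(f)  # model trained earlier on historical data
    batch = load_records(date.today() - timedelta(days=1))  # yesterday's batch of data
    results = model.predict(batch)  # score the whole batch in one pass
    store_results(results)  # results delivered for reports, trend analysis and so on
```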

Conversely, real-time AI involves bringing the AI/ML model to the data as it is created. Rather than scanning huge sets of data for deep, long-term trends, this approach is aimed at improving the response to specific events or transactions as they happen, and using AI to help in that process. This covers a very different set of use cases to the batch approach, including personalisation, real-time pricing, security and recommendations. What sets these use cases apart from batch AI is that they are time-sensitive. Picking the wrong approach to how you architect your AI can therefore make a significant difference.
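By contrast, a minimal sketch of real-time scoring looks more like the following, where the model is applied to each event as it arrives; the feature_store lookup and predict_one interface are illustrative assumptions, not a specific library:

```python
# Hypothetical per-event scoring path: combine a low-latency feature
# lookup with the data carried on the event itself, and return a
# decision while the transaction is still in flight.
def score_event(model, feature_store, event):
    features = dict(feature_store.lookup(event["user_id"]))  # pre-computed features
    features["amount"] = event["amount"]  # fresh data from the event itself
    return model.predict_one(features)  # e.g. a fraud score or a recommendation
```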

Avoiding data management problems

Alongside picking the right approach to data, there are some other challenges that can affect your success with AI deployments. The feature store is at the heart of the AI process: the measurable data points fed into a machine learning model are called features, and they typically come from an application database or a set of log files. These sets of data then need to be prepared for analysis, for example by scaling values for consistency, comparing them against prior records, or calculating a moving average as of the time a record was generated. These actions take time, which can slow down the flow of data.
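As an illustration of that preparation step (using pandas, which is an assumption about tooling rather than a requirement), scaling a value and computing a moving average might look like this:

```python
import pandas as pd

# Raw events as they might arrive from an application database or log files
events = pd.DataFrame({
    "ts": pd.to_datetime(["2023-01-01", "2023-01-02", "2023-01-03", "2023-01-04"]),
    "amount": [120.0, 80.0, 200.0, 95.0],
}).sort_values("ts")

# Scale the raw value for consistency across records
events["amount_scaled"] = (events["amount"] - events["amount"].mean()) / events["amount"].std()

# Moving average as of the time each record was generated
events["amount_ma3"] = events["amount"].rolling(window=3, min_periods=1).mean()

print(events)
```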

For use cases that work on batch AI, this time lag between input, transformation and output may be acceptable. However, for real-time AI, this lag affects how quickly you can make a recommendation or influence a decision, which means the results may not be available when someone needs them. Imagine waiting minutes for the next song recommendation, or for insight into a potentially fraudulent transaction. No matter how brilliant the result, customers and employees will be disappointed.

Alongside this, you can face a problem with the sheer amount of data you might have. Normally, data must be aggregated to make it easier to move around and to make it available where you need it. However, if you have to aggregate and transform the data in your feature store too heavily, you can lose the detail needed to identify the right action, and achieve the right outcome, in real time.

Lastly, you might find it hard to apply that data when you need it the most. If you have to carry out too many transformations across different systems, you can end up making the wrong recommendation or taking the wrong action during a process. For instance, your model might produce a special offer that would suit a customer, but it might arrive after they have already purchased something else or abandoned their original idea. This leads to worse outcomes, rather than better ones.

Improve your feature store approach to achieve real-time AI

To make use of real-time AI, data has to be collected at a very granular level. Using a massive volume of time-stamped data events, your feature store can be used to develop a deeper understanding of the relationships that exist between actions and items. This can then be used to track changing behaviour over time and to prompt when the right action can be taken.

For example, this could be identifying the right point in a user journey to make an offer, or the right response for an employee to take in a specific set of circumstances. With in-depth event data, the models in your feature store can identify a situation and its potential outcomes, and then make the best recommendations to achieve a desired result. Most importantly, you can bring this model to your operational data so that you can carry out these actions when they are needed.
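A simple sketch of this idea, with hypothetical event and feature names, is to roll granular, time-stamped events up into per-user features that a model can act on:

```python
from collections import defaultdict
from datetime import datetime


def build_user_features(events):
    """events: iterable of dicts like {"user_id": ..., "action": ..., "ts": datetime}."""
    features = defaultdict(lambda: {"views": 0, "carts": 0, "last_seen": None})
    for event in sorted(events, key=lambda e: e["ts"]):  # preserve the order things happened
        f = features[event["user_id"]]
        if event["action"] == "view":
            f["views"] += 1
        elif event["action"] == "add_to_cart":
            f["carts"] += 1
        f["last_seen"] = event["ts"]  # used to judge the right moment to act
    return features  # fed to the model that picks the next best action


features = build_user_features([
    {"user_id": "u1", "action": "view", "ts": datetime(2023, 1, 1, 10, 0)},
    {"user_id": "u1", "action": "add_to_cart", "ts": datetime(2023, 1, 1, 10, 5)},
])
```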

To make this work in practice, your feature store will be part of a wider data pipeline and lifecycle, with an architecture that neither constrains access nor hampers scaling. From a database perspective, the open source database Apache Cassandra can provide you with a scalable feature store that can handle millions of features being added, while Apache Pulsar can support data streaming when you need to share that data with other systems.
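As a rough sketch of how that can look with the DataStax Python driver for Cassandra (the table and column names here are illustrative, not a prescribed schema):

```python
from cassandra.cluster import Cluster

cluster = Cluster(["127.0.0.1"])  # assumes a locally reachable Cassandra node
session = cluster.connect("ml")   # assumes an existing "ml" keyspace

# A simple feature table keyed for fast per-user lookups
session.execute("""
    CREATE TABLE IF NOT EXISTS user_features (
        user_id text,
        feature_name text,
        feature_value double,
        updated_at timestamp,
        PRIMARY KEY (user_id, feature_name)
    )
""")

# Write a feature value as new event data is processed
insert = session.prepare(
    "INSERT INTO user_features (user_id, feature_name, feature_value, updated_at) "
    "VALUES (?, ?, ?, toTimestamp(now()))"
)
session.execute(insert, ("u1", "views_7d", 42.0))

# Low-latency read at serving time
row = session.execute(
    "SELECT feature_value FROM user_features WHERE user_id = %s AND feature_name = %s",
    ("u1", "views_7d"),
).one()
```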

This approach can also help your data scientists, as they can create features from the event data in Cassandra and then run tests quickly. This makes it easier for them to test their models and iterate faster, improving performance before scaling the model up to production deployments. It also reduces the potential for data leakage between training and production, which might otherwise affect model accuracy at scale.
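A brief sketch of that iteration loop, assuming a features table like the one above and a simple scikit-learn baseline (both illustrative assumptions):

```python
import pandas as pd
from cassandra.cluster import Cluster
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

session = Cluster(["127.0.0.1"]).connect("ml")

# Pull training features straight from the event-derived table
rows = session.execute("SELECT views_7d, carts_7d, converted FROM training_features")
df = pd.DataFrame(list(rows))  # driver rows behave like named tuples

X_train, X_test, y_train, y_test = train_test_split(
    df[["views_7d", "carts_7d"]], df["converted"], test_size=0.2, random_state=42
)
model = LogisticRegression().fit(X_train, y_train)
print("holdout accuracy:", model.score(X_test, y_test))
```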

The most successful AI applications today use massive amounts of event data to build models that can then be deployed into applications. These applications differentiate the service you offer and improve customer experience and operations over time, evolving constantly as new data comes in. Using Apache Cassandra, you can create your feature store and then use it to bring AI to your operational data, rather than taking the traditional approach of bringing data into your AI pipeline. This makes real-time AI available to many more companies beyond the big technology players.


About the Author

Dom Couldwell is Head of Field Engineering EMEA at DataStax, a real-time data and AI company. Dom helps companies to implement real-time applications based on an open source stack that just works. His previous work includes more than two decades of experience across a variety of verticals including Financial Services, Healthcare and Retail. Prior to DataStax, he worked for the likes of Google, Apigee and Deutsche Bank. www.datastax.com
