Generating value from the data ‘trading floor’

Data is the modern currency for businesses, yet locking it away rarely yields the best results

Leveraging data to build a corporate ‘memory’ that helps predict and understand the wider marketplace, consumers and internal operations undoubtedly drives higher returns.

Data-driven strategies have grown more complex with the emergence of native hyperscaler solutions, which have opened up an explosion of new, application-agnostic, data-centric technologies. If data is a currency, AWS is the new trading floor.

One of the biggest differences is that large commercial applications (such as SAP) tend to couple cost to scale: the bigger the data, the more compute is needed to manage and report on it. The growing range of big data technologies breaks this expensive symbiotic relationship, allowing organisations to store petabytes of data with no compute running when it isn’t needed. This makes data one of the cheapest currencies to hold in the cloud bank.
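To make that decoupling concrete, here is a minimal cost sketch in Python. All prices are illustrative assumptions for the sake of the comparison, not quoted AWS rates: it contrasts a coupled model, where compute must scale with data size, against an object-store model where idle data incurs storage cost only.

```python
# Illustrative monthly cost model. All per-unit prices below are assumptions
# chosen for the sketch, not real AWS pricing.

def coupled_cost(data_tb: float, storage_per_tb: float = 100.0,
                 compute_per_tb: float = 400.0) -> float:
    """Classic appliance model: compute (and licensing) scale with data size."""
    return data_tb * (storage_per_tb + compute_per_tb)

def decoupled_cost(data_tb: float, storage_per_tb: float = 25.0,
                   query_hours: float = 0.0,
                   compute_per_hour: float = 5.0) -> float:
    """Object-store model: pay for storage always, compute only when querying."""
    return data_tb * storage_per_tb + query_hours * compute_per_hour

# 1 PB (1024 TB) held for a month, queried for only 10 hours:
print(coupled_cost(1024))                    # compute runs whether used or not
print(decoupled_cost(1024, query_hours=10))  # compute billed only when querying
```

Under these assumed rates the coupled model charges for compute around the clock, while the decoupled model charges for it only during the ten hours of actual querying, which is the "no compute running if it isn’t needed" point above.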

To achieve this, enterprise organisations must accept that data is an enterprise asset, just like anything else, that requires a long-term strategy. It sounds simple, but that is not always the case, even among the largest organisations out there.

When it comes to developing a data strategy, generally speaking, there are two schools of thought:

1) The temptation to do nothing. To continue to focus efforts on the bits and the boxes and the bytes, without really thinking of a true strategy for the organisation. This restricts the IT team to thinking about where enterprise data lives, grows and dies – and drives attention towards comparing the market for the cheapest platform to save a few bucks. These organisations aren’t really lifting their heads to look at the horizon.

2) Then there is the polar opposite: the magpie’s approach of chasing the latest shiny thing. Beautification without a business case is only skin deep.

Beautiful, simple UIs can in many cases drive significant business improvements, and a cheaper VM can yield some cost benefits, but without answering the business ‘why’ question these approaches are typically short term in nature. The business case must be established first. This begins by asking what the pain point is and how technology can help – whether that is reducing costs, modernising a business process or mining data for discovery purposes.

Data lakes are a very pertinent example of this: organisations pull data together, but then don’t think about what to do with it. At its heart, a data lake is simply a set of modern technologies – from storage layers, to query engines, to AI – and it doesn’t answer the ‘why’ question by itself. Many organisations take the leap before really thinking about why, or what they are trying to fix.

The attraction is clear. Take SAP as an example: it’s a world-class platform, but it can also be something of a silo, traditionally keeping its data to itself and its own reporting tools.

The minute you want to go broader and look at data from across the enterprise – SAP, non-SAP or hundreds of other sources (public, private or subscribed) – it starts to get interesting. By building a data lake you bring all those sources together into a leverageable asset, one that helps derive better insights, improve cross-application business processes, modernise enterprise reporting and enable you to learn new things about your company. Rather than going into SAP, collating data from different application sources, wrangling it into Excel and providing it in a clunky dashboard for the C-level – a process that once took many painful weeks – you can now automate it by pulling data into a neutral data lake, enabling you to model and report against your business pain point, or simply to discover new business insights (the why).

Accessible data

From a technical perspective, when it comes to managing data, gravity can be pretty heavy. By adopting a data strategy built around a data lake, you can reduce the financial gravity (platform, licensing, maintenance etc.) of the data by moving it into the cloud – AWS, for example. There, data is cheaper to keep than it would be on good old-fashioned offline media. We have passed that evolutionary tipping point: data should now always be accessible.

Take a retailer, for example: being able to query which Christmas promotions succeeded under similar conditions over the last two decades, looking at historical POS data, consumer loyalty behaviour, price-point analysis and so on. This is powerful stuff.
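As a minimal sketch of that kind of retrospective query, the Python below uses an in-memory sqlite3 database to stand in for a lake query engine such as Athena. The table name, promotions and figures are entirely hypothetical:

```python
import sqlite3

# Hypothetical POS history. In a real data lake this would sit in object
# storage and be queried by a serverless engine; sqlite3 stands in here.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE pos_sales (
    year       INTEGER,
    promotion  TEXT,
    units_sold INTEGER)""")
con.executemany("INSERT INTO pos_sales VALUES (?, ?, ?)", [
    (2021, "3-for-2",   1200), (2021, "free-gift", 800),
    (2022, "3-for-2",   1500), (2022, "free-gift", 700),
])

# Which Christmas promotion moved the most units across all years held?
rows = con.execute("""
    SELECT promotion, SUM(units_sold) AS total
    FROM pos_sales
    GROUP BY promotion
    ORDER BY total DESC
""").fetchall()
print(rows[0])  # best-performing promotion over the whole history
```

The point is that once the history is centralised and queryable, a two-decade question becomes one aggregate query rather than a manual trawl through archived systems.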

It’s quite common for large enterprises to centralise data sources and simply want to go in and discover ‘stuff’ quickly and cheaply. They might find something, or they might not, but in an agile AWS world failing fast (which is also a good result!) or finding insights to build a business case can be achieved in just a few weeks. This is data discovery at its best. Best of all, it is now very quick and cost-effective, so you can fail fast and move on to the next area without any noticeable business or cost impact.

Solving problems

Another way in which the enterprise benefits is in fixing broken business processes, using native cloud technologies to solve a particular problem. This means organisations no longer have to buy yet more enterprise software, with all the licences, maintenance, underlying disk storage and additional virtual assets to run and operate that it entails. Doing this through native cloud tooling empowers them to refactor business processes using data. This is extremely valuable to the organisation at large.

Of course, one of the biggest benefits of cloud technology is reducing the financial cost of running systems such as SAP. For organisations whose organic data is growing at an exponential rate – which in the SAP world means a new box and a slice of licensing to run it – costs can be cut simply by ageing that data out of SAP and into native Amazon technology, reducing the size of the system they have to buy, licence and run. Further, it massively reduces the cost of maintaining and querying that data compared with the legacy world.
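Once aged data lands in object storage, it can be tiered down further as it cools. The sketch below builds an S3 lifecycle rule as a Python dict in the shape that boto3’s `put_bucket_lifecycle_configuration` call expects; the prefix, rule name and day counts are illustrative assumptions, not a prescription:

```python
# Hypothetical archiving rule for SAP data aged out into S3.
# The dict matches the LifecycleConfiguration shape used by
# boto3's s3.put_bucket_lifecycle_configuration(...); the prefix
# "sap-archive/" and the day thresholds are illustrative only.
lifecycle = {
    "Rules": [{
        "ID": "age-out-sap-archive",
        "Status": "Enabled",
        "Filter": {"Prefix": "sap-archive/"},
        "Transitions": [
            {"Days": 90,  "StorageClass": "STANDARD_IA"},  # cooler tier
            {"Days": 365, "StorageClass": "GLACIER"},      # cold archive
        ],
    }]
}
print(lifecycle["Rules"][0]["Transitions"][1]["StorageClass"])
```

The design point is that the tiering happens automatically once the rule is applied to the bucket, so the archived SAP data keeps getting cheaper to hold without anyone operating it.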

We’re certainly seeing the emergence of a new type of business role, that of the Data Architect – somebody who understands business issues, pains and processes but can also apply data models and an algorithmic mindset, manipulating data into a format that yields results which directly impact top- and/or bottom-line improvements.

Eyes to the horizon

Most organisations would consider themselves to have a data strategy, but it’s often intrinsically linked to application data management rather than to enterprise-wide and open marketplace data relating to their industry vertical. Much depends on the maturity of the organisation’s data architecture and skills.

On the one hand there are those who don’t think beyond the application when setting a strategy for their data (a newer silo, but still a silo!). Then there are others who look at business improvement by bringing data into a data lake and at what can be achieved with it. Your typical operational manager of a large enterprise software solution is not going to think across those axes. It then becomes a question of finding someone to guide you through these scenarios, helping enterprise organisations develop insights they have never had before. They say fortune favours the bold, and in an agile cloud model being bold has never been so cheap and low risk.


About the Author

Ben Lingwood is Chief Innovation Officer at Lemongrass Consulting. Lemongrass is a software-enabled services provider, synonymous with SAP on AWS, focused on delivering superior, highly automated Operate services, accelerating growth and profitability with robust, reusable migration pattern assets.

With operations in all global geographies, Lemongrass was established in 2008 as a specialist SAP technology consultancy. Today, the company is an AWS Partner Network Premier Consulting Partner, an AWS accredited Managed Service Provider, and was the second company globally to achieve the AWS SAP Competency status. The company is headquartered in Atlanta, GA, in the Americas, and in London, UK in EMEA.

Featured image: ©Whyframeshot