The Technological Developments Shaping Data Management Trends in 2019

The volume of data we deal with on a daily basis has grown massively over recent years, with 90 percent of the data in the world generated in the last two years alone.

In fact, at our current pace, 2.5 quintillion bytes of data are created every day, and that pace is only accelerating with the growth of the Internet of Things (IoT). With so much data at our disposal, the data management industry has seen significant developments in recent years, and 2019 looks set to be no different, with a number of new trends expected to emerge over the course of the year.

Trend towards humanising data will gain steam

In 2019, there will be a growing trend towards “humanising data”. Humanising data requires adding context to it, making it easy to access, and recognising that a human touch is required to interpret it.

This move is not simply a backlash against the impersonal and opaque analytics that so often have negative business and societal effects; it also reflects a recognition that there is a great deal to be gained from this mindset.

Ultimately, looking at the quality as well as the quantity of data leads to better decisions and less unconscious bias. Treating data in this way will also allow businesses to bring in more data about each individual customer, creating a more complete picture that enables customisation of services and products on an unprecedented scale.

In order to succeed at making large quantities of data intuitive to explore and understand, organisations must implement high-performance data platforms as well as a variety of data visualisation and analytics capabilities all running on the same data. Data quality assessment is easier with good control over the sources, processing pipelines, and residency of the data, while large quantities of historical data may need to be tapped to provide a more complete view.

Machine Learning meets governance

Within data management, Machine Learning (ML) has found applications in governance, risk and compliance (GRC), but ironically ML applications typically lack governance themselves. For example, the European Union’s General Data Protection Regulation (GDPR) requirement of “explainability” restricts the kind of ML algorithms that can be used. There’s also tension around access to ‘crown jewel’ data for ML purposes, often resulting in an impasse between Data Scientists (who want unrestricted access to all the data) and IT (who need to maintain compliance with data privacy and security regulations). In 2019, the industry should expect to see organisations start to recognise this problem and look for solutions that keep their data secure and compliant while giving data scientists appropriate access.
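One common compromise between these two camps, sketched below purely as an illustration (the names, fields, and key-handling approach are all hypothetical, not drawn from any specific product), is to pseudonymise direct identifiers before handing records to data scientists. Analysis can then proceed on consistent tokens, while the personal data itself stays with IT:

```python
import hashlib
import hmac

# Assumption: the key is generated, rotated, and stored by IT (e.g. in a
# secrets vault), so data scientists never see it and cannot reverse tokens.
SECRET_KEY = b"example-key-managed-by-IT"

def pseudonymise(value: str) -> str:
    """Deterministic keyed hash: the same input always yields the same
    token (so joins and group-bys still work), but the token cannot be
    reversed without the key. Truncated to 16 hex chars for readability."""
    return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()[:16]

def prepare_for_analytics(record: dict, sensitive_fields: set) -> dict:
    """Return a copy of the record with sensitive fields replaced by tokens."""
    return {
        k: pseudonymise(v) if k in sensitive_fields else v
        for k, v in record.items()
    }

customer = {"name": "Jane Doe", "email": "jane@example.com", "age_band": "40-49"}
safe = prepare_for_analytics(customer, {"name", "email"})
# "name" and "email" are now opaque tokens; "age_band" passes through unchanged.
```

Note that pseudonymisation is not full anonymisation under GDPR, so governance of the key and of re-identification risk remains an IT responsibility.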

Increased traction for Enterprise Data Platform (EDP) technology

In the almost ten years since data lakes first came on the scene, organisations have been in a hurry to deploy data lake technology, mostly based on Hadoop. More often than not, this was driven by a vision of unified data access for the entire organisation and transforming legacy firms into modern, data-driven companies. The reality, however, has been very far from this, and tales of data lake projects producing ‘data swamps’ abound.

Over the last year, more attention has been placed on traditional concerns, such as data integrity and rapid development of data-intensive applications. Industry experts have also noticed the coining of a new variant of the term, an Enterprise Data Platform (EDP), which they expect to become a standard term in 2019 as more organisations turn to this approach to power their digital transformations.

This shouldn’t be surprising given that EDP technology offers organisations a more agile approach, allowing users and developers to find and leverage data where it lives. According to IDC, it is also possible to combine data where it makes sense to do so, to understand the data, to record its meaning and generate more frequent and better analyses. As more organisations adopt this approach in 2019, they stand to see a number of benefits, such as reducing the time to insight, creating a more agile enterprise and the ability to make more comprehensive business decisions.

DataOps will overtake Extract, Transform, Load

While DataOps only appeared in Gartner’s Hype Cycle for Data Management for the first time in 2018, in 2019, DataOps practices will become a bigger part of an organisation’s data management than traditional extract, transform, and load methodology (ETL).

Although DataOps may be considered by some to encompass ETL, many regard it as a new school of data integration that is much more agile and integrated. ETL is batch-oriented and quite heavyweight to develop and maintain, and it's the domain of specific data integration vendors. With a DataOps approach, batch and real-time data integration can be done using the same development platform and the same compute and database infrastructure.
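The "same platform for batch and real-time" point can be made concrete with a small sketch. The code below is purely illustrative (not any vendor's API): a single transformation function serves both a batch extract and a record-by-record stream, instead of the logic being duplicated in a separate ETL job:

```python
from typing import Iterable, Iterator

def normalise(event: dict) -> dict:
    """Shared transformation logic: lower-case the keys and mark the
    record as validated. Both integration paths call this one function."""
    return {**{k.lower(): v for k, v in event.items()}, "validated": True}

def run_batch(events: list) -> list:
    # Batch path: transform a full extract in one pass, ETL-style.
    return [normalise(e) for e in events]

def run_stream(events: Iterable) -> Iterator:
    # Real-time path: the same function applied as records arrive.
    for e in events:
        yield normalise(e)

batch_out = run_batch([{"ID": 1}, {"ID": 2}])
stream_out = list(run_stream(iter([{"ID": 3}])))
```

Because the transformation lives in one place, a change to the business logic propagates to both paths at once, which is much of what makes the DataOps approach feel more agile than maintaining parallel batch and streaming codebases.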

Recognition for multi-model databases

The number of different data models in use has continued to grow since the advent of NoSQL databases over a decade ago. It’s not uncommon to run into applications that use multiple different data stores, with five or six different models across five or six different products. This adds a lot of cost and complexity, not to mention the risk of data inconsistency from duplicate data being incompletely synchronised across different products.

As a result of this database sprawl, experts predict that within the next year, multi-model databases will become a recognised category by all major analysts. One of the main reasons for this is that multi-model databases provide an attractive option for those looking to support two or more different types of data models in the same application. A multi-model database offers several advantages over the ‘polyglot persistence’ approach of combining multiple products: the simplicity and lower cost of a single product, quicker time to market, and lower development, maintenance and support costs.
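The contrast with polyglot persistence can be sketched in miniature. The toy class below (entirely hypothetical, standing in for a real multi-model engine) keeps a single underlying store while exposing key-value, document, and relational-style access over the same records, so there is no duplicate data to keep synchronised:

```python
class MultiModelStore:
    """Toy illustration of the multi-model idea: one storage engine,
    several data models over the same records."""

    def __init__(self):
        self._rows = {}  # the single underlying store

    # Key-value model: opaque get/put by key.
    def kv_put(self, key: str, doc: dict) -> None:
        self._rows[key] = doc

    def kv_get(self, key: str) -> dict:
        return self._rows[key]

    # Document model: query by field over the same data.
    def find(self, field: str, value) -> list:
        return [d for d in self._rows.values() if d.get(field) == value]

    # Relational-style projection over the same data.
    def select(self, *columns: str) -> list:
        return [tuple(d.get(c) for c in columns) for d in self._rows.values()]

store = MultiModelStore()
store.kv_put("c1", {"name": "Acme", "region": "EU"})
store.kv_put("c2", {"name": "Globex", "region": "US"})
```

In the polyglot approach, the two records would live in (say) a separate key-value product and a separate document product, and every write would need to be applied to both; here, each access model is just a different view of one copy of the data.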

As you can see, 2019 looks set to be an exciting year for database management with a host of new and improved technologies allowing organisations to get a better handle on their data and reap the benefits this will bring their business.

About the Author

Jeff Fried is Director of Product Management at InterSystems. InterSystems is the engine behind the world’s most important applications. In healthcare, finance, government, and other sectors where lives and livelihoods are at stake, InterSystems is the power behind what matters™. Founded in 1978, InterSystems is a privately held company headquartered in Cambridge, Massachusetts (USA), with offices worldwide, and its software products are used daily by millions of people in more than 80 countries. For more information, visit