Despite the growing emphasis on agility in the modern business landscape, it’s still surprisingly common to find organisations that rely on a rigid, centralised decision-making structure.
This is particularly true amongst large enterprises, where embedded business practices and culture make any kind of change difficult to enact.
Traditional centralised structures typically see most of the decision-making power concentrated around a small number of individuals at the top of an organisation, with orders and directives passed down via various managers and department heads. The main purpose of this is to ensure consistency and control, but in today’s digital world such a hierarchical approach can feel archaic, leaving little room for the kind of two-way communication needed to quickly respond to ever-changing market conditions.
The same can be said for the way organisations collect and use their data. The growing popularity of cloud-based collaboration tools and data platforms means businesses of all sizes now have near-limitless choice in how they operate, both internally and externally. Digital transformation can distribute data and decision-making widely across even the largest organisations, empowering employees and fostering new cultures centred around innovation.
However, while this sounds great on paper, the reality isn’t quite as simple. Most organisations today are overwhelmed by the sheer volume of data they have collected over the years, often held in numerous different formats and spread across a wide range of disparate silos. IT teams have had to amass a diverse range of tools and platforms to try to make sense of it all, which usually adds to the problem instead of solving it. As a result, they’re unable to unlock the wealth of value contained within that data.
Centralised structures create unnecessary bottlenecks
Take, for example, a marketing team trying to compare online customer feedback with sales performance, or an HR employee pulling together a report containing data from both HR and accounting. In a centralised structure, all data requests like these need to go through the IT team, which very quickly creates bottlenecks to productivity and pulls IT team members away from more strategic business activities. Long turnaround times on seemingly simple data requests can also demotivate those making them, resulting in a reluctance to make similar requests in the future.
However, if IT teams can implement a more self-service approach to data enablement and retrieval, a significant burden is lifted from their shoulders and organisations can start generating far more value from their wealth of data. One way to make this happen is through a relatively new concept known as data mesh.
A new approach to data management
Data mesh facilitates a shift towards decentralisation of data by placing ownership and management of it in the hands of those who created it in the first place, protected by policies for internal governance and compliance. It is based on four key principles:
Domain Ownership: Individual departments such as sales, HR, finance, and marketing take full ownership of the data they generate, because they are the ones that understand it best and value it the most. These domain owners behave as independent entities, accountable for the stewardship of their data assets and responsible for ensuring their accuracy and quality.
Data-as-a-Product: Each department turns its raw data into clearly defined products that all users enterprise-wide can easily understand, access, and utilise. This standardised approach ensures everyone is accessing the same, up-to-date information to drive business initiatives and decision-making.
Self-Service Data Platform: The IT team becomes responsible for delivering and maintaining a secure central platform, complete with intuitive self-service tools and sufficient capacity for every department to manage its growing portfolio of data products. Doing so empowers users to access data without relying on IT and enables them to quickly extract key insights as needed.
Federated Governance: To ensure quality and coherence in a decentralised environment, overarching data governance rules are established to shape the requirements each data product must satisfy. Each department must follow these standards to keep data integrity consistent across the data ecosystem, while still retaining the flexibility to manage its specific needs. The platform supports this practice by checking every data product for compliance before releasing it, preventing the deployment of non-compliant products (a minimal sketch of such a check follows this list).
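To make the last two principles more concrete, here is a minimal Python sketch of how a self-service platform might check a data product against federated governance policies before allowing deployment. The DataProduct fields and the policy functions are hypothetical illustrations made up for this example, not the API of any particular platform.

```python
from dataclasses import dataclass, field

# Hypothetical descriptor for a data product published by a domain team.
# Field names (owner, schema, classification, retention_days) are illustrative only.
@dataclass
class DataProduct:
    name: str
    domain: str            # e.g. "sales", "hr", "finance"
    owner: str             # accountable steward within the domain
    schema: dict           # column name -> data type
    classification: str    # e.g. "public", "internal", "pii"
    retention_days: int
    quality_checks: list = field(default_factory=list)

# Global (federated) policies every product must satisfy before release.
# Each returns an error message, or None if the product passes.
def require_owner(product: DataProduct):
    return None if product.owner else "product must name an accountable owner"

def require_schema(product: DataProduct):
    return None if product.schema else "product must publish a schema"

def limit_pii_retention(product: DataProduct):
    if product.classification == "pii" and product.retention_days > 365:
        return "PII products may not be retained for more than 365 days"
    return None

GLOBAL_POLICIES = [require_owner, require_schema, limit_pii_retention]

def validate(product: DataProduct, policies=GLOBAL_POLICIES) -> list:
    """Run every federated policy; an empty list means the product may be deployed."""
    return [msg for policy in policies if (msg := policy(product)) is not None]

if __name__ == "__main__":
    feedback = DataProduct(
        name="customer_feedback_scores",
        domain="marketing",
        owner="marketing-data-team",
        schema={"customer_id": "string", "score": "int", "submitted_at": "timestamp"},
        classification="internal",
        retention_days=730,
    )
    errors = validate(feedback)
    print("deploy" if not errors else f"blocked: {errors}")
```

The point of the sketch is the division of labour: the global policies are owned centrally and applied to every product, while each domain remains free to append its own local checks to the same validation step.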
Allaying fears is a crucial part of the process
When implemented correctly, removing the dependency on centralised systems and IT teams can truly transform the way organisations operate. However, introducing a data mesh can also raise fears and concerns relating to storage, duplication, management, and compliance, all of which must be addressed if it is to succeed. With decentralised data management, it’s also critical that everyone follows the same stringent set of rules, particularly regarding the creation, storage, and protection of data. If not, issues will quickly arise. Additionally, if any team leaders or department heads put their own tools or processes in place, the results may cause far more problems than they solve.
Trusting individuals to stick to data guidelines is too risky. Instead, adherence should be enforced in a way that ensures standards are followed, without impacting agility or frustrating users. This may sound impractical, but a computational governance approach can impose the necessary restrictions, while at the same time accelerating project delivery. Naturally, not everyone will be quick (or keen) to adjust, but with additional support and training even the most reluctant individuals can learn how to adopt a more entrepreneurial mindset.
Robust data governance is key
Sitting above an organisation’s data enablement and management tools, a computational governance approach should be technology-agnostic. It provides the capability needed to ensure every project follows pre-determined policies for compliance, security, quality, and more. These customisable governance policies enforce all relevant standards at both local and global levels, ensuring projects only go into full production once all requirements are met. Simply put, users can’t create new data that doesn’t meet the correct criteria.
Automated templates can be used to help data practitioners rapidly initiate new projects, facilitating data access while simultaneously ensuring compliance and security standards are met. The result is faster processes, along with consistent and reliable data quality standards on every project.
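As a rough illustration of that idea, the sketch below shows how a hypothetical project template might bake compliant defaults into every new data project and refuse to scaffold anything that breaks them. The template fields, required fields, and the initiate_project helper are assumptions made for this example only, not features of any specific product.

```python
import copy

# Hypothetical project template: a blueprint with compliant defaults that a
# domain team fills in via self-service. All names are illustrative only.
ANALYTICS_PRODUCT_TEMPLATE = {
    "classification": "internal",
    "retention_days": 365,
    "quality_checks": ["not_null:primary_key", "freshness:24h"],
    "access": {"audit_logging": True, "encryption_at_rest": True},
}

REQUIRED_FIELDS = ("name", "domain", "owner", "schema")

def initiate_project(template: dict, **overrides) -> dict:
    """Scaffold a new data project from a template, then verify it still
    satisfies the governance requirements before it is registered."""
    project = copy.deepcopy(template)
    project.update(overrides)

    missing = [f for f in REQUIRED_FIELDS if not project.get(f)]
    if missing:
        raise ValueError(f"project blocked, missing required fields: {missing}")
    if not project["access"]["audit_logging"]:
        raise ValueError("project blocked: audit logging may not be disabled")
    return project

# Example: HR spins up a compliant reporting project in one call.
report = initiate_project(
    ANALYTICS_PRODUCT_TEMPLATE,
    name="headcount_vs_payroll",
    domain="hr",
    owner="hr-analytics",
    schema={"employee_id": "string", "cost_centre": "string", "salary": "decimal"},
)
print(report["name"], "ready with checks:", report["quality_checks"])
```

Because the template carries the compliant defaults, a domain team only supplies what is specific to its project; everything the governance policies care about is already in place before the first line of pipeline code is written.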
Data mesh has the potential to truly revolutionise the way organisations approach data management and decision-making, providing substantial competitive advantages in the process. However, only when implemented alongside robust data governance can its full potential be realised.
About the Author
Andrea Novara is Engineering Lead | Banking & Payments Business Unit Leader at Agile Lab. Agile Lab creates value for its clients in data-intensive environments through customisable solutions that establish performance-driven processes, sustainable architectures, and automated platforms driven by data governance best practices. Since 2014, the company has implemented 100+ successful Elite Data Engineering initiatives and used that experience to create Witboost: a technology-agnostic, modular platform that empowers modern enterprises to discover, elevate, and productise their data, both in traditional environments and on fully compliant Data Mesh architectures.