A decentralised approach to data can bypass silos and ensure organisational politics remain positive
Data sharing can be political. We know how important data is to a business, yet its use can cause internal conflict, and that is something businesses tend to steer away from.
Data sharing causes unrest largely because of data silos that have evolved organically; companies require an organisational structure – where lines are drawn between departments – to grow and prosper. That requirement lends credence to the suggestion that data silos are not necessarily 'bad'.
The technology each department procures is tailored to its specific needs, with its own data formats, rules and capabilities. These data systems ought to be treated as members of their respective teams, ensuring they are nurtured and cared for just like employees. Because the systems have clear ownership, the sources are well defined and well suited to the needs of their teams.
On the flip side, data silos create obstacles for organisations at the strategic level. When organisations try to build an overview across data silos, they commonly encounter variability in data structures, languages, access methods, data quality, and data governance rules across the different silos. This means they cannot get a clear picture, or a single source of truth. The same variability also means that valuable data – often unintentionally – is not shared with other departments for whom it is relevant.
To overcome issues with silos, organisations have attempted to pool their data into one place using data warehouses and data lakes. This centralisation effort was the most logical choice over the last few decades, and it resulted in huge improvements in data utilisation and monetisation. But with the increasing complexity of data within enterprises, and advances in technology, it has become apparent that these approaches have limitations.
Technical limitations include the difficulty of properly integrating data from mergers and acquisitions, tailoring data priorities to specific departments, and previewing and accessing data assets. Copying data into a central repository is also detrimental to the quality of the data that ends up being used. These limitations have fed the negative side of organisational politics, with many disillusioned users wondering why they should share their datasets in such an environment.
This leads us to the crux of the matter: data is power. Having access to the right data can help teams or individuals succeed with their initiatives. They are developing their data assets and want to make sure those assets will increase in business value.
Healthy competition between departments or personnel can lead to data sharing becoming political. For instance, a marketing team could hold a dataset that would help the sales team acquire leads, but opt not to share it in the hope of using it in another way they believe would have more of an impact on the business. Their assumption may be that the sales team would hinder this progress, and they may not want to relinquish the power they hold over the data.
Internal competitiveness is also prevalent among personnel in the same department or team, especially when the effective use of a dataset could be the edge required to win a promotion over a colleague.
There are also valid governance concerns: teams worry that, by sharing their data, there would be no clarity about who is using the data, how it is being used, and what it looks like. These concerns also raise the question, "am I, or are we, being credited for the data that has been used?" Data is a commodity, and teams may prefer to trade it within the company, cooperating to gain and uphold the power that data confers.
The lack of visibility into data consumption can also mean that departments spend valuable time trying to source the relevant data or building custom applications from scratch, unaware that they are duplicating the efforts of another department. Duplication is not political per se – but it can unleash unhealthy competition, as a team may see the duplication of its work, data and assets as a challenge. It can also lead to frustration and resentment, which increases political tension.
All of these issues lead to missed opportunities for better data governance, quality and usage. Imagine what teams could do if they could all share data freely, knowing when it has been used, how it has been used, and which data is most accurate.
Achieving this requires a new type of working culture in which data becomes democratised. This is best realised through the implementation of a Data Fabric, an approach based on multiple data management technologies working in tandem to streamline data ingestion and integration across a company's ecosystem.
The Data Fabric
It may seem that the obvious thing to do is to remove the root of the problem – in this case, the silos. But enterprises have already attempted this with data lakes and data warehouses, and the centralised approach to data has left them with an intractable problem. A new approach is required, one that acknowledges that these organisational silos are a good thing and should remain in place.
A decentralised Data Fabric approach bypasses – rather than removes – data silos, by virtually connecting the data required at any given time, meaning virtual data lakes can be created and disposed of on-demand, without impacting existing applications and infrastructure.
In place would be a data framework whereby each repository of data across the business is exposed and can be accessed – with the appropriate permissions. The integrity of the data would be upheld via data virtualisation, ensuring data assets are accessible from where they reside and that data duplication does not occur. Data points can be virtually viewed and analysed on demand, with no changes made to the original data, effectively enabling teams to use data from several different silos for any one project. The location of data is managed using a flexible metadata catalogue, allowing users to keep track of what data appears where, and what its relevance is for any type of usage.
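To make the idea concrete – and purely as an illustration, not any particular platform's API – here is a minimal Python sketch in which a metadata catalogue tracks where datasets live and a virtual view reads them in place, on demand. The silo names, fields and reader functions are hypothetical stand-ins for real connectors.

```python
# Minimal sketch of a metadata catalogue plus an on-demand virtual view.
# Assumption: the silos, fields and reader callables below are hypothetical;
# a real Data Fabric would use proper connectors, query engines and governance.
from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class CatalogueEntry:
    """Metadata describing where a dataset lives and how to read it."""
    name: str
    owner: str                        # the team that continues to steward its silo
    location: str                     # e.g. "crm.contacts" or "sales_db.leads"
    reader: Callable[[], List[dict]]  # fetches rows on demand, in place
    tags: List[str] = field(default_factory=list)


class MetadataCatalogue:
    """Tracks which data lives where, without ever moving it."""

    def __init__(self) -> None:
        self._entries: Dict[str, CatalogueEntry] = {}

    def register(self, entry: CatalogueEntry) -> None:
        self._entries[entry.name] = entry

    def find(self, tag: str) -> List[CatalogueEntry]:
        return [e for e in self._entries.values() if tag in e.tags]


def virtual_view(entries: List[CatalogueEntry]) -> List[dict]:
    """Builds a disposable view over several silos at query time.

    Rows are read from their source systems when requested and are not
    persisted anywhere, so the original data and its ownership stay untouched.
    """
    rows: List[dict] = []
    for entry in entries:
        for row in entry.reader():
            rows.append({**row, "_source": entry.location, "_owner": entry.owner})
    return rows


if __name__ == "__main__":
    catalogue = MetadataCatalogue()
    # Hypothetical silos; in practice the readers would wrap database or API connectors.
    catalogue.register(CatalogueEntry(
        name="marketing_contacts", owner="Marketing", location="crm.contacts",
        reader=lambda: [{"email": "a@example.com", "segment": "enterprise"}],
        tags=["customer"]))
    catalogue.register(CatalogueEntry(
        name="sales_leads", owner="Sales", location="sales_db.leads",
        reader=lambda: [{"email": "b@example.com", "stage": "qualified"}],
        tags=["customer"]))

    # A virtual "customer" lake, assembled on demand and discarded after use.
    for row in virtual_view(catalogue.find("customer")):
        print(row)
```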
Enterprises seeking a 360-degree view of a customer, for example, would then be able to further enrich the profile they have built by seamlessly using metadata, along with structured and unstructured data from open-source intelligence and paid-for sources. This dynamic method ensures that the profile built is a true reflection of a customer, client or other entity.
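A minimal sketch of this kind of on-demand enrichment follows, assuming a hypothetical internal record and hypothetical external feeds; a real profile would draw on far richer structured and unstructured sources.

```python
# Sketch of layering external attributes onto an internal customer record.
# Assumption: the field names and sources are illustrative only.
from typing import Dict, List


def enrich_profile(base: Dict[str, str], external_sources: List[Dict[str, str]]) -> Dict[str, str]:
    """Builds an enriched profile without altering the original records."""
    profile = dict(base)  # work on a copy; source data stays untouched in its silo
    for source in external_sources:
        for field_name, value in source.items():
            profile.setdefault(field_name, value)  # internal data takes precedence
    return profile


if __name__ == "__main__":
    internal = {"company": "Acme Ltd", "account_owner": "alice"}
    open_source_intel = {"company": "Acme Ltd", "industry": "manufacturing", "employees": "250"}
    paid_provider = {"credit_rating": "A-"}
    print(enrich_profile(internal, [open_source_intel, paid_provider]))
```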
The Data Fabric is a more pragmatic, budget-scaled alternative to consolidating data sources and providers. It enables a single point of visibility over data flows across the enterprise, so that teams can act on the data in various ways, while restricting access at a granular level, enforcing control decisions more effectively, and recording who has accessed the data and when. This prevents data being intentionally or unintentionally inaccessible to those who need it, and gives users confidence that the information will not be corrupted.
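To illustrate the principle of granular access control with an audit trail, here is a minimal sketch assuming a hypothetical policy table; in production a fabric would delegate this to dedicated policy engines and identity providers.

```python
# Sketch of per-dataset permissions plus an audit log of every access attempt.
# Assumption: the policy mapping and role names are illustrative only.
from datetime import datetime, timezone
from typing import Dict, List, Set


class AccessController:
    """Grants dataset-level permissions and records who accessed what, and when."""

    def __init__(self, policies: Dict[str, Set[str]]) -> None:
        self._policies = policies        # dataset name -> roles allowed to read it
        self.audit_log: List[dict] = []

    def read(self, dataset: str, user: str, role: str) -> bool:
        allowed = role in self._policies.get(dataset, set())
        # Every attempt is logged, so data owners can see exactly how their data is used.
        self.audit_log.append({
            "dataset": dataset,
            "user": user,
            "role": role,
            "allowed": allowed,
            "at": datetime.now(timezone.utc).isoformat(),
        })
        return allowed


if __name__ == "__main__":
    controller = AccessController({"marketing_contacts": {"marketing", "sales"}})
    controller.read("marketing_contacts", "alice", "sales")    # allowed
    controller.read("marketing_contacts", "bob", "finance")    # denied, but still logged
    for event in controller.audit_log:
        print(event)
```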
Once the data framework is in place, the development of applications built upon this data can be standardised into reusable components; this makes it easier to replicate successes in other departments, effectively enabling companies to become composable businesses that can adapt more easily to market changes.
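As a rough illustration of that composability, the sketch below assumes a few hypothetical, reusable pipeline steps that different departments can chain in whichever order suits them, rather than rebuilding the logic from scratch each time.

```python
# Sketch of composing shared, reusable steps into department-specific pipelines.
# Assumption: the steps and field names are illustrative only.
from typing import Callable, List

Rows = List[dict]
Step = Callable[[Rows], Rows]


def compose(*steps: Step) -> Step:
    """Chains reusable steps into a single pipeline function."""
    def pipeline(rows: Rows) -> Rows:
        for step in steps:
            rows = step(rows)
        return rows
    return pipeline


def deduplicate(rows: Rows) -> Rows:
    """Keeps the first record per email address."""
    seen, unique = set(), []
    for row in rows:
        if row.get("email") not in seen:
            seen.add(row.get("email"))
            unique.append(row)
    return unique


def enterprise_only(rows: Rows) -> Rows:
    """Keeps only enterprise-segment records."""
    return [r for r in rows if r.get("segment") == "enterprise"]


if __name__ == "__main__":
    rows = [
        {"email": "a@example.com", "segment": "enterprise"},
        {"email": "a@example.com", "segment": "enterprise"},
        {"email": "b@example.com", "segment": "smb"},
    ]
    # Sales reuses both steps; another team could compose them differently.
    sales_pipeline = compose(deduplicate, enterprise_only)
    print(sales_pipeline(rows))
```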
A new culture
A Data Fabric approach doesn't mean enterprises have to do away with their existing centralised data lakes or warehouses; instead, these can be accessed as data repositories in their own right, as part of the Data Fabric. Enterprises can benefit from a Data Fabric through an application or platform that has already embedded the approach into its backbone.
Technology, however, is never a silver bullet; enterprises that opt for this approach should also be transparent with their users about how it will benefit them and the company as a whole. This will help the business transition to smarter organisational politics than has existed for so long, making internal data sharing more about sharing and benefiting from power, and, as a result, collaborating for the greater good of the organisation.
About the Author
Lior Perry is Chief Engagements Officer at BlackSwan Technologies. He has over 25 years' experience as an executive leader of innovation, development and delivery of products for Global 2000 enterprises across a wide range of industries. He has in-depth knowledge of intelligence, big data analytics, SaaS/PaaS, finance, communications, and eCommerce markets.