The two-part challenge preventing businesses from ‘crossing the data chasm’

There is an expanding chasm between data and value. An Accenture study of 190 executives in the United States found that only 32% of companies reported being able to realise tangible and measurable value from data, and only 27% said that their analytics projects produce insights and recommendations that are highly actionable.

In that report, Ajay Visal, Data Business Group Strategy Lead at Accenture Technology, said: “Companies are struggling to close the gap between the value that data makes possible and the value that their existing structures capture—an ever-expanding chasm we call ‘trapped value.’”

Unlocking this trapped value could mean changes to legacy business processes that release huge productivity gains. It could be the digitisation of paper-based systems and processes, improving accuracy and reducing paper waste. It could be more profound, giving modern organisations the capability to outmanoeuvre the competition with greater and more actionable insight. And a central area of this trapped value will be the ability for businesses to automate processes and harness artificial intelligence and machine learning – technologies that are certain to transform the way we do things.

But perhaps a more distinct point is business agility. We have seen unprecedented levels of uncertainty over the past five or so years: one shock after another, from Brexit to the pandemic, to the war in Ukraine, to energy prices and so on. Never have businesses experienced this kind of repeated shock over such a sustained period. Business now needs to be able to react to change like never before. This means adapting existing processes, introducing new ones, simplifying operations and building greater robustness into how they operate.

Data, and a business’s ability to access and use it, has a huge role to play in this. Overall, we see this data challenge in two key parts: (i) data accessibility at scale – the ability for individual organisations and operators to access the data they need, and (ii) the ability to use that data – being able to interact with data and derive insight from it simply and quickly.

Data accessibility at scale

Modern business is rapidly becoming more interconnected and, as a result, more fragmented – whether that means departments within a single organisation or separate organisations working together. The level of interdependency is growing rapidly, driven by many things: the complexity of modern business, market and competitive pressures, geographies, supply chains, and so on.

Today’s collaborative norm has huge advantages for those businesses and their customers, from quicker and richer supply chains to more advanced, comprehensive, and collaborative products and services that extend value to consumers.

But this interconnectivity also leads to more fragmentation. As businesses work more closely together, the need to share data, information and assets across a network of departments or organisations increases. These businesses are no longer single-domain: the data about their operations and performance – ultimately, the data they need access to – now resides across many different organisations or departments, in a myriad of different systems. This makes accessing the data one needs incredibly difficult. In fact, studies show data collection accounts for 20% of the time on a typical data project.

Ability to use that data

Couple the data accessibility challenge with the many different data types, formats and so on, and the ability for businesses and individuals to use that data becomes nigh on impossible.

The data accessibility challenge highlighted the many different organisations involved and the number of systems within which data resides. These systems will likely differ in data type, data format, latency, interfaces and so on. Studies show almost 80% of all data is unstructured. As a result, data and analytics professionals spend most of their time on data cleansing and processing – accounting for 50% of the time on a typical data project. Data cleaning is the process of putting data into consistent formats, removing duplicate records, ensuring there are no missing values and placing it in a structured form. This protects the number one rule in data science – garbage in, garbage out.
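To make those cleaning steps concrete, here is a minimal sketch in Python using pandas. The file and column names are hypothetical, used purely for illustration; real pipelines will of course vary with the data involved.

```python
import pandas as pd

# Load raw, inconsistently formatted records (hypothetical file and columns)
df = pd.read_csv("shipments_raw.csv")

# 1. Put data into consistent formats
df["delivery_date"] = pd.to_datetime(df["delivery_date"], errors="coerce")
df["site"] = df["site"].str.strip().str.lower()

# 2. Remove duplicate records
df = df.drop_duplicates(subset=["shipment_id"])

# 3. Handle missing values
df = df.dropna(subset=["shipment_id", "delivery_date"])
df["weight_kg"] = df["weight_kg"].fillna(df["weight_kg"].median())

# 4. Store the result in a structured, analysis-ready form
df.to_parquet("shipments_clean.parquet", index=False)
```

Even a simple pipeline like this consumes analyst time, which is exactly the cost the 50% figure above describes.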

The harder data is to use, the more time is spent cleansing and processing it rather than analysing it – and the bigger the gap between data and value becomes.

Crossing the data chasm

For businesses to unlock the trapped value in their data, they must cross the data chasm. Learn how Entopy can support businesses to achieve this with Intelligent Data Orchestration.