The Data-Centric Enterprise: Where Revolutions Converge

Recent months have seen an explosion of articles about the generative-AI revolution and its effect on the economy and society. In fact, though, many other ongoing revolutions are almost as significant, albeit not as well known. And as these revolutions converge, their collective effect may be far greater than what AI will achieve alone. We believe the data-centric enterprise is the convergence point for all these developments. In this article we’ll explain what a data-centric enterprise is and describe the revolutions converging into it, which are shaping how organizations use their data to drive innovation and value. We believe this convergence enables a far more powerful – and perhaps more frightening – transformation than any single factor such as generative AI alone.

Classical enterprises today are mainly application-centric: their business capabilities are enabled by processes, and large, often monolithic applications enable those processes. Here it’s fair to say the applications sit like kings, and IT landscapes are filled with servants that extract data owned by one kingdom, transform it, and serve it to another. Enterprise architecture teaches us to focus on data first, but until now enterprises have bought off-the-shelf applications, not off-the-shelf data, thus putting the application at the center of the enterprise.

Software-as-a-service, together with an explosion of standard interfaces and connectors, is now giving rise to the data-centric enterprise. Using our kingdom analogy, a data-centric enterprise is a collective or a kibbutz, where data is not subservient to any one king. Rather, data is a shared resource and primary value source, just like the land or machinery in a kibbutz. Each community member (department) brings its own special skill to work with the data, generating insights (harvest) that benefit the entire enterprise (collective). Most importantly, data is no longer locked into one kingdom; instead, it can be freely moved and utilized where it can bring the most value to the community.

How do you build a data-centric enterprise?

You start by viewing data as the central resource – the land or machinery that empowers the collective. And you design integrated IT and business architectures so this data maintains consistency and reliability, and above all accessibility, so it can flow freely to where it’s needed. A key point is to recognize that data is a business asset, not an IT asset. The business must take ownership of its data and provide stewardship to ensure quality and consistency – a role for which the business, not IT, is ideally suited. IT’s role is then no longer about content but about accessibility: ensuring the data can be shared freely between domains.

Revolution 1: the data mesh

While there are many ways to enable a data-centric enterprise, the data-mesh model has enjoyed particular success in recent years. In this “data-as-a-product” approach, the business domains (not IT) make their data available as products for use by other domains. The essential ingredients are data governance (the rules for how members of the collective treat their shared resources) and data catalogs, so everyone knows what the data products are and who their owners and stewards (caretakers) are. While the business focuses on the content (crops), the data-sharing infrastructure (irrigation and roads) is owned and operated by IT.
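To make the roles concrete, here is a minimal sketch of a data-mesh-style catalog entry. All names (DataProduct, Catalog, the example domains, owners and schema) are hypothetical, chosen only to illustrate the ingredients the data-mesh model calls for: a published product, an accountable owner, a steward, and a catalog where other domains can discover them.

```python
from dataclasses import dataclass

@dataclass
class DataProduct:
    name: str     # the "crop" a domain offers to the collective
    domain: str   # the business domain that owns the content
    owner: str    # accountable business owner
    steward: str  # caretaker responsible for quality
    schema: dict  # published contract other domains rely on

class Catalog:
    """Central catalog: everyone can discover who offers what."""
    def __init__(self):
        self._products = {}

    def register(self, product: DataProduct):
        self._products[product.name] = product

    def lookup(self, name: str) -> DataProduct:
        return self._products[name]

# The sales domain publishes its data as a product.
catalog = Catalog()
catalog.register(DataProduct(
    name="monthly-orders",
    domain="sales",
    owner="head-of-sales",
    steward="sales-data-steward",
    schema={"order_id": "str", "amount": "float", "month": "str"},
))

# Any other domain can now discover the product and its contract.
product = catalog.lookup("monthly-orders")
print(product.owner, list(product.schema))
```

The point is the separation of concerns: the business domain fills in the content and accountability fields, while IT would own the catalog service itself – the “irrigation and roads.”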

Revolution 2: citizen development and the democratization of data

Low-code/no-code (LC/NC) refers to technologies (such as Microsoft Power Apps, now also integrated into most large SaaS platforms such as salesforce.com) that put programming power into the hands of non-IT specialists in the business – so-called “citizen developers.” By most industry accounts, over 70% of all application development will soon be carried out with LC/NC. Almost daily, large enterprises announce “LC/NC-first” strategies that displace traditional software development. Critical success factors for this approach include centers of excellence and rigorous attention to governance, to ensure security and prevent sprawl.

One consequence of this approach may be increasing volatility of applications in the enterprise. Rather than creating permanent legacy apps that persist for years and often become outdated, LC/NC enables rapid development for short-term needs. Such “ephemeral apps” can be built for a specific project or purpose, then retired or reworked when needs change. In this way, the enterprise can avoid the accumulation of outmoded legacy systems that often hinder advancement and efficiency.

Revolution 3: artificial intelligence and generative AI

AI technology is crucial for processing large volumes of data and extracting their full value, creating new insights and improving decision making within the data-centric enterprise. Niche companies providing these services for specialized domains are proliferating. But it is the ability of any enterprise – or any individual – to access this power from the cloud computing providers (such as AWS, Microsoft Azure, Alibaba Cloud, Google Cloud Platform or HPE GreenLake) that lowers the barrier to adoption. Increasingly, thanks to LC/NC, this AI power is put directly into the hands of citizen developers in the business, rather than being siloed within the IT department.

Revolution 4: DataOps

Just as DevOps is an agile methodology that increases productivity by integrating application development with operations, DataOps is an agile approach that reduces the time to deliver data analytics and improves their quality. Like DevOps it relies on collaboration, governance, automation, and so-called continuous integration and delivery (CI/CD) of “data pipelines,” the processes by which data is ingested, transformed, and prepared for analysis. By ensuring a smooth and efficient flow of high-quality data through the organization, DataOps allows the data-centric enterprise to be more agile and responsive, reducing the time from data to insight to business action.
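A minimal sketch of what a DataOps pipeline stage can look like, with an automated quality gate built in. The stage names and validation rules here are hypothetical; the idea they illustrate is that checks run inside the pipeline on every run – CI/CD for data – rather than as a manual afterthought.

```python
def ingest(raw_rows):
    """Ingest: accept whatever arrives from the source system."""
    return list(raw_rows)

def transform(rows):
    """Transform: normalize field names and types."""
    return [{"customer": r["cust"].strip().lower(),
             "revenue": float(r["rev"])} for r in rows]

def quality_gate(rows):
    """Automated check: fail the pipeline run on bad data."""
    problems = [r for r in rows if r["revenue"] < 0 or not r["customer"]]
    if problems:
        raise ValueError(f"{len(problems)} rows failed validation")
    return rows

raw = [{"cust": " Acme ", "rev": "120.5"}, {"cust": "globex", "rev": "99"}]
clean = quality_gate(transform(ingest(raw)))
print(len(clean))  # 2
```

In a real DataOps setup the same gate would run in the delivery pipeline, so a bad upstream change is caught before it ever reaches the analysts.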

Revolution 5: privacy-enhancing technologies (PETs)

PETs are the technologies behind a “privacy-by-design” approach that embeds privacy protections into data processes, maximizing data utility while minimizing risk. They range from well-known approaches such as encryption, anonymization and pseudonymization to newly developing technologies at the cutting edge. Secure multi-party computation (SMPC) lets multiple parties maintain privacy and sovereignty over their own data even while participating in computations involving each other’s data. Homomorphic encryption allows data to be processed while remaining encrypted. In data-centric enterprises the use of PETs is crucial, and as the amount of data being generated and processed grows, so does their importance.

Revolution 6: edge computing

As more devices are connected to the Internet of Things (IoT), there is a growing trend to process data where it is generated, i.e. moving the “cloud” and its algorithms to the data at the edge instead of moving the data into the cloud. This enables new functions and insights by reducing latency and bandwidth, allowing for decentralized real-time analytics and services. There are many enablers here, such as the proliferation of IoT devices and advances in 5G and other high-speed networking technologies, including Wi-Fi 6 and satellite internet (such as OneWeb and SpaceX). The cloud providers (all those mentioned above) are investing heavily in city-based edge computing resources in addition to their traditional region- and country-level cloud offerings, bringing the cloud closer to data sources and users. Companies such as HPE are making the edge the new cloud – with the cloud no longer being a place but an experience. This decentralized approach is a paradigm shift in data management, since it allows data-centric organizations to utilize data efficiently regardless of where it’s located.
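The bandwidth argument can be sketched in a few lines: instead of streaming every raw sensor reading to the cloud, an edge device summarizes a window of readings locally and ships only the summary. The payload shape and figures below are hypothetical, purely to show the reduction.

```python
def summarize_window(readings):
    """Edge-side pre-aggregation: reduce a window of raw readings
    to one compact summary before sending anything upstream."""
    return {
        "count": len(readings),
        "min": min(readings),
        "max": max(readings),
        "mean": sum(readings) / len(readings),
    }

# 1,000 raw temperature readings become a single four-field message.
window = [20.0 + (i % 7) * 0.1 for i in range(1000)]
payload = summarize_window(window)
print(payload["count"])  # 1000
```

A thousand messages collapse into one, which is exactly the latency and bandwidth saving that makes decentralized real-time analytics practical.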

Where revolutions converge

We began by discussing the revolution in generative AI and its power to disrupt the economy and society. But as we have seen, many other revolutions – the rise of the data mesh, low-code/no-code platforms, DataOps, edge computing, and privacy-enhancing technologies – are unfolding more quietly in the background. Together with the key design principle of data-centric architecture, these revolutions can mature quickly and coalesce, empowering the data-centric enterprise. This is a transformation that promises to have a collective impact far greater than what AI will bring alone.

Creative Commons Licence

AUTHOR: Kenneth Ritley

Kenneth Ritley is Professor of Computer Science at the Institute for Data Applications and Security (IDAS) at BFH Technik & Informatik. Born in the USA, Ken Ritley has had an international career in IT. He has held senior leadership roles in several Swiss companies, such as Swiss Post Solutions and Sulzer, and has built up offshore teams in India and nearshore teams in Bulgaria, among others.

AUTHOR: Stefan Brock

Stephan Brock is Enterprise Architecture and ICT Operations Manager at Hewlett Packard Enterprises in Zurich.
