Updates from McKinsey

Breaking through data-architecture gridlock to scale AI
Large-scale data modernization and rapidly evolving data technologies can tie up AI transformations. Five steps give organizations a way to break through the gridlock.
By Sven Blumberg, Jorge Machado, Henning Soller, and Asin Tavakoli – For today’s data and technology leaders, the pressure is mounting to create a modern data architecture that fully fuels their company’s digital and artificial intelligence (AI) transformations. In just two months, digital adoption vaulted five years forward amid the COVID-19 crisis. Leading AI adopters (those that attribute 20 percent or more of their organizations’ earnings before interest and taxes to AI) are investing even more in AI in response to the pandemic and the ensuing acceleration of digital.

Despite the urgent call for modernization, we have seen few companies successfully making the foundational shifts necessary to drive innovation. For example, in banking, while 70 percent of financial institutions we surveyed have had a modern data-architecture road map for 18 to 24 months, almost half still have disparate data models. The majority have integrated less than 25 percent of their critical data in the target architecture. All of this can create data-quality issues, which add complexity and cost to AI development and slow the delivery of new capabilities.

Certainly, technology changes are not easy. But often, we find the culprit is not technical complexity; it's process complexity. Traditional architecture design and evaluation approaches can paralyze progress: organizations overplan, overinvest in road-map designs, and spend months on technology assessments and vendor comparisons that go off the rails as stakeholders debate the right path in a rapidly evolving landscape. Once organizations have a plan and are ready to implement, their efforts are often stymied as teams struggle to bring these behemoth blueprints to life and put changes into production. Amid it all, business leaders wonder what value they're getting from these efforts.

Data and technology leaders no longer need to start from scratch when designing a data architecture. The past few years have seen the emergence of a reference data architecture that provides the agility to meet today's need for speed, flexibility, and innovation (Exhibit 1). It has been road-tested in hundreds of IT and data transformations across industries, and we have observed its ability to reduce costs for traditional AI use cases and enable faster time to market and better reusability of new AI initiatives.

Updates from McKinsey

How to build a data architecture to drive innovation—today and tomorrow
Yesterday’s data architecture can’t meet today’s need for speed, flexibility, and innovation. The key to a successful upgrade—and significant potential rewards—is agility.
By Antonio Castro, Jorge Machado, Matthias Roggendorf, and Henning Soller – Over the past several years, organizations have had to move quickly to deploy new data technologies alongside legacy infrastructure to drive market-driven innovations such as personalized offers, real-time alerts, and predictive maintenance.

However, these technical additions—from data lakes to customer analytics platforms to stream processing—have increased the complexity of data architectures enormously, often significantly hampering an organization’s ongoing ability to deliver new capabilities, maintain existing infrastructures, and ensure the integrity of artificial intelligence (AI) models.

Current market dynamics don’t allow for such slowdowns. Leaders such as Amazon and Google have been making use of technological innovations in AI to upend traditional business models, requiring laggards to reimagine aspects of their own business to keep up. Cloud providers have launched cutting-edge offerings, such as serverless data platforms that can be deployed instantly, enabling adopters to enjoy a faster time to market and greater agility. Analytics users are demanding more seamless tools, such as automated model-deployment platforms, so they can more quickly make use of new models. Many organizations have adopted application programming interfaces (APIs) to expose data from disparate systems to their data lakes and rapidly integrate insights directly into front-end applications. Now, as companies navigate the unprecedented humanitarian crisis caused by the COVID-19 pandemic and prepare for the next normal, the need for flexibility and speed has only amplified.
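The API pattern mentioned above, exposing data held in disparate systems through a single interface so insights can flow directly into front-end applications, can be sketched roughly as follows. This is a minimal illustration only: the source systems, field names, and the `get_customer_profile` function are hypothetical stand-ins, not drawn from the article.

```python
# Minimal sketch of an API layer that unifies data from disparate
# back-end systems behind one call. All system names and fields
# below are hypothetical examples.

def fetch_crm_record(customer_id):
    # Stand-in for a lookup against a legacy CRM system.
    return {"customer_id": customer_id, "name": "Jane Doe"}

def fetch_usage_metrics(customer_id):
    # Stand-in for a query against a data lake or analytics store.
    return {"customer_id": customer_id, "monthly_logins": 14}

def get_customer_profile(customer_id):
    """Single API response combining both back-end systems,
    so front-end applications never query them directly."""
    profile = fetch_crm_record(customer_id)
    profile.update(fetch_usage_metrics(customer_id))
    return profile

print(get_customer_profile("C-001"))
```

The point of the pattern is decoupling: front ends depend only on the unified interface, so either back-end system can be modernized or replaced without changing the consuming applications.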

To build a competitive edge, or even to maintain parity, companies will need a new approach to defining, implementing, and integrating their data stacks, leveraging both cloud (beyond infrastructure as a service) and new concepts and components.
