The future isn’t the cloud, it’s multicloud. Matthew Yeh shares key data challenges IT teams must overcome when mapping a multicloud strategy.
Oct 14, 2019
In today’s app-driven world, organizations are adopting cloud-based applications, platforms, and services to achieve greater elasticity and faster delivery times. In fact, 98 percent of organizations expect to operate within a multicloud environment by 2021, according to IBM. Of course, every enterprise is different, and there are varying reasons, priorities, and use cases when considering the move to multicloud.
As the name suggests, adopting a multicloud strategy means using multiple cloud services from different providers, with workloads spread out across cloud environments. The emergence of multicloud aligns well with the trend in application architectures towards the deployment of loosely coupled components that, together, constitute a cohesive unit.
Compared to monolithic approaches, leveraging microservices and APIs spread out across multiple clouds affords organizations far greater flexibility, more opportunities for reuse, and extended reach to choose the best available cloud services. But not every team, business function, or application workload will have similar requirements with regard to performance, privacy, security, or geographic reach for its cloud environment.
While most businesses require rapid and flexible access to computing, storage, and networking resources that can meet the needs of legacy applications and modern cloud innovations, not all enterprises are pursuing multicloud for the same reasons. Intentions may be motivated by regulatory concerns, by a hedging strategy to avoid unwanted vendor lock-in, or by the goal of building an optimal business solution from best-of-breed services.
Whatever the impetus, companies must build a culture that puts data front and center in a world where every company is becoming a data company. This leads us to our main points: understanding key data challenges and getting your data ready for multicloud environments.
Migrating large volumes of data to the cloud or across clouds is not an overnight process or a one-time trip; it’s quite the opposite. And once that data has been delivered, provisioning or refreshing data for consumption by developers, testers, or cloud services is often a slow and iterative process, involving multiple teams.
The fact that data is distributed across multiple databases, including relational and non-relational, as well as a number of locations (on-premises and multiple clouds) complicates workflows that can already take days, weeks, or even months.
A confluence of factors – including data security, privacy, quality, and access, among many others – makes data management extremely hard. Hence, organizations need to pull together a seamless data management strategy that integrates disparate cloud services and automates the fast, secure movement of data across their cloud ecosystems in order to fully realize the benefits of going multicloud.
Although 98 percent of organizations expect to operate within a multicloud environment by 2021, just 38 percent have in place the procedures and tools needed to operate in such an environment. To build a multicloud data fabric, data teams need to bring in new technologies to enable the rapid, automated, and secure management of data in the cloud.
There are a number of data-related challenges that come with multicloud adoption.
Heterogeneous data sources: Multicloud architectures often encompass a distributed set of applications or services that run on a diverse set of underlying data sources, including RDBMS, NoSQL, or PaaS offerings native to specific clouds. Businesses need the tooling (as well as the talent) to use these various sources and harmonize their operations.
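Harmonizing heterogeneous sources often comes down to a thin access layer that normalizes records from each store. Here is a minimal, hypothetical sketch: SQLite stands in for any RDBMS, a plain dictionary stands in for a NoSQL document store, and the `customers` schema is purely illustrative.

```python
import sqlite3

class RelationalSource:
    """Wraps a relational store (SQLite stands in for any RDBMS)."""
    def __init__(self):
        self.conn = sqlite3.connect(":memory:")
        self.conn.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
        self.conn.execute("INSERT INTO customers VALUES (1, 'Acme')")
        self.conn.commit()

    def fetch(self, customer_id):
        row = self.conn.execute(
            "SELECT id, name FROM customers WHERE id = ?", (customer_id,)
        ).fetchone()
        return {"id": row[0], "name": row[1]} if row else None

class DocumentSource:
    """Stands in for a NoSQL document store native to another cloud."""
    def __init__(self):
        self.docs = {1: {"id": 1, "name": "Acme", "tier": "gold"}}

    def fetch(self, customer_id):
        return self.docs.get(customer_id)

def fetch_customer(sources, customer_id):
    """Query each source and merge the results into one normalized record."""
    record = {}
    for source in sources:
        doc = source.fetch(customer_id)
        if doc:
            record.update(doc)
    return record

sources = [RelationalSource(), DocumentSource()]
merged = fetch_customer(sources, 1)  # one record drawn from both stores
```

The point of the sketch is the shape of the abstraction, not the stores themselves: each source answers the same `fetch` contract, so the consuming application never needs to know which cloud holds which fields.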
Slow, manual migration processes: Securely migrating large volumes of data to cloud providers can take weeks, and sometimes even involves the shipment of a physical appliance. Moreover, applications may have requirements to keep data across clouds synchronized or fresh, meaning data movement across clouds is not a one-time migration. It must be continuous.
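Continuous movement is typically incremental: after the initial bulk load, only changes past a watermark are shipped. The following is a hypothetical sketch of that pattern, with an in-memory change log standing in for a real change-data-capture feed.

```python
class ChangeLog:
    """In-memory stand-in for a source database's change feed."""
    def __init__(self):
        self.entries = []  # (sequence_number, record)
        self.seq = 0

    def append(self, record):
        self.seq += 1
        self.entries.append((self.seq, record))

def sync_incremental(source_log, target, watermark):
    """Apply only changes newer than the last-applied watermark."""
    for seq, record in source_log.entries:
        if seq > watermark:
            target[record["id"]] = record
            watermark = seq
    return watermark

source = ChangeLog()
target = {}
source.append({"id": "a", "value": 1})
watermark = sync_incremental(source, target, 0)          # initial load
source.append({"id": "a", "value": 2})                   # change at the source
watermark = sync_incremental(source, target, watermark)  # only the delta moves
```

Returning the new watermark makes each sync pass resumable, which is what turns a one-time migration into an ongoing replication loop.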
Slow data provisioning for testing: Many multicloud scenarios create requirements for rapid, iterative testing that drive demand for test data availability. For instance, when migrating application workloads from one cloud to another, IT teams need new data environments for validation and cutover rehearsal. Or if QA teams are testing composite applications whose components or services are distributed across multiple clouds, they will need data environments (potentially derived from multiple data sources) for integration testing.
Visibility: Teams need visibility into data environments across clouds from a single point that establishes who has access to what data in the cloud. It’s critical to know what data resides where, and to be able to control its access, movement, and retention in an expanded cloud environment. Hence, the processes and tools for understanding and controlling data environments must be standardized in a way that works across multiple clouds.
Securing data in the cloud: The surface area of risk for sensitive data potentially increases with multicloud, and the number of people with access to data may increase as well. Teams need an approach for finding and securing confidential information — to mitigate the risk of breach and stay compliant with privacy regulations such as the GDPR, CCPA, and HIPAA.
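A standard way to secure confidential fields before they leave a controlled environment is deterministic masking: sensitive values are replaced with irreversible tokens, while the same input always yields the same token so joins across datasets still work. A minimal sketch, in which the field names (`email`, `ssn`) are illustrative assumptions:

```python
import hashlib

SENSITIVE_FIELDS = {"email", "ssn"}  # hypothetical field names to mask

def mask_value(value):
    """Replace a sensitive value with a deterministic, irreversible token."""
    return hashlib.sha256(value.encode()).hexdigest()[:12]

def mask_record(record):
    """Mask only the sensitive fields; leave everything else untouched."""
    return {
        k: mask_value(v) if k in SENSITIVE_FIELDS else v
        for k, v in record.items()
    }

row = {"id": 7, "name": "Ann", "email": "ann@example.com", "ssn": "123-45-6789"}
masked = mask_record(row)
```

Because the tokens are deterministic, two records for the same person still match after masking, which is useful when masked data feeds the test-provisioning and sync workflows described above. (A production scheme would add a secret salt so tokens cannot be recomputed from guessed inputs.)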
The multicloud era is upon us. On average, organizations are leveraging almost five clouds across both public and private. While the promise of greater operational efficiency, productivity, agility, flexibility, and profitability is luring more organizations to adopt the cloud, IT leaders must address data-related challenges first in order to successfully build and manage their multicloud strategy.