Glossary

What is Data Virtualization?

Data virtualization decouples the database layer that sits between the storage and application layers in the application stack. Just as a hypervisor sits between the physical server and the OS to create virtual servers, database virtualization software sits between the database and the OS to abstract, or virtualize, the underlying data store resources.

Because database resources are virtualized, virtual copies require a much smaller storage footprint than the source database. Instead of creating and moving new blocks of data, virtual data copies use pointers to existing data blocks, providing high-performance access to data already in place.
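The pointer mechanism described above is essentially copy-on-write. The following is a minimal illustrative sketch (not any vendor's implementation): a virtual copy is just a list of pointers into a shared block pool, so cloning costs almost nothing, and only blocks written after the clone consume new storage.

```python
# Sketch of pointer-based virtual copies (copy-on-write).
# Names and structure are illustrative only.

class BlockStore:
    """Shared pool of immutable data blocks."""
    def __init__(self):
        self.blocks = []

    def put(self, data):
        self.blocks.append(data)
        return len(self.blocks) - 1  # pointer (index) to the new block


class VirtualCopy:
    def __init__(self, store, block_map):
        self.store = store
        self.block_map = list(block_map)  # pointers only, no data copied

    def clone(self):
        # Provisioning a copy is O(number of pointers), not O(data size).
        return VirtualCopy(self.store, self.block_map)

    def read(self, i):
        return self.store.blocks[self.block_map[i]]

    def write(self, i, data):
        # Copy-on-write: only the changed block allocates new storage.
        self.block_map[i] = self.store.put(data)


store = BlockStore()
source = VirtualCopy(store, [store.put(b"block-%d" % i) for i in range(4)])
copy = source.clone()        # near-instant, near-zero extra storage
copy.write(2, b"changed")    # exactly one new block allocated
```

Note that the source stays untouched after the write: the copy diverges only at the single block it modified, which is why many writable copies can coexist with minimal storage overhead.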

Data virtualization provides the ability to securely manage and distribute policy-governed virtual copies of production-quality datasets. Regardless of the underlying database management system (DBMS) or the location of the source database, data virtualization technology creates block-mapped virtual copies for rapid, controlled distribution, while maintaining a minimal storage footprint no matter how many copies are in use.

Why Virtualize Data?

The speed of innovation and ability to adapt to rapidly changing market trends rests on the agility of your release cycle and the ability to quickly diagnose, triage, and fix errors. Data virtualization is the critical lever used by forward-thinking enterprises to provision production-quality data to dev and test environments on demand or via APIs.

Virtual data copies are fully readable and writable, and can be provisioned or torn down in minutes, eliminating development's reliance on slow, serial ticketing systems and DBA involvement for initial data delivery as well as for data refreshes after destructive testing.
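Self-service provisioning typically happens through a REST API so that a developer or CI job can request a fresh copy without filing a ticket. The sketch below builds the kind of JSON request body such a call might carry; the field names and workflow are hypothetical, not a real vendor API.

```python
# Hypothetical provisioning request for a data-virtualization platform.
# All endpoint/field names here are illustrative assumptions.
import json

def provision_request(source_db, snapshot, target_env):
    """Build the JSON body a developer (or CI pipeline) would POST to
    a hypothetical /virtual-databases endpoint."""
    return json.dumps({
        "source": source_db,      # production-quality source dataset
        "snapshot": snapshot,     # point-in-time to clone from
        "target": target_env,     # dev/test environment to attach to
        "access": "read-write",   # virtual copies are fully writable
    })

body = provision_request("erp-prod", "2024-06-01T00:00:00Z", "qa-sprint-42")
```

Because the request references a snapshot rather than triggering a physical copy, the same call pattern works for refreshes after destructive testing: tear down the old copy and re-provision from the latest snapshot in minutes.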

Data virtualization technology facilitates data delivery across all phases of application development, including testing, release, and production fix. Traditionally, IT organizations rely on a request-fulfill model, in which developers and testers often find their requests queuing behind others. Because it takes significant time and effort to create a copy of test data, it can take days or even weeks to provision or refresh data for a test environment. This creates massive wait states in the software delivery life cycle, slowing the pace of application delivery.

To keep pace with a faster release cadence, dev and test teams are forced to work with a stale copy of data because refreshing test data takes too long. This can result in missed test cases and ultimately data-related defects escaping into production.

Common Use Cases & Systems Used With Data Virtualization Technology

  • DevOps: For teams transforming app-driven customer experiences, often everything is automated except the data. Data virtualization enables teams to deliver production-quality data to enterprise stakeholders across all phases of application development.
  • ERP Upgrades: Over half of all ERP projects run past schedule and budget. The main reason? Standing up and refreshing project environments is slow and complex. Data virtualization can cut complexity, lower TCO, and accelerate projects by delivering virtual data copies to ERP teams more efficiently than legacy processes.
  • Cloud Migration: Data virtualization technology can provide a secure and efficient mechanism to replicate TB-scale datasets from on-premises systems to the cloud, before spinning up the space-efficient data environments needed for testing and cutover rehearsal.
  • Analytics and Reporting: Virtual data copies can provide a sandbox for destructive query and report design, and facilitate on-demand data access across sources for BI projects that require data integration (MDM, M&A, global financial close, etc.).
  • Backup and Production Support: In the event of a production issue, the ability to provision complete virtual data environments can help teams identify root cause and validate that any change does not cause unanticipated regressions.

Data Virtualization Capabilities

By virtualizing data, software teams gain:

  • Enterprise-Grade Distribution: Provision lightweight virtual database copies in minutes (depending on file types and sizes) via UI or API, scaling with your agile development goals.
  • Built for Scale: Replicate data from production to multiple non-production environments at scale, on premises or in the cloud. Teams can provision virtual databases as needed without taxing storage.
  • Data Governance: Put your InfoSec department at ease with data controls that govern who can do what, where, and when with specific datasets. Combined with best-in-class security, consistent data-masking policies, and robust auditing, data virtualization becomes a security asset.
  • Cost Savings: Maximize testing throughput while minimizing storage use. Virtual dataset provisioning, destruction, refresh, and rewind give application testers new tools to maximize testing throughput at virtually no additional storage cost.
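The storage savings in the cost point above can be made concrete with back-of-envelope arithmetic. The numbers below are illustrative assumptions (a 2 TB source, ten copies, roughly 5% of blocks changed per copy), not measured results:

```python
# Illustrative storage comparison: ten full physical copies of a
# database vs. ten virtual copies that share source blocks and store
# only their own changed (copy-on-write) blocks. Numbers are assumed.
source_tb = 2.0       # size of the source database, in TB
copies = 10           # number of dev/test environments needed
change_rate = 0.05    # assumed fraction of blocks each copy modifies

physical_tb = copies * source_tb                # full duplicates
virtual_tb = copies * source_tb * change_rate   # per-copy deltas only

print(physical_tb)  # 20.0 TB for physical copies
print(virtual_tb)   # 1.0 TB for virtual copies
```

Under these assumptions, ten environments cost 1 TB of deltas instead of 20 TB of duplicates, and the gap widens as more copies are provisioned, since each new virtual copy adds only its own changed blocks.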

Learn how to bring data agility to your entire enterprise with the Delphix Data Platform.