Accelerating DevOps via Data Virtualization | Delphix
I hate buzzwords as much as the next bitter, twisted old man, but the idea behind DevOps, building, testing, and releasing software more rapidly and reliably, is simply amazing and utterly necessary. As system complexity increases, application functionality balloons, and the cost of production downtime skyrockets, rapid development and testing of code is crucial to reaching the promised land of published and deployed production code.

DevOps visionary Gene Kim says that the biggest barrier to that promised land is how quickly you can provision data for application development and testing: in the form of cloned, masked production databases, and in the form of application stacks cloned from production systems. In many enterprises, the time spent waiting for data on which to develop or test dwarfs the time spent developing or testing code. It can take weeks to provision the systems and software required for development and testing teams to get started, and a day or more to reset environments for additional tests. Keeping the data refreshed is another challenge: in practice, only occasional refreshes are possible, resulting in seriously inadequate dev/test systems.

Every bug that slips past testing into production raises the risk of serious costs from downtime, lost revenue, and brand damage. All because of constraints on how quickly DevOps teams can get the data and environments they need to develop and test code.
It doesn't have to be this way.
There is a way to provide near-instant provisioning of data for development and testing: data virtualization. Data virtualization breaks through the constraint on data. In the same way that server virtualization, with tools from VMware and OpenStack that allow almost unlimited numbers of virtual servers, let IT break out of the constraints of provisioning physical servers, data virtualization does the same for enterprise production data.

Data and storage were largely untouched by the ten-year virtualization of the data center. That is no longer true: data virtualization has emerged as a white-hot nova in terms of investment in new companies, and the enterprise has realized that it is a vital component of business agility. Modern file-system and data virtualization technologies, such as those from Delphix, transform the old and very painful way of copying data and making it available to different teams. Once a file system can share data at the block level, compress it, and de-duplicate it, making copies of databases and file-system directories becomes easy and almost instantaneous.

Consider this: what if every individual developer and tester could have their own private full systems stack? Or several of them, one or more for each task they're working on? I can telepathically hear everyone thinking:
Nobody has that much infrastructure, you idiot!
And that is the point. I will show you that you certainly do have the infrastructure; you just don't have the right infrastructure. The above is an extract from my recent presentation, Accelerating DevOps Using Data Virtualization, at the Collaborate 2016 conference in Las Vegas. It discusses the inevitability of data virtualization and its many use cases. Here is the slide deck:
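To make the block-level sharing described above concrete, here is a minimal copy-on-write sketch in Python. It is an illustration of the general technique, not Delphix's actual implementation (the class and names are invented for this example): a virtual copy initially references every block of its source, so provisioning it is instant and free, and only the blocks a developer actually rewrites consume new storage.

```python
# Illustrative copy-on-write sketch (not Delphix's implementation):
# a clone shares all blocks with its source until a block is rewritten.

class VirtualCopy:
    def __init__(self, shared_blocks):
        self.shared = shared_blocks   # blocks shared with the source (read-only)
        self.private = {}             # blocks this copy has rewritten

    def read(self, block_no):
        # Prefer this copy's own version of the block; otherwise fall
        # back to the block shared with the source.
        return self.private.get(block_no, self.shared[block_no])

    def write(self, block_no, data):
        # Copy-on-write: only a modified block allocates new storage.
        self.private[block_no] = data

# A "production database" as numbered blocks (tiny stand-ins for 4 KB blocks).
source = {n: f"prod-block-{n}" for n in range(4)}

# Provisioning ten dev/test copies is instant and allocates no block storage.
clones = [VirtualCopy(source) for _ in range(10)]

clones[0].write(2, "dev-change")
print(clones[0].read(2))                     # this clone sees its own write
print(clones[1].read(2))                     # other clones still see the source block
print(sum(len(c.private) for c in clones))   # total extra blocks stored: 1
```

This is the same idea that snapshot/clone features in modern file systems (ZFS-style block sharing) apply at the storage layer, which is why a fresh full-size copy of a production database can appear in seconds rather than weeks.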