Blog

The power of Delphix is speeding up app development

The recent "leap second" added to atomic clocks inspired Woody Evans, Solution Architect at Delphix, to write about the challenges in

The recent "leap second" added to atomic clocks inspired Woody Evans, Solution Architect at Delphix, to write about the challenges in DevOps in finding "the last "concurrent moment" among a group of datasets so that we can restore to the same point in time across them all." It's something that Delphix does brilliantly. Here's an excerpt from: Give me a Second | Data Conjurer

If storing DataSets were free, and reproducing a DataSet could be done instantaneously, wouldn't everybody store every iteration of the DataSet Timeline? Of course: that would make development, testing, operations, and bug fixing run a lot closer to light speed. But storage isn't free... Often, solutions to these problems force a tradeoff. You can go back in time on your dataset, but only once.

Or, you can go back in time or fork your dataset at the cost of keeping another full copy of it. I contend that the best solutions manage the river delta of DataSet Timelines by figuring out how to share them via thin cloning. And further, that Delphix is head and shoulders above anyone else trying to do that.
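To see why thin cloning breaks that tradeoff, here's a toy copy-on-write sketch in Python (purely illustrative; Delphix's actual block-sharing engine is far more sophisticated than this): two clones share one parent snapshot, and each stores only the blocks it changes.

```python
# A toy copy-on-write model (assumed for illustration, not Delphix's
# engine): a thin clone stores only the blocks it changes and reads
# everything else from the shared parent, so a fork costs almost
# nothing until you start writing.

class ThinClone:
    def __init__(self, parent_blocks):
        self.parent = parent_blocks   # shared, read-only history
        self.delta = {}               # only the blocks this clone changed

    def read(self, block_id):
        return self.delta.get(block_id, self.parent.get(block_id))

    def write(self, block_id, data):
        self.delta[block_id] = data   # parent blocks are never touched

base = {0: b"jan", 1: b"feb", 2: b"mar"}   # one shared snapshot
fork_a = ThinClone(base)
fork_b = ThinClone(base)
fork_a.write(1, b"feb-patched")
print(fork_a.read(1), fork_b.read(1))      # b'feb-patched' b'feb'
```

Two full copies would have tripled the storage; here the forks cost one changed block between them.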

Einstein viewed time like a river: it can speed up, it can slow down, it can even fork into two new rivers. That's apt for our data as well. Photo credit: Feral Arts

With Delphix thin cloning, we can actually step into the same river twice. We can reproduce a DataSet exactly as it was then. Moreover, we can reproduce it really fast. So what? Well, it turns out that the more efficient your DataSet "time machine" is, the faster you can deliver an application that depends on that data. Why? Before thin cloning, we made a lot of wall-clock time trade-offs to help us mitigate the cost of storage or the time to restore.
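One way to picture that time machine (a toy model I'm assuming for illustration, not Delphix internals): a timeline is a chain of snapshots linked by parent pointers, so stepping into the river "as it was then" is just walking back the chain, and a fork is a second head on the same shared chain.

```python
# A toy model of forkable timelines (assumed for illustration, not
# Delphix internals): each snapshot keeps a parent pointer, so a whole
# timeline is just a head reference, and a fork is a second head on
# the same shared chain.

class Snapshot:
    def __init__(self, label, parent=None):
        self.label = label
        self.parent = parent

def history(head):
    """Walk a timeline from its head back to its origin."""
    while head is not None:
        yield head.label
        head = head.parent

main = Snapshot("t0")
main = Snapshot("t1", parent=main)
hotfix = main                              # zero-copy fork: a second head
hotfix = Snapshot("t1-fix", parent=hotfix)
main = Snapshot("t2", parent=main)

print(list(history(main)))    # ['t2', 't1', 't0']
print(list(history(hotfix)))  # ['t1-fix', 't1', 't0'] -- history is shared
```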

Consider the trade-offs we used to make. Instead of working on fresh data, we'd work on old data because it took too long to get the right data back. Instead of reusing our old test cases, we'd build new ones because that was easier than fixing all the data our tests had corrupted. Instead of having our own environment, we'd share one with lots of other folks. That wasn't so bad until we had to wait for this team, then wait for that team.

And, all the while, those little errors that continually crop up because my code isn't compatible with your code get bigger and bigger. The accumulation of all those little errors we let slip by is what DevOps calls technical debt. You can't be successful in a DevOps/Agile paradigm if you're operating this way. DevOps needs faster data.

With Delphix, we can step into the same river twice, and we can do it at powerboat speed. Being able to relate one clock to another and reproduce a point in time not just for a single dataset but for a whole group of datasets lets you freeze time, fork new timelines, and move back and forth between them at will. Being able to accept changes to a timeline in floods but reproduce them in trickles means large synchronization efforts become push-button tasks instead of projects. Delphix gives you access to every timeline, and you can synchronize them all at will.
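To make "the last concurrent moment" concrete, here's a minimal sketch (the dataset names, and the assumption that each dataset can be restored to any point up to its most recent sync, are invented for illustration; this is not the Delphix API):

```python
from datetime import datetime

# Assumption for this sketch: each dataset can be restored to any point
# in time up to its most recent sync. The last concurrent moment is then
# the earliest of those latest points -- the newest instant that *every*
# dataset in the group can still reach.
latest_restorable = {
    "orders_db":    datetime(2015, 6, 30, 23, 59, 57),
    "customers_db": datetime(2015, 6, 30, 23, 59, 59),
    "billing_db":   datetime(2015, 6, 30, 23, 59, 58),
}

last_concurrent_moment = min(latest_restorable.values())
print(last_concurrent_moment)  # 2015-06-30 23:59:57 -- restore all three here
```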

By using shared storage for shared history, you can radically speed up your app provisioning, even at scale and even when LOTS of different datasets make up your app. Through massive block sharing and EASY timeline sharing, a tester can find a bug in the data, "save" the timeline for someone else, and then go back to a previous point on their own timeline and keep working, without having to wait for someone to "come and see my screen."
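Here's a hypothetical sketch of that hand-off (the class and method names are invented, not a real Delphix interface): the tester bookmarks the broken state, rewinds, and keeps working, and the fixer later clones their own copy from the bookmark.

```python
# Hypothetical Finder/Fixer hand-off (invented names, not a real Delphix
# interface): bookmarking saves a pointer to the current state, rewinding
# moves the tester's head, and cloning gives the fixer a private copy.

class Timeline:
    def __init__(self, head="baseline"):
        self.head = head
        self.bookmarks = {}

    def bookmark(self, name):
        self.bookmarks[name] = self.head        # "save" this exact state

    def rewind(self, to):
        self.head = to                          # tester keeps working

    def clone_from(self, name):
        return Timeline(head=self.bookmarks[name])  # fixer's own fork

qa = Timeline()
qa.head = "state-with-bug-1234"
qa.bookmark("bug-1234")            # save the timeline for the fixer...
qa.rewind("baseline")              # ...and go straight back to testing
fixer = qa.clone_from("bug-1234")
print(qa.head, fixer.head)         # baseline state-with-bug-1234
```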

The Finders don't have to wait for the Fixers anymore. That means the cost to get capability out the door goes down A LOT. The gain isn't single-digit; it's more like 50% faster delivery. That kind of ROI will, quite simply, blow your socks off.

Read more at: Give me a Second | Data Conjurer