Launching 5.2 and a Quick Look Back at 2017
Over the course of a few blogs, I’m going to walk through a handful of features to give a little more context about the significance of some of the ways that we are bringing a new dimension to managing enterprise data.
To start, I’d like to look back at 2017 and cover some of the major changes to our platform!
Data Privacy and Security: A few years back, masking was little more than a hobby for us. Now, with the rise of data breaches, PII awareness, and regulations like GDPR, we see nearly every one of our customers implementing data masking. 2017 was the year our data masking solution really took off, and the product has matured in a number of key ways.
- Focused on simplicity: We removed a ton of unused or little-used features and introduced a wizard that makes masking trivial for beginners and faster for experienced users. We're committed to making masking ever easier, as we know usability is top of mind for our users!
- Introduced new data connectors: We added a mix of new and established data sources, based on where our customers' sensitive data lives. These include support for AWS RDS Oracle, SQL Server 2016, DB2 on z/OS, DB2 on iSeries, and VSAM.
- Focused on speed: Our customers are not only masking more datasets, but the datasets themselves are growing, and we continue to focus on streamlining the process. We introduced faster profiling (discovery of sensitive content) and faster masking to keep up.
- Moved to a distributed architecture: As mentioned above, our customers are masking more data, in larger volumes, and across a wider variety of locations. We've taken our first steps toward fast, distributed masking. These improvements make it easy for our customers to mask data in complex, distributed environments at massive scale. The last blog in this series will dig into this topic in more detail.
Platform Usability: As we add support for an increasing variety and volume of data, we realize that the management experience and functionality need to evolve as well. To support this, we introduced:
- Support for public clouds: Delphix now runs in AWS and Microsoft Azure. The dominant use case is replicating masked data from on-premises environments and running dev/test in the cloud, leveraging the cost savings and elasticity of on-demand IaaS plus Delphix.
- Brand-new UI: As we've matured from a single-use-case application into a broad data platform, our user experience needed a refresh as well. We introduced a new UI and eliminated all use of Flash.
- Self service: We've worked hand in hand with our users to continue reinventing how developers access and manage data in a self-service fashion. We introduced several new features supporting very large datasets, team-level control, and more options for using historical data.
Data Connectors: Our data platform is only as good as the data sources it supports, and 2017 was a big year for us. We introduced support for Oracle 12c multitenancy, SQL Server 2016, DB2 BLU, and SAP HANA. For an up-to-date list of the data sources we support for virtualization and masking, watch this page.
It’s been a busy year, but I expect dramatically more in 2018! Thanks to everyone involved for a great 2017.
Up next: what it meant to be living with Adobe Flash in 2017, and how we got rid of it.