Blog

What’s wrong with Test Data Management? Friction. And lots of it.

Market share, profitability, even business survival can be a function of feature deployment frequency. And your competitors are speeding up.

Market share, profitability, even business survival can be a function of feature deployment frequency. And your competitors are speeding up. The best companies are deploying 30x faster, delivery times are dropping by as much as 83%, and unicorns like Amazon are now deploying new software every second. But with data expected to grow to 44 zettabytes by 2020, all of our work to reduce coding friction and speed up application delivery will be for naught if we can't reduce the friction in getting the right data to test it.

Companies face constant tension with test data:

  • Feature delivery can take a hard shift right as errors pile up from stale data or as rework creeps in because new data breaks the test suite. Why is the data out of date? Most companies can't provision multi-terabyte test datasets in anything close to the time it takes them to build their code. For example, 30% of companies take more than a day, and 10% take more than a week, to provision new databases.

  • To solve the pain of provisioning large test datasets, test leaders often turn to subsetting to save storage and speed up test execution. Unfortunately, poorly crafted subsets are rife with mismatches because they fail to maintain referential integrity (see the first sketch after this list), and they often produce hard-to-diagnose performance errors that surface much later in the release cycle. Solving these subset integrity issues usually means employing many experts to write seemingly endless rulesets to keep integrity problems from fouling up testing. Even then, it's rare to find any mitigation for the performance bugs that subsetting will miss.

  • It's worse with federated applications. Testers are often at the mercy of an application owner, a backup schedule, or a resource constraint that forces them to gather each copy of the dataset at a different time. These time differences create consistency problems the tester has to solve, because without strict consistency, distributed referential integrity problems can scale up factorially. That leads to solutions with even more complex rulesets and time logic. Compounding federation with subsetting can mean a whole new world of hurt, because the subset rules must be kept consistent across the federated app.

  • Synthetic data can be essential for generating test data that doesn't exist anywhere else. But when synthetic data is used as a band-aid to make a subset "complete", we reintroduce the drawbacks of subsets: the synthetic data must cover the gaps where production data doesn't exist while also maintaining integrity across both the generated and the subset data. Marrying synthetic data and subsets can introduce new and unnecessary complexity.

  • Protecting your data introduces more speed issues. Teams that mask test data typically can't deliver masked data fast enough, or often enough, to developers, so they are forced into a tradeoff between risk and speed, and avoiding exposure usually wins when that decision is made. As a Gartner analyst quipped: 80% of the problem in masking is the distribution of masked data. Moreover, masking has its own rules, which generally differ from subsetting rules (see the second sketch after this list).

  • Environment availability also keeps the right data from reaching the right place just in time. Many testers share a limited number of environments, so platforms get overloaded with work streams, and the resulting sharing and serialization force delays, rework, and throwaway work. Some testers wait until an environment is ready; others write new test cases rather than wait; and still others write test cases they know will be thrown away.

  • Compounding this problem, platforms that could be repurposed as test-ready environments are fenced in by context-switching costs. Testers know the high price of a context switch, and the real possibility that switching back will fail, so they simply hold their environment for "testing" rather than risk it. These context-switching behaviors create more serialization and more subsetting, and (ironically), by "optimizing" their own part of the product/feature delivery pipeline, testers end up feeding one of the bottlenecks that keeps the whole pipeline from moving faster.

  • Reproducing defects can also slow down deployment. Developers frequently complain that they can't reproduce the defect a tester has found. This often brings the testing critical path to a full halt, as the tester must "hold" her environment so the developer can examine it. In some cases, whole datasets are held hostage while triage occurs.

  • All of these problems roll up into a tester's most basic need: to restart and repeat her test with the right data. Repeating the work to restore an app (or worse, a federated app), synchronize it, subset it, mask it, and distribute it scales the entire testing burden with the number of test runs. That's manageable within a single app, but it quickly grows unwieldy at the scale of a federated app.
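
To make the subsetting point concrete, here is a minimal sketch in Python of how independently sampled subsets lose referential integrity. The customers/orders schema, the 10% sampling rate, and the "pull in the missing parents" rule are all hypothetical, chosen only to illustrate the failure mode and the kind of ruleset it forces teams to write.

    import random

    # Hypothetical parent/child tables: every order references a customer.
    customers = [{"id": i} for i in range(1000)]
    orders = [{"id": i, "customer_id": random.randrange(1000)} for i in range(5000)]

    # Naive subsetting: sample each table independently to save storage.
    customer_subset = random.sample(customers, 100)  # 10% of customers
    order_subset = random.sample(orders, 500)        # 10% of orders

    # Referential-integrity check: orders whose customer didn't make the cut.
    kept_customer_ids = {c["id"] for c in customer_subset}
    orphans = [o for o in order_subset if o["customer_id"] not in kept_customer_ids]
    print(f"{len(orphans)} of {len(order_subset)} sampled orders are orphaned")

    # One "ruleset" repair: walk the foreign key and pull in every referenced parent.
    # Every additional relationship in a real schema needs another rule like this,
    # which is where the seemingly endless rulesets come from.
    referenced_ids = {o["customer_id"] for o in order_subset}
    repaired_customers = [c for c in customers if c["id"] in referenced_ids]

Note that even the repaired subset says nothing about production-scale data volumes, which is why the performance bugs that subsetting hides tend to survive into later release stages.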
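
On the masking point, here is a second minimal sketch, again in Python, of one common approach: deterministic masking with a keyed hash, where the same real value always maps to the same masked value so joins across tables and copies still line up after masking. The field name, key, and output format are illustrative assumptions, not any particular product's rules.

    import hashlib
    import hmac

    MASKING_KEY = b"illustrative-key-only"  # assumption: a per-project secret

    def mask_email(email: str) -> str:
        # Deterministic masking: identical inputs yield identical masked outputs,
        # which keeps joins on masked columns intact across tables and datasets.
        digest = hmac.new(MASKING_KEY, email.lower().encode(), hashlib.sha256).hexdigest()
        return f"user_{digest[:10]}@example.com"

    # The same email masked in two different tables produces the same value.
    assert mask_email("alice@corp.com") == mask_email("ALICE@corp.com")
    print(mask_email("alice@corp.com"))

These masking rules live alongside, and must be kept consistent with, whatever subsetting rules apply to the same tables, which is one reason the two disciplines compound each other's complexity. And none of it helps if the masked copies still can't be distributed to developers quickly.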

Blazing fast code deployment doesn't solve the test data bottleneck. Provisioning speed, data freshness, data completeness, synchronicity and consistency within and among datasets, distribution speed, resource availability, reproducibility, and repeatability all stretch out the time between deployments. Why is all this happening? Your test data is Not Agile.

How do you get to Agile Test Data Management?  One word: Delphix.