Spring 2016 EKO Hackathon report
Building on the success of the Fall 2015 EKO Hackathon, I was privileged to help organize the Spring 2016 EKO Hackathon. This time we kept the same event structure but conducted the bulk of the hacking and the presentations at the offsite EKO location. We hit some of the inevitable Wi-Fi challenges that offsite events bring, but in the end it was a successful event with a lot of great work done by the hackers. As with the last hackathon, we made it a point to invite folks from outside engineering, and had healthy representation from groups like support, services, education, and BTC in the final list of 44 hacks. And now, without further ado, I'd like to present the winners of the hackathon and their own descriptions of their projects.
Judging Panel awards
These were selected by a judging panel of leaders from across the company:
ESX host statistics collection without vSphere credentials by Will "Padre" Guyette, Seb "Patrick" Roy, Matt "Not Ahrens" Amdur
VMware provides a number of statistics that are very useful when debugging performance problems on a Delphix Engine. Today these statistics are only available by logging into vSphere or the ESX host, something we can't do ourselves and something the typical Delphix admin doesn't have permission to do. In this hack we extended our integration with VMware Tools to pull out information about the physical configuration of the ESX host the engine is running on, as well as statistics related to the engine's %RDY time (the time the engine wants to be on CPU but isn't). This data is collected from within the Delphix Engine without requiring any credentials to authenticate to vSphere.
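As a side note on the %RDY figure mentioned above: VMware reports ready time as a summation counter, in milliseconds accumulated over each sample interval. A common conversion to a percentage (this helper is illustrative, not the hack's actual code) looks like:

```python
def rdy_percent(ready_ms: float, interval_s: float, vcpus: int = 1) -> float:
    """Convert a CPU-ready summation counter (milliseconds of ready time
    accumulated over one sample interval) into an average %RDY per vCPU."""
    return ready_ms / (interval_s * 1000.0 * vcpus) * 100.0

# e.g. 4000 ms of ready time over a 20 s realtime sample on 4 vCPUs
# averages out to 5% ready time per vCPU:
avg_rdy = rdy_percent(4000, 20, vcpus=4)
```

A sustained %RDY in the double digits is the classic sign that the VM wants CPU the host can't deliver.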
Speeding up OS compilation times by John Kennedy, Paul Dagnelie
We used ccache to cache object files created during a full build of the OS. Subsequent full builds can use these object files when the sources are unmodified to skip the compilation step. We measured roughly a 2x improvement in the compilation phase of a full nightly build.
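A typical ccache setup looks something like the following; the compiler names and cache size here are illustrative, not the exact values from our build.

```shell
# Point the build's compilers at ccache (illustrative; the real build
# may wire this up through its own makefiles).
export CC="ccache gcc"
export CXX="ccache g++"

# Raise the cache size so the object files from a full OS build fit.
ccache --max-size=20G

# After a rebuild, inspect hit rates to confirm the cache is working.
ccache --show-stats
```

The 2x number applies to the compilation phase only; linking and the non-compile portions of the nightly build are unaffected.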
Producer-Consumer model for Upgrade testing setup by Prachetaa Raghavan, Derek Tzeng, Emmanuel Morales, Sumedh Bala
Upgrade testing currently takes about 7 hours to finish and is likely to take longer as we add more testing in the future. The solution was to take advantage of the 3 phases of upgrade testing:
- Pre-setup, which takes about 2 hours
- Upgrade, which takes about 1 hour
- Post-upgrade, which takes about 4 hours
The pre-setup phase does not depend on the code under development. Most of the bugs show up in the upgrade phase, and we would like to get to that phase as soon as possible. Since the pre-setup is common across developers, we run a few instances of it and store the resulting configurations in a database accessible by git-utils. When a developer wants to test upgrade, git-utils kicks off automation on one of the preconfigured setups, and a separate process replenishes the database with new setups.
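The pipeline above can be sketched as a classic producer-consumer: a thread-safe queue stands in for the setup database, and the helper names and pool size are hypothetical.

```python
import queue
import threading

# Minimal producer-consumer sketch. In the real system git-utils hands
# out setups recorded in a database; a queue stands in for that here.
POOL_TARGET = 3  # number of pre-built setups to keep on hand (illustrative)

setups: "queue.Queue[dict]" = queue.Queue(maxsize=POOL_TARGET)

def run_presetup(n: int) -> dict:
    """Stand-in for the ~2 hour pre-setup phase; returns a setup config."""
    return {"setup_id": n, "state": "presetup-complete"}

def producer() -> None:
    """Replenish the pool with freshly pre-configured setups."""
    for n in range(POOL_TARGET):
        setups.put(run_presetup(n))  # blocks once the pool is full

def consumer() -> dict:
    """A developer requesting an upgrade test grabs a ready setup,
    skipping straight to the upgrade phase."""
    return setups.get()

t = threading.Thread(target=producer)
t.start()
config = consumer()  # returns as soon as a setup is ready
t.join()
```

The payoff is that the ~2 hour pre-setup cost is paid in the background rather than on the developer's critical path.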
Resumable Masking by Manoj Joseph, Abdullah Mourad
Today, if a masking job fails, provisioning the VDB fails and the admin has to restart the entire masking process. If the masking jobs take a long time, the time spent redoing the masking can be significant. This project addresses the problem by taking snapshots between masking jobs and restarting just the failed ones.
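A minimal sketch of the resume logic, with hypothetical job and snapshot helpers (the real implementation snapshots the VDB itself between jobs):

```python
def run_masking(jobs, snapshots, run_job):
    """Run masking jobs in order, snapshotting after each success.
    On a retry, jobs already covered by a snapshot are skipped."""
    start = len(snapshots)                    # jobs already completed
    for i in range(start, len(jobs)):
        if not run_job(jobs[i]):
            return False                      # fail; snapshots mark progress
        snapshots.append(f"after-{jobs[i]}")  # snapshot between jobs
    return True

# First attempt: job "b" fails, so only "a" is snapshotted.
snaps = []
ok = run_masking(["a", "b", "c"], snaps, lambda j: j != "b")
# Retry once "b" is fixed: resumes at "b", never re-running "a".
ok = run_masking(["a", "b", "c"], snaps, lambda j: True)
```

The snapshot list doubles as the progress record, which is what lets the retry skip work that already succeeded.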
Audience Choice Award
The most coveted prize at each hackathon is the Audience Choice award, as the winner gets to be a judge for the next hackathon. For the first time we had a tie between two very different hacks:
Greenhouse Data Mining by Ardra Hren, Elizabeth Liu, Kevin Greene & Nav Ravindranath
We have a lot of interviewing data from Greenhouse but haven't yet used it to improve our recruiting process. For our hackathon project we took a first stab at analyzing some of this data, examining the correlation between interview traits and hiring decisions as well as the variance between interviewers. We're looking forward to leveraging this data to make our recruiting process as fair and effective as possible.
Using blackbox to reproduce bugs through plugin configs by Zachary Reese-Whiting & Eric Muller
The existing blackbox plugin for rerunning test cases, combined with test states that describe bug conditions, makes it possible to run tests until a difficult-to-reproduce bug is triggered, snapshot the VMs at the point of failure, and stop the automation run. This project exposes that functionality through the blackbox GUI, which lets us attach these configurations to bugs so engineers can use blackbox to reproduce difficult-to-trigger issues without manually rerunning tests.
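The run-until-failure loop can be sketched generically; `run_test`, `trigger_state`, and `snapshot_vms` below are hypothetical stand-ins for blackbox's actual hooks.

```python
def hunt_bug(run_test, trigger_state, snapshot_vms, max_runs=1000):
    """Rerun a test until the bug condition appears, then snapshot and stop."""
    for attempt in range(1, max_runs + 1):
        result = run_test()
        if trigger_state(result):
            snapshot_vms()        # capture the VMs at the point of failure
            return attempt        # stop the automation run
    return None                   # bug never reproduced within the budget

# Example: a deterministic stand-in that "fails" on the fourth run.
results = iter([0, 0, 0, 1])
attempt = hunt_bug(
    run_test=lambda: next(results),
    trigger_state=lambda r: r == 1,
    snapshot_vms=lambda: None,
)
# attempt == 4
```

The key design point is stopping at the first trigger: the VM snapshot preserves the rare failure state so an engineer can debug it later instead of hoping to hit it again.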