
Episode 4: Making Data as Agile as Code for DevOps

Marc Hornbeek, also known as DevOps_the_Gray, joins host Sanjeev Sharma to talk about what it means to automate the data workflow to improve development velocity and time to market, as well as big-picture thinking to grow a career in IT.

Sanjeev Sharma: Hello everyone, this is your host Sanjeev Sharma and welcome to the Data Company Podcast. I have a very special guest today, Marc Hornbeek, also known as DevOps_the_Gray. Marc, welcome to the podcast.

Marc Hornbeek: Thank you for inviting me to this podcast. I'm primarily a DevOps consultant. I help companies with assessments. Most recently, I've been focusing on writing a book that reframes DevOps as an engineering discipline with a step-by-step engineering solution. That's going to be published in October of this year, I hope.

Sanjeev Sharma: Excellent. So Marc, the focus of this podcast is on the data aspect of transformation. As you're working with clients, what kind of data-related challenges do you see them experience as they're adopting DevOps?

Marc Hornbeek: I've worked with a lot of different types of clients: enterprise, manufacturing. But the data aspect of DevOps is almost always an underserved area and a bottleneck in implementing what I call well-engineered DevOps. It's often an afterthought, and when teams start thinking about how to integrate data changes together with code changes, there's a disconnect. The folks dealing with the database, for example, are separate from the application designers, and they're not coordinating all that well. So this is a bottleneck when you're trying to get a release out with the agility of DevOps.

There are other aspects of data that I run into a lot, beyond just the database that goes with the application. If you look at the broader scope of DevOps, there's the data associated with infrastructure as code, the data associated with the pipeline itself, the data associated with pretty much everything it takes to roll an application forward or roll it back in a pipeline, whether you're trying to make a release or correct something that's been released that shouldn't have been.
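
To make the roll-forward/roll-back idea concrete, here is a minimal sketch of an expand/contract-style database change paired with explicit upgrade and downgrade steps and a small round-trip test. The migration functions, table, and column names are hypothetical illustrations, not the API of any particular migration tool.

```python
# A minimal sketch, not any specific tool's API: an expand/contract-style schema
# change with explicit roll-forward and roll-back steps, so the database change
# can ship and be reverted alongside the application change. Table and column
# names are hypothetical.
import sqlite3


def upgrade(conn: sqlite3.Connection) -> None:
    """Roll forward: add the column the next app version writes to, then backfill."""
    conn.execute(
        "ALTER TABLE orders ADD COLUMN shipping_status TEXT DEFAULT 'unknown'"
    )
    # Backfill from the old representation so old and new code both keep working.
    conn.execute("UPDATE orders SET shipping_status = 'shipped' WHERE shipped = 1")
    conn.commit()


def downgrade(conn: sqlite3.Connection) -> None:
    """Roll back: re-sync the old column from the new one. The extra column is
    left in place; the previous app version never reads it, so it is harmless."""
    conn.execute("UPDATE orders SET shipped = (shipping_status = 'shipped')")
    conn.commit()


def test_round_trip() -> None:
    """Check that rolling forward and back preserves rows written in between."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, shipped INTEGER)")
    conn.execute("INSERT INTO orders (shipped) VALUES (1), (0)")
    upgrade(conn)
    # A transaction that happens while the new application version is live.
    conn.execute("INSERT INTO orders (shipped, shipping_status) VALUES (0, 'shipped')")
    downgrade(conn)
    rows = conn.execute("SELECT shipped FROM orders ORDER BY id").fetchall()
    assert [r[0] for r in rows] == [1, 0, 1]


if __name__ == "__main__":
    test_round_trip()
    print("roll-forward/roll-back round trip preserved the data")
```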

Sanjeev Sharma: A couple of very interesting things you mentioned there. First of all, going to the application space itself, I call the data layer the last silo. DevOps was so focused on integrating all the silos, but they seem to have forgotten the DBAs. They seem to have forgotten the data architects, and we realize now, once DevOps adoption starts maturing, how critical they are. And of course, right from the beginning, data coming from production, the monitoring and the logging, has been a part of DevOps. It's a very critical part of DevOps, which not every company is very good at.

But the last thing you said: how do you manage data when you're rolling forward, or especially rolling back? How do you handle the transactions that might have already happened when you have to roll back? How do you handle the schema changes? How do you test for that? Everybody says they can do it, but when I ask them, "Have you tested this?" it turns out it's very difficult to test. So can you elaborate a little bit on that aspect?

Marc Hornbeek: Well, I actually have in the book something called the DevOps Cloud Blueprint, which is a very big picture of DevOps, and it has layers. The top layer is all about management of things, and that involves value stream management, application release automation, and governance and security. These are the higher-level things that consume data and have to do with analysis, synthesis, and reaction to data. Then there are the lower levels, which are really generating data. They're events and activities that are generating data, and testing is part of that. I often refer to it as continuous assessment, because I don't think anybody really cares about testing. What they really want to know is whether the changes are working.

Sanjeev Sharma: I've heard it referred to as continuous maturation.

Marc Hornbeek: Exactly. Same idea. So there's an awful lot of data being generated there. A lot of the time, it's not being gathered in a way that can be processed consistently, and it's certainly not normalized. But when it is, there's a wealth of information that can be gleaned if it's gathered together and then provided in a way that can be analyzed by the upper levels of the blueprint to do a lot of cool things. First of all, you can figure out where the change is across the whole federated set of things, whether it's the applications, the pipelines, or different parts of, let's say, an app that's broken into microservices, to get a bigger picture of what's going on as changes percolate through the pipeline.

Then you're reacting to what you see in that data, looking for trends that are potentially interesting, to identify further improvements and allow continuous improvement. So I look at it as stratified: testing is part of the lower level that provides information to an upper level, which gathers and analyzes across a federated set of applications or services and can then apply some intelligence to drive an ongoing improvement cycle as you try to continuously improve the whole DevOps environment for the application.
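
To make that concrete, here is a minimal sketch of what gathering and normalizing pipeline data might look like: two hypothetical tool-specific records mapped into a single common event shape, with one simple trend (failure rate per stage) computed on top. The input formats and field names are illustrative assumptions, not any real tool's export.

```python
# Sketch: normalizing events from different pipeline tools into one common record
# so an upper "management" layer can analyze them together. The input shapes and
# field names are hypothetical stand-ins, not any real tool's export format.
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class PipelineEvent:
    service: str        # which app or microservice the event belongs to
    stage: str          # e.g. "build", "test", "deploy"
    passed: bool
    duration_s: float
    timestamp: datetime


def from_ci_build(raw: dict) -> PipelineEvent:
    """Normalize a hypothetical CI build result."""
    return PipelineEvent(
        service=raw["project"],
        stage="build",
        passed=raw["status"] == "SUCCESS",
        duration_s=raw["durationMillis"] / 1000.0,
        timestamp=datetime.fromtimestamp(raw["endedAt"], tz=timezone.utc),
    )


def from_test_report(raw: dict) -> PipelineEvent:
    """Normalize a hypothetical test-suite summary."""
    return PipelineEvent(
        service=raw["suite"].split("/")[0],
        stage="test",
        passed=raw["failures"] == 0,
        duration_s=raw["elapsed"],
        timestamp=datetime.fromisoformat(raw["finished"]),
    )


def failure_rate(events: list[PipelineEvent], stage: str) -> float:
    """One trend the upper layer might track: failure rate per stage."""
    relevant = [e for e in events if e.stage == stage]
    if not relevant:
        return 0.0
    return sum(not e.passed for e in relevant) / len(relevant)


if __name__ == "__main__":
    events = [
        from_ci_build({"project": "orders", "status": "SUCCESS",
                       "durationMillis": 84000, "endedAt": 1_700_000_000}),
        from_test_report({"suite": "orders/integration", "failures": 2,
                          "elapsed": 310.5, "finished": "2023-11-14T22:15:00+00:00"}),
    ]
    print(f"test failure rate: {failure_rate(events, 'test'):.0%}")
```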

Sanjeev Sharma: So looking at your application delivery pipeline and all the data it generates, all the metrics, whether it's test results or builds that passed or failed, there have been several attempts to analyze and collate that, or even just provide a dashboard. I know Capital One had that open source project called Hygieia, which did pretty well. In your blueprint, is that the data you're talking about, the data captured by the highest level, related to how your pipeline is performing and what it's capturing?

Marc Hornbeek: So at the highest level, I talk about people, process, and tech. At the lower level, the analogy is applications, pipelines, and infrastructure. The source of the data is the application being used, the pipelines being processed, and the infrastructure operating. Those are the sources of the data, and they feed into the upper levels of people, process, and tech.

Sanjeev Sharma: DevOps has been around now for almost a decade, right? The name has. We've been talking about continuous improvement, continuously learning from what one is building and deploying and running, which is really what you're talking about also. Are companies really succeeding in doing that? Do you believe they have truly reached that level of maturity where there is continuous learning and continuous improvement as a result?

Marc Hornbeek: Frankly, I think most enterprises are just coming to grips with their goals for DevOps, let alone getting that far. If you think about a maturity model for DevOps, I sort of see it as five levels. There's chaos. Then there's getting the front end working, the CI. And then you get into something more like Gene Kim's three ways: continuous flow, getting flow working end to end; then feedback, where there's a lot of data; and then improvement, which is making use of the data to make ongoing improvements. A lot of the enterprises I've dealt with are really back at the chaos level still, but they're aspiring to get to level five. Common practice is one thing; the state of the art is another. There are state-of-the-art organizations that are at that level, but very few, frankly.

Sanjeev Sharma: Right. Even in large enterprises, you'll always find pockets. But when you try to scale across an enterprise, that's where things fall apart. Even flow is difficult to achieve just because of the sheer interdependencies between applications, which are usually very poorly understood, so they cannot be well implemented, right? You try to pull one thread, and you realize there's a whole gob of noodles hanging off it. I think one thing DevOps has done is shown people the promised land, the holy grail. But how to get there will vary.

I believe in two things: A, what is your starting point? And B, do you have the right people to get you there? Sometimes you have the right people but not the right organizational structures, which can impede the ability to get there even with great leaders.

Marc Hornbeek: Or you might have too much of something. I talk about balance. You might have technology that you've overinvested in, but you haven't invested enough in the people or the culture or maybe the process, and maybe you should shift some of that to the weak spots in order to get a balanced solution for whatever your target is in the journey. That's true for the data as well as everything else, frankly.

Sanjeev Sharma: Oh, absolutely, and you know, the pendulum swings both ways from an organizational structure perspective or cultural perspective. To me, that's the most difficult one to change. You can overinvest in technology because it's easy to do. What are you seeing? Are companies succeeding in transforming culture or are there any success patterns?

Marc Hornbeek: What I have found works, and unfortunately it's not the typical case, is that you've got to start with getting alignment among the top leadership. They've got to have some consistent goals, because I find if you don't get that, you're kind of pushing on a rope.

Sanjeev Sharma: Agreed. We're coming up on time, Marc, and this has been a really interesting conversation, but to wrap it up I want to change gears. Quite frankly, I eagerly wait to get through the data part of my conversations so I can get to this part. If somebody is asking for mentorship on how to put together their career, how to get started down a path, what would you say to them?

Marc Hornbeek: So this is something I actually have a strong interest in, because I do mentor quite a few people, and I make it pretty clear on LinkedIn and other places that I'm available for that. That includes one of my own sons, who was a psychology major and decided to become a DevOps guy. First of all, don't overwork or overthink a particular tool. Yes, you're going to have to learn some tools to get your hands on something real, but start by trying to get a good grasp at the best-practice level, the foundational level. I'm probably a little biased because I'm an author for the DevOps Institute, and they really focus on that level, trying to give a good education about the principles and practices of DevOps.

But then pick an area that you want to start with, because DevOps is so vast. Really, I talk about 27 factors to make DevOps work, which sounds complex, but anyway, pick an area. Let's say they decide they want to work on continuous delivery. Then, sure, learn the tools associated with that. Maybe it's Jenkins and Kubernetes and containers with Docker, or something else like Pivotal. Pick three or four tools, learn those, and maybe couple that with some certification classes just to get some recognition. But other than that, it's just practice and trying to really get a good grounding in the big picture. Once you've got that big picture in your head, then you know what the context is.

Sanjeev Sharma: Absolutely. I think that is sage advice because most people get so distracted by the details that they miss the principles. The tools and the details and the individual technologies are just methods of implementing some other principle. I mean, Kubernetes has its own way of implementing orchestration of processes and services, which just happen to be running in containers, for example. The same way with deployment automation tools, right? They're all various methods to do it.

I love that style of thinking, and I think this is great advice for anybody who listens to this. Perfect, perfect, thank you. Well, Marc, this has been an excellent conversation. We are sitting in a hotel in Bangalore, India, at the DevOps India Summit, and here we are, halfway around the world. So I really appreciate your taking the time and sharing your words of wisdom. Thank you so much.

Marc Hornbeek: Perfect.