I’ve been mulling over something for a little while. Ever since the Equifax hack, there has been a swarm of companies in the DevOps space coming out and speaking up in favor of application security. Others blow it off by saying things like, ‘nobody really knows what happened’. I like that they are joining the conversation, but I can’t help but question the motives. Not to worry. This isn’t a finger-pointing exercise. Allow me to explain.


Before Equifax, none of them talked about application security (with few exceptions). In fact, if they did discuss security, it was an afterthought at best, an opportunity to climb on a soapbox about how much “security needs DevOps,” or another chance to say that security is a huge inhibitor to their efforts at keeping (or increasing) speed. The usual tirade touted the benefits of getting products to market faster and enhancing their technology to help companies deploy every five seconds rather than every five minutes – “better, faster, more” – the usual status quo. While we all want products to reach our customers as quickly as possible, I find it interesting that security wasn’t part of their rubric until now. It’s one thing to join the space and contribute to its advancement. It’s entirely different to jump on a trend for more exposure.


The unfortunate reality is security doesn’t matter until, suddenly, it does matter. And it usually takes a breach to make it important across multiple technology ecosystems. In my (not so humble) opinion, DevOps platform companies simply saw an opportunity to join the hype bandwagon, yet don’t offer actual process, organizational or technology solutions to help the problem of insecure code being released into production.


DevOps “Metrics”


One of the most recognized reports in the DevOps space is the State of DevOps Report from DORA and Puppet Labs. Here are some statistics from their 2017 report about high-performing organizations (a sketch of how such metrics could be computed follows the list):

- 46 times more frequent deployments
- 440 times faster lead time from commit to deploy
- 96 times faster mean time to recover from downtime
- 5 times lower change failure rate (changes are 1/5 as likely to fail)
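To put these measures in concrete terms, here is a minimal sketch (in Python, with invented records and an invented field layout; none of this comes from the report) of how a team might compute the same four metrics from its own deployment history:

```python
from datetime import datetime, timedelta

# Hypothetical deployment records: (commit time, deploy time, deploy failed?,
# minutes to restore service). The layout is an assumption for illustration.
deploys = [
    (datetime(2017, 9, 1, 9, 0),  datetime(2017, 9, 1, 9, 45),  False, 0),
    (datetime(2017, 9, 1, 14, 0), datetime(2017, 9, 1, 15, 10), True,  30),
    (datetime(2017, 9, 2, 10, 0), datetime(2017, 9, 2, 10, 25), False, 0),
]
days_observed = 2

# Deployment frequency: deploys per day.
frequency = len(deploys) / days_observed

# Lead time: mean time from commit to deploy.
lead_time = sum((d - c for c, d, _, _ in deploys), timedelta()) / len(deploys)

# Change failure rate: share of deploys that caused a failure.
failure_rate = sum(1 for _, _, failed, _ in deploys if failed) / len(deploys)

# Mean time to recover (MTTR), averaged over failed deploys only.
restore_times = [mins for _, _, failed, mins in deploys if failed]
mttr = sum(restore_times) / len(restore_times) if restore_times else 0.0

print(f"{frequency:.1f} deploys/day, lead time {lead_time}, "
      f"failure rate {failure_rate:.0%}, MTTR {mttr:.0f} min")
```

Notice what the sketch makes plain: every one of these numbers measures speed or recovery. None of them says anything about whether the code being deployed is secure.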


The Puppet Report also describes the technical practices that “high performing” organizations employ – version control, continuous integration, trunk-based development, and automation. They do, in fact, include integrating security into software delivery among their “factors that positively contribute to continuous delivery.” Finally, a mention of security.
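What might integrating security into delivery actually look like? Here is a minimal sketch of a pipeline gate, assuming a hypothetical scanner command (“scan-tool”) that emits JSON findings; the command, flags, output shape, and severity policy are all illustrative assumptions, not anything from the report:

```python
import json
import subprocess
import sys

# Illustrative policy: findings at these severities block the release.
BLOCKING_SEVERITIES = {"critical", "high"}

def run_scanner():
    """Run a hypothetical security scanner that prints JSON findings.

    "scan-tool" stands in for whatever SAST or dependency checker a team
    actually uses; the flag and output format are assumptions.
    """
    result = subprocess.run(
        ["scan-tool", "--format", "json"], capture_output=True, text=True
    )
    return json.loads(result.stdout or "[]")

def gate(findings):
    """Return a nonzero exit code (failing the CI stage) on blocking findings."""
    blocking = [f for f in findings if f.get("severity") in BLOCKING_SEVERITIES]
    for f in blocking:
        print(f"BLOCKED: {f.get('id')} ({f.get('severity')}): {f.get('title')}")
    return 1 if blocking else 0

if __name__ == "__main__":
    sys.exit(gate(run_scanner()))
```

A gate like this is the kind of concrete process-plus-technology answer I’d want from platform vendors who say they care about security: the pipeline stays fast, but an insecure build doesn’t reach production by default.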


They also recognize that software quality and security are sometimes difficult to measure, so they used “unplanned work” and “rework” as benchmarks. In the 2016 report, the data tell us that “high performers spend 50 percent less time remediating security issues.”[1] The 2017 data, however, show “21 percent less time spent on unplanned work and 44 percent more time on new work.”[2]


Smoke and Mirrors


Treating “unplanned work” or “rework” as a proxy for security is a bit misleading. What kind of work are they referring to? It could be new features or code, as they state, or it could mean fixing problems that were in the code in the first place. The report doesn’t make the distinction.


Gene Kim, a pioneer in the DevOps movement and author of The Phoenix Project, says, “…the high performers were massively outperforming their non-high-performing peers. Thirty times more frequent code deployments. They were able to complete the deployments from code committed to running in production 8,000 times faster. In other words, they could do a deployment measured in minutes or hours, whereas for low performers it took weeks, months, or quarters. Furthermore, they actually had better outcomes. Twice the change success rate, and when they caused outages they could fix them 12 times faster. There were 4,200 that completely responded, and it just showed that you could be more agile and reliable at the same time. I think if we just connect some dots, they’re also going to be more secure; they’re probably going to have a faster find-fix cycle time. We didn’t actually test that, but if that were true it would mirror what we found in the previous benchmarking work I had done.”


Don’t think the age-old manifesto of “better, faster, cheaper” is dead in DevOps. It’s not. Deploying code 8,000 times faster is not a measure that anything is better, other than speed. It’s certainly not a metric that has anything to do with risk reduction. Maybe it helps fix software flaws more quickly, but does it really? Is that the entire story?
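Note that Gene’s “find-fix cycle time” for security flaws could be measured just as easily as the deployment metrics above. A minimal sketch, again with invented records (the dates and the data source are assumptions):

```python
from datetime import datetime, timedelta

# Hypothetical vulnerability records: (date found, date fixed in production).
# Real data would come from a vulnerability tracker or scanner history.
vulns = [
    (datetime(2017, 3, 1), datetime(2017, 7, 20)),  # a long-lived flaw
    (datetime(2017, 9, 1), datetime(2017, 9, 8)),   # a quick fix
]

cycle = sum((fixed - found for found, fixed in vulns), timedelta()) / len(vulns)
print(f"mean find-fix cycle time: {cycle.days} days")
```

Until numbers like this are actually collected and published alongside deployment frequency and lead time, “they’re probably going to be more secure” remains a hope, not a finding.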


The Bottom Line

The point I’m trying to make is that the “better, faster, more” mentality of DevOps platform companies does several industries a disservice. I’m very much in favor of agile and DevOps practices, but I’m looking for a holistic view of benefit and risk from an informed standpoint. Gene said, “Twice the change success rate and when they caused outages they could fix it 12 times faster.” But how does anyone know how many of those outages were caused by the faster deployments in the first place?


Think about it. Fixing an outage is fantastic. Let’s fix them quickly and often. However, outages are only what we have visibility into. The metrics don’t account for what we can’t see: the new software vulnerabilities that aren’t being tested for, because no one may even be looking for them.

In 2017, we saw the unfortunate cases of Equifax, WannaCry, Cloudbleed, T-Mobile, Deloitte, and Uber, among many others, proving once again that security often takes a backseat to development. It simply isn’t given the consideration or the budget it deserves. We all want to deliver new products and applications to our customers as quickly as possible, but it should never be done at the expense of the security of consumer and business data.


[1] 2016 State of DevOps Report: https://puppet.com/resources/whitepaper/2016-state-of-devops-report

[2] 2017 State of DevOps Report: https://puppet.com/blog/2017-state-devops-report-here