Who cares about DORA metrics anyway?
Imagine, for a moment, a world without metrics. Without dashboards. A magical place where there are no strings attached to your work, and you are free to explore your ideas, your innovation, and your creativity.
Sounds like a dream, right? I mean, who wouldn’t be into that?
Managers and business owners. That’s who. Because in the world of bottom-line performance measurements, KPIs and accountability, there is no room for dreams and fairy tales.
With distributed teams becoming the norm and the expectations for development velocity increasing like the worldwide inflation rate, managers need ways to be in control.
In the context of development teams, managers have been increasingly fond of using “DORA metrics” for this purpose. By monitoring and trying to improve this set of metrics, managers feel like they are somehow controlling their engineering team’s performance and culture.
Well, we’ve got news for all the managers out there (on behalf of developers everywhere): DORA measurements, on their own, won’t help you.
The risk in monitoring DORA metrics
Google spent six years running its “DevOps Research and Assessment” (“DORA”) initiative. Through this research, they identified “four key metrics that indicate the performance of a software development team”:
- Lead Time for Changes
- Deployment Frequency
- Change Failure Rate
- Time to Restore Service
Defining these metrics has been incredibly helpful in many ways, and companies like LinearB are building tools to help measure them effectively.
The risk in focusing heavily on DORA metrics, though, is that they are all about the bottom line. DORA metrics are, by definition, measures of output, so it can be easy to overlook the fact that in order to impact that output, you first need to consider the inputs that make it happen.
Let’s take a look at some typical definitions for these DORA metrics:
Metric | DORA Definition | Parameters |
---|---|---|
Lead Time For Changes | Time from a PR’s first commit until deployment to production | Coding time, clarity of hand-off and scope, pickup time (idle PR time), review time, and deployment time. Together, these make up cycle time. |
Deployment Frequency | How often deployments to production happen (this measures the frequency of deployments, not their quality) | Automated testing coverage, CI/CD pipeline maturity and performance |
Change Failure Rate | Percentage of deployments to production that result in an incident, rollback, or hotfix | Maturity of the observability and monitoring systems that surface production incidents. (Some issues are never discovered, so you can keep assuming those deployments were successful. This is a well-known hack for improving the metric.) |
Mean Time To Recover | For each incident discovered in production, how long it takes to resolve it and restore service | Issue reproduction time and dev environment setup time, plus all the cycle-time parameters above |
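To make these definitions a little less abstract, here’s a minimal sketch (in Python) of how the four metrics could be computed from raw deployment and incident records. The `Deployment` and `Incident` records below are our own simplified assumptions, not an official DORA schema; tools like LinearB derive the equivalent data from your git and CI/CD history, and teams differ on details like whether to aggregate lead time with a mean or a median.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from statistics import median

# Hypothetical, simplified records. Real tools derive the equivalent
# data automatically from git history and CI/CD pipeline events.
@dataclass
class Deployment:
    first_commit_at: datetime  # first commit of the change (PR)
    deployed_at: datetime      # when it reached production
    failed: bool               # did it cause an incident, rollback, or hotfix?

@dataclass
class Incident:
    detected_at: datetime
    resolved_at: datetime

def lead_time_for_changes(deploys: list[Deployment]) -> timedelta:
    """Median time from a PR's first commit to production deployment."""
    return median(d.deployed_at - d.first_commit_at for d in deploys)

def deployment_frequency(deploys: list[Deployment], window_days: int) -> float:
    """Average number of production deployments per day over the window."""
    return len(deploys) / window_days

def change_failure_rate(deploys: list[Deployment]) -> float:
    """Fraction of deployments that caused an incident, rollback, or hotfix."""
    return sum(d.failed for d in deploys) / len(deploys)

def mean_time_to_recover(incidents: list[Incident]) -> timedelta:
    """Average time from detecting a production incident to resolving it."""
    total = sum((i.resolved_at - i.detected_at for i in incidents), timedelta())
    return total / len(incidents)
```

Notice what’s missing from this picture: nothing in these functions knows anything about hand-off clarity, review pickup time, or environment setup, which are exactly the human inputs hiding in the “Parameters” column.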
As you read through these definitions, two things probably occurred to you: this is really boring, and it sounds like it's all about the outputs.
To have any chance of positively impacting any of these measurements, the inputs need to be considered. And since we’re talking about how well the dev team is functioning, the relevant inputs are human inputs, and they’re all related to developer experience.
Put simply: managers who want to see better DORA metrics need to keep the relevant DX inputs in mind, and make sure those are addressed. Developers are not robots or machines, so the real developer-experience underpinnings need to be examined before any dashboard full of charts and graphs.
A better way to use DORA metrics: Translate them first.
Because it is so important to keep these developer experience inputs in mind, we’ve created a 'DORA to DX' translation kit. We took those boring, output-centric DORA definitions and translated them into human DX terminology so that developers and managers alike can be more clearly aligned on what's really needed to achieve their common goals.
This is what the DORA metrics sound like when framed in real-life developer terms. In the table below, we’ve adjusted the definition for each metric so you can quickly understand it in terms of DX considerations.
Metric | DORA Definition | Real-life DX Translation |
---|---|---|
Lead Time For Changes | Time from a PR’s first commit until deployment to production | "When can I move on to the next ticket and forget about this one for good?" |
Deployment Frequency | How often deployments to production happen (frequency, not quality) | "Deployment is someone else’s problem. Where’s the deployment script I’m supposed to use?" |
Change Failure Rate | Percentage of deployments to production that result in an incident, rollback, or hotfix | "I’m on-call this week. Please tell me no one pushed anything to production. I don't want to spend all week putting out fires..." |
Mean Time To Recover | For each incident discovered in production, how long it takes to resolve it and restore service | "Who can I assign this bug to? I don’t understand how to fix it, I have better things to work on, and Copilot isn’t helping here." |
In addition to the DORA to DX translation, it’s also worth pointing out some suggested ways to achieve these improvements, and how the DX impact might be felt across your team:
Lead Time For Changes
- How to achieve your goal: Consider sharing a continuous preview environment with your peers to streamline the pickup and review process, and consider collecting all of the team’s review comments directly in your git workflow. This improves overall Lead Time while also reducing the Change Failure Rate. (Livecycle is a great solution for this type of contextual collaboration.)
- What it means to your DX: Faster gathering of review comments, near-instant pickup time for reviews, and alignment from non-developers, whose early reviews reduce the delay in deploying to production.
Deployment Frequency
- How to achieve your goal: If you have a fully functional CI/CD pipeline you trust, consider using a dev-friendly deployment solution (check out some of them here).
- What it means to your DX: No need to mess around with configuration files; within a few minutes you can enjoy a smooth deployment process, and you can combine it with your automated testing solution.
Change Failure Rate
- How to achieve your goal: By running a collaborative review process early on, using the methods above, you’ll get to sleep better and avoid on-call fatigue ;)
- What it means to your DX: More time to focus on creation
Mean Time To Recover
- How to achieve your goal: Consider using a tool that allows you to manage incidents contextually. That means having an easy way to: 1) travel between isolated environments per pull request, so you can smoothly reproduce the issue against the most recently deployed PRs, without needing to clone and build locally (see the sketch after this list); 2) get a response from the code owner in the context of the PR once you’ve detected the problematic merge; and 3) make the needed code changes and generate a new environment, by creating a new PR, to review and deploy.
- What it means to your DX: Get all the recognition you deserve for handling issues smoothly, so you’ll be able to drive your initiatives and get heard by the team.
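To make step 1 concrete, here’s a small illustrative sketch that walks back through recently merged PRs and prints a preview-environment URL for each one, so an on-call developer can reproduce an incident without cloning and building locally. It uses GitHub’s public REST API to list pull requests, but the repo name and the per-PR URL pattern (`PREVIEW_URL_TEMPLATE`) are hypothetical conventions, not any specific product’s API; substitute whatever your preview-environment tooling actually generates.

```python
import requests  # third-party: pip install requests

# Hypothetical repo and a hypothetical per-PR preview URL convention.
OWNER, REPO = "acme", "webapp"
PREVIEW_URL_TEMPLATE = "https://pr-{number}.preview.example.com"

def recently_merged_prs(limit: int = 10) -> list[dict]:
    """Fetch recently updated closed PRs from GitHub and keep the merged ones.

    Note: unauthenticated calls are rate-limited and only work for public
    repos; pass an Authorization header with a token for private repos.
    """
    resp = requests.get(
        f"https://api.github.com/repos/{OWNER}/{REPO}/pulls",
        params={"state": "closed", "sort": "updated",
                "direction": "desc", "per_page": 50},
        timeout=10,
    )
    resp.raise_for_status()
    merged = [pr for pr in resp.json() if pr.get("merged_at")]
    return merged[:limit]

if __name__ == "__main__":
    # Walk newest-first: each line is a candidate merge to check in isolation.
    for pr in recently_merged_prs():
        url = PREVIEW_URL_TEMPLATE.format(number=pr["number"])
        print(f'#{pr["number"]} "{pr["title"]}" merged {pr["merged_at"]} -> {url}')
```

From there, stepping through the environments newest-first usually pins down the problematic merge quickly, and you can ping the code owner in the context of that PR (step 2) before opening the fix PR (step 3).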
The TL;DR for devs and managers
Here’s the TL;DR for managers: The way to boost your DORA scores is to adopt dev-centric tools and build processes that improve your internal developer experience.
Here’s the TL;DR for developers: Help your managers understand your perspective by sending them a link to this article. Let your manager know that if they invest in developer experience, their ROI will be much better DORA metrics and much happier ICs throughout the team.
Postscript: dev teams need better translations!
Hopefully, you've found this helpful. As a company, we spend a lot of time looking at how to bridge gaps between cross-functional stakeholders. And what we've found is that it almost ALWAYS comes down (at least partially) to translation. Each stakeholder has their own "language" - devs, design, PMs, QA, managers, and marketing. The key to creating alignment is often found in the ability to translate and understand each other's needs and perspectives, and hopefully, this DORA to DX translation cheat sheet can help your team do that.
And because what we do is create alignment and facilitate collaboration, we've got a few more dev-centric translation cheat sheets on the way that we think you'll enjoy.
For example, we're working on translating some of the common "Questions Developers Get". Devs are always getting tapped on the shoulder with "just a quick question". So we've collected some of the most common interruptions and translated them into what developers are really saying to themselves when each question comes in.
So, if you've ever asked one of your engineering colleagues something like this, you'll now know what they're thinking:
- "What's the status of that new feature?"
- "Do you have a minute to hop on a call for us to go over the latest changes?
- "How can I test the new features?"
- "Did you fix all the design issues I reported?"
Suffice it to say, devs aren't always thrilled to stop what they are doing for a complete context-switch :-).
Stay tuned for more translations coming your way...