On-call Manual: How to cope with on-call anxiety

Pawel Kadluczka - Sep 20 - Dev Community

On-call anxiety is real. I've been there, and I know engineers who have experienced it. Many factors contribute to it, but in my experience, three stand out.

1. Unpredictability

Unpredictability is the number one reason for on-call anxiety. You might be responsible for a wide range of services. They may break at any time for various reasons: network issues, deployments, failing dependencies, shared infra outages, data center drains (a.k.a. storms), physical damage caused by excavators, etc. On-calls, especially new ones, worry they won't know what to do if they get an alert. How would they figure out what broke? How would they come up with a fix?

What to do about it?

With experience, the unpredictability aspect of the on-call gets easier. But even for the most seasoned on-call engineers, handling an outage can be difficult without the proper tools, such as:

  • Easy-to-navigate dashboards that let you quickly tell whether a service is working correctly and identify problematic areas when it isn't
  • Playbooks (a.k.a. runbooks) explaining troubleshooting and mitigation steps
  • Documentation describing the service and its dependencies, including the relevant on-call rotations to reach out to if necessary

Having a team eager to jump in and help mitigate an outage quickly is priceless. Your team members understand some areas better than you. Knowing they have your back is reassuring.

2. Too many alerts and incidents

The second most common reason engineers fear their on-call is a never-ending litany of alerts, requests, and tasks. If you get a new alert when you've barely finished acknowledging the previous one, and you're also expected to handle customer tickets and deal with requests from other teams, dreading your on-call is understandable. The exhaustion is usually exacerbated by the feeling of not doing a decent job. I was on a rotation like this once. After a while, I realized that everyone, not only me, was overwhelmed. Even though we toiled long hours, most alerts were ignored, customer tickets remained unanswered, and requests from other teams were only handled after they were escalated to the manager.

What to do about it?

There is no way a single person can fix a very heavy on-call by themselves. They won't have the time during their shift, and by the time the shift ends, they will be so fed up that they won't want to hear about anything on-call-related. There is, however, some low-hanging fruit that can improve the quality of the on-call quickly:

  • Delete alerts - find routinely ignored alerts and determine if they're useful. If they aren't, delete them.
  • Tune noisy but useful alerts - adjust thresholds and windows for flapping alerts, alerts that fire prematurely, and short-lived alerts (see the sketch after this list).
  • Get a secondary on-call - a second person could help handle tasks the primary on-call does not have the capacity to deal with (e.g., customer tickets). This could be only temporary.
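
To make the tuning idea concrete, here is a minimal sketch of what a "sustain window" does. It isn't tied to any particular monitoring system, and the threshold and window size are made-up values you would tune for your own service:

```python
from collections import deque

def should_fire(error_rates, threshold=0.05, sustain_samples=15):
    """Fire only if the error rate stays above the threshold for the whole
    sustain window; short spikes and flapping signals no longer page anyone."""
    window = deque(maxlen=sustain_samples)
    for rate in error_rates:
        window.append(rate > threshold)
    return len(window) == sustain_samples and all(window)

# A two-sample spike does not fire...
print(should_fire([0.01] * 20 + [0.20] * 2 + [0.01] * 20))  # False
# ...but a sustained elevated error rate does.
print(should_fire([0.01] * 5 + [0.20] * 20))                # True
```

Most alerting systems have a built-in equivalent (for example, a "for"/duration setting on the alert rule), so in practice this is usually a one-line configuration change rather than code you write yourself.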

These ideas can alleviate on-call pain but are unlikely to fix a bad on-call for good. Improving a heavy on-call requires identifying and addressing problems at their source and demands effort from the entire team to maintain on-call quality. I wrote a post dedicated to this topic. Take a look.

3. Middle-of-the-night alerts

Many on-call rotations are 24/7. The on-call is responsible for dealing with incidents promptly, even if they happen in the middle of the night. Waking up to an alert is not fun, and if it happens regularly, it is a valid reason to resent being on-call.

What to do about it?

While it may not be possible to avoid all middle-of-the-night alerts, there might be some actions you can take to reduce the disruption. A lot will depend on your specific situation, but here are some ideas:

  • Check your dashboards in the evening and address any issues that could trigger an alert overnight.
  • Increase alert thresholds outside working hours. If your traffic is cyclical - e.g., most requests come from one timezone, so there is much less traffic at night - you may be able to relax thresholds when few users are affected. Even if an incident happens, its impact will be smaller. Alerts also get much noisier when the traffic volume is low (e.g., if you get ten requests in an hour and one fails, you might get paged for a 10% error rate). See the sketch after this list.
  • Disable alerts at night. Some outages won't cause any impact unless they last for a long time. For instance, our team was responsible for a service that would work fine even if one of its dependencies was down for a day. This 24-hour grace period allowed us to turn off alerts at night.
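
Here is a minimal sketch of the last two ideas combined: a minimum-traffic guard so percentages computed over a handful of requests don't page anyone, plus a relaxed overnight threshold. It isn't tied to any specific alerting tool, and all the numbers and times are made up for illustration; tune them to your own service:

```python
from datetime import datetime, time

# Hypothetical values; adjust to your service and rotation.
DAY_THRESHOLD = 0.05     # 5% error rate pages during working hours
NIGHT_THRESHOLD = 0.15   # relaxed threshold overnight
MIN_REQUESTS = 100       # below this volume, error percentages are too noisy
NIGHT_START, NIGHT_END = time(22, 0), time(7, 0)

def is_night(now: datetime) -> bool:
    t = now.time()
    return t >= NIGHT_START or t < NIGHT_END

def should_page(errors: int, requests: int, now: datetime) -> bool:
    """Page only when there is enough traffic for the error rate to be
    meaningful, and use a relaxed threshold outside working hours."""
    if requests < MIN_REQUESTS:
        # 1 failure out of 10 requests is a 10% error rate, not an outage.
        return False
    threshold = NIGHT_THRESHOLD if is_night(now) else DAY_THRESHOLD
    return errors / requests > threshold

# One failed request out of ten at 3 a.m. no longer wakes anyone up.
print(should_page(errors=1, requests=10, now=datetime(2024, 9, 20, 3, 0)))      # False
# A genuine outage still pages, even at night.
print(should_page(errors=400, requests=2000, now=datetime(2024, 9, 20, 3, 0)))  # True
```

The same effect is usually available directly in your monitoring stack, whether as separate day and night alert rules, scheduled mutes, or a minimum-sample condition on the alert expression.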

💙 If you liked this article...

I publish a weekly newsletter for software engineers who want to grow their careers. I share mistakes I’ve made and lessons I’ve learned over the past 20 years as a software engineer.

Sign up here to get articles like this delivered to your inbox:
https://www.growingdev.net/
