A SOC Under Siege: How to Reduce Alert Fatigue

Wednesday, March 28, 2018

I recently sat down with a SOC analyst from a large e-commerce vendor who showed me his daily workflow. He had multiple data sources feeding into his SIEM, which in turn generated a large volume of alerts. He then walked through a few examples, starting with the alert from the SIEM and moving through his workflow: researching each alert to find and validate the source information, then confirming there was proper context in relation to his environment. Connecting the dots to attain this context takes time and expertise. He showed me numerous alerts that were essentially redundant, yet he still had to go through each one rather than having them consolidated. He estimated that it could take anywhere from 20 minutes to a full day to address a single alert. That’s a wide range, but if you have even a couple of alerts that take an hour or two to address, you can quickly see how this just won’t scale. It’s like a DDoS of your SOC and the analyst team: it’s nearly impossible to keep up with the volume, and ultimately real incidents get missed.

In the “State of the SOC” report, commissioned by Fidelis Cybersecurity, 50 enterprise companies were asked: “How many investigations can a SOC analyst realistically handle in a day?”

  • 60% said “7 to 8 investigations.”
  • 30% said “5 to 6 investigations.”
  • 10% said “8 to 10 investigations.”

How many investigations can a SOC handle in a day?

The study shows that the upper limit of investigations a trained, competent SOC analyst can reasonably handle in a day is 7-8. When the number of investigations rose to 10 a day, the interviewees reported cases of analyst fatigue, which often resulted in lower-fidelity investigations and missed attack signals.

It’s clear that manual investigation combined with overwhelmed security teams ultimately will lead to trouble. Here are some ways you can automate the process of investigating alerts:

  1. Keep your analysts focused with a consolidated view – Alerts can come from both network and endpoint tools. Instead of managing these alerts separately, look at consolidating related events into a single alert that incorporates both network and endpoint data. This ensures that you don’t miss an important alert between the different views and also allows you to track an incident across the network and related endpoints.
  2. Consolidate alerts – If you can tie network and endpoint data together, you can automatically identify alerts that can be aggregated based on a common entity (host, IP address, email address), so that the conclusions presented to the analyst are actionable rather than spread across redundant alerts (see the sketch after this list).
  3. Leverage metadata and validation for greater alert context – By combining metadata from network alerts with validation from the endpoints you create rich alerts with much greater context, which allows for faster response and remediation. 
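
To make the aggregation idea concrete, here is a minimal sketch in Python of grouping network and endpoint alerts by a common entity. The alert fields used here (source, host, ip, email, signature) are assumptions made for illustration only, not the schema of any particular SIEM or Fidelis product.

from collections import defaultdict

def consolidate_alerts(alerts):
    """Group network and endpoint alerts that share a common entity so an
    analyst reviews one consolidated record instead of many redundant ones."""
    grouped = defaultdict(list)
    for alert in alerts:
        # Use whichever entity the alert carries: host name, IP address, or email.
        entity = alert.get("host") or alert.get("ip") or alert.get("email")
        grouped[entity].append(alert)

    consolidated = []
    for entity, related in grouped.items():
        consolidated.append({
            "entity": entity,
            "sources": sorted({a["source"] for a in related}),  # e.g. ["endpoint", "network"]
            "alert_count": len(related),
            "alerts": related,
        })
    return consolidated

# Example: three raw alerts collapse into two consolidated records.
raw_alerts = [
    {"source": "network",  "ip": "10.0.0.5", "signature": "C2 beacon"},
    {"source": "endpoint", "ip": "10.0.0.5", "signature": "suspicious child process"},
    {"source": "network",  "ip": "10.0.0.9", "signature": "port scan"},
]
for record in consolidate_alerts(raw_alerts):
    print(record["entity"], record["alert_count"], record["sources"])

In practice the grouping key and time window would be tuned to your environment, but the effect is the same: fewer, richer alerts for the analyst to investigate.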

Operational fatigue in the SOC is common – finding ways to automate the alerting workflow can help you stretch your analysts and ensure that they can quickly escalate, investigate and respond to the alerts that matter most to your organization.

 

- Sam Erdheim
Vice President of Marketing