Maturing and Specializing: Incident Response Capabilities Needed

Survey results reveal an increasingly complex response landscape and the need for automation of processes and services to provide both visibility across systems and best avenues of remediation. Read this paper for coverage of these issues, along with best practices and sage advice.
A SANS Survey
Written by Alissa Torres
Advisor: Jake Williams
August 2015
Sponsored by
AlienVault, Arbor Networks, Bit9 + Carbon Black,
Hewlett-Packard, McAfee/Intel Security, and Rapid7
©2015 SANS™ Institute
Executive Summary
Hackers used to break into a system, steal as much data as possible and get out, without worrying about detection. Today, however, they have learned to be patient, harvest more data and inflict significant security and financial damage. Because of this, organizations must detect and respond to incidents as quickly, efficiently and accurately as possible.
The length of dwell time (the time from the attacker's initial entry into an organization's network to the time the intrusion is detected) correlates most closely to the total cost of a breach. The longer an attacker has unfettered access on a network, the more substantial the data loss, the severity of customer data theft and the subsequent regulatory penalties.

Key findings:
- 37% report an average dwell time of 24 hours or less, with 23% reporting two to seven days
- 36% spend an average of 24 hours or less to remediate an incident, and 28% remediate in two to seven days
- 66% cited a skills shortage as an impediment to effective IR
- 45% lack visibility into events across a variety of systems and domains
- 37% are unable to distinguish malicious events from nonevents

Of the 507 respondents who qualified to take the SANS 2015 Incident Response Survey, 37% cited an average dwell time of less than 24 hours, while 36% of organizations took 24 hours or less to remediate real breaches. However, 50% took two days or longer to detect breaches, 7% didn't know how long their dwell time was, another 50% took two days or longer to remediate and 6% didn't know. This represents a slight improvement over our 2014 survey, in which 30% remediated breaches in 24 hours or less, while 17% took one to two days to remediate, 51% took more than two days to remediate and 6% took three months or longer.
These and other results of the 2015 survey show that incident response
(IR) and even detection are maturing. For example, although malware
is still the most common underlying reason for respondents’ reported
incidents, 62% said malware caused their breaches, down from 82% in
2014. Data breaches also decreased to 39% from 63% last year. Such results hint that
malware prevention and other security technologies are working in an increasingly
complex threat landscape.
The shrinking window of response time, along with more automated tools and—just
as important—the specialized job titles to support the IR function are all indicators
of this maturation. Now for the bad news: Organizations are short on the skills and
technologies they need for full visibility and integrated response.
In the survey, 37% of respondents said that their teams are unable to distinguish
malicious events from nonevents, and 45% cited lack of visibility into events across
a variety of systems and domains as key impediments to effective IR. Together, these
answers suggest the need for more precise conditions for security information and event
management (SIEM) alerts, as well as the need for more specialized IR skills.
Skills, while in demand, are also hard to come by, with 66% of survey takers citing a
skills shortage as being an impediment to effective IR. Another 54% cited budgetary
shortages for tools and technology, 45% lack visibility into system or domain events,
41% lack procedural reviews and practice, and 37% have trouble distinguishing
malicious events from nonevents.
Immature IR teams do not have the time or expertise to identify the initial entry of an
attacker into the network nor fully scope the attack for successful remediation. This
points to a “cleaver-like” approach to response, with 94% of respondents using the wipe
and reimage method of remediation. Even this is not always effective. As the recently
discovered Duqu 2.0 attacks demonstrate,1 advanced attackers count on their ability to
reinfect machines at will. Wiping and reimaging individual machines without remediating the full scope of the compromise is certain to be a losing strategy.
Overall, these results reveal an increasingly complex response landscape and the need
for automation of processes and services to provide both visibility across systems and
best avenues of remediation. These issues, along with best practices and advice, are
discussed in the following pages.
1 https://securelist.com/files/2015/06/The_Mystery_of_Duqu_2_0_a_sophisticated_cyberespionage_actor_returns.pdf
About the Survey Respondents
The organizations in this survey are diverse in industry type, geographic location and
size of employee base, providing an excellent cross-section of IR capabilities as they exist
in companies today.
Size and Regions
The respondent pool includes a varied distribution of company size: 26% work for
companies with more than 20,000 employees and contractor staff, and 20% are from
companies of 500 employees or less (Figure 1).
Figure 1. Organization Size (distribution of responses to "How large is your organization's workforce, including both employees and contractor staff?", ranging from fewer than 100 to greater than 20,000)
Most (81%) of respondents’ organizations have a presence in the United States, with
Europe being the second most cited region with 33%. Overall, respondents represented
14 regions and countries, with many coming from global organizations.
Type of Industry
Government, technology and financial services were the top three sectors represented
in this survey, with 20%, 19% and 17% of the response base, respectively. Education
and manufacturing were each represented by at least 7% of respondents, while
just under 6% came from health care/pharmaceuticals. Energy/utilities, retail and
telecommunications were each represented by less than 4% of respondents. “Other”
write-in responses include aerospace/defense, chemical engineering and fast food.
Roles/Responsibilities
Only 5% of respondents identified themselves as belonging to an IR/forensics consulting
firm. This indicates more organizations are bringing these types of skills in-house,
particularly as we look at the progress made over the past year in organizations creating
a dedicated in-house IR team. Last year, 59% of respondents had a dedicated team, while
73% reported having a team this year.
Results also reveal growing specialization in IR-related titles. Just over 9% of respondents consider themselves specifically incident responders, with others calling themselves intelligence analyst, CERT team leader, incident/problem manager, IT security architect or engagement manager in the write-in responses under the "Other" option. This suggests that professionals with highly specific skill sets are filling niche roles on IR teams. Increased specialization is typically a sign of the maturation of an industry, a strong progressive indicator for the IR profession as a whole. See Figure 2.
What is your primary role in the organization, whether as an employee or consultant?
(Responses: Security analyst; Security manager/Director/CSO/CISO; IT manager/Director/CIO; Incident responder; Other; System administrator; Digital forensics specialist; Compliance officer/Auditor; Network operations; Security operations center (SOC) manager; Help desk agent/Technician; Investigator)

Figure 2. Many Roles Involved in Response

TAKEAWAY: With increased specialization, it becomes more important to understand what each member of an IR team does. When reviewing the experience of an employee or a candidate for a position, don't rely on the title the individual had. Instead, look at the specific duties he or she performed.

In a cursory search through open job requisites for security analysts, descriptions of duties and responsibilities varied widely, as did the level of required experience for the position. From assigned duties, such as being a member of a Tier 1 security operations center (SOC) with responsibility for continuous monitoring, documentation and reporting of incidents, to being a highly specialized technical expert who develops signatures and countermeasures based on adversary tactics, techniques and procedures (TTPs), the security analyst title is used as a catchall in the industry to describe a role with a variety of duties and responsibilities.
Eyes on the Ground
The majority (84%) of respondents report that their organization has experienced at
least one incident over the past year, with 18% experiencing more than 100 incidents.
Of those, 50% resulted in at least one real data breach: 9% say their investigations resulted in only one critical incident, 25% say that detection led to actual breach investigations in two to 10 instances, and 6% report their investigations resulted in 11–25 breaches, with just over 10% finding more than 25 actual breaches. Interestingly, the majority of those who experienced two to 10 breaches started with two to 10 incidents.
See Figure 3.
Figure 3. Incidents Detected Compared to Actual Breaches Experienced (incidents responded to in the last 12 months vs. breaches in the last 12 months, with a breakdown of incident counts among those reporting 2–10 breaches; of all respondents, 8.9% reported one breach, 24.7% reported 2–10, 6.3% reported 11–25, 3.4% reported 26–50, 3.1% reported 51–100, 2.6% reported 101–500 and 1.3% reported more than 500)
These percentages show a decrease in actual breaches compared to 2014 results. In
2014, 61% experienced a serious breach, 18% did not and 21% did not know whether
they had experienced a breach. This year, 34% said they had no breaches (as opposed to
18% last year), and there were fewer unknowns (16% as opposed to 21%).2
One possible explanation for the notable decrease in critical breach incidents could
be the increase in automated IR tools. As we will see in the review of IR technology
implementations, 42% of our respondents have fully integrated SIEM correlation and
analysis, compared to only 22% last year.
Breach Payloads
Just as in last year's survey results, malware tops the list (62%) as the most common underlying nature of incidents in the respondents' enterprises, down nearly 20 percentage points from last year's results. The combination of denial-of-service options tied with unauthorized
access for the second most common category of critical incident, with 43% of
respondents reporting such incidents. Both fell this year, with unauthorized access
showing the most dramatic decrease, from 70% in 2014 to 43%. Data breach occurrences
were down as well, with only 39% of 2015 respondents experiencing such incidents
compared to 63% last year. See Table 1.
Table 1. Year-Over-Year Comparison of Incident Types

Incident Type | 2014 | 2015
Malware | 81.9% | 62.1%
Distributed denial of service | 48.9% | 43.1%
Distributed denial of service (DDoS) main attack | n/a | 27.6%
Distributed denial of service (DDoS) diversion attack | n/a | 15.5%
Unauthorized access | 70.2% | 42.5%
Data breach | 62.8% | 38.5%
Advanced persistent threat (APT) or multistage attack | 55.3% | 33.3%
Insider breach | n/a | 28.2%
Unauthorized privilege escalation | n/a | 21.3%
Destructive attack (aimed at damaging systems) | n/a | 14.9%
Other | n/a | 12.8%
False alarms | 66.0% | 1.7%

2 "Incident Response: How to Fight Back," www.sans.org/reading-room/whitepapers/analyst/incident-response-fight-35342
A contributing factor to the decrease in malware incidents and, in some regard, data
breaches, may be the growing implementation of more effective antivirus, edge
detection and endpoint protection products. Organizations are becoming more adept at
handling these infections with automated processes and may no longer consider them
incidents, as they previously might have.
However, because APTs are largely malware-based, the 33% of respondents selecting advanced persistent threat (APT) or multistage attacks likely overlap with the malware category. Unauthorized access (43%) could also involve malware infections.
Denial and Destruction

TAKEAWAY: In response to the growing prevalence of attacks involving data destruction and system disruption, IR teams must prepare to contain, counter and remediate by creating specific procedures that address this unique type of attack.

DDoS has increasingly become a means to disable a company or hide nastier payloads inside the noise of the DDoS. Respondents saw more frequent use of these attack methods over the past year. According to 28% of respondents, DDoS was used as a primary attack method, while 16% saw it used as a diversion attack. This is slightly less than last year's 49% of respondents who experienced a DDoS attack, whether as a primary attack vector or a diversionary attack. DDoS was also mentioned as an attack type for the first time in the 2015 version of the Verizon Data Breach Investigations Report (DBIR).3

Another 15% of respondents cited intentional system damage as a method employed in breaches their organization has experienced over the past year, which can also deny service. Though in past years data destruction was seen largely in insider cases, with rogue or disgruntled employees targeting specific data, we have recently seen examples of nation-states employing these attacks as weapons in cyberwarfare, for example in the Sony attack of November 2014, which is suspected to have been perpetrated by North Korea,4 and in the Las Vegas Sands casino intrusion reported in December 2014,5 which has been attributed to Iran. What used to be an infrequent information warfare technique is now becoming more common in attackers' arsenals.
Ransomware such as CryptoLocker, first seen in September 2013 and written in by one
respondent, is also considered an attack on availability. This type of malware represents
yet another category of attack that could be considered a serious breach and result in
lost access to sensitive or highly valuable data if IR teams do not have a planned set of
procedures for responding to such incidents.
3 "2015 Data Breach Investigations Report," www.verizonenterprise.com/DBIR
4 www.bloomberg.com/news/articles/2014-12-04/sony-hack-signals-emerging-threat-to-destroy-not-just-steal-data
5 www.bloomberg.com/bw/articles/2014-12-11/iranian-hackers-hit-sheldon-adelsons-sands-casino-in-las-vegas
Targeted Data Theft
In this year’s survey, employee information was the most common category of data
stolen, with 41% of participants citing employee data as the top target of their attackers.
Another 36% cited individual customer information, while 30% selected intellectual
property. The fourth most common category of stolen data is proprietary customer data
(27%), different from individual customer information due to its relation to the service
provided by the victim company. For example, proprietary customer data from an ISP
would include Internet usage, bandwidth and IP address assignment information for the
customer. Table 2 provides a comparison of 2014 and 2015 data exfiltration statistics.
Table 2. Data Types Targeted, 2014–2015

Data Type | 2014 | 2015
Employee information | 36.4% | 41.2%
Individual consumer customer information | 36.4% | 35.8%
Intellectual property (source code, manufacturing plans, etc.) | 31.8% | 29.7%
Proprietary customer information | 31.8% | 26.7%
Legal data | 12.1% | 14.5%
PCI data (payment card numbers, CVV2 codes, track data) | n/a | 13.9%
PHI data (health information) | n/a | 12.1%
Other | n/a | 15.2%
Other regulated data (SOX, non-PHI personally identifiable information, etc.) | 11.5% | 11.5%
It’s estimated that 4 million records were compromised in a recent example of data theft
detected in April 2015 at the U.S. Office of Personnel Management (OPM).6 The financial
consequences of this breach can be used as a case study for the typical organization.
Based on estimates that the cost of each record lost is $154,7 we can determine that the
OPM compromise will have a minimum cost of $616 million, depending on the type of
data stolen, just on a cost-per-record basis. When you factor in the sensitivity of the data
that was stolen, the cost will likely be much higher. Such breaches should be avoided with proper prevention, and their effects must be minimized if they can't be avoided.
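The per-record arithmetic above can be sketched as a quick back-of-the-envelope calculation (illustrative only; the record count and per-record cost are the estimates cited above, and actual costs vary widely with data sensitivity and regulatory exposure):

```python
# Back-of-the-envelope breach cost from the figures cited above:
# roughly 4 million records at an estimated $154 per lost record.
COST_PER_RECORD_USD = 154
records_compromised = 4_000_000

minimum_cost = records_compromised * COST_PER_RECORD_USD
print(f"Minimum estimated cost: ${minimum_cost:,}")  # Minimum estimated cost: $616,000,000
```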
6 www.opm.gov/news/releases/2015/06/opm-to-notify-employees-of-cybersecurity-incident
7 http://securityintelligence.com/cost-of-a-data-breach-2015/#.VYl0u0a6L_0
Key Elements for Successful Incident Response
Verizon’s 2015 DBIR8 reports that the average time required for an attacker to conduct a
breach is decreasing while the average time to detect a breach is increasing. In reviewing
the incidents occurring in 2014, they found that in 60% of the breaches investigated
attackers were able to compromise an organization within minutes. Considering what
IR teams are up against, automating the response processes and reducing the time
available to attackers are imperative.
Metrics
Whether working as part of a consulting service or internal team, establish a set of
metrics to measure improvements in IR process efficiency and effectiveness. The core reason for tracking metrics is to demonstrate the value of the IR investment to stakeholders. However, the metrics used by survey takers vary widely: 23% of our
respondents use well-defined metrics to help track, evaluate and update their plan,
whereas 37% measure improvements in accuracy, response time and reduction of attack
surface, as shown in Figure 4.
How do you assess the effectiveness and maturity of your IR processes?
- We use well-defined metrics to help us track, evaluate and update our plan.
- We measure improvements in accuracy, response time and reduction of attack surface.
- We conduct incident response exercises on a routine basis.
- Other

Figure 4. Measures of Improvement
For metrics to be useful, they must be periodically compared against a baseline. Based
on the complexity of an intrusion and the sophistication of an attacker, detection and
remediation prove to be more complex in specifically targeted industries. Comparing
metrics across industries is not a useful guide for measuring in-house IR functional
progress. Instead, use resources such as the whitepaper, “An Introduction to the Mission
Risk Diagnostic for Incident Management Capabilities (MRD-IMC),”9 as a guide for
establishing internal metrics.
8 www.verizonenterprise.com/DBIR/2015
9 http://resources.sei.cmu.edu/library/asset-view.cfm?assetID=91452
A core measure of IR effectiveness is the time from infection, or occurrence of incident,
to detection and remediation. In our survey, the single most selected average for time to
detection was two to seven days (23%), which was also the most selected (28%) answer
option for time to remediate. However, when aggregated, 37% of respondents reported
an average time to detection of less than 24 hours, while 36% remediated within 24
hours after detection. See Figure 5.
On average, how much time elapsed between the initial compromise and detection (i.e., the dwell time)? How long from detection to remediation?
(Answer options ranged from less than 1 hour to more than 1 year, plus Unknown. The most common answer for both questions was 2–7 days: 23% for time to detection and 28% for time to remediation.)

Figure 5. Time to Detection/Time to Remediation
In contrast, 11% take more than one month to detect an incident, as well as remediate
an incident after detection.
Time to detection and remediation are difficult metrics on which to compare
organizations because some industries are more attractive, valuable targets for
sophisticated attackers. So it’s important to measure improvement in incident
handling within the organization rather than looking at averages across industries.
But is this 24-hour or less time frame cited for both detection and remediation
realistic? Based on other security trend reports, the average dwell time for a
financial services company is 98 days, and it’s 197 days for retail.10 How, then, can
our respondents report an average time frame to return business functions to fully
operational within two to 10 days of infection?
The answer lies in the varying definitions of detection and remediation. Often
organizations determine time to detection from when the system first exhibited
suspicious behaviors, thus explaining the short (less than 24 hours) time frame
between infection and detection.
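These metric definitions can be made concrete with a small sketch (the timestamps are hypothetical; the point is that dwell time runs from initial compromise to detection, and time to remediate runs from detection onward):

```python
from datetime import datetime

# Hypothetical incident timeline
compromised = datetime(2015, 5, 1, 9, 0)    # attacker's initial entry
detected = datetime(2015, 5, 3, 14, 30)     # intrusion detected
remediated = datetime(2015, 5, 4, 10, 0)    # systems restored

dwell_time = detected - compromised          # compromise -> detection
time_to_remediate = remediated - detected    # detection -> remediation

print(dwell_time)         # 2 days, 5:30:00
print(time_to_remediate)  # 19:30:00
```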
10 www.zdnet.com/article/businesses-take-over-six-months-to-detect-data-breaches
Remediation Practices
Without proper investigative skills or resources, the number of compromised systems
and accounts—and the amount of data stolen—is not properly quantified. In these
instances, detection to remediation is achieved quickly with a simple wipe and reimage.
Yet, best industry response practices also include signaturing malware and attacker
behavior based on the initial system(s) identified. Once these unique signatures, known
as indicators of compromise, are created, they are used to scan other systems in the
enterprise. In this way, all systems with active malware or similar artifacts of attacker
activity will be identified. In our survey, 88% of our respondents stated they were
conducting this type of identification and follow-up either manually or in an automated
fashion (see Table 3).
Table 3. Remediation Practices
What practices do you have in place for remediating incidents? Indicate whether the process is conducted manually, through automated systems that are integrated, or a combination of both. Choose only those that apply to your organization.

Answer Options | Manual | Automated | Both | Total Response
Quarantine affected hosts | 44.8% | 22.2% | 29.9% | 96.9%
Shut down system and take it offline | 67.0% | 7.6% | 20.5% | 95.1%
Kill rogue processes | 52.1% | 11.1% | 31.6% | 94.8%
Remove rogue files | 43.4% | 12.5% | 38.2% | 94.1%
Reimage/Restore compromised machines from gold baseline image | 60.4% | 11.1% | 22.2% | 93.8%
Isolate infected machines from the network while remediation is performed | 63.2% | 8.7% | 21.5% | 93.4%
Block command and control to malicious IP addresses | 38.5% | 18.8% | 35.8% | 93.1%
Reboot system to recovery media | 64.2% | 7.3% | 18.4% | 89.9%
Identify similar systems that are affected | 49.7% | 10.8% | 27.8% | 88.2%
Remotely deploy custom content or signatures from security vendor | 33.7% | 21.2% | 33.0% | 87.8%
Update policies and rules based on IOC findings and lessons learned | 59.4% | 7.3% | 18.4% | 85.1%
Remove file and registry keys related to the compromise without rebuilding or reinstalling the entire machine | 52.4% | 8.3% | 23.3% | 84.0%
Boot from removable media and repair system remotely | 58.7% | 6.9% | 16.7% | 82.3%
Other | 6.3% | 2.8% | 5.2% | 14.2%

TAKEAWAY: Automating remediation processes will speed time to remediation and reduce the workload assigned to IR staff.
For all the practices listed here, respondents did more manually than through
automated processes. The critical thing to remember is that manual practices take more
time and are usually much less accurate than automated procedures.
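As a minimal sketch of the automated identification described above, the following scans a directory tree for files matching known-bad SHA-256 digests (the digest shown is the hash of an empty file, used purely as a placeholder; a real IOC sweep would also match registry keys, mutexes and network indicators):

```python
import hashlib
from pathlib import Path

# Hypothetical indicator set: SHA-256 digests of known-malicious files.
# The entry below is the digest of empty content, used as a placeholder.
KNOWN_BAD_SHA256 = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def sha256_of(path: Path) -> str:
    """Hash a file in chunks so large files don't exhaust memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def sweep(root: Path) -> list[Path]:
    """Return paths under root whose contents match a known-bad digest."""
    return [p for p in root.rglob("*")
            if p.is_file() and sha256_of(p) in KNOWN_BAD_SHA256]
```

In practice the same sweep would be pushed out to every endpoint in the enterprise, not run on a single machine, so that all systems with matching attacker artifacts are identified before remediation begins.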
What Works
Organizations are still automating what they can in their processes, which SANS defines
as integrating functions across ecosystems. Traditional anti-malware/edge protection,
logs and behavior-based scanning are the most integrated, according to results.
Detection
The three most popular detection technologies, as indicated by being either fully or
partially integrated into respondents’ IR capabilities, are IPS/IDS/firewall and unified
threat management (UTM) alerts (89%), log analysis (81%), and network-based scanning
agents for signatures and detected behavior (81%). See Table 4.
Table 4. Detection Technologies
Does your organization use any of the following capabilities to identify impacted systems? If so, please indicate how integrated each is with your overall incident response ecosystem. Select only those that apply.

Answer Options | Highly Integrated | Partially Integrated | Total Integrated
IPS/IDS/Firewall/UTM alerts | 55.6% | 32.9% | 88.5%
Log analysis | 40.3% | 41.0% | 81.4%
Network-based scanning agents for signatures and detected behavior | 46.8% | 34.2% | 81.0%
User notification/Complaints | 40.0% | 38.0% | 78.0%
Network packet capture or sniffer tools | 35.3% | 40.7% | 75.9%
SIEM correlation and analysis | 42.4% | 32.5% | 74.9%
Endpoint Detection and Response (EDR) capabilities | 35.6% | 38.6% | 74.2%
Network flow and anomaly detection tools | 33.6% | 35.9% | 69.5%
Third-party notifications and intelligence | 27.8% | 41.7% | 69.5%
Network traffic archival and analysis tools | 34.9% | 32.2% | 67.1%
Intelligence and analytics tools or services | 27.5% | 38.3% | 65.8%
Host-based intrusion detection (HIDS) agent alerts | 32.9% | 32.5% | 65.4%
Endpoint controls (e.g., NAC or MDM) | 27.1% | 33.9% | 61.0%
Home-grown tools for our specific environment | 21.4% | 36.6% | 58.0%
SSL decryption at the network boundary | 26.4% | 29.5% | 55.9%
Browser and screen-capture tools | 23.4% | 27.5% | 50.8%
Third-party tools specific for legal digital forensics | 22.7% | 26.8% | 49.5%
Other | 4.4% | 4.1% | 8.5%
There is a strong correlation with the top three automated capabilities between our 2015 and 2014 surveys (although the questions were worded differently in 2014). In our 2015 survey, IPS/IDS/Firewall and UTM alerts were integrated by 89% of respondents, 81% integrated log analysis in their response practices, and 81% integrated network-based scanning agents for signatures and detected behavior. These categories were also among the most highly used processes in 2014, whether automated, manual or both: IPS/IDS/Firewall and UTM alerts were used by 91%, log analysis by 85%, and network-based scanning agents for signatures and detected behavior by 96%.
The largest share of respondents (44%) felt their SOCs were immature and unable to respond well to events, with 25% believing their SOCs were maturing, 14% feeling their SOCs were mature and the rest unsure of their status.
In correlating the maturity of IR capabilities within an organization with the technologies
and resources deployed, mature SOCs had the greatest integration of technologies
such as endpoint-detection-and-response (EDR) capabilities, network packet capture
implementation, and SIEM correlation and analysis.
Intelligence
The incorporation of cyberthreat intelligence (CTI) and analytics tools and services was
also more prevalent in organizations with mature SOCs. By correlating threat intelligence
and analytics, IR teams can detect and respond to threats based on past incidents and
those included in CTI feeds. In fact, in the 2015 SANS Cyberthreat Intelligence Survey,11
75% of respondents cited CTI as important to security. Yet only 66% of respondents
to this 2015 survey on IR report high or partial integration of intelligence with their IR
processes. They do, however, use intelligence provided either internally or through third-party sources. Specifically, respondents use intelligence in the following ways:
• 96% tie intelligence to IP addresses. This was the most commonly implemented
type of CTI data when including internal and third-party sources.
• 93% tie traffic to known suspicious IPs.
• 91% track endpoint security data and logs.
• 91% incorporate signatures and heuristics from previous events.
11 "Who's Using Cyberthreat Intelligence and How?" www.sans.org/reading-room/whitepapers/analyst/who-039-s-cyberthreat-intelligence-how-35767
For the full breakdown of in-house and third-party capabilities for IR processes, see
Figure 6.
What kind of threat intelligence are you using? Please indicate what is being delivered through third parties and what is developed internally. Select only those that apply.
(Response categories: IP addresses/Nodes; Communications between systems and malicious IP addresses; Endpoint data and logs; Heuristics/Signatures from previous events; Suspicious files, host flow and executables; Host and network indicators of compromise (IOCs); Network history data; Domain data; Reputation data; Adversary/Attacker attribution; Unexecuted or undetonated malicious files; Tor Node IP addresses; Other. Each was reported as discovered internally, provided by a third party, or both.)

Figure 6. Type of Intelligence Used
As indicated earlier, most of the respondents to this survey work internally for their
organizations, so it makes sense that the primary outsourced functions include
heuristics, reputation data and adversary/attacker data attributes. Tor node IP addresses
would fit into the reputation and attacker data categories as well. Many organizations
will not expect to receive legitimate traffic from Tor exit nodes, but because the exit
nodes change frequently, they need automated processes to effectively block attacks
from Tor exit nodes.
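The kind of automated blocking described above can be sketched as follows. The bulk exit-list URL is an assumption based on the Tor Project's published list; verify it before relying on it, and refresh the list frequently, since exit nodes change:

```python
import urllib.request

# Assumed location of the Tor Project's bulk exit-node list
# (one IP address per line); confirm before deploying.
EXIT_LIST_URL = "https://check.torproject.org/torbulkexitlist"

def parse_exit_list(text: str) -> set[str]:
    """Parse one IP per line, ignoring blanks and comment lines."""
    return {line.strip() for line in text.splitlines()
            if line.strip() and not line.startswith("#")}

def fetch_exit_nodes(url: str = EXIT_LIST_URL) -> set[str]:
    """Download and parse the current exit-node list."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return parse_exit_list(resp.read().decode("utf-8"))

def is_tor_exit(ip: str, exit_nodes: set[str]) -> bool:
    """Check an inbound IP against the refreshed exit-node set."""
    return ip in exit_nodes
```

A scheduled job would call fetch_exit_nodes() on a short interval and push the resulting set into firewall or proxy block rules, rather than checking IPs one at a time in application code.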
It’s clear that significant security implementations are present within our respondents’
networks. However, their full functionality cannot be achieved without automation in
analysis, correlation and reporting. A notable 42% of respondents have fully integrated,
and 33% have partially integrated SIEMs into their IR ecosystems for analytics during
response. Some may also be relying on their CTI tools or services to do the analytics for
them, with 26% fully integrating and 28% partially integrating CTI within their functions.
The 13% of organizations not currently integrating analytics, such as a SIEM, into their
response should consider this a top priority to mature their SOC and IR processes.
What’s Not Working: Impediments to Response
Despite improvements in technology, IR processes and their analytics capabilities,
organizations still face obstacles that impede effective IR. Leading the list is staffing
and skills shortages, flagged by 66% of survey respondents as one of the top obstacles
to effective IR. The third-ranked problem is lack of visibility, indicating that despite the automation and integration respondents are attempting, they still do not have the full-picture view across systems they need for fast, accurate response. See Figure 7.
[Chart: top 10 impediments to effective IR: staffing and skills shortage; budgetary shortages for tools and technology; not enough visibility into events happening across different systems or domains; lack of procedural reviews and practice; inability to distinguish malicious events versus nonevents; organizational silos between IR and other groups or between data sources or tasks; too much time to detect and remediate; lack of comprehensive automated tools available to investigate new technologies, such as BYOD, IoT and cloud-based IT; integration issues with other security and monitoring tools; difficulties in detecting sophisticated attackers and removing their traces.]
Figure 7. Impediments to Investigations
This lack of visibility makes it difficult for 37% of respondents to distinguish between real malicious events and nonevents. Lack of budget for tools and technology, cited by 54% of respondents, only compounds the visibility problem, and staffing issues account for the lack of procedural reviews and practice (41%).
Recruiting and Retention
Let's first tackle the people issue: In many instances, understaffing stems not from a lack of funding but from a lack of skilled professionals available to fill open positions. In a 2014 survey conducted by Enterprise Strategy Group (ESG), 28% of organizations reported a "problematic shortage" of IT security skills.12 One way to widen the recruiting pool is to consider filling positions with remote workers.
According to the SANS 2015 IR survey, 73% of organizations use a dedicated team, 70% draw team members from internal staff assigned to other functions, and 32% draw from third-party services. For surge team augmentation, 61% used a dedicated internal surge team in both 2014 and 2015, 63% draw additional surge staff from internal resources, and 28% (27% in 2014) use outsourced services. See Figure 8.
[Chart: core versus surge staffing: dedicated internal IR team; drawn from other internal staff (security group, operational/administrative IT resources); outsourced services (e.g., managed security services provider, or MSSP) with dedicated IR services (alerts, response); other. Responses broken out by core team versus surge.]
Figure 8. Resources Used in Incidents
Location is also important to staffing. According to the U.S. Bureau of Labor Statistics,
the highest concentration of information security professionals is in the Washington,
D.C. metropolitan area: 9,070 InfoSec workers were identified there, whereas only 430 were located in Albuquerque, New Mexico, the metropolitan area with the lowest concentration.13 Companies looking to hire skilled technical professionals outside of major metropolitan areas can either entice a potential employee to relocate or build a remote IR team. The second option is becoming more feasible because infrastructure to support telecommuting is now commonplace in most organizations. One admitted obstacle to employing remote workers is the difference in cost of living, and therefore in salary requirements, across areas.
12 www.esg-global.com/blogs/new-research-data-indicates-that-cybersecurity-skills-shortage-to-be-a-big-problem-in-2015
13 www.bls.gov/oes/current/oes151122.htm
Diversity of Investigations
More platforms are involved in today's investigations, driving the need for more specialized skills. In-house IR capabilities, for example, support more virtualized and cloud-based systems than they did last year. Last year, data center servers hosted in the public cloud (e.g., Azure or Amazon EC2) were investigated in-house by only 37% of our respondents, compared with 61% in 2015. Another notable change involves employee-owned systems: Last year, only 58% of respondents investigated employee-owned equipment, whereas 69% do this year. This reflects the growing prevalence of employees bringing their own devices, whether laptops, tablets or smartphones, and connecting them to the organization's network resources. See Figure 9.
What business processes and systems are involved in your investigations? Check only those that apply. Please indicate whether your capabilities for these investigations exist in-house, are outsourced or both.
[Chart: corporate-owned laptops, smartphones, tablets and other mobile devices; data center servers hosted locally; business applications and services (e.g., email, file sharing) in the cloud; internal network (on-premises) devices and systems; web applications; embedded or non-PC devices, such as media and entertainment boxes, printers, smart cars, connected control systems, etc.; corporate-owned social media accounts; employee-owned computers, laptops, tablets and smartphones (BYOD); data center servers hosted in the public cloud (e.g., Azure or Amazon EC2); employee social media accounts; third-party social media accounts or platforms. Responses: in-house, outsourced, both or other.]
Figure 9. Investigated Media, Platforms and Apps
We included a new category of “employee social media accounts” as an area of possible
investigation for IR teams because this medium is being used effectively by sophisticated
attackers for targeted reconnaissance. Just 59% of respondents include this element in their in-house investigations.
Visibility
How do you achieve visibility across these systems for a full-picture view that separates actual events in progress from nonevents? This is not the only SANS survey to rank lack of visibility among the top three inhibitors of effective detection and response. As we showed earlier, respondents are integrating across some platforms and using SIEM to analyze the data.
More than 64% of respondents identified the need for better security analytics and
correlation across affected systems, making it the top target area for improvement. This
is an important milestone because respondents can acknowledge weaknesses and point
to reasons why detection is failing. See Figure 10.
TAKEAWAY:
Focus on key areas to achieve integration of security information into automated policy where possible, and reduce reliance on specialized workers to "catch things" and seek out infected systems manually.
What improvements in IR is your organization planning to make in the next 12 months? Select all that apply.
[Chart: better security analytics and correlation across event types and impacted systems; additional training/certification of staff; improved visibility into threats and associated vulnerabilities as they apply to the environment; more automated reporting and analysis through security information and event management (SIEM) integration; better response time; more integrated threat intelligence feeds to aid in early detection; full automation of detection, remediation and follow-up workflows; other.]
Figure 10. Planned Improvements in Next 12 Months
Additional training and certification will be big next year, with 57% of respondents planning to add training and certification for their IR staff. This is a recurring theme in this year's survey results, with staffing and skills shortages ranked as one of the top five impediments to effective IR by 66% of respondents.
TAKEAWAY:
Organizations that take the initiative to "grow their own" in-house skills will increase the efficiency of their IR process and improve the effectiveness of their security implementations and technology.
The other top targeted areas for improvement include improved visibility into threats and vulnerabilities, as well as more automated reporting and analysis via SIEM integration. Many of these areas of improvement are symbiotic, with one depending on improvement in another to truly benefit an organization. Clearly, an improvement in visibility (gaining more insight into endpoint and network traffic) will result in more collected data, which in turn will require automated analysis.
Conclusion
Although automation was the most commonly cited area for future IR improvement in
last year’s survey, only a little progress has been made in increasing visibility through
automation of endpoint and network data collection and analytics, or remediation.
This continues to be a key factor in improving IR process efficiency. As the amount of
data collected from endpoints and network traffic grows, teams must move toward
automation to conduct analysis and data correlation with the goal of shortening the
time needed to detect and remediate incidents.
Our survey results also suggest the need for more specialized IR skills. Reducing false positive alerts and baselining endpoint and network traffic to better detect anomalies will give understaffed teams more actionable alerts. The shortage of skilled technical
staff may not have an immediate solution, but organizations can maximize the actions
of existing IR team members by moving to automated detection and remediation
processes.
Data destruction and denial-of-service attacks have received extensive media coverage recently, and the responses from our survey participants substantiate the growing frequency of such adversary tactics. IR teams, frequently overworked and charged
with constantly putting out fires, rarely have time to craft a new playbook for attacks
requiring different IR processes and containment procedures. Current trends, as seen in
the Sony and Las Vegas Sands Casino attacks, foreshadow what today’s IR teams will be
faced with in future attacks. Anticipate, plan, test and validate response procedures for
the worst attacks—because, inevitably, they are coming.
About the Authoring Team
Alissa Torres is a SANS analyst and certified SANS instructor specializing in advanced computer
forensics and incident response (IR). She has extensive experience in information security in the
government, academic and corporate environments. Alissa has served as an incident handler and
as a digital forensic investigator on an internal security team. She has taught at the Defense Cyber
Investigations Training Academy (DCITA), delivering IR and network basics to security professionals
entering the forensics community. A GIAC Certified Forensic Analyst (GCFA), Alissa holds the GCFE,
GPEN, CISSP, EnCE, CFCE, MCT and CTT+ certifications.
Jake Williams is a SANS analyst, certified SANS instructor, course author and designer of several
NetWars challenges for use in SANS’ popular, “gamified” information security training suite. Jake
spent more than a decade in information security roles at several government agencies, developing
specialties in offensive forensics, malware development and digital counterespionage. Jake is the
founder of Rendition InfoSec, which provides penetration testing, digital forensics and incident
response, expertise in cloud data exfiltration, and the tools and guidance to secure client data against
sophisticated, persistent attack on-premises and in the cloud.
Sponsors
SANS would like to thank this survey's sponsors: AlienVault, Arbor Networks, Bit9 + Carbon Black, Hewlett-Packard, McAfee/Intel Security and Rapid7.