The Case for Visibility: SANS 2nd Annual Survey on
the State of Endpoint Risk and Security
Read the results of the 2015 Endpoint Security Survey to find out whether organizations assume risk, whether
their perimeter defenses protect their endpoints, how much progress we are making on automation, how long it
takes to remediate each compromised endpoint, and much more.
A SANS Survey
Written by Jacob Williams
March 2015
Sponsored by
Guidance Software
©2015 SANS™ Institute
Executive Summary

The year 2014 was full of interesting breaches, nearly all of them involving the endpoint. Sony found out the hard way that you can’t protect your data if you don’t know where it is, but the company definitely isn’t alone.

Survey Trends

• Assumption of breach. In this second SANS Endpoint Security Survey, 56% of the 1,827 IT professionals who took the survey assume that they have been breached. This number is up from 47% in the 2014 survey.1

• Perimeter detection doesn’t protect endpoints. Last year, we noted that attackers were bypassing perimeter detection methods with relative ease. That continues this year, highlighting the need for detection at the endpoint. This year, 55% of respondents say that up to 30% of their incidents should have been detected by perimeter security measures but weren’t.

• Automation has not changed significantly. The levels of incident response (IR) automation reported by respondents have not changed significantly from 2014, and organizations are suffering from low visibility into their endpoints and indicators of attack.

• The majority of organizations spend three or more hours per compromised host on incident response. When many hosts are compromised in a single incident (as is often the case), any per-host time savings are much more significant.

• Kerberos vulnerability affects results. Although 63% of respondents cite Windows endpoints as key sources of concern, web servers and domain controllers follow with 54% and 47%, respectively. The high rankings for Windows endpoints and domain controllers may be a result of MS14-068,2 a critical Kerberos vulnerability that undermined the security of a Windows domain.

• Domain controllers are being given additional security consideration. This year, significantly more respondents (47% versus 36%) cited domain controllers as a source of concern.

This year, the majority of respondents said they assume some compromise will occur in their organizations. Despite this, few are able to achieve proactive threat response. Proactive measures include baselining and awareness of endpoint posture that can be used to detect anomalies through monitoring. Based on the low number of threats detected via proactive monitoring, it appears that the majority are not performing this function. These capabilities still need to mature in most organizations, according to the results.

Although they acknowledge the rising risks to endpoints, only 30% of respondents to the second SANS Endpoint Security Survey say their organizations scan endpoints for regulated and/or sensitive data. Understanding where sensitive data is located and who has access to it is a critical part of the larger baselining process. Baselining to understand what normal operation looks like on endpoints allows responders to find anomalies in user activity and data compliance policies and to construct a plan to defend the network from the inside out.

Survey results also show that automation is not increasing. Automation is critical in terms of knowing which endpoint assets and their sensitive data would be targeted and then correlating interconnected events that might indicate a compromise. In this survey, 23% of respondents failed to identify their own vulnerabilities and compromises. Instead, they were notified of a compromise by a third party.

These and other issues are addressed in the body of the report.

1 “The Case for Endpoint Visibility,” www.sans.org/reading-room/whitepapers/analyst/case-endpoint-visibility-34650
2 https://technet.microsoft.com/en-us/library/security/ms14-068.aspx
Survey Participants
Of the 1,827 respondents who passed survey requirements and took the survey, 238 also participated in last year’s survey. Respondents came from across industry verticals, the largest of which were finance, government and technology (see Figure 1).
What is your organization’s primary industry?

Figure 1. Top Industries Represented (Responses included Financial services/Banking/Insurance, Government, High tech, Education, Health care/Pharmaceutical, Energy/Utilities, Manufacturing, Telecommunications, Retail/E-commerce and Other.)
Participants also perform a variety of roles in their organizations. The most common
role in participants’ organizations was security analyst (33%), so practitioners are well
represented in this survey. Another 29% identified themselves as security managers
or CISOs (16%) and IT managers or CIOs (13%). Less than 10% represented network
administrators, engineers or incident responders.
Respondents came from organizations of varying sizes, with 41% having more than
5,000 employees, as shown in Figure 2.
Figure 2. Organizational Size and Regional Representation (“How many people work at your organization, either as employees or consultants?” Responses: Fewer than 100 employees, 100–1,000, 1,001–5,000, 5,001–10,000, 10,001–15,000, More than 15,000. “In what countries or regions does your organization do business? Select all that apply.” Responses included United States, Europe, Asia, Canada, South America, Middle East, Australia/New Zealand and Africa.)
The organizations they represent are mostly headquartered in the United States, but
they have global reach. Respondents could select all areas in which their organizations
do business. Although 82% chose the United States, 35% of respondents said that their
organization does business in Europe, 32% in Asia, while more than 20% of respondents
also reported having business in each of Canada, South America and/or the Middle East.
This mixture of roles, organizational size, type and region provides a broad look at security and risk across sectors, offering useful information about risk and best practices across endpoints.
Start by Assuming the Worst
Assumption of compromise is an important distinction in operational thinking because
those who assume that their endpoints are compromised view alerts differently
from those who assume the endpoints are clean. In the 2014 survey, only 47% of
respondents operated under the assumption that at least some of their endpoints
were compromised, but this year that number rose to 56% (Figure 3). Also, a significant
number of “Other” responses indicated that many respondents in this category believe
at least some of their endpoints are compromised.
Are you operating under the assumption that at least some of your systems are currently compromised? (Yes / No / Other)

TAKEAWAY: Operate under the assumption that some of your endpoints have been breached—or they will be.

Figure 3. Operating Under the Assumption of Compromise
Such an increase in affirmative responses suggests a shift in institutional thinking, as
evidenced by the 59% of 2014 respondents who took this year’s survey who now assume
at least some of their systems have been compromised. The majority are no longer in
denial. They know that operating under the assumption of compromise is most prudent.
Whether organizations discover threats proactively (actively interrogate endpoints)
or react to an alert is closely aligned with whether organizations operate under the
assumption of compromise. Organizations assuming that none of their endpoints may
be compromised are unlikely to invest in proactive scanning and vulnerability hunting
programs, which are key in controlling risk on endpoints and recommended in the
Critical Security Controls3 guidelines.
In every demographic (except companies with fewer than 100 employees), more
respondents assume compromise than not. This difference is probably due to small
organizations having a less mature IR staff and lack of awareness of IT-related risk.
However, it could be due to smaller organizations having a more manageable attack
surface relative to staffing levels. Many smaller organizations tell security auditors they
have no data worth stealing and therefore don’t consider themselves a target.
3 www.counciloncybersecurity.org/critical-controls
Confusion Around Discovery
The biggest change in responses since last year’s survey was the number of respondents
who said they didn’t know the percentage of threats their organization had discovered
through proactive discovery. This year, 34% of respondents (as opposed to 16% last
year) didn’t know or didn’t proactively hunt for threats on the network. This response
leads us to conclude that many are simply not proactively hunting for threats on the
network—a very risky approach. Figure 4 provides an overview of the frequency with
which respondents proactively discover threats.
What percentage of your threats are detected through proactive discovery (actively interrogating endpoints) versus reactive (responding to alerts)? (Response buckets ranged from Unknown and 1–10% up to 91–100%.)

TAKEAWAY: Gear up to proactively hunt for threats. Specialized tools and trained staff are required for successful hunting. Speed in IR is critical, and hunting allows you to get a jump on the attackers.

Figure 4. Proactive Discovery or Reacting to Alerts?
Proactive hunting begins with knowing what normal looks like in a given environment.
This can only be accomplished through baselining endpoints and network devices. Once
you have established a baseline, it is relatively easy, with the right tools, to note a change
from the baseline. But without a baseline, even the best analysts, armed with best-in-breed tools, struggle to find intrusions in the environment.
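As a minimal sketch of that idea, a baseline snapshot can be diffed against a current snapshot to surface new items. The categories and field names below (“processes,” “listening_ports”) are illustrative assumptions, not the data model of any particular product:

```python
# Illustrative sketch: flag endpoint attributes that deviate from a baseline.
# Field names and snapshot format are hypothetical.

def find_anomalies(baseline, current):
    """Return items present in the current snapshot but absent from the baseline."""
    anomalies = {}
    for category, items in current.items():
        new_items = set(items) - set(baseline.get(category, []))
        if new_items:
            anomalies[category] = sorted(new_items)
    return anomalies

baseline = {
    "processes": ["explorer.exe", "svchost.exe", "lsass.exe"],
    "listening_ports": [135, 445, 3389],
}
snapshot = {
    "processes": ["explorer.exe", "svchost.exe", "lsass.exe", "rar.exe"],
    "listening_ports": [135, 445, 3389, 4444],
}

print(find_anomalies(baseline, snapshot))
# {'processes': ['rar.exe'], 'listening_ports': [4444]}
```

The comparison itself is trivial once a trustworthy baseline exists; the hard work, as the survey results suggest, is collecting and maintaining that baseline across the fleet.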
Of respondents who were hunting for threats and could quantify the percentage of
threats discovered, only 15% found half or more of their threats this way. This may signal
to some that hunting is less effective for discovering threats. But, as Intel notes, using
dedicated hunt teams to find threats is “effective in stopping some of the most grievous
threats.”4 Rather, it seems that many IR teams lack the skills and specialized tools required
to hunt effectively. Even when teams know what they are looking for, speed is always
an issue. Using purpose-built tools to hunt for threats is more efficient and scales better
than manual collection and analysis of threat data. However, if tools are functional but
slow, analysts may get an alert from another sensor before the hunt even begins.
4 https://communities.intel.com/community/itpeernetwork/blog/2012/11/28/cyber-security-hunter-teams-are-the-next-advancement-in-network-defense
Perimeter Security Failures
This year, the lack of visibility seems to be getting worse as a result of over-reliance on
perimeter detection, with 25% of respondents (as compared to 21% of respondents
last year) answering that they don’t know what threats should or should not have been
blocked at their firewalls/routers/unified threat management (UTM) and other edge
detection. See Figure 5.
What percentage of incidents in your organization over the last 24 months were the result of threats that should have been blocked by a perimeter security device (e.g., firewall or UTM)? (Response buckets ranged from 1–10% up to 91–100%, plus Don’t know.)

Figure 5. Should threats have been blocked?
Furthermore, results from this year show only a 1% improvement in ability to block
attacks at the perimeter, with 20% of respondents to this year’s survey saying that
31% or more of their incidents are getting past perimeter defenses (compared to 21%
of respondents in 2014). But the message is still clear: Attackers bypass perimeter
protection with ease. Are attackers finding new techniques to bypass detection? Or
are our participants simply more aware of perimeter detection’s failings? The answer
is probably a little of both. But no matter the case, the high percentage of threats that
should have been blocked at the perimeter indicates that relying on perimeter detection
alone is a fool’s errand.
Organizations need endpoint management and monitoring technologies to provide a
complete picture of vulnerabilities (so they can reduce their attack surfaces) and attacks
in progress (so they can respond accurately and quickly). These survey results reinforce
the requirement to baseline systems discussed earlier. With perimeter detections failing,
the focus must shift to an inside out approach to security. Concentrate monitoring at
your endpoints—they are the typical target of an intrusion.
Don’t Blame the APT
There’s a strong desire by companies that have been compromised to blame their woes
on advanced persistent threat (APT) actors.5 Incident responders regularly hear quotes
such as, “If the attacker was APT, there’s nothing we could have done to prevent it,” or
“APT attackers won’t stop until they get in ... .”
Similar to last year, 28% of respondents could not gauge the skill of their attackers. However, 39% of respondents report that less than 10% of their adversaries were advanced or used stealthy, advanced exploit and hiding techniques, as illustrated in Figure 6.

What percentage of threats that initially evaded perimeter detection would you categorize as advanced adversaries using stealth techniques? (Response buckets ranged from 1–10% up to 91–100%, plus Don’t know.)

TAKEAWAY: Because perimeter protections are easily defeated and APTs are not the cause of most breaches, implement additional layers of security starting inside at the endpoint and looking out to enhance early detection of threats and attacks.

Figure 6. Was it APT?
We have already established that attackers easily bypass perimeter detection. This
additional information implies that they do so without having to even deploy stealth
techniques. Because results were similar two years in a row, we can no longer consider
the result an anomaly. Perimeter protection is easily bypassed even without the use
of APT techniques. As a result, endpoint detection methods are definitely needed as a
deeper layer of defense against all forms of attacks—advanced or otherwise.
5 http://recode.net/2014/12/07/sony-describes-hack-attack-as-unprecedented
Detecting Endpoint Threats
With such a high rate of perimeter security failures reported by respondents, endpoints
must be monitored, taking into account available vulnerability and intelligence data.
Some endpoints represent more risk than others, according to respondents; and all
organizations reported a variety of endpoint types and their associated risk levels.
Respondents rated Windows endpoints as their greatest risk and security concerns,
with 63% choosing this option, up from 55% in 2014. An additional 54% of respondents
chose web servers. Figure 7 lists the endpoints of greatest interest.
What endpoints are of most concern to you from a risk and security perspective? Select all that apply. (Responses included Windows endpoints, web servers, domain controllers, workstations, mail servers, transaction processing (payment) servers, storage appliances (SAN/NAS), cloud servers (e.g., EC2 or Azure), Linux endpoints, Mac endpoints and Other.)

Figure 7. Endpoints of Concern
The concern over Windows endpoints is well founded, given that the OS accounts for
over 90% of desktop market share.6 Even among servers, Windows is the OS found most
commonly in data centers, as revealed in the SANS 2014 Data Center Security Survey.7
Respondents may have noted the high number of critical vulnerabilities affecting
Windows—four announced in November alone.8 But an additional critical out-of-band
patch was also released in November, while the survey was live. Sensational media
headlines such as “Unicorn bug” contribute to an awareness of the otherwise dull topic
of vulnerability management.9 Because Windows endpoints account for such a large
percentage of endpoints, they are ripe targets for attackers. Yet some organizations
fail to discriminate between critical and normal vulnerabilities in their patching
timelines. For organizations that cannot meet aggressive patching timelines for critical
vulnerabilities, endpoint visibility and monitoring take on new importance.
6 www.netmarketshare.com/operating-system-market-share.aspx?qprid=10&qpcustomd=0
7 www.sans.org/reading-room/whitepapers/analyst/data-center-server-security-survey-2014-35567
8 www.networkworld.com/article/2846204/microsoft-subnet/nov-2014-patch-tuesday-microsoft-released-4-critical-fixes-14-total-updates.html
9 www.dailymail.co.uk/sciencetech/article-2832157/Unicorn-bug-Microsoft-s-Windows-1985.html
Respondents concerned about their domain controllers rose to 47% this year, up from
36% in 2014. The sharp rise in the number of respondents concerned about their domain
controllers may be due to the timing of the survey. Many participants were taking the
survey after learning about MS14-068, a critical Kerberos vulnerability that undermined
the security of an entire Windows domain.
Concern over cloud servers (such as EC2) rose to 20% (up 5% from the 2014 survey), probably a sign that cloud adoption rates among participants are also increasing. Cloud adoption tends to be particularly high in smaller companies. As Rackspace notes, companies with fewer than 20 employees are much more likely to use the cloud than those with more than 500 employees.10 Respondents from companies with fewer than 100 employees represent 13% of responses. Two new categories this year, point-of-sale (PoS) systems and storage appliances (SAN/NAS), both garnered relatively high concern, 24% and 23%, respectively.
The interest in storage systems was most likely fueled by the emergence of the
SynoLocker Trojan11 in 2014. This malware was a game changer in that it infected
the firmware on the storage unit directly. Many clients have also expressed interest
in monitoring their storage systems as a result of CryptoLocker12 attacks. While
CryptoLocker does not target storage systems directly, some variants encrypt files on
file shares mounted by infected computers. These may be detected by abnormally high
numbers of write operations, particularly if coming from a workstation.
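That write-rate heuristic can be sketched in a few lines: count write-audit events per source host over a monitoring window and flag outliers. The event format and threshold below are assumptions for illustration, not survey findings or a vendor implementation:

```python
# Hedged sketch: flag hosts writing to a file share at an abnormally high
# rate, a possible CryptoLocker-style indicator. Threshold is illustrative.
from collections import Counter

def flag_noisy_writers(write_events, writes_per_window_limit=2000):
    """write_events: iterable of (source_host, path) write-audit records
    observed during one monitoring window."""
    counts = Counter(host for host, _path in write_events)
    return sorted(host for host, n in counts.items() if n > writes_per_window_limit)

# One workstation rewriting thousands of documents in a window stands out
# against a host doing ordinary work on a single file.
events = [("WKS-042", f"/share/docs/file{i}.docx") for i in range(2500)]
events += [("WKS-007", "/share/docs/report.docx")] * 50
print(flag_noisy_writers(events))  # ['WKS-042']
```

In practice the threshold would be derived from the baseline discussed earlier rather than hard-coded, and the events would come from file-server audit logs.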
10 www.rackspace.com/blog/infographic-the-state-of-smb-cloud-adoption-in-2014
11 www.theregister.co.uk/2014/08/14/synolocker_trojan_closing_down_sale
12 http://en.wikipedia.org/wiki/CryptoLocker
SIEM Seen as Critical
Management often confuses “Where do we need visibility?” with “What gadgets
should we buy?” The latter may help solve the visibility problem, but the former
actually defines the problem. Without properly defining the problem, organizations
may purchase numerous security controls that do not address their needs (or at least
not their prioritized needs). Some organizations also fail to fully capitalize on their
valuable investments in security tools. They simply may not understand the analytic and
visualization capabilities already at their disposal.
Not surprisingly, given the range of events that they log, security information and event
management (SIEM) systems had the highest number of responses (28%) for where to
manage endpoints and related threat data. However, it is rather disturbing to see the
number so low. See Figure 8.
What systems do you need the most visibility from when detecting threats? (Responses included SIEM, IDS/IPS, firewalls, endpoints (workstations), log management, endpoints (servers), network packets (full packet capture), network flow data (aggregate statistics about flows), network devices (switches, routers), UTM output and Other.)

Figure 8. Visibility Needs
The second and third most popular responses were IDS/IPS, selected by 16% of respondents, and firewalls, selected by 12%. This supports the statement that perimeter security needs to provide
better visibility to responders and investigators. IDS/IPS and firewall data could feed into
the SIEM, where other SANS surveys indicate most security data is being collected and
analyzed.13 However, when you look at the endpoint types collectively, nearly 20% of
respondents want visibility into the endpoint servers and workstations themselves.
Servers and workstations hold the keys to the kingdom: the intellectual property, trade
secrets and regulated data that ensure the operation of so many companies. So, what
we’re seeing is that organizations will not be giving up on their perimeter security
anytime soon (nor should they, because perimeter security is catching at least 70% of
attacks, according to results). They also show that organizations are tapping into their
endpoints themselves, and that the majority of respondents use their SIEM platforms to
manage their security of endpoints and response to endpoint threats.
Attackers normally establish some foothold first on a user workstation and then use that
position to compromise servers. If responders can identify the compromised workstation
before the attacker is able to pivot, they are able to prevent the server compromise in
the first place (reducing the concern for server visibility). While this makes some logical
sense, servers represent a smaller search space, so including both workstations and
servers in risk management and response programs makes the best sense.
13 www.sans.org/reading-room/whitepapers/analyst/cyberthreat-intelligence-how-35767
Endpoint Data Collection
SANS asked survey participants what data they collect from endpoints. The results are
shuffled a little from last year, but not significantly so. Responses, whether participants
realize it or not, are closely aligned with the Critical Security Controls (CSC).14 The top
response is user logins, with 66%, which aligns with CSC 16 (see Figure 9).
Which of the following do you collect from endpoints for the purposes of correlation in the detection of threats? (Responses included user logins; installed software and OS versions; running processes; listening network ports; disk-based artifacts (known malware file name); registry-based artifacts (known malware autorun key); unauthorized network interfaces (configured VPN interfaces); sensitive data (PHI, company proprietary) on endpoints where it is not authorized; browser history artifacts; DNS cache entries; route tables; ARP cache entries; and Other.)

Figure 9. Endpoint Data Collection
Next most popular are installed software and OS versions, and running processes, with
56% and 51%, respectively. This data aligns with CSC 2. The fourth most popular endpoint
data to collect, chosen by 48%, is listening network ports, which aligns with CSC 11.
Key Endpoint Data Collection CSCs
Endpoint data collection helps organizations meet the requirements of the
Critical Security Controls. Endpoint monitoring includes enumerating installed
software, listening network ports, installed services, logon accounts and account
group membership. These data items (all collected from the endpoint) can be
used to meet minimum visibility standards for the following three CSCs:
CSC 2: Inventory of Authorized and Unauthorized Software
CSC 11: Limitation and Control of Network Ports, Protocols and Services
CSC 16: Account Monitoring
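As one hedged example, the listening-port piece of that inventory (CSC 11) could be collected by parsing netstat-style output on the endpoint. The sample text below is fabricated, and real agents typically query the OS directly rather than scraping command output:

```python
# Minimal sketch: enumerate listening TCP ports from `netstat -an`-style
# output for CSC 11-style monitoring. Sample output is fabricated.
def listening_ports(netstat_output):
    ports = set()
    for line in netstat_output.splitlines():
        fields = line.split()
        if len(fields) >= 4 and fields[0] == "TCP" and fields[-1] == "LISTENING":
            local_addr = fields[1]                      # e.g. "0.0.0.0:445"
            ports.add(int(local_addr.rsplit(":", 1)[1]))
    return sorted(ports)

sample = """\
TCP    0.0.0.0:135      0.0.0.0:0       LISTENING
TCP    0.0.0.0:445      0.0.0.0:0       LISTENING
TCP    10.0.0.5:49810   52.10.20.30:443 ESTABLISHED
"""
print(listening_ports(sample))  # [135, 445]
```

Comparing each host’s listening-port list against the software and service inventory is what turns this raw collection into the CSC 11 control.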
14 www.counciloncybersecurity.org/critical-controls
It is troubling to note that only 30% of organizations collect data to discover where sensitive and regulated data is stored, processed or otherwise accessed. Endpoint data collection could discover unauthorized endpoints that process, store or access regulated data before the breach leads to an incident. In practice, not all endpoints need equal protection from threats. But those endpoints that store sensitive data should have additional security controls. To effectively protect sensitive data, organizations must first understand where it is stored.
Two host artifacts that continue to be undercollected are DNS cache entries and browser history artifacts, chosen by 24% and 29%, respectively. Anecdotally, security
professionals often mention that they intentionally do not collect DNS cache entries
because they believe all the data is also present in the local DNS server logs. This is true if
network egress firewalls are properly configured. However, monitoring in depth is just as
important to detecting attacks as defense in depth is to preventing them.
Browser history is an often-overlooked technique for malware detection because those
URLs requested using the WinINet programming interface (common in malware) end up
in the user’s index.dat cache file. This may provide detection for malware that is not
obvious through other techniques. Organizations that lack the capability to collect and
readily analyze browser history and DNS cache data should investigate whether adding
such capability would enhance their IR capabilities.
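As a hedged example of what a minimal DNS-cache capability might look like, cached record names (e.g., from Windows `ipconfig /displaydns` output) can be extracted and matched against a threat-intelligence domain list. The sample output is abbreviated and the IOC list is fabricated:

```python
# Sketch: pull record names from `ipconfig /displaydns`-style output and
# intersect them with an IOC domain list. Sample text and IOCs are fabricated.
def cached_domains(displaydns_output):
    domains = set()
    for line in displaydns_output.splitlines():
        line = line.strip()
        if line.startswith("Record Name"):
            # "Record Name . . . . . : host.example.com" -> "host.example.com"
            domains.add(line.split(":", 1)[1].strip().lower())
    return domains

ioc_domains = {"evil-c2.example.com"}
sample = """\
    Record Name . . . . . : evil-c2.example.com
    Record Type . . . . . : 1
    Record Name . . . . . : www.sans.org
"""
print(sorted(cached_domains(sample) & ioc_domains))  # ['evil-c2.example.com']
```

A hit here shows the endpoint resolved a known-bad domain even if the egress request never reached the local DNS server logs.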
Threat Detection
Detection at the endpoint through host-based antivirus/intrusion prevention (AV/HIPS) dropped from last year’s 62% to 50% this year. But considered in conjunction with endpoint detection—the third most selected category of detection at 44%—a pattern of endpoint-related security becomes clearer. This is a good sign because, as we’ve shown, perimeter firewall alerts, used by 38% this year, saw a drop of almost 5% in use to detect events over the last year. See Figure 10.
How did you detect that these threats had compromised your organization? Select all that apply. (Responses included alert from AV/HIPS; network IDS alert; endpoint management system alert; perimeter firewall alert; automated SIEM alerts; third-party notification; searching through SIEM/correlation; automated alerts from logging system; analysis of network flow data; manual review of endpoint logs; hunting for compromised endpoints via IOCs learned from threat intelligence; file integrity monitoring; analysis of raw packet capture data; UTM alert; application whitelisting; and Other.)

Figure 10. Threat Detection Tools
But it’s not all bad news. Only 20% of respondents this year reported detecting threats
through manual endpoint log review. That number is down from 38% last year. We like
to see that number trending downward, because it suggests that more automated
detection is replacing manual log review. Another possible explanation is that increased
automation and server sprawl (more logs) are making manual log review impractical.
Perhaps attackers aren’t leaving logs at all (unlikely) or the state of play has changed
and defenders are being alerted by other tools before compromises are discovered via
manual review. We believe the latter accounts for the majority of the change.
Satisfaction Ratings
Even though respondents indicate large failures in their perimeter security, they are still
the most satisfied with perimeter firewall/IDS/IPS technology, as indicated in Table 1.
Table 1. Satisfaction with Detection Technologies

| Answer Options | Very Satisfied | Satisfied | Very Satisfied & Satisfied | Not Satisfied | Response Count |
| Perimeter firewall/IDS/IPS | 21.9% | 58.0% | 79.9% | 17.2% | 97.2% |
| Web proxy | 17.2% | 46.9% | 64.1% | 17.4% | 81.5% |
| Logs and log managers | 13.5% | 46.2% | 59.7% | 26.0% | 85.7% |
| Host-based firewall | 12.5% | 46.9% | 59.4% | 23.3% | 82.7% |
| SIEM | 16.1% | 42.8% | 58.8% | 21.7% | 80.6% |
| Endpoint detection and response (EDR) | 11.8% | 44.1% | 55.9% | 19.9% | 75.8% |
| UTM gateway | 10.6% | 44.6% | 55.2% | 17.3% | 72.5% |
| Network flow monitoring | 12.8% | 42.0% | 54.9% | 21.1% | 76.0% |
| Continuous monitoring | 11.2% | 41.6% | 52.8% | 22.0% | 74.7% |
| Network access controls (NAC) | 11.9% | 40.3% | 52.2% | 22.4% | 74.6% |
| Data loss prevention (DLP) | 10.7% | 39.2% | 49.9% | 23.1% | 73.1% |
| File integrity monitoring | 10.5% | 39.2% | 49.7% | 22.6% | 72.3% |
| Sandboxing/Shunting suspect network traffic for observation | 11.5% | 37.0% | 48.5% | 22.0% | 70.6% |
| Other | 3.9% | 25.5% | 29.4% | 10.6% | 39.9% |
Satisfaction in a product generally indicates that it is either effective, easy to use or both.
Analysts are unlikely to be “very satisfied” with a product that does not deliver on both
effectiveness and ease of use. When planning future purchases for security products,
organizations can benefit by understanding what others in the field think about the
products they currently have deployed. All things being equal, an organization should
choose to deploy a technology that others in the field say they are very satisfied with.
The highest satisfaction ratings were from perimeter firewalls, web proxies and log
managers. It is especially significant that two of the three highest satisfaction ratings for
detecting compromises in an endpoint survey are not endpoint technologies at all.
Host-based firewalls, which arguably should be very effective at detecting compromises, were fourth highest in satisfaction ratings. However, they were also second highest in dissatisfaction. One possible explanation is that capabilities vary widely between products. Another is that these technologies are relatively difficult to deploy.
The high dissatisfaction with data loss prevention (DLP) is not surprising, but is perhaps
undeserved. Every security professional knows that no security control is perfect (and
DLP is no exception). However, most DLP is designed to prevent inadvertent data loss, not loss to sophisticated attackers. Despite this, DLP has been effective in detecting attacks in many real-world scenarios. It has, however, traditionally been difficult to configure, resulting in a high false-positive rate for responders. Perhaps this also contributes to its relatively high dissatisfaction rate.
False Positives
Respondents have varying experiences with false positives: 32% report that
more than 30% of their alerts are false positives, while only 28% report
false positive rates below 10% (see Figure 11).
Upon analysis of potentially compromised endpoints, what percentage of your alerts is later found to consist of false positives? (Answer bins: 1–10%, 11–20%, 21–30%, 31–40%, 41–50%, 51–60%, 61–70%, 71–80%, 81–90%, 91–100%)

Figure 11. False Positive Rates
When more than 30% of alerts turn out to be false positives, analysts’ alert fatigue
is perpetuated. When analysts respond to an alert expecting it to be a false positive,
they are less likely to find evidence of compromise, even when such evidence exists.
Organizations should poll their own employees internally to determine their perceived
false positive rates and invest in technologies that lower the false positive rate by
applying intelligence and analytics to their endpoints as well as their security devices.
Unfortunately, automating these processes is the most difficult challenge of all for
organizations.
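For organizations that want to track this metric internally, the calculation reduces to a simple ratio over triaged alerts. The sketch below is illustrative only; the record layout and field names are hypothetical, and a real deployment would pull dispositions from a ticketing system or SIEM.

```python
# Sketch: compute an internal false positive rate from triage outcomes.
# Alert records and their fields are hypothetical, for illustration only.

def false_positive_rate(alerts):
    """Fraction of triaged alerts whose disposition was 'false_positive'."""
    triaged = [a for a in alerts if a.get("disposition") is not None]
    if not triaged:
        return 0.0
    fps = sum(1 for a in triaged if a["disposition"] == "false_positive")
    return fps / len(triaged)

alerts = [
    {"id": 1, "disposition": "false_positive"},
    {"id": 2, "disposition": "confirmed"},
    {"id": 3, "disposition": "false_positive"},
    {"id": 4, "disposition": None},  # still in the triage queue, not counted
]
rate = false_positive_rate(alerts)  # 2 of 3 triaged alerts were false positives
```

Measured over time, a ratio like this gives an organization the baseline the survey asks about, and a way to judge whether new analytics actually lower it.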
Challenges in Automation and Remediation
Automation is one of the great buzzwords in security today, and a key tenet of the CSCs.
This makes sense: Why do something manually that can be automated? But what’s the
reality of automation in IR and remediation?
In the context of endpoint defense and incident investigation, automation might
involve replacing manual log review with automatic log correlation. Manual endpoint
data collection could be replaced with automated collection and analysis of the same
data. In the context of incident remediation, automation might help responders rebuild
compromised systems with the touch of a button—or prevent them from having to
rebuild at all. If tools can definitively track the changes an attacker made to a system,
those changes can be automatically rolled back instead of requiring the system to be
rebuilt, taking automation to a whole new level.
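The change-tracking idea can be sketched as a baseline comparison: hash the filesystem before an incident, then diff against the current state to find what an attacker added or modified. This is a deliberately minimal illustration; real endpoint products also track registry keys, services, scheduled tasks and memory, and all names below are hypothetical.

```python
# Sketch: diff current file hashes against a pre-incident baseline to find
# attacker-modified or attacker-added files. Illustrative only.
import hashlib
from pathlib import Path

def hash_file(path):
    """SHA-256 of a file's contents, as used to populate a baseline."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def diff_against_baseline(baseline, current):
    """baseline/current: dicts of {path: sha256}. Returns changes to review."""
    added = {p for p in current if p not in baseline}
    modified = {p for p in current if p in baseline and current[p] != baseline[p]}
    removed = {p for p in baseline if p not in current}
    return {"added": added, "modified": modified, "removed": removed}

baseline = {"/bin/svc": "aaa", "/etc/conf": "bbb"}
current = {"/bin/svc": "aaa", "/etc/conf": "ccc", "/tmp/implant": "ddd"}
changes = diff_against_baseline(baseline, current)
```

The "added" and "modified" sets are exactly what a rollback-capable tool would need to reverse; the catch, as the survey data suggests, is that the baseline must exist before the compromise.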
Automating Response
Automation: Using tools to perform, without human intervention, tasks that humans previously performed manually.

In the prior sections, we’ve talked primarily about visibility in detection. Given that the
majority of organizations are operating under the assumption that their endpoints have
been breached, it is just as important to understand what they are expecting of their IR
teams, tools and capabilities. According to respondents, the acceptable delay between
requesting data and receiving it from all queried endpoints was virtually identical to
expectations uncovered in last year’s survey results: 83% of respondents want their data
in under an hour (Figure 12).
To provide maximum value in detecting and responding to
incidents, what is the acceptable delay between requesting data
and receiving it from all queried endpoints?
(Answer bins: 5 minutes or less; 30 minutes or less; 1 hour or less; 8 hours or less; 24 hours or less; Greater than 24 hours)
Figure 12. Time to Respond
Of those, 28% want that data in five minutes or less. Speed is essential in detecting and
limiting an outbreak, but just collecting data quickly isn’t enough. Analysts must be
able to make sense of that data. At the speed our respondents say they need the data,
analysis and collection systems must be integrated to create a full-picture view of events
in progress.
When it comes to event containment, time is money. As in our 2014 survey, the
most popular response continued to be a response time of one or two hours per
compromised endpoint, which was selected by 28% of respondents. See Figure 13.
On average, how many hours do you spend per compromised endpoint when responding to an incident? (Answer bins: 0–1, 1–2, 3–4, 5–6, 7–8, 9–16, 17–24, More than 24 hours)

Figure 13. Response Time per Compromised Host

TAKEAWAY: Evaluate how long you spend on remediating each compromised endpoint. Then consider what you can do to reduce it.
Last year, SANS put the high water mark at “more than 8 hours,” expecting few
respondents to select this option. However, 11% of respondents did so, forcing us to
ask, “How much more than 8 hours is acceptable?” This year, SANS placed the high
water mark at “more than 24 hours,” which received 6% of responses. Another 7% of
respondents reported spending 9 to 24 hours per endpoint. Organizations should
evaluate how they would answer this question. Most respondents (68%) report that they
spend four hours or less per compromised endpoint. If your organization is on the higher
end of the spectrum, what can you do to reduce this time?
One of the ways to reduce remediation time is to automate the collection and analysis of
compromised data, allowing for surgical removal of threats.
On the Horizon
But are respondents actually moving toward the automation they claim to desire? To
find out, we compared this year’s answers with those obtained in 2014. The results
suggest they are not. The most significant change between 2014 and 2015 was the
number of respondents automating 10% or less of their workflows. Even though 12
months have passed since the inaugural endpoint security survey, there has been almost
no movement in respondents’ 24-month automation projections, as shown in Figure 14.
Next 24 Months % Automation (2014 vs. 2015)
[Chart comparing 2014 and 2015 responses across automation bins: 0–10%, 11–20%, 21–30%, 31–40%, 41–50%, 51–60%, 61–70%, 71–80%, 81–90%, 91–100%]

Figure 14. Automation Projections Relatively Unchanged

TAKEAWAY: Automation is key in maximizing efficiency and avoiding burnout among analysts. Determine your organization’s obstacles to automating portions of the IR process and develop the means to address them.
The biggest change is in the 41–50% group, where we see a 2% jump in plans to
automate in 2015, which is offset by a reduction in the number of people who plan to
automate 51–60% of their IR.
The conclusion from the data is that respondents plan slightly more automation in their
IR workflows, but turning those plans into reality is proving difficult.
Vendors that serve the IR automation market should work with organizations to identify
the obstacles behind low adoption. In talking to digital forensics and incident response
professionals, we hear that the reasons for low adoption of automation are often high
price, poor user interfaces and complicated administration that requires special skills. As
an example of poor usability, one respondent noted that the organization’s tools were
unable to correlate an alert with a specific VPN user. This activity should not only be
trivially easy, it should be completely automated.
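Correlating an alert with a VPN user is, at its core, an interval join on IP address and time. The sketch below shows the idea under hypothetical session records; real VPN logs (RADIUS accounting, concentrator syslog) vary by product and need parsing first.

```python
# Sketch: map an alert's internal IP and timestamp to the VPN session that
# held that IP at that moment. Session records here are hypothetical.
from datetime import datetime

sessions = [
    {"user": "asmith", "ip": "10.8.0.12",
     "start": datetime(2015, 3, 1, 8, 0), "end": datetime(2015, 3, 1, 17, 0)},
    {"user": "bjones", "ip": "10.8.0.12",  # same pool IP, later lease
     "start": datetime(2015, 3, 1, 18, 0), "end": datetime(2015, 3, 1, 23, 0)},
]

def vpn_user_for_alert(sessions, alert_ip, alert_time):
    """Return the username whose session covered alert_ip at alert_time."""
    for s in sessions:
        if s["ip"] == alert_ip and s["start"] <= alert_time <= s["end"]:
            return s["user"]
    return None  # no session held that IP at that time

user = vpn_user_for_alert(sessions, "10.8.0.12", datetime(2015, 3, 1, 19, 30))
```

Note that the time window matters precisely because VPN pool addresses are reused: matching on IP alone would blame the wrong user.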
Where to Automate
According to this year’s survey, indicators of compromise need to be integrated into
the remediation workflow. In it, 37% of respondents felt that they needed the
most help hunting for compromised endpoints without using known indicators of
compromise (IOCs), making this the area most in need of automation. See Figure 15.
Which activity in your detection/remediation workflow is most in need of automation? Select the most appropriate.
• Hunting for compromised endpoints without known IOCs (anomaly detection)
• Confirming the compromise of a possibly compromised endpoint
• Determining whether a compromised host may place sensitive/regulated data at risk
• Returning a compromised endpoint to a known good state
• Hunting for compromised endpoints with known IOCs
• Creating IOCs from a known compromised endpoint
• Other

Figure 15. Automation Most Needed
That hunting for compromised endpoints using known IOCs garnered only 13% of the
responses suggests that many or most IR shops already have this activity automated.
Catching unknown indicators of compromise, and reusing them for future reference,
will take the integration of more endpoint-related intelligence, SIEM and analytics.
The same integration is needed to address the second most pressing area for automation:
accurate confirmation of breached endpoints, followed by detection of sensitive/regulated
data on those devices.
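Hunting with known IOCs is the easiest case to automate because it reduces to set-membership checks over collected endpoint artifacts. The sketch below is illustrative only; the data shapes, hostnames and indicator values are all hypothetical.

```python
# Sketch: flag endpoints whose observed artifacts (file hashes, contacted
# domains) intersect a known-IOC set. All data here is hypothetical.
known_iocs = {"e3b0c44298fc1c14", "evil-c2.example.com"}

endpoint_observations = {
    "host-01": {"9f86d081884c7d65", "intranet.example.com"},
    "host-02": {"e3b0c44298fc1c14", "cdn.example.com"},
    "host-03": {"evil-c2.example.com"},
}

def hunt_known_iocs(observations, iocs):
    """Return {host: matching_iocs} for hosts with at least one IOC hit."""
    hits = {}
    for host, artifacts in observations.items():
        matched = artifacts & iocs  # set intersection: the actual "hunt"
        if matched:
            hits[host] = matched
    return hits

hits = hunt_known_iocs(endpoint_observations, known_iocs)
```

The hard part in practice is not this comparison but the collection pipeline feeding it, which is why anomaly detection without known IOCs ranked as the bigger automation gap.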
Detecting and locating compromised endpoints is critical in reducing the impact of events,
including the impact on regulated data. So the order of the top three priorities for
automation (finding events without IOCs, confirming compromised endpoints, and then
identifying the sensitive data on them) follows the logical progression of most
investigations. Determining whether sensitive/regulated data was at risk is critical in
accurately triaging compromised endpoints and assessing damage in a compromise.
Security best practices require organizations to understand where their regulated data
is,15 but many organizations do not have even basic network infrastructure maps, let
alone detailed data maps.
15 www.securityweek.com/zen-and-art-cloud-database-security-part-2
The Case for Visibility: SANS 2nd Annual Survey on the State of Endpoint Risk and Security
Remediation and Recovery
No matter how well an organization is poised to detect threats, those that don’t perform
proper remediation and recovery are doomed to fail. But remediation is hard. During
remediation, incident responders must choose techniques appropriate to their specific
situation to minimize downtime. Organizations must also ensure that the attackers do
not immediately return to exploit the same vulnerability originally used to compromise
the endpoints.
Remediation Techniques
Wipe and reimage was the most popular technique for remediating compromised
endpoints in both 2014 and 2015, with 77% and 79% of respondents, respectively,
indicating they use this technique. See Figure 16.

TAKEAWAY: Enhanced endpoint monitoring is a critical and often-neglected task when remediating incidents. It is necessary because attackers often return to a previously compromised machine after remediation is considered complete.

What methods do you use to remediate compromised endpoints? Select those that most apply.
• Wipe and reimage
• Block communications to threat actors using the firewall
• Remediate the threat without reinstalling the OS
• Restore host from system backup
• Reinstall from OS media and reconfigure
• Historic analysis of previously collected endpoint data
• Other

Figure 16. Remediation Techniques
The one technique that grew substantially this year is blocking communication with
threat actors using the firewall. This technique is effective as long as all IP addresses and
domain names in use are known. But we have seen an increase in the use of malware
utilizing multiple IP addresses and domain names for command and control (C2). Many
attackers realize that blocking a single IP address or domain name at the network
boundary is a technique in the playbook of every IR team. Even commodity malware
today uses secondary C2 methods to beat this most basic of IR techniques. Typically,
secondary C2 domains and IP addresses are only exposed when the primary is not
accessible. Responders should carefully monitor endpoints they consider remediated
when using this method.

44%: Percentage of respondents blocking communication with threat actors using the firewall in 2014
A new option this year was the historic analysis of previously collected endpoint data.
Only 16% of respondents indicated they use this technique. This result is in alignment
with our findings on where organizations most need automation and which area is the
least important for automation (reusing IOCs to detect future events). This collection
and re-use of IOCs is something intelligence and analytics vendors say they have fully
automated, often via a cloud interface. So it is surprising to see that the IT security
community isn’t on board with this concept when it comes to its endpoint automation
programs. This may be because it requires data to be collected before an intrusion. But
what percentage of respondents are actually collecting sufficient endpoint data before
an incident occurs? If an organization has historical data, why wouldn’t it use such data
during remediation? The fact that so few organizations are using this data in remediation
suggests that it is not available, making a convincing argument for collecting endpoint
data before a compromise.

52%: Percentage of respondents blocking communication with threat actors using the firewall in 2015
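Retro-hunting of this kind is a membership check run over retained history: when a new IOC arrives, search data that was collected before the IOC was known. A minimal sketch, with an entirely hypothetical record layout:

```python
# Sketch: search previously collected endpoint records for a newly learned
# IOC. The record layout is hypothetical; the key prerequisite is that the
# data was collected *before* the indicator became known.
historical_records = [
    {"host": "host-01", "day": "2015-01-10", "domain": "update.example.com"},
    {"host": "host-02", "day": "2015-01-12", "domain": "evil-c2.example.com"},
    {"host": "host-02", "day": "2015-02-03", "domain": "cdn.example.com"},
]

def retro_hunt(records, new_ioc):
    """Return (host, day) pairs where the IOC appears in retained data."""
    return [(r["host"], r["day"]) for r in records
            if r["domain"] == new_ioc]

sightings = retro_hunt(historical_records, "evil-c2.example.com")
```

An empty result here is ambiguous: either the IOC never appeared, or the data was never collected. That ambiguity is exactly the argument the survey makes for collecting endpoint data before a compromise.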
Incident Recovery Challenges
As in our 2014 survey, assessing impact (55%) and determining the scope of a threat
across multiple endpoints (51%) continued to be the top challenges to incident recovery.
A new option, “determining when the incident is fully remediated” (inspired by write-in
responses from 2014), was selected by 44% of participants. Incident response firms
will tell you that much of their business comes from clients who tried to
remediate a network intrusion but failed, only to find that intrusion starting up again.
Figure 17 provides a snapshot of the recovery challenges.
Which of the following are the greatest challenges in recovering from an incident?
Select all that apply.
Assessing impact
Determining scope of a threat across multiple
endpoints in an incident
Determining when an incident is fully remediated
(e.g., is the attacker really gone?)
Hunting for compromised endpoints
Determining what company confidential/regulated
data was at risk because of compromised endpoints
Determining scope of a threat on an endpoint
Remediating compromised endpoints
Dealing with lack of automation and lack of
interoperability between security toolsets
Losing data inadvertently during wipe/reimage
Identifying which security information is most
relevant when remediating a compromise
Other
Figure 17. Challenges in Incident Recovery
Another significant change from 2014 came in the challenge of inadvertently losing
data during an endpoint wipe and reimage. Although roughly the same percentage
of participants are using this technique (77% in 2014 and 79% in 2015), only 22% of
respondents cited inadvertent data loss as one of their greatest challenges, down from
35% last year. This may indicate that organizations are getting better at storing data off
the endpoint or that they have deployed better controls to prevent loss of data. In any
case, wipe and reimage is a potentially dangerous technique that is prone to data loss
and lost work-hours in business units.
To Outsource or Insource?
In lieu of internal, automated processes, many security and response actions are being
outsourced. The most frequently outsourced security tasks are perimeter security,
selected by 38% of respondents, and network security, selected by 32% of respondents.
Event management is outsourced by 26%, while incident response is outsourced by
25% of respondents. The 2014 SANS Incident Response Survey revealed that 50% of
organizations outsource at least some of their IR activities.16 Of those, 27% use
third-party IR resources that they call when needed. This aligns well with current survey
results, indicating that outsourcing levels for IR remain relatively stable (see Figure 18).
Does your company outsource security services to a managed security service provider? If yes, select all that apply.
• Perimeter security
• Network security
• Event management
• Incident response
• Endpoint security
• Event response
• Other

Figure 18. Outsourced Security Services
16 www.sans.org/reading-room/whitepapers/incident/incident-response-fight-35342
Not to be overlooked, endpoint security is also being outsourced by just under 25%
of respondents. When outsourcing endpoint security operations, organizations must
understand the abilities of their contractors. A common mistake when outsourcing
endpoint security is failing to provide the contractor with sufficient data to detect
intrusions. Many services are priced by the volume of data the contractor is expected to
analyze—so it is easy to understand why so many businesses want to provide as little
data as possible. However, as SANS discovered in the 2014 Endpoint Security Survey,
respondents wanted more endpoint data.17 Less is not more when it comes to detecting
and analyzing endpoint intrusions.
Protecting sensitive data when outsourcing is always a complicated task. Organizations
must place some trust in the outsourcing provider, but should that vendor have carte
blanche access to the organization’s data? Probably not. This viewpoint is amplified by
the fact that almost 50% more respondents outsource perimeter security than endpoint
security (where most sensitive intellectual property is stored).
Several “Other” answers indicate that organizations outsource other functions or
intend to outsource more functionality at a later date. After-hours monitoring of events
was another popular write-in answer. There was no difference in the likelihood of
organizations to outsource security services based on their size.
17 www.sans.org/reading-room/whitepapers/analyst/case-endpoint-visibility-34650
The Case for Visibility: SANS 2nd Annual Survey on the State of Endpoint Risk and Security
Conclusions
The threat landscape has continued to evolve. A check of the morning headlines shows
that endpoints are being compromised in record numbers. The results of the second
SANS Endpoint Security Survey point to poor visibility at the endpoint. Knowing what
you have is important: not only which endpoints you’re looking at, but also the sensitive
data being processed across them.
More visibility into incidents and automation in response continues to be highly desired,
but little progress toward automation has been made since last year.
The time required to investigate and remediate each compromised endpoint has not
changed significantly. Considering that compromises are likely to continue at the current
rate (if not increase in frequency), organizations should identify key steps to reduce
the time it takes to respond to each compromised endpoint. Increasing the level of
automation is a good first step toward better visibility into endpoints, their applications,
and the anomalies and indicators of events across multiple endpoints.
Organizations should objectively define their obstacles to increasing automation in
incident response and create a plan to increase automation.
Organizations should also consider tools that help protect their assets from compromise
and identify the locations where their most sensitive and regulated data is stored to
enable more efficient detection and response. Speed matters—and so does context.
Simply collecting bulk data for threat detection makes little difference if analysis of that
data is prohibitively difficult.
About the Author
Jake Williams is founder and principal consultant at Rendition InfoSec and a certified SANS instructor
and course author. He has more than a decade of experience in secure network design, penetration
testing, incident response, forensics, and malware reverse engineering. Before founding Rendition
Infosec, he worked with various government agencies in information security roles. Jake is a two-time
victor at the annual DC3 Digital Forensics Challenge.
Sponsor
SANS would like to thank this survey’s sponsor: