The Case for Endpoint Visibility
A SANS Analyst Survey
Written by Jacob Williams
March 2014
Sponsored by Guidance Software
©2014 SANS™ Institute
Introduction
The year 2013 witnessed a seemingly unending parade of headline-grabbing, high-profile data breaches, many of which started out as the result of compromised
endpoints. For example, Target’s high-profile breach announced in December 2013
was reportedly the result of compromised point-of-sale (POS) systems.[1] The successful
attacks in early 2013 against Apple, Facebook and Twitter featured well-executed attacks
on endpoints stemming from a “watering hole” tactic that made use of a compromised
website frequented by developers of apps for Apple’s iOS devices.[2]
With the numerous breaches focused on endpoints, we set out to determine how
organizations are monitoring, assessing, protecting and investigating their endpoints,
as well as remediating breaches upon detection, by conducting the first SANS Endpoint
Security Survey.
The survey was offered online during December 2013 and January 2014, and 948 IT
professionals working in a variety of industries completed it. From the results, we learned:
• More than half of respondents say they’ve already been compromised—or
will be. Just over 47% of respondents were operating under the assumption
they’ve been compromised, with another 5% in the “Other” category, many of
whom say they operate under the assumption that if they have not already been
compromised, they eventually will be.
• Most compromises are unsophisticated. Most respondents (52%) indicated
that the vast majority of their compromises are perpetrated by unsophisticated
attackers, which seems at odds with media reports, where every attack seems to be
the work of advanced persistent threat (APT) groups using stealth techniques.
• More endpoint data is necessary for effective threat hunting. A significant
segment of respondents, exceeding 40% in several categories, is not collecting as
much data as desired for use in detecting and remediating threats.
• Lack of automation causes remediation lag. A lack of automation slows the
process of incident response and remediation, with the largest segment of
respondents (54%) automating one-tenth or less of their response processes.
• Most remediation is performed manually. Only 7% of participants reported
using automated workflows for remediating endpoints, compared to the 77% who
reported the use of the more manual “wipe-and-reimage” tactic.
[1] “Target: Breach Caused by Malware,” BankInfoSecurity, 12/24/2013, www.bankinfosecurity.com/-a-6316/op-1
[2] “Hackers Who Attacked Twitter, Facebook, Apple May Have ‘Hundreds’ More Victims,” Huffington Post, 2/20/2013, www.huffingtonpost.com/2013/02/20/apple-hacked-facebook-twitter_n_2726061.html
As organizations develop strategies for detecting and remediating threats, they should
look to augmenting endpoint visibility with tools that provide the capability to look at a
broader set of endpoint assets. Tools that can detect which endpoints contain regulated
data, such as personally identifiable information (PII), are particularly important.
In addition, there exists a considerable opportunity for organizations to increase
productivity and accelerate recovery from incidents by automating the response and
remediation process. Compromises unfold quickly, and organizations that respond
quickly in remediating threats may prevent the theft of confidential data or reduce the
scope of the damage.
The full results of the inaugural SANS Endpoint Security Survey are summarized in this
whitepaper to help information security professionals track trends in endpoint protection
and identify how their organization’s capabilities compare with the survey base.
Survey Respondents
The results of the survey are representative of a large cross-section of organizations,
not just those with sizable (or minimal) budgets for endpoint security. One-third (33%)
of respondents represented organizations of more than 10,000 employees, while
organizations with fewer than 1,000 employees comprised just over one-third (34%) of
all responses, as shown in Figure 1.
Figure 1. Organization Size of Respondents (survey question: “How many people work at your organization, either as employees or consultants?”)
Survey respondents also came from a large cross-section of industries; almost one-fifth
(19%) of responses were from financial, banking and insurance professionals (the largest
group). Government was also well represented in the survey, accounting for another
13% of responses. Other industry groups contributing significantly to the survey results
included high tech, education, health care/pharmaceutical, telecommunications and
manufacturing. This cross-section of responses demonstrates a broad interest in endpoint
protection. The diversity of respondents is also a measure of the quality of data in the
survey. No one industry controls the majority of the responses, as shown in Figure 2.
Figure 2. Industries of Survey Respondents (survey question: “What is your organization’s primary industry?”)
As any manager will confirm, people in staff positions may have different concerns and
goals than consultants. We asked respondents to identify their roles and whether they
were consultants or staff; more than four-fifths (82%) of respondents said they were on
staff at the organization they represent.
When asked about their primary work role, the largest group of respondents
encompassed security administrators and security analysts. However, a surprising
number of respondents are in management roles; more than one-third (37%) of respondents work in IT management (e.g., CIO or related duties) or security management (e.g., CISO or similar responsibilities). These results indicate that the survey topic speaks to the strategic concerns of management while also addressing the technical concerns of those in the trenches. Figure 3 shows the distribution of responses.
Figure 3. Primary Work Roles of Respondents (survey question: “Please indicate your primary role in your organization.”)
“Assuming the Worst” as a Start
All information security professionals know that systems can be compromised, so we
asked survey respondents how their organizations currently perceive their “endpoint
security hygiene.” Those operating under the assumption that their endpoints are clean may prioritize security of internal assets differently, applying less rigor to defense in depth of their endpoints, than those operating under the assumption that at least some systems may be compromised. Those operating from an assumption of
compromise are likely to invest effort in detection and engage in proactive searches
for compromised endpoints. To these organizations, not finding the compromise today
doesn’t mean it isn’t there; it simply means that detection has fallen short.
The numbers are split nearly down the middle, with 47% responding affirmatively that they are operating under the assumption of compromise and 48% responding negatively. However, our analysis of the details behind the “Other” responses (5%) tells an interesting story: Overwhelmingly, such responses indicate that respondents believe that some of their systems are compromised. (Responses such as “No, but we should be!” and “Well, not all of them” paint the picture that many professionals understand the need to operate under the assumption of compromise; whether they do so is another issue altogether.) Perhaps the most notable response in this category is “It’s likely, however I only know what I can see.” This response, like several others, speaks to the significant challenges of detecting compromises in today’s operating environment. Figure 4 shows the almost even division of “Yes” and “No” responses.

Figure 4. Assumption of Compromise (survey question: “Are you operating under the assumption that your systems have been compromised?”)
Evading Detection at the Perimeter
Survey respondents were asked what percentage of incidents over the last 24 months
were the result of threats that should have been blocked by a perimeter security device,
as shown in Figure 5.
Figure 5. Incidents That Should Have Been Blocked by a Perimeter Security Device (survey question: “What percentage of the incidents in your organization over the last 24 months were the result of threats that should have been blocked by a perimeter security device (e.g., firewall or UTM)?”)
This is admittedly difficult to quantify, particularly for organizations without good
endpoint visibility or a mature incident response (IR) process. Therefore, it’s no surprise
that slightly more than one-fifth (21%) of respondents answered, “I don’t know.”
However, for those who were able to quantify the numbers, the results were very telling.
Although the largest category of respondents who could identify such incidents (36%) believed that, at most, 10% of these incidents should have been blocked at the edge—indicating these respondents find perimeter devices effective in general—a more instructive analysis is to consider where perimeter protection is failing. To evaluate this, we considered the respondents who believed that the vast majority of their attacks should have been blocked at the perimeter. A staggering one-fifth (21%) of respondents reported that 31% or more of their incidents represented such perimeter protection failures.
Stealth Techniques Not So Pervasive
We also asked respondents what portion of those attacks that evaded perimeter detection
was considered to be the work of advanced adversaries using stealth techniques.
However, this question has its own built-in bias: Organizations may feel that they get
a free pass on a breach if the attack was “advanced” or that because the attacker used
stealth techniques, a compromise was inevitable and there was nothing that could
be done to detect it. For this reason, organizations may report mundane attacks as
advanced and stealthy.[3] As such, it was expected that the numbers in these responses
would be elevated somewhat over reality.
The results, however, were both surprising and telling. Almost one-fourth (24%) of participants had no gauge of their adversaries’ skill. However, more than half (52%) of respondents believed that 20% or less of attacks bypassing perimeter protections used stealth techniques, as shown in Figure 6.

Figure 6. Use of Stealth Techniques to Evade Perimeter Detection (survey question: “What percentage of threats that initially evaded perimeter detection would you categorize as advanced adversaries using stealth techniques?”)
Combine this data with that from the last question, and a clear picture emerges:
Attackers are bypassing perimeter protections at will and do not need to use stealth
techniques to do so. This should be a serious point of concern for organizations that
have placed all of their security eggs in the perimeter protection basket. According to
Verizon’s 2013 Data Breach Investigations Report, more than 95% of those attacks tied
to state-affiliated espionage or intelligence activities used phishing as an initial infection
vector.[4] Although some phishing attacks can be detected and blocked at the perimeter,
the best chance for detecting those that get through successfully is through enhanced
endpoint instrumentation.
[3] “The Truth Behind the Shady RAT,” Symantec Security Response blog, 4/11/2011, www.symantec.com/connect/blogs/truth-behind-shady-rat
[4] 2013 Data Breach Investigations Report, Verizon, p. 36, www.verizonenterprise.com/resources/reports/rp_data-breach-investigations-report-2013_en_xg.pdf
Time to Respond Is Limited
When we asked survey participants about the acceptable delay in receiving data from all queried endpoints, the results couldn’t be any clearer: Speed matters. More than four-fifths (83%) of respondents said they needed results from their endpoint queries in less than an hour, as shown in Figure 7.

Figure 7. Maximum Acceptable Delay (survey question: “To provide maximum value in detecting and responding to incidents, what is the acceptable delay between requesting data and receiving it from all queried endpoints?”)

A subset of that group (26% of all respondents) wanted the data in less than five minutes, underscoring the need to pre-position any needed software agents on endpoints. Without agents already in place, obtaining results from all endpoints in less than an hour is simply impossible.
Response Costs
Reducing the time spent per endpoint on IR, even marginally, has multiplicative
effects on overall cost savings. Organizations should consider how much time they
are spending per endpoint (compared to the median result) and whether they wish to
reduce such costs.
In our survey, 17% of respondents reported spending 6–10% of their security budgets on IR, 12% spend 11–25% of their budgets on it and 7% spend more than 25% of their IT budgets on response, as shown in Figure 8.

Figure 8. Percentage of IT Security Budget Spent on IR (survey question: “How much of your IT security budget is spent on incident response?”)
The cost of IR is closely tied to the number of man-hours required to remediate the
threat. To quantify this, we asked survey participants how many hours they spend per
endpoint when responding to an incident. Although this number is also subject to a
number of variables, if not pure guesswork, we believe that the nature of the question
brings these estimates closer to truth.
More than one-fourth (27%) of respondents reported that they spend more than four
hours per compromised endpoint in incident response time, while 28% spend between
two and four hours and another 45% spend no more than two hours per endpoint, as
shown in Figure 9.
Figure 9. Number of Hours Spent per Compromised Endpoint on IR (survey question: “On average, how many hours do you spend per compromised endpoint when responding to an incident?”)
Assuming a modest cost of $50 per man-hour (which could be even higher if outside
consultants are used), investigative costs can easily top $200 for each infected endpoint
being investigated. Such numbers underscore the case for more automation in endpoint
assessment, remediation and follow-through. One way to lower the time expended per
compromised endpoint is to move away from wipe-and-reimage remediation strategies
and focus on more targeted remediation.
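The arithmetic above can be sketched as a minimal cost model. The $50/man-hour rate and the hours-per-endpoint figures come from the survey discussion; the function names and the 50-endpoint scenario are purely illustrative.

```python
# Illustrative IR cost model using the figures discussed in the text:
# a labor rate of $50 per man-hour and the reported hours per endpoint.

def ir_cost_per_endpoint(hours: float, hourly_rate: float = 50.0) -> float:
    """Estimated investigative cost for one compromised endpoint."""
    return hours * hourly_rate

def incident_cost(endpoints: int, hours_per_endpoint: float,
                  hourly_rate: float = 50.0) -> float:
    """Total cost scales multiplicatively with the number of endpoints,
    which is why shaving even a fraction of an hour per endpoint matters."""
    return endpoints * ir_cost_per_endpoint(hours_per_endpoint, hourly_rate)

# The 27% of respondents spending more than four hours per endpoint
# exceed $200 per endpoint at the $50/hour rate:
print(ir_cost_per_endpoint(4))   # 200.0
# A hypothetical 50-endpoint incident at four hours each:
print(incident_cost(50, 4))      # 10000.0
```

Even a modest reduction, say 30 minutes per endpoint, saves $25 times the number of affected endpoints, which is the multiplicative effect the text describes.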
Automation and Remediation Still Challenging
Any organization seeking to lower IR costs should increase the level of automation in its processes. Such automation includes tools for scanning endpoints, correlating response data, detecting compromises, remediating threats and managing workflow.
Only 16% of respondents currently automate 51% or more of their IR tasks, whereas 70%
of respondents automate less than one-fifth of their IR tasks. IR efforts rely greatly on
manual processes, as shown in Figure 10.
Figure 10. Automation of IR Workflow (survey questions: “What percentage of your incident response process is currently automated through the use of purpose-built tools for remediation workflow? What do you expect this percentage to be in 24 months?”)
When asked what percentage of their IR processes they expected to have automated in
the next 24 months, respondents’ answers varied significantly. The number of respondents
who expect to have more than half of their processes automated by early 2016 was 34%,
more than double today’s number. The number of respondents expecting to have less
than 20% of their IR automated by then is half that of today, indicating a significant trend
toward automation.
Comparing the cost of qualified incident responders to the cost of automating their tasks,
the choice for organizations wishing to lower their IR costs is obvious. As the number of
network-based attacks continues to grow, organizations must increase automation to
merely keep pace, assuming their current staffing levels remain constant.
Remediation Lagging
The remediation phase of IR is even less automated than the initial steps are. Only 7% of
respondents reported using automated workflow to manage the remediation process; this
question used a “select those that most apply” answer set, so respondents didn’t have to
choose among automated functions in their answers. The results are shown in Figure 11.
Figure 11. Remediation Methods for Compromised Endpoints (survey question: “What methods do you use to remediate compromised endpoints? Select those that most apply.”)
Other types of remediation activities included blocking communications at the firewall
and reinstalling the operating system. More than three-quarters (77%) of respondents
reported wiping and reimaging compromised systems, while one-third (33%) reported
that they attempt remediation without reinstalling the OS.
Remediation via wipe-and-reimage almost always causes the loss of some data under
even the most thorough backup regimes, making it a last resort for some organizations.
In those cases where the data loss is trivial, wipe-and-reimage may well be the fastest way
to get the endpoint back in service. Of course wipe-and-reimage isn’t always practical. To
take one example, business processes reliant upon servers that perform critical business
functions—such as payment processing—simply cannot tolerate downtime. In such cases,
any downtime for remediation is financially unacceptable. Organizations with high uptime
requirements should identify strategies that allow them to perform targeted remediation
with a high degree of precision.
SANS ANALYST PROGRAM
12
The Case for Endpoint Visibility
Automation and Remediation Still Challenging
(CONTINUED)
Obstacles to Recovery
Interestingly, the possibility of losing data during a wipe-and-reimage barely makes it into
a list of the five greatest challenges to incident recovery. The four challenges at the top of
that list—each noted by more than 40% of respondents—could all be addressed through
improved endpoint visibility:
• Assessing the impact
• Determining the scope of compromise across multiple endpoints
• Determining the scope of compromise on a single endpoint
• Hunting for compromised endpoints.
Losing data during a wipe-and-reimage is a great challenge to slightly more than one-third (35%) of respondents, while even fewer respondents (33%) consider remediation of compromised endpoints noteworthy, as shown in Figure 12.
Figure 12. Greatest Challenges in Recovering from Incidents (survey question: “Which of the following are the greatest challenges in recovering from an incident? Select those that most apply.”)
Detecting Endpoint Threats
When we asked respondents about their risk and compliance concerns, the most
indicated categories (each with 55% or more) were workstations, web servers and
Windows endpoints, respectively. Cloud, OS X and Linux endpoints were also a concern
for some organizations, each of them being reported by more than 10% of respondents,
as shown in Figure 13.
Figure 13. Endpoints Rated by Risk (survey question: “What endpoints are of most concern to you from a risk and security perspective?”)
Notable write-in candidates for this category were mobile devices and so-called
bring-your-own-device (BYOD) policies. Organizations seeking to increase visibility
in a cost-effective manner should deploy first where it matters most: the Windows
desktop. However, an endpoint-monitoring solution that allows analysts to access data
across the heterogeneous operating platforms (often, Linux or UNIX derivatives) at the
heart of the enterprise should increase efficiency for those organizations operating on
multiple platforms.
Threat Detection Still Network Focused
We also asked participants where visibility matters most for threat detection, with classic IT favorites IDS/IPS, firewalls and log data being the top finishers, as shown in Figure 14. Several participants indicated in supplemental comments that they use security information and event management (SIEM) tools.

Figure 14. Where Threat Visibility Matters Most (survey question: “Where does visibility into threats matter most for detecting threats in your organization?”)
What is most startling about these responses is that only 23% of respondents felt
endpoint visibility—without regard to the endpoint’s role—to be the most important
point of detection. This indicates that organizations are still taking a network-centric
view of threats by using firewalls or IDS/IPS tools to detect them. Wouldn’t data from the
endpoints themselves provide better opportunities for detection?
The network perimeter, formerly a finite boundary, has become fuzzy thanks to changes
such as the explosive growth in the use of mobile devices and the adoption of BYOD
policies to manage them, as well as increased use of resources by remote workers using
VPN and other technologies. The “disappearing perimeter” makes a strong case for
increased endpoint-detection capabilities, where threat detection operates the same
regardless of the physical location of the endpoint.
(They Can’t Get No) Satisfaction
When it comes to well-established defenses such as antivirus and firewalls, some of the
most telling numbers in the survey results are the rates of dissatisfaction with them.
Almost one-fifth (17%) of respondents are unhappy with the results from file scanning
using antivirus tools. Even more startling are the responses from those using SIEM tools;
of these respondents, almost 14% report that they are not satisfied, as shown in Figure 15.
Figure 15. Satisfaction with Endpoint and Perimeter Defenses (survey questions: “What type of endpoint/perimeter protection are you using in your organization? How satisfied are you with each?”)
The highest rates of overall satisfaction from our respondents were noted with border
firewalls (86%), file scanning antivirus (80%), web proxies (61%) and host-based
firewalls (59%), respectively. Although web proxies and firewalls are clearly useful (and
necessary) for perimeter protection, a true “defense in depth” requires that endpoints be
independently protected even when the perimeter is breached.
That requires detecting a breach, which is an area where our respondents reported general satisfaction with the methods they use for threat detection. Classic network-centric detection through antivirus, firewalls or IDS/IPS tools is still at the top of this list.
Unsurprisingly, respondents reported greater satisfaction with analyzing data in a SIEM
system than they did with manual review of endpoint logs, as shown in Figure 16.
Figure 16. Satisfaction with Detection Capabilities (survey question: “How satisfied are you with the various detection capabilities you use?”)
A recent focus in enterprise defense is to instrument the network to increase visibility
through the use of full packet capture or network flow analysis. However, respondents
were not satisfied with these network-centric methods of threat detection, highlighting
the need for additional capabilities that are focused on endpoints rather than the
network.
Detecting Compromised Endpoints
Although endpoint visibility may not seem as important to respondents as perimeter
visibility is, our next question indicates that 70% of them collect security data from
endpoints. They’re collecting data including user logins, applications, artifacts of
malware, listening network ports, unauthorized network interfaces and running
processes, as shown in Figure 17.
Figure 17. Methods Used in Endpoint Data Collection (survey question: “Which of the following do you collect from endpoints for the purposes of correlation in the detection and remediation of threats?”)
These categories were the expected front-runners when it comes to ease of analysis and acquisition. More interesting were those categories where the majority of respondents considered a data type useful but were not collecting it: network protocol cache entries, route tables, browser history artifacts and PII stored on endpoints where it is not authorized. As an example of the latter, artifacts of regulated, but unencrypted, data may be found in a paging file, even if the application processing them never itself writes the data to disk.
Elsewhere, the existence of prefetch or link files may point to execution and file
access, but these are difficult to access without endpoint visibility solutions. The fact
that respondents would like this data, but are not currently collecting it, suggests the
challenges to obtaining it may be more than technical issues.
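As an illustration of how lightweight this kind of collection can be once an agent has file access, the sketch below enumerates Windows Prefetch file names, a standard execution artifact. The directory path and the name-parsing convention are assumptions about the common Prefetch layout, not any particular vendor’s method.

```python
from pathlib import Path

# Illustrative sketch: enumerate Windows Prefetch files as lightweight
# evidence of program execution. A .pf file name encodes the executable
# name and a path hash (e.g. CMD.EXE-89305D47.pf). The directory below is
# the standard Windows location; adjust it for a mounted forensic image.
PREFETCH_DIR = Path(r"C:\Windows\Prefetch")

def list_executed_programs(prefetch_dir: Path = PREFETCH_DIR) -> list[str]:
    """Return executable names recorded in Prefetch, or an empty list
    if the directory is absent (non-Windows host or Prefetch disabled)."""
    if not prefetch_dir.is_dir():
        return []
    names = set()
    for pf in prefetch_dir.glob("*.pf"):
        # File name format: <EXENAME>-<HASH>.pf
        names.add(pf.stem.rsplit("-", 1)[0])
    return sorted(names)
```

Parsing the full .pf format (run counts, last-run timestamps, file references) requires dedicated tooling, which is exactly where an endpoint visibility solution earns its keep.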
When evaluating tools for endpoint visibility, decision makers should consider the tool’s
capability to collect such critical information as the presence of illicit PII on an endpoint,
which may indicate the latter’s use as a staging area for exfiltrated data, thereby providing
the first detection of a network compromise. Organizations without the capability to
detect such behavior should carefully evaluate their investigation strategies.
IDS and perimeter firewalls were also two of the top three methods selected for
protecting endpoints, again showing how organizations are still using network-based
devices for the detection of endpoint compromises.
These network-detection methods merely point to the compromised endpoint without
providing any assistance in cleanup or mitigation through endpoint instrumentation.
Figure 18 shows the distribution of threat detection successes, according to respondents.
Figure 18. Methods Used to Detect Compromises (survey question: “How did you detect that these threats had compromised your organization? Choose all that apply.”)
To get closer to the endpoint, respondents are using either SIEM (42%) or manual review
of endpoint logs (38%) for detection. Both of these detection methods take the analyst
closer to the compromise—the endpoint itself has the most artifacts relating to the
compromise. That AV alerts are used to detect more compromises than SIEM or manual
review of endpoint logs tells us that endpoint data review has a lot of catching up to
do. When reviewing endpoint artifacts, context is everything; although an antivirus
alert signals a hostile executable, it does so without analyzing the context. Zero-day
payloads—by definition—can escape signature-based antivirus, but context from, say, a
reputation assessment could be used to uncloak the intruder.
By making better use of endpoint data such as browser history, prefetch files and DNS
cache entries, we paint a more complete picture of the machine state.
Merely identifying the technology responsible for threat discovery doesn’t tell the
whole story, however. Analysts may detect threats reactively, by responding to an alert,
or proactively, by actively interrogating their endpoints, looking for anomalies. Our
respondents overwhelmingly rely on reactive detection, as shown in Figure 19.
Figure 19. Proactive Versus Reactive Threat Detection (survey question: “What percentage of your threats are detected through proactive discovery (actively interrogating endpoints) versus reactive (responding to IDS/AV/SIEM alerts)?”)
Only 17% of respondents reported finding more than half of their threats through
active discovery, while 53% of respondents found more than half of their threats by
responding to alerts. Reactive detection is not necessarily inferior, but organizations
should consider what steps they can take to increase the number of alerts found
through proactive scanning. This often involves adding instrumentation at the
endpoint, where attacks are targeted.
Conclusions
As the threat landscape continues to expand, organizations will allocate ever-increasing resources to endpoint protection. When, despite best efforts, endpoints are
compromised, organizations must then be ready to react to minimize data loss and
remediation costs.
The results of the first SANS Endpoint Security Survey point to a lack of visibility on endpoints, with organizations relying instead on a number of network-centric methods. Given how many of participants’ threats bypassed perimeter protection devices, it’s no wonder that greater visibility at the endpoint is called for. Additionally, participants require rapid responses when requesting data from their endpoints; answers within an hour may not be enough, as over three-fifths (61%) of respondents require them in less than 30 minutes. Finally, the survey highlighted a strong need for automation and proactive discovery of threats in the network.
Organizations that wish to protect their assets from compromise should look to protect
the actual target of attacks: the endpoints that hold the sensitive data. “Best-in-breed”
organizations should seek to increase automation and endpoint visibility, pre-deploying
instrumentation assets where appropriate to minimize response delays.
About the Author
Jacob Williams is the chief scientist at CSRgroup computer security consultants and has more than a
decade of experience in secure network design, penetration testing, incident response, forensics and
malware reverse engineering. Before joining CSRgroup, he worked with various government agencies
in information security roles. Jake is a two-time victor at the annual DC3 Digital Forensics Challenge.
Sponsor
SANS would like to thank this paper’s sponsor: Guidance Software