What Lies at the Intersection of AI and Cybersecurity?
Randy V. Sabett, J.D., CISSP
Vice-Chair, Privacy & Data Protection Practice Group
Cooley LLP
Presented at the ISSA Mid-Atlantic Meeting
attorney advertisement
Copyright © Cooley LLP, 3175 Hanover Street, Palo Alto, CA 94304. The content of this packet is an introduction to
Cooley LLP’s capabilities and is not intended, by itself, to provide legal advice or create an attorney-client relationship.
Prior results do not guarantee future outcome.
March 10, 2017
Disclaimer
• Nothing we discuss today constitutes legal advice. For any specific questions, seek the independent advice of your attorney.
Obligatory Agenda
• Artificial intelligence – level set
• Evolution of artificial intelligence
• Numerous avenues of research, including cybersecurity
• Cybersecurity concerns
• What happens at the intersection?
• What happens when we add legal considerations to the mix?
• What's next?
“One day they’ll have secrets…
one day they’ll have dreams”
Artificial Intelligence (AI) and Machine
Learning (ML)
• General AI: A machine that replicates the
functionality of the human brain, as
envisioned by many since about the 1940s.
• Narrow AI: A machine that does a specific
task that traditionally has been done by
humans.
• Machine Learning: Algorithms that can
learn and adapt from information to solve
future problems more efficiently.
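To make the machine-learning bullet concrete, here is a minimal, purely illustrative Python sketch (not from the talk): an algorithm "learns from information" by fitting a scikit-learn classifier to a handful of labeled examples, then applies what it learned to an unseen input. The feature values and their meanings are invented for illustration.

    # Purely illustrative: a model learns from labeled examples and
    # generalizes to a new, unseen input. Feature values are invented.
    from sklearn.ensemble import RandomForestClassifier

    # Toy feature vectors, e.g., [bytes_transferred, failed_logins, off_hours_flag]
    X_train = [[120, 0, 0], [80, 1, 0], [4500, 40, 1], [5200, 55, 1]]
    y_train = [0, 0, 1, 1]  # 0 = benign, 1 = suspicious

    model = RandomForestClassifier(n_estimators=50, random_state=0)
    model.fit(X_train, y_train)

    print(model.predict([[4800, 37, 1]]))  # expected to print [1] for this toy data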
AI then and now
• Evolution of narrow AI has been in fits and starts
• Critics: general AI has not been a wild success
• Certain applications have shown reasonable progress
• What we have today (public): Watson, Alexa, self-driving cars
• Concerns exist about vulnerabilities and exploits in many
applications, which has led to application of AI to cybersecurity
• Many promising areas of research
Examples of Narrow AI
• Limited Speech Recognition
• Web Shopping and Ad Tools
• Robots in the Home
• Self-driving Cars
• Fraud Detection
• Bar Codes
Other Examples of Narrow AI
• Chess Playing Machines
• Optical Character Recognition
• Industrial Inspection
• Biometrics
• Medical Diagnosis
AI and Cybersecurity Trends
Media Mentions
• In 2016, combined mentions of cybersecurity and AI increased significantly, showing that the two have become increasingly linked in media chatter. Co-mentions of cybersecurity and ML also rose in 2016.
• https://www.cbinsights.com/blog/cybersecurity-artificialintelligence/
Cybersecurity: FUD, liability,
and other scary stuff…
So, what’s happening out there?
• Poorly designed applications and software
• Greater sophistication of attackers
• Use of third-party tools and suites that aren't secure
• Failure to properly investigate and contain an incident
• Incorrect/ill-advised statements to third parties
• Failure to provide timely or complete breach notice
• Breached ethical obligations
• Potential liability resulting from alleged third-party damages as a result of a breach of your systems
• Reputational and other non-monetary harms
Cybersecurity Threats
• Some statistics and observations:
• According to recent studies:
• Over 80% of the total value of the F500 consists of intellectual property and other intangibles
• 76% of scanned websites had vulnerabilities, 20% of which were critical
• Number of data breaches increased 23%
• Average annualized cost of cyber attacks to U.S. companies was $11.6 million, a 78% increase over four years
• Many types of business are targeted, not just large corporations
• Types of cyber attacks depend on what type of data is stored and the attacker's motivation.
Verizon DBIR – 2012 vs 2016
Verizon, “Data Breach Investigations Report”
Avg. Cost of a Data Breach
$4.0M
$158/record
Source: Ponemon Institute, Cost of Data Breach Study (2016)
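• Rough arithmetic on the two figures above (assuming both come from the same Ponemon sample): $4.0M ÷ $158 per record implies an average breach on the order of 25,000 records.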
Cyber Risk and Duty of
Care Considerations
Understanding Cyber Risk
• Risk context and risk appetite
• Understand current security risk management posture
• Threats to the organization
• Vulnerabilities and weaknesses
• Potential impacts of cyber incidents
• Current risk mitigation strategies and controls in place
• Risk exposure and risk acceptance
• Review the results of security assessments and audits
• Consider where/when/how AI/ML fits into this strategy
Emerging Duty of Care for Privacy/Security
• U.S. federal, state, and industry statutes and reg’s have begun to create a
duty of care for companies that handle sensitive information.
• For example, duty of care re: personal information (PI) comprises:
• Only collect PI when necessary for business purposes.
• Implement written policies, technical controls, and procedures to protect PI.
• Provide clear/conspicuous notice and choice about how PI will be collected and used.
• Use PI only as permitted under statutes and the Privacy Policy.
• Ensure service providers provide appropriate security and privacy protections.
• When PI is no longer needed, use secure disposal methods.
• In most cases (except PCI), the extent of the duty of care will depend on the size of the business, the resources available, and the type of data stored.
• Duty of care also applies to trade secrets and confidential information
• How long before duty of care includes AI/ML technology?
Derivative Suits for Cyber Lapses
• Shareholders have responded to attacks with derivative suits
against the board for failing in their fiduciary duties.
• TJX Companies (2007): theft of payment card numbers and personal
information of at least 45.7 million customers over approximately 18
months
• Heartland Payment Systems, Inc. (2009): intercept of unencrypted
payment card information sent to Heartland from outside vendors for
authorization
• Wyndham Worldwide Corporation (2014): three separate data
breaches of personal and financial information of over 600,000
customers
• Target Corporation (2014): “worst data breach in retail history” – theft
of personal and financial data of up to 110 million customers
Applicable Law and Regulations
• Fiduciary duties are determined by several different sources:
• State fiduciary duty laws
• State and federal data security laws
• Regulations or best practices specific to certain industries such as securities
trading, payment card issuance, and government contracting.
• The duties regarding data security can be broken into two broad categories:
• (1) duty to provide reasonable security for corporate data
• (2) the duty to disclose security breaches
State Law Fiduciary Duties
• Derivative suits related to cybersecurity are based on two theories of fiduciary duty:
• (1) Duty of care: the board made a decision with regard to cybersecurity that was “ill-advised
or negligent”
• Board decisions are generally protected under the “business judgment rule” unless a court finds the
board acted in bad faith or irrationally.
• Most companies have adopted charter provisions that protect directors from personal liability for a
duty of care breach.
• (2) Duty of loyalty: the board failed to act in the face of a reasonably known threat.
• What constitutes a “reasonable” cybersecurity plan?
• Directors are liable for failing to ensure that a “reasonable information and reporting system exists,”
or failing to monitor such a system once it is in place.
Regulators
SEC Enforcement Actions
• R.T. Jones Capital Equities Management (2015)
• Sensitive PII stored from 2009 – 2013 on third party web
server
• Web server was attacked in July 2013 by an unknown hacker
who gained access and copy rights to the data on the server
• The cyberattack rendered PII of more than 100,000 individuals
vulnerable to unauthorized acquisition
• SEC found that R.T. Jones had no written policies or
procedures to ensure confidentiality and security of PII
• $75,000 civil fine and censured
“The firm failed entirely to
adopt written policies and
procedures reasonably
designed to safeguard
customer information. For
example, R.T. Jones failed to
conduct periodic risk
assessments, implement a
firewall, encrypt PII stored
on its server, or maintain a
response plan for
cybersecurity incidents.”
SEC Enforcement Actions (cont’d)
• Morgan Stanley Smith Barney (2016)
• 2011 to 2014: then-employee transferred sensitive information from
approx. 730,000 MSSB accounts to his personal server
• Cyberattack on that server resulted in samples of sensitive information
getting posted on Internet with offers to sell greater quantities
• SEC found that:
• MSSB policies and procedures not reasonable
• Restrictions on employee access to sensitive information in
database portals were not in place for over 10 years
• Audits were not performed on the access control mechanisms or
employee use of database portals
• $1M civil fine and censured
“[MSSB] willfully
violated Rule 30(a) of
Regulation S-P (17 C.F.R.
§ 248.30(a)), which
requires every broker-dealer and investment
adviser registered with
the Commission to adopt
written policies and
procedures that are
reasonably designed to
safeguard customer
records and information.”
FTC Enforcement Authority
• FTC Act
• Section 5 prohibits “deceptive” or “unfair” acts or practices | Section 12 prohibits false advertisements
• Rise of data security “unfairness” enforcement actions
• FTC historically limited data security enforcement authority to “deceptiveness” claims
• FTC now takes a more expansive view of its “unfairness” authority
• 3rd Circuit upheld FTC’s “unfairness” authority in Wyndham decision
• Case affirmed FTC’s authority to bring actions challenging companies’ data security practices under the “unfair practice” prong of Section 5 of the Federal Trade Commission Act
• Due process satisfied because Wyndham had fair notice that its data security practices could be found inadequate, even though the FTC had not issued data security regulations
• But FTC suffered first major setback in LabMD case
• ALJ held that the FTC could not hold a company liable for an “unfair” data security practice without carrying its burden to show a “probability” or “likelihood” of resulting harm; the Commission later overturned the ALJ’s decision, and the case is now on appeal
Increased Oversight - FCC
• Three privacy-related actions in all of 2014, involving robocalls, violations of
do-not-text requests, and unlawful marketing
• $10M in October 2014 enforcement actions against two companies
• Two carriers provided subsidized phone service to low-income consumers and retained a third-party vendor for data storage
• Data was stored in the clear and accessible via the Internet
• FCC found violations of Sec. 222(a) (failure to protect CPNI) and Sec. 201(b) (the failure to notify was an unjust or unreasonable practice)
• $25M in April 8, 2015 enforcement action against AT&T
• Call center employees in Mexico, Colombia, and the Philippines acquired names and partial SSNs, then used them to unlock stolen AT&T phones
• FCC again found violations of Sec. 222 and Sec. 201(b)
• "As today’s action demonstrates, the Commission will exercise its full authority
against companies that fail to safeguard the personal information of their customers"
Increased Oversight - States
California AG
• February of 2016: released “2016 California Data Breach Report”
• Analysis of info provided to AG pursuant to breach notifications
• 60% of Californians (approx 24M) affected by a data breach in 2015, six
times the 4.3M Californians affected the year before
• AG has made four recommendations:
• Implement the 20 CIS Critical Security Controls
• Expand the use of multi-factor authentication
• Use strong encryption to protect PII on portable devices
• Encourage consumers to use fraud alerts on their credit files.
• WARNING: "failure to implement all the [applicable] Controls… constitutes a lack of reasonable security"
• https://oag.ca.gov/breachreport2016
Intersection of AI and Cybersecurity
• Defense of Critical Systems
• Real time, pattern finding, anomaly seeking
• Utilize machine learning algorithms to efficiently and instantaneously respond to potential network threats (a minimal sketch follows this list)
• Note that “network” here can be any type of network (e.g., CAN bus…)
• Government getting involved through a variety of different efforts, both AI-specific and in other areas (e.g., IoT)
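As referenced above, a minimal sketch of the idea, assuming scikit-learn and invented network-flow features. This illustrates unsupervised anomaly detection in general, not any particular vendor's product.

    # Unsupervised anomaly detection over invented network-flow features.
    from sklearn.ensemble import IsolationForest

    # Hypothetical flow features: [bytes_sent, packets, distinct_ports]
    baseline_flows = [
        [1200, 10, 1], [1500, 12, 1], [900, 8, 2], [1100, 9, 1],
        [1300, 11, 1], [1000, 10, 2], [1400, 12, 1], [950, 9, 1],
    ]

    detector = IsolationForest(contamination=0.05, random_state=0)
    detector.fit(baseline_flows)

    new_flows = [[1250, 10, 1], [90000, 400, 60]]  # the second flow resembles a scan/exfiltration
    for flow, label in zip(new_flows, detector.predict(new_flows)):
        if label == -1:  # IsolationForest returns -1 for anomalies, +1 for inliers
            print("alert: anomalous flow", flow)  # hand off to a human or an automated response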
Who makes the decision?
• Human:
• AI performs anomaly detection
• Human notified
• IT analysis
• Human response
• Machine:
• AI performs anomaly detection
• AI decides best method of response
• If decision is to "hack back", one view is that AI becomes an autonomous cyber weapon
• Issues with each of these? (a rough sketch of the two loops follows)
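A rough Python sketch of the two decision paths above, with hypothetical helper names (notify_soc, isolate_host) and an assumed confidence threshold. It illustrates the human-versus-machine split only; it is not a recommended response design.

    # Hypothetical helpers and threshold; illustration of the two loops only.
    def notify_soc(event):
        print("SOC notified:", event)

    def isolate_host(host):
        print("isolating host:", host)

    def handle_anomaly(event, mode="human", confidence=0.0):
        if mode == "human":
            # AI detects, people decide: notify analysts and wait for a human response.
            notify_soc(event)
            return "queued for IT analysis and human response"
        # Machine mode: AI both detects and chooses the response.
        if confidence > 0.95:
            isolate_host(event["host"])  # containment here, not "hacking back"
            return "automated containment taken"
        notify_soc(event)  # low confidence: fall back to the human loop
        return "escalated to human review"

    print(handle_anomaly({"host": "10.0.0.5", "type": "beaconing"},
                         mode="machine", confidence=0.97))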
Use Case – Security Software
• AEG (automatic exploit generation): the first software system for fully automatic exploit generation
http://security.ece.cmu.edu/aeg/aeg-current.pdf
• DeepArmor uses machine learning, natural language
processing, and AI algorithms to analyze and secure files
https://sparkcognition.com/deeparmor/
• Cylance uses artificial intelligence for predicting, preventing,
and protecting against cyberattacks
https://www.cylance.com/
Use Case – Automotive
• 2007: DARPA Grand Challenge entrant navigated 55 miles of
city streets, followed all traffic laws, and avoided all hazards
• 2016: ZMP taking pre-orders for totally autonomous vehicle
Use Case – Automotive (cont’d)
• IHS report on install rate of AI systems in new vehicles:
• 8% in 2015, mostly focused on speech recognition
• 109% in 2025 (forecast), due to multiple AI systems of various types
• AI-based cybersecurity expected to be part of such systems
• AI for automotive cybersecurity (an illustrative sketch follows this list)
• Monitor all software/firmware systems in the car
• Detect, examine, and adapt to vulnerabilities, exploits, and unexpected behavior of software in the automotive environment
• Issues around graceful handling of various scenarios
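As noted in the list above, an illustrative-only sketch of the monitoring idea: learn a baseline of CAN IDs observed during normal operation, then flag frames that fall outside it. The frame representation and the specific checks are assumptions for illustration, not a production in-vehicle design.

    # Assumed frame representation: (can_id, payload_bytes); checks are illustrative.
    from collections import Counter

    def learn_baseline(frames):
        """frames: iterable of (can_id, payload) pairs observed during normal operation."""
        ids = [cid for cid, _ in frames]
        return {"known_ids": set(ids), "id_counts": Counter(ids)}

    def check_frame(frame, baseline):
        can_id, payload = frame
        if can_id not in baseline["known_ids"]:
            return "alert: unknown CAN ID {:#x}".format(can_id)
        if len(payload) > 8:  # classical CAN payloads are at most 8 bytes
            return "alert: oversized payload on ID {:#x}".format(can_id)
        return "ok"

    baseline = learn_baseline([(0x244, b"\x01\x02"), (0x3E8, b"\x00" * 8)])
    print(check_frame((0x7FF, b"\xde\xad"), baseline))  # unknown ID -> alert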
Use Case – Automotive (cont’d)
• Questions for discussion:
• How much certainty is needed before an AI-based system takes action in response to what it perceives as a cyber threat in an automotive environment?
• Should actions be different based on critical versus non-critical systems?
• What happens if AI detects a cyber issue in a critical system?
• What should graceful response look like when reacting to cyber attacks?
• What happens if AI is not adopted?
• What liability concerns exist in the above scenarios?
Government Efforts
• DARPA Cyber Grand Challenge (August 2016)
• Three-year project to advance automated cyber defense
• Looking for systems to detect, evaluate, and patch software vulnerabilities
before adversaries have a chance to exploit them
• At DEFCON 2016, several teams competed for 10 hours in a “specially
created computer testbed laden with an array of bugs hidden inside
custom, never-before-analyzed software.”
• Competitors’ systems had to find and patch, within seconds rather than the usual months, flawed code that was vulnerable to being hacked, and find their opponents’ weaknesses before the defending systems did.
Government Efforts (cont’d)
• Preparing for the Future of AI (October 2016)
• 58-page report from the EOP National Science and
Technology Council (NSTC) subcommittee on
machine learning and AI, based on internal research,
a June 2016 RFI, and five public workshops
• Recommendation: “plans and strategies [of
Government and private industry] should account for
the influence of AI on cybersecurity, and of
cybersecurity on AI”
Government Efforts (cont’d)
• Artificial Intelligence, Automation, and the
Economy (December 2016)
• 55-page follow-up report that specifically investigates the effects of AI on the U.S. job market and economy and outlines recommended policy responses
• Strategy #1: Invest in and develop AI for its many
benefits.
• First area listed: “cyberdefense and the detection of
fraudulent transactions and messages”
Wrap Up
and
Questions
About the Speaker
Randy V. Sabett, J.D., CISSP
Vice Chair, Privacy & Data Protection Practice Group
Cooley LLP
Mr. Sabett is a former NSA crypto engineer, whose practice focuses on
cybersecurity, privacy, licensing, and IP, dealing with such issues as risk
assessment, corporate liability for privacy and data security, identity management,
EU data privacy issues, active defense, electronic signatures, state and federal
information security laws, and security breaches. Mr. Sabett has managed
numerous data breach responses involving major retailers, financial and health care organizations, and online service providers. He served on the Commission
on Cybersecurity for the 44th Presidency and has been recognized as a leader in
Privacy and Data Security in the 2007-2016 editions of Chambers USA. Mr.
Sabett is a member of the Boards of Directors for the Georgetown Cybersecurity
Law Institute and the Northern Virginia chapter of ISSA, a frequent lecturer and
author, and has appeared on or been quoted in a variety of national media
sources.