Incentive compatibility in data security

Felix Ritchie, ONS
(Richard Welpton, Secure Data Service)
Overview
• Research data centres
• Traditional perspectives
• A principal-agent problem?
• Behaviour re-modelling
• Evidence and impact
Research data centres
• Controlled facilities for access to
sensitive data
• Enjoying a resurgence as ‘virtual’ RDCs
– Exploit benefits of an RDC
– Avoid physical access problems
• ‘People risk’ key to security
The traditional approach
Parameters of access
• NSI
– Wants research
– Hates risk
– Sees security as essential
• Researcher
– Wants research
– Sees security as a necessary evil
A classic principal-agent problem?
NSI perspective
• Be careful
• Be grateful
Researcher perspective
• Give me data
• Give me a break!
Objectives
V_NSI = U(risk-, Research+) – C(control+)
risk = R(control-, trust-) < R_min
Research = f(V_i+)
V_i (researcher_i) = U(research_i+, control-)
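Read as a formal model, the notation above can be set out as follows. This is a sketch only, assuming the trailing + and - marks denote the signs of the partial effects and R_min is the NSI's maximum acceptable risk:

```latex
% Sketch of the objectives slide; signs of partial effects written as superscripts
\begin{align*}
V_{NSI}  &= U(risk^{-},\ Research^{+}) - C(control^{+}) \\
risk     &= R(control^{-},\ trust^{-}) < R_{\min} \\
Research &= f(V_i^{+}) \\
V_i      &= U(research_i^{+},\ control^{-})
\end{align*}
```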
A principal-agent problem?
NSI:
– Maximise research s.t. maximum risk
– Trust = T(law_fixed) = T(training(law_fixed), law_fixed)
– Risk = Risk_min
Researcher:
– Maximise research
– Control = Control_fixed
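In this traditional reading, trust and control are parameters rather than choices. A hedged LaTeX sketch of the two separate programmes (same notation as above, with law_fixed and Control_fixed treated as given):

```latex
% Traditional "them and us" reading: trust and control are fixed, not chosen
\begin{align*}
\text{NSI:} \quad & \max\ Research \quad \text{s.t.}\quad risk = R(control,\ trust) \le R_{\min},
   \quad trust = T(law_{fixed}) \\
\text{Researcher:} \quad & \max_{research_i}\ V_i = U(research_i^{+},\ control^{-})
   \quad \text{s.t.}\quad control = Control_{fixed}
\end{align*}
```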
Dependencies
[Diagram: dependency structure linking V_NSI, Research, Risk, V_i and research_i; choice variables: control and trust]
Consequences: inefficiency?
• NSI
– Little incentive to develop trust
– Limited gains from training
– Access controls focus on deliberate misuse
• Researcher
– Access controls are a cost of research
– No incentive to build trust
More objectives, more choices
[Diagram: as before but with a wider choice set; V_NSI, Research, Risk, V_i and research_i now linked through control, effort, trust and training]
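Once effort and training enter as choices, trust and control stop being fixed parameters. A hedged sketch of the extended structure, consistent with the closing 'Objectives' slide (where compliance is taken to reflect researcher effort):

```latex
% Extended model: trust and control become endogenous
\begin{align*}
risk    &= R(control,\ trust) \\
control &= C(compliance,\ trust) \\
trust   &= T(training,\ compliance)
\end{align*}
```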
Intermission:
What do we know?
Conversation pieces
☒ • Researchers are malicious
☑ • Researchers are untrustworthy
☒ • Researchers are not security-conscious
☒ • NSIs don’t care about research
☑ • NSIs don’t understand research
☑ • NSIs are excessively risk-averse
Some evidence
• Deliberate misuse
– Low credibility of legal penalties
– Probability of detection more important
– Driven by ease of use
• Researchers don’t see ‘harm’
• Accidental misuse
– Security seen as NSI’s responsibility
• Contact affects value
Developing true
incentive compatibility
Incentive compatibility for RDCs
• Align aims of NSI & researcher
– Agree level of risk
– Agree level of controls
– Agree value of research
• Design incentive mechanism for default
– Minimal reward system
– Significant punishments
• Bad economics?
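Not from the presentation, but a toy numerical sketch (invented numbers) of the alignment idea: once the researcher internalises part of the loss from a breach, for example losing future access, security effort stops being a pure cost and the researcher's own optimum moves towards the NSI's.

```python
# Toy illustration (hypothetical numbers): how sharing the downside of a breach
# changes the researcher's chosen level of security effort.

def breach_probability(effort: float) -> float:
    """Chance of a damaging incident falls with the researcher's security effort (0..1)."""
    return max(0.0, 0.5 - 0.4 * effort)

def researcher_payoff(effort: float, shares_downside: bool) -> float:
    """Research value minus the cost of effort, minus any share of breach losses."""
    research_value = 10.0
    effort_cost = 3.0 * effort                          # time on security is time not spent on research
    expected_loss = breach_probability(effort) * 20.0   # e.g. losing future access to the RDC
    penalty = expected_loss if shares_downside else 0.0
    return research_value - effort_cost - penalty

def best_effort(shares_downside: bool) -> float:
    """Pick the effort level (on a coarse grid) that maximises the researcher's payoff."""
    grid = [i / 10 for i in range(11)]
    return max(grid, key=lambda e: researcher_payoff(e, shares_downside))

if __name__ == "__main__":
    print("Effort when security is 'the NSI's problem':", best_effort(False))  # -> 0.0
    print("Effort when the researcher shares the risk:", best_effort(True))    # -> 1.0
```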
Changing the message (1)
behaviour of researchers
• Aim
– researchers see risk to facility as risk to them
• Message
– we’re all in this together
– no surprises, no incongruities
– we all make mistakes
• Outcome
– ‘shopping’ (reporting others’ breaches)
– ‘fessing up’ (admitting one’s own mistakes)
Changing the message (2)
behaviour of NSI
• Aim
– positive engagement with researchers
– realistic risk scenarios
• Message
– research is a repeated game
– researchers will engage if they know how
– contact with researchers is of value per se
– we all make mistakes
• Outcome
– improved risk tolerance
Changing the message (3)
clearing research output
• Aim
– clearances reliably good & delivered speedily
• Message
– we’re human, with finite resources and patience
– you live with crude measures, but
– you tell us when it’s important
– we all make mistakes
• Outcome
– few repeat offenders
– high volume, quick response, wide range
– user-input into rules
Changing the message (4)
VML (Virtual Microdata Laboratory) to SDS (Secure Data Service) transition
• Aim
– get VML users onto SDS with minimal fuss
• Message
– we’re human, with finite resources and patience
– don’t ask us to transfer data
– unless it’s important
• Outcome
– most users just transfer syntax
– (mostly) good arguments for data transfer
Changing the message:
summary
• we all know what we all want
• we all know each other’s concerns
• we’ve all agreed the way forward
• we are all open to suggestions
• we’re all human
IC in practice
• Cost
– VML at full operation c.£150k p.a.
– Secure Data Service c. £300k
– Denmark, Sweden, NL €1m-€5m p.a.
• Failures
– Some refusals to accept objectives
– VML bookings
– Limited knowledge/exploitation of research
– Limited development of risk tolerance
Summary
• ‘Them and us’ model of data security is
inefficient
• Punitive model of limited effectiveness
• Lack of information causes divergent
preferences
• Possible to align preferences directly
• It works!
Felix Ritchie
Microdata Analysis & User Support
ONS
Objectives
V_NSI = U(risk-, Research+) – C(control+)
V_i (researcher_i) = U(risk-, research_i+, control-)
risk = R(control, trust)
control = C(compliance, trust)
trust = T(training, compliance)