
Confidentiality-preserving Proof Theories
for Distributed Proof Systems
Kazuhiro Minami
National Institute of Informatics
FAIS 2011
Distributed proving is an effective way to combine information in different administrative domains
• Distributed authorization
– Make a granting decision by constructing a proof
from security policies
– Examples: DL[Li03], DKAL [Gurevich08], SD3
[Jim01], SecPAL [Becker07], and Grey [Bauer05]
• Data fusion in pervasive environments
– Infer a user’s activity from sensor data owned by
different organizations
Distributed Proof System
• Consists of multiple principals, each of which has a knowledge base and an inference engine
• Constructs a proof by exchanging proofs in a peer-to-peer way
• Supports rules and facts in Datalog with the says operator (as in BAN logic)
[Figure: principals exchange quoted facts (e.g., p1 says f1) with one another]
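For instance, a hypothetical rule of this form (an illustrative example, not one from the talk) is

  may_enter(U, door1) ← (p1 says employee(U)), (p2 says inside(U, building1))

whose body consists of the two quoted facts (p1 says employee(U)) and (p2 says inside(U, building1)); the rule's owner can derive the head only after p1 and p2 vouch for those facts.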
Protecting each domain’s confidential
information is crucial
• Each organization in a virtual business
coalition needs to protect its proprietary
information from the others
• A location server must protect users’ location
information with proper privacy policies
• To do this, principals in a distributed proof
system could limit access to their sensitive
information with discretionary access-control
policies
Determining the safety of a system involving multiple principals is not trivial
Suppose that principal p0 is willing to disclose the truth of fact f0 only to p2.
What if p2 still derives fact f2?
Problem Statements
• How should we define confidentiality and safety in distributed proof systems?
• Is it possible to derive more facts than a system that enforces confidentiality policies on a principal-to-principal basis?
• If so, is there an upper bound on the proving power of distributed proof systems?
Outline
• System model based on a TTP
• Safety definition based on non-deducibility
• Safety analysis
– DAC system
– NE system
– CE system
• Conclusion
Abstract System Model
• Parameterize a distributed proof system D
with a set of inference rules I and a finite set
of principals P (i.e., D[P, I])
Datalog inference rule: f ← q1, …, qn, where each qi in the body is a quoted fact
• Only consider the initial and final states of system D, based on a trusted third-party (TTP) model
Reference System D[IS]
• The body of a rule contains a set of quoted facts (e.g., q1 = (p1 says f1))
• All the information is freely shared among principals
• Inference rules: (COND) and (SAYS), reconstructed below
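The bodies of the two rule schemas are not shown above; a plausible reconstruction, assuming (COND) is ordinary Datalog modus ponens over quoted facts and (SAYS) introduces the says operator (the notation is assumed, not necessarily the paper's):

  (COND)  if (f ← q1, …, qn) ∈ KBi and q1, …, qn all hold, then f ∈ KBi
  (SAYS)  if f ∈ KBi, then (pi says f) holds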
TTP is a fixpoint function that
computes the final state of a system
[Figure: each principal pi submits its knowledge base to the trusted third party (TTP), which applies the inference rules I and returns the final state KBi = fixpointi(KB) to each pi, for i = 1, …, n]
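As a concrete reading of the TTP model, the following minimal Python sketch pools every principal's facts and rules and iterates (COND)/(SAYS) to a least fixpoint; the data representation and names are assumptions for illustration, not the paper's definitions.

```python
def ttp_fixpoint(facts, rules):
    """Compute the final system state under the reference rules.

    facts: set of quoted facts, each a (principal, fact) pair,
           read as 'principal says fact'.
    rules: iterable of (owner, head, body) triples, where body is a
           frozenset of quoted facts; read as 'head <- body' in KB_owner.
    """
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for owner, head, body in rules:
            # COND: if every quoted fact in the body holds, the owner
            # derives head; SAYS: the owner may then quote it, so we
            # record the quoted fact (owner, head) directly.
            if body <= derived and (owner, head) not in derived:
                derived.add((owner, head))
                changed = True
    return derived

# Example: p0 derives f0 once p1 and p2 have quoted f1 and f2.
facts = {("p1", "f1"), ("p2", "f2")}
rules = [("p0", "f0", frozenset({("p1", "f1"), ("p2", "f2")}))]
assert ("p0", "f0") in ttp_fixpoint(facts, rules)
```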
Soundness Requirement
Any confidentiality-preserving system D[I] should not prove a fact
that is not provable with the reference system D[IS]
Definition (Soundness)
A distributed proof system D[I] is sound if, for every initial state KB, every fact provable in D[I] from KB is also provable in the reference system D[IS] from KB.
Outline
• System model based on a TTP
• Safety definition based on non-deducibility
• Safety analysis
– DAC system
– NE system
– CE system
• Conclusion
Confidentiality Policies
• Each principal defines a discretionary access-control policy on its local facts
• Each confidentiality policy is defined with the
predicate release(principal_name, fact_name)
• E.g., if Alice is willing to disclose her location
to Bob, she could add the policy
– release(Bob, loc(Alice, L)) to her knowledge base.
Attack Model
• A set A of malicious, colluding principals tries to infer the truth of a confidential fact f in a non-malicious principal pi's knowledge base KBi
[Figure: system D with a set A of malicious principals. Fact f0 is confidential because none of the principals in A is authorized to learn its truth; fact f1 is NOT confidential because p4 is authorized to learn its truth]
Attack Model (Cont.)
Malicious principals only use their initial and final states to perform inferences
[Figure: within system D, only the initial and final states of the principals in A are available to the attacker]
Sutherland's non-deducibility model captures inferences by considering all possible worlds W
Consider two information functions v1: W → X (the public view) and v2: W → Y (the private view).
[Figure: an observer who learns x = v1(w) narrows the possible worlds to W' = { w | v1(w) = x }; information flows if this rules out some value y' of the private view v2, which must not happen]
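In symbols, Sutherland's condition for the absence of information flow from v2 to v1 is:

  ∀x ∈ v1(W), ∀y ∈ v2(W), ∃w ∈ W such that v1(w) = x and v2(w) = y

that is, every observable public value x is compatible with every possible private value y, so observing x rules nothing out about v2.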
Nondeducibility considers information flow between two information functions over the system configuration
• v1: the initial and final states of the malicious principals in set A
• v2: the confidential facts actually maintained by the non-malicious principals
• W: the set of possible initial configurations
[Figure: information flow from v2 to v1]
Safety Definition
We say that a distributed proof system D[P, I] is safe if
for every possible initial state KB,
for every possible subset of principals A,
for every possible subset of confidential facts Q,
there exists another initial state KB’ such that
1. v1(KB) = v1(KB’), and
2. Q = v2(KB’).
Condition 1: the malicious principals in A have the same initial and final local states under KB and KB'.
Condition 2: the non-malicious principals could possess any subset Q of the confidential facts.
Outline
• System model based on a TTP
• Safety definition based on non-deducibility
• Safety analysis
– DAC system
– NE system
– CE system
• Conclusion
DAC System D[IDAC]
Enforce confidentiality policies on a principal-to-principal basis
Inference rules: (COND) and (DAC-SAYS); a sketch of (DAC-SAYS) follows
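A plausible shape for (DAC-SAYS), assuming it simply guards the reference system's (SAYS) rule with the discloser's release policy (the notation is an assumption, not the paper's exact rule):

  (DAC-SAYS)  if f ∈ KBi and release(pj, f) ∈ KBi, then pj may learn (pi says f)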
Example Derivations in D[IDAC]
[Figure: example derivation applying (DAC-SAYS) and (COND)]
D[P, IDAC] is Safe because derivations performed by one principal are invisible to the others
Let P = {p0, p1} and A = {p1}.
[Figure: principal p1 cannot distinguish the initial state KB0 from the alternative KB'0, since its own state KB1 is the same in both configurations]
NE System D[INE]
• Introduce function Ei to represent an encrypted
value
• Associate each fact or quoted fact q with an encrypted value e
• Each principal performs an inference on an
encrypted fact (q, e)
• Principals cannot infer the truth of an encrypted
fact without decrypting it
• TTP discards encrypted facts from the final system
state
Inference Rules INE
Rules (ECOND), (ENC-SAYS), (DEC1), and (DEC2); a sketch of the nesting they induce follows
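The rule names suggest layered encryption with decryption of one layer at a time; the following Python sketch, which models Ei symbolically rather than cryptographically and is purely illustrative, shows why nested encryption forces decryption in the exact reverse order of encryption.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Enc:
    """One encryption layer E_i(payload), modeled symbolically."""
    owner: str       # the principal p_i whose key produced this layer
    payload: object  # another Enc layer or the cleartext value

def encrypt(owner, value):
    return Enc(owner, value)

def decrypt(owner, value):
    # A principal can only peel the *outermost* layer it owns, so layers
    # added in the order p1, p2 must be removed in the reverse order.
    if isinstance(value, Enc) and value.owner == owner:
        return value.payload
    raise ValueError(f"{owner} cannot decrypt the outer layer")

v = encrypt("p2", encrypt("p1", True))  # p1's layer first, then p2's
v = decrypt("p2", v)                    # p2's layer must come off first
v = decrypt("p1", v)                    # only then can p1 decrypt
assert v is True
```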
Example Derivations
[Figure: example derivation chaining (ENC-SAYS), (ECOND), (ENC-SAYS), then (DEC1), (DEC1), (DEC2), and (ECOND)]
Analysis of System D[INE]
• The strategy we use for the DAC system does not
work
• Need to make sure that every malicious principal receives an encrypted fact of the same structure
[Figure: the malicious principals in A receive encrypted facts of the same structure in either configuration]
NE System is Safe
• All the encrypted values must be decrypted in the exact reverse order
• Can collapse a proof for a malicious principal's fact so that all the confidential facts are mentioned only in non-malicious principals' rules
• Thus, can hide all the confidential facts from the malicious principals by modifying non-malicious principals' rules
Conversion Method – Part 1
• Keep collapsing proofs by modifying non-malicious principals' rules
– If a proof contains a subsequence
replace the sequence above with
• Eventually, all the confidential facts appear only in non-malicious principals' rules
Conversion Method – Part 2
• Given a set of quoted facts Q that should be in KB’
• Case 1: (pi says f) is not in Q, but f is in KBi*
– Remove (pi says f) from the body of every non-malicious principal's rule
• Case 2: (pi says f) is in Q, but f is not in KBi*
– Remove every non-malicious principal's rule whose body contains (pi says f)
(both cases are sketched in code below)
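A minimal Python sketch of Part 2, assuming rules and quoted facts are represented as tuples (all names here are illustrative assumptions, not the paper's notation):

```python
def convert(rules, Q, kb_star):
    """Rewrite non-malicious principals' rules for the alternative KB'.

    rules:   iterable of (owner, head, body); body is a set of quoted
             facts, each a (principal, fact) pair.
    Q:       quoted facts that should hold in KB'.
    kb_star: quoted facts (pi, f) such that f is in KBi* (final state).
    """
    new_rules = []
    for owner, head, body in rules:
        # Case 2: drop the whole rule if its body mentions a quoted fact
        # in Q whose plain fact does not actually hold.
        if any(qf in Q and qf not in kb_star for qf in body):
            continue
        # Case 1: drop body atoms that hold in the final state but lie
        # outside Q.
        body = frozenset(qf for qf in body
                         if not (qf not in Q and qf in kb_star))
        new_rules.append((owner, head, body))
    return new_rules
```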
CE System D[ICE] is NOT safe
• An encrypted value can be decrypted in an arbitrary order (rule (CE-DEC))
• Consequently, we cannot collapse a proof as
we did for the NE system
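The intuition: with commutative encryption, Ei(Ej(x)) = Ej(Ei(x)), so either principal can peel its own layer first, e.g. Dj(Ei(Ej(x))) = Ei(x). This destroys the strict last-in-first-out decryption order that the collapsing argument for the NE system relied on.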
Summary
• Develop formal definitions of safety for
distributed proof systems based on the notion of
nondeducibility
• Show that the NE system, which derives more facts than the DAC system, is indeed safe
• Show that the CE system, which extends the NE system with commutative encryption, is not safe
• The proof system with the maximum proving
power exists somewhere between the NE and CE
systems
Thank you!