Relative Perfect Secrecy:
Universally Optimal Strategies and Channel Design
Arman Khouzani and Pasquale Malacaria
Queen Mary University of London
Fontainebleau: April 26, 2016
1 / 24
Content
Shannon’s perfect secrecy theorem implies that, for perfect secrecy, the adversary must not be able to eliminate any of the possible secrets.
We answer the following fundamental question: what is the lowest leakage of information that can be achieved when some of the secrets have to be eliminated?
We address this question by providing “universally optimal” randomized strategies.
These strategies guarantee the minimum leakage irrespective of how we measure optimality.
2 / 24
The Monty Hall problem
There is a prize behind one door, and goats behind two other
doors.
The player chooses one door; the game host then opens one of the unchosen doors behind which there is a goat.
Should the player revise their guess?
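A quick simulation (my own sketch, not part of the slides) suggests the answer: switching wins roughly 2/3 of the time, while staying wins roughly 1/3.

import random

def play(switch, doors=3, trials=100_000):
    wins = 0
    for _ in range(trials):
        prize = random.randrange(doors)
        choice = random.randrange(doors)
        # the host opens a door that is neither the player's choice nor the prize
        opened = random.choice([d for d in range(doors) if d not in (choice, prize)])
        if switch:
            choice = next(d for d in range(doors) if d not in (choice, opened))
        wins += (choice == prize)
    return wins / trials

print(play(switch=False))   # about 0.33
print(play(switch=True))    # about 0.67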
3 / 24
The Monty Hall problem
4 / 24
The Monty Hall problem: Information Flow
The player should switch, because by opening the door the host has leaked information about the prize...
Leak ⇔ switch
5 / 24
Cloaks
the host’s strategy consists of a set of “cloaks”: a cloak is a set of possible secrets that includes the real secret
for Monty Hall the cloak size is 2 (the two closed doors)
if the cloak size were 3 then perfect secrecy could be achieved
is there an optimal strategy (for the game host) for cloak size 2?
6 / 24
Examples of cloaks
Given a set of possible secrets, a size-k cloak is a subset of k secrets that includes the real secret
e.g. in geolocation privacy, a cloak could be a set of possible locations including your actual location
in cryptographic computation, a cloak could be a set of computations taking time between T and T+R, including the actual computation (this is also called a bucket; see the sketch below)
in database privacy, a K-anonymous query would be a cloak of size K
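A toy sketch of the timing-bucket idea (my own illustration; the function and parameter names are hypothetical): every computation that finishes within the same bucket of width R is padded up to that bucket's boundary, so all of them yield the same observable running time.

import time

def run_in_bucket(f, T, R):
    """Run f() and pad its running time up to the next bucket boundary T + m*R."""
    start = time.monotonic()
    result = f()
    elapsed = time.monotonic() - start
    bucket_end = T + R * (int(max(elapsed - T, 0.0) // R) + 1)   # next boundary above elapsed
    time.sleep(max(0.0, bucket_end - elapsed))                   # hide the exact duration
    return result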
7 / 24
Optimal?
“is there an optimality theorem generalising Shannon’s result?”
Optimal in what sense? Minimal leakage in terms of bits? Maximum average number of questions to guess the secret? Minimizing the maximal probability of guessing the secret in 1 guess? In n guesses? etc. (Rényi’s entropies...)
The above are different measures: e.g. there are families of distributions with unbounded leakage in terms of bits but with constant maximal probability of guessing the secret in 1 guess (see the illustration below).
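A numeric illustration of that last point (my own, not from the slides): in the family below the Shannon entropy grows without bound, while the probability of guessing the secret in one try stays fixed at 1/2, so the two measures can disagree arbitrarily.

import math

for n in (10, 1_000, 100_000):
    p = [0.5] + [0.5 / n] * n                 # one likely secret plus n unlikely ones
    H = -sum(x * math.log2(x) for x in p)
    print(n, round(H, 2), max(p))             # H grows roughly like 0.5*log2(n); max prob stays 0.5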
8 / 24
Optimal!
“is there an optimality theorem generalising Shannon’s result?”
Optimal in what sense? Minimal leakage in terms of bits? Maximum average number of questions to guess the secret? Minimizing the maximal probability of guessing the secret in 1 guess? In n guesses? etc. (Rényi’s entropies...)
It does not matter how optimality is measured: our strategy is optimal w.r.t. “all possible measures”.
It holds for all measures that are Schur-concave¹, symmetric and expansible (all of the above are examples, and many more).
¹ in fact we use something slightly more abstract, called core-concavity
9 / 24
Optimal! Schur concavity
f is expansible if appending a zero does not affect the function:
$f([x_1, \dots, x_n]) = f([x_1, \dots, x_n, 0])$
f is Schur-concave if the more uniform the “distribution”, the higher the value of f:
Let $v = [x_1, \dots, x_n]$ and $v' = [x'_1, \dots, x'_n]$ with $x_i \ge x_{i+1}$ and $x'_i \ge x'_{i+1}$. Define $v \succeq v'$ (v majorizes v') if
$\sum_{j \le i} x_j \ge \sum_{j \le i} x'_j$ for every $1 \le i < n$, and $\sum_{j \le n} x_j = \sum_{j \le n} x'_j$.
f is Schur-concave if $v \succeq v'$ implies $f(v) \le f(v')$.
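A quick numeric check (my own sketch, not from the slides): Shannon entropy is symmetric, expansible and Schur-concave, so a vector that majorizes another (i.e. is less uniform) gets a lower value.

import numpy as np

def shannon(v):
    v = np.array([x for x in v if x > 0], dtype=float)
    return float(-np.sum(v * np.log2(v)))

def majorizes(v, w):
    """True if v majorizes w: equal totals, larger partial sums when sorted descending."""
    cv = np.cumsum(sorted(v, reverse=True))
    cw = np.cumsum(sorted(w, reverse=True))
    return np.isclose(cv[-1], cw[-1]) and bool(np.all(cv >= cw - 1e-12))

v = [0.7, 0.2, 0.1]                    # less uniform
w = [0.4, 0.35, 0.25]                  # more uniform; v majorizes w
assert majorizes(v, w) and shannon(v) <= shannon(w)     # Schur concavity
assert np.isclose(shannon(v), shannon(v + [0.0]))       # expansibility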
10 / 24
technical preliminaries
Given k and P = (p_1, \dots, p_n), sorted in descending order, let the index J be defined as follows:

$J := \min\Big\{\, j : 1 \le j \le k,\ p_j \le \frac{\sum_{i=j}^{n} p_i}{k - j + 1} \,\Big\}$   (1)

For a j ∈ {1, \dots, k} and prior distribution P = (p_1, \dots, p_n), let $\pi_j$ denote the probability distribution over k elements defined as follows:

$\pi_j := \Big( p_1, \dots, p_{j-1}, \frac{\sum_{i=j}^{n} p_i}{k - j + 1}, \dots, \frac{\sum_{i=j}^{n} p_i}{k - j + 1} \Big)$, i.e.:

$\pi_j(l) = p_l \ \text{ for } l \le j-1, \qquad \pi_j(l) = \frac{\sum_{i=j}^{n} p_i}{k - j + 1} \ \text{ for } j \le l \le k.$   (2)
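A minimal sketch of definitions (1) and (2) (my own code, not the authors'):

def find_J(p, k):
    """J = min{ 1 <= j <= k : p_j <= (p_j + ... + p_n)/(k - j + 1) }, for p sorted descending."""
    for j in range(1, k + 1):
        tail = sum(p[j - 1:])
        if p[j - 1] <= tail / (k - j + 1):
            return j
    raise ValueError("no admissible j; is p sorted in descending order?")

def pi(p, k, j):
    """pi_j: keep p_1,...,p_{j-1}; spread the remaining mass evenly over the other k-j+1 entries."""
    tail = sum(p[j - 1:])
    return p[:j - 1] + [tail / (k - j + 1)] * (k - j + 1)

p = [0.4, 0.35, 0.15, 0.1]     # the prior used in Example 2 below
J = find_J(p, 3)               # J = 3
print(J, pi(p, 3, J))          # 3 [0.4, 0.35, 0.25]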
11 / 24
Main result
Theorem
Let P = (p_1, \dots, p_n) be the prior (sorted in descending order), and let k be the maximum permissible size of the cloaks. Let the index J and the probability distributions $\pi_j$ be defined as before. Let the entropy be measured by a symmetric, expansible and Schur-concave function H. Then:
A. The maximum achievable posterior entropy among all (potentially randomized) cloaking strategies is $H(\pi_J)$.ᵃ
B. There is an algorithm which explicitly provides a feasible (randomized) cloaking strategy that achieves the above.
ᵃ maximal posterior entropy implies minimal leakage, as leakage = H(prior) − H(posterior)
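A small worked instance of the theorem (my own numbers, reusing the Example 1 prior below), taking H to be Shannon entropy:

import math

def H(v):
    return -sum(x * math.log2(x) for x in v if x > 0)

prior = [0.3, 0.28, 0.22, 0.2]          # Example 1 below, with k = 3, hence J = 1
pi_J  = [1/3, 1/3, 1/3]                 # the maximum achievable posterior distribution
print(H(prior) - H(pi_J))               # minimal Shannon leakage, roughly 0.40 bits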
12 / 24
Algorithm
Input: P = (p_1, \dots, p_n) in descending order, k
Output: $\delta(M; \theta)$ for all $\theta \in \Theta$ and all $M \in \mathcal{M}$
1: Find $J \leftarrow \min\{\, 1 \le j \le k : p_j \le \frac{\sum_{i=j}^{n} p_i}{k - j + 1} \,\}$
2: Solve $\sum_{M \in \mathcal{M}^*(\{1,\dots,J-1,i\})} x_M = p_i$ for all $i = J, \dots, n$, s.t. $x_M \ge 0$ for all $M \in \mathcal{M}^*(\{1,\dots,J-1\})$
3: $\delta(M; i) \leftarrow x_M / p_i$ for all $i = J, \dots, n$ and all $M \in \mathcal{M}^*(\{1,\dots,J-1,i\})$
4: $\delta(M; i) \leftarrow x_M (k - J + 1) / \sum_{j=J}^{n} p_j$ for all $i = 1, \dots, J-1$ and all $M \in \mathcal{M}^*(\{1,\dots,J-1\})$
5: $\delta(M; \theta) \leftarrow 0$ everywhere else
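A minimal sketch of this algorithm (my own code, not the authors'). It reads $\mathcal{M}^*(S)$ as the set of size-k cloaks containing S, and uses scipy's linprog with a zero objective only to find some feasible non-negative $x_M$:

from itertools import combinations
import numpy as np
from scipy.optimize import linprog

def cloaking_strategy(p, k):
    """p: prior in descending order; returns (J, delta), delta[(M, i)] = Pr[announce M | secret i]."""
    n = len(p)
    tails = [sum(p[j:]) for j in range(n)]                          # tails[j-1] = p_j + ... + p_n
    J = next(j for j in range(1, k + 1)
             if p[j - 1] <= tails[j - 1] / (k - j + 1))              # step 1
    head = tuple(range(1, J))                                        # the secrets 1, ..., J-1
    cloaks = [head + rest for rest in combinations(range(J, n + 1), k - J + 1)]   # M*({1..J-1})
    # step 2: for every i = J..n, the x_M of cloaks containing i must sum to p_i
    A_eq = np.array([[1.0 if i in M else 0.0 for M in cloaks] for i in range(J, n + 1)])
    b_eq = np.array([p[i - 1] for i in range(J, n + 1)])
    res = linprog(np.zeros(len(cloaks)), A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * len(cloaks))
    assert res.success, "no feasible x_M found"
    delta = {}
    for M, xM in zip(cloaks, res.x):
        for i in M:
            if i >= J:
                delta[(M, i)] = xM / p[i - 1]                        # step 3
            else:
                delta[(M, i)] = xM * (k - J + 1) / tails[J - 1]      # step 4
    return J, delta                                                  # anything not listed is 0 (step 5)

J, delta = cloaking_strategy([0.3, 0.28, 0.22, 0.2], 3)              # Example 1 below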
13 / 24
Example 1
Consider 4 secrets 1, 2, 3, 4 and k = 3. We then have the following possible size-3 cloaks:
M1 = {1, 2, 3}, M2 = {1, 2, 4}, M3 = {1, 3, 4}, M4 = {2, 3, 4}
Consider the following prior over the secrets: (0.3, 0.28, 0.22, 0.2).
Then the following is the optimal strategy given by the algorithm
          1        2        3        4
        0.3     0.28     0.22      0.2
M1 :  0.4444   0.4762   0.6061   0
M2 :  0.3778   0.4048   0        0.5667
M3 :  0.1778   0        0.2424   0.2667
M4 :  0        0.1190   0.1515   0.1667
14 / 24
Example 1
We have p_1 = 0.3 ≤ 1/k = 1/3 ≈ 0.33, hence J = 1, and the optimal strategy induces the posterior distribution (1/3, 1/3, 1/3).
          1        2        3        4
        0.3     0.28     0.22      0.2
M1 :  0.4444   0.4762   0.6061   0
M2 :  0.3778   0.4048   0        0.5667
M3 :  0.1778   0        0.2424   0.2667
M4 :  0        0.1190   0.1515   0.1667
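A quick consistency check of the table (my own arithmetic): for every announced cloak M, the posterior P(i | M) is proportional to p_i · δ(M; i), and it indeed comes out uniform over the three secrets in M. (The same computation applied to Example 2 below yields the posterior (0.4, 0.35, 0.25).)

prior = [0.3, 0.28, 0.22, 0.2]
table = {                                      # delta(M; i) for the secrets i in each cloak M
    "M1": {1: 0.4444, 2: 0.4762, 3: 0.6061},
    "M2": {1: 0.3778, 2: 0.4048, 4: 0.5667},
    "M3": {1: 0.1778, 3: 0.2424, 4: 0.2667},
    "M4": {2: 0.1190, 3: 0.1515, 4: 0.1667},
}
for name, row in table.items():
    joint = {i: prior[i - 1] * d for i, d in row.items()}
    total = sum(joint.values())
    print(name, [round(v / total, 3) for v in joint.values()])   # each prints [0.333, 0.333, 0.333]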
15 / 24
Example 2
Consider the following prior over the secrets: (0.4, 0.35, 0.15, 0.1).
Then the following is the optimal strategy given by the algorithm
          1        2        3        4
        0.4     0.35     0.15      0.1
M1    0.6000   0.6000   1.0000   0
M2    0.4000   0.4000   0        1.0000
M3    0        0        0        0
M4    0        0        0        0
p_1 = 0.4 > 1/k,
p_2 = 0.35 > (0.35 + 0.15 + 0.1)/(k − 1) = 0.6/2 = 0.3, and only
p_3 = 0.15 ≤ (0.15 + 0.1)/(k − 2) = 0.25/1 = 0.25, thus J = 3,
and the optimal strategy will always include 1 and 2 in the cloak
and induce the posterior distribution (0.4, 0.35, 0.25).
16 / 24
Channel design
A channel is a probabilistic system that, given a secret input, produces an observable; it can thus be seen as a conditional distribution.
A cloak can be seen as the set of secrets that can result in the same observable.
We can then use our result to design optimal channels, i.e. the optimal conditional distribution given a constraint on the pre-image size.
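A small sketch of this view (my own illustration): the optimal cloaking strategy of Example 1, read as a matrix δ(M; i) with secrets as rows and announced cloaks as observables, is a channel, i.e. a row-stochastic conditional distribution whose observable pre-images are exactly the cloaks.

import numpy as np

# rows = secrets 1..4, columns = observables M1..M4 (the values delta(M; i) from Example 1)
C = np.array([[0.4444, 0.3778, 0.1778, 0.0   ],
              [0.4762, 0.4048, 0.0,    0.1190],
              [0.6061, 0.0,    0.2424, 0.1515],
              [0.0,    0.5667, 0.2667, 0.1667]])
assert np.allclose(C.sum(axis=1), 1.0, atol=1e-3)        # each row is a conditional distribution
print([np.nonzero(C[:, m])[0] + 1 for m in range(4)])    # pre-image of each observable = a cloak of size <= 3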
17 / 24
Developments
The above result is then extended in two directions:
1. more general measures that generalize g-leakage
2. a game-theoretical interpretation
18 / 24
general measures
more general measures that generalize g-leakage²
consider not just secrets and their probabilities but also a gain associated with each secret (“not all secrets are created equal”)
we generalize g-leakage with the following g-entropy family:
$H_{\alpha,g}(\theta) = H_{\alpha,g}(P) := H_\alpha(G\,P) = \frac{\alpha}{1-\alpha} \log \|G\,P\|_\alpha$
where G is the gain matrix.
If G is non-zero only on the diagonal, then there are also universally optimal strategies.
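A small numeric sketch of this family (my own code, using log base 2; the gain matrix below is a hypothetical example): with G the identity it reduces to the Rényi entropy of order α.

import numpy as np

def g_entropy(P, G, alpha):
    """H_{alpha,g}(P) = alpha/(1-alpha) * log2 of the alpha-norm of G P."""
    v = G @ np.asarray(P, dtype=float)                # gain-weighted distribution G P
    norm = np.sum(v ** alpha) ** (1.0 / alpha)        # alpha-norm of G P
    return alpha / (1.0 - alpha) * np.log2(norm)

P = [0.5, 0.3, 0.2]
G = np.diag([1.0, 2.0, 4.0])                  # hypothetical diagonal gain matrix
print(g_entropy(P, np.eye(3), 2))             # equals the Renyi entropy of order 2 of P
print(g_entropy(P, G, 2))                     # gain-weighted variant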
² Additive and multiplicative notions of leakage, and their capacities. Alvim, Chatzikokolakis, McIver, Morgan, Palamidessi, Smith, CSF 2014 (winner of the NSA 3rd Annual Best Scientific Cybersecurity Paper Competition)
19 / 24
Game theoretical development
The strategy produced by the optimality algorithm can be proved to be a Nash equilibrium of some zero-sum game.
This game-theoretical interpretation is
1. on one side more restrictive than the Schur-concave framework (i.e. we do not have a formal game interpretation for every Schur-concave function, only for a few);
2. in those few cases more general, e.g. it provides the optimal cloaking strategies for a general (non-diagonal) gain matrix in g-leakage and arbitrary cloaking constraints.
20 / 24
quantum measures POVM
Some recent work³ points out that “quantum state discrimination can be seen as a special case of quantitative information flow (QIF)”.
Positive Operator Valued Measures (POVMs) are used for measurement of the observed state.
Quantum state discrimination relates to MED, accessible information, quantum source coding, and quantum channel capacity.
³ Minimum guesswork discrimination between quantum states. W. Chen, Y. Cao, H. Wang, and Y. Feng
21 / 24
Quantum Monty Hall POVM
Some other recent work⁴ investigates a quantum version of Monty Hall, where the prize can be in a superposition of doors.
Interestingly, the classical probabilities associated with the Monty Hall scenario are recovered for a natural choice of the measurement operators.
Maybe Holevo’s theorem applies to quantum cloaks?
⁴ Positive operator valued measures and the quantum Monty Hall problem. C. Zander, M. Casas, A. Plastino, A. R. Plastino
22 / 24