Guessing Attacks in
the pi-calculus
(Work in progress)
Tom Chothia
École Polytechnique
Outline of Talk
• Background:
– Formal security in the pi-calculus.
– Computational arguments.
– Bridging the gap.
• The pi-calculus with guessing.
• Calculating the cost of a guessing attack.
• The computational correctness of the
calculus
pi-g Summary:
• A pi-calculus extended with primitives for
guessing and tracking the effects of the
guess.
• A method of scoring traces in the calculus to
show how hard that “attack” would be using
guesses and multiple tries.
• With most of the simplicity of formal methods
and some of the power of computational
methods.
Pi-calculus
Process P,Q,A ::= 0
| send a b
| rec a b;P
| !P
| new a;P
| (P|Q)
| [a=b]; P
Pi-calculus
Key reduction rules:
( send a b | rec a x;P ) -> P [ b/x ]
new a;( P ) | Q = new a;( P | Q )
if “a” not free in Q
N.B. “send a b | ( new a; rec a x; P)” cannot
communicate.
Pi-calculus
• Analyse a process P with message M.
• An attacker is any context: C[P(M)]
• If for all M’ we have that P(M) is weakly bisimilar to P(M’),
then M is secure in P.
• N.B. weak bisimulation is a congruence for the spi-calculus.
• So the condition above implies that for all hostile
attackers A, A[P(M)] is weakly bisimilar to A[P(M’)].
Pi-calculus example
A(M) = new cAB;
       send cAS cAB;
       send cAB M
S = rec cAS x;
    send cBS x
B = rec cBS x;
    rec x y;F(y)
Pi-calculus example
A(M) = new cAB;
       send cAS cAB;
       send cAB M
S = rec cAS x;
    send cBS x
B = rec cBS x;
    rec x y;F(y)
Inst(M) = new cAS;
          new cBS;( A(M) | S | B )
Secrecy: for all M and M’,
Inst(M) ~ Inst(M’) iff F(M) ~ F(M’)
Pi-calculus example
A(M) = new cAB;
       send cAS cAB;
       send cAB M
S = rec cAS x;
    send cBS x
Bspec = rec cBS x;
        rec x y;F(M)
Instspec(M) = new cAS;
              new cBS;( A(M) | S | Bspec )
Secrecy: for all M and M’,
Inst(M) ~ Inst(M’) iff F(M) ~ F(M’)
Authenticity: for all M,
Inst(M) ~ Instspec(M)
Spi-calculus
• The spi-calculus adds encryption:
– New terms:
• {M}N M encrypted with N
• (M,N) and numbers
– New Processes:
• case L of {x}N in P
• let (x,y) = M in P
Computational properties.
• DES is no good anymore (fixed 56-bit key).
• RSA keys have grown from 128 bits, to 512, now 1024
(could have been illegal).
• 802.11 has a weak nonce.
• RSA timing bug.
Computational analysis
• No fixed nonces and passwords etc.
• Random sampling, e.g. pwd is randomly
chosen from the probability distribution Dn
where n is the size or security parameter:
pwd <-r- Dn
• The chance of C[pwd] failing must be almost
zero.
What is “almost zero”?
• Function f is negligible if for all c there exists N
such that for all x > N we have that f(x) < x^-c.
• Dn and D’n are indistinguishable if for all
probabilistic, polynomial-time Turing machines A,
f(n) = Pr[ x <-r- Dn : A(n,x) = 1 ]
     - Pr[ x <-r- D’n : A(n,x) = 1 ]
is negligible.
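The distinguishing game above can be sketched numerically. A minimal Monte Carlo sketch (all names here are hypothetical, not from the slides): estimate an attacker's advantage between two sampleable distributions.

```python
import random

def advantage(sample_d, sample_d2, attacker, trials=10_000, seed=0):
    """Monte Carlo estimate of
    | Pr[x <- D : A(x)=1] - Pr[x <- D' : A(x)=1] |."""
    rng = random.Random(seed)
    hits_d = sum(attacker(sample_d(rng)) for _ in range(trials))
    hits_d2 = sum(attacker(sample_d2(rng)) for _ in range(trials))
    return abs(hits_d - hits_d2) / trials

# D: uniform 8-bit values; D': uniform 16-bit values.
d = lambda rng: rng.randrange(2**8)
d2 = lambda rng: rng.randrange(2**16)

# An attacker that answers 1 on small samples distinguishes them well...
adv = advantage(d, d2, lambda x: x < 2**8)
# ...while one that ignores its input has advantage exactly 0.
adv_blind = advantage(d, d2, lambda x: True)
```

For indistinguishable families, this estimated advantage would shrink faster than any inverse polynomial as the security parameter grows.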
Safe
• We could say that an encryption scheme is safe if:
Adv(n) = Pr[ k <-r- Keyn, x <-r- Dn : A(n,Ek(x),x) = 1 ]
       - Pr[ k <-r- Keyn, x,y <-r- Dn : A(n,Ek(y),x) = 1 ]
is negligible.
• But there are many, many other criteria.
Oracles
• Oracles perform functions for the attacker,
such as encryption with a key k: Ek(_).
• To conceal repetitions of encryption:
Adv(n) = Pr[ k <-r- Keysn : A^Ek(-)(n) = 1 ]
       - Pr[ k <-r- Keysn, x <-r- D : A^Ek(x)(n) = 1 ]
• To conceal key identity:
Adv(n) = Pr[ k,k’ <-r- Keysn : A^Ek(-),Ek’(-)(n) = 1 ]
       - Pr[ k <-r- Keysn : A^Ek(-),Ek(-)(n) = 1 ]
Summary of computational
method
• The computational method uses a randomly sampled
secret value from a domain of size n.
• Secure: if it is impossible for a probabilistic, polynomial-time
(in n) Turing machine to meet a given
criterion with non-negligible probability.
• You roll your own criterion e.g. Type 0,…, Type 7,
IND-CCA, N-PAT-IND-CCA, N-PAT-UNF-IND-CCA,
Block cipher,……
Story so far:
• Protocols and encryption are:
– perfect for computer scientists
– fundamental to computer communication
– often wrong
• Formal analysis is:
– machine checkable, neat.
– often wrong
• Computational analysis is:
– strong, not often wrong
– very hard
Bridging the Gap.
• Find conditions under which you can use
formal analysis to get computational proofs.
• You don’t have to do the proof, just fulfill the
conditions.
– Abadi and Rogaway
– Janvier, Lakhnech and Mazare
– Backes, Pfitzmann and Waidner
– Micciancio and Warinschi
Abadi and Rogaway
• Define a translation of Dolev-Yao terms into
computational terms:
• “K” in Dolev-Yao becomes a newly generated
key of length n. Nonces become new, hard-to-guess
strings.
• Dolev-Yao equivalence then implies
computational indistinguishability.
Others
• Janvier, Lakhnech and Mazare: Dolev-Yao is
computationally sound in the presence of active
attackers.
• Backes, Pfitzmann and Waidner: a library of formal
methods backed up by computational reasoning.
Symmetric keys, attacker taking part in the protocol.
• Micciancio and Warinschi: Abadi and Rogaway is
not complete but can be made complete by using a
stronger security criterion.
Outline of Talk
• Background:
– Formal security in the pi-calculus.
– Computational arguments.
– Bridging the gap.
• The pi-calculus with guessing.
• Calculating the cost of a guessing attack.
• The computational correctness of the calculus
Guessing an attack vs
Guessing attacks.
• What is the chance of finding/guessing the
attack on the NSP (Needham-Schroeder
Protocol)?
• … 2^120
• If the nonce was 64 bits long, how can we tell
our automated method not to attack the
protocol by guessing the nonce?
Back to the pi-calculus
P[pwd] | Q[pwd] | A
System with a password can be attacked by A knowing
the password.
pi-calculus can restrict a name:
new pwd;( P | Q ) | A
This marks the name as important to the correctness of
the process. So it cannot be guessed.
A Guess Rule
new pwd:Dn;( P | Q ) | A
Similar to the random sampling of the computational
method: pwd <-r- Dn. pwd is still unknown to A but
can be guessed, at a price:
new pwd:Dn;P | guess x:Dn;A
  --pwd:n-->  new pwd:Dn;( P | A[pwd/x] )
So the only way for A to find pwd is to be told it or to pay
the price.
pi-calculus with Guessing
Process P,Q,A ::= 0
| send a b
| rec a b;P
| !P
| (P|Q)
| [a=b]; P
| new x:Dn;P
| guess x:Dn;P
Password checking process
new chn,rew,b1:D2,…,bn:D2;(
  send a (chn,rew)
  | rec chn x1;[x1=b1];
    rec chn x2;[x2=b2];
    …
    rec chn xn;[xn=bn];
    send rew bingo )

[State diagram: each rec step either passes its test (xi=bi) and
continues, or fails (xi≠bi) and gets stuck; after the nth correct
value, send rew fires.]
Guessing a password
A = rec a (chn,rew);
    ( !(guess b:D2;send chn b)
    | rec rew x )

[State diagram: A receives (chn,rew) on a, then repeatedly guesses
from D2 and sends on chn; each guess either fails (xi≠bi) or passes
(xi=bi), until rec rew succeeds after n correct guesses.]
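The counting behind this example can be checked by simulation. A sketch (just the counting, not the calculus semantics): guessing an n-bit password one confirmed bit at a time costs about 2n tries, not 2^n.

```python
import random

def attack_bit_by_bit(password_bits, rng):
    """Guess each bit in turn, retrying until the checker lets it pass.
    Returns the total number of guesses made (the cost of the trace)."""
    guesses = 0
    for secret in password_bits:
        while True:
            guesses += 1                  # guess b:D2; send chn b
            if rng.randrange(2) == secret:
                break                     # [x=b] succeeded: next position
    return guesses

rng = random.Random(1)
n = 20
pwd = [rng.randrange(2) for _ in range(n)]
cost = attack_bit_by_bit(pwd, rng)
# Expected cost is about 2*n tries, linear in n rather than the
# 2**n of guessing the whole password blind.
```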
pi-calculus with Guessing
Top level N ::= new a:Dn;N
| P || A
Process P,Q,A ::= 0
| send a b
| rec a b;P
| !P
| (P|Q)
| [a=b]; P
| new x:Dn;P
| guess x:Dn;P
Guess rule
The guess rule works between the
attacker and the process, i.e. across the
double bar:
new a:Dn; P || guess x:Dn;A
  --a:n-->  new a:Dn;( P || A[a/x] )
A different view:
[Diagram: a binary tree of guesses x1 = 0/1, x2 = 0/1, x3 = 0/1, …,
with 2^n leaves, one for each combination of guessed values.]
What is the difference?
• Before, I calculated the probability of
ending up in each state, in particular the
unsafe state.
• Now, I calculate the amount of “work”
needed to reach the unsafe state by
following the given path.
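The two views are reciprocal, which a couple of lines of arithmetic make concrete (a sketch; the domain sizes are illustrative):

```python
from math import prod

def one_shot_success_probability(domain_sizes):
    """Chance a single blind run guesses every value correctly."""
    return 1 / prod(domain_sizes)

def exhaustive_work(domain_sizes):
    """Runs needed to try every combination of guesses: the 'work'."""
    return prod(domain_sizes)

domains = [2] * 10   # ten binary guesses, e.g. a 10-bit password
p = one_shot_success_probability(domains)
w = exhaustive_work(domains)
# p = 1/1024 and w = 1024: a low probability of reaching the unsafe
# state is the same fact as high work along the path that reaches it.
```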
Guessing a password
new chn,rew,b1:D2,…,bn:D2;(
  ( send a (chn,rew)
  | rec chn x1;[x1=b1];( send ack |
    rec chn x2;[x2=b2];( send ack |
    rec …
    rec chn xn;[xn=bn];send rew bingo )))
  | Q )

A = rec a (chn,rew);( send ack
    | !rec ack;guess b:D2;send chn b
    | rec rew x )

[State diagram: after rec a, each round is rec ack; guess D2; send chn;
a correct guess (xi=bi) triggers send ack and the next round, a wrong
one (xi≠bi) gets stuck, until rec rew after the nth correct guess.]
A bit of a mind shift:
[Diagram: following a single path of guesses x1, x2, … of length about
n+1, rather than the whole 2^n tree of possibilities.]

“Cost”
Example trace: P --a:3--> P2 --b:2--> P3 --(a)--> P4 --c:2--> P5
[Table residue: the cost of each prefix of the trace: 3, 3, 6, 6, 8.]
pi-calculus with Guessing
Top level N ::= new a:Dn;N
| P || A
Process P,Q,A ::= 0 | send a b | rec a b;P
| ! P | (P|Q) | [a=b];P
| new x:Dn;P
| guess x:Dn;P
| (ga)P
Tracing dependences
• Key reduction rules:
[a=ga];P  --a-->  (ga)P
(ga)send c d || rec c x;Q  --a:c(d)-->  Q[d/x]
Worthless dependencies
new chn,rew,b1:D2,…,bn:D2;(
  ( send a (chn,rew) | ! send ack
  | rec chn x1;[x1=b1];( send ack |
    rec chn x2;[x2=b2];( send ack |
    rec …
    rec chn xn;[xn=bn];send rew bingo )))
  | Q )
Because “! send ack” makes ack always available, receiving ack no
longer depends on a correct guess, so it cannot confirm one.
The cost of a trace
Traces t ::= P | P --μ--> t
e.g. P --a:3--> P2 --b:2--> P3 --(a)--> P4 --a:ε(d)--> P5

cost of trace ( t ) = cost ( t, [ ], [ ] )
cost ( P --a:n--> t, gu, com ) = cost ( t, (a,n):gu, com )
cost ( P --(a)--> t, gu, com ) = cost ( t, gu, (a,P):com )
cost ( P, gu, com ) = Π ( map snd gu )
The cost of a trace
cost ( P --a:b(c)--> t, (a,n):gu, (a,P’):com ) =
  if ( new d;P’ =/=> send b c ) then
    snd( gu ) × (n−1) + cost ( t, gu, com )
  else
    cost ( t, (a,n):gu, (a,P’):com )
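One reading of this scoring can be run directly. This is my own simplification, not the calculus's exact cost function: a guess that is later confirmed is paid for once, while guesses that are never confirmed multiply the cost of the whole trace.

```python
from math import prod

def trace_cost(events):
    """Score a trace. A guess over a domain of size n that is later
    confirmed costs at most n tries, paid once; a guess that is never
    confirmed forces the attacker to replay the trace for every value,
    so its domain size multiplies the total cost."""
    base = 0          # ordinary steps plus tries for confirmed guesses
    unconfirmed = {}  # name -> domain size
    for ev in events:
        if ev[0] == "guess":          # ("guess", name, domain_size)
            unconfirmed[ev[1]] = ev[2]
        elif ev[0] == "confirm":      # ("confirm", name)
            base += unconfirmed.pop(ev[1])
        else:                         # ("step",) ordinary communication
            base += 1
    if unconfirmed:
        return max(base, 1) * prod(unconfirmed.values())
    return base

# Ten bits guessed and confirmed one at a time: linear cost.
bitwise = [e for i in range(10)
           for e in (("guess", f"b{i}", 2), ("confirm", f"b{i}"))]
# The same ten bits guessed blind, then one observable step: 2^10.
blind = [("guess", f"b{i}", 2) for i in range(10)] + [("step",)]
```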
Example trace: P --a:3--> P2 --b:2--> P3 --(a)--> P4 --c:2--> P5

“Cost”
[Table residue: the running cost of each prefix, computed with the
rule above; once the guess of a is verified (a:P1), the final cost is
4 + 2·2 rather than the unverified 8.]
spi-calculus with Guessing
Top level N ::= new a:Dn;N
| P || A
names ::= a,b,x,n,m,k …
| {a}k
Process P,Q,A ::= 0 | send a b | rec a b;P
| ! P | (P|Q) | [a=b];P
| new x:Dn;P
| guess x:Dn;P
| (ga)P
| decrypt m as {x}k;P
Decryption verifies the guess
of a key
The attacker can verify a guess by a successful
decryption:
P || decrypt {m}k as {x}gk in A
  -->  (gk) P || A[m/x]
Outline of Talk
• Background:
– Formal security in the pi-calculus.
– Computational arguments.
– Bridging the gap.
• The pi-calculus with guessing.
• Calculating the cost of a guessing attack.
• The computational correctness of the calculus
Correctness Summary:
• We map the pi-calculus to a computational setting
with a matching correctness criterion.
• A sub-exponential attack in the calculus implies the
existence of a poly-time Turing machine that defeats the
criterion.
• No sub-exponential attacker in the calculus implies (given
correctness of the spi-calculus encoding) no computational
attacker that defeats the criterion. I.e. any errors are down
to spi, not guessing.
Relating the spi-calculus and
the computational model.
• We allow a Turing machine to interact
with a “correct” implementation of a spi-calculus process.
A^P(c) : the Turing machine A with access
to an oracle for process P.
Process security
• In spi: P(a) is bisimilar to P(b) for all a, b
• or P(a) | A(a) outputs on a
but P(a) | A(b) does not.
• Adv = Pr[ s <- Dn : A^P(s)( n,s,fn(P) ) = 1 ]
      - Pr[ s,t <- Dn : A^P(s)( n,t,fn(P) ) = 1 ]
Unsafe in pi-g implies unsafe
in the computational model.
Theorem 1: If a pi-g calculus process is unsafe then the
translation of the process into the computational setting
is also unsafe.
i.e. if there is a sub-exponential cost attack in the pi-g
calculus then there is a Turing machine A such that
Pr[ s <- Dn : A^P(s)( n,s,fn(P) ) = 1 ]
- Pr[ s,t <- Dn : A^P(s)( n,t,fn(P) ) = 1 ]
is non-negligible.
Proof:
• Let P be the process that produces a finite error trace
with cost polynomial in n.
• There is a polynomial process that can also produce
this trace.
• Add a “guess” command to Turing machines and let M
be a polynomial Turing machine that produces the
trace.
Proof:
• Execute each “guess x:Dn” by making n copies of the
machine and running them in parallel.
• For each guess verification, stop all incorrect Turing
machines. As the process and trace are finite, there is a
constant probability that this verification action
really does verify the guess.
• Result: a polynomial-time Turing machine that
produces the unsafe trace with non-negligible
probability.
[The cost-table example from earlier, repeated: the trace
P --a:3--> P2 --b:2--> P3 --(a)--> P4 --c:2--> P5
with prefix costs 3, 3, 6, 6, 8.]
Safe in the computational model
implies safety in the pi-g calculus
• pi-g calculus processes are mapped to
the computational model in the same
way as spi-calculus processes.
• Except names are mapped to bit strings
with lengths given by their domain
sizes.
Safe in the pi-g calculus implies
safety in the computational model
• A simple result:
There is a zero cost attack in the pi-g
calculus if and only if there is an attack in
the spi-calculus.
Safe in the pi-g calculus implies
safety in the computational model
Theorem 2: if there is no sub-exponential cost
attack in the pi-g calculus then either
– the translation is safe in the computational
model,
– or the encoding of the spi-calculus is unsafe.
Proof:
• Let there be a computational attack against P.
• We can write P = new a1,…,an;P’ where
a1,…,an are all the names with sub-exponential
size domains used in the attack.
– Unwinding a finite number of replications if
necessary.
• Hence we have a spi-calculus attack against
P’.
Proof
• Assuming the correctness of the spi-calculus,
we have a successful attacking spi-calculus process Pa.
• Therefore “guess a1,…,an;Pa” will be a
successful attacking pi-g calculus
process against P.
Extended Example: Bank Cards
Card = !( rec card (test,reply); [ test = pin ];
          send reply (acc_no, bon_code) )
Customer = send shop ( pin, card )
Shop = rec shop ( test_pin, card );
       ( new reply:Dtop; send card ( test_pin, reply )
       | rec reply ( acc_no, bon_code );
         send bank {acc_no,pin,amount,acc_shop}Kbank )
Bank = rec bank packet; decrypt packet as
       {acc_no,pin,amount,acc_shop}Kbank
Extended Example: Bank Cards
new acc_no:D10^12;
new pin:D10^4, card:D1; ( Card | Customer ) |
new Kbank:D2^128, acc_shop; ( Shop | Bank )
• The closed system forces an attack to guess Kbank.
Attacker
New system =
new card; new Kbank; ( Card | Bank || A )
• Gives the attacker access to the card.
• The pin can be guessed and verified using the
card, at cost 10^4. The acc_no is public. So the
total cost of the attack is 10^4.
A safer system
• The account number is never made public. The pin is
checked by the bank.
Card2 = !rec card (test,reply);
        send reply { acc_no, test }Kcard
Bank2 = rec bank packet; decrypt packet as
        {test_pin,card_pack,amount,acc_shop}Kbank
        in decrypt card_pack as { acc_no, test_pin }Kcard in …
new Kcard:D2^128; ( Card2 |
new pin:D10^4; ( Customer | Bank2 ) )
A safer system
• Now the attacker must guess pin and acc_no
without verification: 10^4 × 10^12 = 10^16.
• Bank cards allow for an off-line system. Banks
are willing to take the risk and repay theft.
• Also, cards will lock up after a number of
guesses.
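The two attack costs are simple arithmetic, checked directly below (domain sizes as in the slides):

```python
PIN_DOMAIN = 10**4    # four-digit pin
ACC_DOMAIN = 10**12   # twelve-digit account number

# First system: the card confirms each pin guess and acc_no is
# public, so only the pin must be guessed and verified.
cost_first_system = PIN_DOMAIN

# Safer system: pin and acc_no travel under Kcard and are only
# checked together by the bank, so the guesses get no separate
# confirmation and their domain sizes multiply.
cost_safer_system = PIN_DOMAIN * ACC_DOMAIN
```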
Summary
• pi-g gives a cost to a guess and to
repeated attacks on a process,
• without forcing you to do a complexity
analysis of the guessing and testing.
• Correctness is in terms of the
computational model of security.
Other related work
• Probabilistic polynomial-time process
calculus: Lincoln, Mitchell, Scedrov.
• Gavin Lowe and others: analysing
guessing attacks on weak passwords.
• Iliano Cervesato: multi-set rewriting with
an additive cost for each step.
Further work.
• Spi-calculus computational correctness.
• Meta-theorems showing that attacks are not
possible, bi-simulation methods …
• Application to real protocols.
• Automated checking: theorem provers, Prolog.
Questions?
Next Slides
Conjecture:
• There exists a mapping from spi-calculus
processes to Turing machines, such that:
• a process is safe in the spi-calculus if
and only if it is safe in the computational
setting.
Mapping
0         -> halt
new a;P   -> associate a tape with “a”
send a b  -> write “b” to the tape for “a”
rec a x;P -> read “x” off the tape for “a”
(P|Q)     -> run the machines for P and Q in parallel.
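The flavour of this mapping can be mimicked with queues standing in for tapes. A toy sketch, not the actual Turing-machine construction:

```python
from queue import Queue

channels = {}

def new(a):        # new a;P   -> associate a tape with "a"
    channels[a] = Queue()

def send(a, b):    # send a b  -> write "b" to the tape for "a"
    channels[a].put(b)

def rec(a):        # rec a x;P -> read "x" off the tape for "a"
    return channels[a].get()

new("c")
send("c", "hello")
msg = rec("c")     # msg == "hello"
```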
Correctness Summary:
• We map the pi-calculus to a computational setting
(Turing machines),
• with a matching correctness criterion.
• An attack in the calculus implies the existence of
a poly-time Turing machine that defeats the
criterion.
• No attacker in the calculus implies (given a
correct implementation) no computational
attacker that defeats the criterion.
My work:
[Diagram: the world of protocols, with formal analysis and
computational analysis applied to it.]
• Extend formal methods.
• Make them aware of “guess” and domain size.
Complete-ish?
[Diagram: guessing analysis sits between formal analysis and
computational analysis.]
• Can I find an interesting example that cannot be
found by existing formal methods?
• Bank cards, PKI systems, any system with many
users.
Summary
• I presented some formal and computational
methods for proving processes correct.
• I presented some work that tries to bridge the
gap between them.
• I am working on a pi-like calculus for
guessing attacks.
Dolev-Yao
• A good, early attempt to apply formal
methods to protocols.
• The protocol is formally defined.
• The attacker can observe messages,
construct new messages, and interact
with the protocol.
Dolev-Yao rules
• If m in E then E |- m
• If E |- m and E |- n then E |- (m,n)
• If E |- (m,n) then E |- m and E |- n
• If E |- m and E |- k then E |- {m}k
• If E |- {m}k and E |- k-1 then E |- m
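These deduction rules can be run as a closure computation. A sketch (symmetric keys, and only the decomposition rules; the construction rules for pairing and encrypting would generate infinitely many terms):

```python
def derivable(knowledge):
    """Close a set of terms under the Dolev-Yao decomposition rules.
    Atoms are strings; ("pair", m, n) is (m,n); ("enc", m, k) is {m}k,
    with k its own inverse (symmetric encryption)."""
    E = set(knowledge)
    changed = True
    while changed:
        changed = False
        for t in list(E):
            new_terms = set()
            if isinstance(t, tuple) and t[0] == "pair":
                new_terms = {t[1], t[2]}      # E |- (m,n) gives E |- m, n
            elif isinstance(t, tuple) and t[0] == "enc" and t[2] in E:
                new_terms = {t[1]}            # E |- {m}k, E |- k give E |- m
            if not new_terms <= E:
                E |= new_terms
                changed = True
    return E

with_key = derivable({("enc", ("pair", "m", "n"), "k"), "k"})
without_key = derivable({("enc", ("pair", "m", "n"), "k")})
# with_key contains "m" and "n"; without_key does not.
```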
BAN Judgments
Judgments include:
P |≡ X        P believes X
P ◊ X         P sees X
P |~ X        P once said X
P |⇒ X        P is trusted to say X
#(X)          X is a fresh nonce
P ↔K Q        K is a key for P,Q
↦K P          K is P’s public key

BAN rule examples
If P believes P and Q share K, and P sees a message
encrypted with K, then P believes Q once said X:
P |≡ P ↔K Q    P ◊ {X}K
-----------------------
      P |≡ Q |~ X
If P believes X is a fresh nonce and P believes that Q
once said X, then P believes that Q currently believes
X:
P |≡ #(X)    P |≡ Q |~ X
------------------------
      P |≡ Q |≡ X
Not complete
nor would you expect it to be.
• Uneven distributions: e.g. pin number
equals 9999 with 1/2 chance, otherwise
it is random.
• “Clever” guessing attacks.
Story so far:
[Diagram: the world of protocols, partly covered by formal and
computational analysis.]
We have a world of protocols.
Some can be proved wrong with computational methods;
fewer can be proved wrong by formal methods.
We want the ease of formal analysis with the power of
computational analysis.
Guessing a password
new chn,rew,b1:D2,…,bn:D2;(
  ( send a (chn,rew) | rec chn x1;[x1=b1];rec…
    …rec chn xn;[xn=bn];send rew bingo )
  | Q )
A = rec a (chn,rew); ( !(guess b:D2;send chn b)
    | rec rew x )
Canceling the cost
• So having your guess confirmed
cancels some of the cost of making that
guess.
Soundness
[Diagram: formal analysis, guessing, and computational analysis as
nested regions over the protocols.]
For all unsafe processes, formal analysis finds fewer than
computational analysis does.
pi-calculus with guessing is sound if any process found to
be unsafe really is.
In particular, pi with guessing falls between formal and
computational methods.