Improving Automated Symbolic Analysis for E-voting Protocols:
Sufficient Conditions for Ballot Secrecy
Abstract—We advance the state-of-the-art in automated symbolic analysis for e-voting protocols by introducing three conditions that together are sufficient to guarantee ballot secrecy.
There are two main advantages to using our conditions
to verify e-voting protocols compared to existing automated
approaches. The first advantage is that our approach substantially expands
the class of protocols and threat models that can be automatically analysed. In particular, we are now able to systematically
deal with (a) honest authorities present in different phases,
(b) threat models in which no dishonest voters occur, and (c)
protocols whose ballot secrecy depends on fresh data coming
from other phases.
The second advantage is that our approach can significantly improve
verification efficiency, as the individual conditions are often
simpler to verify. For example, for the LEE protocol, we obtain
a speedup of over two orders of magnitude.
We show the scope and effectiveness of our approach using
ProVerif in several case studies, including FOO, Okamoto,
LEE, JCJ, and Belenios. In these case studies, our approach
does not yield any false attacks, suggesting that our conditions
are tight.
1. Introduction
There have been substantial advances in recent years
in the field of e-voting protocols. Many new approaches have
been developed, and many of the relevant security properties
have become better understood and agreed upon [1], [2],
[3]. The most important security property is that voters’
votes remain private, which is known as ballot secrecy.
However, designing protocols that achieve such properties
has proven subtle: many vulnerabilities have been discovered
in previously proposed protocols [3], [4], motivating the need
for improved analysis techniques. Because of its high level
of automation, formal verification in the symbolic model has
proven to be very effective.
For classical security protocols, there is mature tool
support [5], [6], [7], [8], which enables detecting many flaws
during the protocol design phase, or later, as new threat
models are considered. However, these tools traditionally did not
handle e-voting protocols [9]. Recently, automated methods
have been proposed [10], [11], [12] to analyse e-voting
protocols. However, their applicability is still extremely
limited both in the type of protocols that they can deal
with and the type of security properties that they analyse.
The underlying reasons for these limitations interact in a
complex way with existing approaches. The initial restrictions
arise from the fact that (a) ballot secrecy is an observational
equivalence-based property and not a reachability property,
and (b) most proposed e-voting protocols use less standard
primitives that are not covered by all tools. This naturally
restricts the set of tools that can be applied, as acknowledged
by [9], [12], [13].
Tools that deal with equivalence properties and an
unbounded number of sessions are ProVerif [7] and
Tamarin [5]. (While Maude-NPA [14] also supports some equivalence
properties, it currently rarely terminates in positive cases.)
While Tamarin is in active development, it
currently cannot deal with typical primitives that are used in
e-voting protocols, such as designated verifier proofs, blind
signatures, etc. In contrast, ProVerif can deal with many of
these primitives, including trapdoor commitments,
blind signatures, and designated verifier proofs [10], [11].
ProVerif handles equivalence properties by checking for
so-called diff-equivalence, which is a property that implies
observational equivalence. However, in practice, and in the
context of the typical encoding of protocols and ballot secrecy
in ProVerif’s framework, many protocols fail to satisfy
diff-equivalence, even if they may satisfy observational
equivalence. This effectively limits the class of e-voting
protocols to which ProVerif can be successfully applied.
Recent efforts [11], [12] have aimed to increase the class
of e-voting protocols that ProVerif can deal with, but many
relevant protocols and threat models still cannot be usefully
analysed by ProVerif.
Contributions. In this work, we advance the state of the art
in automated symbolic verification of e-voting protocols.
We focus on the most important privacy property: ballot
secrecy. We develop three tight conditions on e-voting
protocols and a main theorem stating that, together, they
imply ballot secrecy. The three conditions in our theorem
are directly inspired by our analysis of the different types
of attacks on ballot secrecy. Since each condition focuses on
one specific aspect of ballot secrecy, it is typically simpler
to check the combination of the three conditions than to
verify ballot secrecy directly, as was done in prior works.
This gives rise to a new method to verify ballot secrecy,
improving the state of the art in several aspects.
First, our approach significantly expands the class of
protocols and threat models that can be analysed. Notably,
unlike prior work, we can deal with the following features:
● honest authorities that are present in different phases.
For instance, one encounters such cases when modelling a
registrar distributing credentials in a registration phase
and then committing the credentials of eligible voters to the
bulletin box in a later phase, as in JCJ and Belenios;
● threat models in which no dishonest voter is assumed;
● protocols whose ballot secrecy also relies on the freshness of some data coming from previous phases. For
example, such data can be credentials created during a
registration phase, as in JCJ and Belenios.
Second, our approach can significantly improve verification efficiency. The increased efficiency has two main sources. First, because each of our conditions focuses on one aspect of the problem and simplifies the parts not related to that aspect, it involves smaller processes that are typically easier to verify. Second, because previous techniques involve guessing swaps of processes at the start of each phase, they suffer from an exponential blow-up related to the number of processes in each phase. In practice, we typically observe a speedup of over two orders of magnitude, and in some cases our approach makes the analysis terminate where it previously did not.

Third, we use our approach to analyse several new case studies. Thanks to the flexibility and the large class of protocols we can deal with, we are able to analyse a multitude of different threat models, allowing comprehensive comparisons. Moreover, thanks to the aforementioned advantages, our approach is able to systematically take the registration phase into account, whereas prior works often consider registrars as honest and do not model them. We successfully automatically analysed FOO, Lee, JCJ, and Belenios. We also show that our theorem applies to Okamoto as well.

We conduct a comprehensive comparison with related work in Section 6 and provide support for the above claims. While we present our work in the ProVerif framework, our results are applicable beyond this specific tool. Finally, we believe that our conditions shed light on three crucial aspects that e-voting protocols should enforce, thus improving our understanding of the complex notion of ballot secrecy.

Links & Ballot Secrecy. Ballot secrecy boils down to ensuring that an attacker cannot establish a meaningful link between a specific voter and a specific vote (that this voter is willing to cast). For instance, a naive example of such a link occurs when a voter outputs a signed vote in the clear, explicitly linking his vote to his identity. However, in more realistic e-voting protocols, such links can be very complex, possibly relying on long transitive chains of different pieces of data from different outputs. For example, if an attacker is able to link a credential with the identity of the recipient voter during a registration phase, and the voter then anonymously sends his vote along with the credential during a casting phase, then the attacker can link the vote to the voter. Secure protocols typically prevent establishing either the link between the credential and the identity or the link between the vote and the credential.

As noted before, diff-equivalence (as an over-approximation of observational equivalence) is rarely appropriate for directly verifying ballot secrecy [9], [11]. An underlying reason is that diff-equivalence gives the attacker additional structural links compared to the (intended) observational equivalence. This often leads to negative results even for secure e-voting protocols, which manifest themselves as false attacks.
Informal Presentation of the Conditions. We analysed
typical attacks and the underlying links. We classified them
and identified three classes of links leading to privacy
breaches. The purpose of each condition is to prevent links
from the corresponding class. Our main theorem states that
those conditions are sufficient to enforce ballot secrecy.
(Dishonest Condition) By adopting malicious behaviour,
the attacker may be able to link messages that would not be
linkable in the intended, honest execution. For instance, if the
attacker sends tampered data to a voter, he may later observe
the tampered part in subsequent actions, know that it comes
from the same voter, and thereby establish possibly harmful
links. Our first condition essentially requires
that a voting system is indistinguishable, for the attacker,
from a voting system where, at the beginning of each phase,
all agents forget everything about past phases and pretend
that everything happened as expected, i.e., as in an honest
execution. The previous example would violate the condition,
because in the second system, the attacker would not be able
to observe the tainted data.
(Honest Relations Condition) Even in the expected, honest
execution, the attacker may be able to exploit useful links.
Thanks to the previous condition, we can focus on a system
where each role is split into sub-roles for each phase. This
allows us to verify the absence of such relations
using diff-equivalence, without giving the attacker spurious
structural links, as mentioned above.
(Tally Condition) Finally, we take into account the tally
outcome, which enables establishing further links. Typically, the
attacker may link an identity to a vote if it is able to forge
valid ballots related to (i.e., containing the same vote as) data
that can be linked to an identity. Indeed, this introduces a
bias in the tally outcome that can reveal the vote in the forged
ballot. This attack class strictly extends ballot independence
attacks [15]. Our last condition requires that when a valid
ballot has been forged by the attacker, then it must have
been forged without meaningfully using a voter's data that is
already linked to an identity.
Outline. In Section 2, we present the symbolic model we
use to represent protocols and security properties. We then
describe our framework in Section 3, notably defining the
class of e-voting protocols we deal with as well as ballot
secrecy. Next, we formally define our conditions and state
our main theorem in Section 4. We show the practicality of
our approach in Section 5 by explaining how to verify our
conditions and presenting case studies. Finally, we discuss
related work in Section 6 and conclude in Section 7.
2. Model

We model security protocols using a standard process
algebra in the style of the dialect of Blanchet et al. [16]
(used in the ProVerif tool), which is inspired by the applied
pi calculus [17]. Participants are modelled as processes, and
the exchanged messages are modelled in a term algebra.
Most e-voting protocols are structured as a sequence of phases. For example, each voter often starts with a
registration phase followed by a voting phase. When this
is done, a counting or tallying phase occurs, producing the
result of the voting process. Hence, the notion of phase is a
central feature of e-voting protocols, and we thus consider a
process algebra with phases.
Term algebra. We now present the term algebra
used to model messages built and manipulated using various
cryptographic primitives. We assume an infinite set N of
names, used to represent keys and nonces; and two infinite
and disjoint sets of variables X and W. Variables in X
are used to refer to unknown parts of messages expected by
participants, while variables in W are used to store messages
learned by the attacker. We consider a signature Σ (i.e. a set
of function symbols with their arity). Σ is the union of two
disjoint sets: the constructor and destructor symbols, i.e.,
Σ = Σc ∪ Σd . Given a signature F, and a set of initial data A,
we denote by T (F , A) the set of terms built using atoms in
A and function symbols in F. A constructor term is a term
in T (Σc , N ∪ X ). We denote by vars(u) the set of variables
that occur in a term u and call messages the constructor terms
that are ground (i.e., vars(u) = ∅). Sequences of elements
are shown bold (e.g. x, n). The application of a substitution
σ to a term u is written uσ , and dom(σ ) denotes its domain.
Example 1. Consider the signature

Σ = {sign, verSign, pkv, blind, unblind, commit, open, ⟨⟩, π1, π2, eq, ok}.

The symbols sign, verSign, blind, unblind, commit and
open, each of arity 2, represent signing, signature verification, blinding, unblinding, commitment and commitment opening; the symbol pkv of arity 1 maps a signing
key to the corresponding verification key. Pairing is modelled using ⟨⟩ of arity
2, whereas the projection functions are denoted π1 and
π2, both of arity 1. Finally, we consider the symbol
eq of arity 2 to model equality tests, as well as the
constant symbol ok. This signature is split into two parts:
Σc = {sign, pkv, blind, unblind, commit, ⟨⟩, ok} and
Σd = {verSign, open, π1, π2, eq}.
P, Q ::= 0                        null
       | in(c, x).P               input
       | out(c, u).P              output
       | let x = v in P else Q    evaluation
       | P | Q                    parallel
       | νn.P                     restriction
       | !P                       replication
       | i : P                    phase

where c ∈ C, x ∈ X, n ∈ N, u ∈ T(Σc, N ∪ X), i ∈ ℕ, and v ∈ T(Σ, N ∪ X).

Figure 1. Syntax of processes
As in the process calculus presented in [16], constructor
terms are subject to an equational theory used for
modelling algebraic properties of cryptographic primitives.
Formally, we consider a congruence =E on T(Σc, N ∪ X),
generated from a set of equations E over T(Σc, X).

Example 2. To reflect the algebraic properties of the
blind signature, we may consider the equational theory
generated by the following equations:

unblind(sign(blind(xm, y), zk), y) = sign(xm, zk)
unblind(blind(xm, y), y) = xm.

We can also give a meaning to destructor symbols. For this,
we need a notion of computation relation ⇓ ⊆ T(Σ, N) ×
T(Σc, N) such that for any term t, (t⇓u ⟺ t⇓u′) if, and
only if, u =E u′. While its precise definition is unimportant
for this paper, we describe how it can be obtained from
rewriting systems and give a full example in Appendix A.1.

Example 3. Symbols in Σd (see Example 1) can be given
a semantics through the following rewriting rules:

verSign(sign(xm, zk), pkv(zk)) → xm
open(commit(xm, y), y) → xm
eq(x, x) → ok
πi(⟨x1, x2⟩) → xi for i ∈ {1, 2}.
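For concreteness, the term algebra of Examples 1–3 can be declared in ProVerif's typed input language roughly as follows. This is a minimal sketch under assumptions of our own: the type assignments are ours, pairing and projections are handled by ProVerif's built-in tuples, and we follow the common practice of declaring the blind-signature properties of Example 2 as equations.

    type skey.  (* signing keys *)
    type pkey.  (* verification keys *)

    (* constructor symbols (from Σc) *)
    fun sign(bitstring, skey): bitstring.
    fun pkv(skey): pkey.
    fun commit(bitstring, bitstring): bitstring.
    fun blind(bitstring, bitstring): bitstring.
    fun unblind(bitstring, bitstring): bitstring.
    const ok: bitstring.

    (* destructor symbols (from Σd), as the rewriting rules of Example 3 *)
    reduc forall m: bitstring, k: skey; verSign(sign(m, k), pkv(k)) = m.
    reduc forall m: bitstring, r: bitstring; open(commit(m, r), r) = m.
    reduc forall x: bitstring; eq(x, x) = ok.

    (* equational theory of Example 2 (blind signatures) *)
    equation forall m: bitstring, b: bitstring; unblind(blind(m, b), b) = m.
    equation forall m: bitstring, b: bitstring, k: skey;
      unblind(sign(blind(m, b), k), b) = sign(m, k).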
For modelling purposes, we also split the signature Σ
into two parts, namely Σpub (public function symbols, known
by the attacker) and Σpriv (private function symbols). An
attacker builds his own messages by applying public function
symbols to terms he already knows and that are available
through variables in W. Formally, a computation done by the
attacker is a recipe (noted R), i.e., a term in T (Σpub , W).
Process algebra. We assume Cpub and Cpriv are
disjoint sets of public and private channel names and note
C = Cpub ∪ Cpriv . Protocols are modelled as processes using
the grammar in Figure 1.
Most of the constructs are standard. The construct
let x = v in P else Q tries to evaluate the term v; in case
of success, i.e. when v⇓u for some message u, the process
P in which x is substituted by u is executed; otherwise
the process Q is executed. Note also that the let instruction
together with the eq theory (see Example 3) can encode the
usual conditional construct (a schematic encoding is sketched
after this paragraph). The replication !P behaves
like an infinite parallel composition P | P | P | . . . We shall
always deal with guarded processes of the form i : P. Such a
construct i : P indicates that the process P may be executed
only when the current phase is i. The construct νn.P creates
a new, fresh name n. For a sequence of names
n, we may write νn.P to denote the sequence of creations
of the names in n followed by P. For brevity, we sometimes
omit "else 0" and null processes at the end of processes. We
write fv(P) for the set of free variables of P, i.e. the set
of variables that are not in the scope of an input or a let
construct. A process P is ground if fv(P) = ∅.
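For instance, the usual conditional "if u = v then P else Q" can be encoded with the let construct and the eq theory of Example 3 as follows (schematic, for arbitrary terms u and v):

    (* eq(u, v) computes ok when u and v are equal modulo E, and fails otherwise,
       so P runs exactly when the equality test succeeds *)
    let x = eq(u, v) in P else Q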
The operational semantics of processes is given by a
labelled transition system over configurations (denoted by
K ) (P ; φ; i) made of a multiset P of ground processes, i ∈ N
the current phase, and a frame φ = {w1 ↦ u1 , . . . , wn ↦ un }
(i.e. a substitution where ∀i, wi ∈ W , ui ∈ T (Σc , N )). The
frame φ represents the messages known to the attacker. Given
a configuration K , φ(K ) denotes its frame. We often write
P ∪P instead of {P }∪P and implicitly remove null processes
from configurations.
Example 4. We use the FOO protocol [18] (modelled as
in [11]) as a running example. FOO involves voters and a
registrar role. In the first phase, a voter commits and then
blinds its vote and sends this blinded commit signed with
his own signing key key(id) to the registrar. The function
symbol key(⋅) is a free (i.e. involved in no equation nor
reduction) private function symbol associating a secret
key to each identity. The registrar is then supposed to
blindly sign the committed vote with his own signing
key kR ∈ Σc ∩ Σpriv and send it back to the voter. In the
voting phase, voters anonymously output their committed
vote signed by the registrar and, on request, anonymously
send the opening for their committed vote.

The process corresponding to a voter session (depending
on some constant id and a constant v) is depicted in
Figure 2. The first three equations should be understood
as syntactic sugar, to write processes in a more compact
way. A configuration corresponding to a voter A ready
to vote v1 with an environment knowing the registrar's
key is K1 = ({V(A, v1)}; {wR ↦ kR}; 1).

V(id, v) = 1 : νk.νk′. M = commit(v, k),
    e = blind(M, k′), s = sign(e, key(id)),
    out(c, ⟨pk(key(id)); s⟩). in(c, x).
    if verSign(x, pk(kR)) = e
    then 2 : out(c, unblind(x, k′)). in(c, y).
         if y = ⟨y1; M⟩
         then out(c, ⟨y1, ⟨M, k⟩⟩)

Figure 2. Voter role of FOO (for some channel c ∈ Cpub)
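Reusing the declarations sketched after Example 3, the voter role of Figure 2 can be written in ProVerif roughly as follows. This is a sketch under assumptions of our own: we write pkv for the pk of Figure 2, declare key(⋅) as a private function and kR as a private free name, and, since ProVerif numbers phases from 0, the paper's phases 1 and 2 become ProVerif's phases 0 and 1.

    free c: channel.
    free kR: skey [private].
    fun key(bitstring): skey [private].  (* the signing key of each identity *)

    let V(id: bitstring, v: bitstring) =
      new k: bitstring; new k': bitstring;
      let m = commit(v, k) in
      let e = blind(m, k') in
      out(c, (pkv(key(id)), sign(e, key(id))));
      in(c, x: bitstring);
      let e' = verSign(x, pkv(kR)) in
      if e' = e then
      phase 1;
      out(c, unblind(x, k'));
      in(c, y: bitstring);
      let (y1: bitstring, =m) = y in
      out(c, (y1, (m, k))).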
The operational semantics of processes is given by the
relation →α, defined as the least relation over configurations
satisfying the rules in Figure 3. For all constructs, phases are
just handed over to continuation processes. Except for the
phases, the rules are quite standard and correspond to the
intuitive meaning of the syntax given in the previous section.
The first rule IN allows the attacker to send a message on a
public channel as long as it is the result of a computation
done by applying a recipe to his current knowledge. The
second rule OUT corresponds to the output of a term on a
public channel: the corresponding term is added to the frame
of the current configuration. The rule COM corresponds to
an internal communication on a private channel that the
attacker can neither eavesdrop on nor tamper with. Finally, the
rule PHASE allows a process to progress to its next phase,
and the rule NEXT allows the configuration to drop the current phase and
progress irreversibly to a greater phase.

The rules IN, OUT, NEXT are the only rules that produce
observable actions (i.e., non-τ actions). However, for reasons
that will become clear later on (notably for Definition 4),
we make a distinction when a process evolves using LET
or LET-FAIL. The relation →α1...αn between configurations
(where α1 . . . αn is a sequence of actions) is defined as the
transitive closure of →α.

IN        (i : in(c, x).P ∪ P; φ; i) →in(c,R) (i : P{x ↦ u} ∪ P; φ; i)
          with c ∈ Cpub, where R is a recipe such that Rφ⇓u for some message u
OUT       (i : out(c, u).P ∪ P; φ; i) →out(c,w) (i : P ∪ P; φ ∪ {w ↦ u}; i)
          with c ∈ Cpub and w a fresh variable in W
COM       (i : in(c, x).P ∪ i : out(c, u).Q ∪ P; φ; i) →τ (i : P{x ↦ u} ∪ i : Q ∪ P; φ; i)
          with c ∈ Cpriv
LET       (i : let x = v in P else Q ∪ P; φ; i) →τthen (i : P{x ↦ u} ∪ P; φ; i)
          when v⇓u for some u
LET-FAIL  (i : let x = v in P else Q ∪ P; φ; i) →τelse (i : Q ∪ P; φ; i)
          when v⇓̸
NEW       (i : νn.P ∪ P; φ; i) →τ (i : P ∪ P; φ; i)
          where n is a fresh name from N
REPL      (i : !P ∪ P; φ; i) →τ (i : P ∪ i : !P ∪ P; φ; i)
PAR       ({i : (P1 | P2)} ∪ P; φ; i) →τ ({i : P1, i : P2} ∪ P; φ; i)
PHASE     (i : j : P ∪ P; φ; i) →τ (j : P ∪ P; φ; i)
NEXT      (P; φ; i) →phase(j) (P; φ; j)
          for some j ∈ ℕ such that j > i

Figure 3. Semantics for processes

Example 5 (Resuming Example 4). Consider the following
execution of configuration K1: K1 →trh (∅; Φ; 2), where

trh = τ.τ.out(c, w1).in(c, R).τthen.τ.phase(2). out(c, w2).in(c, ⟨C, w2⟩).τthen.out(c, w3)

and where C is any constant in Σc ∩ Σpub, R =
sign(verSign(π2(w1), π1(w1)), wR), Φ = {w1 ↦
⟨pk(kid), s⟩, w2 ↦ sign(M, kR), w3 ↦ ⟨n; M; k⟩}, and
s, M are as specified in Figure 2. This corresponds to a
normal, expected execution of one protocol session.
Trace equivalence. We are concerned with trace
equivalence, which is commonly used [9] to express many
privacy-type properties such as ballot secrecy. Intuitively,
two configurations are trace equivalent if an attacker cannot
tell whether he is interacting with one or the other. Before
formally defining this notion, we first introduce a notion of
equivalence between frames, called static equivalence.

Definition 1. A frame φ is statically included in φ′ when
dom(φ) = dom(φ′), and
● for any recipe R such that Rφ⇓u for some u, we have
that Rφ′⇓u′ for some u′;
● for any recipes R1, R2 such that R1φ⇓u1, R2φ⇓u2, and
u1 =E u2, we have that R1φ′⇓ =E R2φ′⇓, i.e. there exist
v1, v2 such that R1φ′⇓v1, R2φ′⇓v2, and v1 =E v2.

Two frames φ and φ′ are in static equivalence, written φ ∼ φ′,
if the two static inclusions hold.
Intuitively, an attacker can distinguish two frames if he
is able to perform some computation (or a test) that succeeds
in φ and fails in φ′ (or the converse).
Example 6. Consider the frame φ from Example 5. The
fact that the attacker cannot statically distinguish this
frame from the frame obtained after the same execution
but starting with V(A, v2) instead of V(A, v1) would be
modelled by the static equivalence φ ∼? φ′, where
φ′ = φ{v1 ↦ v2}, i.e. φ′ is obtained from φ by replacing
the constant v1 by the constant v2. This equivalence does
actually not hold: for instance, the recipe
open(π1(π2(w3)), π2(π2(w3))) computes v1 in φ but not
in φ′ (see the witness given in Section A.1).
Then, trace equivalence is the active counterpart of static
equivalence, taking into account the fact that the attacker
may interfere during the execution of the process in order to
distinguish between the two situations. We define obs(tr) to
be the subsequence of tr obtained by erasing all the τ actions
(i.e. τ, τthen, τelse). Intuitively, trace equivalence holds when
any execution of one configuration can be mimicked by an
execution of the other configuration having the same observable
actions and leading to statically equivalent frames. We give
a formal definition in Section A.2.
Example 7 (Continuing Example 4). Consider Ki =
({V(A, vi)}; {wR ↦ kR}; 1) for i ∈ {1, 2}. We may
wonder whether K1 ≈? K2. This equivalence does
actually not hold because there is only one execution
starting from K1 (resp. K2) following the trace trh
(defined in Example 5), and the resulting frame is φ
(resp. φ′ as defined in Example 6). But, as shown in
Example 6, φ ≁ φ′. Therefore, K1 ≉ K2. However, ballot
secrecy is defined differently (in Section 3.2) and we
will establish that the FOO protocol actually satisfies it.
Diff-Equivalence. Trace equivalence is hard to verify,
in particular because of its forall-exists structure: for any
execution on one side, one has to find a matching execution on the other side. One approach is to consider over-approximations of trace equivalence obtained by over-approximating
the attacker's capabilities. Diff-equivalence is one of them.
This property was originally introduced to enable ProVerif
to analyse some form of behavioural equivalence, and was
later also implemented in Tamarin and Maude-NPA.
Such a notion is defined on bi-processes, which are
pairs of processes with the same structure that only differ
in the terms they use. The syntax is similar to above,
but each term u has to be replaced by a bi-term written
choice[u1 , u2 ] (using ProVerif syntax). Given a bi-process
P , the process fst(P ) is obtained by replacing all occurrences
of choice[u1 , u2 ] with u1 ; similarly with snd(P ).
The semantics of bi-processes is defined as expected via
a relation that expresses when and how a bi-configuration
may evolve. A bi-process reduces if, and only if, both sides
of the bi-process reduce in the same way, triggering the same
rule: e.g., a conditional has to be evaluated in the same way
on both sides (see the formal rule in Section A.3). When
the two sides of the bi-process reduce in different ways,
the bi-process blocks. The relation →tr_bi on bi-processes is
then defined as for processes. Finally, diff-equivalence of
a bi-process intuitively holds when for any execution, the resulting
frames on both sides are statically equivalent and the resulting
configurations on both sides are able to perform the same
kind of actions. A formal definition is given in Section A.3.
As expected, this notion of diff-equivalence is stronger
than trace equivalence. Indeed, it may be the case that the two sides
of the bi-process reduce in different ways (e.g., taking two
different branches in a conditional) but still produce the same
observable actions. Phrased differently: diff-equivalence gives
the attacker the ability to observe not only the observable actions,
but also the processes' structure. This strong notion of diff-equivalence is sufficient to establish some security properties
but is too strong to be useful for establishing ballot secrecy
(we discuss this at greater length in Section 6).
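Concretely, the typical way to submit ballot secrecy to ProVerif is a vote-swap biprocess: fst(P) runs the scenario where A votes v0 and B votes v1, while snd(P) runs the swapped scenario. A schematic sketch, reusing the voter macro sketched earlier (the corrupted registrar is modelled by publishing kR):

    free A, B, v0, v1: bitstring.

    process
      out(c, kR);               (* the attacker learns the registrar's signing key *)
      ( V(A, choice[v0, v1])    (* left: A votes v0;  right: A votes v1 *)
      | V(B, choice[v1, v0]) )  (* left: B votes v1;  right: B votes v0 *)

ProVerif then attempts to prove diff-equivalence of this biprocess. As discussed above, for many protocols this direct encoding fails with a false attack, which is precisely the limitation addressed in the remainder of this paper.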
3. Framework

Whereas in the previous section we recalled existing
work, in this section we present the framework that we need
to establish our results.

Preliminaries. We first define symbolic traces. A
symbolic trace is a trace whose recipes are symbolic; i.e.,
they are from T(Σpub, W ∪ X²), where X² is a new set
of second-order variables. Intuitively, a symbolic recipe
R ∈ T(Σpub, W ∪ X²) is a partial computation containing
unknown parts symbolised by second-order variables. Symbolic traces represent attacker behaviours with not-fully-specified recipes. A symbolic trace can be instantiated to
a concrete trace by replacing the second-order variables by
recipes (i.e., terms in T(Σpub, W)).

Example 8 (Resuming Example 5). In the last input of trh
(i.e. in(c, ⟨C, w2⟩)), the constant C could be replaced by
any recipe without altering the subsequent execution. We
may use the symbolic recipe ⟨X, w2⟩ with X ∈ X² for
that reason: the choice of the recipe to replace X with
is unimportant.
3.1. Class of e-voting protocols
We explain in this section how we model e-voting
protocols and the considered scenarios. Essentially, we may
consider an arbitrary number of honest voters plus all necessary authorities (e.g., bulletin box, registrar, tally), which
can perform an unbounded number of sessions. Depending
on the threat model, we also consider an arbitrary number
of dishonest voters. We use role to refer to a specific role
of the protocol, such as voter, authority, etc. Together, the
agents performing the roles are able to produce a public
bulletin board of ballots from which the tally computes the
final result (i.e., multisets of accepted votes).
The protocol should specify a fixed finite set of possible
votes V ⊆ T(Σc ∩ Σpub, ∅), which is a set of public constants
(e.g. V = {yes, no} for a referendum). We also distinguish a
specific public constant ⊥ ∈ T(Σc ∩ Σpub, ∅) modelling the
result of an invalid ballot. We require the constants in V ∪ {⊥}
to be free (i.e. they do not appear in E).
Roles. E-voting protocols specify a process for each
honest role (in particular, the voter role). Dishonest roles
can be left unspecified because they will be played by
the environment. Those processes may use e.g. phases,
private data, private channels but no replication nor parallel
composition, as a role specifies how a single agent behaves
during one session.
Definition 2. An honest role is specified by a process of
the form i ∶ νn.A, where A is a process without parallel
composition, replication nor creation of names (i.e. ν construct). There should be at least a process for the voter
role and one for the ballot box role (noted Ab ). Moreover,
for the specific case of voter role, the corresponding
process noted V (id, v ) should be parameterized by
id (modelling an identity) and v (modelling the vote
this voter is willing to cast). Finally, initial attacker’s
knowledge is specified through a frame φ0 .
There should be at least two honest roles: the voter role
and the ballot box role, whose process Ab shall
contain (at least) one output on the distinguished public
channel cb ∈ Cpub. Intuitively, each session of the bulletin
box processes input data and may output a ballot on channel
cb (this may depend on private checks). We eventually define
the bulletin board itself as the set of messages output on
channel cb. W.l.o.g., we assume that role processes A
do not feature creation of names, since one can always
move the creation of names to the beginning (remember that
A does not feature replication).
If the considered threat model does not include dishonest voters, then the honest voters cannot be played
by the environment. For such threat models, we model
honest voters explicitly using the following set of processes:
RV = {νid.νn.V(id, v) | v ∈ V}, where n are all the free
names in V(id, v). In threat models with dishonest voters,
honest voters are played by the environment and we let
RV = ∅. We note R the set of all processes of honest roles,
except the voter role but including the processes in RV, and let
Ro be the same set but excluding RV.
Example 9. The process V(id, v) defined in Example 4 is
the voter role one could define for the FOO protocol. We
consider the bulletin box as untrusted, and we therefore
model it by the process Ab = 2 : in(u, x).out(cb, x),
where u ∈ Cpub is a public channel. In contrast, we
leave the registrar unspecified for the moment because
we consider it corrupted and thus played by the environment. We thus have RV = ∅ and R = Ro = {Ab}.
Finally, the initial frame contains the registrar's key:
φ0 = {w0 ↦ kR}.
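In ProVerif, the untrusted bulletin box of Example 9 is a one-line macro, and the initial frame φ0 = {w0 ↦ kR} corresponds to outputting kR at the start of the main process, as in the earlier sketch (again, the paper's phase 2 is ProVerif's phase 1):

    free u, cb: channel.

    (* forwards anything received on u to the board channel cb *)
    let Ab = phase 1; in(u, x: bitstring); out(cb, x).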
Bulletin Board & Tally. We assume a public test Ψb
that everyone can execute on the bulletin board to know
whether a ballot is well-formed. Formally, Ψb[] is a term with a
hole. For instance, Ψb can be a combination of a signature
and a ZK proof verification. The protocol should also specify
a term with a hole Extract[] that models the extraction of
the vote from a valid ballot. As defined below, we require
that this operator only computes votes or ⊥. For a set S, we
denote by SE the set of elements of S quotiented by =E.

Definition 3. The bulletin board and the tally are specified
through a public term Ψb[] ∈ T(Σpub, []) and a term
Extract[] ∈ T(Σ, []) such that: for any message t, it
holds that Extract[t]⇓u for some u ∈ V ∪ {⊥}.

Given a trace tr and a frame φ, we define respectively
the bulletin board and the tally's outcome:

BB(tr, φ) = {wφ | ∃out(cb, w) ∈ tr, Ψb[wφ]⇓}E
Res(tr, φ) = {v | ∃ba ∈ BB(tr, φ), Extract(ba)⇓v ∈ V}#.

The bulletin board is the set of messages (modulo =E)
that pass the public test and are induced by an output on the
specific channel cb. Then, the tally's outcome is the multiset
of votes obtained by applying Extract(⋅) to the ballots in
the bulletin board.

Example 10 (Continuing Example 9). The public test Ψb
is defined as the following term with a hole:

Ψb[] = and(verSign(π1(π2([])), pk(kR)),
           open(getMess(π1(π2([]))), π2(π2([]))))

where the destructor and is such that and(t1, t2)⇓
if and only if t1⇓ and t2⇓ (see its formal definition
in Section A.1). Indeed, expected ballots are of
the form ⟨X, ⟨sign(commit(v, k), kR), k⟩⟩. The
evaluation of Ψb[b] may fail (i.e. Ψb[b]⇓̸) if either
the signature verification or the commitment opening
fails. Finally, the extraction function is Extract[] =
wrapVote(open(getMess(π1(π2([]))), π2(π2([]))))
where wrapVote(⋅) corresponds to the identity function
on V and maps all values not in V (modulo =E) to ⊥.
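The extraction operator of Example 10 can be sketched as a ProVerif term-level function, where the failure of a nested destructor plays the role of the test failing. This is a sketch under our own assumptions: getMess, which retrieves the signed message without the verification key, is declared as an extra destructor, and the wrapping of non-votes to ⊥ is omitted.

    reduc forall m: bitstring, k: skey; getMess(sign(m, k)) = m.

    (* a ballot is expected to be a pair (x, (sign(commit(v, k), kR), k)) *)
    letfun extract(b: bitstring) =
      let (x: bitstring, (sg: bitstring, kop: bitstring)) = b in
      open(getMess(sg), kop).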
Honest Trace. As said before, no process is given
for dishonest roles. However, we require a notion of honest
trace that itself specifies what behaviour should be expected
from dishonest roles. Intuitively, it is an abstraction of the
trace obtained by executing one session of the voter role,
possibly involving other roles in the honest, expected way.
Definition 4. The protocol shall specify a distinguished
execution that we call the honest execution, which should
be of the form ({V(id, v)} ⊎ Ro; φ0; 1) →trh (P; φh; kf)
for some vote v ∈ V and identity id ∈ T(Σc, ∅), with
trh = tr0h.out(cb, wb) (i.e., the last action corresponds to
the casting of a ballot). We assume that trh contains the
action phase(k) for all 2 ≤ k ≤ kf (no phase is skipped).
We also assume that a symbolic trace th is specified.
We call it the honest trace and require that trh is an
instantiation of th.
The honest trace describes the honest, expected execution
of one voter completing the voting process until casting a
ballot, possibly through interactions with different roles.
Here, the notion captures the fact that some corrupted roles
are played by the attacker; hence the honest trace is a
symbolic trace whose sub-messages that are chosen by the
attacker are left unknown and unspecified. Note that
the honest trace specifies how conditionals are expected to
evaluate thanks to the τthen/τelse dichotomy.
Example 11 (Resuming Example 9). We consider the
following honest trace (where X ∈ X² and R1 =
sign(verSign(π2(w1), π1(w1)), wR)):

th = τ.τ.out(c, w1).in(c, R1).τthen.τ.phase(2). out(c, w2).in(c, ⟨X, w2⟩).τthen.out(c, w3).τ. in(u, w3).out(cb, w3)

This is an extension of the symbolic trace described
in Example 8.
E-voting Protocols. We finally turn to the definition
of e-voting protocols.
Definition 5. An e-voting protocol is given by a tuple
(V ; φ0 ; V (id, v ); R; (Ψb [], Extract[]); th)
where V ⊆ T (Σc ∩ Σpub , ∅) are the allowed votes,
V(id, v) and R are the processes modelling the honest
roles and φ0 is the attacker's initial knowledge as in
Definition 2, Ψb[] and Extract[] model the bulletin
board and the tally as in Definition 3, and th describes
the intended, honest execution as in Definition 4.
Flexible threat models. Our generic definition of e-voting protocols allows modelling many different threat models. First, the processes that model roles may use different kinds of channels. For instance, by using private channels for some inputs and outputs, we model communication channels on which the attacker can neither eavesdrop nor tamper. By using public channels and adding the identity of the voter to the exchanged data, we model an insecure, non-anonymous communication channel. In contrast, by using only a single public channel, we model an anonymous communication channel, since all voters use the same channel (the attacker may have other ways to break anonymity, but this is part of our security analysis). Moreover, roles can be considered dishonest or honest, yielding different threat models. Finally, different frames φ0 allow modelling different initial attacker knowledge (e.g., the secret keys of some roles).

Annotated Processes. Finally, we equip the semantics with annotations that will help subsequent developments. We assume a notion of annotations over processes so that we can keep track of role sessions and specific voters throughout executions. Each action can then be labelled with this information. For a voter process V(id, v), we note [id, v] the annotation given to actions produced by this process. Formally, we may define such annotations by giving explicit annotations to processes in the initial multiset and modifying the semantics so that it keeps annotations on processes as one would expect. These annotations notably allow us to define when a specific voter casts a ballot, as shown next.

Definition 6. Consider an e-voting protocol (V; φ0; V(id, v); R; (Ψb[], Extract[]); th). We say that a voter V(id, v) casts a ballot w in an execution (P ⊎ {V(id, v), !Ab}; φ0; 1) →tr K when there exists an output out(c, wb) ∈ tr annotated [id, v] and a bulletin box (i.e. Ab) session sb such that the actions from tr annotated sb are in(c, wb), out(cb, w). We say that V(id, v) casts a valid ballot w when, in addition, Ψb[wφ(K)]⇓.
3.2. Ballot Secrecy

Next, we define the notion of ballot secrecy that we aim to analyse. Intuitively, ballot secrecy holds when the attacker is not able to observe any difference between two situations where voters are willing to cast different votes. However, we cannot achieve such a property by modifying just one vote, since the attacker would always be able to observe the difference in the final tally outcome. Example 7 illustrates this problem: one has K1 ≉ K2 while the FOO protocol actually ensures ballot secrecy. Instead, we shall consider a swap of votes that preserves the tally's outcome. More formally, we are interested in comparing S and Sr as defined next (where, for a multiset of processes B, !B refers to {!P | P ∈ B}#):

S = (!R) ⊎ {V(A, v0), V(B, v1)}
Sr = (!R) ⊎ {V(A, v1), V(B, v0)}

where v0, v1 are two distinct votes in V and A, B are two distinct free, public constants (i.e. in T(Σc ∩ Σpub, ∅)) that do not occur in E.

A first approximation of ballot secrecy would be trace equivalence between the configurations (S; φ0; 1) and (Sr; φ0; 1). However, most e-voting protocols would not satisfy such a property, because the attacker may force a particular voter (e.g. A) to not cast any ballot in order to infer, from the tally's outcome, the vote that the other voter (e.g. B) has cast. This attack is captured by the model but is unrealistic in practice: it would require the attacker to prevent all voters except a particular one from casting a vote in order to break the ballot secrecy of that particular voter. This justifies considering only fair executions, where both voters A and B actually cast a vote (which is in line with e.g. [12]). This notion should not be mixed up with the fairness property [12], [15], which is one of the security properties expected from e-voting protocols.

Definition 7. Consider an e-voting protocol (V; φ0; V(id, v); R; (Ψb[], Extract[]); th). An execution (P; φ0; 1) →tr K for P ∈ {S, Sr} is said to be fair when the two voter processes of identities A and B in P cast a valid ballot in that execution.
Note that a slightly weaker assumption consists in considering strong phases (or synchronisation barriers), as done in [11], [20], whereas we consider weak phases. Intuitively, one may progress to the next strong phase only when all current processes agree on this decision. Since we consider weak phases, we need the fairness assumption as a counterpart.
Finally, we give below the definition of ballot secrecy.
It can be understood as a trace inclusion between S and
Sr preserving the tally's outcome, with a restriction on the
explored traces (i.e. only the fair ones).

Definition 8 (Ballot Secrecy). An e-voting protocol
(V; φ0; V(id, v); R; (Ψb[], Extract[]); th) ensures ballot secrecy when for any fair execution (S; φ0; 1) →tr K,
there exists a fair execution (Sr; φ0; 1) →tr′ Kr such that:
● the attacker observes the same actions: obs(tr) = obs(tr′);
● the attacker observes the same data: φ(K) ∼ φ(Kr);
● the attacker observes the same tally outcome:
Res(tr, φ(K)) = Res(tr′, φ(Kr)).
Honest executions. For two traces tr1, tr2 and a
frame φ, we note tr1 ≡φ tr2 if tr1 and tr2 are equal up
to recipes and, for every recipe M1 of tr1, we have that
M1φ⇓ =E M2φ⇓, where M2 is the corresponding recipe in
tr2 (and conversely).

Definition 9. Let th be an honest trace (i.e. a symbolic trace).
● Any trace tr = thσ, for σ a substitution from the second-order variables in th (i.e. unknown parts) to recipes, is an instantiation of th.
● Given a trace tr and a frame φ, we say that tr, φ follow a trace tr′ if for some bijection of handles ρ, one has tr ≡φ (tr′ρ).
● Given a trace tr and a frame φ, we say that tr, φ follow the honest trace when they follow an instantiation of the honest trace.
● We say that a voter had an honest interaction in an execution when there exist role session annotations s1, . . . , sn such that the subtrace made of all actions labelled by this voter or some si, together with the resulting frame, follows the honest trace.

In the second item, ρ is used in order to rename the handles of outputs, and the condition on recipes ensures that tr and tr′ compute messages having the same relations w.r.t. outputs (handles). For instance, if th = out(c, w).in(c, w) is some honest trace, we would like to capture the fact that for a frame φ, any trace out(c, w).in(c, M) follows th as long as M computes the same message as w (i.e. wφ =E Mφ).

We now consider the trace trh built from th by replacing each unknown part X by a fixed free, public constant CX that we add to Σc ∩ Σpub. Note that trh is an instance of th, and both are equal when th has no unknown part (i.e. second-order variable). We call trh the idealised trace. The purpose of the idealised trace is to consider an arbitrary, fixed instantiation that is uniform across all voters and sessions.

Example 12 (Continuing Example 11). The idealised trace is th{X ↦ CX}, where CX ∈ Σc ∩ Σpub.
4. Conditions

We introduce three conditions and prove that, together,
they imply ballot secrecy. In Section 4.1 we provide intuition
for our approach and formally define the supporting notions.
We then define the conditions (i.e., Honest Relations, Tally,
and Dishonest) in Sections 4.2 to 4.4. We prove in
Section 4.5 that our conditions are sufficient.
4.1. Protocol phases and their links
Identity-leaking vs. Vote-leaking Phases. In a nutshell, ballot secrecy boils down to the absence of link between
an identity and the vote this identity is willing to cast.
However, as illustrated by the next example, the attacker is
able to link different actions performed by the same voter as
long as they take part to the same phase. Thus, each phase
of the e-voting protocol must hide and protect either the
identity of voters or the votes voters are willing to cast. It
is thus natural to associate to each phase, a leaking status:
either the phase (possibly) leaks identity (we call such phases
id-leaking) or it (possibly) leaks vote (we call such phases
vote-leaking). In order to ensure ballot secrecy, the Honest
Relations Condition, which we define later, will enforce that
the attacker cannot establish meaningful links (i.e. links that would hold for S but not for Sr) between id-leaking phase outputs and vote-leaking phase outputs.

Example 13. Consider a voter role process V(id, v) = 1 : out(a, id).out(a, v) (the other protocol components are unimportant here). This trivial protocol is a naive abstraction of a registration phase (the voter sends its identity) followed by a voting phase (the voter sends its vote). We show that it does not ensure ballot secrecy. Consider the following fair execution: (S; ∅; 1) →tr (∅; φ; 1) with tr = out(a, wid).out(a, wv).out(a, w′id).out(a, w′v) and φ = {wid ↦ A, wv ↦ v0, w′id ↦ B, w′v ↦ v1}. This execution has no indistinguishable counterpart in Sr. Indeed, because the first message reveals the identity of the voter, the attacker can make sure that the voter A executes the first output (i.e. wid) on the Sr side as well. After the first output wid ↦ A, the Sr side can only output either B or v1 (because A votes v1 in Sr) but not v0. However, because the second message reveals the vote, the attacker expects the message v0 and can test whether wv equals v0 or not. To summarise, the only executions of (Sr; ∅; 1) following the same trace tr produce frames that are never statically equivalent to φ. Thus, this protocol does not ensure ballot secrecy because in a single phase (i.e., phase 1) there is one output revealing the identity of the voter and one output revealing the voter's vote.

However, the process V(id, v) = 1 : out(a, id). 2 : out(a, v) ensures ballot secrecy and does not suffer from the above problem. Indeed, the attacker cannot force A to execute its first message, leaking its identity, and then immediately its second message, leaking its vote, because doing so would kill the process V(B, v1) (which is still in phase 1), preventing the whole execution from being fair. Thus, the attacker has to trigger all possible actions of A and B in the first phase before moving to the second phase. Immediately after the first phase, we end up with the processes {out(a, v0), out(a, v1)} on the S side and {out(a, v1), out(a, v0)} on the Sr side. Both are indistinguishable.

In conclusion, in this first iteration, we split outputs revealing the identity and outputs revealing the vote into distinct phases. This enables breaking links between identity and vote.
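The two-phase fix of Example 13 also illustrates why direct diff-equivalence is too strong. Encoded naively in ProVerif (a sketch of our own, with the paper's phases 1 and 2 written as ProVerif's phases 0 and 1), the tool reports a false attack: the resulting frame maps one handle to choice[v0, v1], and the public constant v0 distinguishes the two sides, even though the protocol ensures ballot secrecy.

    free a: channel.
    free idA, idB, v0, v1: bitstring.

    let V(id: bitstring, v: bitstring) = out(a, id); phase 1; out(a, v).

    process V(idA, choice[v0, v1]) | V(idB, choice[v1, v0])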
As we will later see, our approach requires associating
a leaking status (vote-leaking or id-leaking) to each phase.
Since a protocol typically has only a few phases,
we could simply try each possible assignment, and we would
still be more efficient than current approaches.
However, in practice and on a case-by-case basis, we
can immediately associate the appropriate leaking status to a
phase. For instance, if a phase includes an output from which
the attacker can extract an identity or anything that can be
related to the voter’s identity (e.g., the voter’s public signature
key), then the whole phase can be considered id-leaking.
Example 14 (Continuing Example 12). We consider
phase 1 as id-leaking since it contains the output
out(c, ⟨pk(key(id)), s⟩) and pk(key(id)) is in bijection
with the identity id. The last output of phase 2 leaks
the vote; we thus consider phase 2 vote-leaking.
Id-leaking vs. Vote-leaking Names. As illustrated by the next example, a name present in different outputs can also be exploited to link those outputs. This is problematic when the phases of those outputs have different leaking statuses, since it would enable linking those phases and thus maybe an identity with a vote. That is why, similarly to phases, we associate a leaking status to names created by role processes. Note that the phase in which a name is created is irrelevant: what really matters is where the name is used and to what kind of data it can be linked. Again, this classification is easily done on a case-by-case basis in practice (e.g., if the name can be extracted by the attacker from an output, then it inherits the leaking status of this output's phase), but the different possibilities can all be tested separately.

Example 15. We continue Example 13 and consider V(id, v) = 1 : νr.out(a, id ⊕ r). 2 : out(a, v ⊕ r), where ⊕ denotes the exclusive-or operator and r is a new, fresh random coin. This new protocol seems similar to the previous iteration. However, it does not satisfy ballot secrecy. Now, the attacker can use the name r to link the action of the id-leaking phase with the action of the vote-leaking phase (see Example 20), defeating the role of the phase separation which previously broke this link. Note that only names can lead to this issue: all other kinds of data (e.g., public constants) are uniform and do not depend on a specific voter session (e.g., replace r by a constant ok ∈ Σc and the resulting protocol ensures ballot secrecy). Finally, similar issues may occur with names created by other roles, because a voter may use them wrongly as before.

Example 16 (Continuing Example 14). The name k is leaked in the last output of phase 2, which is vote-leaking, and k is thus vote-leaking. We consider k′ vote-leaking, but note that considering it id-leaking also allows verification (i.e. the conditions hold in both situations).

“Divide & Conquer”. One reason that ballot secrecy is hard to verify using existing techniques is the fact that diff-equivalence in ProVerif is too rigid w.r.t. phases: it does not allow any flexibility at the beginning of phases. We should be able to stop there and start again with a new left-right pairing, a new biprocess. A core ingredient of our technique is to split the protocol into independent sub-parts (corresponding to phases), which allows us to consider many more pairings, including ones (left-right) that are not consistent over phases. The Dishonest Condition allows us to consider sub-parts instead of complete roles. Interestingly, this requires that, once all random data are chosen and fixed, each phase of the voter process can be seen as a standalone process that does not need to know the execution of past phases. This is also important to ensure ballot secrecy, because otherwise the attacker might link two actions coming from two different phases and know that they came from the same voter.

We now define the sub-parts more formally. Let n^v_i be the vector made of the constant vi and all vote-leaking names (with indices i). Let n^id_i be the vector made of the constant idi and all id-leaking names (with indices i). The pair (n^v_i, n^id_j) (deterministically) describes the initial data needed to start one honest interaction of one voter with all necessary role sessions.

Definition 10. Recall that the voter process must be of the form V(id, v) = k : νn.V′(id, v) where V′ is without creation of names. We note V(n^id_i, n^v_j) the process k : V′(idi, vj)σ where σ maps the names in n to the corresponding names in n^id_i ∪ n^v_j. We define similarly A(n^id_i, n^v_j) for A ∈ Ro. Finally, we note Ro(n^id_i, n^v_j) = {A(n^id_i, n^v_j) | A ∈ Ro}#.

Example 17 (Resuming Example 15). Assuming r is said to be vote-leaking, one has n^id_i = idi and n^v_j = vj, rj, and V(n^id_i, n^v_j) = 1 : out(a, idi ⊕ rj). 2 : out(a, vj ⊕ rj).
Intuitively, the process V(n^id_i, n^v_j) corresponds to the voter role process of identity idi and vote vj that will use all the given names instead of creating fresh ones. Similarly for authorities. Note that in the vectors n^id_i, n^v_j there may be names that are never used in some roles; we still give the full vectors as arguments though.

We remark that, given names n^v_i, n^id_j, there is a unique (modulo =E) execution of ({V(n^id_i, n^v_j)} ⊎ Ro(n^id_i, n^v_j); φ0; 1) following the idealised trace: everything is deterministic once an instance of the honest trace and all fresh names are chosen. We call that execution the idealised execution for n^v_i, n^id_j.
Definition 11 (Phase Roles). Given n^v_i, n^id_j, consider the unique idealised execution for n^v_i, n^id_j:

({V(n^id_i, n^v_j)} ⊎ R(n^id_i, n^v_j); φ0; 1) →trh (∅; φ; n).

For some A(n^id_i, n^v_j) ∈ R(n^id_i, n^v_j) and some phase number k ∈ [1; n], we note A^k_f(n^id_i, n^v_j) the first process resulting from A(n^id_i, n^v_j) of the form k : P for some P if it exists, and 0 otherwise. Finally, the phase role of A for k, noted A^k(n^id_i, n^v_j), is the process one obtains from A^k_f(n^id_i, n^v_j) by replacing by 0 each sub-process of the form l : P′ for some P′ and l > k. Further, the process A^∀(n^id_i, n^v_j) is the sequential composition of all the A^i(n^id_i, n^v_j). Finally, V^k(n^id_i, n^v_j) and V^∀(n^id_i, n^v_j) are defined similarly.
Example 18. Continuing Example 17, we would have V^1(n^id_i, n^v_j) = 1 : out(a, idi ⊕ rj), V^2(n^id_i, n^v_j) = 2 : out(a, vj ⊕ rj), and V^∀(n^id_i, n^v_j) = 1 : out(a, idi ⊕ rj). 2 : out(a, vj ⊕ rj).
In a nutshell, phase roles describe how roles behave in each phase, assuming that previous phases followed the idealised executions for the given names. The main property we eventually deduce from the Dishonest Condition is that it is sufficient (i.e., we do not lose behaviours and hence no attacks) to analyse the phase roles in parallel instead of the e-voting system. Note that, by doing so, we do not only put processes in parallel that were in sequence; we also make them forget the execution of past phases, cutting out some potential links that rely on that aspect. Indeed, the voter process in a phase i may use data received in a phase j < i, creating links between those two phases. When put in parallel, all parts are standalone processes that are no longer linked by past execution. Note also that the standalone processes we put in parallel behave as if previous phases followed one specific instantiation of the honest trace (i.e., the idealised trace). This will be crucial for defining the Honest Relations Condition via biprocesses that could not be defined otherwise.

Definition 12. The id-leaking sub-roles (respectively vote-leaking sub-roles) are as follows:

R^id(n^id_i, n^v_j) = {A^k(n^id_i, n^v_j) | A ∈ Ro ∪ {V}, k id-leaking}
R^v(n^id_i, n^v_j) = {A^k(n^id_i, n^v_j) | A ∈ Ro ∪ {V}, k vote-leaking}

Intuitively, the Dishonest Condition eventually ensures that S is indistinguishable from an e-voting system based on the reunion of the id-leaking and vote-leaking sub-roles.

Example 19 (Continuing Example 16). The phase roles V^1, V^2 are depicted in Figure 4. Finally, R^id = {V^1} and R^v = {V^2, Ab}.

V^1(n^id_id, n^v_i) = 1 : M = commit(vi, ki),
    e = blind(M, k′i), s = sign(e, key(id)),
    out(c, ⟨pk(key(id)); s⟩). in(c, x).
    if verSign(x, pk(kR)) = e then 0

V^2(n^id_id, n^v_i) = 2 : M = commit(vi, ki),
    out(c, sign(M, kR)). in(c, y).
    if y = ⟨y1; M⟩ then out(c, ⟨y1, ⟨M, ki⟩⟩)

Figure 4. Phase roles of FOO
4.2. Honest Relations Condition

This condition aims at ensuring the absence of id-vote relations in honest executions. Let n^id_A (resp. n^id_B) be the public identity idA (resp. idB) together with as many fresh names as necessary (depending on the protocol). We may use A (resp. B) to refer to idA (resp. idB). Let n^v_0 (resp. n^v_1) be the public vote v0 (resp. v1) together with fresh names. We need all those names to be pairwise distinct. Now we define the biprocess B at the core of the Honest Relations Condition:

B = ({R^id(n^id_A, choice[n^v_0, n^v_1]), R^v(choice[n^id_A, n^id_B], n^v_0),
      R^id(n^id_B, choice[n^v_1, n^v_0]), R^v(choice[n^id_B, n^id_A], n^v_1)}
     ⊎ !R; φ0; 1)

The biprocess B represents a system where votes (and vote-leaking names) are swapped in id-leaking phases and identities (and id-leaking names) are swapped in vote-leaking phases. The attacker should not be able to observe any difference in the absence of relations between the identity plus id-leaking names and the vote plus vote-leaking names.

Example 20 (Resuming Example 18). One has B = ({1 : out(a, idA ⊕ choice[r0, r1]), 2 : out(a, v0 ⊕ r0), Ab, 1 : out(a, idB ⊕ choice[r1, r0]), 2 : out(a, v1 ⊕ r1), Ab, !Ab}; ∅; 1). We argue that this biprocess is not diff-equivalent. Indeed, the attacker can xor idA, v0, an output of phase 1, and an output of phase 2. For one choice of the outputs, the attacker may obtain 0 on the left. This cannot happen on the right. The same interaction is also an attack trace for ballot secrecy.

One requirement of the Honest Relations Condition is the diff-equivalence of B. However, this first requirement does not prevent the honest trace from making explicit links between an output of an id-leaking phase and an input of a vote-leaking phase (or the converse). This happens when the honest trace is not phase-oblivious, as defined next.

Definition 13. The honest trace is phase-oblivious when:
● in every input in(c, M) of th in a phase i, the handles in M do not come from phases with a leaking status different (i.e. id-leaking vs. vote-leaking) from that of phase i; and
● no variable X of th occurs in two phases having different leaking statuses.

Condition 1. (Honest Relations) The Honest Relations Condition is satisfied if B is diff-equivalent and the honest trace is phase-oblivious.

Remark 1. Note that the swaps are inconsistent across phases (i.e. we do not swap the same things in all phases). We could not have defined such non-uniform swaps by relying on the roles' processes. Instead, this has been made possible thanks to our divide & conquer approach (see Section 4.1) and the Dishonest Condition.
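For the simple two-phase protocol of Example 13 (without the name r), the biprocess B contains no choice at all: votes do not occur in the id-leaking phase, and identities do not occur in the vote-leaking phase, so the swaps are invisible and B is trivially diff-equivalent. A sketch of the corresponding sub-roles in ProVerif (our own illustration, with phases shifted as before):

    free a: channel.
    free idA, idB, v0, v1: bitstring.

    process
        (* id-leaking sub-roles: the votes would be swapped, but none occur *)
        out(a, idA) | out(a, idB)
        (* vote-leaking sub-roles: the identities would be swapped, but none occur *)
      | (phase 1; out(a, v0)) | (phase 1; out(a, v1))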
4.3. Tally Condition
The Tally Condition's goal is to prevent ballot secrecy attacks that exploit the tally's outcome. Intuitively, the Tally Condition requires that for any valid ballot produced by S, either (i) the ballot equals an honest ballot stemming from an honest execution of A or B, or (ii) it is a dishonest ballot, and in that case the vote the tally would extract from that ballot must be the same before and after the swap A ↔ B. Formally, we deal with case (ii) by considering executions of B, so that we can always compare ballots before and after the swap A ↔ B (i.e., intuitively, in S and in Sr).
Condition 2. (Tally) We assume that B is diff-equivalent. The Tally Condition holds if for any execution B →tr B′ leading to two frames φl, φr such that the corresponding execution on the left is fair, it holds that for any ballot ba ∈ BB(tr, φl) (with w as the handle), either:
1) it equals an honest ballot: there exists a voter V(id, v) which had an honest interaction and cast a valid ballot w such that wφl =E ba; or
2) it corresponds to a dishonest ballot and must satisfy: there exists some v ∈ V ∪ {⊥} such that Extract(wφl)⇓v and Extract(wφr)⇓v.
In the first case, we accept to consider another equal (modulo
=E ) ballot to fulfil the condition because the attacker can
often copy ballots that are equal to existing honest ballots.
However, those repeated ballots will not be taken into account
with multiplicity because the tally considers the bulletin board
as a set of ballots quotiented by =E . Hence, those copies
made by the attacker won’t introduce any bias in the tally’s
outcome and are thus not harmful w.r.t. ballot secrecy.
Remark 2. The Tally Condition does not forbid making
copies of a ballot completely “blindly” (i.e., without
being able to link this ballot to a specific voter/identity).
Indeed, votes in vote-leaking phases are identical on both
sides of B and the second case (2) will thus trivially hold.
This actually improves the precision of the condition
since such copies are not harmful w.r.t. ballot secrecy.
In fact, in such a case, the attacker may observe a bias
that he might exploit to learn the vote contained in a
specific ballot, but the attacker would not be able to link
this ballot (and its vote) to a specific voter.
4.4. Dishonest Condition

This condition prevents attacks based on actively dishonest interactions, where the attacker deliberately deviates from the honest trace in order to exploit possibly more links. The idea of the condition is to be able to reduce the behaviours of the voter system to the parallel composition of all phase roles, based on the idealised execution for some names chosen non-deterministically. We want this reduction to be invisible to the attacker, even when he can learn the tally outcome. The condition requires that, if a voter process casts a ballot in an execution of the e-voting system, then it must be the case that it had an honest interaction and all agents involved in that honest interaction are not involved in other ones. The condition then requires that, when all id-leaking and vote-leaking names are fixed, the e-voting system and the sequential composition of the phase roles (i.e., the processes A∀) are indistinguishable. To make sure that the tally's outcome cannot break this equivalence, we test the former in the presence of an oracle opening all ballots.

This second requirement is defined via the diff-equivalence of a specific biprocess. We let V^D(n^id_id, n^v_i) be the biprocess obtained by the (straightforward) merge of the two following processes: (1) V(n^id_id, n^v_i) and (2) V∀(n^id_id, n^v_i) (see Definition 11). Recall that the process V∀ forgets the past execution at the beginning of each phase and behaves as if the past execution followed the idealised trace. In particular, it forgets previous (possibly malicious) input messages. We similarly define biprocesses A^D for A ∈ Ro. Given an identity id and a vote v, we define a process:

Sf(id, v) = νn^id_0.νn^v_0.(Π_{A ∈ Ro ∪ {V}} A^D((id, n^id_0), (v, n^v_0)))

where Π denotes a parallel composition and n^id_0 (resp. n^v_0) is made of all id-leaking (resp. vote-leaking) names except the identity (resp. the vote). Intuitively, Sf starts by creating all necessary names and is then ready to complete one voter session (according to the processes V and A ∈ Ro on the left and V∀, A∀ on the right) using those names. Next, the oracle opening all valid ballots is as follows: OpenBal = kf ∶ in(cu, x).let z = Ψ[x] in let v = Extract[x] in out(cu, v), where cu is some public channel and kf is the last phase that occurs in the honest trace th. Finally, we define:

B^D = ({Sf(A, v0), Sf(B, v1), !OpenBal} ∪ !R; φ0).

Example 21 (Continuing Example 19). The biprocess V^D associated to the FOO protocol is shown below:

1 ∶ M = commit(vi, ki), e = blind(M, k′i), s = sign(e, key(id)), out(c, ⟨pk(key(id)); s⟩). in(c, x). if verSign(x, pk(kR)) = e then 2 ∶ out(a, choice[unblind(x, k′i), sign(M, kR)]). in(c, y). if y = ⟨y1; M⟩ then out(c, ⟨y1, ⟨M, ki⟩⟩)

Condition 3. (Dishonest) The Dishonest Condition holds when:
1) for any fair execution (S; φ0; 1) --tr--> (P; φ; kf), if a voter process V(id, v) with id ∈ {A, B} casts a vote, then V(id, v) had an honest interaction; moreover, the authority sessions ai involved in this honest interaction are not involved in other honest interactions; and
2) if th has some unknown part (i.e., th ≠ trh), then the biprocess B^D must be diff-equivalent.

4.5. Main Theorem

Our main theorem states that our three conditions together enforce ballot secrecy. Before stating and proving it, we first show the essential property we deduce from the Dishonest Condition.

Lemma 1. Let vi, vj be some distinct votes in V. If the Dishonest Condition holds, then there exists a fair execution ({V(idA, vi), V(idB, vj)} ∪ !R; φ0; 1) --tr--> (P; φ; kf) if, and only if, there exist pairwise distinct names n^id_A, n^id_B, n^v_i, n^v_j (not including vote or identity), a trace tr′ and an execution

({P^id((idA, n^id_A), (vi, n^v_i)), P^v((idA, n^id_A), (vi, n^v_i)), P^id((idB, n^id_B), (vj, n^v_j)), P^v((idB, n^id_B), (vj, n^v_j))} ∪ !R; φ0; 1) --tr′--> (Q; ψ; kf)

where the processes labelled [idA, vi] (resp. [idB, vj]) have an honest interaction and cast a vote. In both directions, we additionally have that obs(tr′) = obs(tr), φ ∼ ψ and Res(tr, φ) = Res(tr′, ψ).

Proof 1 (Proof sketch, see full proof in Section B). We prove the direction (⇒); the other one can be proven similarly. The first item of the Dishonest Condition allows us to define names n^id_A, n^id_B, n^v_0, n^v_1 pertaining to the two disjoint honest interactions. By moving some τ-actions backwards in the given execution, we explore the same execution (up to τ-actions) starting with the left side of B^D. If th = trh then there is only one instantiation of the honest trace. Therefore, voters A and B followed
the idealised trace, and thus their executions can be exactly mimicked by executions of the processes A∀ for A ∈ Ro ∪ {V} with appropriate names. Otherwise (i.e., trh ≠ th), the diff-equivalence of B^D implies that one can replace V (resp. a role process A) by V∀ (resp. A∀) in the latter execution whilst preserving the executability of the same observable actions (i.e., obs(tr′)), leading to statically equivalent frames. Due to the constraint on phases, putting the A^i in parallel instead of in sequence (as done in A∀) allows completion of the same execution. This yields the desired execution with the appropriate multiset of processes at the beginning.
Finally, we show that the tally outcome is the same in both executions. If th = trh, this follows from the equality of the frames and the equality (up to τ-actions) of the traces. Otherwise, we prove this by contradiction: we note that the bulletin boards are the same in both executions (by static equivalence of frames and preservation of observable actions). Hence, there must be a ballot yielding two different votes on the two sides. Next, we consider an extension of the above execution of B^D where this ballot is given to the opening oracle. The output of this session of the oracle corresponds to the vote on the respective sides. By hypothesis, the output message is thus different on the two sides. Since those are constants, this contradicts the static equivalence of the resulting frames.

Theorem 1. If a protocol ensures the Dishonest Condition, the Tally Condition, and the Honest Relations Condition, then it ensures ballot secrecy.

Proof 2 (Proof sketch, full proof in Section B). Consider a fair execution (S; φ0; 1) --tr--> (P; φ; kf). We apply Lemma 1 (using the Dishonest Condition) and obtain an execution with a simpler structure, starting with phase roles. The latter execution starts with a configuration matching the left part of B, and hence we can use the diff-equivalence of B (from the Honest Relations Condition) to build an indistinguishable execution where idA and idB have swapped their votes. In the latter execution, the processes corresponding to voters idA and idB still have an honest interaction. The latter follows from the fact that the honest trace is phase-oblivious (implied by the Honest Relations Condition). Applying Lemma 1 again, we deduce the existence of an execution (Sr; φ0; 1) --tr′--> (Q; ψ; kf) where idA and idB follow the honest trace and cast a ballot, and such that obs(tr′) = obs(tr) and ψ ∼ φ.
It remains to show that Res(tr, φ) = Res(tr′, ψ). When invoking Lemma 1 twice, we obtained executions that have the same tally outcome. Thus, it suffices to prove that, when applying the diff-equivalence of B, we obtained an execution that has the same tally outcome. First, we note that the bulletin boards are the same in the two executions (i.e., the executions before and after the vote swap). This is a consequence of static equivalence over frames and preservation of observable actions. Next, we split the bulletin boards into the ballots of idA and idB (which together yield the same multiset of votes on both sides, i.e., {v0, v1}#) and the other ballots. Finally, we show that the Tally Condition implies that the part of the bulletin boards containing the other ballots yields the same multiset of votes in both executions.
5. Case Studies
We now apply our technique to several case studies, illustrating its scope and effectiveness. We show in Section 5.1
how ProVerif can be leveraged to verify the three conditions.
In Section 5.2, we present and benchmark several e-voting
protocols within our class, and explore several threat models
for one of those protocols in Section 5.3.
We benchmark the verification times of our approach against the only prior comparable approach, i.e., the direct encoding of ballot secrecy in the modified ProVerif [11], which tries to swap processes at the beginning of phases. Other approaches are either not automated or do not deal with the protocols and threat models we consider (see the discussion in Section 6). We summarise our results in Figure 5. All our ProVerif models can be found at [21].
5.1. Verifying the conditions
We explain in this section how to leverage ProVerif
to systematically verify the three conditions via dedicated
encodings. Writing these encodings is a straightforward
systematic task that we will fully automate in the future.
The Honest Relations Condition can be verified easily in ProVerif: the phase-obliviousness requirement is a simple syntactical check, and the biprocess B is easy to build and, as already discussed, is well suited to diff-equivalence checking. We explain next how to verify the two other conditions.
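To give a feel for these encodings, below is a minimal ProVerif-style sketch of such a biprocess for a toy protocol with one id-leaking and one vote-leaking phase. All names and primitives here (commit, the sub-role bodies) are illustrative assumptions of ours, not the encodings used in our case studies (those are available at [21]).

  (* Toy sketch of the biprocess B: identities fixed and votes swapped in the
     id-leaking phase; votes fixed and identities swapped in the vote-leaking
     phase. Hypothetical primitives, not the paper's generated encoding. *)
  free c: channel.
  free idA, idB, v0, v1: bitstring.
  fun commit(bitstring, bitstring): bitstring.
  reduc forall m, k: bitstring; open(commit(m, k), k) = m.

  (* id-leaking sub-role (phase 0): its output is linked to the identity. *)
  let Rid(id: bitstring, v: bitstring) =
    new k: bitstring; out(c, (id, commit(v, k))).

  (* vote-leaking sub-role (phase 1): its output is linked to the vote; in
     this toy protocol the identity argument is unused, but it is swapped
     in general. *)
  let Rv(id: bitstring, v: bitstring) =
    phase 1; out(c, v).

  process
      Rid(idA, choice[v0, v1]) | Rv(choice[idA, idB], v0)
    | Rid(idB, choice[v1, v0]) | Rv(choice[idB, idA], v1)

ProVerif then attempts to prove observational equivalence of the two sides of this biprocess, which is exactly the diff-equivalence required by Condition 1.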
The Tally Condition. We give different ways of verifying the Tally Condition, of increasing complexity.
(Syntactical check) If (i) the vote vi does not syntactically occur at all in the outputs of id-leaking phases (i.e., in Rid(n^id_i, n^v_i)) and (ii) there is no vote-leaking phase before an id-leaking phase in the honest trace th, then the condition is satisfied, since item 2 then trivially holds.
(Using a ballot-opening oracle) It is also possible to verify the Tally Condition by analysing the diff-equivalence of the biprocess B in the presence of an oracle opening all ballots (i.e., OpenBal defined in Section 4.4). The diff-equivalence of B^D = B ⊎ {!OpenBal} implies, for all executions and valid ballots, item 2 of the Tally Condition. We formally state and prove this in Section C.
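As an illustration, the oracle can be sketched in ProVerif as follows, where the toy ballot format and the destructors checkBallot (playing the role of Ψ) and extract (playing the role of Extract) are hypothetical stand-ins for the protocol-specific definitions:

  (* Sketch of the ballot-opening oracle OpenBal (toy ballot format). *)
  free cu: channel.
  fun ballot(bitstring, bitstring): bitstring.    (* ballot(vote, randomness) *)
  reduc forall v, r: bitstring; checkBallot(ballot(v, r)) = true.  (* plays Ψ *)
  reduc forall v, r: bitstring; extract(ballot(v, r)) = v.   (* plays Extract *)

  let OpenBal =
    phase 2;                      (* kf: the last phase of the honest trace *)
    in(cu, x: bitstring);
    if checkBallot(x) then        (* fails on invalid ballots, as Ψ does *)
    out(cu, extract(x)).

Running ProVerif on B ⊎ {!OpenBal} then checks that opening any ballot yields the same vote on both sides of the biprocess.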
(Refining the opening oracle) When the final ballot cast by voters occurs in an id-leaking phase, a valid, honest ballot of A yields different votes on the left and on the right of B. Giving this ballot to an instance of the oracle OpenBal allows the attacker to distinguish the left side from the right side. Nevertheless, the Tally Condition may still hold, because item 2 of the Tally Condition only concerns dishonest ballots (i.e., those cast by neither A nor B). In that case, we need to make sure that the opening oracle does not open ballots
resulting from an honest execution of A or B. Note that, in order to define such an oracle, we would need to know statically which ballot A or B will cast for fixed names n^id_i, n^v_i by following the honest trace. However, if th ≠ trh then honest ballots may depend on the chosen instantiation of the honest trace. This is why the present refinement can only be used when th = trh. Assuming the former, we then let ba^i_A be the message corresponding to the ballot cast by V(n^id_A, n^v_i) in the idealised execution for (n^id_A, n^v_i), and similarly for ba^i_B. It suffices to let the oracle open only valid ballots different from ba^0_A, ba^1_B on the left and ba^1_A, ba^0_B on the right. We let Ψ¬A[x] be choice[neq(x, ba^0_A), neq(x, ba^1_A)] (i.e., neq ∈ Σd is as expected) and symmetrically for Ψ¬B[x]. We let Ψ¬AB[x] be and(Ψ¬A[x], Ψ¬B[x]). We can finally define the oracle decrypting ballots: OpenHonBal = kf ∶ in(cu, x).let z = Ψ[x] in let z′ = Ψ¬AB[x] in let v = Extract[x] in out(cu, v). We still have that the diff-equivalence of B^D_r = B ⊎ {!OpenHonBal} implies the Tally Condition (as shown in Section C), but this implication is now tighter.
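In ProVerif, this refinement amounts to guarding the oracle with disequality tests against the statically known honest ballots. A sketch, reusing the toy declarations of the previous sketch, with hypothetical parameters bAl, bAr, bBl, bBr standing for the honest ballots on the left and right sides of the biprocess:

  (* Refined oracle: refuses to open the honest ballots of A and B. *)
  let OpenHonBal(bAl: bitstring, bAr: bitstring,
                 bBl: bitstring, bBr: bitstring) =
    phase 2;                                   (* kf, as before *)
    in(cu, x: bitstring);
    if checkBallot(x) then                     (* plays Ψ *)
    if x <> choice[bAl, bAr] && x <> choice[bBl, bBr] then  (* plays Ψ¬AB *)
    out(cu, extract(x)).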
The Dishonest Condition. Item 2 of the Dishonest Condition can be easily verified in ProVerif, for the same reason as the Honest Relations Condition. Let us explain how to verify the first item using correspondence properties over events, which ProVerif can verify. We can equip the e-voting system S with events that are fired with each input and output, carrying the exchanged messages and session annotations. Then, the fact that a specific voter casts a valid ballot can be expressed by such an event. Further, the fact that a specific voter had an honest interaction can be expressed as a sequence of implications between events. For instance, if th = out(c, w).in(c, ⟨X; w⟩), then one would write EventIn(a, ⟨x; yw⟩) ⇒ EventOut((id, v), yw), where id, v are voter annotations, a is a role session annotation, x and yw are variables, and open messages in events must pattern-match with the exchanged messages. Finally, the fact that such an honest interaction should be disjoint can be established once and for all by inspecting the idealised execution and the honest trace.
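A minimal, self-contained sketch of such an instrumentation in ProVerif syntax (the event names, the toy signed exchange, and the key setup are our own illustrative choices):

  (* Events instrumenting a toy honest trace out(c, w).in(c, <x; w>). *)
  free c: channel.
  free v0: bitstring.
  type skey.
  fun spk(skey): bitstring.
  fun sign(bitstring, skey): bitstring.
  reduc forall m: bitstring, k: skey; verSign(sign(m, k), spk(k)) = m.

  event EventOut(bitstring).   (* the voter emitted the signed message w *)
  event EventIn(bitstring).    (* an authority accepted <x; w> for some x *)

  (* Honest-interaction check: every accepted w was emitted by a voter. *)
  query w: bitstring; event(EventIn(w)) ==> event(EventOut(w)).

  let Voter(k: skey) =
    let w = sign(v0, k) in
    event EventOut(w); out(c, w).

  let Authority(pkV: bitstring) =
    in(c, (x: bitstring, w: bitstring));
    let m = verSign(w, pkV) in          (* accept only a voter-signed w *)
    event EventIn(w).

  process
    new k: skey; out(c, spk(k)); (!Voter(k) | !Authority(spk(k)))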
5.2. E-voting protocols

We now describe the different e-voting systems we verified with our approach, and compare (when possible) with the direct encoding approach (see Figure 5).
FOO. We verified the three conditions for FOO [18] as described in our running example (with a dishonest registrar and considering dishonest voters). We use the same modelling and threat model as in [11].
JCJ. We analysed the JCJ protocol [22] used in the Civitas system [23]. We adapted models from [24]. We fully modelled the registration phase, in which voters request and then obtain credentials from the registrar. We considered a threat model where the registrar and the tally are honest but their secret keys are compromised. A voter requests a credential on an insecure channel and receives their credential on an untappable channel. Voters send their ballots on an anonymous channel. Unlike the model of [25], we took the registration phase into account. Note that [24] analyses JCJ for coercion-resistance, a stronger property than ballot secrecy. However, they considered a simpler threat model, where the registration phase is completely hidden from the attacker. Moreover, their approach required non-negligible human effort, since one has to “explicitly encode in the biprocess the proof strategy”.
Belenios. We analysed the Belenios protocol [26] (in its mixnet version), which builds on the Helios protocol [27], and considered the same threat model as for JCJ. Again, contrary to [25], we took the registration phase into account.
Okamoto. We finally discuss the protocol due to Okamoto [28], as modelled in [29]. This protocol features trapdoor commitments, which no tool currently supports. Therefore, it is currently impossible to automatically verify this protocol. However, this protocol lies in our class, and our theorem thus applies. This could ease both manual verification and future automated verification (support for trapdoor commitments might become available soon for the Tamarin prover).

Protocol  | Ballot Secrecy? | Verif. time of our 3 conditions | Verif. time of the direct encoding (+ swaps)
FOO       | ✓               | 0.04                            | 0.26
Lee 1     | ✓               | 0.04                            | 46
Lee 2     | ✓               | 0.05                            | =  (collapsed phases: 45.33)
Lee 3     | ✓               | 0.01                            | =  (collapsed phases: 269.06)
Lee 4     | ✗               | 6.64                            | 169.94
JCJ       | ✓               | 18.79                           | ⋆
Belenios  | ✓               | 0.02                            | ⋆
Figure 5. Analysed protocols and threat models: we compare verification times in seconds between our approach (i.e., the time to verify all three conditions) and the direct encoding (i.e., using the swapping technique in ProVerif [11]). We ran ProVerif 1.94 on a single 2.67GHz Xeon core with 48GB of RAM. “=” means that ProVerif did not terminate within 45 hours or consumed more than 30GB of RAM; in those cases, we also tried to over-approximate the model by collapsing some phases. “⋆” marks cases where the approach yielded false attacks, and hence is not precise enough to conclude.

5.3. Threat models

We now support the claim that our class of e-voting protocols is expressive enough to capture a large class of threat models. We analyse several threat models for the protocol of Lee et al. (in the variant proposed in [29]). This protocol contains two phases. In the registration phase, each voter encrypts her vote with the Tally's public key, signs the ciphertext and (output i) sends both messages to the registrar. The registrar verifies the signature, re-encrypts the ciphertext using a fresh random value and (output ii) sends to the voter this signed re-encryption, along with a Designated Verifier Proof (DVP) of re-encryption. The voter can then verify the signature and the DVP. Finally, in the voting phase, the voter (output iii) sends her ballot, which is the previously received signed re-encryption. Note that the registrar re-encrypts and builds this DVP to enforce coercion-resistance (the strongest privacy property for e-voting protocols); i.e., even if the voter is actively coerced, the voter can pretend to have voted for a different candidate by exploiting the DVP.
Lee 1. The first threat model we consider is the only one analysed in [11]. It considers the registrar's signature key and the tally's private key corrupted, and considers infinitely many dishonest voters. The channel of outputs (i) and (ii) (i.e., between voters and registrar) is assumed untappable (i.e., everything is completely invisible to the attacker, even the fact that a communication happened), while the channel of output (iii) is anonymous. Since the registrar's signing key is corrupted, the dishonest voters do not need access to registrar sessions (these can be played by the environment). Similarly, since the tally's private key is known to the attacker, there is no need to explicitly model the tally when verifying ballot secrecy directly in ProVerif. This simplifies the models one needs to consider, partly explaining the effectiveness of the direct encoding approach here (46s).
Lee 2. Now, we no longer consider the tally's key corrupted. When verifying ballot secrecy without our conditions, it is
thus mandatory to explicitly model the tally. This change to the model causes ProVerif not to terminate on the direct encoding of ballot secrecy. We also considered a variant of the direct encoding approach in which we over-approximate the model: we collapsed the two phases into one, which enables ProVerif to terminate in 45.33 seconds on the direct encoding. Unfortunately, this alternative encoding is not systematic: if the security relies on the phases, one would obtain false attacks. For instance, removing the first phase (i.e., no phase annotation any more) causes ProVerif to return a false attack. In contrast, the verification of our conditions only takes a fraction of a second.
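For intuition, the collapsing over-approximation can be pictured on a toy two-phase voter as below (hypothetical functions): removing the phase separation only gives the attacker more interleavings, so a proof for the collapsed process is expected to carry over to the phased one, whereas an attack found on it may be spurious, as just discussed.

  (* Two-phase voter vs. its collapsed over-approximation (toy functions). *)
  free c: channel.
  fun register(bitstring): bitstring.
  fun ballot(bitstring): bitstring.

  let V(id: bitstring, v: bitstring) =
    out(c, register(id));
    phase 1;                       (* voting only starts in phase 1 *)
    out(c, ballot(v)).

  let Vcollapsed(id: bitstring, v: bitstring) =
    out(c, register(id));
    out(c, ballot(v)).             (* both outputs available at once *)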
Lee 3. Starting from the previous threat model, we additionally assume that the registrar's signing key is secure. Consequently, we now need to explicitly model a registrar for the dishonest voters. As for the previous threat model, ProVerif is not able to directly prove ballot secrecy; after collapsing phases, it terminates in 269.06 seconds. In contrast, our approach still concludes in less than a tenth of a second.
Lee 4. We modify the previous threat model by weakening the security of output channel (i): we consider it insecure instead of untappable. In this case, ballot secrecy no longer holds. Indeed, the attacker can eavesdrop on the signed encrypted vote v sent by a voter id at output (i), extract the ciphertext, sign it with the key of a dishonest voter, and send this to the registrar to obtain a re-encryption of the same vote signed by the registrar. The latter can be cast as a valid ballot. This ballot contains the same vote v that id is willing to cast, but is different (modulo =E) from the ballot of id, since it is based on different random values. This interaction violates ballot secrecy, because the tally's outcome will not be the same when starting with S or Sr. When verifying our conditions, ProVerif returns an attack on the Tally Condition (verified using the ballot-opening oracle, as explained in Section 5.1). By inspecting the attack trace, we can immediately infer the real attack on ballot secrecy just described. When using ProVerif on the direct encoding, we obtained the attack in 169.94 seconds.
6. Related Work

As already argued, diff-equivalence in ProVerif is known to be not precise enough to analyse vote-privacy via a direct encoding (acknowledged, e.g., in [10], [11], [20], [29]). So far, the main technique to address this problem has been the swapping approach. We compare ourselves to this approach in Section 6.1 and discuss other approaches in Section 6.2.

6.1. Swapping Approach

The original idea of the swapping approach originates from [20]; it was proven correct in [11] and afterwards implemented in ProVerif. It aims at improving the precision of diff-equivalence for protocols with synchronisation barriers, which, intuitively, are phases in the e-voting context.4 The main idea consists of guessing some permutations of processes at the beginning of each synchronisation barrier and then verifying a modified version of the protocol using those permutations. An example showing this mechanism is given in Section D. Theoretically, those process permutations do not break trace equivalence, since they transform configurations into structurally equivalent configurations. Note that this approach is only compatible with replication in a very constrained way: all phases above and underneath a replication must be removed,5 causing a loss of precision. The permutations are implemented using private channels on which permuted processes exchange their data. Given a model with barriers, the front-end of ProVerif first generates several biprocesses without barriers, each corresponding to a possible swap strategy (i.e., the permutation done at each barrier). The equivalence holds if one of the produced biprocesses is diff-equivalent (proven in [11]).

4. Synchronisation barriers (or strong phases) need all processes in the multiset to be ready to move to the next phase, while phases (or weak phases) allow some processes that are not ready to be “dropped”. In the context of e-voting protocols, those notions intuitively coincide, since we consider fair executions whose voter processes are never dropped.
5. Intuitively, this is necessary to consider finitely many permutations.

We now support the claims we made in the introduction about the different problems the swapping approach cannot tackle (while our approach can). First, it cannot deal with honest roles present in different phases (except for the voter role). Indeed, such roles would require synchronisation barriers. Moreover, because a potentially unbounded number of dishonest voters communicate with those authorities, this requires modelling an unbounded number of sessions for them. However, in the swapping approach, there cannot be a replication underneath a synchronisation barrier, which
means we cannot model such roles.6 We encountered this problem for a simplified variant of JCJ and Belenios: it was impossible to model an unbounded number of sessions of the registrar role, which creates and sends credentials to voters in the registration phase and then sends encrypted credentials to the bulletin board (so that the latter can verify eligibility).

6. Splitting the role process into multiple processes exchanging data through a private channel during the synchronisation barrier is not an option, because it does not provide the swapping capability.
Second, for a similar reason, it cannot tackle threat
models where no dishonest voter is assumed: an arbitrary
number of honest voters have to be explicitly modelled in
the system. Since they are present in multiple phases, this
would require replication.
Third, it gives rise to false attacks when the protocol's security also relies on the freshness of some data from previous phases. The problem is that, for the generated processes, ProVerif considers as a valid execution two different sessions of a certain phase using the same data resulting from one single session of a previous phase. The reason is that, for a fixed swap strategy, ProVerif replaces the synchronisation barriers of a process P by private communications exchanging all the data the process currently knows with another process Q with which the swap occurs.
Those new private communications are abstracted by
the Horn-Clause approximations used by ProVerif: an input
on a private channel p is not consumed upon use, and can
be replayed later on. Therefore, ProVerif also explores the
possibility of swapping data with an old session of Q whose
data has already been swapped before.
This caused the false attacks for JCJ and Belenios (see
Figure 5); the credential being the fresh data coming from
the registration phase and used during the voting phase.
Finally, the swapping approach suffers from an exponential blow-up. Indeed, the compiler produces Π_j (n_j!) processes, where n_j is the number of processes active at phase j (e.g., 3 phases with n_j = 3 lead to 216 processes to verify). This is unsurprising, since the compiler has to generate as many processes as there are possible swaps, in order to guess one that yields security.
6.2. Other approaches

Small-attack Property. A different line of work is to devise small-attack properties. In [12], the authors devise such a theorem for ballot secrecy: they show that proving ballot secrecy for some specific finite scenarios implies ballot secrecy for the general, unbounded case. The focus of that work is on complex ballot-weeding mechanisms, as found in Helios [27] for instance; they are the first to propose automated verification that deals with such mechanisms. On the other hand, they require that the pre-tally part of the protocol consist of only one voting phase. Moreover, the single voting phase must be action-determinate (same actions yield statically equivalent frames). This approach is thus not able to deal with e-voting protocol models involving more than one phase, like the ones we consider in Section 5. Moreover, considering only one phase greatly simplifies the verification, since it hides the aforementioned problems of diff-equivalence in ProVerif. Finally, the finite scenarios are still rather big, leading to state-space explosion problems. For instance, the authors were unable to automatically verify the JCJ protocol (without modelling the registration phase at all) for this reason.
Case-by-case Analysis. On a case-by-case basis, it is sometimes possible to directly prove ballot secrecy using diff-equivalence and dedicated encodings, as is done in [10]. They verified a strictly stronger privacy property than ballot secrecy (i.e., coercion-resistance) on JCJ for a specific threat model (simpler than the one we consider in Section 5). However, their approach is not systematic and requires non-negligible human effort, since one has to “explicitly encode in the biprocess the proof strategy”.
The analysis method we develop in this paper borrows the methodological approach of [30]: devise sufficient conditions implying a complex privacy property that is hard to verify, via a careful analysis and categorisation of known attacks. However, we target a different class of protocols and a different property, and the resulting sufficient conditions are totally unrelated.

7. Conclusion

We presented three conditions that together imply ballot secrecy. They proved to be tight enough to be conclusive on several case studies. Verifying ballot secrecy via our conditions constitutes a new approach that outperforms prior works: we cover a greater class of e-voting protocols and threat models, and their analysis is more efficient.
Based on the main limitations of our current work, we identify several avenues for future work. First, our method is currently unable to deal with revotes. While adding revotes might be directly achievable, we conjecture that considering revoting policies (e.g., ballot weeding in Helios, as done in [12]) is a more intricate challenge. Second, we conjecture that the restriction to fair executions used in our definition of ballot secrecy (which is in line with, e.g., [12]) could possibly be relaxed. In prior works [11], [20] based on strong phases (i.e., synchronisation barriers), the corresponding assumption would be that, at the beginning of each phase, the processes PA, PB of voters idA and idB are non-null. One could target an even weaker assumption by requiring that, at the beginning of each phase, PA is null if, and only if, PB is null. Next, even though our method is systematic, it currently requires a minimal amount of manual pre-processing. This could be addressed, e.g., by developing a simple front-end to ProVerif taking an e-voting protocol as input and returning models suitable for the verification of each of our conditions, following the encodings described in Section 5.1. Finally, we believe that our conditions can be adapted to enforce more complex privacy-type properties of e-voting protocols, such as receipt-freeness and coercion-resistance [24], [29].
We conclude that the methodology we presented in this paper is a successful approach that advances the state of the art for the highly complex problem of verifying privacy-related properties, and paves the way for further improvements.
References

[1] D. Bernhard, V. Cortier, D. Galindo, O. Pereira, and B. Warinschi, “SoK: A comprehensive analysis of game-based ballot privacy definitions,” in 2015 IEEE Symposium on Security and Privacy. IEEE, 2015, pp. 499–516.
[2] V. Cortier, D. Galindo, R. Küsters, J. Müller, and T. Truderung, “SoK: Verifiability notions for e-voting protocols,” in 36th IEEE Symposium on Security and Privacy (S&P’16), 2016.
[3] V. Cortier and B. Smyth, “Attacking and fixing Helios: An analysis of ballot secrecy,” Journal of Computer Security, vol. 21, no. 1, pp. 89–148, 2013.
[4] S. Kremer and P. Rønne, “To du or not to du: A security analysis of du-vote,” in Proceedings of the 1st IEEE European Symposium on Security and Privacy (EuroS&P’16). Saarbrücken, Germany: IEEE Computer Society, Mar. 2016, pp. 303–323.
[5] S. Meier, B. Schmidt, C. Cremers, and D. Basin, “The Tamarin prover for the symbolic analysis of security protocols,” in Proc. 25th International Conference on Computer Aided Verification (CAV’13), ser. LNCS, vol. 8044. Springer, 2013, pp. 696–701.
[6] A. Armando et al., “The AVANTSSAR platform for the automated validation of trust and security of service-oriented architectures,” in Proc. 18th International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS’12), vol. 7214. Springer, 2012, pp. 267–282.
[7] B. Blanchet, “An efficient cryptographic protocol verifier based on Prolog rules,” in Proceedings of CSFW’01. IEEE Comp. Soc. Press, 2001, pp. 82–96.
[8] C. Cremers, “The Scyther tool: Verification, falsification, and analysis of security protocols,” in Proceedings of CAV’08, ser. LNCS. Springer, 2008.
[9] S. Delaune and L. Hirschi, “A survey of symbolic methods for establishing equivalence-based properties in cryptographic protocols,” ArXiv e-prints, Oct. 2016.
[10] M. Backes, C. Hritcu, and M. Maffei, “Automated verification of remote electronic voting protocols in the applied pi-calculus,” in Proc. 21st IEEE Computer Security Foundations Symposium (CSF’08). IEEE Computer Society Press, 2008, pp. 195–209.
[11] B. Blanchet and B. Smyth, “Automated reasoning for equivalences in the applied pi calculus with barriers,” in CSF’16: 29th Computer Security Foundations Symposium, 2016, to appear.
[12] M. Arapinis, V. Cortier, and S. Kremer, “Three voters are enough for privacy properties,” in ESORICS’16, 2016, to appear.
[13] V. Cortier, “Electronic voting: how logic can help,” in International Joint Conference on Automated Reasoning. Springer, 2014, pp. 16–25.
[14] S. Santiago, S. Escobar, C. Meadows, and J. Meseguer, “A formal definition of protocol indistinguishability and its verification using Maude-NPA,” in Security and Trust Management. Springer, 2014, pp. 162–177.
[15] V. Cortier and B. Smyth, “Attacking and fixing Helios: An analysis of ballot secrecy,” Journal of Computer Security, vol. 21, no. 1, pp. 89–148, 2013.
[16] B. Blanchet, M. Abadi, and C. Fournet, “Automated verification of selected equivalences for security protocols,” Journal of Logic and Algebraic Programming, 2008.
[17] M. Abadi and C. Fournet, “Mobile values, new names, and secure communication,” in Proc. 28th Symposium on Principles of Programming Languages (POPL’01). ACM Press, 2001.
[18] A. Fujioka, T. Okamoto, and K. Ohta, “A practical secret voting scheme for large scale elections,” in International Workshop on the Theory and Application of Cryptographic Techniques. Springer, 1992, pp. 244–251.
[19] S. Delaune, S. Kremer, and M. D. Ryan, “Verifying privacy-type properties of electronic voting protocols,” Journal of Computer Security, no. 4, 2008.
[20] S. Delaune, M. D. Ryan, and B. Smyth, “Automatic verification of privacy properties in the applied pi-calculus,” in Proceedings of the 2nd Joint iTrust and PST Conferences on Privacy, Trust Management and Security (IFIPTM’08), ser. IFIP Conference Proceedings, vol. 263. Springer, 2008.
[21] “ProVerif models for our case studies,” https://sites.google.com/site/oakland17evoting/, accessed: 2016-11-11.
[22] A. Juels, D. Catalano, and M. Jakobsson, “Coercion-resistant electronic elections,” in Proceedings of the 2005 ACM Workshop on Privacy in the Electronic Society. ACM, 2005, pp. 61–70.
[23] M. R. Clarkson, S. Chong, and A. C. Myers, “Civitas: Toward a secure voting system,” in 2008 IEEE Symposium on Security and Privacy (SP 2008), May 2008, pp. 354–368.
[24] M. Backes, C. Hritcu, and M. Maffei, “Automated verification of remote electronic voting protocols in the applied pi-calculus,” in 2008 21st IEEE Computer Security Foundations Symposium. IEEE, 2008, pp. 195–209.
[25] M. Arapinis, V. Cortier, and S. Kremer, “When are three voters enough for privacy properties?” in European Symposium on Research in Computer Security. Springer, 2016, pp. 241–260.
[26] V. Cortier, D. Galindo, S. Glondu, and M. Izabachène, “Election verifiability for Helios under weaker trust assumptions,” in European Symposium on Research in Computer Security. Springer, 2014, pp. 327–344.
[27] B. Adida, “Helios: Web-based open-audit voting,” in USENIX Security Symposium, vol. 17, 2008, pp. 335–348.
[28] T. Okamoto, “An electronic voting scheme,” in Advanced IT Tools. Springer, 1996, pp. 21–30.
[29] S. Delaune, S. Kremer, and M. D. Ryan, “Verifying privacy-type properties of electronic voting protocols,” Journal of Computer Security, vol. 17, no. 4, pp. 435–487, Jul. 2009.
[30] L. Hirschi, D. Baelde, and S. Delaune, “A method for verifying privacy-type properties: the unbounded case,” in Proceedings of the 37th IEEE Symposium on Security and Privacy (S&P’16), 2016, to appear.

Appendix A.
Model and Definitions

A.1. Computation relation and rewriting systems

While the precise definition of the computation relation is unimportant for this paper, we describe how it can be obtained from rewriting systems.
A rewriting system is a set of rewriting rules g(u1, . . . , un) → u, where g is a destructor and u, u1, . . . , un ∈ T(Σc, X). A ground term t can be rewritten into t′ if there is a position p in t and a rewriting rule g(u1, . . . , un) → u such that t|p = g(v1, . . . , vn) and v1 =E u1θ, . . . , vn =E unθ for some substitution θ, and t′ = t[uθ]p (i.e., t in which the sub-term at position p has been replaced by uθ). Moreover, we assume that u1θ, . . . , unθ, and uθ are messages. Finally, for some term t, we have t⇓u when u is a message such that t → u. We write t⇓ to denote that there exists some u such that t⇓u, and write t⇓̸ otherwise.

Example 22. The properties of the symbols in Σd (see Example 1) are reflected through the following rewriting rules:

open(commit(xm, y), y) → xm
eq(x, x) → ok
verSign(sign(xm, zk), pkv(zk)) → xm
πi(⟨x1, x2⟩) → xi for i ∈ {1, 2}

This rewriting system is convergent modulo the equational theory E given in Example 2, and induces a computation relation as described above. For instance, we have that open(verSign(t, pkv(skA)), kc)⇓v where t = unblind(sign(blind(commit(v, kc), kb), skA), kb), because t =E sign(commit(v, kc), skA).

Example 23. We are able to model the boolean operator ∧ using a destructor and ∈ Σd ∩ Σpub and the following rewriting rule: and(x, y) → ok. The induced computation relation satisfies, for any terms t1, t2, that and(t1, t2)⇓ok if, and only if, t1⇓ and t2⇓.
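For concreteness, the rules of Examples 22 and 23 transliterate almost directly into ProVerif destructor declarations. The sketch below is illustrative (ProVerif has native tuples and projections; the explicit pair symbol and the name andBool are our own choices):

  (* Rewriting rules of Examples 22 and 23 as ProVerif destructors. *)
  type skey.
  const ok: bitstring.
  fun commit(bitstring, bitstring): bitstring.
  fun sign(bitstring, skey): bitstring.
  fun pkv(skey): bitstring.
  fun pair(bitstring, bitstring): bitstring.

  reduc forall xm, y: bitstring; open(commit(xm, y), y) = xm.
  reduc forall x: bitstring; eq(x, x) = ok.
  reduc forall xm: bitstring, zk: skey; verSign(sign(xm, zk), pkv(zk)) = xm.
  reduc forall x1, x2: bitstring; proj1(pair(x1, x2)) = x1.
  reduc forall x1, x2: bitstring; proj2(pair(x1, x2)) = x2.
  reduc forall x, y: bitstring; andBool(x, y) = ok.  (* Example 23's "and" *)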
A.2. Trace Equivalence

Example 24. Continuing Example 6, we give the formal witness of static inequivalence. Recall that φ is as in Example 5 and φ′ = φ{v1 ↦ v2}, i.e., the frame one obtains from φ by replacing the constant v1 by the constant v2. If we let Ro be the recipe open(π2(π1(w3)), π2(π2(w3))) and R1 be the recipe v1, one would obtain Roφ⇓v1 and R1⇓v1 =E v1, while Roφ′⇓v2 but R1⇓v1 ≠E v2.

Recall that obs(tr) is defined as the subsequence of tr obtained by erasing all the τ-actions (i.e., τ, τthen, τelse).

Definition 14. Let K1 and K2 be two configurations. We say that K1 is trace included in K2, written K1 ⊑ K2, when, for any K1 --tr--> K1′, there exists K2 --tr′--> K2′ such that obs(tr) = obs(tr′) and φ(K1′) ∼ φ(K2′). They are in trace equivalence, written K1 ≈ K2, when K1 ⊑ K2 and K2 ⊑ K1.

A.3. Diff-Equivalence

The semantics of biprocesses is defined as expected, via a relation that expresses when and how a bi-configuration may evolve. A biprocess reduces if, and only if, both sides of the biprocess reduce in the same way, triggering the same rule: e.g., a conditional has to be evaluated in the same way on both sides. For instance, the THEN and ELSE rules for the LET construct are depicted in Figure 6. When the two sides of the biprocess reduce in different ways, the biprocess blocks. The relation --tr-->bi on biprocesses is therefore defined as for processes. This leads us to the following notion of diff-equivalence.

Definition 15. A bi-configuration K0 satisfies diff-equivalence if, for every bi-configuration K = (P; φ) such that K0 --tr-->bi K for some trace tr, we have that:
● both sides generate statically equivalent frames: fst(φ) ∼ snd(φ);
● both sides are able to execute the same actions: if fst(K) --α--> AL then there exists a bi-configuration K′ such that K --α-->bi K′ and fst(K′) = AL (and similarly for snd).

LET:      ({i ∶ let x = choice[vl, vr] in P else Q} ∪ P; φ; i) --τthen--> ({i ∶ P{x ↦ choice[ul, ur]}} ∪ P; φ; i)  when vl⇓ul and vr⇓ur for some ul, ur
LET-FAIL: ({i ∶ let x = choice[vl, vr] in P else Q} ∪ P; φ; i) --τelse--> ({i ∶ Q} ∪ P; φ; i)  when vl⇓̸ and vr⇓̸
Figure 6. Two rules of the semantics for biprocesses

Appendix B.
Proofs

Lemma 1. Let vi, vj be some distinct votes in V. If the Dishonest Condition holds, then there exists a fair execution ({V(idA, vi), V(idB, vj)} ∪ !R; φ0; 1) --tr--> (P; φ; kf) if, and only if, there exist pairwise distinct names n^id_A, n^id_B, n^v_i, n^v_j (not including vote or identity), a trace tr′ and an execution

({P^id((idA, n^id_A), (vi, n^v_i)), P^v((idA, n^id_A), (vi, n^v_i)), P^id((idB, n^id_B), (vj, n^v_j)), P^v((idB, n^id_B), (vj, n^v_j))} ∪ !R; φ0; 1) --tr′--> (Q; ψ; kf)

where the processes labelled [idA, vi] (resp. [idB, vj]) have an honest interaction and cast a vote. In both directions, we additionally have that obs(tr′) = obs(tr), φ ∼ ψ and Res(tr, φ) = Res(tr′, ψ).

Proof 3. (⇒) The first item of the condition implies that A and B had (disjoint) honest interactions. This allows us to properly define names n^id_A, n^id_B, n^v_i, n^v_j and all the (disjoint) authority sessions involved in those two disjoint honest interactions. As a slight abuse of notation, we may omit the vote or the identity from those vectors and write, for instance, n^id_A to refer to (idA, n^id_A). Moving some τ-actions backwards, one obtains an execution

(({V(n^id_A, n^v_i), V(n^id_B, n^v_j)} ∪ !R ∪ ⋃_{A ∈ Ro} {A(n^id_A, n^v_i), A(n^id_B, n^v_j)}; φ0; 1) --tr′--> (P′; φ; kf)    (1)

with obs(tr′) = obs(tr). If th = trh then there is only one instantiation of the honest trace. Therefore, voters A and B followed the idealised trace, and thus their executions can be exactly mimicked by executions of the processes A∀ for A ∈ Ro ∪ {V} with appropriate names. Indeed, once the names n^id_id, n^v_i are fixed, there is only one possible execution (modulo =E) following the trace trh, and this is, by definition of those processes, the execution A∀ can play. In that case, the resulting frame is ψ = φ. Otherwise (i.e., trh ≠ th), we remark that the multiset of processes at the beginning of Execution 1 can be reached from the multiset of processes {Sf(A, v0), Sf(B, v1)} ∪ !R. Execution 1 can thus be played by the right side of the biprocess B^D (or by a biprocess one can obtain from it by applying a bijection of free public constants). Applying the diff-equivalence of B^D (i.e., the second item of the Dishonest Condition) and the fact that diff-equivalence is stable under bijections of free public constants allows us to replace V (resp. a role process A) by V∀ (resp. A∀) in the latter execution whilst preserving the executability of the same observable actions (i.e., obs(tr′)), leading to a frame ψ ∼ φ. Next, we remark that, thanks to the constraint on phases, putting the A^i in parallel instead of in sequence (as done in A∀) allows the same execution to be completed. We can also remove !OpenBal from the starting multiset whilst preserving executability. We thus get the desired execution with the appropriate multiset of processes at the beginning.
In order to conclude, we shall prove Res(tr, φ) = Res(tr′, ψ) (in both cases we distinguished). In the case th = trh, this follows from ψ = φ and the fact that tr′ has the same observable actions as tr. Otherwise, we assume Res(tr, φ) ≠ Res(tr′, ψ) for the sake of contradiction. We remark that BB(tr, φ) = BB(tr′, ψ) follows from φ ∼ ψ and obs(tr) = obs(tr′). Further, there exists a handle w such that out(cb, w) ∈ tr (and thus out(cb, w) ∈ tr′), Ψb[wφ]⇓u for some u (and thus Ψb[wψ]⇓u′ for some u′, since φ ∼ ψ and Ψb is a public term), and Extract(wφ)⇓vl ∈ V ∪ {⊥}, Extract(wψ)⇓vr ∈ V ∪ {⊥} with vl ≠ vr. At least on one side, the extraction does not lead to ⊥ (otherwise we would have vl = vr). By symmetry, we assume Extract(wφ)⇓vl ∈ V. We now consider a straightforward extension of the previously considered execution of the biprocess B^D by adding the trace trO = τ.in(cu, w).τthen.out(cu, we). This trace corresponds to the replication of the OpenBal process and the usage of one instance of the oracle, which tries to extract a vote from the ballot w. Note that the given execution can be extended with this trace on the left (the conditional holds) and, because B^D is diff-equivalent, on the right as well. We call φ^O_l (resp. φ^O_r) the resulting frame on the left (resp. on the right). By diff-equivalence, it holds that φ^O_l ∼ φ^O_r. We remark that weφ^O_l =E vl and weφ^O_r =E vr. Recall that vl and vr are two different public constants not involved in equations in E; therefore, weφ^O_r ≠E vl. Hence the recipe eq(we, vl) (recall that vl is a public constant) does not fail when applied on the left (i.e., on φ^O_l) but does fail when applied on the right (i.e., on φ^O_r), contradicting φ^O_l ∼ φ^O_r.
(⇐) Via a similar proof, we make use of the diff-equivalence of B^D to get an execution with real role processes when A and B did not follow the idealised trace; otherwise, we get the execution with real role processes from the uniqueness of executions following trh. The execution from S can then be obtained by creating appropriate names, since they are pairwise distinct.

Theorem 1. If a protocol ensures the Dishonest Condition, the Tally Condition, and the Honest Relations Condition, then it ensures ballot secrecy.

Proof 4. Consider a fair execution (S; φ0; 1) --tr--> (P; φ; kf). We now apply Lemma 1 and obtain an execution

({P^id(n^id_A, n^v_0), P^v(n^id_A, n^v_0), P^id(n^id_B, n^v_1), P^v(n^id_B, n^v_1)} ∪ !R; φ0; 1) --tr′--> (Q; φ1; kf)    (2)

where all names in n^id_A, n^id_B, n^v_0, n^v_1 are fresh, pairwise distinct names, such that the processes labelled [idA, v0] (resp. [idB, v1]) have an honest interaction and cast a vote, obs(tr′) = obs(tr), φ ∼ φ1 and Res(tr, φ) = Res(tr′, φ1). We now deduce from the diff-equivalence of B (i.e., the first item of the Honest Relations Condition) and the fact that diff-equivalence is stable under bijections of names, an execution:

({P^id(n^id_A, n^v_1), P^v(n^id_B, n^v_0), P^id(n^id_B, n^v_0), P^v(n^id_A, n^v_1)} ∪ !R; φ0; 1) --tr′--> (Q′; ψ; kf)    (3)

with ψ ∼ φ1. Remark that, in the latter execution, the processes labelled [idA, v1] (resp. [idB, v0]) have an honest interaction and cast a vote as well. This relies on the fact that the honest trace is phase-oblivious, by the Honest Relations Condition. Thereby, if (i) a set of processes follows the honest trace for all phases having the same leaking status in isolation, then (ii) it follows the honest trace. Property (i) on Execution 3 follows from the fact that this property holds in Execution 2 and is transferred to Execution 3 by the diff-equivalence. For instance, the processes in P^id(n^id_A, n^v_0) and P^v(n^id_B, n^v_1) follow the honest trace phase-by-phase in Execution 2. By diff-equivalence, P^id(n^id_A, n^v_1) and P^v(n^id_A, n^v_1) follow the honest trace in Execution 3, and thus the processes labelled [idA, v1] have an honest interaction in Execution 3. By Lemma 1, we deduce the existence of an execution

(Sr; φ0; 1) --tr′′--> (Q′′; ψ1; kf)
where idA and idB follow the honest trace and cast a ballot, and such that obs(tr′′) = obs(tr′), ψ ∼ ψ1 and Res(tr′, ψ) = Res(tr′′, ψ1).
It remains to show that Res(tr, φ) = Res(tr′′, ψ1). Since Res(tr′, ψ) = Res(tr′′, ψ1) and Res(tr, φ) = Res(tr′, φ1), it suffices to show Res(tr′, φ1) = Res(tr′, ψ). We let φl = φ1, φr = ψ, bl = BB(tr′, φl), and br = BB(tr′, φr). First, we have that, if out(cb, wb) occurs in tr′, then wb induces a valid ballot on the left if, and only if, it induces a valid ballot on the right. This is because φl ∼ φr and Ψb[⋅] is a public term (and thus induces a recipe). Next, there are two handles w0, w1 corresponding to the honest casting of [idA, v0] (resp. [idA, v1]) and [idB, v1] (resp. [idB, v0]) on the left (resp. on the right). We have that Extract(wjφl)⇓vj and Extract(wjφr)⇓v1−j for j = 0, 1. We can now split the set of valid ballots into the ballots w0, w1 and the others: for d ∈ {l, r}, one has bd = {w0φd, w1φd} ⊔ {b^1_d, . . . , b^l_d}, where b^i_d = w^iφd for some handles w^i, and Extract({w0φl, w1φl}) = Extract({w0φr, w1φr}) = {v0, v1}#. It remains to show that Extract({b^1_l, . . . , b^l_l}) = Extract({b^1_r, . . . , b^l_r}). We actually have that Extract(b^i_l) = Extract(b^i_r) for all 1 ≤ i ≤ l, as a direct consequence of the Tally Condition.
Appendix C.
Case Studies

(Resuming Section 5.1.) We now state that B^D and B^D_r can be used to verify the Tally Condition.

Lemma 2. If (i) B^D is diff-equivalent, or (ii) B^D_r is diff-equivalent and th = trh, then the protocol ensures the Tally Condition.

Proof 5. The outline of the proof is as follows: we first remark that B is diff-equivalent as well; then we use the oracle OpenHonBal or OpenBal to show that the Tally Condition holds.
Diff-equivalence is stable under removal of processes from the initial multiset. Formally, for a biprocess B, if B = ({P} ∪ Q; φ) is diff-equivalent, then (Q; φ) is diff-equivalent. This implies that B is diff-equivalent as well. Now, let us show that the Tally Condition holds.
(If th ≠ trh.) It remains to show that the diff-equivalence of B^D implies the Tally Condition. In that case, we can actually prove that the diff-equivalence of B^D implies that, for any execution of B, the executions on the two sides yield exactly the same tally outcome (w.r.t. Res(⋅)). The proof of this can be found in the proof of Lemma 1 (complete proof in Section B).
(If th = trh.) Then we show that the diff-equivalence of B^D_r implies the Tally Condition. This suffices to conclude, since the diff-equivalence of B^D implies the diff-equivalence of B^D_r. Consider a trace tr and an execution of B of that trace that is fair on the left (i.e., considering the left part of that execution), leading to the two frames φl, φr. Since B is diff-equivalent, we have that φl ∼ φr. Since the left part of the considered execution is fair, A and B followed the honest trace and cast a ballot. Since φl ∼ φr and the executions on the two sides followed the same trace, A and B also cast a ballot on the right. Let ba^0_A (resp. ba^1_A) be the ballot cast by [A, 0] (resp. [A, 1]) on the left (resp. on the right) of the execution under consideration. We define ba^1_B and ba^0_B similarly. Note that, since we deal here with the case th = trh, those ballots match the ones used in the conditional Ψ¬AB in OpenHonBal. Let ba be a valid ballot in the BB on the left: ba = wbaφl, where out(b, wba) ∈ tr and Ψb[wbaφl] =E ok. Since Ψb is public and φl ∼ φr, we also have that bar = wbaφr is a valid ballot in the BB on the right: Ψb[wbaφr]⇓. Indeed, Ψb[wba] constitutes a recipe that the attacker can use. We now distinguish two cases.
(i) If ba =E ba^0_A or ba =E ba^1_B, then the Tally Condition is satisfied for this ballot (item 1 of the condition).
(ii) Otherwise, ba ≠E ba^0_A and ba ≠E ba^1_B, and we shall prove that there exists v ∈ V ∪ {⊥} such that Extract(wbaφl)⇓v and Extract(wbaφr)⇓v. For the sake of contradiction, we assume that this is not the case. By definition of Extract(⋅), this implies that there exist two different vl, vr ∈ V ∪ {⊥} such that Extract(wbaφl)⇓vl and Extract(wbaφr)⇓vr. At least one of them must be different from ⊥; by symmetry, we assume vl ∈ V.
We now remark that the execution of B under consideration can also be executed by B^D (or B^D_r for case (ii)). We consider a straightforward extension of that latter execution by adding the trace trO = τ.in(cu, wba).τthen.τthen.out(cu, we). This trace corresponds to the replication of the OpenBal (or OpenHonBal for case (ii)) process and the usage of one instance of the oracle, which tries to extract a vote from the ballot wba. Note that the given execution can be extended with this trace on the left, since the conditional of that session of the oracle holds: indeed, the underlying ballot is valid and, for case (ii), by definition of Ψ¬AB and the fact that ba is equal to neither the ballot of A nor that of B. Moreover, since B^D (resp. B^D_r) is diff-equivalent, the execution can be extended with trO on the right as well. We call φ^O_l (resp. φ^O_r) the resulting frame on the left (resp. on the right). By diff-equivalence, it holds that φ^O_l ∼ φ^O_r. We remark that weφ^O_l =E vl and weφ^O_r =E vr. Recall that vl and vr are two different public constants not involved in equations in E; therefore, weφ^O_r ≠E vl. Hence the recipe eq(we, vl) (recall that vl is a public constant) does not fail when applied on the left but does fail when applied on the right, contradicting φ^O_l ∼ φ^O_r.
Appendix D.
Additional explanations on the swapping approach

We consider the voter process from Example 13, i.e., V(id, v) = 1 ∶ out(a, id).2 ∶ out(a, v). We have that

({V(A, choice[v1, v2]), V(B, choice[v2, v1])}; ∅; 1)

is not diff-equivalent, since after the two outputs out(a, A), out(a, B), the resulting multiset of processes is {out(a, choice[v1, v2]), out(a, choice[v2, v1])}, which is obviously not diff-equivalent. However, if one is allowed to swap the order of the two processes on the right side of that biprocess, one obtains {out(a, choice[v1, v1]), out(a, choice[v2, v2])}, which is diff-equivalent.
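This toy example can be reproduced directly in ProVerif; a minimal sketch of our own encoding is shown below. ProVerif reports a (false) attack on this biprocess, whereas with synchronisation barriers and the swapping compiler of [11], the swapped variant above is generated and proven diff-equivalent.

  (* Toy biprocess from the example above: ProVerif fails to prove
     diff-equivalence without swapping, as explained. *)
  free a: channel.
  free A, B, v1, v2: bitstring.

  let V(id: bitstring, v: bitstring) =
    out(a, id);
    phase 1;             (* plays the role of the barrier "2 :" *)
    out(a, v).

  process V(A, choice[v1, v2]) | V(B, choice[v2, v1])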