
Non-Malleable Non-Interactive Zero Knowledge
and Adaptive Chosen-Ciphertext Security
Amit Sahai*
Abstract
We introduce the notion of non-malleable non-interactive zero-knowledge (NIZK) proof systems. We show how to transform any ordinary NIZK proof system into one that has strong non-malleability properties. We then show that the elegant encryption scheme of Naor and Yung [NY] can be made secure against the strongest form of chosen-ciphertext attack by using a non-malleable NIZK proof instead of a standard NIZK proof.

Our encryption scheme is simple to describe and works in the standard cryptographic model under general assumptions. The encryption scheme can be realized assuming the existence of trapdoor permutations.

* MIT Laboratory for Computer Science, 545 Technology Square, Cambridge, MA 02139, USA. E-Mail: [email protected]. Supported by a DOD/NDSEG Graduate Fellowship and partially by DARPA grant DABT-96-C0018.

(1) This is true even for the adaptive zero-knowledge definition of NIZK. We give an example in the next paragraph.

1 Introduction

Modern cryptography provides us with several fundamental tools, from encryption schemes to zero-knowledge proofs. For each of these tools, we have some intuition about what they "should" achieve. But we must be careful to understand the gap between our intuition and what we can actually achieve. Indeed, a major goal of cryptography is to refine our tools to bring them closer to achieving our intuition, while simultaneously refining our intuitions to be consistent with what is attainable.

In this work, we focus on two basic cryptographic tools: non-interactive zero-knowledge proofs and public-key encryption schemes. We refine our intuition behind non-interactive zero-knowledge (NIZK) proofs by defining the notion of non-malleable NIZK, and give constructions that achieve non-malleability. We then use non-malleable NIZK to build a simple public-key encryption scheme under general assumptions that achieves the highest level of privacy known to be possible, i.e. security against adaptive chosen-ciphertext attack. This considerably simplifies the only previously known encryption scheme achieving this level of security under general assumptions.

Non-Malleable Non-Interactive Zero-Knowledge. Zero-knowledge proofs, introduced by Goldwasser, Micali, and Rackoff [GMR], are fascinating and extremely useful constructs. The intuition behind them is clear from their name: they should be convincing, and yet yield nothing beyond the validity of the assertion being proven. Blum, Feldman, and Micali [BFM] extend this seemingly contradictory notion to the non-interactive setting as well; they define a notion of non-interactive zero-knowledge proofs, which are sent without interaction from the Prover to the Verifier, in a model where all parties share a common random reference string. NIZK proofs have proved themselves of great value, and have been used to achieve chosen-ciphertext security for encryption schemes [NY, DDN] as well as signature schemes secure against chosen-message attack [BG].

For NIZK, the formal requirement of [BFM] (later refined by [FLS]) captures the following requirement: what one can output after seeing an NIZK proof is indistinguishable from what one can output without seeing it, if the output is examined independent of the actual reference string. However, the reference string is precisely what is used to build and verify NIZK proofs! Thus, nothing in the formal definition prevents the possibility that seeing one NIZK proof could enable an adversary to prove many other statements it could not have proved otherwise, which is very far from the intuition of "zero-knowledge."(1) To some extent, this is unavoidable: one
can always duplicate an NIZK proof, and hence prove
something that one possibly could not have proved beforehand. But can we hope to demand the following
requirement: whatever one can prove after seeing an
NIZK proof, one could also have proved before seeing it,
except for the ability to duplicate the proof? This would
come much closer to our intuition of “zero-knowledge.”
Following the paradigm of [DDN] (who studied, among
other topics, similar problems which arise in concurrent
executions of interactive zero-knowledge proofs), we
call this property non-malleability(2) for non-interactive
zero-knowledge, and it is precisely this property we introduce and examine in this work.
Note that this non-malleability property does not follow from the current definitions of NIZK, as the following simple example demonstrates: Suppose Π is an NIZK proof system for a hard language L ∈ NP. Let L' be the language of pairs of strings in L, i.e. L' = {(x, y) : x ∈ L and y ∈ L}. Then if we define a new proof system Π' that uses a reference string σ = σ1 ∘ σ2 consisting of the concatenation of two reference strings for Π, and has proofs simply consist of pairs of proofs under Π that x ∈ L and y ∈ L using reference strings σ1 and σ2 respectively, it is easy to verify that Π' will be an NIZK proof system for L'. However, suppose we see a proof p = (p1, p2) that (x, y) ∈ L' and we do not know how to prove that y ∈ L, but we have a witness to the fact that x' ∈ L. Then we can build a proof p'1 under Π that x' ∈ L, and by splicing it with the proof we were given, produce a new proof p' = (p'1, p2) under Π' that (x', y) ∈ L', which we did not know how to do before seeing p.
Our Results on Non-Malleable NIZK. We formalize the notion of non-malleable NIZK, and give a construction that transforms any ordinary NIZK proof system into a non-malleable NIZK proof system, under the
assumption that one-way functions exist. Our basic construction achieves non-malleability only with respect to
a single proof, i.e. the non-malleability is guaranteed
when the adversary only sees a single proof from the
outside world. We note however, that this suffices for
our application of constructing encryption schemes secure against adaptive chosen-ciphertext attack. We then
give another construction that achieves non-malleability
with respect to any fixed polynomial number of proofs,
where the size of the common random reference string
grows with the bound on the number of proofs, but the
probability of cheating remains negligible.
(2) We choose this term since the definition deals with the ability to
modify (or “maul”) an NIZK proof to produce different valid proofs.
As we noted earlier, this seems to us a minimal requirement one should
expect from “zero-knowledge” proofs. Indeed, it is fascinating to ask
what still stronger properties one could hope to define and achieve.
CCA-Secure Encryption – Discussion. In the context of encryption, which is perhaps the best studied notion in cryptography, our basic intuition is to think of
encryption schemes as providing a “secure envelope,”
which only the proper addressee can open. This is a
very compelling metaphor, and is undoubtedly the inspiration for the design of many cryptographic protocols. But what are the essential properties of a “secure
envelope”? The most basic is passive privacy – that
a passive eavesdropper should not learn any useful information about a message from its encryption. Goldwasser and Micali’s notion of semantic security [GM]
is the accepted formalization of this property, and encryption schemes that achieve this property have been
studied extensively. However, we may require stronger
privacy properties from encryption schemes: If encryption is to be used as a primitive in higher level protocols,
we may need security against active attacks, such as
a chosen-ciphertext attack (CCA), where the adversary
has some access to a decryption mechanism. There are
two commonly considered notions of chosen-ciphertext
attack. In the strongest proposed notion, known as an
“adaptive chosen-ciphertext attack” (denoted CCA2),
the adversary is allowed to ask for the decryption of
any ciphertext other than the challenge ciphertext. In
the weaker form, known as a “lunchtime attack” (denoted CCA1), the adversary has access to the decryption
mechanism only prior to receiving the challenge ciphertext which it must decipher. (Formal definitions of security against various kinds of attacks are given in Definition 2.3). Security with respect to the stronger notion
(CCA2) implies other desirable properties which we do
not have space to discuss, such as non-malleability (e.g.
see [DDN, BDPR, BS]), as well. This kind of security
is needed if encryption is to be used in general applications, such as exchange of e-mail, where users may
unwittingly provide attackers with decryptions of selected ciphertexts. Encryption with this strongest property (CCA2-security) has been proposed as a component
in authentication and key exchange protocols [BCK],
electronic payment [SET], and deniable authentication
protocols [DNS]. For more discussion on the importance of chosen-ciphertext security, see [Sho98].
Prior Work on CCA-Secure Encryption. Much
work has been done on achieving chosen-ciphertext security in encryption schemes. Naor and Yung [NY]
gave an elegant construction based on general cryptographic assumptions which achieves security against
the weaker form of chosen-ciphertext attack (CCA1).
Rackoff and Simon [RS] showed how to modify the
scheme of Naor and Yung to achieve security against
adaptive chosen-ciphertext attack (CCA2), but only in a
model with a trusted center assigning certified keys to all
parties. More recently, Bellare and Rogaway [BR1, BR]
have proposed efficient schemes whose security relies
on a random oracle, and Cramer and Shoup [CS] have
given an efficient scheme based on the Decisional Diffie-Hellman assumption. Until now, the only known encryption scheme achieving adaptive chosen-ciphertext
(CCA2) security based on general assumptions was
given by Dolev, Dwork, and Naor [DDN].
Our Results on CCA-Secure Encryption. In this
work, we show how to use non-malleable NIZK to
modify the original elegant scheme of Naor and Yung
and achieve provable security against adaptive chosen-ciphertext attack based only on general assumptions.
The scheme of Naor and Yung is very simple: A message is encrypted using two independent semantically-secure encryption functions, and an NIZK proof is provided showing that both ciphertexts are encryptions of the same message. Unfortunately, the NIZK proof alone fails to provide security against adaptive chosen-ciphertext attack (CCA2).(3) We show that by simply replacing the NIZK proof with a non-malleable NIZK proof, one achieves full security against adaptive chosen-ciphertext attack. In contrast, the only previously known scheme based on general assumptions, that of [DDN], has a quite involved construction, which exploits an intricate interplay between many encryptions,
NIZK proofs, and other components. Our scheme gives
a simple framework for building encryption schemes secure against CCA2 from two well-defined basic components, namely semantically-secure encryption schemes
and non-malleable NIZK proofs. If efficient implementations of non-malleable NIZK proof systems for the
consistency of encryptions were found for some particular semantically-secure encryption schemes, this would
yield efficient encryption schemes secure against adaptive chosen-ciphertext attack, as well. Based on the current state of knowledge, the NIZK proof system needed
for our scheme can be realized based on any trapdoor
permutation. Thus trapdoor permutations suffice for realizing our encryption scheme.
Overview. We will first formalize our notion of non-malleable NIZK, as well as a closely related property called simulation soundness. We then present a construction for achieving non-malleable NIZK, and give a generalization, based on polynomials, of our construction to achieve non-malleability against any fixed polynomial number of proofs. Finally, we present the construction of an encryption scheme secure against adaptive chosen-ciphertext attack, and formally prove its correctness.

(3) This can be seen trivially by considering an NIZK proof system which simply ignores the last bit of any proof. Thus, in an adaptive chosen-ciphertext attack, the adversary can simply flip the last bit of the NIZK proof in the challenge ciphertext and query the decryption oracle to break the scheme.
2 Preliminaries
We use standard notations and conventions for writing probabilistic algorithms and experiments. If A is a probabilistic algorithm, then A(x1, x2, ... ; r) is the result of running A on inputs x1, x2, ... and coins r. We let y ← A(x1, x2, ...) denote the experiment of picking r at random and letting y be A(x1, x2, ... ; r). If S is a finite set, then x ← S is the operation of picking an element uniformly from S. The symbol := denotes simple assignment. By a "non-uniform (probabilistic) polynomial-time adversary," we always mean a circuit whose size is polynomial in the security parameter. Sometimes we break up algorithms (such as simulators and adversaries) into multiple stages; in such cases we will use s or τ to denote state information passed from one stage to another.
We first define efficient non-interactive proof systems, and then give a definition of adaptive single-theorem non-interactive zero-knowledge (as in [FLS]):

Definition 2.1 [NIPS] Π = (f, P, V) is an efficient non-interactive proof system for a language L ∈ NP with witness relation R if f is a polynomial and P and V are probabilistic polynomial-time machines such that:

(Completeness): For all x ∈ L and all w such that R(x, w) = true, for all strings σ of length f(|x|), we have that V(x, P(x, w, σ), σ) = true.

(Soundness): For all adversaries A, if σ ∈ {0,1}^{f(k)} is chosen randomly, then the probability that A(σ) will output (x, p) such that x ∉ L but V(x, p, σ) = true is negligible in k.
Definition 2.2 [NIZK] Π = (f, P, V, S = (S1, S2)) is an efficient adaptive single-theorem non-interactive zero-knowledge proof system for the language L if (f, P, V) is an efficient non-interactive proof system and S1, S2 are probabilistic polynomial-time machines such that for all non-uniform polynomial-time adversaries A = (A1, A2), we have that Pr[Expt^S_A(k) = 1] − Pr[Expt_A(k) = 1] is negligible in k, where Expt_A(k) and Expt^S_A(k) are:

Expt_A(k):
    σ ← {0,1}^{f(k)}
    (x, w, s) ← A1(σ)
    p ← P(x, w, σ)
    return A2(p, s)

Expt^S_A(k):
    (σ, τ) ← S1(1^k)
    (x, w, s) ← A1(σ)
    p ← S2(x, σ, τ)
    return A2(p, s)
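As an illustration only (not part of the paper), the following Python sketch spells out the structure of the two experiments in Definition 2.2; the proof system and adversary are assumed to be supplied as callables (prover, sim1, sim2, adversary1, adversary2), and f is the reference-string length function.

    import secrets

    def expt_real(k, f, prover, adversary1, adversary2):
        # Real experiment: truly random reference string and a real proof.
        sigma = secrets.randbits(f(k))          # sigma <- {0,1}^f(k)
        x, w, state = adversary1(sigma)         # adversary picks theorem x with witness w
        p = prover(x, w, sigma)
        return adversary2(p, state)

    def expt_sim(k, sim1, sim2, adversary1, adversary2):
        # Simulated experiment: reference string and proof come from the simulator.
        sigma, tau = sim1(k)                    # (sigma, tau) <- S1(1^k)
        x, w, state = adversary1(sigma)         # the witness w is never used below
        p = sim2(x, sigma, tau)
        return adversary2(p, state)

    # Zero knowledge: for every efficient adversary the two experiments output 1
    # with probabilities that differ only negligibly in k.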
We also use the standard definitions for encryption schemes secure against adaptive chosen-ciphertext
attack (denoted CCA2) and chosen-plaintext attack
(denoted CPA), which can be found for example
in [BDPR]. Note that semantic security is equivalent
to security against chosen-plaintext attack.
Definition 2.3 [CPA, CCA1, CCA2] Let (G, E, D) be an encryption scheme and let A = (A1, A2) be an adversary. For ATK ∈ {CPA, CCA1, CCA2} and k ∈ N, define the advantage of A to be:

    Adv^{ATK}_A(k) := 2 · Pr[Expt^{ATK}_A(k) = 1] − 1

where:

Expt^{ATK}_A(k):
    (pk, sk) ← G(1^k)
    (m0, m1, s) ← A1^{O1}(pk)
    b ← {0,1}
    y ← E_pk(m_b)
    g ← A2^{O2}(y, s)
    return 1 iff g = b

If ATK = CPA  then O1(·) = ε and O2(·) = ε
If ATK = CCA1 then O1(·) = D_sk(·) and O2(·) = ε
If ATK = CCA2 then O1(·) = D_sk(·) and O2(·) = D_sk^{(y)}(·)

Above, D_sk^{(y)}(·) means the oracle that decrypts any ciphertext except y, and ε denotes the oracle that always returns the empty string. We insist, above, that A1 outputs m0, m1 with |m0| = |m1|. We say that (G, E, D) is secure against ATK if A being non-uniform polynomial-time implies that Adv^{ATK}_A(·) is negligible.
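The following Python sketch (again an illustration, with gen, enc, dec and the two adversary stages assumed to be given as callables) shows the CCA2 case of Definition 2.3; the second-stage oracle refuses only the challenge ciphertext.

    import secrets

    def expt_cca2(k, gen, enc, dec, adversary1, adversary2):
        pk, sk = gen(k)
        o1 = lambda c: dec(sk, c)                       # first stage: unrestricted decryption
        m0, m1, state = adversary1(pk, o1)
        assert len(m0) == len(m1)                       # |m0| = |m1| is required
        b = secrets.randbits(1)
        y = enc(pk, m0 if b == 0 else m1)               # challenge ciphertext
        o2 = lambda c: None if c == y else dec(sk, c)   # second stage: anything except y
        g = adversary2(y, state, o2)
        return 1 if g == b else 0

    # The advantage 2*Pr[expt_cca2 = 1] - 1 must be negligible for CCA2 security.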
3 Non-Malleable NIZK
In this section, we define non-malleable NIZK and
related notions. The notion of non-malleability for
NIZK is meant to capture the following requirement:
“whatever one can prove after seeing an NIZK proof,
one could also have proved without seeing it, except for
the ability to duplicate the proof." Put a little more formally, suppose we are given an adversary A that, after seeing a proof p of the statement x ∈ L, is able to produce a proof p' ≠ p for some x' satisfying some polynomial-time verifiable property R(x'), with probability q. Then we should be able to transform A into another adversary A' that directly produces a proof for some x' that satisfies R(x'), with probability negligibly different than q.
We can turn this into a formal definition of a non-malleable NIZK proof system, which we give with respect to single proofs. It turns out, however, that we will
need to capture a stronger notion of non-malleability,
which we call adaptive non-malleability, where we allow the adversary to ask for the proof of a theorem of
its choosing. Note that this is not possible in the usual scenario, since some party must supply a witness for every
theorem in order for a proof of it to be produced. Of
course if the adversary did this, then this would make
the definition trivial, since then the adversary can produce the proof on its own and is not receiving any outside help. Hence, we instead make this definition with
respect to simulated proofs, which do not require any
witnesses.(4) We present here this stronger definition of non-malleability, and defer the weaker definition of ordinary (non-adaptive) non-malleability (which is implied by the definition below) to the full version of this paper.

(4) It is conceivable that we could introduce an all-powerful party that supplies witnesses to true statements, and make the definition this way. As done in [FLS] when defining adaptive zero-knowledge, we choose not to take this route, as it would necessarily give the adversary the power to check membership in L, which is a power we do not necessarily want to capture. We note, however, that if adaptive zero-knowledge is defined in this manner, then our simulation-based definition (giving M the additional power to make one oracle call to L) would imply the all-powerful party-based definition, as well.
Definition 3.1 [Adaptive Non-Malleable NIZK] Let Π = (f, P, V, S = (S1, S2)) be an efficient non-interactive single-theorem adaptive zero-knowledge proof system for the language L. We say that Π is an adaptively non-malleable NIZK proof system for L if there exists an efficient non-interactive proof system Π' = (f', P', V') for the same language L, and a probabilistic polynomial-time oracle machine M such that:

For all non-uniform polynomial-time adversaries A = (A1, A2) and for all non-uniform polynomial-time relations R, we have that Pr[Expt_{A,R,Π}(k) = 1] − Pr[Expt'_{A,R,Π'}(k) = 1] is negligible in k, where Expt_{A,R,Π}(k) and Expt'_{A,R,Π'}(k) are:

Expt_{A,R,Π}(k):
    (σ, τ) ← S1(1^k)
    (x, s) ← A1(σ)
    p ← S2(x, σ, τ)
    (x', p', aux) ← A2(x, p, σ, s)
    return true iff
        ( p' ≠ p ) and
        ( V(x', p', σ) = true ) and
        ( R(x', aux) = true )

Expt'_{A,R,Π'}(k):
    σ' ← {0,1}^{f'(k)}
    (x', p', aux) ← M^A(σ')
    return true iff
        ( V'(x', p', σ') = true ) and
        ( R(x', aux) = true )
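To make the quantifier structure of Definition 3.1 concrete, here is a hedged Python sketch; verify, sim1, sim2 stand for Π, while f_aux and verify_aux stand for the auxiliary system Π'. All of these names are assumptions of the sketch, not notation from the paper.

    import secrets

    def expt_nm(k, sim1, sim2, verify, adversary1, adversary2, relation):
        # The adversary sees one simulated proof of a statement of its choosing and
        # must output a *different* accepting proof of some x' with R(x', aux) = true.
        sigma, tau = sim1(k)
        x, state = adversary1(sigma)
        p = sim2(x, sigma, tau)
        x2, p2, aux = adversary2(x, p, sigma, state)
        return p2 != p and verify(x2, p2, sigma) and relation(x2, aux)

    def expt_nm_aux(k, f_aux, verify_aux, machine_M, adversary, relation):
        # The stand-alone machine M, given only oracle access to the adversary and a
        # fresh random reference string for the auxiliary system, must do as well.
        sigma_aux = secrets.randbits(f_aux(k))
        x2, p2, aux = machine_M(sigma_aux, adversary)
        return verify_aux(x2, p2, sigma_aux) and relation(x2, aux)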
We also define another notion for NIZK which we call simulation soundness, which is similar to but incomparable to non-malleability, but which our construction
also achieves, and which also suffices for constructing
strong encryption schemes. The soundness property of
proof systems states that with overwhelming probability,
the prover should be incapable of convincing the verifier
of a false statement. In this definition, we will ask that
this remains the case even after a polynomially bounded
party has seen a simulated proof of its choosing.
Definition 3.2 [Simulation-Sound NIZK] Let Π = (f, P, V, S = (S1, S2)) be an efficient non-interactive single-theorem adaptive zero-knowledge proof system for the language L. We say that Π is simulation-sound if for all non-uniform probabilistic polynomial-time adversaries A = (A1, A2), we have that Pr[Expt_{A,Π}(k) = true] is negligible in k, where Expt_{A,Π}(k) is the following experiment:

Expt_{A,Π}(k):
    (σ, τ) ← S1(1^k)
    (x, s) ← A1(σ)
    p ← S2(x, σ, τ)
    (x', p') ← A2(x, p, σ, s)
    return true iff
        ( p' ≠ p ) and
        ( x' ∉ L ) and
        ( V(x', p', σ) = true )
We further define two technical properties we will desire from our NIZK proof systems. The first captures the
simple requirement that simulated proofs should have
sufficient internal randomness that it should be very unlikely that one can predict what the output of the simulator will be beforehand. Formally, we say an NIZK proof
system has unpredictable simulated proofs if for all
non-uniform polynomial-time adversaries A, we have
that the following experiment has a negligible probability of success:
    (σ, τ) ← S1(1^k)
    (x, p) ← A(σ)
    p' ← S2(x, σ, τ)
    return true iff p = p'

We also define the notion that no single proof should be convincing for more than one theorem. Formally, we say an NIZK proof system has uniquely applicable proofs if for all x, p, σ, we have that V(x, p, σ) = 1 implies V(x', p, σ) = 0 for all x' ≠ x. The proof systems constructed in this paper will always have unpredictable simulated proofs and uniquely applicable proofs.
The Construction. We now show, assuming that one-way functions exist, how to transform any efficient non-interactive single-theorem adaptive zero-knowledge proof system Π = (f, P, V, S = (S1, S2)) for a language L into an adaptively non-malleable and simulation-sound non-interactive zero-knowledge proof system Π* = (f*, P*, V*, S* = (S*1, S*2)) for the same language L. (Π* will also have unpredictable simulated proofs and uniquely applicable proofs.)
The necessary additional component will be what we call a strong one-time signature scheme (Gen, Sign, Ver), where we strengthen the usual unforgeability requirement to require that no adversary, when given a signature of a message of its choosing, can produce a different valid signature of any message, including the message that was already signed. Such a signature scheme can be built from any one-way function as follows: First, choose a universal one-way hash function h mapping {0,1}* to {0,1}^k (such a hash function can be based on any one-way function using the construction of [R]). Then choose 2k strings x_1^0, ..., x_k^0, x_1^1, ..., x_k^1 uniformly at random from {0,1}^{3k}, and let y_i^b = h(x_i^b). The verification key will be the y_i^b's and a description of h. The signing key will be the x_i^b's. To sign a message m ∈ {0,1}*, one computes u = u_1 ... u_k = h(m), and outputs (x_1^{u_1}, ..., x_k^{u_k}). To verify a signature (z_1, ..., z_k) on message m, one simply computes u = h(m), and verifies that h(z_i) = y_i^{u_i} for all i. It is straightforward to verify that this scheme has the properties we desire, and the details are skipped here. Let us assume that the public verification key VK produced by Gen(1^k) is bounded in length by a polynomial q(k).
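For concreteness, here is a minimal Python sketch of this one-time signature; truncated SHA-256 stands in for the universal one-way hash function h (an assumption made only to keep the sketch self-contained, not a claim about the construction of [R]).

    import hashlib, secrets

    K = 128  # security parameter: number of hash-output bits used

    def H(data: bytes) -> str:
        # Stand-in for the universal one-way hash h : {0,1}* -> {0,1}^K.
        digest = hashlib.sha256(data).digest()
        bits = bin(int.from_bytes(digest, "big"))[2:].zfill(256)
        return bits[:K]

    def keygen():
        # Signing key: 2K random 3K-bit strings x_i^b; verification key: y_i^b = H(x_i^b).
        sk = [[secrets.token_bytes(3 * K // 8) for _ in range(2)] for _ in range(K)]
        vk = [[H(sk[i][b]) for b in range(2)] for i in range(K)]
        return sk, vk

    def sign(sk, m: bytes):
        u = H(m)
        return [sk[i][int(u[i])] for i in range(K)]    # reveal one preimage per bit of H(m)

    def verify(vk, m: bytes, sig):
        u = H(m)
        return all(H(sig[i]) == vk[i][int(u[i])] for i in range(K))

    # Usage: sk, vk = keygen(); s = sign(sk, b"hello"); assert verify(vk, b"hello", s)

Because each secret preimage may be revealed only once, a key pair must never sign two different messages; the "strong" requirement additionally rules out producing a second valid signature even for the message already signed.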
We also assume there is a known efficiently computable function g : {0,1}^{q(k)} → 2^{[q'(k)]} mapping q(k)-bit strings to distinct subsets of [q'(k)] = {1, 2, ..., q'(k)} containing precisely q'(k)/2 elements. For instance, one such g could be gotten by letting q'(k) = 2q(k), and defining g(x) to be the subset of [q'(k)] that contains 2i if x_i = 0 and 2i − 1 if x_i = 1.
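A small Python sketch of this particular g (illustration only; the input x is assumed to be a Python string of '0'/'1' characters of length q):

    def g(x: str):
        # Map a q-bit string to a size-q subset of [2q]:
        # include 2i if x_i = 0 and 2i - 1 if x_i = 1 (positions are 1-indexed).
        return {2 * (i + 1) if bit == "0" else 2 * (i + 1) - 1
                for i, bit in enumerate(x)}

    # Distinct inputs yield distinct subsets, each of size exactly q = len(x);
    # for example g("10") == {1, 4}.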
Intuition. Dolev, Dwork, and Naor [DDN] implicitly
introduced a powerful method which we call unduplicatable set selection using authentication mechanisms,
and applied this to encryption functions. We adapt this
technique to apply it to NIZK, and show that it can be
used to achieve non-malleability here, as well. Furthermore, by using this in conjunction with a particular
combinatorial construction which we can realize using
polynomials over finite fields, we show how to achieve
non-malleable NIZK for many proofs, if a polynomial
bound on the number of proofs is known beforehand.
We note that Di Crescenzo et al. [DIO] also implicitly
apply unduplicatable set selection to attack the problem
of non-malleable commitment, but do so in a complicated way. The techniques in our work can be used to
provide an alternative, simpler construction to theirs.
The additional resource of the strong one-time signature scheme will be used to implement this notion of
unduplicatable set selection: We choose a verification
key/signing key pair (VK, SK), and then use g(VK) to select a set of some objects. We then use SK to sign whatever we do with these objects, but keep SK hidden. To see why we call this unduplicatable set selection, consider what happens if some other party tries to use the same set of objects, but tries to do something different with them. By the properties of g, it must use the same verification key VK. By the security of the signature scheme, however, it will be unable to produce a
valid signature unless it merely replicates what it already
saw.
The idea will be to have many reference strings for Π and use unduplicatable set selection to select subsets of these reference strings used to prove the desired
statements (i.e. our “objects” will be reference strings,
and what we do with these objects is use them to build
proofs of some fixed theorem). To simulate a proof, one
needs to only select a subset of the reference strings to
come from the simulator, while the rest can be truly random. But now, by the property of unduplicatable set
selection, if the adversary is able to produce a different proof, it must have used a different set of reference
strings, including at least one truly random reference
string. Hence, intuitively, we can produce a proof without any help by simply using the adversary with a simulated proof, and then outputing the proof it must produce
with respect to one of the truly random reference strings.
We now formalize this intuition and define Π*:

[Reference String Length] We think of the new reference string as consisting of q'(k) reference strings for Π, i.e. σ = σ_1 ∘ σ_2 ∘ ⋯ ∘ σ_{q'(k)}. Thus f*(k) = q'(k) · f(k).

[Prover] P*(x, w, σ):
    (1) Run Gen(1^k) to obtain a verification key / signing key pair (VK, SK) for the one-time signature scheme.
    (2) For each i in the set g(VK), obtain p_i = P(x, w, σ_i). For i ∉ g(VK), let p_i be the empty string.
    (3) Let p = p_1 ∘ p_2 ∘ ⋯ ∘ p_{q'(k)}.
    (4) Output (VK, x, p, Sign_SK(x, p)).

[Verifier] V*(x, (VK, x', p, z), σ):
    (1) Check x = x', and validity of the one-time signature z, i.e. Ver_VK((x, p), z) = true.
    (2) Decompose p into p_i for i in g(VK).
    (3) For each i in g(VK), verify the proof p_i, i.e. V(x, p_i, σ_i) = true.

[Simulator]
S*1(1^k):
    (VK, SK) ← Gen(1^k)
    (σ_i, τ_i) ← S1(1^k) for i ∈ g(VK)
    σ_i ← {0,1}^{f(k)} for i ∉ g(VK)
    σ := σ_1 ∘ ⋯ ∘ σ_{q'(k)}
    return (σ, τ = (VK, SK, {τ_i}))

S*2(x, σ, τ = (VK, SK, {τ_i})):
    p_i ← S2(x, σ_i, τ_i) for i ∈ g(VK)
    p_i ← the empty string for i ∉ g(VK)
    p := p_1 ∘ ⋯ ∘ p_{q'(k)}
    return (VK, x, p, Sign_SK(x, p))
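The following Python sketch (illustration only) mirrors P* and V*: the underlying prover/verifier, a strong one-time signature scheme, and g are assumed to be supplied as callables, g is assumed to accept a verification key directly and return 1-indexed positions, and serialization of the signed message is handled naively.

    def prove_star(x, w, sigmas, base_prover, sig_keygen, sig_sign, g):
        # sigmas: the q'(k) underlying reference strings that make up sigma.
        vk, sk = sig_keygen()                          # fresh one-time key pair per proof
        selected = g(vk)                               # unduplicatable set selection
        proofs = tuple(base_prover(x, w, sigmas[i]) if (i + 1) in selected else b""
                       for i in range(len(sigmas)))    # empty string outside g(VK)
        msg = repr((x, proofs)).encode()               # naive serialization of (x, p)
        return (vk, x, proofs, sig_sign(sk, msg))

    def verify_star(x, proof, sigmas, base_verifier, sig_verify, g):
        vk, x2, proofs, z = proof
        if x2 != x or not sig_verify(vk, repr((x, proofs)).encode(), z):
            return False                               # statement mismatch or bad signature
        return all(base_verifier(x, proofs[i - 1], sigmas[i - 1]) for i in g(vk))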
That Π* is an efficient non-interactive single-theorem
adaptive zero-knowledge proof system for L, and that it
has unpredictable simulated proofs and uniquely applicable proofs is easy to verify from the construction. We
now prove that this construction also achieves adaptive
non-malleability:
Proof: We follow the intuition presented above. First, we define a slightly altered version Π' of the proof system Π. Proofs in Π' are identical to those in Π, except that the reference string σ' = σ'_1 ∘ ⋯ ∘ σ'_{q'(k)/2} for Π' consists of q'(k)/2 different reference strings for Π, and a proof is considered valid if it is valid for any of these reference strings. Clearly, the soundness error of Π' can only be polynomially higher than that of Π; since Π has negligible soundness error, we have that Π' also has negligible soundness error, and thus is a non-interactive proof system for L.

We now exhibit an adversary transformer M that transforms an adversary A = (A1, A2) into an adversary that forges a proof for the proof system Π'. On input σ' = σ'_1 ∘ ⋯ ∘ σ'_{q'(k)/2} (which is a reference string for Π'), and given oracle access to A1 and A2, M simply simulates the experiment Expt_{A,R,Π*} above, except that after calling S*1 to generate σ = σ_1 ∘ ⋯ ∘ σ_{q'(k)}, it replaces σ_{a_i} with σ'_i, where {a_1, ..., a_{q'(k)/2}} = {1, ..., q'(k)} \ g(VK). Since the input distribution to M is uniform, the resulting distribution on σ is identical to the distribution output normally by S*1, and by construction S*2 will work precisely as before.

Suppose that A2(x, p, σ, s) does output (x', p', aux) such that p' ≠ p yet V*(x', p', σ) = true and R(x', aux) = true. Now, since p' = (VK', x', p', z') ≠ p = (VK, x, p, z), this leaves two possibilities:

The first case is that VK = VK', so p and p' differ in some other component. But the fact that p' passed the verification implies that A was able to produce a message/signature pair for VK different than the one given by M. If this case occurs with non-negligible probability, then we can use A to forge signatures and break the strong one-time signature scheme. We are assuming this is not possible, thus the case that VK = VK' must occur with negligible probability.

On the other hand, if VK ≠ VK', then we know that the set g(VK) ≠ g(VK'). This means that p' contains some valid proof p_i for i ∉ g(VK). Thus, M can simply output (x', p_i, aux), which is a valid proof for Π'. This establishes the adaptive non-malleability of our NIZK proof system.

Note that precisely the same proof shows that Π* is simulation-sound, since the same reduction would show that if A is able to output false proofs with respect to Π* with non-negligible probability, then M will output false proofs with respect to Π' with non-negligible probability. But M receives as input only a truly random reference string for Π'. Hence, by the definition of soundness for Π', it must be that M has only negligible probability of outputting a false proof.

We also note that this construction can be made more efficient by using a universal one-way hash function h that maps {0,1}^{q(k)} to {0,1}^k, to have the sets selected according to g(h(VK)). The same analysis goes through with only minor modification, namely we must argue that h(VK') = h(VK) occurs with negligible probability rather than VK' = VK, but this will follow directly from the one-wayness of the hash function h.
Generalizing to many proofs. The proof above
shows that our construction achieves adaptive non-malleability when the adversary sees a single proof, but
gives no guarantees for the case where more proofs are
observed. Indeed, one can construct counterexamples
where it fails against multiple proofs. Nevertheless,
somewhat surprisingly, this level of security suffices for
the application of building encryption schemes that are
secure against adaptive chosen-ciphertext attack, even
for multiple messages. However, we can explicitly build
proof systems that remain non-malleable against multiple proofs, when a polynomial bound of the number
of proofs is known in advance. Note that this extension is non-trivial; for instance, the natural idea of simply concatenating polynomially many reference strings
to form a new reference string, and choosing a random
one each time to prove a statement, does not work, since
this would retain an inverse polynomial probability of
having the same reference string used twice.
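To quantify this: suppose the new reference string consists of N independent reference strings (N polynomial in k; the symbol N is ours, not the paper's) and each of t ≥ 2 proofs picks one of them uniformly at random. Then

    \Pr[\text{no reference string is repeated among } t \text{ proofs}]
      \;=\; \prod_{j=0}^{t-1}\Bigl(1 - \frac{j}{N}\Bigr)
      \;\le\; 1 - \frac{1}{N},

so some reference string is reused with probability at least 1/N, which is inverse polynomial in k rather than negligible.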
The framework we presented above, based on unduplicatable set selection, however, was designed so that we could extend it to the case of multiple proofs. Above, we simply wanted to ensure that the set of σ_i selected for each verification key (or hash of the verification key) was distinct, so that at least one σ_i would differ between the adversary's proof and the proof it received. Now, for any fixed polynomial bound t(k) on the number of proofs that the adversary can ask for, we will need to ensure that for any set selected by the adversary (which will be distinct from the t(k) sets it has already seen, with high probability, by the property of unduplicatable set selection), there will be at least one σ_i that was not in any of the t(k) sets that the adversary has already seen. This can be accomplished through the use of a combinatorial set system where no t(k) sets cover any other set, which we can build efficiently using polynomials.
To accomplish our modification, we take the construction above and use a new function g, and modify f* accordingly. Recall that the input to g, which would be a verification key (or the hash of a verification key), has length q(k), while the output of g is to be some subset of [q'(k)]. We will now suppress the dependence on k for notational convenience. Let ℓ = 2qt, and assume ℓ is a prime power (otherwise take the next higher prime power). We construct the finite field F_ℓ (which can be done efficiently). Let q' = ℓ^2 = O(q^2 t^2), and associate [q'] with the set F_ℓ × F_ℓ. The size of the sets output by g will be ℓ. Now, if g receives as input the bit string m = m_0 m_1 ... m_{q−1}, we consider the polynomial f_m = m_0 + m_1 x + m_2 x^2 + ⋯ + m_{q−1} x^{q−1}. The set output by g(m) will be {(u, f_m(u)) : u ∈ F_ℓ}. Now, for any m ≠ m', since the degree of f_m − f_{m'} is at most q − 1, we know that f_m and f_{m'} can agree on at most q − 1 < ℓ/2t values. Thus, for any set of t strings m_1, ..., m_t different from m,

    | g(m) \ ∪_{i=1}^{t} g(m_i) |  ≥  ℓ − t · (ℓ/2t)  =  ℓ/2.
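A Python sketch of this polynomial-based g (illustration only; ℓ is taken to be a prime so that arithmetic modulo ℓ realizes F_ℓ, a simplifying assumption relative to the prime-power case in the text):

    def g_poly(m_bits: str, ell: int):
        # m_bits: the q input bits m_0 ... m_{q-1}; ell: a prime with ell >= 2*q*t.
        # Output: {(u, f_m(u)) : u in F_ell}, a subset of F_ell x F_ell of size ell.
        coeffs = [int(b) for b in m_bits]
        f_m = lambda u: sum(c * pow(u, j, ell) for j, c in enumerate(coeffs)) % ell
        return {(u, f_m(u)) for u in range(ell)}

    def uncovered(fresh: set, seen: list) -> set:
        # Elements of a fresh set not covered by the sets seen so far; by the bound
        # above, at least ell/2 elements remain after t sets with distinct inputs.
        return fresh - set().union(*seen) if seen else fresh

    # Example with q = 4, t = 2, ell = 17 (prime, and 17 >= 2*4*2 = 16):
    #   len(g_poly("1011", 17)) == 17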
The simulation should pick t random verification key and signing key pairs ahead of time, and use simulated reference strings for the σ_i corresponding to these verification keys. Now, by the analysis above, after seeing t proofs, the adversary is forced to select a set of σ_i such that at least half of them are not ones that were involved in the t proofs the adversary has seen. Thus, it can be seen that the proof given for the original construction readily generalizes to this case.
4 Encryption Secure Against Adaptive
Chosen-Ciphertext Attack
In this section, we present and prove the correctness of a simple construction of a public-key encryption
scheme secure against adaptive chosen-ciphertext attack
(CCA2) based on:
(1) Any semantically-secure public-key encryption scheme (G, E, D).

(2) An adaptively non-malleable (or simulation-sound) NIZK proof system Π = (f, P, V, S = (S1, S2)) with unpredictable simulated proofs and uniquely applicable proofs for the language L of consistent pairs of encryptions, defined formally below:

    L = { (e0, e1, c0, c1) : ∃ m, r0, r1 ∈ {0,1}* : c0 = E_{e0}(m, r0) and c1 = E_{e1}(m, r1) }

We note that L is certainly in NP, since the values of m, r0, r1 would witness membership in L, and certainly such values would always be of size polynomial in e0, e1, c0, c1.
Our scheme is a modification of the original elegant scheme of Naor and Yung. The scheme of Naor and Yung is conceptually very simple: A message is encrypted using two independent semantically-secure encryption functions, and an NIZK proof is provided showing that both ciphertexts are encryptions of the same message. Unfortunately, the NIZK proof alone fails to provide security against adaptive chosen-ciphertext attack. We show that by simply replacing the NIZK proof with an adaptively non-malleable NIZK proof, one achieves full security against adaptive chosen-ciphertext attack. More precisely, the construction is as follows:

Let ℓ(k) be a polynomial bound on the length of messages to be encrypted. Let t(k) be the induced polynomial bound on the amount of randomness needed by E to encrypt messages of length up to ℓ(k). Finally, let q(k) be the induced polynomial length of the reference string required by the proof system Π.
G'(1^k): Call G(1^k) to generate two pairs (e0, d0) and (e1, d1) of encryption and decryption keys. Select a random reference string σ ∈ {0,1}^{q(k)} for Π. The public key is pk = (e0, e1, σ). The private key is sk = (d0, d1).

E'_pk(m): Choose r0, r1 ← {0,1}^{t(k)}. Let c0 := E_{e0}(m, r0) and c1 := E_{e1}(m, r1), and use P to generate a proof p relative to σ that (e0, e1, c0, c1) ∈ L, using (m, r0, r1) as the witness. Output (c0, c1, p).

D'_sk(c0, c1, p): Use V to verify the correctness of p. If p is valid, output either of D_{d0}(c0) or D_{d1}(c1), chosen arbitrarily.
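As an illustration of the shape of the scheme just described (not a reference implementation; the semantically-secure scheme and the non-malleable NIZK are abstract callables, and randomness handling is simplified):

    import secrets

    def keygen(k, base_gen, ref_len):
        # Two independent key pairs plus a random reference string for the NIZK.
        (e0, d0), (e1, d1) = base_gen(k), base_gen(k)
        sigma = secrets.randbits(ref_len(k))
        return (e0, e1, sigma), (d0, d1)

    def encrypt(pk, m, base_enc, nizk_prove, rand_len):
        e0, e1, sigma = pk
        r0, r1 = secrets.randbits(rand_len), secrets.randbits(rand_len)
        c0, c1 = base_enc(e0, m, r0), base_enc(e1, m, r1)
        p = nizk_prove((e0, e1, c0, c1), (m, r0, r1), sigma)   # proof that c0, c1 encrypt the same m
        return (c0, c1, p)

    def decrypt(pk, sk, ct, base_dec, nizk_verify):
        e0, e1, sigma = pk
        d0, d1 = sk
        c0, c1, p = ct
        if not nizk_verify((e0, e1, c0, c1), p, sigma):
            return None                                        # reject: invalid proof
        return base_dec(d0, c0)                                # on valid, proper ciphertexts either key works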
We now prove our main Theorem:

Theorem 4.1 The encryption scheme (G', E', D') above is secure against CCA2.

Proof: Our proof has the same overall structure as the proof of security found in [NY], but differs in most technical aspects. The main idea will be to transform an adaptive chosen-ciphertext attack against the new encryption scheme into a chosen-plaintext attack against the component encryption scheme (G, E, D). Hence we will conclude that since (G, E, D) is secure against chosen-plaintext attack, the new scheme (G', E', D') is secure against adaptive chosen-ciphertext attack.

Suppose that there were a probabilistic polynomial-time attacker A = (A1, A2) which achieved inverse polynomial advantage ε(k) in a CCA2-attack against (G', E', D'). From now on, to reduce the cumbersome nature of our notation, we will suppress dependence on k, but it should be clear where this dependence arises. We define two experiments that unfurl the definition of a CCA2-attack: In Expt_A(b), where b ∈ {0,1}, the attack is carried out and the challenge given to the adversary A is m_b (where m0 and m1 were the two messages specified by A after the first phase of the attack). Thus, by the definition of advantage in a CCA2 attack, we have that Pr[Expt_A(1) = 1] − Pr[Expt_A(0) = 1] ≥ ε. We also define Expt^S_A(b0, b1), where b0, b1 ∈ {0,1}, in which the attack is carried out by a simulator – now the challenge is a ciphertext that consists of encryptions of m_{b0} and m_{b1}, and a simulated proof of consistency. Note that b0 need not equal b1, since the simulator does not need a witness to produce a proof. Formally, Expt^S_A(b0, b1) is as follows:

Expt^S_A(b0, b1):
    Set up pk, sk:
    (*) (σ, τ) ← S1(1^k)
        (e0, d0) ← G(1^k) ; (e1, d1) ← G(1^k)
        pk := (e0, e1, σ) ; sk := (d0, d1)
    (m0, m1, s) ← A1^{D'_sk(·)}(pk)
    Set up challenge:
        r0, r1 ← {0,1}^{t(k)}
        c0 := E_{e0}(m_{b0}, r0) ; c1 := E_{e1}(m_{b1}, r1)
    (*) p ← S2((e0, e1, c0, c1), σ, τ)
        y := (c0, c1, p)
    g ← A2^{D'_sk^{(y)}(·)}(y, s)
    return g

Note that the only lines that differ between Expt^S_A(b, b) and Expt_A(b) are the ones marked with a (*) above. For Expt_A(b), these would be replaced by σ ← {0,1}^{q(k)} and p ← P((e0, e1, c0, c1), (m_b, r0, r1), σ), respectively.
Since Pr[Expt_A(1) = 1] − Pr[Expt_A(0) = 1] ≥ ε, it must be the case that one of the following four quantities is at least ε/4:

    Pr[Expt_A(1) = 1] − Pr[Expt^S_A(1,1) = 1]        (1)
    Pr[Expt^S_A(1,1) = 1] − Pr[Expt^S_A(0,1) = 1]    (2)
    Pr[Expt^S_A(0,1) = 1] − Pr[Expt^S_A(0,0) = 1]    (3)
    Pr[Expt^S_A(0,0) = 1] − Pr[Expt_A(0) = 1]        (4)
It is easily seen that if either (1) or (4) were at least ε/4, then this would imply a distinguisher for the simulator for Π. This leaves only the two cases of either (2) or (3) being at least ε/4. To analyze these cases, we first define some important concepts and prove a critical lemma. We define a ciphertext c = (c0, c1, p) to be valid with respect to a public key pk = (e0, e1, σ) if V((e0, e1, c0, c1), p, σ) = true. Note that only valid ciphertexts are ever decrypted. We define a ciphertext c to be proper with respect to a public key pk if (e0, e1, c0, c1) ∈ L, i.e. the ciphertexts c0 and c1 are encryptions of the same message.
The central observation now is that if the adversary
makes no improper but valid queries to the decryption
oracle during the attack, then the decryption mechanism
needs only one of the decryption keys d0 or d1 in order
to answer all queries made by the adversary, since all
the valid queries are two encryptions of the same message. In this case, we will show how to mount a chosenplaintext attack on the underlying encryption scheme
(G ; E ; D) by simulating a chosen-ciphertext attack with
the adversary and using it to break the underlying encryption scheme. We will fill in the details shortly.
For these ideas to work, however, we need to ensure that the adversary cannot make improper but valid queries. The relevant experiments here are Expt^S_A(1,1), Expt^S_A(0,1), and Expt^S_A(0,0). Note that in the case of Expt^S_A(0,1), the adversary is given an improper but valid challenge ciphertext, and yet we seek to ensure that it will not be able to produce any other such ciphertexts.
Here we see that the non-malleability of the NIZK proof
system will be critical in denying the adversary the
ability to produce valid improper ciphertexts, even after
it has seen such a ciphertext. We establish the following
lemma:
Lemma 4.2 For all b0, b1 ∈ {0,1}, and all non-uniform polynomial-time adversaries A = (A1, A2), the probability over the experiment Expt^S_A(b0, b1) that A will make, in either stage A1 or A2, a valid but improper query to the decryption oracle (different from the challenge ciphertext y) is negligible in k.
Proof: This lemma follows from the simulation soundness (or similarly from the adaptive non-malleability) of Π. We build the following machines, which will be plugged into the definition of simulation soundness:

A'_1(σ):
    Initialize c' := ⊥
    Set up pk, sk:
        (e0, d0) ← G(1^k) ; (e1, d1) ← G(1^k)
        pk := (e0, e1, σ) ; sk := (d0, d1)
    Simulate first stage of attack:
        (m0, m1, s1) ← A1^{D'_sk(·)}(pk), where any queried
            valid improper ciphertext is stored in c'
    Set up challenge encryptions:
        r0, r1 ← {0,1}^{t(k)}
        c0 := E_{e0}(m_{b0}, r0) ; c1 := E_{e1}(m_{b1}, r1)
    return (x = (e0, e1, c0, c1), s = (s1, d0, d1, c'))

Above, A'_1 implements D'_sk for A1, and at the same time whenever A1 presents a query ciphertext y' = (c'_0, c'_1, p'), if y' is valid, A'_1 also checks whether y' is proper by checking that D_{d0}(c'_0) = D_{d1}(c'_1). If this is not the case, we let c' = y'.

The simulator will provide the proof of consistency for the two challenge encryptions computed by A'_1. We then build A'_2 to complete the simulation of the second stage of the attack, again looking out for valid improper queries:

A'_2(x = (e0, e1, c0, c1), p, σ, s = (s1, d0, d1, c')):
    y := (c0, c1, p)
    Simulate second stage of attack:
        g ← A2^{D'_sk^{(y)}(·)}(y, s1), where any queried
            valid improper ciphertext is stored in c'
    if c' = ⊥ then abort
    else return (x' = (e0, e1, c'_0, c'_1), p')

Above, A'_2 implements D'_sk^{(y)} for A2, and again simultaneously checks as above to see if A2 makes a valid improper query, and if so lets c' be that query y' = (c'_0, c'_1, p').

We plug the above two machines A'_1 and A'_2 into the definition of simulation soundness. Now, first we note that if A'_1 finds a valid improper query c' = (c'_0, c'_1, p'), i.e. a valid improper ciphertext is found before the challenge ciphertext y = (c0, c1, p) is given, then by unpredictability of simulated proofs, the probability that the proof p' found by the adversary is identical to the proof p output by the simulator is negligible (because of the independent randomization used by the simulator). On the other hand, if A'_2 finds a valid improper ciphertext c' ≠ y, since Π has uniquely applicable proofs yet p' passes the validity test, the proof components of c' and y must differ (because we assume no proof can be convincing for two different theorems). Thus we see that the probability that p' ≠ p, x' ∉ L, and V(x', p', σ) = true is at least the probability that A makes a valid improper query, less something negligible.
But the definition of simulation soundness implies that
the former probability is negligible, and hence the latter
must be negligible as well.
Note that since, given the decryption keys, one can efficiently check the properness of a ciphertext, this argument applies with the definition of adaptive non-malleability as well: the only changes required are that A'_2 should output aux = (d0, d1) and the relation R((e0, e1, c0, c1), (d0, d1)) should be true iff c0 and c1 decrypt to different messages. (End of Proof of Lemma 4.2)
Now we are ready to show how to mount a chosen-plaintext attack on the semantically-secure encryption scheme (G, E, D). Let us consider the case that Pr[Expt^S_A(1,1) = 1] − Pr[Expt^S_A(0,1) = 1] ≥ ε/4. (The other case follows by an exactly parallel argument.) By the lemma, we may assume that the adversary will never make an improper but valid query. Hence, a single decryption key will suffice to implement the decryption oracle for queries made by the adversary. Hence, we may build the following chosen-plaintext attacker B = (B1, B2):

B1(e):
    Set up pk, sk:
        (σ, τ) ← S1(1^k)
        e0 := e ; (e1, d1) ← G(1^k)
        pk := (e0, e1, σ) ; sk := d1
    (m0, m1, s1) ← A1^{D'_sk(·)}(pk)
    return (m0, m1, s)

B2(c, s):
    Set up challenge:
        r1 ← {0,1}^{t(k)}
        c0 := c ; c1 := E_{e1}(m1, r1)
        p ← S2((e0, e1, c0, c1), σ, τ)
        y := (c0, c1, p)
    g ← A2^{D'_sk^{(y)}(·)}(y, s1)
    return g

Above, s stores all the necessary state needed to be transferred from B1 to B2, i.e. e0, e1, d1, σ, s1, and the state information τ needed by the simulator. As in the proof of the lemma above, B1 and B2 implement the decryption oracle for A1 and A2, but because of the lemma, the single decryption key d1 suffices. Thus, we have that B will achieve an advantage only negligibly smaller than ε/4 in its chosen-plaintext attack on (G, E, D), which we assumed is impossible. An exactly parallel argument holds for the case when Pr[Expt^S_A(0,1) = 1] − Pr[Expt^S_A(0,0) = 1] ≥ ε/4, except in this case B knows d0 and the first component of the challenge to A2 is always an encryption of m0. Thus, the advantage ε(k) of any adaptive chosen-ciphertext attacker must be negligible, and the security of our encryption scheme is established.
Remark 4.3 We note that a standard hybrid argument
shows that any encryption scheme secure against adaptive chosen-ciphertext attack as defined here is also secure if the adversary is given many encryptions of the
challenge message. While one would certainly hope that
this is the case, in this case it is particularly surprising,
since the non-interactive proof system used here is only
adaptively non-malleable and zero-knowledge for a single theorem.
5 Conclusions
In this paper we motivated and introduced the notion of non-malleable NIZK, showed how to achieve
it against any fixed number of proofs, and constructed
a new simple encryption scheme based on general assumptions secure against adaptive chosen-ciphertext attack based on this notion. As argued in the introduction,
we believe that non-malleable NIZK comes much closer
to achieving our intuitive notion of “zero-knowledge”
for non-interactive proof systems, and hence will find
many other applications.
We finish with a couple of open problems. A major problem left open is how to achieve non-malleable
NIZK proof systems that are secure against an unbounded number of proofs. Another question concerns
our definition of non-malleability for NIZK (Definition 3.1), in which the second experiment allowed the
adversary to give a proof using a possibly different non-interactive proof system. While this does capture the
right semantics (since being able to prove a theorem
without outside help should imply knowledge of a witness regardless of what proof system one uses), it may
be useful to have a construction in which the adversary in the second experiment uses the same proof system. This would ensure a higher level of "knowledge-tightness" (the current definition allows for a polynomial
loss), and could be needed in proofs of other constructions that utilize non-malleable NIZK.
Acknowledgments
The author wishes to thank Shafi Goldwasser for providing the impetus for this work by suggesting that there
should be a simpler way to achieve full chosen-ciphertext
security under general assumptions than the construction given in [DDN], as well as for providing a great
deal of assistance. We also thank Salil Vadhan for helpful conversations and ideas early in this research. The
author gratefully thanks Cynthia Dwork for introducing him to the notion of non-malleability and convincing
him of its importance. Finally, the author thanks Oded
Goldreich for many helpful comments on the write-up.
References

[BCK] M. Bellare, R. Canetti, and H. Krawczyk. A modular approach to the design and analysis of authentication and key exchange protocols. Proceedings of the 30th Annual Symposium on Theory of Computing, ACM, 1998.

[BDMP] M. Blum, A. De Santis, S. Micali, and G. Persiano. Non-interactive zero-knowledge proofs. SIAM Journal on Computing, vol. 6, December 1991, pp. 1084–1118.

[BDPR] M. Bellare, A. Desai, D. Pointcheval, and P. Rogaway. Relations among notions of security for public-key encryption schemes. Advances in Cryptology – Crypto 98 Proceedings, Lecture Notes in Computer Science Vol. 1462, H. Krawczyk ed., Springer-Verlag, 1998.

[BFM] M. Blum, P. Feldman, and S. Micali. Non-interactive zero-knowledge and its applications. Proceedings of the 20th Annual Symposium on Theory of Computing, ACM, 1988.

[BG] M. Bellare and S. Goldwasser. New paradigms for digital signatures and message authentication based on non-interactive zero knowledge proofs. In G. Brassard, editor, Advances in Cryptology – CRYPTO '89, volume 435 of Lecture Notes in Computer Science, pages 194–211, 20–24 August 1989. Springer-Verlag, 1990.

[BR1] M. Bellare and P. Rogaway. Random oracles are practical: a paradigm for designing efficient protocols. First ACM Conference on Computer and Communications Security, ACM, 1993.

[BR] M. Bellare and P. Rogaway. Optimal asymmetric encryption – How to encrypt with RSA. Advances in Cryptology – Eurocrypt 94 Proceedings, Lecture Notes in Computer Science Vol. 950, A. De Santis ed., Springer-Verlag, 1994.

[BS] M. Bellare and A. Sahai. Non-malleable encryption: Equivalence between two notions, and an indistinguishability-based characterization. To appear, CRYPTO '99.

[CS] R. Cramer and V. Shoup. A practical public key cryptosystem provably secure against adaptive chosen ciphertext attack. Advances in Cryptology – Crypto 98 Proceedings, Lecture Notes in Computer Science Vol. 1462, H. Krawczyk ed., Springer-Verlag, 1998.

[DDN] D. Dolev, C. Dwork, and M. Naor. Non-malleable cryptography. Proceedings of the 23rd Annual Symposium on Theory of Computing, ACM, 1991. Also Manuscript, 1998. To appear, SIAM J. of Computing.

[DIO] G. Di Crescenzo, Y. Ishai, and R. Ostrovsky. Non-interactive and non-malleable commitment. Proceedings of the 30th Annual Symposium on Theory of Computing, ACM, 1998.

[DNS] C. Dwork, M. Naor, and A. Sahai. Concurrent zero-knowledge. Preliminary version appeared in Proceedings of the 30th Annual Symposium on Theory of Computing, ACM, 1998. Full version in preparation.

[DP] A. De Santis and G. Persiano. Zero-knowledge proofs of knowledge without interaction. Proceedings of the 33rd Symposium on Foundations of Computer Science, IEEE, 1992.

[FLS] U. Feige, D. Lapidot, and A. Shamir. Multiple non-interactive zero knowledge proofs based on a single random string (extended abstract). In 31st Annual Symposium on Foundations of Computer Science, volume I, pages 308–317, St. Louis, Missouri, 22–24 October 1990. IEEE.

[GM] S. Goldwasser and S. Micali. Probabilistic encryption. Journal of Computer and System Sciences, 28:270–299, 1984.

[GMR] S. Goldwasser, S. Micali, and C. Rackoff. The knowledge complexity of interactive proof systems. SIAM Journal on Computing, 18(1):186–208, February 1989.

[Go] O. Goldreich. A uniform complexity treatment of encryption and zero-knowledge. Journal of Cryptology, Vol. 6, 1993, pp. 21–53.

[Go2] O. Goldreich. Foundations of cryptography. Class notes, Spring 1989, Technion University.

[NY] M. Naor and M. Yung. Public-key cryptosystems provably secure against chosen ciphertext attacks. Proceedings of the 22nd Annual Symposium on Theory of Computing, ACM, 1990.

[R] J. Rompel. One-way functions are necessary and sufficient for secure signatures. Proceedings of the 22nd Annual Symposium on Theory of Computing, ACM, 1990.

[RS] C. Rackoff and D. Simon. Non-interactive zero-knowledge proof of knowledge and chosen ciphertext attack. Advances in Cryptology – Crypto 91 Proceedings, Lecture Notes in Computer Science Vol. 576, J. Feigenbaum ed., Springer-Verlag, 1991.

[SET] SETCo (Secure Electronic Transaction LLC). The SET standard — book 3 — formal protocol definitions (version 1.0). May 31, 1997. Available from http://www.setco.org/

[Sho98] V. Shoup. Why chosen ciphertext security matters. IBM Research Report RZ 3076, November 1998.