Short Paper: On the Generic Hardness of DDH-II
Ivan Damgård, Carmit Hazay, Angela Zottarel
Abstract. The well-known Decisional Diffie-Hellman (DDH) assumption states that given g, g^a and g^b, for random a, b, the element g^{ab} is pseudo-random. Canetti [Can97] introduced a variant of this assumption, DDH-II, in which b is still random but a is drawn from some well-spread distribution. In this paper we prove that this assumption holds in the generic group model and demonstrate its broad applicability in the context of leakage resilient cryptography.
1 Introduction
The well-known Decisional Diffie-Hellman (DDH) hardness assumption states that, given two random group elements g^a, g^b from a prime-order group G with generator g, it is hard to distinguish g^{ab} from a uniform element of G. This assumption lies at the heart of the security proofs of many cryptographic primitives, most notably the Diffie-Hellman key exchange [DH76], the ElGamal public-key encryption scheme [Gam85] and the Cramer-Shoup cryptosystem [CS98]. It also has many flavors, such as the bilinear [BF01], linear [BBS04] and n-linear variants [Sha07,HK07]. In this paper we study the DDH-II assumption, introduced by Canetti [Can97] for the purpose of obfuscation. The DDH-II assumption states that, given a prime-order group G and g, g^a, g^b, where a is drawn from a certain (not necessarily uniform) distribution with sufficient min-entropy and b is picked uniformly at random, g^{ab} is indistinguishable from a uniform element of G. Our contribution is twofold: (1) we present a proof of hardness for DDH-II in the generic group model; (2) we discuss new applications in the area of leakage resilient cryptography for which DDH-II is useful. We believe that studying this assumption will make it possible to explore and simplify the proofs of many cryptographic constructions.
Security in the Generic Group Model. Whenever a new assumption is introduced, the first question that naturally arises is whether the assumption is meaningful. Put differently, can we really be sure that the underlying problem is hard to solve? Clearly, an unconditional positive answer would imply a proof that P ≠ NP. The generic group model [Sho97] allows us to sidestep precisely this catch. In this model, the adversary is restricted to performing only basic operations (e.g., multiplications and inversions) on the given group elements, without exploiting any a priori information about the internal structure of the group. A proof of hardness in the generic group model does not mean that a problem is hard in the real world, precisely because there is no way to stop a real adversary from using knowledge of particular properties of the group. Nevertheless, such a proof gives some evidence for the "real hardness" of an assumption, since the only remaining way to break the assumption is to exploit the special properties and design of a specific group. In general, the generic group model has proven to be a precious tool for investigating new assumptions, and has been used in many different scenarios to establish their meaningfulness; see [Sho97,MW98,Sma01,Den06,Che06,BW07,Wat12] for just a few examples.
Leakage Resilient Cryptography. Until very recently, most security proofs were carried out in the so-called black-box model [SV98]. In this model, the adversary is only allowed to observe the input/output behavior of the underlying primitive, without having access to the secret state of the system. Unfortunately, physical implementations turned out to be non-black-box, and various side-channel attacks were shown to severely compromise the secret key, voiding the security of the system; see [Koc96,BDL97,BS97,KJJ99,QS01] for some examples. Therefore, in recent years a significant body of research has been dedicated to new models that are more adequate for real-world attacks, and the field called leakage resilient cryptography has emerged. Typically, the leakage obtained by the attacker is formalized as a function h applied to the secret key sk. It is inevitable to restrict the leakage function in some way and, to this end, several different security models have been proposed [CLW06,MR04,AGV09,NS09,DGK+10]. Amongst these are a model that restricts the output length of the leakage function (bounded leakage), a model that assumes some residual min-entropy of the secret key conditioned on the leakage (entropy-bounded leakage), and a model that assumes the secret state leaks only during actual computations (only computation leaks); a sketch of the first of these models appears below.
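As a toy illustration of the bounded-leakage model, the following Python sketch implements a leakage oracle that answers adversarially chosen functions of the secret key while charging their output length against a global budget. The class name, the budget parameter and the example key are hypothetical choices of ours, not taken from the cited works.

```python
# Illustrative sketch of a bounded-leakage oracle (names are hypothetical).
class BoundedLeakageOracle:
    def __init__(self, secret_key: bytes, leakage_bound_bits: int):
        self.sk = secret_key
        self.remaining_bits = leakage_bound_bits  # total leakage budget

    def leak(self, h):
        """Apply the adversarially chosen function h to sk, charging its
        output length against the remaining leakage budget."""
        out = h(self.sk)                  # h returns a bytes object
        cost = 8 * len(out)
        if cost > self.remaining_bits:
            raise ValueError("leakage budget exceeded")
        self.remaining_bits -= cost
        return out

# Example: the adversary learns the first byte of a toy key.
oracle = BoundedLeakageOracle(b"\x13\x37\xca\xfe", leakage_bound_bits=16)
first_byte = oracle.leak(lambda sk: sk[:1])
```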
Known solutions against side-channel attacks range from basic secret-key and public-key primitives, such as encryption and digital signatures, to a wide range of multiparty functionalities (see [MR04,DP08,AGV09,ADW09,DKL09,NS09,FKPR10,DHLAW10,LRW11] and the references therein). All these theoretical solutions try to cope with the physical problems arising from weak implementations of cryptographic schemes proven secure in an ideal model. Although leakage resilient cryptography makes substantial use of theoretical tools, we must keep in mind that the main goal of this recent field is enabling security and privacy in the real world. We therefore believe that this work is relevant to achieving important tasks such as secure storage, implementation of cryptographic primitives on small devices and, more generally, electronic commerce based on cryptographic tools.
Applications. In light of the above discussion, we point out that the structure of DDH-II suggests that it may be useful in the context of leakage resilient cryptography. Let us elaborate on concrete leakage resilient applications that can benefit from this assumption:
1. In the context of leakage resilient secure computation, Damgård et al. [DHP11] introduced the indistinguishability for k-sources (k-IND) assumption, another variant of DDH, which is implied by the DDH-II assumption. Their goal was to design leakage resilient oblivious transfer relying on this hardness assumption. On the one hand, our result supports the meaningfulness of the security reduction given in [DHP11], as we prove that DDH-II is generically hard to solve. On the other hand, the work of Damgård et al. shows that Canetti's assumption, as well as the k-IND assumption, are promising tools for leakage resilient cryptography and can be useful for designing particular leakage resilient secure protocols.
2. Another interesting application of the DDH-II assumption relates to leakage resilient public-key encryption (PKE). This question has been studied intensively by the cryptographic community [AGV09,BG10,BKKV10,DHLAW10,DGK+10,NS09]. Nevertheless, almost all of these works solve the leakage resilient PKE problem with the following restrictions on the basic semantic security game [GM82]: (1) leakage is allowed only from the secret key, and (2) only prior to the computation of the challenge ciphertext. There are two exceptions. The first is the work of Halevi and Lin [HL11], which introduced a new "after-the-fact" definition in which the adversary is allowed to obtain leakage from the secret key even after seeing the challenge, but whose security holds only for plaintexts with sufficiently high min-entropy. The second is the recent work of Namiki et al. [NTY11], which shows how to construct public-key encryption schemes allowing partial leakage from the randomness used by the encryption algorithm; see [NTY11] for further references to papers examining encryption schemes with non-uniform randomness. In both cases the constructions use randomness extractors.
The DDH-II assumption allows us to focus on the latter problem, concerning leakage from the randomness. Specifically, consider the ElGamal PKE, whose security relies on the DDH assumption. Then, viewing the non-uniform distribution, originally regarded as side information about the secret key, as leakage applied to the randomness, it is possible to run the same semantic security game with a reduction to the hardness of DDH-II; this is possible thanks to the symmetry in the assumption. Notice that we can use DDH-II only in a context where we have leakage either from the secret key or from the randomness, but not from both. Nevertheless, it makes it possible to obtain (perhaps for the first time) a leakage resilient PKE without using any extractors; see the sketch after this list.
For settings in which leakage is viewed as a function of the entire secret state (that is, secret key and randomness), we point to the work of Bitansky et al. [BCH12], which shows how leakage from the randomness and the secret key can be viewed from a broader perspective. Namely, a public-key scheme is defined via a two-party functionality, and leakage is treated as a weaker variant of passive corruption. In particular, they show a tight connection between this very general modeling of leakage and non-committing encryption (used for obtaining adaptively secure communication channels), hinting that building efficient encryption schemes tolerating leakage from both the secret key and the randomness may require stronger tools. It is therefore worth studying the scenario where leakage is obtained only from the randomness. Moreover, as [NTY11] points out, dealing with leakage from the randomness is a delicate and challenging task in itself. In fact, the work of Namiki et al. is set in a particular KEM/DEM framework that simulates the so-called split model of [DF11], where the internal state is divided into two parts, each handling a different part of the memory and leaking independently of the other. In this particular case, the DDH-II assumption would fit very naturally into the model and would avoid the use of randomness extractors.
A different attempt to study the security of ElGamal in the context of leakage resilient cryptography was made in [KP10]. That paper considers leakage from the secret key and proves security in the generic group model. We note that, relying on the DDH-II assumption, it would be easy to achieve the same security notion for ElGamal. This would be an improvement over the PKE of [KP10], which is constructed over bilinear groups.
3. Finally, the deep connection between DDH-II and the standard Diffie-Hellman assumption suggests that we could also exploit the properties of Canetti's variant for key-exchange protocols. This primitive has its roots in the seminal work of Diffie and Hellman [DH76] and has since been studied extensively. Given the structure of DDH-II, the key-exchange protocol of [DH76] can be studied in an environment where the secret exponent of one of the end users is compromised. In particular, in the same fashion as before, the hardness of DDH-II can imply the security of this protocol in the presence of leakage. Authenticated key agreement, a closely related but stronger primitive, was studied in the leakage resilient setting in [DHLAW10,ADW09,KV09]. These constructions rely on leakage resilient building blocks, such as signatures, and obtain security against active adversaries.
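As promised in item 2 above, here is a minimal Python sketch, entirely our own illustration rather than a construction from the cited works, of ElGamal encryption whose randomness y is non-uniform but well spread because its three low-order bits have leaked. Under DDH-II the ciphertext should remain pseudo-random even given the leak, with no extractor applied to y. The toy parameters are far too small for real use.

```python
import secrets

# Toy group of prime order p inside Z_P^*, with P = 2p + 1 (illustrative).
P, p, g = 2039, 1019, 4

def leaky_y(leaked_bits=3):
    y = secrets.randbelow(p)
    leak = y % (1 << leaked_bits)   # the part the adversary sees
    return y, leak                  # y keeps ~log2(p) - 3 bits of min-entropy

x = secrets.randbelow(p)            # secret key
h = pow(g, x, P)                    # public key component h = g^x

def encrypt(m):
    y, leak = leaky_y()
    ct = (pow(g, y, P), (pow(h, y, P) * m) % P)   # <g^y, h^y * m>
    return ct, leak                 # leak is handed to the adversary

m = pow(g, 42, P)                   # message encoded as a group element
(alpha, beta), leak = encrypt(m)
assert (beta * pow(alpha, p - x, P)) % P == m     # beta / alpha^x = m
```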
Organization. Section 2 introduces some basic notions and definitions. In Section 3 we prove our main theorem.
2 Basic Notations
For the sake of completeness, we introduce some basic notation and definitions. For a set S we write x ← S to denote that x is sampled uniformly from S. We use negl to denote a negligible function f : N → R, namely a function f such that, for every polynomial p(·) and all large enough n, f(n) ≤ 1/p(n).
2.1 The ElGamal PKE
The ElGamal encryption scheme is a PKE that operates over a cyclic group G of prime order p. Let g denote a random generator of G; then the public and secret keys are ⟨G, p, g, h⟩ and ⟨G, p, g, x⟩, where x ← F_p and h = g^x. A message m ∈ G is encrypted by choosing y ← F_p; the ciphertext is ⟨g^y, h^y · m⟩. A ciphertext c = ⟨α, β⟩ is decrypted as m = β/α^x. We use the property that given y = log_g α one can reconstruct m = β/h^y, and hence a party encrypting m can prove knowledge of m by proving knowledge of y.
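For concreteness, a minimal Python sketch of this scheme follows; the toy parameters (the modulus P = 2p + 1 and the generator g = 4 of the order-p subgroup) are our own illustrative choices and are far too small for real use.

```python
import secrets

# Toy instantiation of the ElGamal PKE of Section 2.1 (illustrative only):
# G is the order-p subgroup of quadratic residues modulo P, where P = 2p + 1.
P, p, g = 2039, 1019, 4

def keygen():
    x = secrets.randbelow(p)                     # x <- F_p
    return x, pow(g, x, P)                       # sk = x, pk component h = g^x

def encrypt(h, m):
    y = secrets.randbelow(p)                     # y <- F_p
    return pow(g, y, P), (pow(h, y, P) * m) % P  # <g^y, h^y * m>

def decrypt(x, ct):
    alpha, beta = ct
    return (beta * pow(alpha, p - x, P)) % P     # beta / alpha^x

x, h = keygen()
m = pow(g, 123, P)                               # message encoded in G
assert decrypt(x, encrypt(h, m)) == m
```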
2.2 Hardness Assumptions
Before giving the formal definition of DDH-II, we recall the DDH assumption.
Definition 1. Let G be a cyclic group of prime order p and let g be a generator of G. The Decisional Diffie-Hellman assumption holds if, for every PPT algorithm A,

|Pr[A(g, g^a, g^b, g^{ab}) = 1] − Pr[A(g, g^a, g^b, g^c) = 1]| ≤ negl(k),

where the probability is taken over the random choice of a, b, c ← F_p.
In the following, we first recall Canetti's definition [Can97] of a well-spread distribution.
Definition 2. A distribution ensemble X = {X_k}_{k∈N} is well spread if for every polynomial p(·) and all large enough k, the maximum probability of any element is smaller than 1/p(k), i.e., max_x(Pr[X_k = x]) ≤ 1/p(k).
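As a toy example, the following Python sketch samples from an ensemble that is well spread yet very far from uniform; the construction (zeroing the lower half of the bits of the exponent) is a hypothetical choice of ours.

```python
import secrets

# Illustrative well-spread ensemble: X_k outputs a k-bit value whose lower
# k//2 bits are forced to zero. Each outcome has probability 2^-(k - k//2),
# which drops below 1/p(k) for every polynomial p and all large enough k,
# so the ensemble is well spread despite being far from uniform.
def sample_X(k):
    high = secrets.randbits(k - k // 2)
    return high << (k // 2)

k = 64
print(sample_X(k))                        # low 32 bits are always zero
print(f"max probability: 2^-{k - k // 2}")
```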
Definition 3. Let G be a cyclic group of prime order p and let g be a generator of G. The DDH Assumption II (DDH-II) holds if, for every PPT algorithm A,

|Pr[A(g, g^a, g^b, g^{ab}) = 1] − Pr[A(g, g^a, g^b, g^c) = 1]| ≤ negl(k),

where a is drawn from a well-spread distribution over F_p and b, c ← F_p.
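To make the two challenge distributions explicit, here is a Python sketch of a DDH-II challenger, assuming the toy group from the ElGamal sketch in Section 2.1 above and an arbitrary well-spread sampler for a; all names and parameters are illustrative.

```python
import secrets

# Toy parameters as in the ElGamal sketch of Section 2.1 (illustrative).
P, p, g = 2039, 1019, 4

def sample_a():
    # Hypothetical well-spread, non-uniform choice: low 3 bits fixed to zero.
    return secrets.randbelow(p) & ~0b111

def ddh2_challenge(real):
    a, b = sample_a(), secrets.randbelow(p)
    c = (a * b) % p if real else secrets.randbelow(p)
    return (g, pow(g, a, P), pow(g, b, P), pow(g, c, P))

# DDH-II asserts that no PPT distinguisher can tell these apart:
tuple_real = ddh2_challenge(real=True)    # (g, g^a, g^b, g^ab)
tuple_rand = ddh2_challenge(real=False)   # (g, g^a, g^b, g^c)
```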
3 The Main Theorem
In this section we prove our main theorem.
Theorem 1. The DDH-II assumption holds in the generic group model.
Proof. Let A be a polynomial-time generic group adversary. As usual, the generic group model is implemented by choosing a random encoding σ : G → {0, 1}^m. Instead of working directly with group elements, A takes as input their images under σ; this way, all A can test is string equality. A is also given access to oracles computing the group operation and inversion: on input σ(g_1), σ(g_2) the group-operation oracle returns σ(g_1 g_2), and similarly for inversion. Finally, we may assume that A submits to the oracles only encodings of elements it has previously received, since we can choose m large enough that the probability of guessing a string in the image of σ is negligible.
We consider an algorithm B playing the following game with A. Algorithm B chooses five bit strings σ_0, ..., σ_4 uniformly in {0, 1}^m. Internally, B keeps track of the encoded elements using polynomials in the ring F_p[X, Y, T_0, T_1]. To maintain consistency with the bit strings given to A, B maintains a list L of pairs (F, σ), where F is a polynomial in the ring specified above and σ is the encoding of a group element; the polynomial F represents the exponent of the encoded element. Initially, L is set to

{(1, σ_0), (X, σ_1), (Y, σ_2), (T_0, σ_3), (T_1, σ_4)}.
Algorithm B starts the game by providing A with σ_0, ..., σ_4. The simulation of the oracles is as follows:
Group action: Given two strings σ_i, σ_j representing elements of G, B recovers the corresponding polynomials F_i and F_j and computes F_i + F_j. If F_i + F_j is already in L, B returns the corresponding bit string to A; otherwise it returns a uniform element σ of {0, 1}^m and stores (F_i + F_j, σ) in L.
Inversion: Given a string σ representing an element of G, B recovers its internal representation F and computes −F. If the polynomial −F is already in L, B returns the corresponding bit string; otherwise it returns a uniform string σ′ and stores (−F, σ′) in L.
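To make B's bookkeeping concrete, here is a minimal Python sketch of the two oracles; since every polynomial handed out is a linear form c_0 + c_1·X + c_2·Y + c_3·T_0 + c_4·T_1 over F_p (see the degree argument below), we represent it by its coefficient 5-tuple. The toy group order p, the encoding length m and all function names are our own hypothetical choices.

```python
import secrets

# Sketch of B's bookkeeping: linear forms as coefficient 5-tuples
# (constant, X, Y, T0, T1); group action adds tuples, inversion negates.
p, m = 1019, 128                      # toy group order, encoding length

L = {}                                # linear form -> encoding sigma

def encode(poly):
    if poly not in L:
        L[poly] = secrets.randbits(m) # fresh uniform m-bit string
    return L[poly]

def lookup(sigma):
    return next(f for f, s in L.items() if s == sigma)

# Initial list: the exponents 1, X, Y, T0, T1.
sigmas = [encode(tuple(int(i == j) for j in range(5))) for i in range(5)]

def group_action(si, sj):
    fi, fj = lookup(si), lookup(sj)
    return encode(tuple((a + b) % p for a, b in zip(fi, fj)))

def inversion(si):
    return encode(tuple(-a % p for a in lookup(si)))

sigma_x_plus_y = group_action(sigmas[1], sigmas[2])   # encoding of X + Y
```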
Once A has finished querying the oracles, it outputs a bit b′. At this point, B chooses a bit b and uniform values x, y, s in F_p, and sets X = x, Y = y, T_b = xy and T_{1−b} = s.
If the simulation provided by B is consistent, it reveals nothing about b; this means that the probability of A guessing the correct value of b is 1/2. The only way the simulation could be inconsistent is if, after we choose values for X, Y, T_0, T_1, two distinct polynomials in L happen to evaluate to the same value.
First, we prove that A is unable to cause such a collision on its own. Notice that the values substituted for the formal variables are all independent, except that T_b is set to the product xy. Hence, A can cause a collision only by producing a polynomial containing a multiple of XY. However, L is initially populated with polynomials of degree at most one, and neither the group-operation oracle nor the inversion oracle increases the degree. Thus, all polynomials contained in L have degree at most one, which is enough to conclude that A cannot purposely produce a collision.
It remains to prove that the probability of a collision occurring due to an unlucky choice of values is negligible. In other words, we have to bound the probability that two distinct F_i, F_j in L evaluate to the same value after the substitution, namely that F_i(x, y, s) − F_j(x, y, s) = 0. This reduces to bounding the probability of hitting a zero of F_i − F_j.
Recall that the Schwartz–Zippel lemma says that if f is a polynomial of degree d in F_p[X_1, ..., X_n] and S ⊆ F_p, then

Pr[f(x_1, ..., x_n) = 0] ≤ d / |S|,

where x_1, ..., x_n are chosen uniformly from S.
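For intuition, the following snippet numerically verifies the bound on a small example of our own choosing; the parameters are purely illustrative.

```python
import itertools

# Numeric sanity check of Schwartz-Zippel: f(X, Y) = X*Y has degree d = 2
# over F_p, and with S = F_p its zeros are exactly the 2p - 1 pairs with
# x = 0 or y = 0, so the zero fraction (2p - 1)/p^2 is at most d/|S| = 2/p.
p = 101
zeros = sum(1 for x, y in itertools.product(range(p), repeat=2)
            if (x * y) % p == 0)
assert zeros == 2 * p - 1
assert zeros / p**2 <= 2 / p
```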
Now, let H_∞(X) = −log(max_x Pr[X_k = x]) be the min-entropy of the distribution ensemble X = {X_k}_{k∈N}. Then, for fixed x_2, ..., x_n ∈ F_p,

Pr_{x←X}[f(x, x_2, ..., x_n) = 0] ≤ d / 2^{H_∞(X)}.
Going back to our case, and recalling that F_j − F_i has degree at most one, we have

Pr_{x←X; y,s←F_p}[(F_j − F_i)(x, y, s) = 0]
    = Σ_{y,s} Pr_{x←X}[(F_j − F_i)(x, Y, S) = 0 | Y = y, S = s] · Pr[Y = y, S = s]
    ≤ Σ_{y,s} (1 / 2^{H_∞(X)}) · Pr[Y = y, S = s]
    = 1 / 2^{H_∞(X)}
    = max_x(Pr[X_k = x]).
Since by assumption the distribution X is well spread, we have that, for every polynomial p and all large enough k, the probability of a collision is at most 1/p(k), which means precisely that the probability of a collision is negligible. This concludes the proof of the theorem. ⊓⊔
4 Future Directions
The particular form of the DDH-II assumption suggests that it may be possible to release some partial information about the exponent and still be able to prove security. This assumption seems to have much potential, but it must also be employed carefully. For instance, it is important to note that standard hybrid arguments do not immediately carry over from the standard setting (without leakage). We thus propose to further study this assumption and, in particular, to examine its benefits in leakage resilient cryptography.
Specifically, it would be very interesting to construct a PKE scheme that allows leakage from the randomness before, as well as after, the challenge ciphertext is produced. The DDH-II assumption takes one step in this direction, as it allows leakage from the randomness of the challenge ciphertext (but not from the secret key) when instantiating the PKE with ElGamal. In addition, we suggest examining another variant of DDH, also proposed by Canetti [Can97], where instead of receiving g^a, the adversary sees f(a) for some uninvertible function f. This assumption could be useful in the auxiliary-input leakage setting.
References
[ADW09] Joël Alwen, Yevgeniy Dodis, and Daniel Wichs. Leakage-resilient public-key cryptography in the bounded-retrieval model. In CRYPTO, pages 36–54, 2009.
[AGV09] Adi Akavia, Shafi Goldwasser, and Vinod Vaikuntanathan. Simultaneous hardcore bits and cryptography against memory attacks. In TCC, pages 474–495, 2009.
[BBS04] Dan Boneh, Xavier Boyen, and Hovav Shacham. Short group signatures. In CRYPTO, pages 41–55, 2004.
[BCH12] Nir Bitansky, Ran Canetti, and Shai Halevi. Leakage-tolerant interactive protocols. In TCC, pages 266–284, 2012.
[BDL97] Dan Boneh, Richard A. DeMillo, and Richard J. Lipton. On the importance of checking cryptographic protocols for faults (extended abstract). In EUROCRYPT, pages 37–51, 1997.
[BF01] Dan Boneh and Matthew K. Franklin. Identity-based encryption from the Weil pairing. In CRYPTO, pages 213–229, 2001.
[BG10] Zvika Brakerski and Shafi Goldwasser. Circular and leakage resilient public-key encryption under subgroup indistinguishability (or: Quadratic residuosity strikes back). In CRYPTO, pages 1–20, 2010.
[BKKV10] Zvika Brakerski, Yael Tauman Kalai, Jonathan Katz, and Vinod Vaikuntanathan. Overcoming the hole in the bucket: Public-key cryptography resilient to continual memory leakage. In FOCS, pages 501–510, 2010.
[BS97] Eli Biham and Adi Shamir. Differential fault analysis of secret key cryptosystems. In CRYPTO, pages 513–525, 1997.
[BW07] Xavier Boyen and Brent Waters. Full-domain subgroup hiding and constant-size group signatures. In Public Key Cryptography, pages 1–15, 2007.
[Can97] Ran Canetti. Towards realizing random oracles: Hash functions that hide all partial information. In CRYPTO, pages 455–469, 1997.
[Che06] Jung Hee Cheon. Security analysis of the strong Diffie-Hellman problem. In EUROCRYPT, pages 1–11, 2006.
[CLW06] Giovanni Di Crescenzo, Richard J. Lipton, and Shabsi Walfish. Perfectly secure password protocols in the bounded retrieval model. In TCC, pages 225–244, 2006.
[CS98] Ronald Cramer and Victor Shoup. A practical public key cryptosystem provably secure against adaptive chosen ciphertext attack. In CRYPTO, pages 13–25, 1998.
[Den06] Alexander W. Dent. The hardness of the DHK problem in the generic group model. IACR Cryptology ePrint Archive, 2006:156, 2006.
[DF11] Stefan Dziembowski and Sebastian Faust. Leakage-resilient cryptography from the inner-product extractor. In ASIACRYPT, pages 702–721, 2011.
[DGK+10] Yevgeniy Dodis, Shafi Goldwasser, Yael Tauman Kalai, Chris Peikert, and Vinod Vaikuntanathan. Public-key encryption schemes with auxiliary inputs. In TCC, pages 361–381, 2010.
[DH76] Whitfield Diffie and Martin E. Hellman. New directions in cryptography. IEEE Transactions on Information Theory, 22(6):644–654, 1976.
[DHLAW10] Yevgeniy Dodis, Kristiyan Haralambiev, Adriana López-Alt, and Daniel Wichs. Efficient public-key cryptography in the presence of key leakage. In ASIACRYPT, pages 613–631, 2010.
[DHP11] Ivan Damgård, Carmit Hazay, and Arpita Patra. Leakage resilient secure two-party computation. IACR Cryptology ePrint Archive, 2011:256, 2011.
[DKL09] Yevgeniy Dodis, Yael Tauman Kalai, and Shachar Lovett. On cryptography with auxiliary input. In STOC, pages 621–630, 2009.
[DP08] Stefan Dziembowski and Krzysztof Pietrzak. Leakage-resilient cryptography. In FOCS, pages 293–302, 2008.
[FKPR10] Sebastian Faust, Eike Kiltz, Krzysztof Pietrzak, and Guy N. Rothblum. Leakage-resilient signatures. In TCC, pages 343–360, 2010.
[Gam85] Taher El Gamal. A public key cryptosystem and a signature scheme based on discrete logarithms. IEEE Transactions on Information Theory, 31(4):469–472, 1985.
[GM82] Shafi Goldwasser and Silvio Micali. Probabilistic encryption and how to play mental poker keeping secret all partial information. In STOC, pages 365–377, 1982.
[HK07] Dennis Hofheinz and Eike Kiltz. Secure hybrid encryption from weakened key encapsulation. In CRYPTO, pages 553–571, 2007.
[HL11] Shai Halevi and Huijia Lin. After-the-fact leakage in public-key encryption. In TCC, pages 107–124, 2011.
[KJJ99] Paul C. Kocher, Joshua Jaffe, and Benjamin Jun. Differential power analysis. In CRYPTO, pages 388–397, 1999.
[Koc96] Paul C. Kocher. Timing attacks on implementations of Diffie-Hellman, RSA, DSS, and other systems. In CRYPTO, pages 104–113, 1996.
[KP10] Eike Kiltz and Krzysztof Pietrzak. Leakage resilient ElGamal encryption. In ASIACRYPT, pages 595–612, 2010.
[KV09] Jonathan Katz and Vinod Vaikuntanathan. Signature schemes with bounded leakage resilience. In ASIACRYPT, pages 703–720, 2009.
[LRW11] Allison B. Lewko, Yannis Rouselakis, and Brent Waters. Achieving leakage resilience through dual system encryption. In TCC, pages 70–88, 2011.
[MR04] Silvio Micali and Leonid Reyzin. Physically observable cryptography (extended abstract). In TCC, pages 278–296, 2004.
[MW98] Ueli M. Maurer and Stefan Wolf. Lower bounds on generic algorithms in groups. In EUROCRYPT, pages 72–84, 1998.
[NS09] Moni Naor and Gil Segev. Public-key cryptosystems resilient to key leakage. IACR Cryptology ePrint Archive, 2009:105, 2009.
[NTY11] Hitoshi Namiki, Keisuke Tanaka, and Kenji Yasunaga. Randomness leakage in the KEM/DEM framework. In ProvSec, pages 309–323, 2011.
[QS01] Jean-Jacques Quisquater and David Samyde. Electromagnetic analysis (EMA): Measures and counter-measures for smart cards. In E-smart, pages 200–210, 2001.
[Sha07] Hovav Shacham. A Cramer-Shoup encryption scheme from the linear assumption and from progressively weaker linear variants. IACR Cryptology ePrint Archive, 2007:74, 2007.
[Sho97] Victor Shoup. Lower bounds for discrete logarithms and related problems. In EUROCRYPT, pages 256–266, 1997.
[Sma01] Nigel P. Smart. The exact security of ECIES in the generic group model. In IMA Int. Conf., pages 73–84, 2001.
[SV98] Claus-Peter Schnorr and Serge Vaudenay. The black-box model for cryptographic primitives. J. Cryptology, 11(2):125–140, 1998.
[Wat12] Brent Waters. Functional encryption for regular languages. In CRYPTO, pages 218–235, 2012.