
Trends in MPC for Information
Assurance
Prof. C. Pandu Rangan
(IIT Madras)
Outline of the Talk
• Information Assurance
• Shannon’s Model
• PRMT and PSMT
• Secret Sharing and Variants
• Hashing
• Proxy Re-Encryption
Information Assurance (IA) : Definition
• IA is the practice of managing information-related risks.
• IA practitioners seek to protect the
Confidentiality, Integrity and Availability of
data and their delivery systems.
• The above goals are relevant whether the data
are in storage, processing or transit and
whether threatened by malice or accident.
IA in a nutshell: IA is the process of
ensuring that the right people get the right
information at the right time.
Information Assurance (IA) : Definition Contd …
• IA’s broader connotation also includes reliability
and emphasizes strategic risk management over
tools and tactics.
• IA includes other corporate governance issues
such as privacy, compliance, audits, business
continuity and disaster recovery.
IA is interdisciplinary and draws from multiple
fields, including fraud examination, forensic
science, military science, management science,
systems engineering, security engineering, and
criminology, in addition to computer science.
Foundation Concept
Adversary with Unbounded
Computing Power
Perfectly Reliable Message Transmission
(PRMT)
[Figure: a communication network with sender S (holding a message m ∈ F), receiver R, and intermediate players P1, P2, …, Pn]
• n players form the vertices of a communication network.
• The underlying network is connected and synchronous.
• All links are assumed to be secure.
• A sender S has to send a message m (element of finite field F)
reliably to a receiver R.
• Some of the players may be corrupt.
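A minimal sketch of how PRMT can be achieved, assuming (as in the classical characterization) that S and R are joined by 2t + 1 vertex-disjoint paths when at most t players are Byzantine: S sends m along every path, and R takes the majority.

```python
from collections import Counter

def prmt_send(m, num_paths):
    """Sender S: transmit a copy of m over every vertex-disjoint path."""
    return [m] * num_paths

def corrupt(copies, bad_indices, fake):
    """Adversary: tamper with the copies carried on up to t paths."""
    return [fake if i in bad_indices else c for i, c in enumerate(copies)]

def prmt_receive(copies):
    """Receiver R: majority vote recovers m when at most t of 2t+1 paths lie."""
    value, _ = Counter(copies).most_common(1)[0]
    return value

t = 2
copies = prmt_send(42, 2 * t + 1)        # 5 vertex-disjoint paths
tampered = corrupt(copies, {0, 3}, 99)   # adversary corrupts t = 2 paths
assert prmt_receive(tampered) == 42
```

Note that this simple flooding protocol achieves reliability but not privacy: the adversary sees m on the paths it controls, which is why PSMT needs more machinery.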
Perfectly Secure Message Transmission
(PSMT)
[Figure: the same communication network; S sends m ∈ F to R while some players are corrupt]
PSMT = PRMT + the adversary should get no information about m
The above should hold even if the adversary has
unbounded computing power
Modeling Corruption
The behaviour of corrupt players is modeled using a
centralized Adversary. The adversary is classified
based on:
• Computational power: bounded (cryptographic) or unbounded (information-theoretic)
• Extent of corruption: passive, active (Byzantine), or mixed
• Type of corruption: static, adaptive, or mobile
• Corruption capacity: threshold or non-threshold
Information-Theoretic (Shannon)
Security (Strongest Notion of Security)
Security = Privacy + Correctness
Privacy: Given any adversarial model, the corrupt
players should get no information about the
message transmitted even if they collude.
Correctness: The message transmitted by the
sender should correctly reach the Receiver.
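The one-time pad is the canonical example of Shannon security: with a uniformly random key as long as the message, the ciphertext is statistically independent of the message, so even an unbounded adversary learns nothing. A minimal sketch:

```python
import secrets

def otp_encrypt(message: bytes, key: bytes) -> bytes:
    # Ciphertext = message XOR key. With a uniform one-time key,
    # every plaintext is equally likely given the ciphertext.
    assert len(key) == len(message)
    return bytes(m ^ k for m, k in zip(message, key))

msg = b"attack at dawn"
key = secrets.token_bytes(len(msg))   # fresh uniform key, used once
ct = otp_encrypt(msg, key)
assert otp_encrypt(ct, key) == msg    # decryption is the same XOR
```

The price of this unconditional security is key management: the key must be as long as the message and never reused, which is what motivates the distributed techniques in the rest of the talk.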
Multiparty Computation
• General framework for describing computation between
parties who do not trust each other
• Example: elections
– N parties, each one has a “Yes” or “No” vote
– Goal: determine whether the majority voted “Yes”,
but no voter should learn how other people voted
• Example: auctions
– Each bidder makes an offer
• Offer should be committing! (can’t change it
later)
– Goal: determine whose offer won without revealing
losing offers
More Examples of MPC
• Example: database privacy
– Evaluate a query on the database without
revealing the query to the database owner
– Evaluate a statistical query on the database
without revealing the values of individual
entries
– Many variations
A Couple of Observations
• In all cases, we are dealing with
distributed multi-party protocols
– A protocol describes how parties are
supposed to exchange messages on the
network
• All of these tasks can be easily
computed by a trusted third party
– The goal of secure multi-party computation
is to achieve the same result without
involving a trusted third party
Secret Sharing Protocols
[Shamir 79, Blakley 79]
 Set of players P = {P1 , P2, … , Pn}, dealer D (e.g., D = P1).
 Two phases
– Sharing phase
– Reconstruction phase
 Sharing Phase
– D initially holds s and each player Pi finally
holds some private information vi.
 Reconstruction Phase
– Each player Pi reveals his private information
v’i on which a reconstruction function is
applied to obtain s = Rec(v’1, v’2, …, v’n).
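Shamir's scheme instantiates the two phases with polynomial interpolation over a finite field: the dealer hides s as the constant term of a random degree-t polynomial, and any t + 1 shares determine it. A minimal sketch (the prime is an arbitrary demo choice):

```python
import random

PRIME = 2**61 - 1  # a Mersenne prime; shares live in the field F_p

def share(secret, t, n):
    """Sharing phase: random degree-t polynomial f with f(0) = secret;
    player i's share is the point (i, f(i))."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(t)]
    def f(x):
        return sum(c * pow(x, j, PRIME) for j, c in enumerate(coeffs)) % PRIME
    return [(i, f(i)) for i in range(1, n + 1)]

def reconstruct(points):
    """Reconstruction phase: Lagrange interpolation at x = 0
    from any t + 1 shares."""
    secret = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

shares = share(12345, t=2, n=5)
assert reconstruct(shares[:3]) == 12345   # any t + 1 = 3 shares suffice
```

With t or fewer shares, every candidate secret is consistent with some degree-t polynomial, which is exactly the perfect-privacy property stated on the next slide.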
Secret Sharing (cont’d)
[Figure: in the Sharing phase, the Dealer splits secret s into shares v1, v2, …, vn]
• Fewer than t + 1 players have no information about the secret
Secret Sharing (cont’d)
[Figure: in the Reconstruction phase, t + 1 players pool their shares and recover secret s]
• Any t + 1 players can reconstruct the secret
• Players are assumed to reveal their shares honestly
Verifiable Secret Sharing (VSS)
[Chor, Goldwasser, Micali 85]
 Extends secret sharing to the case of active corruptions
(corrupted players, incl. Dealer, may not follow the
protocol)
 Up to t corrupted players
 Adaptive adversary
 Reconstruction Phase
– Each player Pi reveals (some of) his private
information v’i on which a reconstruction
function is applied to obtain
s’ = Rec(v’1, v’2, …, v’n).
VSS Requirements
 Privacy
– If D is honest, adversary has no Shannon
information about s during the Sharing phase.
 Correctness
– If D is honest, the reconstructed value s’ = s.
 Commitment
– After Sharing phase, s’ is uniquely determined.
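The commitment property can be enforced with public commitments to the dealer's polynomial. The sketch below follows Feldman's later, simpler (and only computationally hiding) construction, not the CGM scheme itself; the group parameters are toy-sized demo values.

```python
import random

# Demo parameters (not secure sizes): p = 2q + 1, both prime;
# g = 4 generates the order-q subgroup of Z_p*.
p, q, g = 2039, 1019, 4

def deal(s, t, n):
    """Dealer: degree-t polynomial f over Z_q with f(0) = s, plus
    public commitments C_j = g^(a_j) to each coefficient a_j."""
    coeffs = [s % q] + [random.randrange(q) for _ in range(t)]
    shares = [sum(a * pow(i, j, q) for j, a in enumerate(coeffs)) % q
              for i in range(1, n + 1)]
    commitments = [pow(g, a, p) for a in coeffs]  # broadcast to everyone
    return shares, commitments

def verify(i, v_i, commitments):
    """Player i accepts share v_i iff g^(v_i) == prod_j C_j^(i^j) (mod p),
    i.e. the share really lies on the committed polynomial."""
    rhs = 1
    for j, C in enumerate(commitments):
        rhs = rhs * pow(C, i ** j, p) % p
    return pow(g, v_i, p) == rhs

shares, comms = deal(777, t=2, n=5)
assert all(verify(i + 1, v, comms) for i, v in enumerate(shares))
assert not verify(1, (shares[0] + 1) % q, comms)  # tampered share is rejected
```

Because every player can check its share against the same public commitments, a corrupt dealer cannot hand out inconsistent shares without being caught, which is what pins down a unique s' after the Sharing phase.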
Game Theory in Cryptography
• Work on distributed computing and on
cryptography has assumed agents are either
honest or dishonest
• Honest agents follow the protocol and dishonest
agents do all they can to subvert it
• Game theory assumes all agents are rational
and try to maximize their utility
• Both views make sense in different contexts,
but their combination is more appropriate to
practical situations
Rational Secret Sharing
• Players are assumed to be rational
• Each player’s preferences are such that
getting the secret is better than not getting it
• Secondarily, the fewer of the other players
that get it, the better
• The problem: no player wants to send his share!
Rational secret sharing has applications in highly
competitive real-world scenarios, where players
are modeled as selfish.
Rational Secret Sharing Contd
Preferences and Payoffs :
For any player pi , let w1,w2,w3,w4 be the payoffs
obtained in the following scenarios
w1 − pi gets the secret, others do not get the secret
w2 − pi gets the secret, others get the secret
w3 − pi does not get the secret, others do not
get the secret
w4 − pi does not get the secret, others get the secret
The preferences of pi are specified by w1 > w2 > w3 > w4
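These preferences can be encoded as a simple utility function; the numeric values below are illustrative assumptions, since only the ordering w1 > w2 > w3 > w4 matters:

```python
def payoff(i_gets: bool, others_get: bool, w=(4, 3, 2, 1)):
    """Utility of player p_i. The default payoffs (4, 3, 2, 1) are
    arbitrary example values realizing w1 > w2 > w3 > w4."""
    w1, w2, w3, w4 = w
    if i_gets:
        return w1 if not others_get else w2   # w1: exclusive; w2: shared
    return w3 if not others_get else w4       # w3: nobody; w4: left out

# Learning the secret dominates (w1, w2 > w3, w4), and within each
# case exclusivity is preferred (w1 > w2 and w3 > w4).
assert (payoff(True, False) > payoff(True, True)
        > payoff(False, False) > payoff(False, True))
```

Under this ordering, silently withholding one's share weakly dominates sending it in a one-shot reconstruction, which is exactly why naive reconstruction fails for rational players.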
Rational Secret Sharing Contd
Underlying Assumptions:
At each step, a player receives all the messages that were
sent to him by the other players at the previous step
The system is synchronous and message delivery takes a
fixed delay
Communication is guaranteed
At each step, all the players send their shares
simultaneously
Rational Secret Sharing Contd
STOC ’04 : Halpern and Teague [1] proved the impossibility
of a deterministic mechanism for Rational
Secret Sharing, and proposed a randomized
protocol for achieving it
SCN ’06 : Gordon and Katz [3] improved the
randomized protocol (the dealer’s
involvement is minimized)
PODC ’06 : Abraham et al. [4] analyzed Rational
Secret Sharing in a setting where players
form coalitions
CRYPTO ’06 : Lysyanskaya and Triandopoulos [5] analyzed the
problem in the presence of a few malicious players
Rational Secret Sharing Contd
The Intuition:
• Suppose the players repeatedly play the game (repeatedly share
the secret)
• If a player fails to cooperate by withholding his share in the
current game, the other players withhold their shares in all
further games (the grim-trigger strategy)
• Hence every player, fearing the loss of all shares from the other
players in further games, cooperates in the current game
• This punishment strategy acts as an incentive for a player to
cooperate in the current game
Adversary with Polynomial
Time Computing Power
Hashing
• A hash algorithm condenses messages
of arbitrary length into a smaller, fixed-length
message digest
• Federal Information Processing Standard
(FIPS) 180-3, the Secure Hash Standard (SHS)
[FIPS 180-3], specifies five Approved hash
algorithms:
- SHA-1, SHA-224, SHA-256, SHA-384,
and SHA-512
• Secure hash algorithms are typically used
with other cryptographic algorithms
Hashing Properties
• Collision resistance: It is computationally
infeasible to find two different inputs to
the hash function that have the same hash value
• Preimage resistance: Given a randomly chosen
hash value, hash_value, it is computationally
infeasible to find an x so that
hash(x) = hash_value
This property is also called the one-wayness
property
• Second preimage resistance: It is
computationally infeasible to find a second
input that has the same hash value as a
given input
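The digest sizes of the five approved algorithms, and the "unrelated output" behaviour underlying these properties, can be checked directly with Python's standard hashlib:

```python
import hashlib

msg = b"information assurance"

# Each approved algorithm maps arbitrary-length input to a
# fixed-size digest: 160, 224, 256, 384, and 512 bits.
for name, bits in [("sha1", 160), ("sha224", 224), ("sha256", 256),
                   ("sha384", 384), ("sha512", 512)]:
    digest = hashlib.new(name, msg).digest()
    assert len(digest) * 8 == bits

# Distinct inputs yield (with overwhelming probability) unrelated
# digests; finding two inputs that agree is what collision
# resistance says is infeasible.
d1 = hashlib.sha256(b"message A").hexdigest()
d2 = hashlib.sha256(b"message B").hexdigest()
assert d1 != d2
```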
Strength of a Hash Function
• The work factor of an attack on a hash function is the
number of hash operations it must perform
• If the work factor is 2^x, then the strength is
defined to be x bits
Strength of Approved Hash Functions

Algorithm | Collision Resistance (bits) | Preimage Resistance (bits) | Second Preimage Resistance (bits)
SHA-1     | < 80                        | 160                        | 160 - K
SHA-224   | 112                         | 224                        | 224 - K
SHA-256   | 128                         | 256                        | 256 - K
SHA-384   | 192                         | 384                        | 384 - K
SHA-512   | 256                         | 512                        | 512 - K

(K grows with the length of the target message)
• Random Oracle Model and Hash Functions
• Provable Security
Proxy Re-encryption
• In 2005, the digital rights management (DRM)
of Apple’s iTunes was compromised partly
because an untrusted party (i.e., the client’s
machine) could obtain the plaintext during a
naive decrypt-and-encrypt operation
This flaw could have been prevented by using a
secure Proxy Re-encryption scheme.
Proxy Re-encryption
• The previous scenario can be generalized as below:
[Figure: the server stores data encrypted with the server secret, then decrypts it and re-encrypts it with user A’s public key before A decrypts; a malicious user on the server can get the decrypted data in between]
Proxy Re-encryption
[Figure: instead, the proxy re-encrypts the ciphertext directly to user A, who decrypts it; no information about the data is revealed to the proxy]
Proxy Re-encryption
Allows a semi-trusted proxy to convert a
ciphertext for an entity B (delegator) into a
ciphertext for C (delegatee), such that:
• The proxy cannot see the underlying
plaintext.
• The delegatee alone cannot decrypt the
delegator’s ciphertext.
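A minimal sketch of how such conversion is possible, following the classic ElGamal-based scheme of Blaze, Bleumer, and Strauss (which is bidirectional and therefore weaker than the schemes in the references); the group parameters are toy-sized demo values, not secure ones:

```python
import secrets

# Toy group (not secure sizes): p = 2q + 1, both prime;
# g = 4 generates the order-q subgroup of Z_p*.
p, q, g = 2039, 1019, 4

def keygen():
    sk = secrets.randbelow(q - 2) + 1          # sk in [1, q-1]
    return sk, pow(g, sk, p)                   # (secret, public) key pair

def encrypt(pk, m):
    """Ciphertext for the key holder: (m * g^r, pk^r)."""
    r = secrets.randbelow(q - 2) + 1
    return (m * pow(g, r, p) % p, pow(pk, r, p))

def rekey(sk_b, sk_c):
    """Re-encryption key rk = sk_c / sk_b mod q. Note it needs both
    secrets: this simple scheme is bidirectional."""
    return sk_c * pow(sk_b, q - 2, q) % q

def reencrypt(rk, ct):
    """Proxy: (m * g^r, g^(b*r)) -> (m * g^r, g^(c*r)).
    The blinded component m * g^r is never opened, so the proxy
    sees nothing about m."""
    c1, c2 = ct
    return (c1, pow(c2, rk, p))

def decrypt(sk, ct):
    c1, c2 = ct
    g_r = pow(c2, pow(sk, q - 2, q), p)        # c2^(1/sk) = g^r
    return c1 * pow(g_r, p - 2, p) % p         # m = c1 / g^r

sk_b, pk_b = keygen()                          # delegator B
sk_c, pk_c = keygen()                          # delegatee C
ct_b = encrypt(pk_b, 123)                      # encrypted for B
ct_c = reencrypt(rekey(sk_b, sk_c), ct_b)      # proxy converts to C
assert decrypt(sk_c, ct_c) == 123              # C decrypts B's data
```

The cited papers (Ateniese et al., Green-Ateniese, Canetti-Hohenberger) construct unidirectional and chosen-ciphertext-secure variants that avoid the need for both parties' secret keys.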
Applications
• Organization e-mail forwarding
• Secure distribution of authentic content
• Digital rights management
Signcryption with Proxy Re-encryption: Scheme
Reference
• Matthew Green, Giuseppe Ateniese. Identity-Based Proxy Re-encryption. Applied
Cryptography and Network Security, LNCS 4521, pp. 288-306, Springer-Verlag, 2007.
• Giuseppe Ateniese, Kevin Fu, Matthew Green, Susan Hohenberger. Improved Proxy
Re-encryption Schemes with Applications to Secure Distributed Storage. ACM
TISSEC, 9(1), pp. 1-30, Feb 2006.
• Toshihiko Matsuo. Proxy Re-encryption Systems for Identity-Based Encryption.
Pairing 2007, LNCS 4575, pp. 247-267.
• Ran Canetti, Susan Hohenberger. Chosen-Ciphertext Secure Proxy Re-encryption.
CCS 2007: Proceedings of the 14th ACM Conference on Computer and
Communications Security, pp. 185-194, 2007. Also available at
http://eprint.iacr.org/2007/171.pdf
• Tony Smith. DVD Jon: Buy DRM-less Tracks from Apple iTunes, March 18, 2005.
Available at http://www.theregister.co.uk/2005/03/18/itunes_pymusique.