
Online Social Network with
Flexible and Dynamic Privacy Policies
Fatemeh Raji
Department of Electrical and Computer Engineering, Isfahan University of Technology, Isfahan, Iran
[email protected]

Ali Miri
Department of Computer Science, Ryerson University, Toronto, Canada
[email protected]

Mohammad Davarpanah Jazi
Department of Electrical and Computer Engineering, Isfahan University of Technology, Isfahan, Iran
[email protected]

Behzad Malek
School of Information Technology and Engineering, University of Ottawa, Ottawa, Canada
[email protected]
Abstract—Online Social Networks (OSN) have become enormously popular in recent years. OSN enable people to connect with their friends by sharing details of their personal information. However, there are serious privacy problems that require mechanisms to alleviate them: 1) There should be a mechanism to protect user-generated data from OSN providers. 2) There should be a fully customizable and flexible access control mechanism enabling users to protect private information against attackers and unauthorized friends. 3) This access control system should also allow for efficient classification and revocation of OSN users.
To meet these requirements, this paper presents a privacy protection mechanism for OSN with flexible and dynamic privacy control. In the proposed approach, users keep control of their data without any help from the OSN provider or a trusted third party. The proposed approach employs a broadcast encryption (BE) scheme for data communication to the intended users in the network. The privacy and efficiency analyses show that the proposed architecture is a significant improvement over previous approaches to privacy protection of OSN users.
Index Terms—Online Social Network, Privacy, Revocation,
Flexible Access Control, Broadcast Encryption, Efficiency
I. INTRODUCTION
Online social networks (OSN) represent real life interaction
of online users and various computer networks. OSN such
as Facebook and MySpace have attracted millions of users,
many of whom have integrated these sites into their daily
routines [1]. OSN allow users in many different ways to
communicate with others, usually referred to as “friends”,
by sending messages and sharing photos, links, videos, etc.,
sharing more and more personal information. While OSN offer new opportunities for connecting people and enabling the exchange of all kinds of information, they place the privacy of their users at great risk.
Current OSN are administered by a single authority, and there is a strong motivation to monopolize the market. This is well illustrated by the virtual value of OSN providers: a network of 500 million active users has a market capitalization of $25 billion (US) [2]. As a result, OSN have become
storage houses of a tremendous amount of data in the form
of messages, photos, links, and personal conversations. This
concentration makes the online users vulnerable to large
scale privacy breaches from intentional and unintentional data
disclosures [3].
As social networks have grown in size, it has become
increasingly necessary to control which friends get to see what
personal information. Users do not wish to share everything
with everyone. Currently, most OSN offer a single binary method that defines the sharing relationship between online users. However, human relationships are much more complicated, and there is a need for greater flexibility. In reality, the social relationship between users varies, and it is best described by the interaction and the type of data being exchanged. It would therefore be useful to allow flexible, and even complex, definitions of the privacy relationships between users of OSN.
For example, a user Alice may want her best friends to see her party pictures, excluding her other college friends. This may be because Alice has different interaction habits with her best friends than with her college friends. As a result, users should always be able to determine what information to share and with whom it is shared. To reach this goal, users should be able to create different groups of friends while being capable of regulating independent privacy settings for each group. Moreover, OSN users should be provided with flexible access policies for friends. In real life, a user may have various relationships with the same friends depending on the occasion. Let us assume that Alice creates relations called "Best-Friend", including only Alice's best friends, and "College", including all her college friends. She may need to set a meaningful policy for sharing a photo with both her best friends and her college friends by creating (Best-Friend AND College). Furthermore, Alice may want to send a message to all friends in Best-Friend except Bob. This can be expressed as (Best-Friend EXCEPT Bob). In other words, users should be able to share their personal information with any arbitrary combination of friends over the defined relations.
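Such audience policies reduce to ordinary set operations over the relation membership lists. A minimal sketch, using the hypothetical relation names and members from the example above:

```python
# Hypothetical relation sets for Alice, as in the example above
relations = {
    "Best-Friend": {"Bob", "Carol"},
    "College": {"Bob", "Dave", "Eve"},
}

# (Best-Friend AND College): friends present in both relations
photo_audience = relations["Best-Friend"] & relations["College"]

# (Best-Friend EXCEPT Bob): all best friends except one member
message_audience = relations["Best-Friend"] - {"Bob"}
```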
The access policies defined in OSN should support not only flexible classification of friends but also their efficient revocation. Relationships change over time, and removing a friend from defined relations is very common in OSN. Thus, revoking friends should impose minimal cost on OSN users.
Contribution: In this paper, we propose a privacy approach for OSN with flexible and dynamic access policies. The proposed method gives OSN users full control over their information, so that they can interact with their friends with fewer violations of privacy. The proposed access control employs broadcast encryption (BE) to send a one-time relation key to friends. In the proposed approach, the user simply publishes the encrypted content to the storage servers. Only the intended friends, who can derive the decryption key, are able to decrypt and access the user's private data. In other words, the shared information is accessible to neither the OSN provider nor unauthorized friends. Finally, our scheme provides an efficient revocation mechanism without any need for re-keying of information.
Organization: In this work, we first review some of the related works in Section II. The main protocol is given in Section III. We then analyze the privacy of the proposed protocol in Section IV, while efficiency discussions of our scheme are given in Section V. Finally, conclusions are provided in Section VI.
II. RELATED WORK
A. Privacy Preservation in OSN
Privacy concerns in OSN have been repeatedly raised in the last few years. A few architectures found in the literature [4]–[11] provide users with the ability to protect their privacy. However, they have some shortcomings, especially in protecting users against the OSN providers. Table I summarizes the comparison of privacy protocols proposed for OSN.
TABLE I: Comparison of privacy approaches proposed for OSN

Methods         | OSN Provider                                 | Flexible Grouping | Dynamic Revocation
FlyByNight [4]  | Semi-trusted                                 | –                 | –
FaceCloak [5]   | Un-trusted                                   | –                 | –
NOYB [6]        | Un-trusted                                   | –                 | –
Persona [7]     | Un-trusted                                   | X                 | –
Lockr [8], [9]  | Trusted                                      | X                 | –
BE-Based [10]   | Trusted                                      | –                 | X
GCC [11]        | Semi-trusted and uses trusted third parties  | X                 | –
In FlyByNight [4], the user can securely communicate with each friend, or a group of friends, with the OSN provider's collaboration. FlyByNight employs public key cryptography for one-to-one communication and proxy cryptography for one-to-many communication. FlyByNight has the following deficiencies: 1) It relies on the trustworthiness of both the FlyByNight servers and the OSN. 2) The user can only communicate with one group at any given time. 3) If the user removes a friend from a defined group, the entire key setup for this group must be repeated from the beginning.
FaceCloak [5] protects user privacy by shielding the user's personal information from the OSN and unauthorized users. FaceCloak encrypts the user's profile items and stores them on a third-party server. It then replaces the user's profile items with appropriate fake data and sends them to the OSN server. Later, only the user's friends are able to replace the fake information with the real data, since they receive the necessary keys from the user. FaceCloak has the following main limitations: 1) The user has to define the same privacy setting for all her friends. 2) Adding or removing friends requires executing all FaceCloak setup steps anew.
NOYB [6] uses an encryption and encoding technique to provide privacy. It partitions private data into atoms and uses dictionaries stored on users' computers or with a trusted third party. Each private atom is then pseudorandomly substituted with a fake atom from the dictionaries. NOYB has the following limitations: 1) There is no option for flexible classification of a user's friends. 2) Key revocation is handled by issuing a new key.
Persona [7] takes advantage of attribute-based encryption (ABE) and public key infrastructure (PKI) to ensure data confidentiality and enable secure sharing of content with entire groups defined by the user. However, revoking a user from Persona requires executing all protocol steps anew.
Lockr [8], [9] assigns social attestations to each user and social access control lists (ACLs) to each piece of private data. To access private data, the user must present an attestation certifying the social relationship listed in the ACL. Lockr has the following deficiencies: 1) It trusts the OSN provider and a third-party server. 2) It does not propose an efficient solution for key revocation.
Reference [10] employs broadcast encryption (BE) for storing data, and public key encryption with keyword search (PEKS) for storing and retrieving it from a third-party server. This approach supports efficient revocation. However, it has the following important problems: 1) It trusts the OSN provider to set up the encryption domain. Moreover, the OSN provider is the credential authority of the system. 2) Each of a user's friends can be assigned only one relation. In other words, there is no intersection between the defined relations. Thus, if private data is shared with multiple groups, the user has to produce multiple copies of the encrypted data.
GCC [11] uses broadcast encryption to provide access control with efficient revocation. However, GCC has the following important problems: 1) The OSN users have to fully trust a set of synchronous users (kernel users). 2) The users share their private data and enforce access control with the help of the OSN provider. 3) There is no solution for flexible access control.
B. Broadcast Encryption
Broadcast encryption (BE) is a cryptographic solution that
involves sharing a cryptographic key between multiple (more
than two) members in a group. Members can arbitrarily
select any subset of members for sharing a cryptographic key.
Members leave and join the group depending on the credentials
they receive from the group manager at any time. We refer to the group manager as the admin, who is responsible for managing the group and distributing keys to group members.

Fig. 1: Data Sharing for the user Alice
There exist various broadcast encryption schemes that can
be used for secure group communication. For a comprehensive
survey of most recent group key multicast protocols, the reader
can refer to [13]. One of the key requirements of a secure broadcast encryption scheme is resistance against collusion by group members. In a collusion-resistant broadcast encryption scheme, excluded members cannot cooperate to obtain the current encrypted broadcast message or to compromise other members of the group.
Boneh, Gentry and Waters [12] have proposed a collusion-resistant scheme that has short ciphertexts, i.e., the size of the broadcast message is fixed and does not change with the size of the broadcast group. Their BE scheme comprises the following four algorithms:
Setup: The setup algorithm takes as input the number of users in the system and outputs a public key shared between all users and a secret key belonging only to the admin.
KeyGen: The key generation algorithm takes as input an index, which represents the identity of a given user. It outputs a private key for the corresponding user (member).
Encrypt: The encryption algorithm takes as input the subset of users who will access the message. Using the public key, it outputs a header and a message encryption key. The header contains data that helps the intended users recover the message encryption key, and the message encryption key is used to encrypt the broadcast message.
Decrypt: The decryption algorithm takes as input the user's index and corresponding secret key, the header for the given set, and the public key. If the user's index is included in the set of intended users, the decryption algorithm outputs the correct message encryption key.
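To make the four-algorithm interface concrete, the following toy sketch mimics its shape in Python. It is emphatically not the Boneh-Gentry-Waters construction: the real scheme uses bilinear pairings and achieves a constant-size header, while this stand-in wraps the one-time message key per recipient, so its header grows with the subset size. All names are illustrative.

```python
import hashlib
import os

def _kdf(*parts: bytes) -> bytes:
    """Toy key-derivation helper (SHA-256 over the concatenated parts)."""
    h = hashlib.sha256()
    for p in parts:
        h.update(p)
    return h.digest()

class ToyBroadcastEncryption:
    def setup(self, n: int) -> bytes:
        """Setup: outputs a shared public key; the secret key stays with the admin."""
        self.n = n
        self.secret_key = os.urandom(32)
        self.public_key = _kdf(b"pub", self.secret_key)
        return self.public_key

    def keygen(self, index: int) -> bytes:
        """KeyGen: derives the private key for the user at `index` (admin only)."""
        return _kdf(b"prv", self.secret_key, index.to_bytes(4, "big"))

    def encrypt(self, subset: set) -> tuple:
        """Encrypt: outputs a header plus a one-time message encryption key."""
        msg_key = os.urandom(32)
        header = {}
        for i in subset:
            # wrap the message key for each intended recipient
            mask = _kdf(b"wrap", self.keygen(i), self.public_key)
            header[i] = bytes(a ^ b for a, b in zip(msg_key, mask))
        return header, msg_key

def be_decrypt(index: int, private_key: bytes, header: dict, public_key: bytes):
    """Decrypt: recovers the message key only if `index` is in the intended set."""
    if index not in header:
        return None
    mask = _kdf(b"wrap", private_key, public_key)
    return bytes(a ^ b for a, b in zip(header[index], mask))
```

In this shape, the admin runs setup once, hands each new member the output of keygen, and publishes only the header alongside the symmetrically encrypted message.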
III. THE PROPOSED ARCHITECTURE
Our architecture is composed of two phases: setup and data sharing. In the following, we explain each phase with examples. Moreover, using the notation presented in Table II, we give a detailed description of our approach in Fig. 2.
A. Setup
The setup phase prepares the required auxiliary secret information for a user, such as Alice, and her friend, such as Bob.
User Setup: When Alice registers with the OSN, she generates a BE public/secret key pair using the setup algorithm of the BE scheme. Note that the security of the given BE system depends on the BE secret key, whereas the BE public key is published to all friends. In the proposed approach, users choose where to store their private data. Thus, at this stage Alice chooses a storage server for the future storage of her private data.
Friend Setup: Whenever Alice adds a new friend, Bob, she assigns a unique index and a corresponding private key to Bob using the key generation function of BE. Alice employs out-of-band communication to disseminate Bob's private key. This information is needed so that Bob can securely communicate with Alice later. It is worth mentioning that the generated private key never changes, even if Alice changes her relationships with Bob.
Relationship Setup: Alice categorizes Bob based on the relationship she has with Bob in the real world. For this purpose, Alice picks the index of the friend to be added to a defined relationship. Then, Alice writes the index set corresponding to the relationship to her storage server. Note that Alice can define different relationships for Bob; in other words, the defined relationship groups can have intersecting members. Grouping friends restricts the information available to them and makes sharing easy.
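The Friend Setup and Relationship Setup bookkeeping can be sketched as below; the class and method names are hypothetical, standing in for the Generate-Index and Define-Relation operations described above:

```python
import itertools

class UserSetupState:
    """Alice's local bookkeeping for Friend Setup and Relationship Setup."""

    def __init__(self):
        self._counter = itertools.count(1)
        self.friend_index = {}   # friend name -> unique, never-reused BE index
        self.relations = {}      # relation name -> index set (written to storage)

    def add_friend(self, name: str) -> int:
        # Friend Setup: assign a unique index (the BE private key for this
        # index would be generated and sent out of band)
        idx = next(self._counter)
        self.friend_index[name] = idx
        return idx

    def define_relation(self, rel_name: str, friend_names) -> set:
        # Relationship Setup: store the index set for the relation; relations
        # may intersect, since a friend can belong to several of them
        self.relations[rel_name] = {self.friend_index[f] for f in friend_names}
        return self.relations[rel_name]
```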
B. Data Sharing
In our method, users send their public data to the OSN server as usual. In contrast, users store their encrypted private data on their own storage server. This mechanism allows each user to retain control of the private data along with the access control policies. Consequently, our approach is user-centric, i.e., only the users decide where to store the private data and who can view it. Furthermore, only authorized friends can decrypt and read users' private data, as well as write private comments for users. An advantage of our method is that we do not use a permanent group key for each defined relation, as in traditional group communication techniques [13]. Instead, we employ a different key for each piece of private data. Consequently, the user does not need to securely store many keys, which matters in OSN where users have many friends and relationships. The process of data sharing is shown in Fig. 1.
Sending a message or sharing a photo/video with a subset of friends: If Alice wants to share private data with a subset of her friends, such as "College", Alice initially picks the indexes of her "College" friends. Then Alice executes the BE encryption function to get a pair consisting of an encryption key and header information. As discussed in Section II-B, the header contains data that lets the "College" friends recover the encryption key. Alice encrypts the data with the encryption key using a symmetric algorithm. Finally, Alice writes the resulting message with the auxiliary information (header and index set) to her storage server. It is worth mentioning that Alice does not broadcast the encrypted data; instead, she stores it on her own storage server.
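The sharing step is a standard hybrid encryption: a one-time key from BE encrypts the data symmetrically, and only the header and index set accompany the ciphertext. The sketch below stubs out the BE call and uses a toy SHA-256 keystream for the symmetric layer (illustration only, not a secure cipher); all helper names are hypothetical:

```python
import hashlib
import os

def be_encrypt_stub(index_set, be_public_key):
    """Stand-in for BE-Enc: returns (header, one-time encryption key)."""
    key = os.urandom(32)
    header = {"audience": sorted(index_set)}   # placeholder header contents
    return header, key

def sym_crypt(key: bytes, data: bytes) -> bytes:
    """Toy XOR stream cipher; applying it twice with the same key decrypts."""
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

def share_private_data(data, index_set, be_public_key, storage_server):
    """Alice's sharing step: BE-Enc, symmetric encryption, Write-StorageServer."""
    header, enc_key = be_encrypt_stub(index_set, be_public_key)
    ciphertext = sym_crypt(enc_key, data)
    # only the ciphertext plus auxiliary info reaches the storage server;
    # nothing is broadcast and the plaintext never leaves Alice's machine
    record = {"header": header, "index_set": sorted(index_set),
              "ciphertext": ciphertext}
    storage_server.append(record)
    return enc_key, record
```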
Reading a message or viewing a photo/video of a friend: Suppose one of Alice's friends in the "College" relation, such as Bob, wants to read Alice's shared data. First, Bob reads Alice's storage server to get the encrypted data. Since Bob's index is included in the index set, Bob can execute the BE decryption algorithm using his BE private key. The result is the message encryption key. Finally, Bob symmetrically decrypts the encrypted data with the resulting key.
Sending a comment on a shared message/photo/video: If Bob wants to send a comment on Alice's shared data, Bob executes the BE encryption algorithm. Note that the inputs of this algorithm are Alice's BE public key and the index set of Alice's data. The outputs are a pair consisting of header information and an encryption key. Bob encrypts the comment with the encryption key. Finally, Bob sends the result to Alice's storage server.
Adding/removing friends to/from a relation: The introduced approach gives users more flexibility in defining the audience sets of their private data, because users do not share any relation key with their friends. Instead, the encryption keys are obtained on the fly using the BE encryption algorithm. If Alice wants to add a new friend to her defined relations or remove a friend from them, only the index sets related to the relations change. The modified index sets are then used for later communication. It is worth mentioning that our approach respects group forward secrecy [13], i.e., once a group member is excluded from a group, that member is no longer able to decrypt future encrypted data. Furthermore, it complies with backward secrecy [13], i.e., a new member of a group must not be able to decrypt past broadcast data communicated in the group.
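Under this design, revocation is just an index-set update; no keys are re-issued. A minimal sketch (names hypothetical):

```python
def revoke_friend(relations: dict, rel_name: str, index: int) -> set:
    """Remove a friend's index from a relation. The friend's BE private key
    is untouched, but every future BE encryption uses the updated set, so the
    revoked index no longer appears in any new header (forward secrecy)."""
    relations[rel_name].discard(index)
    return relations[rel_name]

relations = {"College": {1, 2, 3}}
revoke_friend(relations, "College", 2)
# subsequent shares to "College" are encrypted for indexes {1, 3} only
```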
IV. PRIVACY ANALYSIS
In the previous section, we presented the design of a new infrastructure for developing OSN that guarantees user privacy. Users can choose the privacy level for each piece of shared content in the OSN. In order to protect the privacy of users, we hide users' data with a cryptographic technique, so that only the users decide which friends are allowed to access their private data. In the following, possible attacks originating from different entities of the OSN infrastructure are discussed.
A. OSN providers
As discussed in Section I, OSN providers are considered a potential threat to users' privacy and should not have access to users' private data.
In the proposed approach, users' private data is stored on third-party servers (storage servers) in encrypted form, beyond the reach of OSN providers. Thus, OSN providers do not have access to shared data unless the users decide to grant them access. In other words, the provided confidentiality guarantees that the shared data is protected via BE encryption, so only the intended friends can access it.
B. Storage Servers
The storage servers, which are responsible for storing the encrypted data, can also become the target of attackers. We assume the storage servers are honest but curious, in the sense that they correctly follow the protocol specification but may independently attempt to learn users' private information. However, since users' data is stored in encrypted form on the storage servers, only validated friends can recover the corresponding plaintext of the private data. As a result, nobody except the authorized friends can access users' data, even with possession of the encrypted shared data. It should be noted that users who are concerned about their privacy can acquire their own storage resources, especially as privacy awareness has increased and storage prices have dropped significantly.
C. Malicious Friends
Generally, in any sharing system, when users decide to grant access to some of their private information, this data could be used in a malicious way or disclosed without consent. In our scheme, Alice has no control over the BE private key once she sends it to Bob. If Bob sends his key to a user Ted, Ted will be able to access Alice's private data that was not shared with Ted. Since malicious friends have a social relationship with the user, the social relationship in the real world discourages this type of disclosure. However, if Alice detects Bob acting maliciously, Alice can easily remove Bob from her defined relationships, since the proposed architecture supports efficient revocation.

TABLE II: The notations used in the flowchart of the proposed approach

User: The current user of the BE domain
BEPubKi: The BE public key of user i
BESecKi: The BE secret key of user i
BEPrvKi: The BE private key of user i
BE-Setup(): Setting up the BE domain to get a public/secret key pair
BE-KeyGen(j): Generating a BE private key for friend j
BE-Enc(IndexSet,BEPubKi): Executing the encrypt function of BE with the audience index set and the BEPubK of user i
BE-Dec(IndexSet,Index,BEPrvKi,Header,BEPubKj): Executing the decrypt function of BE with the audience index set, index and BEPrvK of user i and BEPubK of user j
Specify-StorageServer(): Specifying a storage server and getting its address
SSAddri: The address of the storage server belonging to user i
Define-Relation(RelName): Defining a relation with name RelName and outputting the index set of its members
Generate-Index(): Generating a unique index for a friend
OutOfBandComm(FName,Data): Sending Data to the friend with name FName via a secure channel
ModifyRel(RelName): Modifying the membership of the relation RelName and outputting its new index set
Read-StorageServer(SSAddr,Data): Reading the storage server at address SSAddr to get Data
Write-StorageServer(SSAddr,Data): Writing Data to the storage server at address SSAddr
Sym-Enc(K,P): Encrypting the plain value P using a symmetric encryption function with key K
Sym-Dec(K,C): Decrypting the cipher value C using a symmetric encryption function with key K
Select-Friends(): Selecting the index set of the friends

Fig. 3: The scenario of sharing/reading in private OSN
V. EFFICIENCY ANALYSIS
In Section II, we compared the privacy architectures recently introduced for OSN users. As can be seen from Table I, among the various privacy-protecting frameworks, only Persona [7] allows users to apply flexible access control over who may view their data without any trust in the OSN provider. Therefore, in this section we compare the performance of our proposed protocol with Persona [7] in a similar scenario of data sharing and reading. Note that the setup overhead of both approaches is not considered here, since setup can be performed offline.
Suppose that in the private OSN, the user Alice has n friends who are categorized into N relations (N < n). Alice shares m pieces of private data with a defined relation, such as College, in the time interval [t1, t2]. Alice also performs r revocations (r ≤ m) for the College relation in this interval, since revocation frequently takes place in the OSN environment. Meanwhile, during the interval [t1, t2], Alice's friend Bob, who is a member of the College relation, accesses all of Alice's shared data. The described scenario is shown in Fig. 3.
Persona encrypts a user's data with attribute-based encryption (ABE) [14], which has four fundamental algorithms (setup, key generation, encryption and decryption). When Alice shares the first piece of private data at time t1, Persona generates a relation key for College and encrypts it using the ABE encryption function. Then, Persona encrypts the data with the generated key symmetrically. For the second sharing, one of the following two cases occurs:
1) If Alice has performed any revocation after the first sharing, Persona generates a new key for Bob (and all friends in the College relation) using the ABE key generation function. Moreover, Persona creates a new relation key for the College relation and disseminates the key using the ABE encryption function. Finally, Persona encrypts the data with the new relation key symmetrically.
2) However, if Alice has not carried out any revocation after the first sharing, Persona symmetrically encrypts Alice's data with the current relation key of College.
In Persona, this process is repeated for all m data sharings. Thus, Persona performs a total of (r + 1) ABE encryptions and r ABE key generations for the user Alice.
When Bob wants to read Alice's shared data at time t1, Persona executes the ABE decryption function to obtain the College relation key. Then, Persona decrypts Alice's shared data with the resulting key and gives the data to Bob. For the second data reading, one of the following two cases occurs:
1) If Alice has performed any revocation after the first sharing, Persona has to execute the ABE decryption function again to get the new relation key of College. After that, Persona decrypts the data symmetrically with the obtained key.
2) If Alice has not performed any revocation after the first sharing, Persona decrypts the data with the current relation key of College symmetrically.
This process continues for all m data readings. Consequently, Persona executes (r + 1) ABE decryptions for the user Bob. Note that Persona also employs symmetric cryptography to protect private data. However, since the cost of symmetric encryption algorithms is negligible, it can be ignored.
Fig. 2: The flowchart of the proposed approach
In the proposed method, the BE encryption function is executed for each data sharing. Even if Alice performs a revocation, it does not affect the computation cost of data sharing. Consequently, Alice executes a total of m BE encryptions in the interval [t1, t2]. Similarly, the user Bob executes a total of m BE decryptions.
The most expensive overhead of BE and ABE comes from pairings (Pairing), point multiplications (MUL) and point exponentiations (EXP). Each BE encryption [12] accounts for 3 MUL and n EXP in total. For ABE [14], the encryption function involves 1 Pairing, 2N + 2 EXP and 1 MUL operations, and the ABE key generation function consumes 2N + 2 EXP and N MUL operations in total.
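These counts can be tabulated directly. The sketch below assumes, per the discussion above, that each BE encryption costs 3 MUL and n EXP, and that on the sharing side Persona performs (r+1) ABE encryptions plus r ABE key generations; which scheme is cheaper then depends on the relative costs of Pairing, EXP and MUL, as plotted in Fig. 4:

```python
def persona_sharing_ops(r: int, N: int) -> dict:
    """Persona sharing cost: (r+1) ABE encryptions and r ABE key generations."""
    abe_enc = {"pairing": 1, "exp": 2 * N + 2, "mul": 1}
    abe_keygen = {"pairing": 0, "exp": 2 * N + 2, "mul": N}
    return {op: (r + 1) * abe_enc[op] + r * abe_keygen[op] for op in abe_enc}

def proposed_sharing_ops(m: int, n: int) -> dict:
    """Proposed scheme: one BE encryption (3 MUL, n EXP) per shared item."""
    return {"pairing": 0, "exp": m * n, "mul": 3 * m}

# Scenario parameters used in the analysis: n = 130 friends, N = 10 relations,
# m = 20 shared items, r = 2 revocations
persona = persona_sharing_ops(r=2, N=10)
proposed = proposed_sharing_ops(m=20, n=130)
```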
(a) Total time needed to share m number of private data
(b) Total time needed to read m number of private data
Fig. 4: Efficiency Analysis
Furthermore, for an average OSN user, n = 130 and N = 10 [15]. Assuming the size of each element in an elliptic curve group is 512 bits [16], we obtain the overhead depicted for the users Alice and Bob in Fig. 4a and Fig. 4b, respectively.
As can be seen from Fig. 4a, if Alice shares 20 pieces of private data in the time interval [t1, t2] and performs at least two revocations, the computation cost of our approach is lower than that of Persona. Moreover, according to Fig. 4b, if Alice performs at least one revocation, the computation cost of our approach is again lower than Persona's. It is worth mentioning that, given the dynamic characteristics of OSN, it is very common for revocations to occur frequently and for the defined memberships to change over time.
VI. CONCLUSIONS
There are many benefits to joining social networks, but online users require mechanisms restricting access to their personal data, not only from unauthorized users but also from OSN providers. In this paper, we have presented a privacy control model that is fully customizable to users' requirements. Users are allowed to categorize their friends into different relations and to share data with arbitrary groups of friends. Moreover, revocation is done very efficiently, without the need to renew the users' key agreements and overload the network with key updates. Our approach removes the need for users to securely protect the privacy settings. Groups are publicly known, but the messages for each group are encrypted, and the encryption keys are obtained on the fly using a broadcast encryption scheme. This is very useful for large networks, where a user has many friends and corresponding relationships.
REFERENCES
[1] Danah Boyd and Nicole Ellison, "Social Network Sites: Definition, History, and Scholarship", Journal of Computer-Mediated Communication, 2007.
[2] Facebook's Market Cap On SecondMarket Is Now $25 Billion (Bigger Than Yahoo), http://techcrunch.com/2010/06/04/facebook-secondmarket25-billion, accessed January 2011.
[3] Facebook statement of rights and responsibilities: http://www.facebook.com/terms.php, accessed January 2011.
[4] Matthew M. Lucas, Nikita Borisov: “FlyByNight: mitigating the
privacy risks of social networking”, 7th ACM workshop on
Privacy in the electronic society (WPES’08), 2008, pp. 1–8.
[5] Wanying Luo, Qi Xie, and Urs Hengartner: “FaceCloak: An
Architecture for User Privacy on Social Networking Sites”, IEEE
International Conference on Privacy, Security, Risk and Trust
(PASSAT-09), Vancouver, Canada, 2009, pp. 26–33.
[6] Saikat Guha, Kevin Tang, and Paul Francis: “NOYB: privacy in
online social networks”, first workshop on Online social networks
(WOSP’08), 2008, pp. 210–230.
[7] Randy Baden, Adam Bender, Neil Spring, Bobby Bhattacharjee, and Daniel Starin: "Persona: An Online Social Network with User-Defined Privacy", Annual Conference on Special Interest Group on Data Communications (ACM SIGCOMM), 2009, pp. 135–146.
[8] Amin Tootoonchian, Kiran K. Gollu, Stefan Saroiu, Yashar Ganjali, and Alec Wolman, “Lockr: Social Access Control for Web
2.0”, The First ACM SIGCOMM Workshop on Online Social
Networks (WOSN), Seattle, WA, 2008, pp. 43–48.
[9] Amin Tootoonchian, Stefan Saroiu, Yashar Ganjali, and Alec Wolman, "Lockr: Better Privacy for Social Networks", The 5th ACM International Conference on emerging Networking EXperiments and Technologies (CoNEXT), Rome, Italy, 2009, pp. 169–180.
[10] Jinyuan Sun, Xiaoyan Zhu, and Yuguang Fang, "A Privacy-Preserving Scheme for Online Social Networks with Efficient Revocation", The 29th Conference on Information Communications (INFOCOM), San Diego, CA, 2010, pp. 1–9.
[11] Yan Zhu, Zexing Hu, Huaixi Wang, Hongxin Hu and Gail-Joon Ahn, "A Collaborative Framework for Privacy Protection in Online Social Networks", The 6th International Conference on Collaborative Computing (CollaborateCom), Chicago, USA, 2010, pp. 40–45.
[12] Dan Boneh, Craig Gentry, and Brent Waters, "Collusion Resistant Broadcast Encryption with Short Ciphertexts and Private Keys", Advances in Cryptology: CRYPTO'05, 2005, pp. 258–275.
[13] Yacine Challal and Hamida Seba, “Group Key Management Protocols: A Novel Taxonomy”, International Journal of Information
Theory, Volume 2, Issue 2, pp. 105–118, 2005.
[14] John Bethencourt, Amit Sahai, and Brent Waters, "Ciphertext-Policy Attribute-Based Encryption", IEEE Symposium on Security and Privacy, Berkeley, California, 2007, pp. 321–334.
[15] Facebook Statistics: https://www.facebook.com/press/info.php?statistics,
accessed January 2010.
[16] Syh-Yuan Tan, Swee-Huay Heng, and Bok-Min Goi, “Java Implementation for Pairing-Based Cryptosystems.”, Lecture Notes in
Computer Science, Computational Science and Its Applications,
Volume 6019, pp. 188–198, 2010.