Entropic Measures of a Probability Sample Space
and Exponential Type (α, β) Entropy
Rajkumar Verma, Bhu Dev Sharma
Abstract—Entropy is a key measure in studies related to information theory and its many applications. Campbell was the first to recognize that the exponential of Shannon's entropy is just the size of the sample space when the distribution is uniform. This suggests studying the exponentials of Shannon's entropy, and of those generalized entropies that involve a logarithmic function, for a general probability distribution. In this paper, we introduce a measure of a sample space, called the 'entropic measure of a sample space', with respect to the underlying distribution. It is shown, in both the discrete and continuous cases, that this new measure depends on the parameters of the distribution on the sample space; the same sample space has different 'entropic measures' under different distributions defined on it. It is noted that Campbell's idea also applies to Rényi's parametric entropy of a given order. Since parameters play a role in providing suitable choices and extended applications, the paper studies parametric entropic measures of sample spaces as well. Exponential entropies related to Shannon's entropy and to those generalizations that involve logarithmic functions, i.e., are additive, have been studied for wider understanding and applications. We propose and study exponential entropies corresponding to the non-additive entropies of type (α, β), which include Havrda and Charvát's entropy as a special case.
Keywords—Sample space, probability distributions, Shannon's entropy, Rényi's entropy, non-additive entropies.
I. INTRODUCTION

Let \Delta_n = \{P = (p_1, \ldots, p_n) : p_i \geq 0, \ \sum_{i=1}^{n} p_i = 1\}, n \geq 2, be the set of n-complete probability distributions.
For any probability distribution P = (p_1, \ldots, p_n) \in \Delta_n, Shannon's entropy [9] is defined as

H(P) = -\sum_{i=1}^{n} p(x_i) \log p(x_i).    (1)
Various generalized entropies have been introduced in the literature, taking Shannon's entropy as basic, and have found applications in various disciplines such as economics, statistics, information processing and computing.
Generalizations of Shannon's entropy started with Rényi's entropy [8] of order α, given by

H_\alpha(P) = \frac{1}{1-\alpha} \log \sum_{i=1}^{n} (p(x_i))^{\alpha}, \quad \alpha \neq 1, \ \alpha > 0.    (2)
Campbell [1] studied exponentials of Shannon's and Rényi's entropies, given by

E(P) = e^{H(P)}    (3)

and

E_\alpha(P) = e^{H_\alpha(P)},    (4)
where H(P) and H_\alpha(P) represent, respectively, Shannon's and Rényi's entropies. It may also be mentioned that Koski and Persson [5] studied
E_{(\alpha,\beta)}(P) = e^{H_{\alpha,\beta}(P)},    (5)
the exponential of Kapur's entropy [4], given by

H_{\alpha,\beta}(P) = \frac{1}{\beta-\alpha} \log \frac{\sum_{i=1}^{n} (p(x_i))^{\alpha}}{\sum_{i=1}^{n} (p(x_i))^{\beta}}, \quad \alpha \neq \beta, \ \alpha, \beta > 0.    (6)
It is interesting to notice that, in the case of the discrete uniform distribution P \in \Delta_n, (3), (4) and (5) all reduce to n, just the 'size of the sample space of the distribution'.
In fact, when we consider the corresponding entropies in the continuous case, for the uniform distribution on a finite interval (a, b), the exponentials of these entropies equal the length (b - a) of the sample space.
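This observation is easy to check numerically. The following is a minimal sketch (in Python, assuming NumPy is available; the function names are our own, not from the literature) of (1)-(4) applied to a uniform distribution:

import numpy as np

def shannon_entropy(p):
    # Shannon entropy (1), natural logarithm; 0 log 0 is taken as 0
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def renyi_entropy(p, alpha):
    # Renyi entropy of order alpha (2), alpha > 0, alpha != 1
    p = np.asarray(p, dtype=float)
    return np.log(np.sum(p ** alpha)) / (1.0 - alpha)

n = 8
uniform = np.full(n, 1.0 / n)
print(np.exp(shannon_entropy(uniform)))     # 8.0: the size of the sample space
print(np.exp(renyi_entropy(uniform, 2.5)))  # 8.0 again, as noted above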
Measures, as we know, are important for concepts and their
applications. Here we raise a question: Is there a measure
of the sample space in terms of the probability distribution
defined over it? In this paper we introduce such a measure
and study it.
Further, it is well known that parameters in measures play a significant role in widening their applications and meaningfulness. In this paper we therefore also introduce a parametric measure for the general sample space of a probability distribution.
It may be recalled that Shannon's and Rényi's entropies are additive and involve a logarithmic function. The idea has been significantly advanced with non-additive measures by Havrda and Charvát [2] and Sharma and Taneja [10]. Exponential entropies corresponding to these extend this study. Since Sharma and Taneja's entropy contains the Havrda and Charvát entropy as a particular case, we take up, for study, the exponential 'type (α, β)' entropy corresponding to Sharma and Taneja's [10] entropy of type (α, β), which is a two-parametric non-additive generalization of Shannon's entropy given by:
H^{(\alpha,\beta)}(P) = \frac{\sum_{i=1}^{n} (p(x_i))^{\alpha} - \sum_{i=1}^{n} (p(x_i))^{\beta}}{2^{(1-\alpha)} - 2^{(1-\beta)}},    (7)

where α ≠ β, α, β > 0.

Rajkumar Verma, Department of Mathematics, Jaypee Institute of Information Technology, Noida, U.P.-201307, India (e-mail: [email protected]).
Bhu Dev Sharma, Department of Mathematics, Jaypee Institute of Information Technology, Noida, U.P.-201307, India (e-mail: [email protected]).
This paper is organized as follows: In Section II we define a measure, called the "entropic measure of the sample space of a probability distribution", in general, based on Shannon's entropy. In Section III we propose a generalized "order-α entropic measure" for the sample space of a probability distribution. In Section IV we introduce the exponential "type (α, β)" entropy and discuss its limiting and particular cases. In Section V we study some properties of the exponential "type (α, β)" entropy, and our conclusions are presented in Section VI.
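Before proceeding, a small computational sketch of (7) may help fix ideas. It is written in Python with NumPy; the function name is ours, and the base-2 logarithm in the check reflects the 2^{1-α} - 2^{1-β} normalization in (7):

import numpy as np

def type_alpha_beta_entropy(p, alpha, beta):
    # Sharma-Taneja type (alpha, beta) entropy (7); requires alpha != beta
    p = np.asarray(p, dtype=float)
    num = np.sum(p ** alpha) - np.sum(p ** beta)
    den = 2.0 ** (1.0 - alpha) - 2.0 ** (1.0 - beta)
    return num / den

p = np.array([0.5, 0.3, 0.2])
# As alpha -> beta -> 1 the measure approaches Shannon entropy in bits:
print(type_alpha_beta_entropy(p, 1.001, 0.999))  # ~ 1.4855
print(-np.sum(p * np.log2(p)))                   # ~ 1.4855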
II. ENTROPIC MEASURE OF THE SAMPLE SPACE OF A PROBABILITY DISTRIBUTION

We proceed with the following formal definition:

Definition 1 (Entropic Measure of a Sample Space): Given a probability distribution P on a sample space S, the entropic measure of S with respect to the distribution P is defined as

E(S, P) = e^{H(P)},    (8)

where H(P) is Shannon's entropy of the distribution.

Note 1: As pointed out earlier, E(S, P) gives the size/order of the sample space when the distribution is uniform on a finite sample space. The condition of 'finiteness' of the sample space, as illustrated by the examples below, may not be necessary. In fact an infinite sample space, depending upon the distribution defined on it, may have a finite entropic measure.

Note 2: The idea of the size of the sample space is expandable to multivariate cases also. If we consider a bivariate situation specified by two discrete random variables X and Y,

X = (x_1, \ldots, x_n), \quad Y = (y_1, \ldots, y_m),    (9)

their joint occurrences are given by the set of points

XY = (x_i y_j \,|\, i = 1, \ldots, n; \ j = 1, \ldots, m).    (10)

When both the distributions on X and Y are uniform, their resulting joint distribution is also uniform, U, and

E(XY, U) = e^{H(XY)} = nm,    (11)

which is also the size of the product sample space. This is also the case if X and Y were continuous random variables uniformly distributed in finite intervals (a, b) and (c, d), as can be quickly verified.

Note 3: The entropic measure E(S, P) that we define, as will be seen below, is finally a function of the parameters of the distribution, which makes it further interesting.

Examples of discrete distributions: The entropic measures of a sample space S under the following distributions are different, depending only on the parameters of the distributions:

(i) Geometric distribution [3]: For S = \{i \,|\, i = 0, 1, \ldots, \infty\},

p_i = q p^i, \quad p + q = 1,    (12)

then

H(P) = -\frac{1}{q} \left[ p \log p + q \log q \right].    (13)

From Definition 1, we get

E(S, G(P)) = \frac{1}{1-p} \left( \frac{1}{p} \right)^{p/(1-p)},    (14)

where G(P) stands for the geometric distribution; the measure is a function of the parameter p only.

(ii) Inverse λ-power distribution [3]: For S = \{i \,|\, i = 1, \ldots, \infty\} and λ > 1,

p_i = \frac{i^{-\lambda}}{\zeta(\lambda)},    (15)

then

H(P) = \log \zeta(\lambda) - \lambda \frac{\zeta'(\lambda)}{\zeta(\lambda)}.    (16)

From Definition 1, we get

E(S, \zeta(\lambda)) = \zeta(\lambda) \, e^{-\lambda \zeta'(\lambda)/\zeta(\lambda)},    (17)

where ζ(λ) represents the inverse λ-power distribution; the measure is a function of the parameter λ only.

Examples of continuous distributions:

(i) Two-sided power distribution [6]: For -\infty < a \leq m \leq b < \infty and n > 0,

f(x) = \begin{cases} \frac{n}{b-a} \left( \frac{x-a}{m-a} \right)^{n-1}, & \text{if } a < x \leq m, \\ \frac{n}{b-a} \left( \frac{b-x}{b-m} \right)^{n-1}, & \text{if } m \leq x \leq b, \end{cases}    (18)

then

H(P) = \log (b-a) - \log n + \frac{n-1}{n}.    (19)

From Definition 1, we get

E(S, Ts(P)) = \frac{(b-a) \exp\left( \frac{n-1}{n} \right)}{n},    (20)

where Ts(P) represents the two-sided power distribution; the measure is a function of the parameters a, b, n only.
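Before turning to the remaining continuous examples, the discrete result (14) is easily checked numerically; the sketch below (Python with NumPy, our own code) truncates the infinite sample space at a large index:

import numpy as np

p = 0.4
q = 1.0 - p
i = np.arange(0, 2000)
probs = q * p ** i                        # p_i = q p^i, as in (12)
H = -np.sum(probs * np.log(probs))        # truncated Shannon entropy (1)
print(np.exp(H))                          # ~ 3.0700, the entropic measure
print((1.0 / q) * (1.0 / p) ** (p / q))   # closed form (14): same value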
(ii) Two-piece normal distribution [6]: For -\infty < \mu < \infty, \sigma_1 > 0 and \sigma_2 > 0,

f(x) = \begin{cases} \sqrt{\frac{2}{\pi}} \frac{1}{\sigma_1+\sigma_2} \exp\left\{ -\frac{(x-\mu)^2}{2\sigma_1^2} \right\}, & \text{if } x \leq \mu, \\ \sqrt{\frac{2}{\pi}} \frac{1}{\sigma_1+\sigma_2} \exp\left\{ -\frac{(x-\mu)^2}{2\sigma_2^2} \right\}, & \text{if } x > \mu, \end{cases}    (21)

then

H(P) = \frac{1}{2} - \log\left( \sqrt{\frac{2}{\pi}} \, \frac{1}{\sigma_1+\sigma_2} \right).    (22)

From Definition 1, we get

E(S, Tp(P)) = \sqrt{\frac{\pi}{2}} \, (\sigma_1+\sigma_2) \exp\left( \frac{1}{2} \right),    (23)

where Tp(P) represents the two-piece normal distribution; the measure is a function of the parameters σ_1, σ_2 only.
(iii) Exponential distribution [3]: For 0 \leq x < \infty and λ > 0,

f(x) = \begin{cases} \lambda e^{-\lambda x}, & \text{if } x \geq 0, \\ 0, & \text{if } x < 0, \end{cases}    (24)
then

H(P) = 1 - \log \lambda.    (25)

From Definition 1, we get

E(S, Ed(P)) = \frac{e}{\lambda},    (26)

where Ed(P) represents the exponential distribution; the measure is a function of the parameter λ only.
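As a numerical check on (25) and (26), the differential entropy of the exponential density can be integrated directly; a minimal sketch (Python with NumPy, our own code):

import numpy as np

lam = 2.0
dx = 1e-4
x = np.arange(dx, 40.0, dx)          # the tail beyond 40 is negligible here
f = lam * np.exp(-lam * x)
H = -np.sum(f * np.log(f)) * dx      # Riemann sum for -integral of f log f
print(H, 1.0 - np.log(lam))          # ~ 0.3069 for both, matching (25)
print(np.exp(H), np.e / lam)         # ~ 1.3591 for both, matching (26)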
(iv) Asymmetric Laplace distribution [6]: For -\infty < \theta < \infty and \phi_1, \phi_2 > 0,

f(x) = \begin{cases} \frac{1}{2\phi_1} \exp\left( -\frac{|x-\theta|}{\phi_1} \right), & \text{if } x \geq \theta, \\ \frac{1}{2\phi_2} \exp\left( -\frac{|x-\theta|}{\phi_2} \right), & \text{if } x < \theta, \end{cases}    (27)

then

H(P) = 1 + \log 2 + \frac{\log \phi_1 + \log \phi_2}{2}.    (28)

From Definition 1, we get

E(S, Al(P)) = 2e \sqrt{\phi_1 \phi_2},    (29)

where Al(P) represents the asymmetric Laplace distribution; the measure is a function of the parameters φ_1, φ_2 only.
(v) Generalized Pareto distribution [6]: For x > 0 (if c \leq 0 and k > 0) or for 0 < x < k/c (if c > 0 and k > 0),

f(x) = \frac{1}{k} \left( 1 - \frac{cx}{k} \right)^{\frac{1}{c}-1},    (30)

then

H(P) = 1 - c + \log k.    (31)

From Definition 1, we get

E(S, Gpd(P)) = k \exp(1-c),    (32)

where Gpd(P) represents the generalized Pareto distribution; the measure is a function of the parameters k, c only.

(vi) Gaussian distribution [7]: For -\infty < x < \infty, -\infty < \mu < \infty and \sigma^2 > 0,

f(x) = \frac{1}{\sqrt{2\pi\sigma^2}} \, e^{-\frac{(x-\mu)^2}{2\sigma^2}},    (33)

then

H(P) = \frac{1}{2} \log (2\pi e \sigma^2).    (34)

From Definition 1, we get

E(S, Gd(P)) = \sigma \sqrt{2\pi e},    (35)

where Gd(P) represents the Gaussian distribution; the measure is a function of the parameter σ only.

So far we have studied a measure that contains no extraneous parameter. In the next section, we propose a generalized order-α entropic measure of a sample space.

III. GENERALIZED ORDER-α ENTROPIC MEASURE OF A SAMPLE SPACE

As mentioned earlier, parametric generalizations, in particular Rényi's order-α entropy, have been studied with quite some interest. In this section, we introduce the "order-α entropic measure" of a sample space in respect of an underlying probability distribution.

Definition 2 (Order-α Entropic Measure of a Sample Space): Given a probability distribution P on a sample space S, the order-α entropic measure of S is defined as

E_\alpha(S, P) = e^{H_\alpha(P)},    (36)

where H_α(P) is Rényi's entropy of the distribution P.

Examples of the order-α entropic measure for discrete distributions: In these examples, we take the various distributions considered earlier.

(i) Geometric distribution [6]:

H_\alpha(X) = \frac{1}{1-\alpha} \log \frac{(1-p)^{\alpha}}{1-p^{\alpha}}.    (37)

From Definition 2, we get

E_\alpha(S, G(P)) = \left( \frac{(1-p)^{\alpha}}{1-p^{\alpha}} \right)^{\frac{1}{1-\alpha}},    (38)

where G(P) stands for the geometric distribution; the measure is a function of the parameters α, p only.

(ii) Inverse λ-power distribution [6]:

H_\alpha(P) = \frac{1}{1-\alpha} \log \frac{\zeta(\lambda\alpha)}{\zeta(\lambda)^{\alpha}}.    (39)

From Definition 2, we get

E_\alpha(S, \zeta(\lambda)) = \left( \frac{\zeta(\lambda\alpha)}{\zeta(\lambda)^{\alpha}} \right)^{\frac{1}{1-\alpha}},    (40)

where ζ(λ) represents the inverse λ-power distribution; the measure is a function of the parameters α, λ only.

Examples of the order-α entropic measure for continuous distributions:

(i) Two-sided power distribution [6]:

H_\alpha(P) = \log (b-a) + \frac{\alpha \log n - \log (\alpha n - \alpha + 1)}{1-\alpha}.    (41)

From Definition 2, we get

E_\alpha(S, Ts(P)) = (b-a) \, n^{\frac{\alpha}{1-\alpha}} \, (\alpha n - \alpha + 1)^{-\frac{1}{1-\alpha}},    (42)

where Ts(P) represents the two-sided power distribution; the measure is a function of the parameters a, b, n, α only.
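A quick numerical check of (37) and (38), before the remaining continuous cases (Python with NumPy, our own code; the infinite sum is truncated at a large index):

import numpy as np

p, alpha = 0.4, 1.7
i = np.arange(0, 2000)
probs = (1.0 - p) * p ** i                                # geometric, q = 1 - p
H_alpha = np.log(np.sum(probs ** alpha)) / (1.0 - alpha)  # Renyi entropy (2)
print(np.exp(H_alpha))                                    # order-alpha measure
print(((1.0 - p) ** alpha / (1.0 - p ** alpha)) ** (1.0 / (1.0 - alpha)))  # (38): same value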
(ii) Two-piece normal distribution [6]:

H_\alpha(P) = -\log\left( \sqrt{\frac{2}{\pi}} \, \frac{1}{\sigma_1+\sigma_2} \right) - \frac{\log \alpha}{2(1-\alpha)}.    (43)

From Definition 2, we get

E_\alpha(S, Tp(P)) = \sqrt{\frac{\pi}{2}} \, (\sigma_1+\sigma_2) \, \alpha^{\frac{1}{2(\alpha-1)}},    (44)

where Tp(P) represents the two-piece normal distribution; the measure is a function of the parameters α, σ_1, σ_2 only.

(iii) Exponential distribution [7]:

H_\alpha(P) = \frac{\log \alpha}{\alpha-1} - \log \lambda.    (45)

From Definition 2, we get

E_\alpha(S, Ed(P)) = \frac{\alpha^{\frac{1}{\alpha-1}}}{\lambda},    (46)

where Ed(P) represents the exponential distribution; the measure is a function of the parameters α, λ only.

(iv) Asymmetric Laplace distribution [6]:

H_\alpha(P) = \frac{1}{1-\alpha} \log \frac{\phi_1^{1-\alpha} + \phi_2^{1-\alpha}}{\alpha \, 2^{\alpha}}.    (47)

From Definition 2, we get

E_\alpha(S, Al(P)) = \left( \frac{\phi_1^{1-\alpha} + \phi_2^{1-\alpha}}{\alpha \, 2^{\alpha}} \right)^{\frac{1}{1-\alpha}},    (48)

where Al(P) represents the asymmetric Laplace distribution; the measure is a function of the parameters α, φ_1, φ_2 only.

(v) Generalized Pareto distribution [6]:

H_\alpha(P) = \log k - \frac{\log (\alpha - \alpha c + c)}{1-\alpha}.    (49)

From Definition 2, we get

E_\alpha(S, Gpd(P)) = k \, (\alpha - \alpha c + c)^{\frac{1}{\alpha-1}},    (50)

where Gpd(P) represents the generalized Pareto distribution; the measure is a function of the parameters k, α, c only.

(vi) Gaussian distribution [7]:

H_\alpha(P) = \frac{1}{2} \log (2\pi\sigma^2) - \frac{\log \alpha}{2(1-\alpha)}.    (51)

From Definition 2, we get

E_\alpha(S, Gd(P)) = \sigma \sqrt{2\pi} \, \alpha^{\frac{1}{2(\alpha-1)}},    (52)

where Gd(P) represents the Gaussian distribution; the measure is a function of the parameters σ, α only.

In the next section, we first propose an exponential two-parametric generalization of Shannon's entropy, which we refer to as the exponential "type (α, β)" entropy, and discuss its limiting and particular cases.

IV. EXPONENTIAL "TYPE (α, β)" ENTROPY

Corresponding to the Sharma and Taneja "type (α, β)" entropy, the exponential "type (α, β)" entropy is defined as follows:

Definition 3: The exponential type (α, β) entropy of a discrete distribution P is given by

E^{(\alpha,\beta)}(P) = \frac{e^{\sum_{i=1}^{n} (p_i)^{\alpha} - \sum_{i=1}^{n} (p_i)^{\beta}} - 1}{e^{(2^{1-\alpha} - 2^{1-\beta})} - 1},    (53)

where α ≠ β, α, β > 0.

This has interesting particular cases that we briefly mention below.

Limiting and particular cases:
(a) When α → β, measure (53) reduces to

H^{\beta}(P) = -2^{\beta-1} \sum_{i=1}^{n} (p(x_i))^{\beta} \log p(x_i).    (54)

This measure was given by Sharma and Taneja [10].
(b) When α → β and, further, β → 1, measure (53) reduces to Shannon's entropy.
(c) When α = 1, measure (53) reduces to

E^{1,\beta}(P) = \frac{e^{1 - \sum_{i=1}^{n} (p_i)^{\beta}} - 1}{e^{1 - 2^{1-\beta}} - 1}, \quad \beta \neq 1, \ \beta > 0.    (55)

This can be considered as the exponential "type-β" entropy corresponding to the Havrda and Charvát entropy [2] of type β, given by

H^{\beta}(P) = \frac{\sum_{i=1}^{n} (p_i)^{\beta} - 1}{2^{1-\beta} - 1}.    (56)

In the next section, we study some properties of E^{(α,β)}(P), the exponential "type (α, β)" entropy.
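A small computational sketch of (53) and its particular case (55) follows (Python with NumPy; the function name is ours):

import numpy as np

def exp_type_entropy(p, alpha, beta):
    # exponential type (alpha, beta) entropy (53); requires alpha != beta
    p = np.asarray(p, dtype=float)
    num = np.expm1(np.sum(p ** alpha) - np.sum(p ** beta))
    den = np.expm1(2.0 ** (1.0 - alpha) - 2.0 ** (1.0 - beta))
    return num / den

p = np.array([0.5, 0.3, 0.2])
beta = 2.0
print(exp_type_entropy(p, 1.0, beta))                    # case (55) with alpha = 1
print(np.expm1(1.0 - np.sum(p ** beta)) / np.expm1(1.0 - 2.0 ** (1.0 - beta)))  # same value
print(exp_type_entropy(np.array([0.5, 0.5]), 1.7, 0.4))  # -> 1.0

The last line anticipates the normalization property established in the next section.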
V. PROPERTIES OF THE EXPONENTIAL "TYPE (α, β)" ENTROPY
The quantity introduced in the preceding section is an 'entropy'. Such a name is justified if it shares some major properties with Shannon's and other entropies in the literature. We study some such properties in the next three theorems.
Theorem 1: The measure of information E^{(\alpha,\beta)}(P), P \in \Delta_n, where \Delta_n = \{P = (p_1, \ldots, p_n) : p_i \geq 0, \ \sum_{i=1}^{n} p_i = 1\}, has the following properties:
1) Symmetry: E^{(\alpha,\beta)}(P) = E^{(\alpha,\beta)}(p_1, \ldots, p_n) is a symmetric function of (p_1, \ldots, p_n).
2) Normalized: E^{(\alpha,\beta)}(1/2, 1/2) = 1.
3) Expansible: E^{(\alpha,\beta)}(p_1, \ldots, p_n, 0) = E^{(\alpha,\beta)}(p_1, \ldots, p_n).
4) Decisive: E^{(\alpha,\beta)}(1, 0) = E^{(\alpha,\beta)}(0, 1) = 0.
5) Continuity: E^{(\alpha,\beta)}(p_1, \ldots, p_n) is continuous in the region p_i \geq 0 for all α, β > 0.
Proof: (1) to (4): These properties are obvious and can be verified easily.
(5): We know that \sum_{i=1}^{n} (p_i)^{\alpha} - \sum_{i=1}^{n} (p_i)^{\beta} is continuous in the region p_i \geq 0 for all α, β > 0. Hence E^{(\alpha,\beta)}(P) is also continuous in the region p_i \geq 0 for all α, β > 0.
Theorem 2: The measure E^{(\alpha,\beta)}(P) is non-negative for all α, β > 0.

Proof: We consider the following cases.

Case (i): When α > 1 and 0 < β < 1,

e^{\sum_{i=1}^{n} (p_i)^{\alpha} - \sum_{i=1}^{n} (p_i)^{\beta}} - 1 < 0 \quad \text{and} \quad e^{(2^{1-\alpha} - 2^{1-\beta})} - 1 < 0,

and we get

E^{(\alpha,\beta)}(P) > 0.    (57)

Case (ii): When β > 1 and 0 < α < 1,

e^{\sum_{i=1}^{n} (p_i)^{\alpha} - \sum_{i=1}^{n} (p_i)^{\beta}} - 1 > 0 \quad \text{and} \quad e^{(2^{1-\alpha} - 2^{1-\beta})} - 1 > 0,

and we get

E^{(\alpha,\beta)}(P) > 0.    (58)

Case (iii): When α > 1 and β > 1:
(a) Let α > β > 1; then

e^{\sum_{i=1}^{n} (p_i)^{\alpha} - \sum_{i=1}^{n} (p_i)^{\beta}} - 1 < 0 \quad \text{and} \quad e^{(2^{1-\alpha} - 2^{1-\beta})} - 1 < 0,

and we get

E^{(\alpha,\beta)}(P) > 0.    (59)

(b) Let β > α > 1; then

e^{\sum_{i=1}^{n} (p_i)^{\alpha} - \sum_{i=1}^{n} (p_i)^{\beta}} - 1 > 0 \quad \text{and} \quad e^{(2^{1-\alpha} - 2^{1-\beta})} - 1 > 0,

and we get

E^{(\alpha,\beta)}(P) > 0.    (60)

Case (iv): When α < 1 and β < 1:
(a) Let α < β < 1; then

e^{\sum_{i=1}^{n} (p_i)^{\alpha} - \sum_{i=1}^{n} (p_i)^{\beta}} - 1 > 0 \quad \text{and} \quad e^{(2^{1-\alpha} - 2^{1-\beta})} - 1 > 0,

and we get

E^{(\alpha,\beta)}(P) > 0.    (61)

(b) Let β < α < 1; then

e^{\sum_{i=1}^{n} (p_i)^{\alpha} - \sum_{i=1}^{n} (p_i)^{\beta}} - 1 < 0 \quad \text{and} \quad e^{(2^{1-\alpha} - 2^{1-\beta})} - 1 < 0,

and we get

E^{(\alpha,\beta)}(P) > 0.    (62)

From (57)-(62), we conclude that E^{(\alpha,\beta)}(P) > 0 for all α, β > 0.
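Theorem 2 can also be probed empirically; the following sketch (Python with NumPy, our own code) samples random distributions and parameter pairs:

import numpy as np

rng = np.random.default_rng(0)

def exp_type_entropy(p, alpha, beta):
    # exponential type (alpha, beta) entropy (53); requires alpha != beta
    num = np.expm1(np.sum(p ** alpha) - np.sum(p ** beta))
    den = np.expm1(2.0 ** (1.0 - alpha) - 2.0 ** (1.0 - beta))
    return num / den

for _ in range(10000):
    p = rng.dirichlet(np.ones(5))           # random interior point of Delta_5
    alpha, beta = rng.uniform(0.05, 4.0, size=2)
    if abs(alpha - beta) > 1e-3:
        assert exp_type_entropy(p, alpha, beta) > 0.0
print("no sign violations found")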
To prove the next theorem, we shall use the following definition of a concave function.

Definition 4 (Concave Function): A function f(\cdot) over the points of a convex set R is concave if for all r_1, r_2 \in R and \mu \in (0, 1),

\mu f(r_1) + (1-\mu) f(r_2) \leq f(\mu r_1 + (1-\mu) r_2).    (63)

The function f(\cdot) is convex if the above inequality holds with \geq in place of \leq.

Theorem 3: The measure E^{(\alpha,\beta)}(P) is a concave function of the probability distribution P = (p_1, \ldots, p_n), p_i \geq 0, \sum_{i=1}^{n} p_i = 1, when one of the parameters α, β (> 0) is greater than unity and the other is less than or equal to unity, i.e., either α > 1 and 0 < β ≤ 1, or β > 1 and 0 < α ≤ 1.

Proof: Associated with the random variable X = (x_1, x_2, \ldots, x_n), let us consider r distributions

P_k(X) = \{p_k(x_1), \ldots, p_k(x_n)\},    (64)

where \sum_{i=1}^{n} p_k(x_i) = 1, k = 1, 2, \ldots, r. Next let there be r numbers (a_1, a_2, \ldots, a_r) such that a_k \geq 0, \sum_{k=1}^{r} a_k = 1, and define

P_0(X) = \{p_0(x_1), \ldots, p_0(x_n)\},    (65)

where

p_0(x_i) = \sum_{k=1}^{r} a_k p_k(x_i), \quad i = 1, 2, \ldots, n.

Obviously \sum_{i=1}^{n} p_0(x_i) = 1, and thus P_0(X) is a bona fide distribution of X.

If α > 1 and 0 < β ≤ 1, then x^{\alpha} is convex and x^{\beta} is concave in x, so that

\sum_{i=1}^{n} (p_0(x_i))^{\alpha} - \sum_{i=1}^{n} (p_0(x_i))^{\beta} \leq \sum_{k=1}^{r} a_k \left[ \sum_{i=1}^{n} (p_k(x_i))^{\alpha} - \sum_{i=1}^{n} (p_k(x_i))^{\beta} \right],

while the denominator e^{(2^{1-\alpha} - 2^{1-\beta})} - 1 in (53) is negative. Hence we have

\sum_{k=1}^{r} a_k E^{(\alpha,\beta)}(P_k) - E^{(\alpha,\beta)}(P_0)

\leq \sum_{k=1}^{r} a_k E^{(\alpha,\beta)}(P_k) - \frac{e^{\sum_{k=1}^{r} a_k \left[ \sum_{i=1}^{n} (p_k(x_i))^{\alpha} - \sum_{i=1}^{n} (p_k(x_i))^{\beta} \right]} - 1}{e^{(2^{1-\alpha} - 2^{1-\beta})} - 1} \leq 0,

the last step following from the convexity of the exponential function and, again, the negativity of the denominator; i.e.,

\sum_{k=1}^{r} a_k E^{(\alpha,\beta)}(P_k) \leq E^{(\alpha,\beta)}(P_0),    (66)

which is the concavity condition (63). By symmetry in α and β, the above result is also true for β > 1 and 0 < α ≤ 1.
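The concavity asserted in Theorem 3 can likewise be checked numerically on random mixtures (Python with NumPy, our own code; α > 1 and 0 < β ≤ 1 as in the theorem):

import numpy as np

rng = np.random.default_rng(1)

def exp_type_entropy(p, alpha, beta):
    # exponential type (alpha, beta) entropy (53); requires alpha != beta
    num = np.expm1(np.sum(p ** alpha) - np.sum(p ** beta))
    den = np.expm1(2.0 ** (1.0 - alpha) - 2.0 ** (1.0 - beta))
    return num / den

alpha, beta = 2.5, 0.7
for _ in range(10000):
    p1 = rng.dirichlet(np.ones(4))
    p2 = rng.dirichlet(np.ones(4))
    a = rng.uniform()
    lhs = a * exp_type_entropy(p1, alpha, beta) \
        + (1.0 - a) * exp_type_entropy(p2, alpha, beta)
    assert lhs <= exp_type_entropy(a * p1 + (1.0 - a) * p2, alpha, beta) + 1e-9
print("concavity inequality (66) held in all trials")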
VI. CONCLUSION
In this paper, for the first time, the concept of a measure of a sample space with an associated probability distribution has been introduced. This idea has considerable potential for further study and exploration, both in statistics and in information-theoretic applications. Using parametric generalizations provides further desirable flexibility.
ACKNOWLEDGMENT
The authors wish to express their sincere thanks to
the referees for the valuable suggestions which helped in
improving the presentation of the paper.
REFERENCES

[1] L. L. Campbell, "Exponential entropy as a measure of extent of a distribution," Z. Wahrscheinlichkeitstheorie verw. Geb., vol. 5, pp. 217-225, 1966.
[2] J. Havrda and F. Charvát, "Quantification method of classification processes: concept of structural a-entropy," Kybernetika, vol. 3, pp. 30-35, 1967.
[3] S. W. Golomb, "The information generating function of a probability distribution," IEEE Transactions on Information Theory, vol. 12, pp. 75-77, 1966.
[4] J. N. Kapur, Measures of Information and Their Applications, 1st ed., New Delhi: Wiley Eastern Limited, 1994.
[5] T. Koski and L. E. Persson, "Some properties of generalized exponential entropies with applications to data compression," Information Sciences, vol. 62, pp. 103-132, 1992.
[6] S. Nadarajah and K. Zografos, "Formulas for Rényi information and related measures for univariate distributions," Information Sciences, vol. 155, pp. 119-138, 2003.
[7] F. Nielsen and R. Nock, "On Rényi and Tsallis entropies and divergences for exponential families," 2011. http://arxiv.org/abs/1105.3259v1.
[8] A. Rényi, "On measures of entropy and information," in Proceedings of the Fourth Berkeley Symposium on Mathematical Statistics and Probability, 1961, pp. 547-561.
[9] C. E. Shannon, "A mathematical theory of communication," Bell System Technical Journal, vol. 27, pp. 379-423, 623-656, 1948.
[10] B. D. Sharma and I. J. Taneja, "Three generalized-additive measures of entropy," Elektron. Informationsverarb. Kybernetik (E.I.K.), vol. 13, pp. 419-433, 1977.
Rajkumar Verma is a Ph.D. student in the Department of Mathematics, Jaypee Institute of Information Technology (Deemed University), Noida, India. He received the B.Sc. and M.Sc. degrees from Chaudhary Charan Singh University (formerly Meerut University), Meerut, in 2004 and 2006 respectively. His research interests include information theory, fuzzy information measures, pattern recognition, multicriteria decision-making analysis, and soft set theory.
Bhu Dev Sharma, currently Professor of Mathematics at Jaypee Institute of Information Technology (Deemed University), Noida, India, received his Ph.D. from the University of Delhi. He was a faculty member at the University of Delhi (1966-1979) and earlier at other places. He spent about 29 years abroad: 20 years (1988-2007) in the USA and 9 years (1979-1988) in the West Indies, as Professor of Mathematics and, in between, as chair of the department.
Professor Sharma has published over 120 research papers in areas covering coding theory, information theory, functional equations and statistics. He has successfully guided 23 persons for the Ph.D. in India and abroad. He has also authored/co-authored 23 published books in mathematics and some others on Indian studies. He is the Chief Editor of the Journal of Combinatorics, Information and System Sciences (JCISS) (1976-present), and is on the editorial boards of several other journals.
He is a member of several professional academic organizations: Past President of the Forum for Interdisciplinary Mathematics (FIM), Vice President of the Calcutta Mathematical Society and the Ramanujan Mathematical Society, and Vice President of the Academy of Discrete Mathematics and Applications (ADMA). After returning to India, he has been working on establishing a Center of Excellence in Interdisciplinary Mathematics (CEIM) in Delhi, India.
Professor Sharma has received several awards and recognitions in India and abroad. He is widely traveled, including to Germany and several other countries of Europe, Brazil, Russia, and China. He has organized several international conferences on mathematics and statistics in India, the USA, Europe, Australia and China. Currently five persons are registered for the Ph.D. under his supervision.