Physica A 335 (2004) 94–106
On the stability of analytic entropic forms
Evaldo M.F. Curado a, Fernando D. Nobre a,b,∗
a Centro Brasileiro de Pesquisas Físicas, Rua Xavier Sigaud 150, Rio de Janeiro - RJ 22290-180, Brazil
b Departamento de Física Teórica e Experimental, Universidade Federal do Rio Grande do Norte, Campus Universitário – Caixa Postal 1641, Natal - Rio Grande do Norte 59072-970, Brazil
Received 6 October 2003; received in revised form 5 December 2003
Abstract
The stability against small perturbations of the probability distributions (also called experimental robustness) of analytic entropic forms is analyzed. Entropies S[p], associated with a given set of probabilities {p_i}, that can be written in the simple form $S[p]=\sum_{i=1}^{W} r(p_i)$ are shown to be robust if r(p_i) is an analytic function of the p_i's. The same property holds for entropies Φ(S[p]) that are monotonic and analytic functions of S[p]. The Tsallis entropy S_q[p] falls in the first class of entropies whenever the entropic index q is an integer greater than 1. A new kind of entropy that follows such requirements is discussed.
© 2003 Elsevier B.V. All rights reserved.
PACS: 02.50.−r; 05.20.−y; 05.70.Fh
Keywords: Entropy stability; Nonextensive statistical mechanics
1. Introduction
Entropy is one of the most important concepts of physics, since it is responsible for
a connection between the microscopic world (appropriate for the application of statistical mechanics) and the macroscopic world (for which thermodynamics applies). An
appropriate definition of entropy should satisfy certain requirements (e.g., a well-defined concavity and a maximum at equiprobability) in order to describe nature properly. The
concept of entropy also plays an important role in information theory, but in this case,
the requisites are less restrictive. The standard statistical-mechanics formalism is based
on the Boltzmann–Gibbs (BG) entropy, which for a given set of probabilities {p_i} is defined as
$$S_{BG}[p] = -\sum_{i=1}^{W} p_i \ln p_i \,, \qquad (1.1)$$
where W denotes the total number of microscopic states of the system under consideration. Herein, we work with dimensionless entropies, which for the case of the BG
entropy means kB = 1. For a long time, BG statistical mechanics was considered a universal theory, due to its great success in describing many systems in nature. However, it is nowadays becoming evident that the BG formalism has its own particular range of applicability, mostly to systems characterized by short-range interactions and short-time memories [1–4]. Within this formalism, the thermodynamical
quantities that increase with the size of the system (known as extensive quantities) do so in a well-defined manner, i.e., they can only increase linearly with the size. More recently, it has become evident that, for systems characterized by long-range interactions and/or long-time memories, the BG formalism may present inconsistencies [1–4]. If
long-range interactions are present, one may have situations where each microscopic
constituent of the system interacts with all the others, leading to some thermodynamic quantities that increase more than linearly with the number of constituents N, implying a nonextensive behavior.
In many of these cases, the most successful formalism so far is the nonextensive statistical mechanics based on Tsallis's entropy [5]
$$S_q[p] = \frac{1}{q-1}\left[1 - \sum_{i=1}^{W} (p_i)^q\right] \qquad (q \in \mathbb{R}) \,, \qquad (1.2)$$
which generalizes Eq. (1.1) in such a way that, in the limit where the entropic index q → 1, one recovers the BG entropy. It should be stressed that Tsallis's proposal remains the only simple entropic form generalizing the BG entropy that has passed all the stability tests so far, for arbitrary values of q [4,5,11].
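For readers who wish to experiment numerically, the following is a minimal sketch (not part of the original paper) of Eqs. (1.1) and (1.2); the function names and the NumPy dependency are our own choices. It checks that S_q approaches S_BG as q → 1.

```python
# A minimal numerical sketch, not part of the original paper: evaluating the
# BG entropy of Eq. (1.1) and the Tsallis entropy of Eq. (1.2), and checking
# that S_q approaches S_BG as the entropic index q -> 1.
import numpy as np

def bg_entropy(p):
    """Boltzmann-Gibbs entropy, Eq. (1.1), with k_B = 1 and 0*ln(0) := 0."""
    p = np.asarray(p, dtype=float)
    nz = p > 0
    return -np.sum(p[nz] * np.log(p[nz]))

def tsallis_entropy(p, q):
    """Tsallis entropy, Eq. (1.2): S_q = [1 - sum_i p_i^q] / (q - 1)."""
    p = np.asarray(p, dtype=float)
    if np.isclose(q, 1.0):
        return bg_entropy(p)          # the q -> 1 limit recovers Eq. (1.1)
    return (1.0 - np.sum(p ** q)) / (q - 1.0)

p = [0.5, 0.3, 0.2]                   # an arbitrary probability set
for q in (2.0, 1.1, 1.01, 1.001):
    print(q, tsallis_entropy(p, q))   # approaches the BG value as q -> 1
print("BG:", bg_entropy(p))
```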
Other entropic forms, presenting a dependence on some entropic index(es) [and
recovering the BG entropy when considering the corresponding entropic index(es) in
some particular limit(s)], have also been introduced in the literature as generalizations
of the BG entropy [4,6–9,12]. Among the most investigated generalizations of the BG entropy, besides Tsallis's entropy, one may find the Rényi [6], the normalized [7–9], and the escort [4] entropies. It has been verified that these three latter entropies may be useful in information theory, but they are not acceptable from the thermodynamical point of view for arbitrary values of the entropic index(es). The concavity of such entropies, which should be well-defined for thermodynamical stability, holds only for restricted ranges of the entropic index(es) [4]. Besides that, such entropies fail in another stability test (sometimes referred to as experimental robustness)
[4,10,11]. It should also be mentioned that a generalization of the Boltzmann factor has been formulated recently [13], by assuming the temperature to be a stochastic variable. Within such a formalism, known as superstatistics, if one chooses appropriately the
temperature distribution, both BG and Tsallis’s probability distributions may be recovered as particular cases. A possible entropy for the superstatistics has been proposed
for several temperature distributions [14], and for the cases investigated, a well-defined concavity was verified; in addition to that, such an entropy has recently passed through
the experimental-robustness test [15].
Some procedures for constructing entropic forms have been proposed recently
[16,17]. The framework for obtaining the generalized entropy optimized by a given
statistical distribution seems to be quite general [17]. However, a simple recipe for the
derivation of a particular class of entropic forms (entropies that may be treated analytically) has produced interesting results [16]. In this latter case, the entropies should
follow certain requirements as described below:
(i) They must satisfy the first three basic Khinchin axioms [18], but are allowed to violate the fourth one (the axiom concerning the additivity property of the entropy). It is important to recall that, according to Khinchin, the only entropic form that satisfies all four basic axioms is the BG entropy of Eq. (1.1).
(ii) They may be written in the simple form
$$S[p] = \sum_{i=1}^{W} r(p_i) \qquad [r(0) = r(1) = 0] \;. \qquad (1.3)$$
The condition r(0) = 0 follows from the third Khinchin axiom, which states that the
addition of an event with zero probability should not change the entropy of the system.
However, the condition r(1) = 0, which ensures S = 0 for total certainty in a single
event, may—in a completely equivalent way—be also replaced by an additive constant
s0 = −r(1) in the total entropy.
(iii) They must lead to explicit expressions for the associated probability distributions. Although this condition seems to be a bit artificial, it should, in general, lead to
entropic forms that can be worked out analytically.
By extremizing the general entropy proposed in Eq. (1.3), with the usual constraints for the probability normalization and the internal energy U,
$$\sum_{i=1}^{W} p_i = 1 \,, \qquad U = \sum_{i=1}^{W} p_i E_i \,, \qquad (1.4)$$
one may easily see that there are only two possible entropic forms satisfying the above
requirements [16], corresponding to
$$r(p_i) = -p_i \ln p_i \qquad (1.5)$$
and
$$r(p_i) = \tfrac{1}{2}\,(p_i - p_i^2) \;. \qquad (1.6)$$
Let us now consider the extremization of the entropy in Eq. (1.3) under the constraints
$$\sum_{i=1}^{W} p_i = 1 \,, \qquad U = \sum_{i=1}^{W} u(p_i)\,E_i \,, \qquad (1.7)$$
where u(p_i) is related to r(p_i) through a simple linear relation, r(p_i) = c_1 p_i + c_2 u(p_i) (c_1 and c_2 constants). The conditions r(0) = 0 and r(1) = 0 yield, respectively, u(0) = 0 and c_1 = -c_2 u(1), in such a way that Eq. (1.3) may be rewritten as
$$S[p] = -c_2 \sum_{i=1}^{W} \bigl[u(1)\,p_i - u(p_i)\bigr] \qquad [u(0) = 0,\; u(1) > 0] \;. \qquad (1.8)$$
Now, extremizing Eq. (1.8) under the constraints of Eq. (1.7), one gets also that only
two entropic forms are possible [16], characterized by
$$r(p_i) = \frac{1}{q-1}\,\bigl[p_i - p_i^{\,q}\bigr] \qquad (q \in \mathbb{R}) \qquad (1.9)$$
and
$$r(p_i) = 1 - \exp(-b p_i) + p_i s_0 \qquad (b \in \mathbb{R},\; b > 0) \,, \qquad (1.10)$$
where s_0 = exp(−b) − 1. The quantity u(p_i) defined above is given by u(p_i) = p_i^q (in the entropic form of Eq. (1.9)), whereas u(p_i) = 1 − exp(−b p_i) (in the entropic form of Eq. (1.10)). The forms in Eqs. (1.5) and (1.9) correspond to the BG and Tsallis entropies, respectively. The form of Eq. (1.6) corresponds to the particular case q = 2 of Tsallis's entropy (also called "linear entropy" by some authors). The entropy associated with Eq. (1.10) represents, to our knowledge, a new entropic form.
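The three analytic kernels r(p_i) mentioned above can be compared directly; the sketch below (ours, with illustrative parameter values) evaluates Eqs. (1.5), (1.9) and (1.10) and verifies the boundary conditions r(0) = r(1) = 0.

```python
# Illustrative sketch, ours: the three analytic kernels r(p_i) discussed
# above -- BG (Eq. (1.5)), Tsallis (Eq. (1.9)) and the exponential form of
# Eq. (1.10) -- and a check of the boundary conditions r(0) = r(1) = 0.
import numpy as np

def r_bg(p):
    return -p * np.log(p) if p > 0 else 0.0            # Eq. (1.5)

def r_tsallis(p, q):
    return (p - p ** q) / (q - 1.0)                     # Eq. (1.9)

def r_exp(p, b):
    s0 = np.exp(-b) - 1.0                               # additive constant
    return 1.0 - np.exp(-b * p) + p * s0                # Eq. (1.10)

kernels = [("BG", r_bg),
           ("Tsallis, q = 2", lambda p: r_tsallis(p, 2.0)),
           ("exponential, b = 3", lambda p: r_exp(p, 3.0))]
for name, r in kernels:
    print(name, r(0.0), r(1.0))       # both values should vanish
```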
It is important to mention that the formalism employed in Ref. [16] may also be
implemented in the case of the normalized constraint for the internal energy [19]
$$U = \frac{\sum_{i=1}^{W} u(p_i)\,E_i}{\sum_{i=1}^{W} u(p_i)} \,, \qquad (1.11a)$$
which may also be expressed as [20]
$$\sum_{i=1}^{W} u(p_i)\,(E_i - U) = 0 \;. \qquad (1.11b)$$
If one uses the normalized constraint of Eq. (1.11b), the entropic forms of Eqs. (1.9)
and (1.10) remain unaltered, whereas the corresponding probability distributions will
change slightly, with the replacements Ei → Ei −U [see Refs. [19,20] for the probability
distribution associated with Tsallis’s entropy, and Section 3 of the present paper for
the one associated with the entropic form of Eq. (1.10)]. However, the formalism of
Ref. [16] with the constraint of Eq. (1.11a) may lead to implicit expressions for the
entropic forms and/or probability distributions. A detailed discussion on the implications
of the use of the normalized constraint in either one of the forms of Eqs. (1.11a) and
(1.11b) is given in Ref. [20].
Besides the above requirements, if one imposes r(p_i) to be either a concave or a convex function of the p_i's, it is easy to prove that the corresponding entropy has an extremum at equiprobability (p_i = 1/W), regardless of its specific form [16]. In what follows, we shall reproduce this simple proof for the case of a concave r(p_i). We
shall make use of the general property for an arbitrary concave function g(x),
$$g\!\left(\frac{1}{n}\sum_{i=1}^{n} x_i\right) \ge \frac{1}{n}\sum_{i=1}^{n} g(x_i) \,, \qquad (1.12)$$
which may be applied to r(pi ),
$$r\!\left(\frac{1}{W}\sum_{i=1}^{W} p_i\right) = r\!\left(\frac{1}{W}\right) \ge \frac{1}{W}\sum_{i=1}^{W} r(p_i) \;, \qquad (1.13)$$
Multiplying the inequality above by W , one trivially gets that
$$S\!\left[p_i = \frac{1}{W}\right] \equiv W\,r\!\left(\frac{1}{W}\right) \ge S[p] \equiv \sum_{i=1}^{W} r(p_i) \,, \qquad (1.14)$$
which ensures a maximum at equiprobability. Such a proof applies to all four entropic forms above, characterized by a concave r(p_i), i.e., the BG (Eq. (1.5)) and Tsallis (Eq. (1.9)) forms (as well as the corresponding particular case q = 2 (Eq. (1.6))), and the new entropic form of Eq. (1.10). It is also straightforward to prove that any increasing monotonic function Φ(S[p]) of an entropy S[p] that exhibits a maximum at equiprobability presents a maximum at equiprobability as well.
It is important to recall that Tsallis's entropy for any integer q greater than 1, as well as the entropic forms corresponding to Eqs. (1.6) and (1.10), are analytic functions of the probabilities {p_i}. In the next section, we discuss the property of experimental robustness for analytic entropies. In Section 3, we discuss in more detail the properties of the entropy defined by Eqs. (1.3) and (1.10). Finally, in Section 4 we present our
conclusions.
2. Experimental robustness of analytic entropies
In this section, we will restrict ourselves to entropic forms following Eq. (1.3), where r(p_i) is an analytic function of the probabilities {p_i}. In the experimental-robustness test [10], one defines a certain distance, d[p,p'], between two probability sets, {p_i} and {p'_i}, as well as the relative discrepancy between their respective entropies,
$$R \equiv \frac{S[p] - S[p']}{S_{\max}} \,, \qquad (2.1)$$
where S_max represents the maximum value of the entropy. The entropy S is considered experimentally robust if, for any given ε > 0, there exists a δ > 0 such that
$$d[p,p'] \le \delta \;\Rightarrow\; |R| < \epsilon \,, \qquad (2.2)$$
independently of W. As an example of a distance, one may define [4]
$$d_\alpha[p,p'] \equiv \left[\sum_{i=1}^{W} |p_i - p'_i|^{\alpha}\right]^{1/\alpha} \qquad (\alpha \ge 1) \,, \qquad (2.3)$$
which is an extension of the distance used previously by other authors [10,11] (who considered α = 1) to a general α ≥ 1. It should be stressed that in the interval 0 < α < 1 the above quantity does not constitute a metric, since it violates the triangle inequality. Another distance may be defined, based on the Kullback–Leibler measure of information [21], namely the symmetrized Kullback–Leibler mutual information [22]
$$d_{KL}[p,p'] \equiv \frac{1}{2}\sum_{i=1}^{W}\left[\,p_i \ln\frac{p_i}{p'_i} + p'_i \ln\frac{p'_i}{p_i}\,\right] \;. \qquad (2.4)$$
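A possible implementation of the two distances, written by us for illustration (the paper itself gives only the definitions (2.3) and (2.4)):

```python
# A possible implementation, ours, of the two distances entering the
# stability test: d_alpha of Eq. (2.3) and the symmetrized Kullback-Leibler
# distance d_KL of Eq. (2.4).
import numpy as np

def d_alpha(p, pp, alpha=1.0):
    """Eq. (2.3); a metric for alpha >= 1."""
    diff = np.abs(np.asarray(p, float) - np.asarray(pp, float))
    return np.sum(diff ** alpha) ** (1.0 / alpha)

def d_kl(p, pp):
    """Eq. (2.4); both sets must have strictly positive entries."""
    p, pp = np.asarray(p, float), np.asarray(pp, float)
    return 0.5 * np.sum(p * np.log(p / pp) + pp * np.log(pp / p))

print(d_alpha([0.6, 0.4], [0.5, 0.5], alpha=1.0))
print(d_kl([0.6, 0.4], [0.5, 0.5]))
```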
Let us start our analysis by considering two typical cases, which correspond to
extremum situations for the set of probabilities {p_i}, as defined below.
Case 1 (Quasi-certainty):
$$p_1 = 1 - a\delta^2 \,, \qquad p_i = \frac{a\delta^2}{W-1} \quad (\forall\, i \ne 1) \,;$$
$$p'_1 = 1 - \frac{\delta}{2} \,, \qquad p'_i = \frac{\delta}{2}\,\frac{1}{W-1} \quad (\forall\, i \ne 1) \;. \qquad (2.5)$$
Case 2 (Quasi-equal probabilities):
$$p_1 = a\delta^2 \,, \qquad p_i = \frac{1 - a\delta^2}{W-1} \quad (\forall\, i \ne 1) \,;$$
$$p'_1 = \frac{\delta}{2} \,, \qquad p'_i = \frac{1 - \delta/2}{W-1} \quad (\forall\, i \ne 1) \;. \qquad (2.6)$$
In the examples above, a is a positive constant that becomes irrelevant if one is dealing with the distance d_α[p,p'] [4,10,11] of Eq. (2.3). However, the corrections proportional to aδ² that appear in the above equations become important when dealing with d_KL[p,p'].
One may easily show that both cases above yield, to leading orders, the same distances, i.e.,
$$d_\alpha[p,p'] = \frac{\delta}{2}\left[1 + \frac{1}{(W-1)^{\alpha-1}}\right]^{1/\alpha} + O(\delta^2) \,; \qquad d_{KL}[p,p'] = \frac{\delta^2}{8} + O(\delta^3) \;. \qquad (2.7)$$
One notices that d_α[p,p'] is independent of W only in the case α = 1 [10,11]. However, for α > 1 one still has that lim_{δ→0} d_α[p,p'] = 0, for arbitrary values of W (W > 1).
Using the fact that r(pi ) is an analytic function of pi , one may expand it in a Taylor
series, in such a way that the absolute value of the relative discrepancy in Eq. (2.1)
becomes
$$|R| = \frac{1}{2 S_{\max}}\left|\frac{\partial r}{\partial p}\bigg|_{p=0} - \frac{\partial r}{\partial p}\bigg|_{p=1}\right|\,\delta + O(\delta^2) \qquad (2.8)$$
in case 1 (Eq. (2.5)), and
$$|R| = \frac{1}{2 S_{\max}}\left|\frac{\partial r}{\partial p}\bigg|_{p=0} - \frac{\partial r}{\partial p}\bigg|_{p=1/(W-1)}\right|\,\delta + O(\delta^2) \qquad (2.9)$$
in case 2 (Eq. (2.6)). In the equations above we have discarded the index i from the derivatives, since ∂r/∂p_i is the same for all i.
Therefore, any entropy that may be written in the simple form of Eq. (1.3), with an analytic r(p_i), satisfies condition (2.2) in the two particular cases above for the probabilities {p_i}, for both distances d_α[p,p'] and d_KL[p,p']. It should be stressed that these two simple examples have proven very useful: up to now, entropic forms that have passed them have also satisfied a general proof of the experimental-robustness property, whereas many well-known entropic forms have failed in these two simple examples [4,10,11].
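As a rough numerical check (ours, based on the reconstruction of Eqs. (2.5) and (2.6) given above), one can evaluate |R| for the exponential entropy of Eqs. (1.3) and (1.10) in both cases and verify that it remains of order δ for widely different W:

```python
# Rough numerical check, ours: the relative discrepancy |R| of Eq. (2.1) for
# the exponential entropy of Eqs. (1.3) and (1.10), in the quasi-certainty and
# quasi-equal-probability cases as reconstructed in Eqs. (2.5)-(2.6).  |R|
# should stay of order delta, independently of W.  Parameter values are ours.
import numpy as np

b, a, delta = 3.0, 1.0, 1e-3
s0 = np.exp(-b) - 1.0

def S(p):
    return np.sum(1.0 - np.exp(-b * p)) + s0              # Eq. (3.1)

for W in (10, 100, 10000):
    Smax = W * (1.0 - np.exp(-b / W)) + s0                 # Eq. (3.4)
    # Case 1: quasi-certainty
    p  = np.full(W, a * delta**2 / (W - 1)); p[0]  = 1.0 - a * delta**2
    pp = np.full(W, 0.5 * delta / (W - 1));  pp[0] = 1.0 - 0.5 * delta
    R1 = abs(S(p) - S(pp)) / Smax
    # Case 2: quasi-equal probabilities
    q  = np.full(W, (1.0 - a * delta**2) / (W - 1)); q[0]  = a * delta**2
    qq = np.full(W, (1.0 - 0.5 * delta) / (W - 1));  qq[0] = 0.5 * delta
    R2 = abs(S(q) - S(qq)) / Smax
    print(W, R1, R2)                                       # both of order delta
```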
Next, we will carry out a general demonstration of the experimental robustness for the above-mentioned class of entropies, when one considers the particular distance d_1[p,p'] between the two probability sets. Expanding the entropy S[p] around its maximum value,
$$S[p] = S_{\max} + \sum_{i=1}^{W}\frac{\partial r}{\partial p_i}\bigg|_{p_i=p_i^{(0)}}\,(p_i - p_i^{(0)}) + \cdots \,, \qquad (2.10)$$
where {p_i^{(0)}} represents the set of probabilities that maximize the entropy S[p] (note that for the class of entropies defined in the previous section, p_i^{(0)} = 1/W (∀ i)). For another set of probabilities {p'_i} one has
$$S[p'] = S_{\max} + \sum_{i=1}^{W}\frac{\partial r}{\partial p_i}\bigg|_{p'_i=p_i^{(0)}}\,(p'_i - p_i^{(0)}) + \cdots \;. \qquad (2.11)$$
Since r(pi ) presents the same functional form for all pi ’s, the index i may be dropped
from the derivatives, i.e.,
$$\frac{\partial r}{\partial p_i}\bigg|_{p_i=p_i^{(0)}} = \frac{\partial r}{\partial p_i}\bigg|_{p'_i=p_i^{(0)}} \equiv \frac{\partial r}{\partial p}\bigg|_{p=p^{(0)}} \,, \qquad (2.12)$$
in such a way that Eqs. (2.10) and (2.11) lead to
$$S[p] - S[p'] = \frac{\partial r}{\partial p}\bigg|_{p=p^{(0)}}\,\sum_{i=1}^{W}(p_i - p'_i) \;. \qquad (2.13)$$
Therefore, the absolute value of the relative discrepancy becomes
$$|R| = \frac{1}{S_{\max}}\left|\frac{\partial r}{\partial p}\bigg|_{p=p^{(0)}}\sum_{i=1}^{W}(p_i - p'_i)\right| \,, \qquad (2.14)$$
and using the fact that $\left|\sum_i x_i\right| \le \sum_i |x_i|$, one gets
$$|R| \le \frac{1}{S_{\max}}\left|\frac{\partial r}{\partial p}\bigg|_{p=p^{(0)}}\right|\sum_{i=1}^{W}|p_i - p'_i| = \frac{1}{S_{\max}}\left|\frac{\partial r}{\partial p}\bigg|_{p=p^{(0)}}\right|\,d_1[p,p'] \,, \qquad (2.15)$$
which certainly satisfies condition (2.2) for the distance d_1[p,p'].
It should be stressed that the proofs of experimental robustness for the class of entropies considered herein, with the general distances d_α[p,p'] and d_KL[p,p'], are still lacking; such proofs seem to be much harder to carry out than the one for the distance d_1[p,p']. However, considering these distances, such entropies satisfy condition (2.2) in two extremum cases for the probabilities {p_i}, namely quasi-certainty and quasi-equal probabilities, as shown above. It is important to mention that many well-known entropic forms have failed in these two simple examples [4,10,11], even for the distance d_1[p,p'].
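The bound (2.15) can be probed numerically; the sketch below (ours) perturbs the equiprobability distribution at random and compares |R| with the right-hand side of (2.15), for the exponential kernel of Eq. (1.10) and arbitrary parameter choices:

```python
# Illustrative check, ours, of the first-order bound (2.15): for a small
# normalized perturbation of the equiprobability distribution, |R| should not
# exceed (1/S_max)|dr/dp|_{p=1/W} d_1[p,p'], apart from higher-order terms.
# Exponential kernel of Eq. (1.10); the parameter values are arbitrary.
import numpy as np

rng = np.random.default_rng(1)
b, W = 3.0, 50
s0 = np.exp(-b) - 1.0

def S(p):
    return np.sum(1.0 - np.exp(-b * p)) + s0    # Eq. (3.1)

p0 = np.full(W, 1.0 / W)                        # maximizing distribution
Smax = S(p0)
drdp = b * np.exp(-b / W) + s0                  # dr/dp at p = 1/W, from Eq. (1.10)

eps = 1e-4 * rng.normal(size=W)
eps -= eps.mean()                               # keep the perturbed set normalized
pp = p0 + eps
R = abs(S(p0) - S(pp)) / Smax                   # Eq. (2.1)
bound = abs(drdp) * np.sum(np.abs(p0 - pp)) / Smax   # right-hand side of (2.15)
print(R, bound, R <= bound)                     # expected: True
```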
The results above are valid for any entropy that may be written in the simple form of Eq. (1.3), with an analytic r(p_i). Strictly speaking, for the derivations above it is only necessary to have a well-defined first derivative ∂r/∂p_i. It is important to recall that the present analysis does not apply to many well-known entropic forms (which exhibit some kind of nonanalyticity in the p_i's when p_i → 0), like the BG, Tsallis (for arbitrary real values of q), and Rényi [6] entropies, as well as the normalized [7–9] and escort [4] entropies. However, Tsallis's entropy satisfies the conditions above when q is an integer greater than 1.
Let us now consider another class of entropies, Φ(S[p]), that are increasing, monotonic, and analytic functions of the above-defined entropies S[p]. Obviously, Φ(S[p]) presents a maximum at S = S_max. Therefore,
$$\Phi(S) = \Phi(S_{\max}) + \frac{\partial \Phi}{\partial S}\bigg|_{S=S_{\max}}(S - S_{\max}) + \cdots \,, \qquad (2.16a)$$
$$\Phi(S') = \Phi(S_{\max}) + \frac{\partial \Phi}{\partial S'}\bigg|_{S'=S_{\max}}(S' - S_{\max}) + \cdots \,, \qquad (2.16b)$$
where we are using the notation S ≡ S[p] and S' ≡ S[p']. One may define the relative
discrepancy,
$$\tilde{R} \equiv \frac{\Phi(S) - \Phi(S')}{\Phi(S_{\max})} \,, \qquad (2.17)$$
which, after using Eqs. (2.16), may be written as
$$\tilde{R} = \frac{S_{\max}}{\Phi(S_{\max})}\,\frac{\partial \Phi}{\partial S}\bigg|_{S=S_{\max}}\,\frac{S - S'}{S_{\max}} \,, \qquad (2.18)$$
where we have used the fact that Φ(S) and Φ(S') present the same functional form and the same maximum, and so
$$\frac{\partial \Phi}{\partial S}\bigg|_{S=S_{\max}} = \frac{\partial \Phi}{\partial S'}\bigg|_{S'=S_{\max}} \;. \qquad (2.19)$$
From Eq. (2.18) one gets a simple relation between |R̃| and |R|,
$$|\tilde{R}| = \frac{S_{\max}}{\Phi(S_{\max})}\left|\frac{\partial \Phi}{\partial S}\bigg|_{S=S_{\max}}\right|\,|R| \;. \qquad (2.20)$$
The above result implies that for each robust entropy S[p] (for which condition (2.15) is satisfied), there exists a whole class of entropies Φ(S[p]), consisting of analytic, increasing, and monotonic functions of S[p], that are robust as well. As for the case of the entropies S[p], one does not really need the analyticity condition, but rather a well-defined first derivative ∂Φ/∂S. This defines a wide variety of entropic forms satisfying the experimental-robustness criterion.
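Relation (2.20) can also be illustrated numerically. In the sketch below (ours), Φ(S) = ln(1 + S) is an arbitrary example of an increasing, monotonic, analytic function; it is not a form proposed in the paper:

```python
# Sketch, ours: relation (2.20) between the relative discrepancy of Phi(S)
# and that of S.  Phi(S) = ln(1 + S) is an arbitrary example of an increasing,
# monotonic, analytic function; it is not a form proposed in the paper.
import numpy as np

b, W, delta = 3.0, 100, 1e-4
s0 = np.exp(-b) - 1.0

def S(p):
    return np.sum(1.0 - np.exp(-b * p)) + s0    # Eq. (3.1)

Phi  = lambda s: np.log(1.0 + s)
dPhi = lambda s: 1.0 / (1.0 + s)                # derivative dPhi/dS

p  = np.full(W, 1.0 / W)                        # equiprobability (S = S_max)
pp = p.copy(); pp[0] += delta; pp[1] -= delta   # small normalized perturbation

Smax    = S(p)
R       = abs(S(p) - S(pp)) / Smax                       # Eq. (2.1)
R_tilde = abs(Phi(S(p)) - Phi(S(pp))) / Phi(Smax)        # Eq. (2.17)
print(R_tilde, Smax * dPhi(Smax) / Phi(Smax) * R)        # nearly equal, cf. (2.20)
```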
In the next section we discuss in more detail the entropy defined through Eqs. (1.3)
and (1.10).
3. General properties of a new entropic form
Herein we shall restrict ourselves to the entropic form (cf. Eqs. (1.3) and (1.10))
[16]
$$S[p] = \sum_{i=1}^{W} r(p_i) = \sum_{i=1}^{W}\bigl[1 - \exp(-b p_i)\bigr] + s_0 \qquad (b \in \mathbb{R},\; b > 0) \,, \qquad (3.1)$$
where the additive constant s0 = exp(−b) − 1 ensures S[p] = 0, for total certainty in a
single event [p_1 = 1 and p_i = 0 (∀ i > 1)].
The corresponding probability distribution is given by [16]
$$p_i = \frac{1}{Z} + \frac{1}{b}\,\ln(1 + \beta E_i) \,, \qquad (3.2)$$
where Z is the partition function
$$Z = \frac{bW}{\,b - \sum_{i=1}^{W}\ln(1 + \beta E_i)\,} \,, \qquad (3.3)$$
with β denoting the Lagrange multiplier associated with the internal-energy constraint.
If one considers the normalized constraint of Eq. (1.11b) for the internal energy, the
simple replacements Ei → Ei − U should be carried out in Eqs. (3.2) and (3.3).
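A small sketch (ours) of the distribution (3.2)–(3.3) as reconstructed above; the energy spectrum, the value of b, and the parameter beta are arbitrary, and the only point checked is normalization:

```python
# Small sketch, ours, of the distribution (3.2)-(3.3) as reconstructed above.
# The energy spectrum, the value of b and the parameter beta (playing the role
# of the Lagrange multiplier) are arbitrary; only normalization is checked.
import numpy as np

b, beta = 3.0, 0.01
E = np.arange(10, dtype=float)                    # illustrative spectrum E_i = i
W = E.size

Z = b * W / (b - np.sum(np.log(1.0 + beta * E)))  # Eq. (3.3)
p = 1.0 / Z + np.log(1.0 + beta * E) / b          # Eq. (3.2)
print(p)
print(p.sum())                                    # should be 1 up to round-off
```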
Since the entropy of Eq. (3.1) fulfils the requirements of the previous section, it satisfies condition (2.2) in the two particular examples for the probabilities {p_i} (i.e., quasi-certainty and quasi-equal probabilities), for both distances d_α[p,p'] (Eq. (2.3)) and d_KL[p,p'] (Eq. (2.4)). If one restricts oneself to the distance d_1[p,p'], the general demonstration of experimental robustness carried out in the previous section applies to the entropy in Eq. (3.1).
[Figure 1 about here.]
Fig. 1. The entropy of Eq. (3.1), at equiprobability (p_i = 1/W), as a function of W for typical values of b (b = 0, 1.0, 1.5, 2.0, 3.0, 4.0, and the limit b → ∞). In each case a saturation occurs, with S → (b − 1) + exp(−b), when W → ∞. The dashed straight line, S = W − 1, corresponds to the asymptotic limit b → ∞.
Besides that, such an entropy presents a maximum at equiprobability, as shown at
the end of Section 1. At this extremum (p_i = 1/W), Eq. (3.1) becomes
$$S = W\,[1 - \exp(-b/W)] + \exp(-b) - 1 \;. \qquad (3.4)$$
Eq. (3.4) is illustrated in Fig. 1 for typical values of b. For any finite b, S saturates at a well-defined value when W → ∞, i.e., S → (b − 1) + exp(−b). Also, in the limit b → ∞ (W finite), one gets (either from Eq. (3.1) or Eq. (3.4)) S → W − 1 (represented by the dashed line in Fig. 1).
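The saturation described above is easy to reproduce from Eq. (3.4); the following sketch (ours) prints S at increasing W together with the limit (b − 1) + exp(−b):

```python
# Sketch, ours: the equiprobability entropy of Eq. (3.4) as a function of W,
# showing the saturation S -> (b - 1) + exp(-b) reported in Fig. 1.
import numpy as np

def S_equi(W, b):
    return W * (1.0 - np.exp(-b / W)) + np.exp(-b) - 1.0   # Eq. (3.4)

for b in (1.0, 2.0, 4.0):
    print(b, S_equi(10, b), S_equi(1e4, b), (b - 1.0) + np.exp(-b))
```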
Let us now address the concavity property. Since the exponential function exp(−x)
is a convex function, all the terms inside the summation of Eq. (3.1) are concave and
so, one would expect a concave entropy. In Fig. 2 we exhibit the entropy of Eq. (3.1)
versus p in the simple case W = 2 (p1 = p, and p2 = 1 − p), for typical values of b.
One observes the expected maximum at equiprobability (p = 1/2), and that Smax → 1 for large values of b, since in this limit S[p] → W − 1. Also, Fig. 2 shows that the concavity of S[p] is well-defined for any b > 0.
For the general proof of concavity, one should consider two sets of probabilities {p_i} and {p'_i}, corresponding to a unique set of W possibilities, and define an intermediate probability law
$$p''_i \equiv \lambda p_i + (1-\lambda)\,p'_i \qquad (0 < \lambda < 1) \quad (\forall\, i) \;. \qquad (3.5)$$
The entropy is concave if
$$\Delta \equiv S[p''] - \bigl\{\lambda S[p] + (1-\lambda) S[p']\bigr\} \ge 0 \;. \qquad (3.6)$$
[Figure 2 about here.]
Fig. 2. The entropy of Eq. (3.1) in the simple case W = 2 (p1 = p, and p2 = 1 − p), for typical values of b (b = 0, 1.0, 1.5, 2.0, 3.0, 4.0, 10.0, and the limit b → ∞).
Using the entropy of Eq. (3.1), one gets that
"=
W
{! exp(−bpi ) + (1 − !) exp(−bpi )
i=1
− exp[ − b(!pi + (1 − !)pi )]} :
(3.7)
The convexity property for the exponential functions above, exp[ − !x1 − (1 − !)x2 ] 6
! exp(−x1 )+(1−!) exp(−x2 ), leads to " ¿ 0. This ensures the concavity of the entropy
in Eq. (3.1). It is straightforward to show that any increasing monotonic function
(S[p]) of the entropy in Eq. (3.1) also presents a well-de7ned concavity.
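The concavity argument can be spot-checked numerically; the sketch below (ours) draws random probability pairs and verifies that Δ of Eq. (3.6) is non-negative for the entropy of Eq. (3.1):

```python
# Numerical spot-check, ours, of the concavity argument: Delta of Eq. (3.6),
# evaluated for random probability pairs and random lambda, should always be
# non-negative for the entropy of Eq. (3.1).
import numpy as np

rng = np.random.default_rng(2)
b, W = 3.0, 8
s0 = np.exp(-b) - 1.0

def S(p):
    return np.sum(1.0 - np.exp(-b * p)) + s0     # Eq. (3.1)

ok = True
for _ in range(1000):
    p, pp = rng.dirichlet(np.ones(W)), rng.dirichlet(np.ones(W))
    lam = rng.uniform(0.0, 1.0)
    delta = S(lam * p + (1.0 - lam) * pp) - (lam * S(p) + (1.0 - lam) * S(pp))
    ok = ok and (delta >= -1e-12)                # small tolerance for round-off
print(ok)                                        # expected: True
```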
Therefore, the entropic form discussed throughout this section satisfies three basic requirements that are usually imposed on an appropriate definition of entropy: a maximum at equiprobability, a well-defined concavity, and experimental robustness. To our knowledge, the only entropic forms that have, up to now, satisfied all three of these requirements without restriction are the BG and Tsallis entropies. As discussed above, any increasing, monotonic, and analytic function of the BG or Tsallis entropies satisfies these three basic requirements as well. Although a general proof of the experimental-robustness property seems to be a hard task for the entropy associated with the Beck–Cohen superstatistics [13], such an entropic form has exhibited both a maximum at equiprobability and a well-defined concavity for the several temperature distributions investigated [14].
4. Conclusion
We have investigated the experimental robustness (or stability) of a certain class of entropic forms. Essentially, we have analyzed entropies S[p], associated with a given set of probabilities {p_i}, that can be written in the simple form $S[p]=\sum_{i=1}^{W} r(p_i)$, where r(p_i) is an analytic function of the p_i's. We have also investigated families of entropies, Φ(S[p]), that are increasing, monotonic, and analytic functions of the above-mentioned entropies S[p]. The experimental-robustness analysis consists in examining the relative discrepancy between the entropies associated with two given probability sets, {p_i} and {p'_i}, when a previously defined distance, d[p,p'], between the probability sets is assigned a small value. First of all, we have considered two simple tests, related to extremum situations for the set of probabilities {p_i} (namely, quasi-certainty and quasi-equal probabilities). The class of entropies considered herein passes these simple tests for very general distances d[p,p'].
It should be stressed that many entropic forms proposed previously in the literature have failed in such simple tests, e.g., the Rényi [10], the normalized nonextensive, and the escort entropies [4,11]. By considering a simple form for the distance d[p,p'] (i.e., the same one considered in the proof of the experimental robustness of the BG [10] and Tsallis [11] entropies), we have presented a general, but very simple, proof of such a property for the above-mentioned class of entropies. The proof of experimental robustness has been extended to a more general class of entropies, Φ(S[p]), that are increasing, monotonic, and analytic functions of robust entropies S[p]. It should be stressed that entropic forms that present a nonanalytic behavior in r(p_i) may still satisfy the experimental-robustness criterion, although the proof then becomes more laborious than the present one. As examples of such cases, one could mention the Boltzmann–Gibbs [10] and Tsallis [11] entropies (as well as all increasing, monotonic, and analytic functions of such entropies), which, to our knowledge, represent the only entropic forms that have satisfied such a property previously. Herein, we have presented a whole class of entropies that satisfy this criterion as well.
We have also discussed the properties of a new entropic form, namely, $S[p]=\sum_{i=1}^{W}[1-\exp(-bp_i)]+[\exp(-b)-1]$, that falls within the category mentioned above. Besides the experimental-robustness property, we have shown that this entropy presents its maximum at equiprobability and exhibits a well-defined concavity. All these properties also hold for any increasing, monotonic, and analytic function of this entropy. Such an entropy is characterized by an exponential dependence on the probabilities {p_i}, leading to a probability distribution with a logarithmic dependence on the energy. Considering that the BG entropy is characterized by a logarithmic dependence on the probabilities {p_i} (which results in an exponential probability distribution), and that Tsallis's entropy is characterized by a power-law dependence on the probabilities (and, consequently, a power-law probability distribution), the present entropy and its corresponding probability distribution would, in a certain sense, fill in the framework of entropic forms. Obviously, experimental realizations are desirable.
Acknowledgements
It is a pleasure to thank Constantino Tsallis for fruitful discussions. The partial financial support from CNPq and Pronex/MCT (Brazilian agencies) is acknowledged. F.D.N. would like to thank the Centro Brasileiro de Pesquisas Físicas (CBPF), where this
work was developed, for the warm hospitality.
References
[1] C. Tsallis, Braz. J. Phys. 29 (1999) 1.
[2] C. Tsallis, in: S. Abe, Y. Okamoto (Eds.), Nonextensive Statistical Mechanics and its Applications,
Lecture Notes in Physics, Springer, Berlin, 2001.
[3] C. Tsallis, in: P. Grigolini, C. Tsallis, B.J. West (Eds.), Classical and Quantum Complexity and
Nonextensive Thermodynamics, Chaos Solitons Fractals 13 (2002) 371.
[4] C. Tsallis, E. Brigatti, e-print cond-mat/0305606.
[5] C. Tsallis, J. Stat. Phys. 52 (1988) 479.
[6] A. Rényi, Probability Theory, North-Holland, Amsterdam, 1970.
[7] P.T. Landsberg, V. Vedral, Phys. Lett. A 247 (1998) 211.
[8] P.T. Landsberg, Braz. J. Phys. 29 (1999) 46.
[9] A.K. Rajagopal, S. Abe, Phys. Rev. Lett. 83 (1999) 1711.
[10] B. Lesche, J. Stat. Phys. 27 (1982) 419.
[11] S. Abe, Phys. Rev. E 66 (2002) 046134.
[12] I. Roditi, E.P. Borges, Phys. Lett. A 246 (1998) 399.
[13] C. Beck, E.G.D. Cohen, Physica A 322 (2003) 267.
[14] C. Tsallis, A.M.C. Souza, Phys. Rev. E 67 (2003) 026106.
[15] A.M.C. Souza, C. Tsallis, Phys. Lett. A 319 (2003) 273.
[16] E.M.F. Curado, Braz. J. Phys. 29 (1999) 36.
[17] S. Abe, J. Phys. A 36 (2003) 8733.
[18] A.I. Khinchin, Mathematical Foundations of Information Theory, Dover, New York, 1957.
[19] C. Tsallis, R.S. Mendes, A.R. Plastino, Physica A 261 (1998) 534.
[20] S. Martínez, F. Nicolás, F. Pennini, A. Plastino, Physica A 286 (2000) 489.
[21] S. Kullback, R.A. Leibler, Ann. Math. Stat. 22 (1951) 79.
[22] C. Tsallis, Phys. Rev. E 58 (1998) 1442.