
Some tests for log-concavity of life distributions

Debasis Sengupta
Applied Statistics Unit
Indian Statistical Institute
203 B.T. Road
Kolkata 700 108, India
[email protected]

and

Debashis Paul
Department of Statistics
Stanford University
Sequoia Hall, 390 Serra Mall
Stanford, CA 94305-4065, USA
[email protected]
Abstract: We consider the problem of testing for log-concavity of a life distribution, where the alternative is that the distribution is not log-concave. We suggest an exact test for the (restricted) class of log-concave distributions having a point mass at the left end-point of the support. We prove consistency of this test, which is based on the distance between the logarithm of the empirical distribution function and its Least Concave Majorant (LCM), and extend it to the general case. We propose yet another test based on a new transform called Total Time since Failure (TTF), whose properties we study. In particular, the strong uniform convergence of the sample TTF to the population TTF is shown under some general conditions for distributions with finite support. We show that the TTF-based test for log-concavity is consistent under the same set-up. We provide simulation-based cut-off points for both tests and study their power properties.

Keywords: Reversed hazard rate, total time on test, total time since failure, reliability, consistent test.
1 Introduction
A distribution function F with support over [0, ∞) is said to be log-concave if log F
is a concave function, that is,

F(αx + (1 − α)y) ≥ F^α(x) F^{1−α}(y)    (1)
for α ∈ (0, 1) and 0 ≤ x < y < ∞. If the density exists everywhere and is denoted
by f , then F is log-concave if and only if the reversed hazard rate function, defined
by f/F, is non-increasing. The class of log-concave distributions has the following
closure properties: if the random variables X and Y are independent and have (possibly
different) log-concave distributions, then X + Y and max{X, Y} have log-concave
distributions (see Sengupta and Nanda, 1999). Limits of log-concave distributions
are also log-concave.
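As a quick numerical illustration (ours, not from the paper), the following Python sketch checks the reversed hazard rate criterion on a grid for the unit exponential distribution, which is therefore log-concave:

import numpy as np

x = np.linspace(0.1, 10.0, 500)
f = np.exp(-x)                  # density of the unit exponential distribution
F = 1.0 - np.exp(-x)            # its distribution function
r = f / F                       # reversed hazard rate f/F
print(np.all(np.diff(r) <= 0))  # True: f/F is non-increasing, so F is log-concave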
Log-concave distributions play an important role in various models in econometrics
and reliability. In the theory of contracts, Laffont and Tirole (1988) considered the
situation where the principal lacks knowledge of the relevant characteristic of an agent,
but knows the distribution of this characteristic. If this distribution is log-concave,
then the optimal incentive contract is invertible and a separating equilibrium exists.
Bagnoli and Bergstrom (1989) proposed a model to determine the cost-effectiveness of
appraisals of used items when the seller knows the exact worth of the commodity
but the buyer only knows its distribution. A clear decision rule on whether the seller
should go for the appraisal emerges when the distribution is log-concave. The log-concave nature of a distribution happens to be a crucial assumption that ensures the
existence of a separating equilibrium in a model for firms and regulators (see Baron
and Myerson, 1982), and of ‘efficient auctions’ in a model for the analysis of auctions
(see Myerson and Satterthwaite, 1983). Bergstrom and Bagnoli (1993) proposed a
marriage market model where the existence of a unique equilibrium distribution of
marriages by age and ‘quality’ of partners is ensured if the distribution of this ‘quality’ is log-concave. In the field of reliability, a sharp upper bound on the reliability of
a unit with known mean life and log-concave life distribution was given by Sengupta
and Nanda (1999). They also provided an explicit lower bound on the distribution of
the number of failures (within a specified time-frame) of a system under the regime
of ‘perfect repair’, when the failure time distribution is log-concave.
All these results are relevant when the concerned distribution is assumed to be log-concave. There is no readily available mechanism for checking this assumption, that
is, no statistical test for the hypothesis of log-concavity of a given distribution. The
purpose of the present article is to fill this void. Let G be the class of all log-concave
distributions. Then we intend to construct a statistical test for

null hypothesis (H_0) : F ∈ G,
against alternative hypothesis (H_1) : F ∈ G^c,    (2)
on the basis of samples from the distribution F .
Note that if F is replaced by 1 − F in (1), the corresponding inequality defines the
class of life distributions with log-concave survival function, which is also known as
the increasing failure rate (IFR) class. Although there are some similarities between
the IFR and log-concave classes of life distributions, there is only a partial overlap
between them. Another related class is that of unimodal distributions. A unimodal
distribution on [0, ∞) with mode at 0 is always log-concave. If X and Y are independent random variables with densities f and g where g is unimodal and f is itself
a log-concave function, then the density of X + Y is log-concave (see Steutel, 1985).
Distributions having log-convex or log-concave density are log-concave.
Some tests for membership of a distribution in the unimodal and IFR classes have
been proposed. The tests for unimodality include the bandwidth test (Silverman,
1981), the dip test (Hartigan and Hartigan, 1985) and the excess-mass test (Hartigan,
1987), which is equivalent to the dip test for univariate data. Tenga and Santner
(1984a, 1984b) proposed a test for the hypothesis that a distribution is IFR. However,
adapting this test for the log-concave class is not easy. The main difficulty in this
adaptation is that, while the exponential distribution lies at the boundary of the IFR
and non-IFR classes, there is no unique 'borderline' distribution in the case of
the log-concave class.
We present in Section 2 a nonparametric estimator of a distribution function,
under the restriction of log-concavity. Using this estimator as a basis, we derive in
Section 3 a consistent test for the log-concavity of a distribution, assuming that the
distribution has a point mass of a minimum size at zero. We remove this assumption
in Section 4, but the consistency of the resulting test is not established. In the two
subsequent sections we develop a consistent test, based on a new transform. This
transform, called the total time since failure (TTF), is analogous to the total time on
test (TTT) transform, which is well known in the reliability literature. We provide
in Section 7 approximate cut-off values for the test statistics for various levels and
sample sizes, by means of simulations. We also give some estimates of power of the
tests under some specific distributions which are not log-concave. All proofs are given
in the appendix.
2 A nonparametric estimator of the distribution function
Let q be a real-valued function defined on an interval I on the real line. If q is bounded
from above and C(q) is the class of all concave functions c such that c(x) ≥ q(x)
∀ x ∈ I, then the least concave majorant (LCM) Cq of q is defined by
Cq (x) = inf{c(x) : c ∈ C(q)} for x ∈ I.
Note that Cq ∈ C(q), that is, the infimum is attained.
A log-concave estimator of F based on n samples from F is exp(Clog Fn ), where
Fn is the empirical distribution function. Note that log Fn is a nondecreasing and
piecewise constant function. The shape of the LCM of such functions is described in
the following lemma.
Lemma 2.1. Let the function q : [a, b] → (−∞, 0] be defined as

q(x) = v_j if b_j ≤ x < b_{j+1}, 1 ≤ j ≤ n − 1;  q(x) = v_n if x = b_n,    (3)

where a = b_1 < b_2 < · · · < b_n = b, and v_1 < v_2 < · · · < v_n ≤ 0. Then

(a) C_q coincides with q over a subset of the points b_1, . . . , b_n, and is linear in between
the points of coincidence;

(b) For any 1 ≤ i < j ≤ n let L_{ij}(x) = v_i + (x − b_i)(v_j − v_i)/(b_j − b_i), b_i < x < b_j.
Then C_q is given by

C_q(x) = v_1 if x = b_1;
C_q(x) = max{v_j, max_{i,k : 1 ≤ i < j < k ≤ n} L_{ik}(b_j)} if x = b_j, 2 ≤ j ≤ n;
C_q(x) = v_n if x ≥ b_n;    (4)

and by linear interpolation for x between the b_j values.
The proof of Lemma 2.1 follows along the lines of those of Lemma 2.1 and Theorem 2.1 of Tenga and Santner (1984a).
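Computationally, part (a) of Lemma 2.1 says that C_q restricted to {b_1, . . . , b_n} is the upper convex hull of the points (b_j, v_j). The following Python sketch (our illustration; the helper name lcm_values is ours and is reused in the later sketches) computes it by a single hull scan instead of evaluating (4) directly:

import numpy as np

def lcm_values(b, v):
    # Least concave majorant of the step function of Lemma 2.1, evaluated
    # at the jump points b[0] < ... < b[-1] with values v: the LCM touches
    # the upper convex hull of the points (b[j], v[j]) and is linear in
    # between (Lemma 2.1(a)).
    b, v = np.asarray(b, float), np.asarray(v, float)
    hull = []                        # indices of points on the upper hull
    for j in range(len(b)):
        # drop the last hull point while it lies on or below the chord from
        # its predecessor to the new point (slope test without division)
        while len(hull) >= 2:
            i, k = hull[-2], hull[-1]
            if (v[k] - v[i]) * (b[j] - b[i]) <= (v[j] - v[i]) * (b[k] - b[i]):
                hull.pop()
            else:
                break
        hull.append(j)
    # interpolate the hull piecewise-linearly back to all abscissae
    return np.interp(b, b[hull], v[hull])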
The following theorem establishes the almost sure consistency of the estimator
exp(Clog Fn ) when the parent distribution is log-concave.
Theorem 2.2. If F_n is the empirical distribution function obtained from n samples
of the log-concave distribution F, and F has a point mass at some x_0, then

(a) sup_{x ≥ x_0} |C_{log F_n}(x) − log F(x)| → 0 almost surely as n → ∞;

(b) sup_x |exp(C_{log F_n}(x)) − F(x)| → 0 almost surely as n → ∞.

If F is log-concave and does not have a point mass, then for all x such that F(x) > 0,
|C_{log F_n}(x) − log F(x)| → 0 almost surely as n → ∞.
Note that the estimator Clog Fn is not the nonparametric maximum likelihood
estimator (MLE) of log F under the assumption of log-concavity. It can be shown
using an argument similar to Barlow et al. (1972, Section 5.3) that such a constrained
MLE of a log-concave distribution does not exist, unless additional restrictions are
used. It can also be shown that the (nonparametric) log-concave MLEs in a sequence
of restricted classes converge pointwise to a piecewise linear estimator of
log F, as the restriction is gradually relaxed. This "log-concave MLE" (see Sengupta
and Paul, 2004) is somewhat different from Clog Fn . We use the latter estimator as
the basis for the test for log-concavity discussed in the next section, because of the
analytical simplicity of this estimator.
3 A test for log-concavity of distributions having mass at 0
Suppose that F_n is the empirical distribution function based on n samples from the
distribution F. For any step function q defined on an interval I on the real line, let
C_q be as in Section 2, and L_q be the piecewise linear function obtained from q by
linear interpolation between its successive jump points. If F is indeed log-concave,
then (at least for large enough n) the supremum of the difference between Clog Fn and
Llog Fn should be small, whereas we would expect the difference to be appreciably
large if log F deviates from concavity. In order to keep the difference well-behaved,
we shall consider in this section only those distributions for which log F is bounded
from below. These are distributions having a point mass at 0.
Let L_p be the class of distributions with support in [0, ∞) and having a point
mass p at 0, with 0 < p < 1. Notice that if F is log-concave then it can have at most one
jump discontinuity, and the jump can occur only at the left end-point of its support
(see Sengupta and Nanda, 1997). Thus, the distributions in L_p ∩ G do not have any
discontinuity except at 0. Instead of the hypothesis (2), in this section we derive a
test for

null hypothesis (H_0) : F ∈ L_p ∩ G,
against alternative hypothesis (H_1) : F ∈ L_p ∩ G^c,    (5)
on the basis of samples from the distribution F .
Let X_{1:n} ≤ · · · ≤ X_{n:n} be the order statistics, v_j = log F_n(X_{j:n}) and
g_j = C_{log F_n}(X_{j:n}). Then the test will be based on the statistic

d_n = max_{1<j<n} {(g_j − v_j) w_j},    (6)
where w1 , . . . , wn are non-negative weights. It is easy to see that dn is invariant under
scale change of the samples. In the special case wj = 1 ∀j, the statistic reduces to
d_n = sup_x {C_{log F_n}(x) − L_{log F_n}(x)}.    (7)
Remark 3.1. If Xi:n = · · · = Xj:n < Xj+1:n for some 1 ≤ i < j ≤ n (take Xn+1:n to
be ∞), then we have vi = · · · = vj = log(j/n). Consequently, gi = · · · = gj . In the
absence of ties, vj = log(j/n).
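As an illustration, a sketch of d_n with uniform weights (w_j = 1), reusing the hypothetical lcm_values helper above and the tie convention of Remark 3.1; a general weight sequence would simply multiply each gap by w_j before taking the maximum:

def d_stat(x):
    # Sketch of d_n in (7) (uniform weights w_j = 1).  Per Remark 3.1 a tie
    # block X_{i:n} = ... = X_{j:n} shares v = log(j/n) for its largest
    # index j and contributes a single gap, so only the last point of each
    # block is kept.
    x = np.sort(np.asarray(x, float))
    n = len(x)
    last = np.r_[x[1:] != x[:-1], True]          # last index of each tie block
    v = np.log((np.flatnonzero(last) + 1) / n)   # v_j = log F_n(X_{j:n})
    g = lcm_values(x[last], v)                   # g_j = C_{log F_n}(X_{j:n})
    return np.max(g - v)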
Even though there is no 'borderline' distribution for the class G, there is such a
distribution for the class L_p ∩ G, which is

F_p^*(x) = 0 if x < 0;  p^{1−x} if 0 ≤ x < 1;  1 if x ≥ 1.    (8)
Note that log Fp∗ is a straight line over [0,1]. The next result shows that Fp∗ can be
used to obtain the ‘worst-case’ null distribution of dn .
Theorem 3.2. Let F ∈ L_p ∩ G and F_p^* be as in (8). Then

P_{F_p^*}(d_n ≥ u) ≥ P_F(d_n ≥ u)  ∀ u ≥ 0.    (9)
The above result allows us to obtain a conservative test for log-concavity restricted
to the class Lp for a given level α ∈ (0, 1), sample size n and a fixed p ∈ (0, 1). Let
cα,n (p) be the (1 − α) quantile of the distribution of dn , assuming that the sample is
from F_p^*. Then the test with rejection region

{d_n > c_{α,n}(p)}    (10)

is a conservative test for (5) at level α.
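A minimal sketch of the simulation that produces c_{α,n}(p) (our illustration, reusing the helpers above), drawing from F_p^* by inversion (X = 0 if U ≤ p and X = 1 − log U / log p otherwise, as in the constructions used to prove Theorem 4.1):

def cutoff_c(n, p, alpha=0.05, reps=50000, rng=None):
    # Monte Carlo estimate of c_{alpha,n}(p): the (1 - alpha) quantile of
    # d_n under F_p^* of (8).  reps plays the role of the 'No. of samples'
    # column of Table 1.
    rng = np.random.default_rng(rng)
    u = rng.uniform(size=(reps, n))
    x = np.where(u <= p, 0.0, 1.0 - np.log(u) / np.log(p))
    return np.quantile([d_stat(row) for row in x], 1.0 - alpha)

For instance, cutoff_c(10, 0.01) should land near the corresponding entry of Table 1 (0.85915), up to Monte Carlo error.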
Remark 3.3. If F is a log-convex distribution (defined by (1) with the inequality
reversed) with support on [0, a) for any a > 0 and F ∈ Lp , then Theorem 3.2 holds
with the inequality reversed. It follows that the test given by (10) is unbiased when
the alternative hypothesis corresponds to the class of log-convex distributions with
finite support and a point mass p at 0.
The exact distribution of the test statistic under the distribution Fp∗ is complicated. Hence the cutoff points cα,n (p) are found by simulation (see Section 7).
Remark 3.4. In view of Theorem 4.1 given in the next section, c_{α,n}(p*) is a conservative cut-off for d_n if p* < p. Thus, there is no need to know the point mass p at
zero exactly; one can work with a lower bound instead.
We now prove the consistency of the test under two different sets of conditions on
the weights.

Theorem 3.5. The test with critical region (10) is consistent for the testing problem
(5) if either of the following conditions holds.

(a) The weights w_1, . . . , w_n satisfy w̲ ≤ w_j ≤ w̄ for all j and n, for some constants
0 < w̲ ≤ w̄ < ∞.

(b) The weights w_1, . . . , w_n are such that w_j = w(j/n), where w : [0, 1] → R is a
continuous function with w(x) > 0 ∀ x > 0.
4 A conservative test for log-concavity
We now return to the main testing problem (2). When F does not have a point mass
at 0, limx→0 log F (x) = −∞. As the probability mass in a right neighbourhood of 0
is small, for any given sample size there may be a shortage of data to detect departure
from log-concavity in this region. The problem is simplified if we have the following
information about a quantile of F : there are small positive numbers x0 and p0 such
that F (x0 ) ≥ p0 . We can subtract x0 from all the sample values and equate the
negative values to zero. The modified sample has the distribution F0 given by
F0 (x) = F (x + x0 ),
and has a point mass of at least p0 at 0. Thus, the statistic dn with conservative
cutoff cα,n (p0 ) can be used (see Remark 3.4) to test the log-concavity of F0 . This is
equivalent to testing the log-concavity of F in the interval [x0 , ∞).
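In code, the modification is a one-liner (a sketch, continuing the helpers above, under the stated assumption that the auxiliary pair (x_0, p_0) is given):

def shift_sample(x, x0):
    # Modified sample with distribution F_0(x) = F(x + x0): subtract x0 and
    # clamp negatives to zero, so the mass F(x0) >= p0 of F below x0 becomes
    # a point mass of F_0 at 0; d_stat with cutoff c_{alpha,n}(p0) then applies.
    return np.maximum(np.asarray(x, float) - x0, 0.0)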
We now assume that no such auxiliary information is available along with the
data. Using the test statistic dn derived in Section 3 as a starting point, we study
the behaviour of the cut-off cα,n (p) as p goes to zero.
Theorem 4.1. For every fixed n and α ∈ (0, 1), as p decreases to zero, cα,n (p)
increases monotonically to a finite limit.
Let

c_{α,n} = lim_{p→0} c_{α,n}(p).    (11)

For a given level α, we propose to test (2) via the rejection region

{d_n > c_{α,n}}.    (12)
If the null distribution is in Lp ∩ G, then the cutoff cα,n (p) is adequate for achieving
the level α. Increasing the cut-off to cα,n would mean that the rejection probability
is smaller, that is, the test is conservative. The following result shows that the above
test is a conservative one at level α, even if there is no point mass at zero.
Theorem 4.2. Let the statistic d_n correspond to a sample of size n from any log-concave distribution F. Then

P(d_n > c_{α,n}) ≤ α.    (13)
When F does not have a point mass at zero, the lack of uniform almost sure
convergence of log Fn (x) to log F (x) makes it difficult to establish that limn→∞ cα,n =
0. This is in contrast to the case of F ∈ Lp where we proved that limn→∞ cα,n (p) = 0
for every fixed p ∈ (0, 1). Thus, consistency of the test for general F remains an open
question.
5 Total Time since Failure (TTF)
The Total Time on Test (TTT) transform of a life distribution is found to be very useful in reliability, particularly in the study of monotone ageing. We define an analogous
transform with a view to deriving another test of log-concavity of life distributions
which would be demonstrably consistent. In order to do this, we restrict attention
to the sub-class of distributions with bounded support contained in [0, ∞), which we
denote by LB . We use the notation L to represent the class of all distributions with
support in [0, ∞).
In the discussion to follow, we define the inverse of any function g : R → R as

g^{−1}(t) = inf{y : g(y) ≥ t} for t ∈ R.    (14)
Definition 5.1. If F ∈ L_B with support [a, b], where a and b are real numbers, then
the Total Time since Failure (TTF) transform of F is given by

T_F^{−1}(t) = [∫_a^{F^{−1}(t)} F(u) du] / [∫_a^b F(u) du].    (15)
The notation T_F^{−1} is used only to simplify our later notation, where the inverse of
this function will be used. The following lemma describes some useful properties of
the TTF transform.

Lemma 5.2. Let F be a distribution in L_B, having support [a, b].

(a) T_F^{−1} : [0, 1] → [0, 1] is a left-continuous and monotone increasing function.

(b) F has a discontinuity at x_0 (x_0 > a) if and only if T_F^{−1} has zero slope in a
left-neighbourhood of F(x_0).

(c) T_F^{−1} has a jump discontinuity at u_0 (0 < u_0 < 1) if and only if F has zero slope
in a right-neighbourhood of F^{−1}(u_0).

(d) F is log-concave if and only if T_F^{−1} is convex.
If F_n is the empirical distribution function for a random sample of size n drawn
from a distribution in L with support [a, b], then the sample TTF is defined by

T_n^{−1}(t) = [∫_a^{F_n^{−1}(t)} F_n(u) du] / [∫_a^{F_n^{−1}(1)} F_n(u) du].    (16)
Remark 5.3. The expression for T_n^{−1} can be written explicitly in terms of the order
statistics (X_{1:n}, . . . , X_{n:n}). Let a be the left end-point of the support of F. Let
S_{k;n} = T_n^{−1}(k/n). Then if X_{n:n} > a,

S_{k;n} = [Σ_{j=1}^k (X_{k:n} − X_{j:n})] / [Σ_{j=1}^n (X_{n:n} − X_{j:n})]  for k = 0, 1, . . . , n,    (17)

so that S_{0;n} ≡ S_{1;n} ≡ 0 and S_{n;n} ≡ 1. If X_{n:n} = a, then we define S_{k;n} = 0 for
k = 0, 1, . . . , n − 1 and S_{n;n} = 1. This convention makes the mapping

(X_{1:n}, . . . , X_{n:n}) → (S_{0;n}, . . . , S_{n;n})

continuous. T_n^{−1} is a left-continuous step function with values S_{k;n} at the points k/n.
It is convenient, however, to consider the function T̃_n^{−1} whose graph is obtained by
linear interpolation between the successive points (k/n, S_{k;n}). In particular, viewed
as a function of t, T̃_n^{−1}(t) is a continuous distribution function for every fixed sample
realization.
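A sketch of (17) in code (our illustration; sample_ttf is a hypothetical helper reused below), using the identity Σ_{j≤k}(X_{k:n} − X_{j:n}) = k X_{k:n} − Σ_{j≤k} X_{j:n}:

def sample_ttf(x):
    # S_{0;n}, ..., S_{n;n} of (17).  When all observations coincide (so
    # X_{n:n} equals the left end-point), the convention of Remark 5.3 applies.
    x = np.sort(np.asarray(x, float))
    n = len(x)
    denom = np.sum(x[-1] - x)          # sum_j (X_{n:n} - X_{j:n})
    if denom == 0.0:
        return np.r_[np.zeros(n), 1.0]
    k = np.arange(1, n + 1)
    return np.r_[0.0, (k * x - np.cumsum(x)) / denom]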
The next result shows uniform almost sure convergence of the sample TTF (and
its linearized version) to the population TTF, under certain conditions.
Theorem 5.4. Suppose F ∈ L_B is such that F^{−1} is continuous on [0, 1] (that is, F is
strictly increasing on its support, which is an interval). Then,

(a) sup_{t∈[0,1]} |T_n^{−1}(t) − T_F^{−1}(t)| → 0 almost surely as n → ∞,    (18)

(b) sup_{t∈[0,1]} |T̃_n^{−1}(t) − T_F^{−1}(t)| → 0 almost surely as n → ∞.    (19)
Corollary 5.5. If F ∈ L_B is log-concave, then

(a) sup_{t∈[0,1]} |T_n^{−1}(t) − G_{T_n^{−1}}(t)| → 0 almost surely as n → ∞,    (20)

(b) sup_{t∈[0,1]} |T̃_n^{−1}(t) − G_{T̃_n^{−1}}(t)| → 0 almost surely as n → ∞,    (21)

where G_q stands for the Greatest Convex Minorant of a function q, and is defined in
a manner similar to the Least Concave Majorant C_q.
Remark 5.6. If the upper end-point of the support of the distribution is not known,
then a known upper bound of this end-point can be used. Theorem 5.4 and Corollary
5.5 go through even with this modification.
The statistic of part (b) of Corollary 5.5 is an empirical measure of departure of
F from log-concavity. An analogous measure can be obtained via an inverse of the
TTF transform. Let the "lower inverse" of any monotone nondecreasing function
g : [0, 1] → [0, 1] be defined by

g^{−L}(x) = sup{t ∈ [0, 1] : g(t) ≤ x} for x ∈ [0, 1].    (22)

Let us define the inverse population TTF for F ∈ L_B by

T_F(x) = 0 if x < 0;  (T_F^{−1})^{−L}(x) if x ∈ [0, 1];  1 if x > 1.    (23)
Likewise, let the corresponding inverse of the linearized sample TTF be

T_n(x) = 0 if x < 0;  (T̃_n^{−1})^{−L}(x) if x ∈ [0, 1];  1 if x > 1.    (24)

For every fixed sample realization, T_n is a distribution function with support [0, 1]. It
has a point mass at zero and is continuous everywhere else. In fact, T_n is the linear
interpolation between the successive points (S_{k;n}, k/n) for k = 1, . . . , n.
Remark 5.7. F ∈ LB is log-concave if and only if TF is concave on [0, 1]. In such a
case, TF coincides with its LCM, CTF .
Theorem 5.8. Let F ∈ L_B be such that F^{−1} is continuous on [0, 1]. Also, suppose
that F has at most one point mass, and if it exists it is at 0. Then,

(a) sup_{t∈[0,1]} |T_n(t) − T_F(t)| → 0 almost surely as n → ∞,    (25)

(b) sup_{t∈[0,1]} |T_n(t) − C_{T_n}(t)| → 0 almost surely as n → ∞,    (26)

where (b) requires that F is also log-concave. If F has a point mass at any point other
than 0, then F is not log-concave, so (b) fails to hold, but (a) holds with the uniform
convergence replaced by pointwise convergence.
6 A test for log-concavity based on sample TTF
We use a weighted version of the statistic of part (b) of Theorem 5.8 to test for the
log-concavity of a distribution. Let b_j = S_{j;n}, v_j = T_n(b_j) and g_j = C_{T_n}(b_j). Note
that if there are no ties in the observations then v_j = j/n. For a given sequence of
nonnegative weights {w_j : j = 1, . . . , n} we define the test statistic

t_n = max_{1<j<n} {(g_j − v_j) w_j}.    (27)

The statistic t_n is invariant under scale change of the samples, as S_{1;n}, . . . , S_{n;n}
are scale invariant.
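For illustration, a sketch of t_n with uniform weights, reusing the hypothetical sample_ttf and lcm_values helpers above; since T_n linearly interpolates the points (S_{k;n}, k/n), its LCM at the points S_{j;n} is just the LCM of that point set:

def t_stat(x):
    # Sketch of t_n in (27) with uniform weights w_j = 1: the largest gap
    # between T_n and its LCM at the points b_j = S_{j;n}.  Duplicated
    # abscissae (e.g. S_{0;n} = S_{1;n} = 0) are collapsed to their largest
    # index, consistent with T_n carrying a point mass at zero.
    s = sample_ttf(x)                      # S_{0;n}, ..., S_{n;n}
    n = len(s) - 1
    v = np.arange(n + 1) / n               # v_j = j/n in the absence of ties
    last = np.r_[s[1:] != s[:-1], True]
    g = lcm_values(s[last], v[last])
    return np.max(g - v[last])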
As in Section 3, we first consider the class L_p ∩ G of log-concave distributions having
point mass p at 0. Once again, the worst-case distribution happens to be Fp∗ defined
by (8).
Theorem 6.1. Let F ∈ L_p ∩ G and F_p^* be as in (8). Then

P_{F_p^*}(t_n ≥ u) ≥ P_F(t_n ≥ u)  ∀ u ≥ 0.    (28)
Now define τ_{α,n}(p) to be the (1 − α) quantile of the distribution of t_n, assuming
that the sample is from F_p^*. This quantile is non-increasing in p.
Theorem 6.2. For every fixed n and α ∈ (0, 1), as p decreases to zero, τα,n (p)
increases monotonically to a finite limit.
Let

τ_{α,n} = lim_{p→0} τ_{α,n}(p).    (29)
As the restriction of a point mass at 0 is removed, the following proposition allows
us to use τα,n as a conservative cutoff for a level α test for log-concavity.
Theorem 6.3. Let the statistic t_n correspond to a sample of size n from any log-concave distribution F. Then

P(t_n > τ_{α,n}) ≤ α.    (30)
Unlike the cutoff c_{α,n} used in Section 4, we can identify τ_{α,n} as the (1 − α)
quantile of a particular distribution. Let E_{1:n} < E_{2:n} < · · · < E_{n:n} denote the
order statistics corresponding to a random sample of size n from a unit exponential
distribution. Define, for k = 1, . . . , n,

Z_{k;n} = [Σ_{j=n−k+1}^n (E_{j:n} − E_{n−k+1:n})] / [Σ_{j=1}^n (E_{j:n} − E_{1:n})].    (31)
Lemma 6.4. Let Z = (Z_{1;n}, . . . , Z_{n;n}) be defined by (31) and let S^p = (S_{1;n}, . . . , S_{n;n})
be such that the S_{k;n} are defined by (17), where X is a random sample from F_p^*. Then,
as p ↓ 0,

S^p ⇒ Z in distribution.    (32)
Let Z be as in Lemma 6.4. Let V_n be the function whose graph in the range
0 ≤ x ≤ 1 is obtained by linearly interpolating successive points of the set (Z_{k;n}, k/n)
for k = 1, . . . , n. Also let V_n(x) = 0 for x < 0 and V_n(x) = 1 for x > 1. Let
v_j = V_n(Z_{j;n}) and h_j = C_{V_n}(Z_{j;n}) for j = 1, . . . , n. Then, for {w_j : j = 1, . . . , n} as in
the definition of t_n, we define

t̃_n = max_{1<j<n} {(h_j − v_j) w_j}.    (33)

Theorem 6.5. Under the above set-up, τ_{α,n} is the (1 − α) quantile of t̃_n.

Let the statistic t_n be computed from a sample of size n from the distribution
F ∈ L_B. Then a conservative test for the null hypothesis H_0 : F ∈ G against the
alternative H_1 : F ∈ G^c, given by the rejection region

{t_n > τ_{α,n}},    (34)

has size at most equal to α.
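Theorem 6.5 makes τ_{α,n} directly simulable, with no limiting argument in p; a sketch (our illustration, reusing the hypothetical lcm_values helper):

def cutoff_tau(n, alpha=0.05, reps=50000, rng=None):
    # Monte Carlo estimate of tau_{alpha,n}: by Theorem 6.5 it is the
    # (1 - alpha) quantile of the statistic built on Z_{k;n} of (31)
    # (here with uniform weights), from unit-exponential order statistics.
    rng = np.random.default_rng(rng)
    k = np.arange(1, n + 1)
    stats = np.empty(reps)
    for r in range(reps):
        e = np.sort(rng.exponential(size=n))
        top = e[::-1]                      # E_{n:n} >= ... >= E_{1:n}
        # Z_{k;n}: excess of the top k order statistics over E_{n-k+1:n},
        # normalized by sum_j (E_{j:n} - E_{1:n})
        z = (np.cumsum(top) - k * top) / np.sum(e - e[0])
        g = lcm_values(z, k / n)           # h_j = C_{V_n}(Z_{j;n}); v_j = j/n
        stats[r] = np.max(g - k / n)
    return np.quantile(stats, 1.0 - alpha)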
The restriction that F should belong to LB is not necessary to perform the test,
since the statistic tn can be defined for any F ∈ L. However, without this restriction
the function TF is not properly defined and the convergence result (25) does not hold.
Subject to this restriction, we establish the consistency of the proposed test, via the
next theorem.
Theorem 6.6. Let F be restricted to the class of distributions as in Theorem 5.8(a)
and suppose the weights satisfy condition (a) of Theorem 3.5. Then the test having
rejection region (34) is consistent.
7 Simulation results
The cut-off value for the test statistic d_n at level α is c_{α,n}, which is the limit of
c_{α,n}(p) as p → 0. Table 1 gives cut-off points for α = 0.05 only, though the pattern
of the cut-off values for different p and n holds generally for other values of α. The
weights w_1, . . . , w_n are taken as 1.
Table 1. Cut-off values c_{α,n}(p) and c_{α,n} for α = .05, w_1 = · · · = w_n = 1

No. of    Sample      Values of c_{α,n}(p) for p =                        Approximate
samples   size (n)    0.1       0.01      0.001     0.0001    0.00001    cut-off (c_{α,n})
50000     5           0.57733   0.63290   0.64732   0.64220   0.64398    0.65
50000     6           0.61311   0.70734   0.71729   0.71967   0.71590    0.72
50000     7           0.65054   0.75592   0.77152   0.77325   0.77350    0.78
50000     8           0.66467   0.79944   0.81703   0.81858   0.82153    0.83
50000     9           0.67445   0.83058   0.85927   0.86336   0.85895    0.87
50000     10          0.68175   0.85915   0.88518   0.89514   0.90451    0.91
50000     15          0.67707   0.93439   1.00430   1.01422   1.01057    1.02
50000     20          0.65172   0.98292   1.07420   1.08627   1.08364    1.09
50000     30          0.59803   1.01203   1.16183   1.17789   1.18222    1.19
40000     50          0.50458   0.99065   1.24960   1.29955   1.30394    1.31
30000     100         0.38566   0.90610   1.29877   1.42328   1.42458    1.43
30000     200         0.28506   0.78251   1.29405   1.51659   1.52930    1.53
The first column represents the number of simulation runs on the basis of which
the reported values are determined. The values of p (point mass at zero) are taken to
be 0.1, 0.01, 0.001, 0.0001 and 0.00001. The last column gives the smallest number,
up to the second place of decimal, which is larger than cα,n (p) for all these values of
p, and may be regarded as an approximate value of cα,n .
Table 2. Cut-off values c_{α,n}(p) and c_{α,n} for α = .05, w_j = min(1, −1/log(j/n)) for j = 1, . . . , n

No. of    Sample      Values of c_{α,n}(p) for p =                                   Approximate
samples   size (n)    0.1       0.01      0.001     0.0001    0.00001   0.000001    cut-off (c_{α,n})
50000     5           0.57694   0.63873   0.64652   0.64442   0.64591   0.64364     0.65
50000     10          0.54993   0.63378   0.65220   0.65367   0.65569   0.65277     0.66
50000     20          0.47559   0.57675   0.60723   0.60919   0.60839   0.60856     0.61
50000     30          0.42440   0.52503   0.56965   0.57688   0.57620   0.57573     0.58
40000     50          0.35845   0.45755   0.51905   0.53169   0.53413   0.53227     0.54
30000     100         0.27950   0.36956   0.44560   0.47883   0.47928   0.47840     0.48
20000     500         0.14199   0.20708   0.28450   0.36488   0.37383   0.37096     0.38
10000     2000        0.07532   0.11591   0.17775   0.25476   0.29373   0.29885     0.30
Table 2 gives the cut-off values c_{α,n}(p) and c_{α,n} for the weights w_j = min(1, −1/log(j/n))
for j = 1, . . . , n, corresponding to α = .05. Note that, for fixed n, the weights are all
positive, increasing in j, and bounded above. Also, as n → ∞, w_j → 0 for every fixed j.
Further, this particular weighting scheme forces the test statistic to take values between
0 and 1. The resulting cut-off values are smaller and show a decreasing trend (with
increasing n) more clearly.
The conservative cut-off values of dn for uniform weights and different levels are
summarized in Table 3.
Table 3. Cut-off values c_{α,n} for w_1 = · · · = w_n = 1

Sample size (n)    α = 0.01    α = 0.05    α = 0.1
5                  0.79        0.65        0.56
10                 1.12        0.91        0.78
20                 1.39        1.09        0.94
50                 1.68        1.31        1.11
100                1.85        1.43        1.22
The cut-off values for the statistic tn are given in Table 4. Evidently τα,n decreases
with increasing n, as the TTF approaches the limiting curve, namely the diagonal.
Table 4. Cut-off values τ_{α,n}

Sample size (n)    α = 0.01    α = 0.05    α = 0.1
10                 0.388       0.313       0.275
20                 0.309       0.254       0.227
50                 0.214       0.179       0.161
100                0.158       0.132       0.120
200                0.116       0.096       0.088
The power of the tests d_n and t_n was simulated for a piecewise exponential distribution. The graph of log F for this continuous non-log-concave distribution is shown
in Figure 1. Samples from this distribution can be described as

Z = X if X ≤ x_0;  x_0 + Y if X > x_0,

where X ∼ exp(λ_1) and Y ∼ exp(λ_2).
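A sketch of the sampler (our illustration, continuing the Python helpers above):

def r_piecewise_exp(n, lam1, lam2, x0, rng=None):
    # Z = X if X <= x0, else x0 + Y, with X ~ exp(lam1) and Y ~ exp(lam2).
    rng = np.random.default_rng(rng)
    x = rng.exponential(1.0 / lam1, size=n)   # numpy parametrizes by scale
    y = rng.exponential(1.0 / lam2, size=n)
    return np.where(x <= x0, x, x0 + y)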
Figure 1. Non-log-concave distribution used for power comparison of dn and tn .
Table 5. Empirical power of d_n for the piecewise exponential distribution, λ_1 = 1,
λ_2 = 20, α = 0.05. (No. of samples used in power calculation = 100000)

x_0     Empirical power          Empirical power
        n = 20, c_{α,n} = 1.09   n = 50, c_{α,n} = 1.31
0.05    0.0689                   0.0118
0.1     0.0833                   0.0906
0.2     0.1169                   0.0257
0.3     0.0746                   0.0044
0.5     0.0165                   0.0001
Table 6. Empirical power of t_n for the piecewise exponential distribution, λ_1 = 1,
λ_2 = 20, α = 0.05. (No. of samples used in power calculation = 100000)

x_0     Empirical power           Empirical power
        n = 20, τ_{α,n} = 0.254   n = 50, τ_{α,n} = 0.179
0.2     0.1277                    0.5910
0.5     0.4876                    0.9395
0.8     0.3753                    0.8422
1.0     0.2530                    0.6801
1.5     0.0549                    0.1788
2.0     0.0070                    0.0149
We chose the parameter values λ_1 = 1 and λ_2 = 20, with several values of x_0 as
listed in the tables. Table 5 shows the simulated power of d_n for sample sizes 20 and
50, and Table 6 shows the simulated power of t_n for the same sample sizes. The power
of d_n, in particular, is found to be rather low for small sample sizes.
We next consider an empirically weighted version of the test statistics d_n and
t_n. The weights that we use are a crude version of density estimates at the observations.
More specifically, given n observations from a distribution (whose log-concavity we
want to test), we generate a histogram of the observations with equal bin-width and
approximately √n bins. Denoting this histogram-based density estimate by f̂_n(x), we
define our weight sequence {w_{in} : i = 1, . . . , n} as w_{in} = f̂_n(X_{i:n}). Using these weights
we can compute the weighted versions of the two test statistics d_n and t_n. Further, we
can simulate samples of the same size n from the borderline (or limiting borderline)
null distributions and use these to compute the test statistics with the weight sequence
{w_{in}}. With a sufficiently large number of such simulated samples we can compute an
approximate cutoff value for a level α test, for any 0 < α < 1. Tables 7 and 8 give an
idea of the power properties of the tests, for the limited class of alternative distributions
we considered before. It should be remembered, however, that in the case of the first
test (with test statistic d_n, i.e., the one based on the supremum weighted difference
between log F_n and its LCM) there is no borderline distribution if there is no point
mass at zero. Also, we have noted that the convergence of the cutoff value for a fixed
level α test, as p (the point mass at zero) decreases to zero, slows down as n increases.
This fact, and the large amount of computation needed to get a single limiting
conservative cutoff value, forced us to choose a very small but fixed p for our simulation
study. However, we let p decrease for an increased sample size in accordance with the
observations made above.
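A sketch of this weighting scheme (our illustration; empirical_weights is a hypothetical name):

def empirical_weights(x):
    # Histogram-based weights w_{in} = fhat_n(X_{i:n}), with roughly
    # sqrt(n) equal-width bins as described above.
    x = np.sort(np.asarray(x, float))
    n = len(x)
    bins = max(1, int(round(np.sqrt(n))))
    dens, edges = np.histogram(x, bins=bins, density=True)
    idx = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, bins - 1)
    return dens[idx]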
The power of the empirically weighted test statistic seems to be much better for
the TTF-based test (with test statistic t_n), as compared to its unweighted version.
The reason is that the weighted test puts more mass in regions where we are most
likely to see a discrepancy from log-concavity. However, the performance of the test
d_n does not improve much when this weighting scheme is used.

The power in general seems to behave regularly, in that it first increases and then
decreases with increasing x_0, the change-point of the piecewise exponential alternative,
when the rate parameters are held fixed. It is also interesting that the cutoff values
(now random, since the weights are data-dependent random quantities) follow a pattern
in which their standard deviation is roughly proportional to their mean. Comparing
with Table 5, one can see an improvement in the power of the test when these empirical
weights are used.
Table 7. Empirical power of empirically weighted d_n for the piecewise exponential
distribution, λ_1 = 1, λ_2 = 20, n = 50, α = 0.05, p = 0.00005 (No. of samples used in
power calculation = 2000, No. of samples drawn per cutoff computation = 1000)

x_0    Mean(cut-off)    s.d.(cut-off)    Empirical power
0.3    0.277            0.087            0.488
0.5    0.216            0.055            0.542
0.7    0.197            0.047            0.282
1.0    0.224            0.058            0.066
1.2    0.250            0.064            0.015
Table 8. Empirical power of empirically weighted t_n for the piecewise exponential
distribution, λ_1 = 1, λ_2 = 20, n = 50, α = 0.05 (No. of samples used in power
calculation = 2000, No. of samples drawn per cutoff computation = 1000)

x_0    Mean(cut-off)    s.d.(cut-off)    Empirical power
0.3    0.079            0.018            0.720
0.5    0.072            0.018            0.910
0.7    0.061            0.016            0.940
1.0    0.047            0.013            0.854
1.2    0.041            0.009            0.725
1.5    0.039            0.007            0.372
Appendix: Proofs
Proof of Theorem 2.2.

We consider two cases separately: F has a point mass at 0, and F does not have
a point mass at 0.

Case I. F has a point mass p at 0.

In this case, by Lemma A.4 (given below), we have

P(sup_x |log F_n(x) − log F(x)| → 0) = 1.

So, given any ε > 0,

lim_{m→∞} P(sup_x |log F_n(x) − log F(x)| ≤ ε ∀ n ≥ m) = 1.

Since log F(x) + ε is a concave function, with probability tending to 1 as m → ∞,
we have, uniformly over x ≥ 0,

log F(x) − ε ≤ log F_n(x) ≤ C_{log F_n}(x) ≤ log F(x) + ε, ∀ n ≥ m.

This implies

lim_{m→∞} P(sup_x |C_{log F_n}(x) − log F(x)| ≤ ε ∀ n ≥ m)
  ≥ lim_{m→∞} P(sup_x |log F_n(x) − log F(x)| ≤ ε ∀ n ≥ m) = 1.

In other words,

sup_x |C_{log F_n}(x) − log F(x)| → 0 almost surely as n → ∞.
Case II. F does not have a point mass at 0.

In this case we shall show only the pointwise almost sure convergence. Fix any
x_0 > 0. For 0 < K < x_0, let us define, for all ω ∈ Ω,

y_n^K(ω) = inf{x ≥ K : C_{log F_n}(x) = log F_n(x)}.

Note that for every fixed K, y_n^K(ω) = X_{i:n}(ω) for some i depending on K and ω. Since
F(K) > 0, by Lemma A.4, given any ε > 0, for a.a. ω there exists N(ω) such that,
if n ≥ N(ω), then sup_{x≥K} |log F_n(x) − log F(x)| ≤ ε. Since log F(x) + ε is concave,
this implies, in particular, that for a.a. ω, if n ≥ N(ω), then for all x ≥ y_n^K(ω),

log F(x) − ε ≤ log F_n(x) ≤ C_{log F_n}(x) ≤ log F(x) + ε.

This follows since the restriction of C_{log F_n} to [y_n^K, ∞) is the LCM of the restriction
of log F_n to [y_n^K, ∞) (easy to check using the definition of y_n^K and Lemma 2.1).

We now aim to show that there exist 0 < K* < x_0 and K* < η_{K*} < x_0 such that
for a.a. ω,

lim sup_{n→∞} y_n^{K*}(ω) < η_{K*}.

So, in particular, given ε > 0, for a.a. ω there exists N(ω) < ∞ such that, for all
n ≥ N(ω), y_n^{K*}(ω) < x_0, and so, for all x ≥ x_0,

|C_{log F_n}(x) − log F(x)| ≤ |log F_n(x) − log F(x)| + ε ≤ 2ε.

This will prove the strong uniform convergence of C_{log F_n}(x) to log F(x) on [x_0, ∞).

Fix r > 0 (small) and consider the set of K > 0 such that log F(K) + r < 0.
Then define ξ_{K,r} as the smallest x-coordinate at which a straight line passing through
(0, log F(K) + r) touches log F. Notice that since log F is strictly concave near 0 and
F does not have a point mass at 0, ξ_{K,r} is well defined and decreases to 0 as K ↓ 0
(for every fixed r > 0). It is also important to notice that ξ_{K,r} > K for all K > 0.
Hence we can find a K* > 0 such that ξ_{K*,r} < x_0 (see Figure 2). Since we fix r > 0
once and for all (it may depend upon x_0), we drop the explicit reference to r from now on.

Figure 2. Construction used in the Proof of Theorem 2.2.

Let us pick an η_{K*} such that ξ_{K*,r} < η_{K*} < x_0. We plan to show that y_n^{K*}(ω) <
η_{K*} for n sufficiently large (for a.a. ω). By the strict concavity of log F, it follows
that the slope of the line touching log F at ξ_{K*} and passing through (0, log F(K*) + r)
is strictly bigger than the slope of the line joining the points (0, log F(K*) + r) and
(η_{K*}, log F(η_{K*})). Hence we can choose 0 < δ < r such that, if l_δ(x) denotes the
line joining the points (0, log F(K*) + r) and (η_{K*}, log F(η_{K*}) + δ), then

l_δ(ξ_{K*}) < log F(ξ_{K*}) − δ.

On the other hand, note that we can choose n large enough (depending on ω) so
that

sup_{x≥K*} |log F_n(x) − log F(x)| < δ,

implying

log F_n(y_n^{K*}(ω)) < log F(y_n^{K*}(ω)) + δ.

Therefore, if y_n^{K*}(ω) ≥ η_{K*} infinitely often, then the line joining
(y_n^{K*}(ω), log F_n(y_n^{K*}(ω))) and (0, log F(K*) + r) must intersect the curve log F(·) − δ
for infinitely many n, by the observation made in the previous paragraph.

Again, since for a.a. ω, log F_n(K*) → log F(K*) and, for all x ≤ K*,

log F_n(x) ≤ log F_n(K*) < log F(K*) + r

for all sufficiently large n, the above statement holds true even if we replace the
point (0, log F(K*) + r) by the point (x, log F_n(x)) for any 0 < x ≤ K*, for sufficiently
large n.

In view of Lemma A.1 (given below), for every ε > 0 there exists C_ε with
0 < C_ε < K*, such that the set

A_ε = {ω : lim inf x_n^*(ω) > C_ε}

has probability greater than 1 − ε. Now, for a.a. ω,

sup_{x≥C_ε} |log F_n(x) − log F(x)| < δ    (35)

for sufficiently large n. Now fix an ω ∈ A_ε such that (35) holds. Notice that, by
the definition of the LCM, the line segment (call it L̃) joining (x_n^*(ω), log F_n(x_n^*(ω)))
and (y_n^{K*}(ω), log F_n(y_n^{K*}(ω))) will be part of C_{log F_n} and hence should lie above
log F_n. However, by the observation made above, L̃ intersects log F − δ infinitely
often, and this is a contradiction to (35).

Since ε > 0 is arbitrary, this contradiction proves the result.
Lemma A.1. Let

x_n^*(ω) = sup{x < K* : C_{log F_n}(x) = log F_n(x)}.

Then, for a.a. ω,

lim inf_{n→∞} x_n^*(ω) > 0.

Proof. Suppose there exists a set A of positive probability such that for all ω ∈ A,
lim inf x_n^*(ω) = 0. By definition, x_n^*(ω) = X_{k_n:n}(ω) for some 1 ≤ k_n(ω) ≤ n. Since
F does not have a point mass at zero, lim inf x_n^*(ω) = 0 implies that a subsequence
of n^{−1} k_n(ω) converges to zero. To avoid messy notation, we assume without loss of
generality that the original sequence itself converges to zero. This means, of course,
that log F_n(x_n^*(ω)) → −∞. Now, for any M such that 0 < M < K*,

sup_{x≥M} |log F_n(x) − log F(x)| < δ    (36)

almost surely for sufficiently large n. By the concavity of log F + δ and the definitions
of y_n^{K*}(ω) and x_n^*(ω), it follows that the curve log F_n lies strictly below the line L*,
joining (0, log F_n(x_n^*(ω))) and (K*, log F(K*) + δ). Since log F_n(x_n^*(ω)) → −∞ as
n → ∞, eventually the whole of log F_n within the interval [M, K* − c], for some c > 0,
will lie below the curve log F − δ, violating (36). This contradiction proves the result.
In order to prove Theorem 3.2, we need the following lemma, which is adapted
from Theorem 2.2 of Tenga and Santner (1984a), and can be proved similarly.

Lemma A.2. Suppose q : [a, b] → (−∞, 0], {b_j}, {v_j} and C_q are as in Lemma 2.1.
Given a convex strictly increasing function t : [0, M) → [0, ∞), where M > b, define
q̃(x) on [ã, b̃] = [t(a), t(b)] to be the right-continuous step function with value v_j at
t(b_j) for 1 ≤ j ≤ n. Then the LCM C_{q̃} of q̃ satisfies

C_{q̃}(t(x)) ≤ C_q(x) ∀ x ∈ [a, b].    (37)
Proof of Theorem 3.2.

Let F ∈ L_p ∩ G. Define t : [0, 1) → [0, ∞) by

t(x) = (log F)^{−1}(log F_p^*(x)) = F^{−1}(F_p^*(x)) if 0 < x < 1;  t(0) = 0.

Clearly t(x) is strictly increasing. Now, log F_p^*(x) = (1 − x) log p if x ∈ (0, 1), so that
t(x) = (log F)^{−1}((−log p)(x − 1)) for x ∈ (0, 1). Since F is log-concave, (log F)^{−1} is
convex on (log p, 0). Consequently t(x) is convex.

If 0 ≤ x_1 ≤ . . . ≤ x_n < 1 is an ordered sample from F_p^*, then defining y_j = t(x_j)
we obtain an ordered sample 0 ≤ y_1 ≤ . . . ≤ y_n < ∞ from F. (Notice that in this
case b = x_n < 1 = M.) Then by Lemma A.2 it follows that C_{q̃}(t(x_j)) ≤ C_q(x_j) for
1 < j < n, so that d_n computed from y_1, . . . , y_n is less than or equal to that computed
from x_1, . . . , x_n. This implies that

P_{F_p^*}(d_n ≥ u) ≥ P_F(d_n ≥ u) ∀ u ≥ 0.
We need a couple of lemmas in order to prove Theorem 3.5.
Lemma A.3. Let f : [0, ∞) → [0, 1] be a nondecreasing function. Let K ≥ 0 be
such that f(K) = y_0 > 0. Suppose there exists a sequence of nondecreasing functions
f_n : [0, ∞) → [0, 1] such that sup_x |f_n(x) − f(x)| → 0 as n → ∞. Then,

sup_{x≥K} |log f_n(x) − log f(x)| → 0 as n → ∞.

Proof. Let η > 0 be such that y_0 − η > 0. The function log restricted to the domain
[y_0 − η, 1] is uniformly continuous.

Suppose ε > 0 is given. Then there exists δ(ε) > 0 such that |log y − log x| < ε if
|y − x| < δ(ε) and y, x ∈ [y_0 − η, 1]. Also, since f_n(K) → f(K), there exists N_η such
that |f_n(K) − f(K)| < η for all n ≥ N_η. This implies, in particular, that
f_n(K) > y_0 − η if n ≥ N_η (since y_0 = f(K)). Since f_n is non-decreasing, if n ≥ N_η
then f_n(x) > y_0 − η ∀ x ≥ K. Also, since f is nondecreasing and f(K) = y_0, we have
f(x) > y_0 − η ∀ x ≥ K. Since sup_x |f_n(x) − f(x)| → 0 as n → ∞, there exists N_{δ(ε)}
such that if n ≥ N_η ∨ N_{δ(ε)}, then ∀ x ≥ K we have f_n(x), f(x) ∈ [y_0 − η, 1] and
|f_n(x) − f(x)| < δ(ε). Thus, if n ≥ N_η ∨ N_{δ(ε)}, then

sup_{x≥K} |log f_n(x) − log f(x)| ≤ ε.

Since ε > 0 is arbitrary, this proves the result.
Lemma A.4. Suppose F_n denotes the empirical distribution function based on a
sample of size n from a distribution F belonging to L_p for some 0 < p < 1. Then

sup_{x≥0} |log F_n(x) − log F(x)| → 0 almost surely as n → ∞.    (38)

Proof. Recall that by the Glivenko-Cantelli Theorem, sup_x |F_n(x) − F(x)| → 0 almost
surely as n → ∞. Fix ω such that the convergence takes place. Since F has a point
mass p at zero, take f(x) ≡ F(x), f_n(x) ≡ F_n(x)(ω), K = 0 and y_0 = p, and apply
Lemma A.3 to complete the proof.
Proof of Theorem 3.5.

Part (a). We prove this in two steps. In the first step we show that, under the
conditions of the theorem, for every 0 < p < 1 and 0 < α < 1, we have c_{α,n}(p) → 0
as n → ∞.

By Lemma A.4 we have, for all F ∈ L_p (0 < p < 1),

sup_{x≥0} |log F_n(x) − log F(x)| → 0 almost surely as n → ∞.

Thus, ∀ ε > 0,

lim_{m→∞} P(sup_{0≤x≤1} |log F_n(x) − log F_p^*(x)| ≤ ε/2 ∀ n ≥ m) = 1.

Given ε > 0 and α > 0, choose N so that

P(sup_{0≤x≤1} |log F_n(x) − log F_p^*(x)| ≤ ε/2 ∀ n ≥ N) ≥ 1 − α,

or,

P(L(x) ≤ log F_n(x) ≤ U(x) ∀ x ∈ [0, 1] ∀ n ≥ N) ≥ 1 − α,    (39)

where L(x) = log F_p^*(x) − ε/2 and U(x) = log F_p^*(x) + ε/2. Also, C_{log F_n}(x) ≤ U(x)
whenever log F_n(x) ≤ U(x) ∀ x ∈ [0, 1] (since U(x) is concave, as log F_p^*(x) is concave).
However, U(x) − L(x) = ε ∀ x. Hence (39) implies

P(sup_x |log F_n(x) − C_{log F_n}(x)| ≤ ε ∀ n ≥ N) ≥ 1 − α
⇒ P(w̄ sup_x |log F_n(x) − C_{log F_n}(x)| ≤ w̄ ε ∀ n ≥ N) ≥ 1 − α
⇒ P(d_n ≤ w̄ ε ∀ n ≥ N) ≥ 1 − α,

since log F_n(x) ≤ L_{log F_n}(x) ≤ C_{log F_n}(x) and since

w̄ sup_x |L_{log F_n}(x) − C_{log F_n}(x)| ≥ d_n.

(Here L_q is as defined at the beginning of Section 3.) Since ε > 0 is arbitrary, by
recalling the definition of c_{α,n}(p) we conclude that c_{α,n}(p) → 0 as n → ∞.
In the second step of the proof of part (a), we establish consistency of the test by
showing that if F ∈ L_p (0 < p < 1) is not log-concave, then

P(d_n ≥ c_{α,n}(p)) → 1 as n → ∞.    (40)

If F ∈ L_p ∩ G^c, then there are points 0 ≤ x_1 < x_2 < x_3 and some number δ > 0 such
that

log F(x_2) < [(x_2 − x_1)/(x_3 − x_1)] log F(x_3) + [(x_3 − x_2)/(x_3 − x_1)] log F(x_1) − 3δ.    (41)

By the SLLN and the fact that log is a continuous function, we know that for almost
all ω there is N_1(ω) < ∞ such that ∀ n ≥ N_1(ω) we have

|log F_n(x_i) − log F(x_i)| < δ for i = 1, 2, 3.    (42)

Fix such an ω. Then ∀ n ≥ N_1(ω) we have

C_{log F_n}(x_2) ≥ [(x_2 − x_1)/(x_3 − x_1)] log F_n(x_3) + [(x_3 − x_2)/(x_3 − x_1)] log F_n(x_1)
    (since C_{log F_n} is the LCM of log F_n)
  > [(x_2 − x_1)/(x_3 − x_1)] log F(x_3) + [(x_3 − x_2)/(x_3 − x_1)] log F(x_1) − δ
  > log F(x_2) + 2δ
  > log F_n(x_2) + δ.    (43)

The second and fourth inequalities are due to (42) and the third inequality is by (41).

For every x in the interior of the support of F, there is a sufficiently large n and
an integer-valued random variable i_n(x) defined by X_{i_n(x):n} ≤ x < X_{i_n(x)+1:n}. Further,
we have

log F_n(x) ≤ L_{log F_n}(x) ≤ log F_n(X_{i_n(x)+1:n}) = log((i_n(x) + 1)/n).

Also,

0 ≤ log F_n(X_{i_n(x)+1:n}) − log F_n(x) ≤ log((i_n(x) + 1)/n) − log(i_n(x)/n).

Since F(x) > 0 ∀ x > 0, by the SLLN it follows that ∀ x > 0, as n → ∞, i_n(x) → ∞
almost surely. As a result, we finally get

0 ≤ L_{log F_n}(x) − log F_n(x) → 0 almost surely as n → ∞.

Hence, appealing to (43), we can say that for almost all ω there is N(ω) < ∞
(N(ω) ≥ N_1(ω)) such that ∀ n ≥ N(ω) we have

C_{log F_n}(x_2) > L_{log F_n}(x_2) + δ/2.

(Without loss of generality we can take N to be a random variable.) This implies
that for all m ≥ 1,

P(sup_x |L_{log F_m}(x) − C_{log F_m}(x)| > δ/2)
  ≥ P(C_{log F_m}(x_2) − L_{log F_m}(x_2) > δ/2)
  ≥ P(C_{log F_n}(x_2) − L_{log F_n}(x_2) > δ/2 ∀ n ≥ m)
  ≥ P({ω : N(ω) ≤ m})
  → 1 as m → ∞.

The last convergence follows since P({ω : N(ω) < ∞}) = 1 and the event
{ω : N(ω) ≤ m} ↑ {ω : N(ω) < ∞} as m ↑ ∞.

Now, since

d_n ≥ w̲ sup_x |L_{log F_n}(x) − C_{log F_n}(x)|,

the last result implies that P(d_n ≥ w̲ δ/2) → 1 as n → ∞. This, together with the
fact that lim_{n→∞} c_{α,n}(p) = 0, proves (40).
Part (b). We prove this in two steps also. In the first step we show that, under the
given conditions, c_{α,n}(p) → 0 as n → ∞ for all 0 < p < 1. If we take

w* = sup_{x∈[0,1]} w(x),

then w* < ∞ (since w : [0, 1] → R is continuous) and, for all n,

w̄ = sup_j w_j ≤ w*.

So now we can proceed as in the first step of the proof of part (a) to reach the desired
conclusion.

In the second step of the proof we show that if F ∈ L_p is not log-concave then
(40) holds. As in the previous case, we can find points 0 ≤ x_1 < x_2 < x_3 and a
number δ > 0 such that (41) holds. Further, δ can be so chosen that w(F(x_2)) > 2δ.
This is possible because F(x_2) > 0 and, by assumption, w(x) > 0 ∀ x > 0.

With the same notation as in the proof of part (a), for almost all ω there is N_1(ω)
such that ∀ n ≥ N_1(ω), (43) holds.

Again, we have

(i_n(x_2) + 1)/n = F_n(X_{i_n(x_2)+1:n}) → F(x_2) almost surely as n → ∞.

Since w is continuous, it follows that

w_{i_n(x_2)+1} = w((i_n(x_2) + 1)/n) → w(F(x_2)) almost surely as n → ∞.

Hence, for almost all ω, there is N_2(ω) such that ∀ n ≥ N_2(ω) we have

w_{i_n(x_2)+1} > w(F(x_2)) − δ > δ.    (44)

Also, since 0 ≤ log F_n(X_{i_n(x_2)+1:n}) − log F_n(x_2) = log(1 + 1/i_n(x_2)) → 0 almost
surely, for almost all ω there is N_3(ω) such that ∀ n ≥ N_3(ω) we have

log F_n(x_2) > log F_n(X_{i_n(x_2)+1:n}) − δ/2.    (45)

Combining (43), (44) and (45), we can say that for almost all ω, if n ≥ N(ω) (where
N ≥ max{N_1, N_2, N_3} and is measurable), then

C_{log F_n}(X_{i_n(x_2)+1:n}) ≥ C_{log F_n}(x_2) > log F_n(X_{i_n(x_2)+1:n}) + δ/2

and (44) holds. So for almost all ω, if n ≥ N(ω), then

d_n = max_{1<j<n} {w_j (C_{log F_n}(X_{j:n}) − log F_n(X_{j:n}))}
  ≥ w_{i_n(x_2)+1} (C_{log F_n}(X_{i_n(x_2)+1:n}) − log F_n(X_{i_n(x_2)+1:n}))
  ≥ δ²/2.

This implies

P(d_n ≥ δ²/2) ≥ P(ω : N(ω) ≤ n) → 1 as n → ∞.

This, together with the fact that lim_{n→∞} c_{α,n}(p) = 0, proves (40).
Proof of Theorem 4.1.

We show that if 0 < p_1 < p_2 < 1, then for every n,

P_{F_{p_1}^*}(d_n ≥ u) ≥ P_{F_{p_2}^*}(d_n ≥ u) ∀ u ≥ 0.    (46)

This will prove that c_{α,n}(p) increases as p ↓ 0. Since, by definition,
c_{α,n}(p) ≤ −log(1/n) = log n for all p, the result follows.

To prove (46), let us define a (log-concave) distribution F_{p_1,p_2,m} with point mass
p_1 at 0, by

log F_{p_1,p_2,m}(x) = (1 − x) log p_2 if y_m ≤ x ≤ 1;
log F_{p_1,p_2,m}(x) = log p_1 + [(log z_m − log p_1)/y_m] x if 0 ≤ x ≤ y_m,

where 0 < y_m < 1 for m ≥ 1, log z_m = (1 − y_m) log p_2, and y_m ↓ 0 as m ↑ ∞.
For each fixed sample size n, let us denote by U_{1:n} < · · · < U_{n:n} an ordered
random sample from the U(0, 1) distribution. Let X^{p_1}, X^{p_2} and X^m (each one an
ordered n-tuple) be defined as follows:

X^{p_1}_{i:n} = 0 if U_{i:n} ≤ p_1;  1 − log U_{i:n}/log p_1 if U_{i:n} > p_1;

X^{p_2}_{i:n} = 0 if U_{i:n} ≤ p_2;  1 − log U_{i:n}/log p_2 if U_{i:n} > p_2;

X^m_{i:n} = 0 if U_{i:n} ≤ p_1;
X^m_{i:n} = y_m (log U_{i:n} − log p_1)/(log z_m − log p_1) if p_1 < U_{i:n} ≤ z_m;
X^m_{i:n} = 1 − log U_{i:n}/log p_2 if U_{i:n} > z_m;

for 1 ≤ i ≤ n. It is easy to check that X^{p_1}, X^{p_2} and X^m are ordered samples of size
n from F_{p_1}^*, F_{p_2}^* and F_{p_1,p_2,m} respectively.

Notice that z_m → p_2 as m → ∞. Hence, for any given ε > 0 and sufficiently large
m, z_m < p_2 + ε.
Let, for a sample point ω, the sample realization be (U_{1:n}(ω), . . . , U_{n:n}(ω)). Also, let
a_n be the function defined in Lemma A.5 (given below). For comparison we consider
the following cases:

Case 1. For all i = 1, . . . , n, U_{i:n}(ω) > p_2.

For all sufficiently large m, U_{i:n}(ω) > z_m for i = 1, . . . , n. This implies, for all
sufficiently large m,

X^m_{i:n}(ω) = 1 − log U_{i:n}(ω)/log p_2 = X^{p_2}_{i:n}(ω) for i = 1, . . . , n.

Then a_n(X^m(ω)) = a_n(X^{p_2}(ω)).
Case 2. For all i = 1, . . . , n, U_{i:n}(ω) ≤ p_1.

Then we have, for all m,

X^m_{i:n}(ω) = 0 = X^{p_2}_{i:n}(ω) for i = 1, . . . , n.

Hence a_n(X^m(ω)) = 0 = a_n(X^{p_2}(ω)).
Case 3. For some 1 ≤ k ≤ n − 1, U_{k:n}(ω) < p_1 and U_{k+1:n}(ω) > p_2.

It can be checked, along the lines of Cases 1 and 2, that for sufficiently large m,

X^m_{i:n}(ω) = X^{p_2}_{i:n}(ω) for i = 1, . . . , n.

Hence, again we have a_n(X^m(ω)) = a_n(X^{p_2}(ω)).
Case 4. For some 0 ≤ j < k ≤ n, U_{j:n}(ω) ≤ p_1 and p_1 < U_{i:n}(ω) ≤ p_2 for j + 1 ≤ i ≤ k.
(We take U_{0:n} ≡ 0 and U_{n+1:n} ≡ 1.)

Thus, for all m, p_1 < U_{i:n}(ω) ≤ z_m for j + 1 ≤ i ≤ k. And for all sufficiently large
m, U_{k+1:n}(ω) > z_m. Thus for all sufficiently large m we have

X^m_{i:n}(ω) = 0 = X^{p_2}_{i:n}(ω) for i = 1, . . . , j;

X^m_{i:n}(ω) = y_m (log U_{i:n} − log p_1)/(log z_m − log p_1) for i = j + 1, . . . , k;

X^m_{i:n}(ω) = 1 − log U_{i:n}(ω)/log p_2 = X^{p_2}_{i:n}(ω) for i = k + 1, . . . , n.

Now, check that a_n(X^m(ω)) ≥ ã_n(X̃^m(ω)), where X̃^m = (X^m_{k:n}, . . . , X^m_{n:n})
and ã_n(X̃^m) is the maximum (weighted) difference between the linear interpolant of
the points {(X^m_{i:n}, log(i/n)) : i = k, . . . , n} (call it L̃_{n,k}) and its LCM. Since
X^m_{i:n}(ω) = X^{p_2}_{i:n}(ω) for i = k + 1, . . . , n and X^m_{k:n}(ω) → 0 = X^{p_2}_{k:n}(ω)
as m → ∞, it follows that

ã_n(X̃^m(ω)) → a_n(X^{p_2}(ω)) as m → ∞.

This can be checked easily by concentrating attention on the triangles ∆ and ∆_m,
where ∆ is formed by the points (0, log(k/n)), (X^{p_2}_{k+1:n}, log((k + 1)/n)) and
(X^{p_2}_{j*:n}, log(j*/n)), and ∆_m is formed by the points (X^m_{k:n}, log(k/n)),
(X^m_{k+1:n}, log((k + 1)/n)) and (X^{p_2}_{j*:n}, log(j*/n)). The index j* is the smallest
index j ≥ k + 1 such that the point (X_{j:n}, log(j/n)) is on the LCM of L̃_{n,k}. From the
definition it follows that the triangles ∆_m merge with ∆ as m → ∞, and the result
follows from the definitions of a_n(X^{p_2}(ω)) and ã_n(X̃^m(ω)). This implies that

lim sup_{m→∞} a_n(X^m(ω)) ≥ a_n(X^{p_2}(ω)).    (47)

Now, since F_{p_1,p_2,m} is a log-concave distribution with point mass p_1 at zero, an
application of Lemma A.2 (as in the proof of Theorem 3.2) shows that ∀ m, for almost
all ω, we have

a_n(X^{p_1}(ω)) ≥ a_n(X^m(ω)).

This, together with (47) and the other three cases studied above, implies that

a_n(X^{p_1}(ω)) ≥ a_n(X^{p_2}(ω)) almost surely.

So we have proved (46).
In order to prove Theorem 4.2, we need the following lemma, which is not very
difficult to prove.

Lemma A.5. Let

S = {(x_1, . . . , x_n) : x_i ∈ R ∀ i, x_1 ≤ x_2 ≤ · · · ≤ x_n},

and let a_n : S → R be the function that maps (X_{1:n}, . . . , X_{n:n}) into d_n (that is,
d_n = a_n(X_{1:n}, . . . , X_{n:n})).

(a) The points of discontinuity of the map a_n are contained in the set

D(a_n) = {(x_1, . . . , x_n) : x_1 ≤ · · · ≤ x_i = x_{i+1} ≤ · · · ≤ x_n for some 1 ≤ i ≤ n − 1}.

(b) If F is log-concave and does not have a point mass at 0, then the set D(a_n) has
probability 0 under F.
Proof of Theorem 4.2.

By Theorem 3.2 and Theorem 4.1, the result holds if F has a point mass at 0.
So suppose F is continuous throughout and does not have a point mass at 0. Then
there is x_0 > 0 such that log F is strictly concave on (0, x_0) (since log F(x) ↓ −∞ as
x ↓ 0 and log F is concave), and hence it is almost everywhere differentiable. Further,
there is a decreasing sequence of points {r_m : m = 1, 2, . . .} with r_1 < x_0 and
lim_{m→∞} r_m = 0, such that the tangents to log F passing through the points
(r_m, log F(r_m)) have slopes increasing to infinity (see Figure 3). Suppose that the
tangent passing through (r_m, log F(r_m)) intersects the line x = 0 at the point (0, z_m).
It follows that z_m ↓ −∞ as m → ∞. Define p_m = e^{z_m}.
Figure 3. Construction used in the Proof of Theorem 4.2.

Define distributions F_m by

log F_m(x) = log F(x) if x ≥ r_m;
log F_m(x) = the tangent line joining (0, log p_m) and (r_m, log F(r_m)), evaluated at x, if 0 ≤ x < r_m;
log F_m(x) = −∞ if x < 0.

So F_m is a log-concave distribution with a point mass p_m at 0. Thus, by Theorem 3.2,

P_{F_{p_m}^*}(d_n ≥ u) ≥ P_{F_m}(d_n > u) ∀ u ≥ 0.

Notice that F_m ⇒ F as m → ∞. Since F is continuous everywhere, by Lemma
A.5(b) we get

P_{F_m}(d_n > u) → P_F(d_n > u) ∀ u ≥ 0 as m → ∞.

On the other hand, by Theorem 4.1 we know that for every m ≥ 1,
P_{F_{p_m}^*}(d_n ≥ c_{α,n}) ≤ α. Also, this probability increases as m → ∞ (refer to (46)).
As a result, we have

α ≥ P_F(d_n > c_{α,n}).
Proof of Lemma 5.2.

Parts (a)–(c) follow from the definition of the TTF transform. If either of the
conditions of parts (b) and (c) holds, then F is not log-concave and T_F^{−1} is not convex.
Now suppose that F is continuous on (a, b] and T_F^{−1} is continuous on [0, 1). It can be
verified that, for all x > a, the left-derivative of log F at x is equal to the reciprocal of
the left-derivative of T_F^{−1} at F(x). Further, the right-derivative of log F at x is equal
to the reciprocal of the right-derivative of T_F^{−1} at F(x), as long as T_F^{−1}(F(x)) < 1.
The result of part (d), in the special case of F having no point mass in (a, b], is proved
by observing that a continuous function is convex (concave) if and only if its left- and
right-derivatives are non-decreasing (non-increasing). The proof is completed by
observing that if F has a jump discontinuity at x_0 ∈ (a, b], then log F is not concave
at x_0 and T_F^{−1} is not convex at F(x_0−).
Proof of Theorem 5.4.

Part (a). For t ∈ [0, 1] and F ∈ L_B (without loss of generality let us take a = 0, that
is, the support of the distribution F is [0, b] for some b > 0),

|∫_0^{F^{−1}(t)} F(u) du − ∫_0^{F_n^{−1}(t)} F_n(u) du|
  ≤ ∫_0^{F^{−1}(t)} |F(u) − F_n(u)| du + |∫_{F^{−1}(t)}^{F_n^{−1}(t)} F_n(u) du|
  ≤ ∫_0^{F^{−1}(t)} |F(u) − F_n(u)| du + |F_n^{−1}(t) − F^{−1}(t)|
  ≤ F^{−1}(1) sup_x |F_n(x) − F(x)| + sup_{t∈[0,1]} |F_n^{−1}(t) − F^{−1}(t)|.

By the Glivenko-Cantelli Theorem, the first term in the inequality converges to 0
almost surely. In order to complete the proof, we have to show that the second term
also converges to 0 almost surely, under the assumption that F^{−1} is continuous on
[0, 1] and F is strictly increasing on [0, b].

Since F^{−1} is continuous on [0, 1], given ε > 0 there exists δ_ε > 0 such that if
x, y ∈ [0, 1] and |x − y| ≤ δ_ε, then |F^{−1}(x) − F^{−1}(y)| ≤ ε.

We have

P(|F_n^{−1}(t) − F^{−1}(t)| > ε)
  = P(|X_{[nt]:n} − F^{−1}(t)| > ε), where [nt] is the smallest integer ≥ nt,
  = P(|F^{−1}(U_{[nt]:n}) − F^{−1}(t)| > ε), since X_{i:n} is distributed as F^{−1}(U_{i:n}),
  ≤ P(|U_{[nt]:n} − t| > δ_ε), by the choice of δ_ε,

where U_{i:n} denotes the i-th smallest order statistic of a random sample of size n
from the Uniform(0,1) distribution. Since U_{k:n} has the Beta(k, n − k + 1) distribution,
whose mean is k/(n + 1), and |[nt] − nt| ≤ 1, taking the fourth moment and using
Chebyshev's inequality we can bound the last term in the above inequality by
C(t) δ_ε^{−4} n^{−2}, where C(t) is a constant depending on t and uniformly bounded on
[0, 1]. Now, applying the Borel-Cantelli lemma, we obtain

P(|F_n^{−1}(t) − F^{−1}(t)| > ε infinitely often) = 0,

which proves that F_n^{−1}(t) converges to F^{−1}(t) pointwise, almost surely. The pointwise
a.s. convergence of a sequence of monotone random functions to a bounded and
continuous function on a compact interval is equivalent to their uniform a.s. convergence.
Hence, sup_{t∈[0,1]} |F_n^{−1}(t) − F^{−1}(t)| → 0 a.s. as n → ∞.

Thus we have actually proved that

lim_{n→∞} sup_{t∈[0,1]} |∫_0^{F^{−1}(t)} F(u) du − ∫_0^{F_n^{−1}(t)} F_n(u) du| = 0 almost surely.

Since F_n^{−1}(1) → b almost surely as n → ∞, the result follows.
Part (b). Since T̃_n^{−1} is the linear interpolation between the successive points
(k/n, S_{k;n}), it is nondecreasing, and hence for (k − 1)/n ≤ t ≤ k/n we have

T_n^{−1}((k − 1)/n) ≤ T̃_n^{−1}(t) ≤ T_n^{−1}(k/n).

Consequently, the result follows from part (a).
Proof of Theorem 5.8. We only prove (a), for the case when F has at most one
point mass and the possible point mass is at 0; (b) easily follows from (a) when F
is also log-concave.

If F has a jump discontinuity at x_0, then T_F has a jump discontinuity at

z(x_0) := T_F^{−1}(F(x_0)) = [∫_0^{x_0} F(u) du] / [∫_0^{F^{−1}(1)} F(u) du],

and the size of the jump is the same as that of F.

First fix t ∈ (0, 1]. Without loss of generality we suppose X_{n−1:n} > 0 (true almost
surely as n → ∞). Then there exists 1 ≤ k_n(t) ≤ n − 1 such that
k_n(t)/n ≤ T_n(t) ≤ (k_n(t) + 1)/n, whereby

T̃_n^{−1}(k_n(t)/n) = S_{k_n(t);n} ≤ t ≤ S_{k_n(t)+1;n} = T̃_n^{−1}((k_n(t) + 1)/n).    (48)

We now show that k_n(t)/n → T_F(t) a.s. By Theorem 5.4(a) and (48) we have

T_F^{−1}(k_n(t)/n) → t a.s.

From this the result follows, since by the assumption that F^{−1} is continuous on [0, 1]
we have k_n(t)/n = T_F(T_F^{−1}(k_n(t)/n)), and T_F is continuous on (0, 1] by the
observation made above. Thus, T_n(t) → T_F(t) a.s.

Now suppose t = 0. If F does not have a point mass at 0, then w.p. 1,
T_n(0) = 1/n → 0 = T_F(0). So, suppose that F has a point mass at zero and F(0) = p.
Then T_F(0) = p. Notice that T_n(0) = F_n(0) + 1/n. Since F_n(0) → F(0) = p a.s., we
have T_n(0) → T_F(0) a.s.

Thus for all t ∈ [0, 1], T_n(t) → T_F(t) a.s. Since T_n is a distribution function on [0, 1]
for every sample realization, the pointwise limit T_F is also a distribution function on
[0, 1], and under the assumption both have at most one point mass, at 0, we conclude
(by mimicking the proof of the Glivenko-Cantelli theorem) that

sup_{t∈[0,1]} |T_n(t) − T_F(t)| → 0 a.s. as n → ∞.
In order to prove Theorem 6.1 we need the following lemma.
Lemma A.6. Let $s_{1;n}, \ldots, s_{n;n}$ be the values of $S_{1;n}, \ldots, S_{n;n}$ defined by (17) for a given set of order statistics $0 \le x_{1:n} \le \cdots \le x_{n:n} < \infty$. Suppose $t : [0, M) \to [0, \infty)$ is a concave increasing function for some $M > b$. Also, let $s'_{1;n}, \ldots, s'_{n;n}$ be the values of $S_{1;n}, \ldots, S_{n;n}$ defined by (17) when $x_{k:n}$ is replaced by $t(x_{k:n})$ for $k = 1, \ldots, n$. Then we can construct a concave increasing function $\xi : [0,1] \to [0,1]$ such that $\xi(s_{k;n}) = s'_{k;n}$ for $k = 1, \ldots, n$.
Proof: Define $\xi : [0,1] \to [0,1]$ by $\xi(s_{k;n}) = s'_{k;n}$ and by linear interpolation between the points $\{s_{k;n},\ k = 0, 1, \ldots, n\}$. Since $t$ is increasing, it follows that the $s'_{k;n}$ are nondecreasing in $k$ and hence $\xi$ is nondecreasing. Without loss of generality we may assume that the $x_{k:n}$ are all distinct. Then, in order to prove the concavity of $\xi$, it is enough to prove that for every $1 \le k \le n-2$ we have
\[
\frac{\xi(s_{k+1;n}) - \xi(s_{k;n})}{s_{k+1;n} - s_{k;n}} \ge \frac{\xi(s_{k+2;n}) - \xi(s_{k+1;n})}{s_{k+2;n} - s_{k+1;n}}. \tag{49}
\]
By the definition of $\xi$ and $s_{k;n}$, (49) can be rewritten as
\[
\frac{t(x_{k+1:n}) - t(x_{k:n})}{x_{k+1:n} - x_{k:n}} \ge \frac{t(x_{k+2:n}) - t(x_{k+1:n})}{x_{k+2:n} - x_{k+1:n}},
\]
which is obviously true since $t$ is a concave increasing function.
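A small numerical check of the lemma, under the same assumed form of (17) used in the earlier sketches (our code):

```python
import numpy as np

rng = np.random.default_rng(2)

def S_values(sample):
    # assumed form of (17), as in the earlier sketches
    x = np.sort(sample)
    k = np.arange(1, len(x) + 1)
    num = k * x - np.cumsum(x)
    return num / num[-1]

x = np.sort(rng.exponential(size=20))
t = np.sqrt                               # a concave increasing transform

s, s_prime = S_values(x), S_values(t(x))
slopes = np.diff(s_prime) / np.diff(s)    # slopes of xi between its knots
print(np.all(np.diff(slopes) <= 1e-9))    # nonincreasing slopes: xi is concave
```

The slopes printed here are, up to a common scale factor, the difference quotients of $t$ between consecutive order statistics, which is exactly the equivalence used in the proof.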
Proof of Theorem 6.1. The stated result can be proved along the lines of the proof of Theorem 3.2, by taking $t(x) = F_p^{*\,-1}(F(x))$ in Lemma A.6.
Proof of Theorem 6.2. The proof follows along the lines of that of Theorem 4.1, once we recognize the following fact, which is analogous to Lemma A.5: the discontinuity points of the test statistic $t_n$ are contained in the set
\[
D(t_n) = \{(X_{1:n}, \ldots, X_{n:n}) : X_{1:n} = \cdots = X_{i:n} < X_{i+1:n} \le \cdots \le X_{j-1:n} = X_{j:n} \le \cdots \le X_{n:n}\ \text{for some } 1 \le i < j-1 < n\}.
\]
We omit the details.
Proof of Theorem 6.3.
The proof follows along the lines of that of Theorem 4.2.
Proof of Lemma 6.4. Let $X_1, \ldots, X_n$ be a sample from $F_p^*$. Then for $i = 1, \ldots, n$, $X_i = F_p^{*\,-1}(U_i)$, where $U_1, \ldots, U_n$ are i.i.d. $U(0,1)$. Thus,
\[
X_i = \begin{cases} 0 & \text{if } 0 \le U_i < p,\\[4pt] 1 - \dfrac{\log U_i}{\log p} & \text{if } p \le U_i \le 1. \end{cases}
\]
Consequently, $X_{i:n} = 1 - \dfrac{\log U_{i:n}}{\log p}$ for all $i = 1, \ldots, n$ if and only if $U_{1:n} \ge p$. Hence, from (17) it follows that
\[
S_p = (S_{1;n}, \ldots, S_{n;n}) = (\widetilde{Z}_{1;n}, \ldots, \widetilde{Z}_{n;n}) =: \widetilde{Z}
\]
if and only if $U_{1:n} \ge p$, where
\[
\widetilde{Z}_{k;n} = \frac{\sum_{j=1}^{k} (\log U_{k:n} - \log U_{j:n})}{\sum_{j=1}^{n} (\log U_{n:n} - \log U_{j:n})}, \qquad k = 1, \ldots, n. \tag{50}
\]
Let us define, for fixed $n$, $B_p = \{\omega : U_{1:n}(\omega) \ge p\}$. Then $B_p \subseteq B_{p'}$ for $0 < p' < p < 1$. Also, $P(B_p) = (1-p)^n \uparrow 1$ as $p \downarrow 0$. Hence, for any $A \in \mathcal{B}(\mathbb{R}^n)$ (that is, the Borel $\sigma$-algebra on $\mathbb{R}^n$) we have
\[
P(S_p \in A) = P(\{S_p \in A\} \cap B_p) + P(\{S_p \in A\} \cap B_p^c) = P(\{\widetilde{Z} \in A\} \cap B_p) + P(\{S_p \in A\} \cap B_p^c),
\]
where the first term on the right-hand side converges to $P(\widetilde{Z} \in A)$ and the second term converges to 0 as $p \downarrow 0$.
If $U_i$ follows the $U(0,1)$ distribution, then $E_i = -\log(1 - U_i)$ has the unit exponential distribution. Hence it follows from (31) that $\widetilde{Z}$ has the same joint distribution as that of $Z$, and this completes the proof.
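A short simulation sketch of the first step of the argument (our code; `S_values` is the assumed form of (17) used earlier): on the event $B_p = \{U_{1:n} \ge p\}$, the statistic computed from the $F_p^*$ sample coincides exactly with $\widetilde{Z}$ of (50), since the order statistics are then an increasing affine transform of $\log U_{i:n}$.

```python
import numpy as np

rng = np.random.default_rng(3)

def S_values(sample):
    x = np.sort(sample)
    k = np.arange(1, len(x) + 1)
    num = k * x - np.cumsum(x)
    return num / num[-1]

def Z_tilde(u):
    # the vector (50), computed from a Uniform(0,1) sample
    lu = np.sort(np.log(u))
    k = np.arange(1, len(lu) + 1)
    num = k * lu - np.cumsum(lu)          # sum_{j<=k}(log U_{k:n} - log U_{j:n})
    return num / num[-1]

p, n = 0.01, 50
u = rng.uniform(size=n)
x = np.where(u < p, 0.0, 1.0 - np.log(u) / np.log(p))    # X_i = F_p^{*-1}(U_i)

if u.min() >= p:                          # the event B_p; holds w.p. (1-p)^n
    print(np.allclose(S_values(x), Z_tilde(u)))          # True: S_p equals Z~
```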
Proof of Theorem 6.5. It can be easily checked, as in the case of $t_n$, that for every $n$ the set of discontinuity points of $t_n$ has measure 0. The stated result follows from Lemma 6.4.
Proof of Theorem 6.6. The key step in showing the consistency of the test is to show that
\[
\sup_{x\in[0,1]} |V_n(x) - x| \to 0 \quad \text{almost surely as } n \to \infty. \tag{51}
\]
By virtue of (51) we immediately deduce that $\tau_{\alpha,n} \to 0$ as $n \to \infty$. The rest of the proof then follows from the characterization of log-concave distributions through their TTF transform, Theorem 5.8, and the same technique as in the proof of Theorem 3.5.
We observe that, since $V_n(x)$ is a nondecreasing function for each sample point and $[0,1]$ is a compact set, in order to prove (51) it is enough to prove pointwise almost sure convergence. Since $V_n(0) \equiv 0$ and $V_n(1) \equiv 1$, we only show that for all $x \in (0,1)$, $V_n(x) \to x$ almost surely. Fix $x \in (0,1)$. There exists a sequence of integers $i_n(x)$ satisfying
\[
\frac{i_n(x)}{n} \le x < \frac{i_n(x)+1}{n}.
\]
Clearly, $i_n(x)/n \to x$ as $n \to \infty$. By Lemma A.7 given below, we have
\[
V_n\!\left(\frac{i_n(x)}{n}\right) = Z_{i_n(x):n} \to x \quad \text{almost surely,}
\]
and
\[
V_n\!\left(\frac{i_n(x)+1}{n}\right) = Z_{i_n(x)+1:n} \to x \quad \text{almost surely.}
\]
Since $V_n$ interpolates linearly between the points $(Z_{k:n}, k/n)$, the result follows.
Lemma A.7. Let $x \in (0,1)$ and $i_n(x)$ be a sequence of integers such that $i_n(x)/n \to x$. Then
\[
Z_{i_n(x):n} \to x \quad \text{almost surely.}
\]
Proof: From the definition of $Z_{k:n}$ we have
\[
Z_{i_n(x):n} = \frac{\dfrac{i_n(x)}{n}\cdot\dfrac{1}{i_n(x)} \displaystyle\sum_{j=n-i_n(x)+1}^{n} (E_{j:n} - E_{n-i_n(x)+1:n})}{\dfrac{1}{n} \displaystyle\sum_{j=1}^{n} E_{j:n} - E_{1:n}}.
\]
We observe that
\[
\frac{1}{n}\sum_{j=1}^{n} E_{j:n} = \frac{1}{n}\sum_{j=1}^{n} E_j \to 1 \quad \text{almost surely,} \qquad \text{and} \qquad E_{1:n} \to 0 \quad \text{almost surely.}
\]
Hence, it is enough to show that if $i_n(x)/n \to x$, then
\[
\frac{1}{i_n(x)} \sum_{j=n-i_n(x)+1}^{n} (E_{j:n} - E_{n-i_n(x)+1:n}) \to 1 \quad \text{almost surely.}
\]
Recall the following result about exponential distributions. Let $X_1, \ldots, X_n$ be i.i.d. unit exponential random variables. Then for $1 \le j \le n-2$, the random vector $(X_{j+2:n} - X_{j+1:n}, \ldots, X_{n:n} - X_{j+1:n})$ is independent of $X_{j+1:n}$ and has the same joint distribution as the order statistics $(Y_{1:n-j-1}, \ldots, Y_{n-j-1:n-j-1})$, where $Y_i$, $i = 1, \ldots, n-j-1$, are i.i.d. unit exponential random variables.
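Before applying it, here is a quick Monte Carlo sanity check of this fact (a sketch of ours): the gaps beyond $X_{j+1:n}$ should look like the order statistics of a fresh exponential sample of size $n-j-1$.

```python
import numpy as np

rng = np.random.default_rng(4)

n, j, reps = 12, 4, 20000
x = np.sort(rng.exponential(size=(reps, n)), axis=1)
gaps = x[:, j + 1:] - x[:, [j]]          # (X_{j+2:n}-X_{j+1:n}, ..., X_{n:n}-X_{j+1:n})
fresh = np.sort(rng.exponential(size=(reps, n - j - 1)), axis=1)
print(gaps.mean(axis=0))                 # componentwise means should agree
print(fresh.mean(axis=0))
```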
Applying this result to our setting we have
\[
\sum_{j=n-i_n(x)+2}^{n} (E_{j:n} - E_{n-i_n(x)+1:n}) \overset{D}{=} \sum_{j=1}^{i_n(x)-1} Y_{j:i_n(x)-1}.
\]
We now show that, given any $\epsilon > 0$,
\[
P\!\left( \left| \frac{1}{i_n(x)-1} \sum_{j=n-i_n(x)+2}^{n} (E_{j:n} - E_{n-i_n(x)+1:n}) - 1 \right| > \epsilon \ \text{i.o.} \right) = 0.
\]
Consider
\[
\begin{aligned}
&P\!\left( \left| \frac{1}{i_n(x)-1} \sum_{j=n-i_n(x)+2}^{n} (E_{j:n} - E_{n-i_n(x)+1:n}) - 1 \right| > \epsilon \right)\\
&\qquad = P\!\left( \left| \frac{1}{i_n(x)-1} \sum_{j=1}^{i_n(x)-1} Y_{j:i_n(x)-1} - 1 \right| > \epsilon \right)\\
&\qquad = P\big(|W_n(x) - (i_n(x)-1)| > \epsilon\,(i_n(x)-1)\big), \quad \text{where } W_n(x) \sim \Gamma(1, i_n(x)-1),\\
&\qquad = P\big(|W_n(x) - E(W_n(x))| > \epsilon\,k_n(x)\big), \quad \text{where we write } k_n(x) = i_n(x)-1,\\
&\qquad \le \frac{1}{\epsilon^4 k_n^4(x)}\, E|W_n(x) - E(W_n(x))|^4\\
&\qquad = \frac{3\,k_n(x)\,(k_n(x)+2)}{\epsilon^4 k_n^4(x)}\\
&\qquad \le \frac{4}{\epsilon^4} \cdot \frac{1}{k_n^2(x)}.
\end{aligned}
\]
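The fourth central moment used in the display can be verified via cumulants (a one-line check of ours): for $W \sim \Gamma(1,k)$ with shape $k$ and scale 1, the cumulants are $\kappa_r = k\,(r-1)!$, so
\[
E|W - EW|^4 = \kappa_4 + 3\kappa_2^2 = 6k + 3k^2 = 3k(k+2),
\]
and $3(k+2)/k^3 \le 4/k^2$ as soon as $k \ge 6$, which justifies the final inequality for all sufficiently large $n$.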
For any $0 < \delta < x$ we have, for all sufficiently large $n$, $k_n(x) > n(x-\delta)$, which implies that
\[
\frac{1}{k_n^2(x)} < \frac{1}{n^2 (x-\delta)^2}.
\]
Hence,
\[
\sum_{n=1}^{\infty} P\!\left( \left| \frac{1}{i_n(x)-1} \sum_{j=n-i_n(x)+2}^{n} (E_{j:n} - E_{n-i_n(x)+1:n}) - 1 \right| > \epsilon \right) < \infty.
\]
So an application of the Borel–Cantelli lemma gives the result.
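Finally, a Monte Carlo illustration of Lemma A.7 (our code; `Z_from_exponentials` implements the exponential-spacings form of $Z_{k;n}$ used in the proof above, which by Lemma 6.4 matches (50) with the $E_j$ unit exponentials):

```python
import numpy as np

rng = np.random.default_rng(5)

def Z_from_exponentials(e):
    # Z_{k;n} = sum_{j=n-k+1}^{n}(E_{j:n} - E_{n-k+1:n})
    #         / (sum_{j=1}^{n} E_{j:n} - n E_{1:n})
    e = np.sort(e)
    n = len(e)
    k = np.arange(1, n + 1)
    tail = np.cumsum(e[::-1])[::-1]       # tail[m] = sum of e[m:]
    num = tail[n - k] - k * e[n - k]
    return num / (e.sum() - n * e[0])

x = 0.3
for n in (100, 1000, 10000):
    z = Z_from_exponentials(rng.exponential(size=n))
    print(n, z[int(n * x) - 1])           # Z_{[nx]:n}, should approach x
```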
References
Bagnoli, M. and Bergstrom, T. (1989). Log-concave probability and its applications,
Working Paper 89-23, Center for Research on Economic and Social Theory,
University of Michigan.
Barlow, R.E., Bartholomew, D.J., Bremner, J.M. and Brunk, H.D. (1972). Statistical Inference under Order Restrictions, Wiley, New York.
Barlow, R.E. and Proschan, F. (1975). Statistical Theory of Reliability and Life
Testing, Holt, Rinehart and Winston, New York.
Baron, D. and Besanko, D. (1984). Regulation, asymmetric information and auditing, Rand Journal of Economics, 15, 447-470.
Bergstrom, T. and Bagnoli, M. (1993). Courtship as a waiting game, Journal of
Political Economy, 101, 185-202.
Cheng, M.-Y. and Hall, P. (1998). On mode testing and empirical approximations to distributions, Statistics and Probability Letters, 39, 245–254.
Hartigan, J.A. and Hartigan, P.M. (1985). The dip test of unimodality, Annals of Statistics, 13, 70–84.
Hartigan, J. A. (1987). Estimation of a convex density contour in two dimensions,
Journal of the American Statistical Association, 82, 267–270.
Laffont, J.-J. and Tirole, J. (1988). Dynamics of incentive contracts, Econometrica, 59, 1735–1754.
Menéndez, J.A. and Salvador, B. (1990). On the likelihood ratio test for unimodality and symmetry in a normal model. In Proceedings of the XIVth Spanish–Portuguese Conference on Mathematics, Vol. I–III (Puerto de la Cruz, 1989), Univ. La Laguna, La Laguna, 909–913.
Myerson, R. and Satterthwaite, M. (1983). Efficient mechanisms for bilateral trading, Journal of Economic Theory, 29, 265–281.
Sengupta, D. and Nanda, A. (1997). Log-concave and concave distributions in reliability, Technical Report, Applied Statistics Division, Indian Statistical Institute, Kolkata.
Sengupta, D. and Nanda, A. (1999). Log-concave and concave distributions in reliability, Naval Research Logistics, 46, 419–433.
Sengupta, D. and Paul, D. (2004). Some tests for log-concavity of life distributions,
Technical Report # ASD/2004/7, Applied Statistics Division, Indian Statistical
Institute, Kolkata.
Silverman, B.W. (1981). Using kernel density estimates to investigate multimodality,
Journal of the Royal Statistical Society, Series B, 43, 97–99.
Steutel, F.W. (1985). Log-concave and log-convex distributions. In Encyclopedia of Statistical Sciences, Vol. 5, Wiley, New York, 116–117.
Tenga, R. and Santner, T.J. (1984a). Testing goodness of fit to the increasing failure
rate family, Naval Research Logistics, 31, 617–630.
Tenga, R. and Santner, T.J. (1984b). Testing goodness of fit to the IFR family with
censored data, Naval Research Logistics, 31, 631–646.