Tail distribution and dependence measures

Tail distribution and dependence measures
Arthur Charpentier∗
March 27, 2003
Résumé : L’ensemble des familles de copulas usuelles peuvent être distinguées suivant la structure de dépendance qu’elles engendrent, certaines familles allouant plus de masse sur la queue supérieure (grands sinistres)
ou sur la queue inférieure (petits sinistres), par exemple. Gary Venter mentionnait, dans Tails of Copulas, qu’en
assurance dommage les copulas accentuant la dépendance au niveau de la queue supérieure étaient d’un grand
intérêt puisqu’ils permettaient de prendre en compte la corrélation entre les risques extrêmes. En fait, compte
tenu du poids dans la distribution de la charge totale des grands sinistres, il convient de modéliser correctement
la dépendance au sein de cette queue supérieure, et éviter surtout de sous estimer cette forme de dépendance.
Nous allons ici voir comment étudier cette queue supérieure (ou de façon analogue la queue inférieure), du point
de vue de la distribution tout d’abord, puis sous la forme d’une mesure de dépendance dans un second temps.
Ce papier se propose tout d’abord d’étudier le copula conditionel, ou copula tronqué, correspondant au copula
du copula du couple (X, Y ) étant donné que X et Y excèdent tous deux un seuil (défini comme un quantile, à
−1
(u) , Y > FY−1 (v), u, v ∈ [0, 1]). Nous étudierons plus particulièrement le cas
probabilité donné, i.e. X > FX
des copulas Archimédien, qui définissent une famille stable, au sens où le copula tronqué est encore un copula
Archimédien, pour tout u, v. Nous étudierons également une application de ce copula conditionel dans le cas
du risque de crédit. Nous verrons enfin, comment à partir de cette étude de la queue supérieure du point de
vue de la distribution, il est possible de définir des mesures de dépendance fonctionelle, en définissant le rho de
Spearman de ce copula conditionel à u et v donnés, et d’étudier sa variation en tant que fonction de u et v, et
d’étudier son comportement quand u et v tendent vers 1. Cette mesure de dépendance peut alors être utilisée
comme critère de choix de modèle, lors d’une modélisation par un copula paramétrique, de façon à vérifier que
la famille retenue ne sous-estime pas cette queue de distribution, qui peut avoir un impact énorme en assurance
dommage.
Abstract : Families of copulas could be distinguished according to the allocation of the weight among
the dependence structure : some copulas could stress more on upper tails (large claims), while other could
stress more on lower tails...etc. As mentioned by Gary Venter in Tails of Copulas, in property and casualty
applications, there could be interest in copulas that emphasize correlation among large losses. More precisely,
it might be more interesting to focus on the upper tail, to be sure that the model fits well the dependence
among large losses - the main idea being that we should chose a copula which does not underestimate the
dependence in the upper tail. In this paper, we will study (in Part 2 ) conditional copula, which is the copula of
the distribution (X, Y ), given X and Y both higher, or lower, than a given threshold (defined as a quantile, for
a given risk level ). After defining these conditional properties, we will give some properties, satisfied by these
families of copulas, focusing on a stable family : the family of Archimedian copula. This conditional copula will
be used, then (in Part 3 ), to define a functional dependence measure, based on Spearman’s rho, call tail rank
correlation, which could be seen as a measure of dependence in the tails of the distribution.
Keywords : Archimedian copula; conditional distribution; copulas; dependence; factor representation; rank
correlation; Spearman’s rho; tail correlation; tail dependence; truncature;
∗ Arthur Charpentier - ENSAE/CREST - Timbre J120 - 3, avenue Pierre Larousse - 92245 Malakoff cedex (France) [email protected]
1
1
1.1
Introduction
Motivation
Studying multivariate extremes is an important issues in risk management, and to study into details the relationship among extremes, some tail concentration functions have been introduced, to study, first of all, how
much probability is in those regions (X and Y ’extreme’), but also the shape of the dependence structure.
Upper and lower tail dependence parameters could be an interesting tool to study tail dependence (because of
its simple analytical expression), but this coefficient can only quantify the amount of dependence in the tail,
and it can not describe the shape of the dependence structure in the tails.
Using untruncated data, it is possible to obtain all the overall dependence structure between X and Y. But
one can wonder if dependence properties still hold if focusing only on extremes of the distribution. For example,
if the correlation between X and Y is positive, could we assume that the correlation between extreme values of
X and extreme values of Y is also positive ? More generally, if X and Y satisfy a positive dependence, can we
assume that the same dependence property still holds for (X, Y ) given X higher than a given threshold and Y
higher than a given threshold ?
The same kind of questions could be asked about stochastic processes. The motivation of some risk managers
could be to focus only on rare events : knowing that Xt has been an extreme event at time t, in the sense
that Xt was lower (or higher ) than a low (high) quantile, what could we expect for Xt+1 . In that case, it
−1
(α) for some
could be interesting to study the dependence structure of (Xt , Xt+1 ) knowing that Xt ≤ FX
t
−1
α (or Xt > FXt (α)). Malevergne and Sornette (2002) focus, for instance, on the correlation conditioned on
exceedance of one variable, for financial time series.
Conversely, for some practical issues, only truncated data could be available. More specifically, those data
could be truncated data with a deterministic truncature, in the sense that only data above a given threshold
where taken into account. These data could give some information about the dependence structure of (X, Y )
given X higher than a given threshold and Y higher than a given threshold, but this is only a partial information
about the overall structure between untruncated variables X and Y .
In the first case, the motivation is to link the dependence structure of (X, Y ) and (X, Y ) given X and Y
are higher (or lower ) than given thresholds And the second case could be seen as the dual problem : knowing
the dependence structure of (X, Y ) given X and Y are higher (or lower ) than some thresholds, what could be
said about the overall dependence structure of (X, Y ) ? Because one of the main concept used to capture the
dependence structure of a multivariate distribution is the copula distribution function, we will focus, in this
paper, on properties of the conditional copula of (X, Y ) given X and Y lower than given thresholds.
1.2
Copula
As mentioned above, one of the main concept used to capture the dependence structure of a multivariate distribution is the copula distribution function. Whenever copula can be defined for any multivariate distributions
in Rd , we focus on bivariate continuous random vectors for expository purpose. Let us denote FX,Y (x, y)
the bivariate cumulative distribution of the pair (X, Y ) of random variables X and Y , FX (x) and FY (y) the
marginal c.d.f. of X and Y , respectively. As shown in Sklar (1959), the joint c.d.f. of (X, Y ) can be written as
FX,Y (x, y) = P (X ≤ x, Y ≤ y) = C (FX (x) , FY (y)) ,
where C is the c.d.f. of a distribution on [0, 1]2 , with uniform margins. When variables are continuous, C
is unique, and is called the copula of (X, Y ) . Sklar theorem allows to separate the marginal feature and the
dependence structure which is represented by the copula. The function C is the c.d.f. of the pair (U, V ) where
U = FX (X) and V = FY (Y ), and
∂ 2C
c (u, v) =
(u, v) ,
∂u∂v
is the associated p.d.f. Sklar’s theorem proves the existence and the uniqueness of the copula. It also explains
how to construct it from the initial distribution. Indeed, for any 0 ≤ u, v ≤ 1, the copula is given by
−1
(u) , FY−1 (v) ,
C (u, v) = FX,Y FX
−1
where FX
and FY−1 are the marginal quantile functions. The copula characterizes any nonlinear dependence
which is invariant by increasing transformation of either X and Y . More precisely we have the following : if φ
and ψ are strictly increasing functions, then (X, Y ) and (φ (X) , ψ (Y )) have the same copula.For a given copula
C, let DC denote the subset
DC = {(u, v) ∈ [0, 1] × [0, 1] |C (u, v) > 0} .
2
However, the joint survivor function can be more appropriate in problems involving duration variables, or
exceedance over a threshold. The survival distribution of (X, Y ) is given by
F X,Y (x, y) = P (X > x, Y > y) = C ∗ F X (x) , F Y (y) ,
(1)
where C ∗ is the c.d.f. of (1 − U, 1 − V ). C ∗ is called survival copula of (X, Y ), or dual copula, and is related to
copula C by
C ∗ (u, v) = 1 − u − v + C (1 − u, 1 − v) .
An as well as F X,Y denote the survival distribution of (X, Y ), C will denote the survival distribution of
(FX (X) , FY (Y )), where (X, Y ) has copula C.
Example 1 Gaussian copula - This copula is symmetric, and is useful for its easy simulation method. Furthermore, it can be easily generalized to higher dimension than 2. It could be defined by its density,
1
1 x2 + y2 − 2ρxy
1 2
2
x
c (x, y) = exp −
+
y
exp
, where θ ∈ [−1, 1] , ∈
2
1 − ρ2
2
1 − θ2
so that
C (x, y) =
0
x y
0
c (u, v) dudv on [0, 1] × [0, 1] .
+
This copula is such that C (x, y) is equal to C − (x, y), C ⊥ (x, y)
and C (x, y) respectively, when θ = −1, 0 and
−1
−1
+1. It could also be written C (x, y) = Φθ Φ (x) , Φ (y) where Φ denotes the N (0, 1) cdf and Φθ is the
bivariate standard normal cdf, with correlation θ.
Example 2 Gumbel copula - This copula is asymmetric, with more weight in the right tail, and is given by
1/θ θ
θ
C (x, y) = exp − (− log x) + (− log y)
where θ ≥ 1,
z
Moreover, it is an extreme value copula, i.e. C (xz , yz ) = C (x, y) for all z > 0, and it is an Archimedian
copula. This copula is also called logistic copula.
Example 3 Clayton copula - This copulas is also asymmetric, but with more weight in the left tail, and is
given by
−1/θ
where θ ≥ 0,
C (x, y) = x−θ + y −θ − 1
It is also an Archimedian copula. The level curves of the densities of these copulas are given below
1.3
Outline of the paper
As mentioned in Deheuvels (1979), copulas are the ’dependence function’ of couple (X, Y ). An heuristic interpretation of positive dependence, is that a pair of random variables are positively dependent if large values of one
tend to be associated with large values of the other, and small values of one with small values of the other.If we
are interested mainly in the dependence among small values, it might be interesting to study the joint distribution of (X, Y ) given X and Y ”small”. In the case the notions of large and small values is based on the quantiles
of the distributions, then X and Y are small if they both do not exceed some respective quantiles, and we might
−1
(u) , Y ≤ FY−1 (v), for
be interested in studying the distribution of the joint distribution (X, Y ) given X ≤ FX
some u, v in [0, 1]. An as well as copulas have been introduced to study the dependence of X and Y , it is possible
−1
(u) , Y ≤ FY−1 (v) . Using the
to introduce the copula of the conditional distribution of (X, Y ) given X ≤ FX
rank transformation of X and Y , U = FX (X) and V = FY (Y ), we can introduce the copula of (X, Y ) given
U ≤ u, V ≤ v, which is also the copula of the conditional couple (U, V ) given U ≤ u, V ≤ v.
3
When studying small values among bivariate pairs of random variables, we will introduce the conditional
copula, based on the lower orthant, defined as the copula of (U, V ) |U ≤ u, V ≤ v. Similarly, when studying
large values among bivariate pairs of random variables, we can introduce the conditional copula, based on the
lower orthant, defined as the copula of (U, V ) given U > u, V > v. In that case, it could be more interesting to
study the survival distribution of the copula associated with (U, V ) given U > u, V > v. In the first part of this
paper, we will focus on the expression of this lower orthant conditional copula. We will see how dependence
orderings are transferred to the extremes, and how positive (or negative) dependence of (X, Y ) imply weaker
−1
−1
(u) , Y ≤ FY−1 (v), or X > FX
(u) , Y > FY−1 (v).
positive (or negative) dependence of (X, Y ) given X ≤ FX
P QD or N QD properties of extremes are obtained in the case where FXY is either T P 2 or RR2. This part will
summarize some theoretical results obtained in Charpentier (2003) . We will then study into details the case of
Archimedian copulas in the second part, showing, first, that for any u and v, the conditional copula of (U, V )
given U ≤ u, V ≤ v is still an Archimedian copula. Furthermore, if the Archimedian copula of (X, Y ) has a
factor representation, also has the conditional copula. Furthermore, the only ’invariant’ copula (in the sense
that the conditional copula does not depend on u and v) necessarily belongs to the family of Clayton copulas.
An explicit applications we be detailed, where the distribution of the distribution in the tail is needed , in Credit
Risk. In that case, we study the dependence between two non-independent default times. In that case, if the
copula at time 0, between the two default times was C, it might be interesting to study, at time t > 0, the
dependence structure given the fact that there has been no default between 0 and t, i.e. the conditional copula
of τ X and τ Y given τ X > t and τ Y > t.
The last part will deal with an application of this conditional copula, defining a functional dependence
measure. As done in Venter (2001) and Malevergne and Sornette (2002), it could be interesting to define
a correlation measure which describe the behavior in the tails. Venter introduced a cumulative tau (from
Kendall’s tau), normalized, so that it is equal to 0 in 0 and to 1 in 1 (as Lorentz curve). But from this measure,
it is rather difficult to quantify the dependence in the tails. Malevergne and Sornette (2002) introduced a
conditional correlation, but the correlation is the one introduced by Pearson, which is very sensitive to the
marginal distribution. The conditional correlation we introduce in this paper could be seen as an extension of
the correlation introduced in Malevergne and Sornette (2001), using Spearman’s correlation instead of Pearson’s.
We will see, for example, that this measure of dependence could be used for calibration issues.
2
2.1
Conditional copula : distribution in the tails
Definition
Let (X, Y ) be a random pair with copula C, and continuous marginal c.d.f.’s FX and FY , respectively. For any
(u, v) ∈ DC , the conditional distribution of (U, V ) given U ≤ u, V ≤ v is
F (C, u, v) (x, y) = P (U ≤ x, V ≤ y|U ≤ u, V ≤ v) =
C (x, y)
,
C (u, v)
where 0 ≤ x ≤ u and 0 ≤ y ≤ v. Since marginal distributions of U and V given U ≤ u, V ≤ v are not uniforms,
F (C, u, v) is not a copula.
The marginal c.d.f.’s of the conditional distributions are given by
FX (C, u, v) (x) =
C (x, v)
C (u, y)
and FY (C, u, v) (y) =
, respectively.
C (u, v)
C (u, v)
The copula of the conditional distribution is1
Φ (C, u, v) (x, y) =
−1
−1
C FX (C, u, v) (x) , FY (C, u, v) (y)
C (u, v)
,
where FX (C, u, v)−1 (t) = x if and only if C (x, v) = tC (u, v), and similarly for Let FY (C, u, v)−1 . This copula
is called lower orthant conditional copula.
1 One can notice that Φ (C, u, v) (x, 0) = 0 because F (C, u, v) (0) = C (u, 0) /C (u, v) = 0 and so, F (C, u, v)−1 (0) = 0.
Y
Y
Furthermore Φ (C, u, v) (x, 1) = x because FY (C, u, v) (v) = 1 (so that, FY (C, u, v)−1 (1) = v) and then,
C FX (C, u, v)−1 (x) , v
= FX (C, u, v) FX (C, u, v)−1 (x) = x,
Φ (C, u, v) (x, 1) =
C (u, v)
because FX (C, u, v) (x) = C (x, v) /C (u, v) .
4
Similarly, it is possible to study the dependence for large values. In that case, it is more convenient to study
the dependence in terms of survival copulas. Using expression (1), the survival copula could be obtained from
joint and marginal survival distributions.
For any u, v ∈ ]0, 1], the survival distribution of (U, V ) given U > u, V > v is
G (C, u, v) (x, y) = P (U > x, V > y|U > u, V > v) =
C ∗ (1 − x, 1 − y)
on [u, 1] × [v, 1] .
C ∗ (1 − u, 1 − v)
The marginal survival distributions of the conditional distribution are given by
GX (C, u, v) (x) =
C ∗ (1 − x, 1 − v)
C ∗ (1 − u, 1 − y)
and
, respectively.
G
(C,
u,
v)
(y)
=
Y
C ∗ (1 − u, 1 − v)
C ∗ (1 − u, 1 − v)
The survival copula of the conditional distribution is2
−1
−1
C ∗ 1 − GX (C, u, v) (x) , 1 − GY (C, u, v) (y)
,
Ψ (C ∗ , u, v) (x, y) =
C ∗ (1 − u, 1 − v)
where GX (C, u, v)−1 (t) = x if and only if C ∗ (1 − x, 1 − v) = tC ∗ (1 − u, 1 − v), and similarly for Let GY (C, u, v)−1 .
This copula is called upper orthant conditional copula.
Proposition 1 Let (U, V ) be a pair of uniform variables, with c.d.f. C, and (u, v) ∈ DC . The conditional
copula of (U, V ) given U ≤ u and V ≤ v is
C FX (C, u, v)−1 (x) , FY (C, u, v)−1 (y)
.
Φ (C, u, v) (x, y) =
C (u, v)
Let C ∗ be the survival copula of (U, V ), then, the survival copula of (U, V ) given U > u and V > v is
−1
−1
C ∗ 1 − GX (C, u, v) (x) , 1 − GY (C, u, v) (y)
Ψ (C ∗ , u, v) (x, y) =
.
C ∗ (1 − u, 1 − v)
Example 4 Mixture of Fréchet upper bond C + and independent copula C ⊥ - For all θ in [0, 1], let
Cθ (x, y) = θC + (x, y) + (1 − θ) C ⊥ (x, y), i.e. Cθ (x, y) = θ min (x, y) + (1 − θ) xy, on [0, 1] × [0, 1]. Let (U, V )
be a pair with distribution Cθ for some θ, which could be seen as the following factor model, with factor Z, with
values 0 and 1,
P (U ≤ x, V ≤ y|Z = 1) = C + (x, y) , i.e. V = U with probability θ
P (U ≤ x, V ≤ y|Z = 0) = C ⊥ (x, y) , i.e. V ⊥ U with probability 1 − θ
Let Φ (Cθ , t) denote the copula of the conditional distribution of (U, V ) given U ≤ t and V ≤ t, for all t ∈ ]0, 1].
Then,
θ
P (Z = 1)
P (U ≤ t, V ≤ t|Z = 1) =
= θ (t) ,
P (Z = 1|U ≤ t, V ≤ t) =
P (U ≤ t, V ≤ t)
θ + t − θt
The joint cumulative distribution of (U, V ) given U ≤ t and V ≤ t is given by
F (x, y) = P (U ≤ x, V ≤ y|U ≤ t, V ≤ t) =
Cθ (x, y)
θ min (x, y) + (1 − θ) xy
=
,
Cθ (t, t)
θt + (1 − θ) t2
so that, the marginal cumulative distributions are
FX,t (x, y) = P (U ≤ x|U ≤ t, V ≤ t) =
=
Cθ (x, t)
Cθ (t, t)
θx + (1 − θ) xt
x
= where x ≤ t.
θt + (1 − θ) t2
t
2 Similarly, one can notice that Ψ (C ∗ , u, v) (x, 0) = 0 because G (C ∗ , u, v) (1) = C ∗ (1 − u, 0) /C ∗ (1 − u, 1 − v) = 0 and so,
Y
1−GY (C ∗ , u, v)−1 (0) = 1−1 = 0. Furthermore Φ (C ∗ , u, v) (x, 1) = x because GY (C ∗ , u, v) (v) = 1 (so that, GY (C ∗ , u, v)−1 (1) =
v) and then,
C ∗ 1 − GX (C, u, v)−1 (x) , 1 − v
−1
=
G
(C,
u,
v)
G
(C,
u,
v)
(x)
= x,
Ψ (C ∗ , u, v) (x, 1) =
X
X
C ∗ (1 − u, 1 − v)
because GX (C ∗ , u, v) (x) = C ∗ (1 − x, 1 − v) /C ∗ (1 − u, 1 − v).
5
It comes that the distribution of U given U ≤ t, V ≤ t is uniform on [0, t]. So, F is the copula of the joint
distribution, i.e. Φ (Cθ , t) (x, y) = F (x, y) and could be written
Φ (Cθ , t) (x, y) = θ (t) C + (x, y) + [1 − θ (t)] C ⊥ (x, y) = Cθ(t) (x, y) where θ (t) =
θ
,
θ + t − θt
which is still in the same family of mixtures. And similarly, one can obtain easily that Ψ (Cθ∗ , t) (x, y) =
Φ (Cθ , t) (x, y) for all x, y in [0, 1].
From the results above, we can write similarly, for the lower tail copula,
C (x, v) C (u, y)
C (x, y)
= Φ (C, u, v)
,
, where x, y ∈ [0, u] × [0, v] ,
C (u, v)
C (u, v) C (u, v)
and for the upper tail conditional survival copula
∗
C (1 − x, 1 − v) C ∗ (1 − u, 1 − y)
C ∗ (1 − x, 1 − y)
∗
=
Ψ
(C
,
,
u,
v)
, where x, y ∈ [u, 1] × [v, 1] .
C ∗ (1 − u, 1 − v)
C ∗ (1 − u, 1 − v) C ∗ (1 − u, 1 − v)
One can get easily the following result : let 0 < u, v ≤ 1, and let Γ be a continuous copula, then there is a
copula C such that Γ = Φ (C, u, v). It comes from this proposition that, for a given couple (X, Y ), with given
−1
(u) and Y ≤ FY−1 (v) has no constraint :
threshold u and v, the dependence structure of (X, Y ) given X ≤ FX
it could be any copula from the lower to the upper Fréchet bound.
Remark 1 For some simple transformation, the conditional copula could be obtained easily. For example,
1/α
is a copula, and furthermore, Φ (Γ, u, v) =
let C be a copula, and α in ]0, 1], then Γ (x, y) → C (xα , y α )
1/α
[Φ (C, uα , v α )] .
Remark 2 One can notice that these definitions could be extended from the bivariate case to the multivariate
case, where d ≥ 2.
2.2
Conditional copula on the diagonal : Φ (C, t)
This copula has been introduced recently in copulas literature, such as in Juri and Würthrich (2002). This
conditional copula could be used to define some functional dependence measures. For example, it is possible to
define the ’lower quadrant correlation’ using Spearman’s rho :
Φ (C, t, t) (x, y) dxdy − 4
ρ (C, t) = 12
[0,1]×[0,1]
−1
which is the rank correlation of (X, Y ) given X ≤ FX
(t) and Y ≤ FY−1 (t). But is this conditional copula can
lead to simplified calculations, one can wonder, for practical issues, why X and Y should be lower (or higher)
than the same percentile.
An alternative could be to study Φ (C, t, φ (t)) and Ψ (C ∗ , t, ψ (t)) for some functionals φ and ψ.
Example
5 For example,
let X and Y be two default times, with the joint survival distribution F X,Y (x, y) =
C ∗ F X (x) , F Y (y) , at time t = 0, where C ∗ is the survival copula of (X, Y ). If no defaults occur between time
0 and time t, then, the conditional survival distribution at time t is
C F X (x) , F Y (y)
= Ψ C ∗ , F X (t) , F Y (t) F X (x) , F Y (y) where x, y > t,
P (X > x, Y > y|X > t, Y > t) =
C F X (t) , F Y (t)
where Ψ C ∗ , F X (t) , F Y (t) is the survival copula of the conditional distribution. Studying the temporal
dynamic evolution of the conditional copula, given that at time t no defaults occur, is similar as studying
−1
Ψ (C ∗ , t, ψ (t)) where ψ = F Y ◦ F X . This will developed in Section(2.7).
2.3
Conditional copula on the border : Φ (C, u, 0) or Φ (C, 0, v)
This case is interesting while conditions depends on only one of the two parameter, while studying (X, Y ) |X ≤
−1
(u), or (X, Y ) |Y ≤ FY−1 (v) . This conditional copula in interesting when studying time series : given some
FX
information on the variable of interest at time t (for example, Xt was a ’large’ value, in the sense that Xt was
6
higher than Ft−1 (p) where p is close to 1), it might be interesting to study the dependence between the variable
at time t and the variable at time t + 1.
For example, let (Xt ) be a stochastic process, with marginal cdf at time t Ft . Let Ut = Ft (Xt ) be the
associated rank at time t. Let Ct be the copula of (Ut−1 , Ut ), which represents the dependence structure
between the rank at time t − 1 and the rank at time t. Φ (C, u, 0) represents the dependence structure knowing
that, at time t − 1, the rank was low (at least in the u-th lower percentile), and Ψ (C ∗ , u, 0) represents the
dependence structure knowing that, at time t − 1, the rank was high (at least in the u-th upper percentile). This
could lead to several application to credibility models (if Xt denotes the cost at time t) or financial modeling
(if Xt denotes a financial time series, such as an index price variation).
Example 6 Studying this conditional dependence could be interesting in regime-switching models. For example,
let 0 = p1 < p2 < ... < pn = 1, and (Xt ) be a self-exciting threshold autoregressive (SET AR) process, of order
1, i.e.
Xt+1 = αi + φi Xt + εit if Ft−1 (pi ) ≤ Xt < Ft−1 (pi+1 )
where εit is a sequence of i.i.d. random variables with mean 0, such that εit and εjt are independent for
i = j. In that case, it could be interesting to study the dependence between Xt and Xt+1 given some information
on Xt .
This kind of conditional copulas could be used in the case variables X and Y could be linked by a causal
relationship. This kind of conditional copula would be used in Section (3.3.1), while studying the dependence
between the cost of a claim (the loss) and the allocated expenses. Given some information on the cost, it might
be interesting to get a better understanding of the dependence between the loss and the expenses.
Conditional invariant copulas on [0, 1] × [0, 1]
2.4
A copula C is said to be have invariant conditional copulas if Φ (C, u, v) does not depend on u and v. This
implies that Φ (C, u, v) = C for all u, v in [0, 1] × [0, 1].
Proposition 2 (i) C is an invariant copula if and only if it satisfies the functional equation
C (x, v) C (u, y)
C (x, y)
=C
,
for all x ∈ [0, u] , y ∈ [0, v] , u, v ∈ [0, 1] ,
C (u, v)
C (u, v) C (u, v)
(ii) C is an invariant copula if and only if it satisfies
C1 (1, y)
C2 (x, 1)
C1 (x, y) =
C2 (x, y) , where C1 (x, y) = ∂C (x, y) /∂x and C2 (x, y) = ∂C (x, y) /∂y.
C2 (1, 1)
C1 (1, 1)
Proof. Charpentier (2003) .
−1/θ
Example 7 Let C belong to the family of Clayton copulas, i.e. C (x, y) = x−θ + y −θ − 1
, so that
−1/θ−1
−θ−1
−θ
−θ
C1 (x, y) = x
x +y −1
. It comes that
−1/θ−1 −θ−1 −θ
−1/θ−1 −θ
−1/θ−1 C1 (1, y)
C2 (x, 1)
C1 (x, y) = x−θ
C2 (x, y)
x + y −θ − 1
.x
= x + y −θ − 1
=
C2 (1, 1)
C1 (1, 1)
and so, C is invariant on [0, 1] × [0, 1] .
2.4.1
Conditional invariant copulas on the first diagonal
A weaker condition of invariance could be defined : a copula C is said to be have invariant conditional copulas
on the diagonal if Φ (C, t, t) does not depend on t and t,i.e. Φ (C, t, t) = C for all t in [0, 1].
C (x, t) C (t, y)
C (x, y)
=C
,
for all x, y ∈ [0, t] , t ∈ [0, 1]
C (t, t)
C (t, t) C (t, t)
Proposition 3 (i) C is an invariant copula on the first diagonal if and only if it satisfies the functional equation
C (x, t) C (t, y)
C (x, y)
=C
,
for all x, y ∈ [0, t] , t ∈ [0, 1]
C (t, t)
C (t, t) C (t, t)
(ii) C is an invariant copula on the first diagonal if and only if it satisfies
C2 (x, 1)
C1 (1, y)
C (x, y) = x −
C1 (x, y) + y −
C2 (x, y)
C1 (1, 1) + C2 (1, 1)
C1 (1, 1) + C2 (1, 1)
Proof. Gouriéroux and Monfort (2003) .
7
2.5
Ordering and conditional copulas
In this part, we will study the evolution of the dependence, while working on the conditional copula. For
−1
(u) and Y ≤ FY−1 (v), or
example, if (X, Y ) is ’positively ordered’ we can expect that (X, Y ) given X ≤ FX
−1
−1
X > FX (u) and Y > FY (v), to be also ’positively ordered’. The two following paragraph will define some
positive dependence concepts, the first one being the most popular : the P QD concept. But we will see in the
third part that this concept is not sufficient : if (X, Y ) is P QD, it does not necessarily imply that (X, Y ) given
−1
−1
(u) and Y ≤ FY−1 (v), or X > FX
(u) and Y > FY−1 (v) is P QD. This is the reason why we will
X ≤ FX
introduce a second concept of positive dependence, based on T P 2 functions.
The positive quadrant dependence (P QD), introduced in Lehmann (1966), is a pointwise comparison between
the joint c.d.f. and the product of marginal c.d.f’s, or equivalently, between C and C ⊥ : a pair (X, Y ), with
joint c.d.f. FXY , and marginal c.d.f.’s FX and FY , is said to be P QD if, for all x, y, FXY (x, y) ≥ FX (x) FY (y),
or similarly, that for all u, v ∈ [0, 1], C (u, v) ≥ C ⊥ (u, v) where C is the copula of (X, Y ), and C ⊥ denotes the
independent copula.
One can obtain easily that this definition remains unchanged using survival functions instead of c.d.f. :
(X, Y ) is P QD if and only if F XY (x, y) ≥ F X (x) F Y (y) .
From this concept of positive dependence, it is possible to define the following stochastic ordering : (X1 , Y1 ),
with joint c.d.f. F1 , is said to be smaller than (X2 , Y2 ), with joint c.d.f. F2 , denoted (X1 , Y1 ) P QD (X2 , Y2 ),
or similarly F1 P QD F2 if and only if F1 (x, y) ≥ F2 (x, y) for all x, y.
In the case of bivariate vectors, one can see easily that this condition could be replace equivalently by
F1 (x, y) ≥ F2 (x, y) for all x, y. In the case where the two pairs have the same marginal distributions, i.e.
L
L
X1 = X2 and Y1 = Y2 , (X1 , Y1 ) LCSD (X2 , Y2 ) if and only if C1 (x, y) ≥ C2 (x, y) for all x, y where C1 and C2
denote the copulas of (X1 , Y1 ) and (X2 , Y2 ) respectively.
Analogously, it is possible to define a negative dependence concept as follows, a pair (X, Y ), with joint
c.d.f. FXY , and marginal c.d.f.’s FX and FY , is said to be N QD if, for all x, y, FXY (x, y) ≤ FX (x) FY (y), or
similarly, that for all u, v ∈ [0, 1], C (u, v) ≤ C ⊥ (u, v) where C is the copula of (X, Y ), and C ⊥ denotes the
independent copula.
One can obtain easily that this definition remains unchanged using survival functions instead of c.d.f. :
(X, Y ) is N QD if and only if F XY (x, y) ≤ F X (x) F Y (y) .
The T P 2 concept was introduced in Karlin (1968), and a nonnegative function h is totally positive of order
2 (T P 2) if, for all x1 < x2 , y1 < y2 ,
h (x1 , y1 ) .h (x2 , y2 ) ≥ h (x1 , y2 ) .h (x2 , y1 ) .
This condition could also be written as the determinant of a square matrix of order 23 .Nelsen (1999) defines
from this concept, two notions of dependence : a pair (X, Y ) of continuous random variables with joint c.d.f.
FXY is LCSD (left corner set decreasing) if FXY is T P 2. And (X, Y ) is RCSI (right corner set increasing)
if F XY is T P 2.
One can notice that these notion could be defined equivalently using copula functions instead of c.d.f.
Furthermore, X and Y are LCSD if and only if P (X ≤ x, Y ≤ y|X ≤ x , Y ≤ y ) is non-increasing in x and
y for all x and y. And similarly, X and Y are RCSI if and only if P (X > x, Y > y|X > x , Y > y ) is nondecreasing in x and y for all x and y. These notions of dependence are stronger than the P QD condition, i.e.
if (X, Y ) is LCSD or RCSI, then (X, Y ) is P DQ.
From these definition, it is possible to define an ordering relationship, as done in Szehli (1995) : (X1 , Y1 ),
with joint c.d.f. F1 , is said to be smaller, in the LSCD order, than (X2 , Y2 ), with joint c.d.f. F2 , denoted
(X1 , Y1 ) LCSD (X2 , Y2 ), or similarly F1 LSCD F2 if and only if
F1 (min {x1 , x2 } , min {y1 , y2 }) .F2 (max {x1 , x2 } , max {y1 , y2 }) ≥ F1 (x1 , y2 ) .F2 (x2 , y1 ) ,
for allx1 , x2 , y1 , y2 , And similarly, (X1 , Y1 ), with joint c.d.f. F1 , is said to be smaller, in the RCSI order, than
(X2 , Y2 ), with joint c.d.f. F2 , denoted (X1 , Y1 ) RSCI (X2 , Y2 ), or similarly F1 RSCI F2 if and only if
F 1 (x1 , y1 ) .F 2 (x2 , y2 ) ≥ F 1 (x1 , y2 ) .F 2 (x2 , y1 )
for all x1 ≤ x2 , y ≤ y2 .
F 1 (x1 , y1 ) .F 2 (x2 , y2 ) ≥ F 1 (x2 , y1 ) .F 2 (x1 , y2 )
L
L
In the case where the two pairs have the same marginal distributions, i.e. X1 = X2 and Y1 = Y2 ,
3 i.e.
det
h (x1 , y1 )
h (x1 , y2 )
h (x2 , y1 )
h (x2 , y2 )
≥ 0, for all x1 < x2 , y1 < y2
8
(X1 , Y1 ) LCSD (X2 , Y2 ) if and only if
C1 (x1 , y1 ) .C2 (x2 , y2 ) ≥ C1 (x1 , y2 ) .C2 (x2 , y1 )
C1 (x1 , y1 ) .C2 (x2 , y2 ) ≥ C1 (x2 , y1 ) .C2 (x1 , y2 )
for all x1 ≤ x2 , y ≤ y2 in [0, 1] ,
where C1 and C2 denote the copulas of (X1 , Y1 ) and (X2 , Y2 ) respectively, and (X1 , Y1 ) LCSD (X2 , Y2 ) if and
only if
C 1 (x1 , y1 ) .C 2 (x2 , y2 ) ≥ C 1 (x1 , y2 ) .C 2 (x2 , y1 )
for all x1 ≤ x2 , y ≤ y2 in [0, 1] .
C 1 (x1 , y1 ) .C 2 (x2 , y2 ) ≥ C 1 (x2 , y1 ) .C 2 (x1 , y2 )
Remark 3 The fact that, if (X, Y ) is LCSD or RCSI, then (X, Y ) is P DQ, could be extended to the stochastic
orderings defined above : if (X1 , Y1 ) LCSD (X2 , Y2 ) or (X1 , Y1 ) RSCI (X2 , Y2 ), then (X1 , Y1 ) P QD (X2 , Y2 ) .
Analogously, it is possible to define a negative dependence concept as follows : a nonnegative function h is
reverse regular of order 2 (RR2) if, for all x1 < x2 , y1 < y2 ,
h (x1 , y1 ) .h (x2 , y2 ) ≤ h (x1 , y2 ) .h (x2 , y1 ) .
And similarly, it is possible to extend the notions of RSCI and LSCD orderings as follows, for negative
dependence : a pair (X, Y ) of continuous random variables with joint c.d.f. FXY is LCSI (left corner set
increasing) if FXY is RR2; and (X, Y ) is RCSD (right corner set decreasing) if F XY is RR2.
Proposition 4 (i) If (X, Y ) is LSCD, then, for any (u, v) ∈ DC , Φ (C, u, v) (x, y) ≥ C ⊥ (x, y) for all x, y ∈
[0, 1].
(ii) If (X, Y ) is RCSI, then, for any (u, v) ∈ DC , Ψ (C ∗ , u, v) (x, y) ≥ C ⊥ (x, y) for all x, y ∈ [0, 1]
(iii) If (X, Y ) is LSCI, then, for any (u, v) ∈ DC , Φ (C, u, v) (x, y) ≤ C ⊥ (x, y) for all x, y ∈ [0, 1].
(iv) If (X, Y ) is RCSD, then, for any (u, v) ∈ DC , Ψ (C ∗ , u, v) (x, y) ≤ C ⊥ (x, y) for all x, y ∈ [0, 1].
Proof. Charpentier (2003) .
2.6
Case of Archimedian copulas
An Archimedian copula is defined by
C (u, v) = φ−1 (φ (u) + φ (v))
where φ is a convex, decreasing function on ]0, 1] such that φ (1) = 0. φ is called the generator of the copula. The
generator of an Archimedian copula is defined up to a scale function. More precisely, two Archimedian copulas
are equal if and only if there exists a constant c > 0, such that φ = cφ
(Schweizer and
with generators φ and φ
Sklar (1983) 4 ).
Example 8 Gumbel copula (Gumbel (1960))- It is defined by
1/θ θ
θ
C (u, v) = exp − (− log u) + (− log v)
,
θ
and corresponds to the generator φ (t) = [− ln t] , where θ ≥ 1.
Example 9 Clayton copula (Kimeldorf, Sampson (1975) and Clayton (1978)) - It is defined by
−1/θ
,
C (u, v) = u−θ + v −θ − 1
and correspond to the generator φ (t) = t−θ − 1, where θ ≥ 0.
4 This result has been obtained for Archimedian binary operation in probabilistic metric spaces (Theorem 5.4.8). The proof
remains unchanged for Archimedian copulas.
9
2.6.1
Conditional copula
Proposition 5 The conditional copula of and Archimedian copula with generator φ is also an Archimedian
copula, with generator
φu,v (t) = φ (tC (u, v)) − φ (C (u, v)) .
Proof. Charpentier (2003) .
Example 10 Gumbel copula - Gumbel copulas C have generator φ (t) = [− ln t]θ where θ ≥ 0. For any
0 < u, v < 1, the corresponding conditional copula has generator
θ
φu,v (t) = M 1/θ − ln t − M where M = [− ln u]θ + [− ln v]θ .
Example 11 Frank copula - Frank copulas C have generator φ (t) = − ln [(exp (−θt) − 1) / ((exp (−θ) − 1))]
where θ ∈ R\ {0}. For any 0 < u, v < 1, the corresponding conditional copula has generator
φu,v (t) = ln
M
(exp (−θu) − 1) (exp (−θv) − 1)
where M =
exp (−θ) − 1
(1 + M )t − 1
Example 12 Clayton copula - Clayton copulas C have generator φ (t) = t−θ − 1 where θ ∈ R\ {0}. For any
0 < u, v < 1, the corresponding conditional copula is Φ (C, u, v) (x, y) = C (x, y) .
As noticed in the Example above, conditional copulas of Clayton copulas do not depend on u, v.
2.6.2
Factor representation - frailty models
A wide class of Archimedian copulas admit a factor representation. Let us assume that X and Y are independent,
conditionally on Z, a positive random variable, such that
P (X ≤ x|Z) = GX (x)Z and P (Y ≤ y|Z) = GY (y)Z ,
where GX and GY are cdf’s. The joint cdf of couple (X, Y ) is given by
FX,Y (x, y) = E (P (X ≤ x, Y ≤ y|Z)) = E (P (X ≤ x|Z) P (Y ≤ y|Z))
= E GX (x)Z GY (y)Z = E (exp [−Z (− log GX (x))] exp [−Z (− log GY (y))])
= ψ (− log GX (x) − log GY (y)) ,
where ψ is the Laplace transform of the distribution of Z, i.e. ψ (t) = E (exp (−tZ)) .
Because the marginal cdf of X and Y are given respectively by
FX (x) = FX,Y (x, +∞) = ψ (− log GX (x)) and FY (y) = ψ (− log GY (y)) ,
the copula of (X, Y ) is
−1
C (u, v) = FXY FX
(u) , FY−1 (v) = ψ ψ−1 (u) + ψ−1 (v)
This copula is an Archimedian copula with generator φ = ψ−1 .
Example 13 Gumbel
- Gumbel copulas could be obtained when factor Z has its Laplace transform
copula
equal to ψ (t) = exp −t1/θ .
Example 14 Clayton copula - Clayton copulas are obtained when the heterogeneity factor Z has a Laplace
transform equal to ψ (t) = [1 − t]−1/θ . The heterogeneity distribution is a Gamma distribution with degrees of
freedom 1/θ.
Proposition 6 Let us consider (X, Y ) with Archimedian copula, with f, and let ψ denote the Laplace transform
−1
(u) , Y ≤ FY−1 (v) is an Archimedian
of the heterogeneity factor. Let 0 < u, v ≤ 1, and (X, Y ) given X ≤ FX
copula with a factor representation, where the factor has Laplace transform
ψ t + ψ−1 (u) + ψ−1 (v)
.
ψuv (t) =
ψ ψ−1 (u) + ψ−1 (v)
Proof. Charpentier (2003) .
10
2.6.3
Invariant copulas
Proposition
7 φ is the generator of an invariant copula if and only if, for all c, t in [0, 1] × ]0, 1], φ (t) =
γ tθ − 1 , i.e. an Archimedian copula C is invariant if and only if C is a Clayton copula.
Proof. Charpentier (2003) .
A weaker condition of invariance could be defined : a copula C is said to be have invariant conditional
copulas on the diagonal if Φ (C, t, t) does not depend on t and t,i.e. Φ (C, t, t) = C for all t in [0, 1].
Similarly, this notion could be extended to upper orthant dependence, and we can study copulas C such
that Ψ (C ∗ , t, t) = C ∗ for all t in [0, 1].
Proposition 8 An Archimedian copula C is invariant on the diagonal if and only if C is a Clayton copula
Proof. Charpentier (2003) .
2.6.4
Dynamic evolution and dependence orderings
In the case we consider the evolution if Φ (C, t) - that is the evolution of the conditional copula on the diagonal one can expect a dependence structure which is all the more positively dependent as t decreases, or similarly, all
the less dependent.In the first case, if 0 ≤ t2 ≤ t1 ≤ 1, Φ (C, t1 ) Φ (C, t2 ), in the sense that Φ (C, t1 ) (x, y) ≤
Φ (C, t2 ) (x, y) for all x, y in [0, 1] × [0, 1]. In the general case, it is difficult to obtain analytical interpretation
of that result, but some properties could be obtained in the case of Archimedian copula.
Proposition 9 Let t1 and t2 such that 0 ≤ t2 ≤ t1 ≤ 1, and let C be an Archimedian copula with generator φ.
Let
C1 −1
C2 −1
φ (x + φ (C2 )) − φ (C1 ) and f21 (x) = φ
φ (x + φ (C1 )) − φ (C2 ) ,
f12 (x) = φ
C2
C1
where C1 = C (t1 , t1 ) and C2 = C (t2 , t2 ). Then
(i) Φ (C, t2 ) (x, y) ≤ Φ (C, t1 ) (x, y) for all x, y in [0, 1] × [0, 1] if and only if f21 (x)is sudadditive,
(ii) Φ (C, t2 ) (x, y) ≥ Φ (C, t1 ) (x, y) for all x, y in [0, 1] × [0, 1] if and only if f12 (x)is sudadditive.
Proof. Charpentier (2003).
Example 15 The case of Clayton copulas could be seen as a limiting case, in the sense that φ (t) = t−θ − 1
and so,
f12 (x) = ax + b where a = C1θ /C2θ .
In the case were φ is twice differentiable, a sufficient condition for uniform ordering of conditional copula is
the following.
Lemma 10 If φ is twice differentiable, let ψ (x) = log −φ (t),
(i) If ψ is concave on ]0, 1], then Φ (C, t2 ) (x, y) ≤ Φ (C, t1 ) (x, y) for all x, y in [0, 1] × [0, 1], for all 0 ≤ t2 ≤
t1 ≤ 1.
(ii) Similarly, if ψ (x) is convex on ]0, 1], then Φ (C, t2 ) (x, y) ≥ Φ (C, t1 ) (x, y) for all x, y in [0, 1] × [0, 1],
for all 0 ≤ t2 ≤ t1 ≤ 1.
Proof. Charpentier (2003) .
Example 16 Let C be a Ali-Mikhail-Haq copula, with generator φ (x) = log (1 − θ (1 − x)) − log x. Then
1
1
θ
θ
φ (x) =
− and ψ (x) = log
−
1 − θ (1 − x) x
x 1 − θ (1 − x)
One gets that
ψ (x) =
=
φ (x) φ (x) − φ (x)2
2
θ3 x3 − (1 − θ (1 − x))3
= φ (x)2
φ (x)2
x3 (1 − θ (1 − x))3
−2 (1 − θ) 3θ2 x2 + 3θ (1 − θ) x + (1 − θ)2
2
φ (x)
3
x3 (1 − θ (1 − x))
which has the opposite sign of 3θ2 x2 +3θ (1 − θ) x+(1 − θ)2 (θ ≤ 1) which is positive. So, finally, ψ is a concave
function on [0, 1], and so Φ (C, t2 ) (x, y) ≤ Φ (C, t1 ) (x, y) for all x, y in [0, 1] × [0, 1], for all 0 ≤ t2 ≤ t1 ≤ 1 :
(X, Y ) given X ≤ FX (t) and Y ≤ FY (t) is less and less positively dependent, as t decreases towards 0.
11
Example 17 Let C be the copula given by 4.2.19 in Nelsen (1999), that is with generator φ (x) = exp (θ/x) −
exp (θ). Then, for all t1 and t2 such that 0 ≤ t2 ≤ t1 ≤ 1, and let Ci = θ/ log [2 exp (θ/ti ) − exp (θ)] where
i = 1, 2. One gets
log [2 exp (θ/t1 ) − exp (θ)]
log (x + 2 exp (θ/t2 ) − exp (θ)) − 2 exp (θ/t1 ) + exp (θ)
f12 (x) = exp
log [2 exp (θ/t2 ) − exp (θ)]
Let α, β and γ such that
f12 (x) = exp (α log (x + β)) − γ
which gives, derivating two times with respect to x,
d2
α (α − 1)
f12 (x) =
exp (α log (x + β))
dx2
(x + β)2
Because t2 ≤ t1 , C1 (t1 , t1 ) /C2 (t2 , t2 ) ≥ 1, α (α − 1) ≥ 0, and then d2 f12 (x) /dx2 ≥ 0 and f12 (x) is concave.
Hence, because f12 (0) = 0 and f12 (x) is convex, then f12 (x) is subadditive. For all t1 and t2 such that
0 ≤ t2 ≤ t1 ≤ 1, f12 (x) is subadditive :(X, Y ) given X ≤ FX (t) and Y ≤ FY (t) is more and more positively
dependent, as t decreases towards 0.
One can notice that this case is an application of Lemma (10) :
θ
θ
θ
φ (x) = − 2 exp
and ψ (x) = log −φ (t) = + log θ − 2 log x
x
x
x
which is a convex function on [0, 1], and so Φ (C, t2 ) (x, y) ≥ Φ (C, t1 ) (x, y) for all x, y in [0, 1] × [0, 1], for all
0 ≤ t2 ≤ t1 ≤ 1.
Example 18 Let C be a copula in the Gumbel-Barnett family, that is φ (x) = log (1 − θ log x). Then
φ (x) =
−θ
and ψ (x) = log θ − log x − log (1 − θ log x)
x (1 − θ log x)
which is a convex function on [0, 1], and so Φ (C, t2 ) (x, y) ≥ Φ (C, t1 ) (x, y) for all x, y in [0, 1] × [0, 1], for all
0 ≤ t2 ≤ t1 ≤ 1. In that case (X, Y ) given X ≤ FX (t) and Y ≤ FY (t) is more and more positively dependent
as t decreases towards 0 should be understood as (X, Y ) given X ≤ FX (t) and Y ≤ FY (t) is less and less
negatively dependent as t decreases towards 0. This is a direct implication of the fact that the conditional copula
of a Gumbel-Barnett copula remains in this family, with a smaller parameter.
Example 19 Let C be a Frank copula, with generator φ (x) = − log [(exp (−θx) − 1) / (exp (−θ) − 1)], then
φ (t) =
θ exp (−θx)
and ψ (t) = log θ − θx − log (1 − exp (−θx))
exp (−θx) − 1
which satisfies ψ (x) = −θ2 exp (−θx) / [exp (−θx) − 1]2 ≤ 0 : ψ is concave, and so Φ (C, t2 ) (x, y) ≤ Φ (C, t1 ) (x, y)
for all x, y in [0, 1] × [0, 1], for all 0 ≤ t2 ≤ t1 ≤ 1.
Example 20 Let C be a Clayton copula, with generator φ (x) = x−θ − 1, then
φ (x) = −θx−θ−1 and ψ (x) = log θ − (1 + θ) log x
which is convex and so Φ (C, t2 ) (x, y) ≤ Φ (C, t1 ) (x, y) for all x, y in [0, 1] × [0, 1], for all 0 ≤ t2 ≤ t1 ≤ 1. In
that particular case, because Clayton copulas are invariant, one could have expect a linear functional for ψ, that
is both convex and concave.
Example 21 Let C be a Gumbel copula, with generator φ (x) = (− log x)θ , θ ≥ 1, then
φ (x) = −θ (− log x)θ−1 /x, and ψ (x) = log θ − log x + (θ − 1) log (log [−x])
This function being twice differentiable, one gets
ψ (x) =
(log x)2 − [θ − 1] log x − [θ − 1]
x2 [log x]2
12
=
h (log x)
x2 [log x]2
where h (y) = y 2 − [θ − 1] y − [θ − 1] this polynomial has two (real) roots, and one is negative5 . So finally,
ψ (x) ≤ 0 on ]0, x0 ]and ψ (x) ≥ 0 on [x0 , 1] for some x0 : ψ is neither concave nor convex. In that particular
case, φ−1 (x) = exp −x1/θ , so that
1/θ 1
θ
φt1 (x) = (− log C1 x)θ − (− log C1 )θ and φ−1
(x)
=
exp
−
x
+
(−
log
C
)
1
t1
C1
Then, for all 0 < t2 ≤ t1 ≤ 1,
1/θ θ
C2
θ
θ
f21 (x) = − log
exp − x + (− log C1 )
− (− log C2 )
C1
In the case where t1 = 1, then C1 = 1, and
f.2 (x) = (x − log C2 )θ − (− log C2 )θ = (x − α)θ − αθ with α ≥ 0,
which satisfies
θ−2
f.2
(x) = θ (θ − 1) (x + α)
with x, α ≥ 0
Because θ ≥ 1, this function is positive, that is f.2 is concave. Furthermore f.1 (0) = 0, and so, f.1 is subadditive
: Φ (C, t) (x, y) ≤ C (x, y) for all x, y in [0, 1] × [0, 1], for all 0 ≤ t ≤ 1. (X, Y ) given X ≤ FX (t) and Y ≤ FY (t)
is less positively dependent than (X, Y ).
2.6.5
Asymptotic behavior of Φ (C, u, v) when u, v → 0
In extreme value theory, the rate of decay at infinity (or at zero) of a function, or the fatness of the tails is
usually expressed through an index of regular variation at infinity (or at zero).
Definition 1 A function f : ]0, +[ → ]0, +[ is called regularly varying at 0 with index ρ ∈ R, if for any x > 0,
lim
t→0
f (tx)
= xρ ,
f (t)
and f belongs to R0ρ . In the case where ρ = 0, the function is said to be slow varying at 0.
Nelsen (1999) studied some Archimedian copulas whose generator are regularly varying.
Example 22 Gumbel copula - Gumbel copulas have generator φ (t) = [− ln t]θ where θ ≥ 0, which belong to
R00 ..
Example 23 Clayton copula - Clayton copulas C have generator φ (t) = t−θ − 1 where θ > 0 which belong
to R0−θ .
Proposition 11 Let C be an Archimedian copula with differentiable generator φ ∈ R0−α , where 0 ≤ α ≤ +∞.
Then, for all 0 ≤ x, y ≤ 1,
lim Φ (C, u, v) (x, y) = Γα (x, y) ,
u,v→0
where Γα is Clayton copula with parameter α. The limit above is obtain when either u or v tend to 0.
In the case where α = 0, then Φ (C, u, v) converges to C ⊥ (limit case of Clayton copula when α → 0).
In the case where α = +∞, then Φ (C, u, v) converges to C + (limit case of Clayton copula when α → −∞).
Proof. Charpentier (2003) .
Remark 4 This result is an extension of the result obtained by Juri and Würthrich (2002) .
Proposition 12 Let C be an Archimedian copula with generator φ. The limiting case limu,v→0 Φ (C, u, v) (x, y) =
C + (x, y) is obtained if and only if φ satisfies
lim
x→0
φ (xt) − φ (x)
= 0 for all t in [0, 1] .
xφ (xt)
Proof. This proposition is obtained using a proposition due to Genest and MacKay (1986), stating that if
φn is a sequence of generator, and Cn the sequence of associated copulas, then limu,v→0 Cn (x, y) = C + (x, y) if
and only if φn (t) /φn (t) → 0 when n → ∞ for all t in [0, 1] .
5 Because
∆ = (θ − 1) (θ + 3) ≥ 0 and the product of the roots is − (θ + 1) ≤ 0.
13
2.6.6
Generalized Archimedian copulas with factor representation
In that case, there is a factor Z such that the copula of (X, Y ) given Z = z does not depend on z, and is
an extreme value copula C, whereas the conditional marginal distribution depend on Z, i.e. P (X ≤ x|Z) =
GX (x)Z , with a similar expression for Y . In that case, the copula of (X, Y ) is a function of ψ, the Laplace
transform of Z, and C, given by
CXY (x, y) = ψ − log C exp −ψ−1 (x) , exp −ψ−1 (y)
(2)
−1
In the case where C = C ⊥ , then CXY is an Archimedian copula with generator ψ . We will see with the
Theorem below that, in the case where the copula of (X, Y ) is a generalized Archimedian copula, with a factor
−1
(u) , Y ≤ FY−1 (v) is also function
representation, then, for all u, v in [0, 1], the copula of (X, Y ) given X ≤ FX
∗
of a Laplace transform ψ and a copula C ∗
∗
CXY
(x, y) = ψ∗ − log C ∗ exp −ψ∗−1 (x) , exp −ψ∗−1 (y)
(3)
−1
And furthermore, the copula of (X, Y ) given X ≤ FX
(u) , Y ≤ FY−1 (v) is a generalized Archimedian copula,
∗
with a factor representation if and only if the copula C is an extreme value copula.
In the case of Archimedian copula with a factor representation, then the Proposition below proves that the
−1
(u) , Y ≤ FY−1 (v) is a also an Archimedian copula with a factor representation.
copula of (X, Y ) given X ≤ FX
Proposition 13 Let us consider (X, Y ) with a generalized Archimedian copula, with a factor representation,
and let ψ denote the Laplace transform of the heterogeneity factor, C denote the underlying copula, and GX
and GY the ’marginal parameters’.
−1
(u) , Y ≤ FY−1 (v) is
(1) Let 0 < u, v ≤ 1, then, the copula of (X, Y ) given X ≤ FX
∗
CXY
(x, y) = ψ∗ − log C ∗ exp −ψ∗−1 (x) , exp −ψ∗−1 (y)
where
ψ∗ (t) = ψ (t + α) /ψ (α) where α = − log (C (u∗ , v∗ )) and u∗ =
- ψ∗ is the following Laplace
−1 transform
−1
∗
−1
∗
(u) , Y ≤ FY−1 (v)
exp −ψ (u) , v = exp −ψ (v) : ψ is the Laplace transform of Z given X ≤ FX
z
−1
−1
−1
- P X ≤ x|X ≤ FX (u) , Y ≤ FY (v) , Z = z = G∗X (x) and P Y ≤ y|X ≤ FX (u) , Y ≤ FY−1 (v) , Z = z =
z
G∗Y (y) where
C (GX (x) , v∗ )
C (u∗ , GY (y))
∗
and
G
G∗X (x) =
(y)
=
Y
C (u∗ , v∗ )
C (u∗ , v∗ )
∗
- C is the following copula
∗−1
∗−1 C GX G∗−1
C GX G∗−1
(y)
(y)
X (x) , GY GY
X (x) , GY GY
∗
−1
−1 =
C (x, y) =
∗ , v∗ )
C
(u
C GX FX (u) , GY FY (v)
−1
(2) Furthermore, (X, Y ) given X ≤ FX
(u) , Y ≤ FY−1 (v) has a generalized Archimedian copula with a
∗
factor representation if and only if C is an extreme value copula. That is, C ∗ is the copula of (X, Y ) given
k
−1
X ≤ FX
(u) , Y ≤ FY−1 (v) and Z = z for all z if and only if C ∗ xk , y k = C ∗ (x, y) for all k > 0 and x, y in
[0, 1].
Proof. Charpentier (2003) .
1/θ θ
θ
Example 24 Let C be a Gumbel copula, such that C (x, y) = exp − (− log x) + (− log y)
.
1/θ θ θ
C ∗ (x, y) = exp α1/θ − − log x + α1/θ + − log y + α1/θ − α
θ
θ
where α = (− log u∗ ) + (− log v∗ ) . Then, one can notice that,
1/θ θ θ
∗
z z
1/θ
1/θ
1/θ
C (x , y ) = exp α − −z log x + α
+ −z log y + α
−α
while
∗
z
1/θ
C (x, y) = exp z α
−
1/θ
− log x + α
θ
1/θ
+ − log y + α
θ
1/θ −α
z
and, given x, y and α > 0,, f (z) = C ∗ (x, y) − C ∗ (xz , y z ) = 0 only when z = 1 : in that case, C ∗ is not an
extreme copula.
14
As well as Archimedian copulas are stable (in the sense that conditional copulas are still Archimedian copula),
those copulas define a stable family of copulas.
Example 25 Let us consider some insurance claims, where Xt denotes the cost for year t, and Xt+1 the cost for
year t+1, for one contract. Let Θ be a random variable, which denotes the risk variable, such that, for, given Θ =
θ
θ, the survival copula of (Xt , Xt+1 ) is C ∗ , and such that P (Xt > x|Θ = θ) = Gt (x) and P (Xt+1 > x|Θ = θ) =
θ
Gt+1 (x) . Let ψ denote the Laplace transform of the risk variable Θ, then the unconditional copula of (Xt ; Xt+1 )
is given by (2). Furthermore, it is possible to focus on ’bad’ contracts of year t, and study the dependence, for
those contract, between Xt and Xt+1 , which the copula of (Xt , Xt+1 ), given Xt > Ft−1 (p) where p ∈ [0, 1], and
is obtained as was obtained (3).
2.7
Applications of conditional copulas
Let us define the default time of a firm by, for any i = 1, .., n (for a portfolio of n firms) τ i = inf {t, γ i (t) ≤ Ui },
where the default trigger variables Ui are defined on [0, 1], and γ i (t) are the default countdown processes,
defined as
t
λi (s) ds
γ i (t) = exp −
0
where λi are non-negative continuous processes, called default intensity processes. For example, as shown in
Lando (1998), the time of default of a Cox process with intensity λi (t) can be written
t
λi (s) ds ≥ θ i
τ i = inf t,
0
where θ i is a unit exponential random variable, independent of the default intensity process λi (t) (this setup is
equivalent to taking Ui uniform on [0, 1]).
If the survival copula, between two default times τ X and τ Y at time t = 0 is C ∗ , and if no defaults occur
between time 0 and time t, the conditional survival copula at time t is not necessarily C ∗ , as noticed in Giesecke
(2001) and Jouanin (2003). In fact, the joint survival distribution is
C ∗ F X (h + x) , F Y (h + y)
where x, y ≥ 0.
P (X > h + x, Y > h + y|X, Y > h) =
C ∗ F X (h) , F Y (h)
so that the copula of (X, Y ) given X, Y > h is then Ψ C ∗ , F X (h) , F Y (h) .
Remark 5 As shown in Charpentier (2003), the choice of the starting time is arbitrary, and does not influence.... More precisely, Charpentier (2003) proved that, if C is a copula, and u, v in ]0, 1], then
{Φ (C, u , v ) , 0 ≤ u ≤ u, 0 ≤ v ≤ v} = {Φ (Φ (C, u, v) , u , v ) , 0 ≤ u ≤ u, 0 ≤ v ≤ v}
which could be written, in terms of credit risk models as follows : if no defaults occur between time T and time
T + h, the copula at time T + h is
Ψ C ∗ , F X (T + h) , F Y (T + h) = Ψ Ψ C ∗ , F X (T ) , F Y (T ) , F X|X,Y >T (T + h) , F Y |Y,Y >T (T + h)
where F X|X,Y >T and F Y |X,Y >T denote respectively the survival distribution of X and Y given X, Y > T . The
dynamic evolution of the dependence structure remains unchanged, starting at time 0 or at time T .
Example 26 Let τ X and τ Y be exponentially distributed, i.e. P (τ X > t) = exp (−t) while P (τ Y > t) =
exp (−2t) . Let us assume that at time t = 0, the copula of τ X and τ Y is a Gumbel copula. The graphs
below show the evolution of the joint τ X and τ Y at time t = 1, 1 and 3, if no defaults occur between 0 and t.
15
The graphs on the left are the initial shape of the distribution (at time 0),
16
Example 27 Following the example above, we assume now that,initially, the copula of (τ X , τ Y ) was a AliMikhail-Haq copula.
17
3
Conditional rank correlations : tail dependence measure
3.1
Definition of the conditional rank correlations
Some classical measures of dependence, such as Spearman’s rho or Kendall’s tau, have initially been defined
using the notion of ’concordance’. An heuristic interpretation of concordance is that a pair of random variables
are concordant if large values of one tend to be associated with large values of the other, and small values of
one with small values of the other : (Xi , Yi ) and (Xj , Yj ) are concordant if (Xi − Xj ) (Yi − Yj ) > 0. Similarly,
(Xi , Yi ) and (Xj , Yj ) are said discordant if (Xi − Xj ) (Yi − Yj ) < 0. As shown in Nelsen (1999) it is also possible
to defined some risk measures using copulas. Kendall’s tau of couple (X, Y ) is defined as
τ (X, Y ) =
C (u, v) dC (u, v) = τ (C)
[0,1]2
as in Schweizer and Wolf (1981) . Spearman’s rho could be defined using the independent copula C ⊥ (u, v) = uv,
as6
ρ (X, Y ) = 12
uvdC (u, v) − 3 = 12
C (u, v) dudv − 3 = ρ (C)
[0,1]2
[0,1]2
Using these two definitions, one can notice that those two measures of dependence depend only on the copula
C, and not on marginal distributions FX and FY . This implies that, if φ and ψ are strictly increasing functions
on the range of X and Y , then ρ (X, Y ) = ρ (φ (X) , ψ (Y )) and τ (X, Y ) = τ (φ (X) , ψ (Y )).
Another classical measure of dependence is Pearson’s (linear ) correlation, which is defined as follows,
cov (X, Y )
V ar (X) V ar (Y )
r (X, Y ) = where the variances are defined by
V ar (X) = E X 2 − E (X)2 =
R
x2 dFX (x) −
R
2
xdFX (x)
and, as shown in Hottelling (1940)
cov (X, Y ) = E (XY )−E (X) E (Y ) =
R2
[FXY (x, y) − FX (x) FY (y)] dxdy =
6 Using
[0,1]
2
−1
[C (u, v) − uv] dFX
(u) dFY−1 (v)
this expression, Spearman’s rho could be seen as the ”scaled” volume under the graph of the copula (volume under the
graph of the copula and over the unit square). Spearman’s rho is also proportional to the signed volume between the graphs of the
copula C and the product copula C ⊥ . Thus, as notice in Nelsen (1999) is a measure of ”average distance” between the distribution
of (X, Y ) and independence.
18
This coefficient has not simple expression and depends on the copula C but also on marginal distributions.
This means that, if φ and ψ are strictly increasing functions on the range of X and Y , then r (X, Y ) and
r (φ (X) , ψ (Y )) have no reason to be equal (except in some cases where φ and ψ are increasing linear function).
As mentioned before, because variables X and Y are continuous, U = FX (X) and V = FY (Y ) are uniform
on [0, 1] (so that U and V have mean 1/2 and variance 1/12) and then,
E (UV ) − E (U) E (V )
C (u, v) dudv−3 = 12E (UV )−3 = = r (U, V ) = r (FX (X) , FY (Y ))
ρ (X, Y ) = 12
V ar (U) V ar (V )
[0,1]2
Spearman’s rho is equal to Pearson’s correlation between FX (X) and FY (Y ). This relationship gives a simple
way to estimate the measure using the ranks of sample : Spearman’s correlation is the correlation (Pearson’s
linear correlation) of the ranks, i.e. the sample expression of Spearman’s rho is the (linear ) correlation of the
pairs (Ri , Si ) where Rj is the rank of Xj among the sample X1 , ..., Xn , and Sj is the rank of Yj among the
sample Y1 , ..., Yn .
If τ (X, Y ) = +1 or ρ (X, Y ) = +1 then. Conversely, let FX and FY marginal c.d.f. , then there is a couple
(X, Y ) with marginal c.d.f. FX and FY such that ρ (X, Y ) = +1. This property does not stand of Person’s
correlation : given two marginal c.d.f. there is usually no couple (X, Y ) such that r (X, Y ) = +1.
Example 28 Let X and Y have respectively log-normal distributions LN (0, 1) and LN 0, σ2 . As shown in
Wang (1997), Pearson’s correlation r (X, Y ) is bounded as follows
exp (−σ) − 1
exp (σ) − 1
r (X, Y ) ≤ r (X, Y ) ≤ r (X, Y ) where r (X, Y ) = and r (X, Y ) = √
√
2
exp (σ ) − 1 e − 1
exp (σ2 ) − 1 e − 1
For example, if σ2 = 3, then the bounds are −0.008 and 0.16, while ρ (X, Y ) and τ (X, Y ) can take any value
from −1 to +1.
Definition 2 The conditional rank correlation of couple (U, V ) with copula C, based on Spearman’s rho, is
defined on [0, 1] × [0, 1] by
ρ (C, u, v) = ρ (C, [0, u] , [0, v]) : (u, v) → ρ (U, V |U ≤ u, V ≤ v) .
The functional measure of dependence7 will also be called ”lower tail conditional rank correlation”, on [0, u] ×
[0, v], or ”truncated rank correlation”, and could be denoted ρ(C, u, v) . Similarly, one can define easily the ”upper
tail conditional rank correlation”, on [u, 1] × [v, 1], as
ρ (C, u, v) = ρ (C, [u, 1] , [v, 1]) : (u, v) → ρ (U, V |U > u, V > v) .
As mentioned before, if Spearman’s correlation could be written using the copula of (U, V ), then, these tail
rank correlations could be written using conditional copulas :
Φ (C, u, v) (x, y) dxdy − 3 and ρ (C, u, v) = 12
Ψ (C ∗ , u, v) (x, y) dxdy − 3
ρ (C, u, v) = 12
[0,1]2
[0,1]2
Remark 6 Malevergne and Sornette (2002) have studied a function that is called ”conditional rank correlation”,
defined as v → r (U, V |V > v).
From the properties of the conditional copulas, one could obtain several interesting results on these tail
conditional rank correlations. For example,
Proposition 14 If (X, Y) is PQD, then $\underline{\rho}(u, v) = \underline{\rho}([0, u] \times [0, v]) \ge 0$ for all 0 < u, v ≤ 1.
$^{7}$ The function is based on the conditional distribution of (U, V) given (U, V) ∈ [0, u] × [0, v], but it can also be written with the conditional distribution of (X, Y):
\[
\underline{\rho}(C, u, v) = \rho(U, V \mid 0 \le U \le u,\, 0 \le V \le v)
= \rho\!\left(U, V \mid X \le F_X^{-1}(u),\, Y \le F_Y^{-1}(v)\right)
= \rho(X, Y \mid 0 \le U \le u,\, 0 \le V \le v)
= \rho\!\left(X, Y \mid X \le F_X^{-1}(u),\, Y \le F_Y^{-1}(v)\right).
\]
Note, however, that $\underline{\rho}(C, u, v) = \rho(U, V \mid 0 \le U \le u, 0 \le V \le v)$ should not be confused with the linear correlation $r(U, V \mid 0 \le U \le u, 0 \le V \le v)$ of the truncated couple.
A natural estimate of Spearman’s rho is to consider the correlation (Pearson’s linear correlation) of the
ranks. Similarly, a natural estimate of the tail conditional correlations is given by the correlation of the ranks of
(U, V) given either U ≤ u and V ≤ v, or U > u and V > v. Let (X_1, Y_1), ..., (X_n, Y_n) be an n-bivariate sample, and let
\[
U_i = F_{X,n}(X_i) \quad\text{and}\quad V_i = F_{Y,n}(Y_i),
\qquad\text{where}\qquad
F_{X,n}(x) = \frac{1}{n}\sum_{i=1}^{n} I(X_i \le x)
\quad\text{and}\quad
F_{Y,n}(y) = \frac{1}{n}\sum_{i=1}^{n} I(Y_i \le y),
\]
so that nU_i and nV_i are respectively the ranks of X_i within X_1, ..., X_n and of Y_i within Y_1, ..., Y_n. If I(u, v, n) = {i | U_i ≤ u, V_i ≤ v}, then
\[
\widehat{\underline{\rho}}(C, u, v) = \operatorname{corr}\left(U_i, V_i \mid i \in I(u, v, n)\right)
= \frac{\sum_{i \in I(u,v,n)} \left(U_i - \overline{U}_{u,v}\right)\left(V_i - \overline{V}_{u,v}\right)}
       {\sqrt{\sum_{i \in I(u,v,n)} \left(U_i - \overline{U}_{u,v}\right)^{2} \sum_{i \in I(u,v,n)} \left(V_i - \overline{V}_{u,v}\right)^{2}}},
\]
where $\overline{U}_{u,v}$ and $\overline{V}_{u,v}$ denote the averages of the U_i and of the V_i over I(u, v, n).
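This estimator (and its upper-tail analogue) is straightforward to compute; the sketch below is an illustration added here, with function names of my own choosing, using only numpy and assuming no ties in the data.

```python
# Empirical tail conditional rank correlations: the Pearson correlation of the
# pseudo-observations (U_i, V_i) restricted to the lower corner [0,u] x [0,v]
# or to the upper corner (u,1] x (v,1].
import numpy as np

def pseudo_obs(x):
    # U_i = F_{X,n}(X_i) = rank(X_i) / n, assuming no ties
    return (np.argsort(np.argsort(x)) + 1) / len(x)

def lower_tail_rank_corr(x, y, u, v):
    U, V = pseudo_obs(x), pseudo_obs(y)
    keep = (U <= u) & (V <= v)
    if keep.sum() < 2:
        return np.nan          # not enough observations in the corner
    return np.corrcoef(U[keep], V[keep])[0, 1]

def upper_tail_rank_corr(x, y, u, v):
    U, V = pseudo_obs(x), pseudo_obs(y)
    keep = (U > u) & (V > v)
    if keep.sum() < 2:
        return np.nan
    return np.corrcoef(U[keep], V[keep])[0, 1]
```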
3.2 Examples
The graphs below show the evolution of the upper conditional rank correlation, for some copulas and some simulated data, for samples of 1500 observations. Dotted lines are the confidence intervals$^{8}$ (95% and 90%) for 1500 observations. The first two graphs correspond to the Gaussian and Gumbel copulas, with an overall rank correlation around 0.5. The two graphs below correspond to the Clayton and Student copulas, with an overall rank correlation around 0.5. One can notice that the case of the Student copula is all the more interesting when the overall rank correlation is 0: fitting a one-parameter copula among the usual Archimedian copulas would then lead to the independent copula.
$^{8}$ These confidence bounds have been obtained using Monte Carlo simulations, with 10,000 simulated samples of size 1,500.
Assuming that these data are independent would lead to substantial problems: in the upper tail, the rank correlation is around 0.5, which shows that the data are not independent in the tails. These graphs can be summarized in the following table, giving the evolution of the conditional correlation.
Upper tail conditional rank correlation $\overline{\rho}(C, p)$:

  p                      Gaussian   Gumbel   Survival Gumbel   Clayton   Survival Clayton   Frank
  0 (overall $\rho_S$)     0.60      0.60         0.60           0.60         0.60           0.60
  50%                      0.33      0.51         0.26           0.12         0.60           0.27
  75%                      0.25      0.51         0.17           0.00         0.60           0.11
  90%                      0.20      0.51         0.12           0.00         0.60           0.03
  95%                      0.17      0.51         0.10           0.00         0.60           0.01
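As an illustration (added here, not in the original), the Gaussian column of such a table can be approximated by simulation: the sketch below draws a large sample from a Gaussian copula whose linear correlation r = 2 sin(πρ_S/6) ≈ 0.62 corresponds to an overall Spearman's rho of 0.60, and evaluates the empirical upper tail conditional rank correlation at the same thresholds; the agreement with the table can only be approximate, since the tabulated values are themselves empirical.

```python
# Rough simulation check of the Gaussian column above: upper tail conditional
# rank correlation at thresholds 0%, 50%, 75%, 90% and 95%, for a Gaussian
# copula calibrated to an overall Spearman's rho of about 0.60.
import numpy as np

def pseudo_obs(x):
    # pseudo-observations (ranks divided by n), assuming no ties
    return (np.argsort(np.argsort(x)) + 1) / len(x)

def upper_tail_rank_corr(x, y, u, v):
    U, V = pseudo_obs(x), pseudo_obs(y)
    keep = (U > u) & (V > v)
    return np.corrcoef(U[keep], V[keep])[0, 1] if keep.sum() > 1 else np.nan

rho_s = 0.60
r = 2.0 * np.sin(np.pi * rho_s / 6.0)   # linear correlation matching Spearman's rho for the Gaussian copula
rng = np.random.default_rng(42)
x, y = rng.multivariate_normal([0.0, 0.0], [[1.0, r], [r, 1.0]], size=200_000).T

for p in (0.0, 0.50, 0.75, 0.90, 0.95):
    print(p, round(upper_tail_rank_corr(x, y, p, p), 2))
```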
In Tails of Copulas, Gary Venter stated that 'correlation is stronger for large events'. But we can notice that this is not exactly the case: apart from the Student copula and the survival Clayton copula (which is the only invariant copula, as mentioned in the previous part), the deeper we look into the upper tail of the distribution ('large events'), the weaker the dependence (theoretical results on this topic were developed in Part 2.6.4).
3.3 Applications
3.3.1 Loss Adjustment Expense (Frees and Valdez (1997))
While this rank correlation can be used to get a better understanding of the dependence in the tails of the distribution, it can also be used when fitting copulas. The first two graphs correspond to the Gaussian copula and to a mixture of copulas ($C_{\theta}(x, y) = \theta C^{+}(x, y) + (1 - \theta) C^{\perp}(x, y)$, as in Charpentier (2003)), with an overall rank correlation around 0.5. The two graphs below correspond to the Clayton copula and the dual of the Clayton copula, with an overall rank correlation around 0.5.
The two graphs below correspond to the Gumbel copula and the dual of the Gumbel copula, with an overall rank correlation around 0.5. Furthermore, it could be more interesting to study the dependence between the losses and the expenses given that the loss is large, that is, to study the conditional copula of (LOSS, ALAE) given LOSS > $F_L^{-1}(u)$. One can notice that, if the problem is to know the expenses given the loss amount, then the Gumbel copula is the best copula.
3.3.2 Simulated Hurricane Losses (Venter (2001))
This sample data set, used in Tails of Copulas, is a simulation of 727 losses from a hurricane loss generator, with Maryland and Delaware exposures. Among several copula models (Gumbel, Frank, Gaussian and Survival Gumbel), Gary Venter found that the Frank and Gaussian copulas were the best: they had the highest AIC (or log-likelihood, all the copulas having a single parameter), and, moreover, the empirical J function (the cumulative tau) is close to the ones obtained with the Frank or Gaussian copulas. The graphs below show the evolution of the upper conditional rank correlation, compared with the case of the Gaussian copula (on the left) and of the Frank copula (on the right).
These graphs confirm that these parametric copulas provide the best fit. The following graphs represent the case of the Gumbel copula (which overestimates the upper tail) and of the Clayton copula (which underestimates the upper tail).
3.3.3 Motor and Household Claims (Belguise (2002))
Following the methodology developed by Gary Venter, Belguise fitted a copula using the cumulative tau. The two copulas which provide the best fit are the Gumbel copula and the survival Clayton copula. The graphs below compare the empirical upper rank correlation with the cases of the Gumbel and survival Clayton copulas. These graphs confirm that those two copulas provide a good fit.
3.3.4 Group Medical Insurance: Hospital vs. other expenses
The dataset used here, from the medical insurance large claims database (available from the website of the Society of Actuaries), includes over 3 million claims over the years 1991 and 1992. In this part, only records relating to Plan type 3 are used (Cebrian, Denuit and Lambert (2003) focused on Plan type 4 records). This dataset contains only claims exceeding $25,000, split between the hospital charge (HOSP) and the other expenses (OTHER). For some reinsurance issues, it might be interesting to study the dependence between these two amounts and, more precisely, to study the dependence given the hospital charge. For example, as in Cebrian, Denuit and Lambert, given the expenses and the hospital charge, the reinsurance indemnity might be defined as
\[
g(HOSP, OTHER) =
\begin{cases}
0 & \text{if } HOSP \le R, \\[4pt]
HOSP - R + \dfrac{HOSP - R}{HOSP}\, OTHER & \text{if } HOSP > R.
\end{cases}
\]
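As a small worked illustration (the retention R and the claim amounts are hypothetical figures chosen only for the example, not taken from the dataset): with R = 50,000, HOSP = 80,000 and OTHER = 20,000, the indemnity is 30,000 + (30,000/80,000) × 20,000 = 37,500.

```python
# Sketch of the reinsurance indemnity above: the excess of the hospital charge
# over the retention R, plus the same proportion of the other expenses.
def indemnity(hosp, other, retention):
    if hosp <= retention:
        return 0.0
    excess = hosp - retention
    return excess + (excess / hosp) * other

print(indemnity(80_000, 20_000, 50_000))  # 37500.0 with the hypothetical figures above
```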
In that case, it might be interesting to get a good understanding of the dependence of (HOSP, OTHER) given HOSP > x. The graphs below give an illustration of the relationship between these two variables, with, on the left, the scatterplot of the copula-type transformation of these variables, i.e. (U_i, V_i) where U_i = F_H(HOSP_i) and V_i = F_O(OTHER_i) (given HOSP exceeds $25,000), and, on the right, the empirical density of the copula.
The graph below shows the evolution of the conditional rank correlation of (HOSP, OTHER) given HOSP > $F_H^{-1}(p)$, where $F_H^{-1}(p)$ denotes the quantile of order p, for Plan type 3 claims over the years 1991 and 1992, with confidence bounds under the assumption of a Gumbel copula.
References
[1] BELGUISE, O. (2002). Tempêtes : Etude des dépendances entre les branches auto et incendie avec la théorie des copulas. Mémoire d'Actuariat, Université Strasbourg.
[2] CEBRIAN, A.C., DENUIT, M. & LAMBERT, P. (2003). Analysis of bivariate tail dependence using extreme value copulas: an application to the SOA medical large claims database. Working Paper.
[3] CHARPENTIER, A. (2003). Conditional copula and tail dependence. Working Paper.
[4] CLAYTON, D.G. (1978). A model for association in bivariate life tables and its application in epidemiological studies of familial tendency in chronic disease incidence. Biometrika 65 (1), 141-151.
[5] DEHEUVELS, P. (1979). La fonction de dépendance empirique et ses propriétés. Un test non paramétrique d'indépendance. Bulletin Royal Belge de l'Académie des Sciences (5) 65, 274-292.
[6] FREES, E.W. & VALDEZ, E.A. (1997). Understanding relationships using copulas. North American Actuarial Journal 2 (1), 1-25.
[7] GENEST, C. & MACKAY, R.J. (1986). Copules archimédiennes et familles de lois bidimensionnelles dont les marges sont données. La revue canadienne de statistique 14, 145-159.
[8] GIESECKE, K. (2001). Structural modeling of correlated defaults with incomplete information. Humboldt-Universität, Berlin, Working Paper.
[9] GOURIEROUX, C. & MONFORT, A. (2003). Age and term structures in duration models. Working Paper.
[10] GUMBEL, E.J. (1960). Distributions des valeurs extrêmes en plusieurs dimensions. Publications de l'Institut de Statistique de l'Université de Paris 9, 171-173.
[11] HOEFFDING, W. (1940). Masstabinvariante Korrelationstheorie. Schriften des Mathematischen Instituts und für Angewandte Mathematik der Universität Berlin 5, 179-233.
[12] JOE, H. (1997). Multivariate Models and Dependence Concepts. Chapman & Hall.
[13] JOUANIN, J.F. (2003). Condition de stationnarité temporelle pour la copula des défauts. Groupe de Recherche Opérationnelle, Crédit Lyonnais, Internal Paper.
[14] JURI, A. & WÜTHRICH, M.V. (2002). Copula convergence theorems for tail events. Insurance: Mathematics and Economics 30, 411-427.
[15] KARLIN, S. (1968). Total Positivity. Stanford University Press.
[16] KIMELDORF, G. & SAMPSON, A. (1975). One parameter families of bivariate distributions with fixed marginals. Communications in Statistics, Part A: Theory and Methods 4, 293-301.
[17] LEHMANN, E.L. (1966). Some concepts of dependence. Annals of Mathematical Statistics 37, 1137-1153.
[18] MALEVERGNE, Y. & SORNETTE, D. (2002). Investigating extreme dependences: concepts and tools. The Review of Financial Studies.
[19] NELSEN, R.B. (1999). An Introduction to Copulas. Springer Verlag.
[20] PEARSON, K. (1907). On further methods of determining correlation. Drapers' Company Research Memoirs, Biometric Series IV, London.
[21] SCHWEIZER, B. & SKLAR, A. (1983). Probabilistic Metric Spaces. North Holland.
[22] SKLAR, A. (1959). Fonctions de répartition à n dimensions et leurs marges. Publ. Inst. Stat. Univ. Paris 8, 229-231.
[23] SZEKLI, R. (1995). Stochastic Ordering and Dependence in Applied Probability. Springer Verlag.
[24] VENTER, G. (2001). Tails of copulas. ASTIN Conference.
[25] WANG, S. (1999). Aggregation of correlated risk portfolios: models and algorithms. PCAS LXXXV.