Paul Embrechts
Sharp bounds on the VaR
for sums of dependent risks
joint work with
Giovanni Puccetti (University of Firenze, Italy)
and Ludger Rüschendorf (University of Freiburg, Germany)
Mathematical Problem

Assumptions:
- L_1, ..., L_d: one-period risks with statistically estimated marginals.
- L_1 + ··· + L_d: total loss exposure.
- VaR_α(L_1 + ··· + L_d): amount of capital to be reserved
  (if VaR_α(L_1 + ··· + L_d) = s, then P(L_1 + ··· + L_d > s) ≤ 1 − α).

Task: for a fixed (high) level of probability α, calculate:

  \overline{VaR}_α = sup{ VaR_α(L_1 + ··· + L_d) : L_j ∼ F_j, 1 ≤ j ≤ d }

  \underline{VaR}_α = inf{ VaR_α(L_1 + ··· + L_d) : L_j ∼ F_j, 1 ≤ j ≤ d }
Motivation (QRM)

L_1 ∼ F_1, L_2 ∼ F_2, ..., L_d ∼ F_d   (marginal distributions, with d ≈ 600)

marginal distributions + dependence model = VaR_α(L_1 + ··· + L_d)
\underline{VaR}_α ≤ VaR^+_α = Σ_{j=1}^d VaR_α(L_j) ≤ \overline{VaR}_α

(VaR^+_α is the VaR of the sum under comonotonic dependence.)
Known results

- In the homogeneous case F_j = F, 1 ≤ j ≤ d, the bound \overline{VaR}_α has recently been given for d > 2 in [PR11] and [WW11], under different assumptions.
- In the homogeneous case, \overline{VaR}_α is then very easy to calculate in arbitrary dimensions.
- In the inhomogeneous case, the computation of \overline{VaR}_α poses serious problems, and the computation of \underline{VaR}_α is not possible.
Homogeneous marginals, d ≥ 3

In the case that L_1, ..., L_d are identically distributed, we have

  M(s) = sup{ P(L_1 + ··· + L_d ≥ s) : L_j ∼ F, 1 ≤ j ≤ d }

Duality theorem (reduced)

  M(s) = inf{ d ∫ g dF : g ∈ A(s) },  where

  A(s) = { g : ℝ → [0, 1] such that g(x_1) + ··· + g(x_d) ≥ 1{x_1 + ··· + x_d ≥ s} }
Dual bounds

Embrechts and Puccetti (2006) introduce the following class of piecewise-linear functions g_a, for a < s/d:

[Figure: g_a equals 0 up to a, increases linearly to 1 at b = s − (d − 1)a, and equals 1 thereafter.]

  M(s) ≤ D(s) = inf_{a < s/d} d ∫ g_a dF = d · inf_{a < s/d} [ ∫_a^b (1 − F(x)) dx ] / (s − da),  with b = s − (d − 1)a.

The dual bound D(s) is better than the standard bound produced by choosing piecewise-constant dual functions.
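As a numerical illustration (my own sketch, not code from the talk), the dual bound can be evaluated directly. Assuming Pareto(2) marginals with d = 3 and α = 0.99, the tail integral has a closed form, D(s) is computed by a grid search over a, and D is then inverted at the level 1 − α by bisection:

```python
import numpy as np

# Pareto(2): 1 - F(t) = (1 + t)^(-2), so the tail integral is closed-form:
# ∫_a^b (1 - F(t)) dt = 1/(1+a) - 1/(1+b).
def tail_integral(a, b):
    return 1.0 / (1.0 + a) - 1.0 / (1.0 + b)

def dual_bound(s, d=3):
    """D(s) = inf_{a < s/d} d * ∫_a^{s-(d-1)a} (1-F(t)) dt / (s - d a)."""
    a = np.linspace(0.0, s / d, 100_000, endpoint=False)  # grid over a < s/d
    b = s - (d - 1) * a
    return float(np.min(d * tail_integral(a, b) / (s - d * a)))

# Invert D at level 1 - alpha by bisection (D is decreasing in s).
alpha = 0.99
lo, hi = 27.0, 200.0  # bracket: 27 is the comonotonic VaR for this example
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if dual_bound(mid) > 1 - alpha:
        lo = mid
    else:
        hi = mid
worst_var = 0.5 * (lo + hi)
print(worst_var)  # ≈ 45.999, matching the 45.9994 quoted later in the talk
```

At s = 46 and a = 11 one gets b = 24 and D = 3 · (1/12 − 1/25)/13 = 0.01 exactly, which is why the inversion lands just below 46.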
Referee report on Embrechts and Puccetti (2006)
Timeline to the result

- 1981: Makarov gives the optimal coupling for the sum of two risks, answering a question by Kolmogorov.
- 1982: Rüschendorf independently gives the same optimal coupling and the dual solution.
- 2006: Embrechts and Puccetti introduce dual bounds in the homogeneous case.
- 2011: Wang and Wang give optimal couplings for the sum of arbitrary risks in some specific examples.
- 2012: Sharpness of the dual bounds is stated for a general class of distributions.
The Rearrangement Algorithm (RA): a new numerical approximation procedure
Game

  X (two columns; the last column shows the row sums):

    1  1 |  2
    2  4 |  6
    3  3 |  6
    4  2 |  6
    5  5 | 10

Rearrange the second column to obtain a sum with minimal variance.
Solution: rearranging the second column oppositely to the first makes all row sums equal:

    1  1 |  2        1  5 |  6
    2  4 |  6        2  4 |  6
    3  3 |  6   ->   3  3 |  6
    4  2 |  6        4  2 |  6
    5  5 | 10        5  1 |  6
Game (more difficult)

  X (three columns; the last column shows the row sums):

    1  1  2 |  4
    2  4  1 |  7
    3  3  4 | 10
    4  2  3 |  9
    5  5  5 | 15

Rearrange the entries within each column in order to obtain a sum with minimal variance.
Step 1: rearrange the first column oppositely to the sum of the other two columns:

    1  1  2 |  4        5  1  2 |  8
    2  4  1 |  7        3  4  1 |  8
    3  3  4 | 10   ->   2  3  4 |  9
    4  2  3 |  9        4  2  3 |  9
    5  5  5 | 15        1  5  5 | 11
Step 2: rearrange the second column oppositely to the sum of the other two columns:

    5  1  2 |  8        5  1  2 |  8
    3  4  1 |  8        3  5  1 |  9
    2  3  4 |  9   ->   2  3  4 |  9
    4  2  3 |  9        4  2  3 |  9
    1  5  5 | 11        1  4  5 | 10

Now each column is oppositely ordered to the sum of the others; the matrix is ordered, with row sums (8, 9, 9, 9, 10).
Summary:
For a given matrix, rearrange the entries within the columns until you find an ordered matrix, i.e. a matrix in which each column is oppositely ordered to the sum of the others.
We call a matrix optimal if the minimal component of the vector of row sums is maximized and the maximal component of that vector is minimized (a min-max problem).
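The rearrangement game above can be played mechanically. The sketch below (my own illustration, not code from the talk) repeatedly makes each column oppositely ordered to the sum of the other columns, starting from the 5×3 matrix of the example:

```python
import numpy as np

def rearrange(X, max_iter=100):
    """Rearrange each column of X oppositely to the sum of the other
    columns, sweeping over columns until a full pass changes nothing."""
    X = X.astype(float)
    N, d = X.shape
    for _ in range(max_iter):
        changed = False
        for j in range(d):
            others = X.sum(axis=1) - X[:, j]           # sum of the other columns
            order = np.argsort(others, kind="stable")  # rows by ascending sum
            newcol = np.empty(N)
            newcol[order] = np.sort(X[:, j])[::-1]     # largest value vs. smallest sum
            if not np.array_equal(newcol, X[:, j]):
                X[:, j] = newcol
                changed = True
        if not changed:
            break
    return X

X = np.array([[1, 1, 2],
              [2, 4, 1],
              [3, 3, 4],
              [4, 2, 3],
              [5, 5, 5]])
Xo = rearrange(X)
print(sorted(Xo.sum(axis=1)))  # row sums [8.0, 9.0, 9.0, 9.0, 10.0]
```

The resulting row sums (8, 9, 9, 9, 10) match those of the ordered matrix reached on the slides, although the ordered matrix itself need not be unique.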
Sudoku

Puzzle: fill each column with a permutation of {1, ..., 5} so that every row sums to 9:

    •  •  • | 9
    •  •  • | 9
    •  •  • | 9
    •  •  • | 9
    •  •  • | 9

Hint: take the third column as (1, 2, 3, 4, 5).

Solution:

    5  3  1 | 9
    3  4  2 | 9
    1  5  3 | 9
    4  1  4 | 9
    2  2  5 | 9
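The solution can be checked mechanically; the small script below (my own check, not from the slides) verifies that every column is a permutation of {1, ..., 5} and that each row sums to 9:

```python
solution = [
    [5, 3, 1],
    [3, 4, 2],
    [1, 5, 3],
    [4, 1, 4],
    [2, 2, 5],
]

# every row adds up to the same target
row_sums = [sum(row) for row in solution]
assert row_sums == [9, 9, 9, 9, 9]

# every column uses each of 1..5 exactly once
for j in range(3):
    assert sorted(row[j] for row in solution) == [1, 2, 3, 4, 5]

print("valid:", row_sums)
```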
[Diagram: the set of permutation matrices generated by rearranging the columns of X, with the subsets of ordered and optimal matrices marked; sample matrices X1, X2, X3.]
BEST/WORST VaR?

dependence = rearrangement
Fix α ∈ (0, 1) and assume that each F_j restricted to [α, 1] takes only N values, all having the same probability (1 − α)/N.

[Figure: the discretized quantile function of L_1 between VaR_0.99(L_1) and VaR_1(L_1).]

Then

  P( L_1 + ··· + L_d ≥ min(rowSums(X)) ) ≥ 1 − α,

so that

  VaR_α(L_1 + ··· + L_d) ≥ min(rowSums(X)),

and

  \overline{VaR}_α = max_{X̃ ∈ P(X)} min(rowSums(X̃))   (idea of the proof)
Pareto(2) marginals and α = 0.99

  \overline{VaR}_α = 44.5169 (45.9994)

  N = 10^5  =>  \overline{VaR}_α = 45.99

OPTIMAL COUPLING!

The optimal coupling yields a dependence in which:
- either the (three) rvs are very close to each other and sum up to something very close to the minimal sum (-> complete mixability);
- or one of the components is large and the other (two) are small (-> mutual exclusivity).
Complete mixability

Definition. A distribution F is called d-completely mixable if there exist d random variables X_1, ..., X_d, each distributed as F, such that

  P(X_1 + ··· + X_d = constant) = 1.

Examples
- Gaussian, Cauchy, t
- Uniform
- Binomial(n, r/s) is s-completely mixable, for integers n, r, s
- Multivariate extensions can be given [Rüschendorf and Uckelmann (2002)]
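A minimal sketch of the definition for d = 2 (my own illustration): for a distribution symmetric about its mean μ, the pair (X, 2μ − X) has the required marginals and a constant sum, so such distributions are 2-completely mixable.

```python
import numpy as np

rng = np.random.default_rng(0)

# Standard normal, symmetric about mu = 0: Z and -Z are both N(0, 1).
z = rng.standard_normal(10_000)
pair_sum = z + (-z)                 # identically 0
assert np.allclose(pair_sum, 0.0)

# Uniform(0, 1), symmetric about mu = 1/2: U and 1 - U are both Uniform(0, 1).
u = rng.random(10_000)
assert np.allclose(u + (1.0 - u), 1.0)

print("constant sums verified")
```

For d ≥ 3 such explicit pairings no longer suffice, which is where the sufficient conditions below come in.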
Complete mixability
Sufficient conditions for complete mixability (Wang and Wang
(2011), Puccetti, Wang and Wang (2012))
- F is continuous with a symmetric and unimodal density.
[Rüschendorf and Uckelmann (2002)]
- F is continuous with a monotone density on a bounded support
and satisfies a moderate mean condition. [Wang and Wang (2011)]
- F is continuous with a concave density on a bounded support.
[Puccetti, Wang and Wang (2012)]
[Diagram: four ordered matrices X1, X2, X3, X4 with min(rowSums) equal to 44.51690, 44.49191, 44.52803 and 44.43416, respectively.]

With N = 100000, we obtain the first two decimals of \overline{VaR}_α in 0.2 sec.
Rearrangement algorithm

1) Approximate the (1 − α) upper part of the support of each marginal F_j from above and below, \underline{F}_j ≤ F_j ≤ \overline{F}_j, and create two matrices X and Y with N rows and d columns.
2) Iteratively rearrange the columns of each matrix until reaching matrices X* and Y* in which each column is oppositely ordered to the sum of the other columns.
3) Then min(rowSums(X*)) ≤ \overline{VaR}_α ≤ min(rowSums(Y*)).
4) Run the algorithm with N large enough.
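These steps can be sketched end-to-end. The code below is a sketch under my own assumptions, not the authors' implementation: it uses only the lower discretization X, Pareto(2) quantiles, d = 3 and α = 0.99, and starts from a random shuffle of each column (a common way to avoid poor fixed points of the iteration). The min row sum then approaches the sharp bound ≈ 45.99 quoted earlier.

```python
import numpy as np

def rearrange(X, max_iter=200):
    """One RA sweep: make each column oppositely ordered to the sum of
    the other columns; repeat until a full pass changes nothing."""
    X = X.copy()
    N, d = X.shape
    for _ in range(max_iter):
        changed = False
        for j in range(d):
            others = X.sum(axis=1) - X[:, j]
            order = np.argsort(others, kind="stable")
            newcol = np.empty(N)
            newcol[order] = np.sort(X[:, j])[::-1]
            if not np.array_equal(newcol, X[:, j]):
                X[:, j] = newcol
                changed = True
        if not changed:
            break
    return X

alpha, N, d = 0.99, 10_000, 3
rng = np.random.default_rng(0)

# Step 1 (lower part only): discretize each F_j on [alpha, 1) into N atoms
# of probability (1 - alpha)/N.  Pareto(2) quantile: F^{-1}(p) = (1-p)^{-1/2} - 1.
p = alpha + np.arange(N) * (1 - alpha) / N
q = (1.0 - p) ** -0.5 - 1.0
X = np.column_stack([rng.permutation(q) for _ in range(d)])

# Step 2: iterate to an ordered matrix.
Xs = rearrange(X)

# Step 3: lower estimate of the worst VaR.
lower = Xs.sum(axis=1).min()
print(lower)  # approaches 45.99 from below as N grows
```

Adding the analogous upper discretization Y would give the bracketing interval of step 3.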
The optimal coupling exhibits the following structure:

- if L_i ∈ (a, b), then L_1 + ··· + L_d = \overline{VaR}_α(L)  (complete mixability);
- if L_i ∈ [a*, a], then L_j ≥ b for some j ≠ i;
- if L_i ≥ b, then L_j ∈ [a*, a] for all j ≠ i.

On the remaining part the coupling exhibits mutual exclusivity: only one risk can be large at one time.

Application: superadditivity ratio
For a risk portfolio (L_1, ..., L_d) it is of interest to study the ratio between the worst-possible VaR and the comonotonic VaR, and to investigate its properties as a function of the dimensionality d of the risk portfolio, the probability level α ∈ (0, 1), and the parameters of the underlying model. Define the superadditivity ratio as

  Δ_α(d) = \overline{VaR}_α / VaR^+_α .

The value Δ_α(d) measures how much VaR can be superadditive as a function of the dimensionality d of the risk portfolio under study. For instance, for elliptically distributed risks it is well known that VaR is subadditive for any d ≥ 1; see McNeil et al. (2005, Theorem 6.8). In Figure 4 and Figure 5, left, we plot the function Δ_α(d) for a number of different homogeneous portfolios. In these cases Δ_α(d) seems to settle down to a limit fairly fast. We therefore define

  Δ_α = lim_{d→+∞} Δ_α(d),

whenever this limit exists.
Examples

[Figure 4: plot of the function Δ_α(d) versus the dimensionality d of the portfolio for a risk vector of LogNormal(2,1)-distributed (left) and Gamma(3,1)-distributed (right) risks, for two different quantile levels (0.99 and 0.999).]

[Figure 5: left, plot of the function Δ_α(d) versus the dimensionality d of the portfolio for a risk vector of Pareto(θ)-distributed risks, for the quantile levels 0.99 and 0.999 and θ = 2; right, plot of the limit constant Δ_α versus the tail parameter θ of the Pareto distribution.]
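For the Pareto(2) example used earlier, the ratio at d = 3 can be computed directly; this sketch takes the worst-VaR value 45.9994 from the slides rather than recomputing it:

```python
# Pareto(2) quantile: F^{-1}(p) = (1 - p)^(-1/2) - 1
alpha, d = 0.99, 3

var_alpha = (1 - alpha) ** -0.5 - 1   # VaR_0.99 of one Pareto(2) risk = 9
var_plus = d * var_alpha              # comonotonic VaR: 27
worst_var = 45.9994                   # sharp worst VaR quoted on the slides

delta = worst_var / var_plus          # superadditivity ratio at d = 3
print(round(var_plus, 6), round(delta, 4))  # 27.0 1.7037
```

So at d = 3 the worst-case VaR already exceeds the comonotonic VaR by about 70%.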
Conclusions

The rearrangement algorithm numerically calculates sharp bounds for the VaR of a sum of dependent random variables. It:
- is accurate, fast, and computationally less demanding than the methods in the literature;
- can be used with inhomogeneous marginals, in high dimensions;
- also computes the best-possible Value-at-Risk;
- can be used with any marginal df and any quantile level;
- can also be used to compute bounds on the distribution function of different operators such as ×, min, max.
Further work

• Find optimal couplings for the best VaR
• Interpret these couplings with respect to realistic scenarios
• Add statistical uncertainty
• Compute sharp VaR bounds under additional dependence information
• Compare and contrast with other approaches: Robust Optimization
• ...
References

• Makarov, G. D. (1981). Estimates for the distribution function of the sum of two random variables with given marginal distributions. Theory Probab. Appl. 26, 803–806.
• Embrechts, P. and G. Puccetti (2006). Bounds for functions of dependent risks. Finance Stoch. 10(3), 341–352.
• Embrechts, P., Puccetti, G. and L. Rüschendorf (2012). Model uncertainty and VaR aggregation. Preprint.
• McNeil, A. J., Frey, R. and P. Embrechts (2005). Quantitative Risk Management: Concepts, Techniques and Tools. Princeton University Press.
• Puccetti, G. and L. Rüschendorf (2012). Computation of sharp bounds on the distribution of a function of dependent risks. J. Comput. Appl. Math. 236(7), 1833–1840.
• Puccetti, G. and L. Rüschendorf (2012). Sharp bounds for sums of dependent risks. Preprint.
• Puccetti, G., Wang, B. and R. Wang (2012). Advances in complete mixability. Forthcoming in J. Appl. Probab.
• Rüschendorf, L. (1982). Random variables with maximum sums. Adv. in Appl. Probab. 14(3), 623–632.
• Rüschendorf, L. and L. Uckelmann (2002). Variance minimization and random variables with constant sum. In Distributions with Given Marginals and Statistical Modelling, pp. 211–222. Dordrecht: Kluwer Acad. Publ.
• Wang, B. and R. Wang (2011). The complete mixability and convex minimization problems with monotone marginal densities. J. Multivariate Anal. 102, 1344–1360.