Fast Allocation of Risk Capital in Credit Portfolios

Technical note on allocation of risk capital in credit
portfolios
Jan W. Kwiatkowski
QUARC, Group Risk Analytics, Royal Bank of Scotland, 280 Bishopsgate, London EC2M 4RB
[email protected] , +442070855242
Executive Summary
A methodology for computing and allocating risk capital for portfolios of
defaultable exposures is described in Kwiatkowski & Burridge (2008). Risk
capital is defined as the Conditional Expected Shortfall (CES) over some
threshold (percentile level) of the loss distribution. The methodology uses the
recursive algorithm of Andersen, Sidenius, & Basu (2003) for computing the loss
distribution of a portfolio, whose constituents may, in general, have stochastic
losses given default. The recursion is reversed to compute the contributions of
the individual constituents. In general the computation time is dominated by the
latter process.
Here it is shown how, using the same formulation, the computation of the
individual contributions (the ‘allocation’) may be considerably accelerated, to the
extent that the computation time is dominated by the computation of the loss
distribution [1].
Summary of the methodology
The allocation of CES to the portfolio constituents follows the methodology described in Glasserman
(2006): first the CES threshold is estimated, and in a second stage this estimate is used to
determine the allocation to the individual constituents.
1. Portfolio Loss Distribution
The defaults of the portfolio constituents are assumed independent conditional on any given
‘scenario’ Sc , which in general determines the conditional probabilities of default (PDs) of all the
portfolio constituents. The characteristics of the unconditional distribution are computed by first
computing the distributions conditional on each of a set of scenarios, and then integrating over the
scenarios. From this the CES at the required threshold level is computed.
2. Allocation
Given the portfolio CES, the contribution of each constituent can be expressed in terms of the
integral over scenarios of the scenario-conditional probabilities of the threshold being exceeded
given that the constituent has defaulted. In this paper a more efficient method of computing the
scenario-conditional probabilities is presented.
Notation and formulation
Given a portfolio $P$ with $N$ constituents, the $\alpha$-CES of the portfolio $P$ is defined as:

$$CES_\alpha(P) = E\left[\tilde{L}_P \mid \tilde{L}_P > \bar{L}_\alpha\right]$$

where $\tilde{L}_P$ is the random portfolio loss and $\bar{L}_\alpha$ is its $\alpha$-percentile.

We allocate $CES_\alpha(i) = E\left[\tilde{L}(i) \mid \tilde{L}_P > \bar{L}_\alpha\right]$ to each constituent $i = 1, 2, \ldots, N$, and by the additive property of expected values these sum exactly to the portfolio CES.

[1] The author gratefully acknowledges the help of Marcus Blackburn in testing the algorithm.
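The definition and the additive property can be checked by brute-force enumeration on a toy single-scenario portfolio. The following is a minimal sketch (Python is not used in the note; the PDs, LGDs and threshold are illustrative):

```python
import itertools

import numpy as np

# Toy single-scenario portfolio: PDs and deterministic LGDs in loss units,
# with threshold L_bar (all numbers illustrative).
pds, lgds, L_bar = [0.1, 0.2, 0.05], [2, 3, 1], 2

num = np.zeros(3)   # accumulates E[ L(i) ; L_P > L_bar ] per constituent
tail_prob = 0.0     # Pr[ L_P > L_bar ]
for ds in itertools.product([0, 1], repeat=3):   # enumerate default patterns
    prob = np.prod([pd if d else 1 - pd for pd, d in zip(pds, ds)])
    losses = np.array([l if d else 0 for l, d in zip(lgds, ds)])
    if losses.sum() > L_bar:
        tail_prob += prob
        num += prob * losses

ces_i = num / tail_prob        # CES(i) = E[ L(i) | L_P > L_bar ]
ces_p = num.sum() / tail_prob  # portfolio CES = E[ L_P | L_P > L_bar ]
# ces_i sums exactly to ces_p, by additivity of conditional expectation.
```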
In the most general formulation, conditional on any factor scenario $Sc$, each exposure $i$ is, independently of the others, subject to one of a set of $G_i$ mutually exclusive outcomes, $g = 1, 2, \ldots, G_i$. The loss given outcome $g$ is $\lambda(i,g)$, and this occurs with probability $\pi_{Sc}(i,g)$, where

$$\sum_{g=1}^{G_i} \pi_{Sc}(i,g) = 1$$

The outcome {No Default} must be included in the set.
This framework includes corporate exposures that are subject to deterministic or stochastic loss
given default (LGD) and/or ratings transitions, as well as tranched asset-backed securities.
In the Andersen, Sidenius & Basu (ASB) formulation, the $\lambda(i,g)$'s are integers (positive or
negative) representing multiples of some fixed 'loss unit'; the loss unit should be small enough for
the measurement of portfolio loss not to be materially affected.
The simplest example, which we will use for illustration, is that of exposures subject to default loss
with deterministic losses given default (LGDs). In this case we have 2 outcomes for each exposure.
Taking $g=1$ to correspond to {no default} and $g=2$ to correspond to {default}, we have

$$\pi_{Sc}(i,1) = 1 - PD_{Sc}(i), \qquad \pi_{Sc}(i,2) = PD_{Sc}(i)$$

where $PD_{Sc}(i)$ is the probability of default of this exposure under scenario $Sc$, and

$$\lambda(i,1) = 0, \qquad \lambda(i,2) = x(i)$$

the deterministic LGD (in loss units).
Computation of scenario-conditional portfolio loss distributions
The ASB algorithm consists in recursively computing, under each scenario, the distribution of the
losses for portfolios consisting of the first $m$ exposures only, for $m = 0, 1, 2, \ldots, N$.

In its simplest form this distribution can be represented as the set of probabilities

$$\left\{ p_{Sc,m}(J) : Min_{Sc,m} \le J \le Max_{Sc,m} \right\}$$

where $Min_{Sc,m}$ and $Max_{Sc,m}$ are respectively the minimum and maximum losses achievable from the first $m$ exposures, and $p_{Sc,m}(J) = 0$ outside this range.

Then with starting conditions $Min_{Sc,0} = Max_{Sc,0} = 0$ and $p_{Sc,0}(0) = 1$, and

$$Min_{Sc,m} = Min_{Sc,m-1} + \min\big(0, \min_g \lambda(m,g)\big)$$

$$Max_{Sc,m} = Max_{Sc,m-1} + \max\big(0, \max_g \lambda(m,g)\big)$$

we recursively compute

$$p_{Sc,m}(J) = \sum_{g=1}^{G_m} p_{Sc,m-1}\big(J - \lambda(m,g)\big)\,\pi_{Sc}(m,g)$$

Equation 1

Significant acceleration can be achieved by ignoring immaterial probabilities under the
summation.
The conditional loss distribution for the whole portfolio is given by the probabilities $p_{Sc,N}(J)$, and
after integration across scenarios the required threshold $\bar{L}$ and the corresponding CES,
$CES_\alpha(P)$, may be computed.
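As a concrete illustration, the sketch below implements the recursion one exposure at a time (Python; the function names and example numbers are this sketch's own, not from the note). Losses may be negative, as in the general ASB setting, so the probability array is offset by the running minimum loss:

```python
import numpy as np

def asb_loss_distribution(outcomes):
    """Scenario-conditional loss distribution via Equation 1.

    `outcomes` holds, for each exposure m, a list of (pi, lam) pairs: the
    conditional probability pi_Sc(m, g) and integer loss lam(m, g) in loss
    units, with the {no default} outcome included.  Returns (min_loss, p)
    where p[k] = Pr[portfolio loss == k + min_loss].
    """
    min_loss = sum(min(0, min(l for _, l in exp)) for exp in outcomes)
    max_loss = sum(max(0, max(l for _, l in exp)) for exp in outcomes)
    p = np.zeros(max_loss - min_loss + 1)
    p[-min_loss] = 1.0                       # starting condition p_0(0) = 1
    for exp in outcomes:
        q = np.zeros_like(p)
        for prob, lam in exp:                # p_m(J) += p_{m-1}(J - lam) * pi
            if lam >= 0:
                q[lam:] += prob * p[:len(p) - lam]
            else:
                q[:lam] += prob * p[-lam:]
        p = q
    return min_loss, p

def tail(min_loss, p, L):
    """T(L) = Pr[loss > L]."""
    return p[max(L - min_loss + 1, 0):].sum()

# Example: two exposures, PD 0.1 with loss 2 units and PD 0.2 with loss 3.
min_loss, p = asb_loss_distribution([[(0.9, 0), (0.1, 2)],
                                     [(0.8, 0), (0.2, 3)]])
```

Ignoring immaterial probabilities, as suggested above, would amount to skipping (pi, lam) pairs whose probability mass is negligible.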
Allocation of CES
Formulation
It is shown in Kwiatkowski and Burridge (2008) that the allocation of CES to constituent $i$ can be
written as:

$$CES_\alpha(i) = \sum_{g=1}^{G_i} \frac{\pi(i,g)\,\lambda(i,g)}{1-\alpha}\,\Pr\left[\tilde{L}_P > \bar{L} \mid \tilde{L}(i) = \lambda(i,g)\right]$$

Equation 2

where $\pi(i,g)$ is the unconditional probability of outcome $g$ for exposure $i$, and
$\Pr[\tilde{L}_P > \bar{L} \mid \tilde{L}(i) = \lambda(i,g)]$ is the conditional probability that the portfolio loss
exceeds $\bar{L}$ given outcome $g$ for exposure $i$.

We first compute the tail probabilities $\Pr_{Sc}[\tilde{L}_P > \bar{L} \mid \tilde{L}(i) = \lambda(i,g)]$ conditional on each scenario,
then integrate them across scenarios, and finally substitute in Equation 2.
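Equation 2 is Bayes' rule applied to the defining conditional expectation, using $\Pr[\tilde{L}_P > \bar{L}] = 1 - \alpha$. On a toy single-scenario portfolio with two outcomes per exposure, only the default outcome contributes a non-zero term; the sketch below checks this by enumeration (Python; all numbers and the helper name are illustrative):

```python
import itertools

import numpy as np

pds, lgds, L_bar = [0.1, 0.2, 0.05], [2, 3, 1], 2   # illustrative

def pr_exceed(defaulted=None):
    """Pr[L_P > L_bar], optionally conditional on exposure `defaulted`
    being in its default outcome (defaults independent: single scenario)."""
    total = 0.0
    for ds in itertools.product([0, 1], repeat=len(pds)):
        if defaulted is not None and ds[defaulted] != 1:
            continue
        prob = np.prod([pd if d else 1 - pd for pd, d in zip(pds, ds)])
        if defaulted is not None:
            prob /= pds[defaulted]       # renormalise: condition on default
        if sum(l * d for l, d in zip(lgds, ds)) > L_bar:
            total += prob
    return total

one_minus_alpha = pr_exceed()            # Pr[L_P > L_bar]
# Equation 2 with lambda(i,1) = 0: only g = {default} contributes.
ces = [pds[i] * lgds[i] * pr_exceed(i) / one_minus_alpha
       for i in range(len(pds))]
# sum(ces) reproduces the portfolio CES computed directly.
```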
We observe that $\Pr_{Sc}[\tilde{L}_P > \bar{L} \mid \tilde{L}(i) = \lambda(i,g)]$ can be obtained by computing the 'revised' loss
distributions, $\bar{p}_{Sc,\setminus i}(J)$, with exposure $i$ eliminated from the portfolio, and setting

$$\Pr_{Sc}\left[\tilde{L}_P > \bar{L} \mid \tilde{L}(i) = \lambda(i,g)\right] = \sum_{J > \bar{L}} \bar{p}_{Sc,\setminus i}\big(J - \lambda(i,g)\big)$$

Defining the tail probabilities:

$$\bar{T}_{Sc,\setminus i}(L) = \sum_{J > L} \bar{p}_{Sc,\setminus i}(J)$$

we have finally:

$$\Pr_{Sc}\left[\tilde{L}_P > \bar{L} \mid \tilde{L}(i) = \lambda(i,g)\right] = \bar{T}_{Sc,\setminus i}\big(\bar{L} - \lambda(i,g)\big)$$
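This identity is easy to verify on a small example. In the sketch below (Python; numbers illustrative, and losses assumed nonnegative so each pmf is a plain array indexed by loss), the left-hand side is computed by forcing exposure $i$ into its default outcome:

```python
import numpy as np

# pmfs of three exposures over integer losses (illustrative):
pmfs = [np.array([0.9, 0.0, 0.1]),          # PD 0.1, LGD 2
        np.array([0.8, 0.0, 0.0, 0.2]),     # PD 0.2, LGD 3
        np.array([0.95, 0.05])]             # PD 0.05, LGD 1

def convolve_all(fs):
    p = np.array([1.0])
    for f in fs:
        p = np.convolve(p, f)               # losses add, so pmfs convolve
    return p

def tail(p, L):
    """T(L) = Pr[loss > L]."""
    return p[max(L + 1, 0):].sum()

L_bar, i, lam = 3, 1, 3                     # threshold; eliminate exposure 1,
p_rev = convolve_all(pmfs[:i] + pmfs[i + 1:])   # whose default loss is 3

# Left side: Pr[L_P > L_bar | exposure i defaults], forcing the default.
delta = np.zeros(lam + 1)
delta[lam] = 1.0
lhs = tail(np.convolve(p_rev, delta), L_bar)
# Right side: the revised tail evaluated at L_bar - lam.
rhs = tail(p_rev, L_bar - lam)
```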
Computation
For conciseness we drop the $Sc$ and $\setminus i$ subscripts, on the understanding that all
probabilities are conditional on the scenario. The allocation problem has been reduced to computing,
for each exposure, the revised tail probabilities $\bar{T}(\bar{L} - \lambda(g))$ for each outcome $g$.

In Kwiatkowski and Burridge (2008) the computation of the revised loss distributions was performed
by inversion of the ASB algorithm represented by Equation 1. Considerable computational saving
can be achieved by recognising that if we sum Equation 1 over $J > L$, for any $L$, we get a similar
relationship for the tail probabilities:

$$T(L) = \sum_{g=1}^{G} \bar{T}\big(L - \lambda(g)\big)\,\pi(g) \qquad \text{for any } L$$

Equation 3

where

$$T(L) = \sum_{J > L} p_N(J)$$
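Equation 3 is the law of total probability over the eliminated exposure's outcome, and holds for every level $L$; the sketch below verifies it numerically (Python; illustrative numbers, nonnegative integer losses assumed):

```python
import numpy as np

pi, lam = [0.85, 0.10, 0.05], [0, 2, 4]   # outcomes of exposure i (illustrative)
p_rest = np.array([0.7, 0.1, 0.1, 0.05, 0.05])   # pmf of the other exposures

pmf_i = np.zeros(max(lam) + 1)            # exposure i as a pmf over losses
for prob, l in zip(pi, lam):
    pmf_i[l] += prob
p_full = np.convolve(p_rest, pmf_i)       # full-portfolio pmf

def tail(p, L):
    """T(L) = Pr[loss > L]."""
    return p[max(L + 1, 0):].sum()

# Equation 3: T(L) = sum_g pi(g) * T_bar(L - lam(g)) for any L.
for L in range(-1, len(p_full)):
    lhs = tail(p_full, L)
    rhs = sum(prob * tail(p_rest, L - l) for prob, l in zip(pi, lam))
    assert abs(lhs - rhs) < 1e-12
```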
We note first that if all the $\pi(g)$ (for the outcomes $g$ with non-zero loss) are small enough and/or all
the $\lambda(g)$ are small enough, the loss distribution after eliminating exposure $i$ will be virtually
unchanged. In such cases, our first approximation

$$\hat{T}_1\big(\bar{L} - \lambda(g)\big) = T\big(\bar{L} - \lambda(g)\big)$$

Equation 4

will be very good. In fact, if all the $\lambda(g)$ are non-negative the original distribution stochastically
dominates the revised one and the approximation gives an upper bound; if all are non-positive the
revised distribution dominates the original and the approximation gives a lower bound.


The revised algorithm obtains successive approximations to $\bar{T}(\bar{L} - \lambda(g))$, starting with Equation 4.
At each iteration the last two approximations are compared, and the algorithm stops when these are
close enough.
For the sake of stability of the algorithm it is necessary first to identify which of the $G$ outcomes has
the largest conditional probability:

$$g_{max} = \big\{ g : \pi(g) = \max_h \pi(h) \big\}$$

This will usually be the 'no default' outcome. We will use $\pi_{max}$ to signify $\pi(g_{max})$ and $\lambda_{max}$ to signify $\lambda(g_{max})$.
Using the recursion formula Equation 3 we obtain for each $g$:

$$T\big(\bar{L} + \lambda_{max} - \lambda(g)\big) = \bar{T}\big(\bar{L} - \lambda(g)\big)\,\pi_{max} + \sum_{h \ne g_{max}} \bar{T}\big(\bar{L} + \lambda_{max} - \lambda(g) - \lambda(h)\big)\,\pi(h)$$

from which we get:

$$\bar{T}\big(\bar{L} - \lambda(g)\big) = \frac{1}{\pi_{max}}\left[ T\big(\bar{L} + \lambda_{max} - \lambda(g)\big) - \sum_{h \ne g_{max}} \bar{T}\big(\bar{L} + \lambda_{max} - \lambda(g) - \lambda(h)\big)\,\pi(h) \right]$$

Equation 5

The next approximation, $\hat{T}_2(\bar{L} - \lambda(g))$, is obtained by substituting $T(\cdot)$ for $\bar{T}(\cdot)$ in the right-hand
side of Equation 5. If the first approximation was a lower bound, this will be an upper bound, and
vice versa. If the two approximations are close enough we take $\hat{T}_2(\bar{L} - \lambda(g))$; otherwise we
continue to the next approximation.
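To make the bracketing concrete, the following sketch computes $\hat{T}_1$, $\hat{T}_2$ and the exact revised tail on a small example (Python; illustrative numbers; all losses nonnegative, so $\hat{T}_1$ is an upper bound and $\hat{T}_2$ a lower bound):

```python
import numpy as np

pi = np.array([0.85, 0.10, 0.05])         # outcome probabilities (illustrative)
lam = np.array([0, 2, 4])                 # losses in loss units, all >= 0
p_rest = np.array([0.7, 0.1, 0.1, 0.05, 0.05])   # pmf of the other exposures

pmf_i = np.zeros(lam.max() + 1)           # the exposure to be eliminated
for prob, l in zip(pi, lam):
    pmf_i[l] += prob
p_full = np.convolve(p_rest, pmf_i)       # full-portfolio pmf

def tail(p, L):
    """T(L) = Pr[loss > L]."""
    return p[max(L + 1, 0):].sum()

gmax = int(np.argmax(pi))                 # most likely outcome: no default
pi_max, lam_max = pi[gmax], lam[gmax]
L_bar, g = 4, 1                           # threshold and outcome of interest

T1 = tail(p_full, L_bar - lam[g])         # Equation 4
T2 = (tail(p_full, L_bar + lam_max - lam[g])
      - sum(pi[h] * tail(p_full, L_bar + lam_max - lam[g] - lam[h])
            for h in range(len(pi)) if h != gmax)) / pi_max   # Equation 5
T_exact = tail(p_rest, L_bar - lam[g])    # true revised tail
# Here T2 <= T_exact <= T1; further iterations tighten the bracket.
```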
The next approximation is obtained by using Equation 3 to find an expression for the
$\bar{T}(\bar{L} + \lambda_{max} - \lambda(g) - \lambda(h))$ terms in Equation 5, namely:

$$\bar{T}\big(\bar{L} + \lambda_{max} - \lambda(g) - \lambda(h)\big) = \frac{1}{\pi_{max}}\left[ T\big(\bar{L} + 2\lambda_{max} - \lambda(g) - \lambda(h)\big) - \sum_{f \ne g_{max}} \bar{T}\big(\bar{L} + 2\lambda_{max} - \lambda(g) - \lambda(h) - \lambda(f)\big)\,\pi(f) \right]$$

and substituting in Equation 5 we get:

$$\bar{T}\big(\bar{L} - \lambda(g)\big) = \frac{1}{\pi_{max}}\left[ T\big(\bar{L} + \lambda_{max} - \lambda(g)\big) - \frac{1}{\pi_{max}} \sum_{h \ne g_{max}} \pi(h) \left( T\big(\bar{L} + 2\lambda_{max} - \lambda(g) - \lambda(h)\big) - \sum_{f \ne g_{max}} \bar{T}\big(\bar{L} + 2\lambda_{max} - \lambda(g) - \lambda(h) - \lambda(f)\big)\,\pi(f) \right) \right]$$

Equation 6
The next approximation, $\hat{T}_3(\bar{L} - \lambda(g))$, is obtained by substituting $T(\cdot)$ for $\bar{T}(\cdot)$ in the right-hand
side of Equation 6.
Thus we continue, substituting for the final terms on the right-hand side, until successive
approximations are close enough. If any of the other $\pi(g)$'s are close to $\pi_{max}$ the algorithm will not
necessarily converge, in which case other methods, e.g. transform methods, will have to be used.

Successive approximations may be explicitly coded to a sufficient depth, or an iterative algorithm
similar to the following may be used:
j = 1

For g1 = 1 to G
    D1(g1) = T(L̄ + λmax − λ(g1)) / πmax
    Q1(g1) = π(g1) / πmax
Next g1

For g1 = 1 to G
    For g2 = 1 to G, g2 ≠ gmax
        G1(g1, g2) = Q1(g2) · T(L̄ + λmax − λ(g1) − λ(g2))
    Next g2
    T̂1(L̄ − λ(g1)) = D1(g1) − Σ_{g2 ≠ gmax} G1(g1, g2)
Next g1

(The subscript on T̂ here counts iterations of this algorithm: this first stage reproduces the
Equation 5 substitution, i.e. T̂2 of the discussion above.)

and for j = 2, 3, … while not converged:

For g1 = 1 to G
    For g2 = 1 to G
        ……
        For gj = 1 to G
            Qj(g1, …, gj) = Qj−1(g1, …, gj−1) · π(gj) / πmax
        Next gj
        ……
    Next g2
Next g1

For g1 = 1 to G
    For g2 = 1 to G, g2 ≠ gmax
        ……
        For gj = 1 to G, gj ≠ gmax
            Fj(g1, …, gj) = [Qj−1(g2, …, gj) / πmax] · T(L̄ + j·λmax − λ(g1) − … − λ(gj))
            For gj+1 = 1 to G, gj+1 ≠ gmax
                Gj(g1, …, gj+1) = Qj(g2, …, gj+1) · T(L̄ + j·λmax − λ(g1) − … − λ(gj+1))
            Next gj+1
        Next gj
        ……
    Next g2
    T̂j(L̄ − λ(g1)) = T̂j−1(L̄ − λ(g1))
                    + (−1)^(j−1) · Σ_{g2 ≠ gmax} … Σ_{gj ≠ gmax} [ Fj(g1, …, gj) − Gj−1(g1, …, gj) − Σ_{gj+1 ≠ gmax} Gj(g1, …, gj+1) ]
Next g1
In the special case of a corporate exposure with probability of default $PD$ and deterministic LGD $x$
(in loss units), this can be reduced to the following algorithm:

j = 1:
    T̂1(L̄ − x) = T(L̄ − x)
    A1 = T(L̄ − x)

and for j = 2, 3, … while not converged:

    If PD ≤ 0.5
        Aj = T(L̄ − j·x)
        T̂j(L̄ − x) = T̂j−1(L̄ − x) + (−PD / (1 − PD))^(j−1) · (Aj − Aj−1)
    Else
        Aj = T(L̄ + (j − 2)·x)
        T̂j(L̄ − x) = T̂j−1(L̄ − x) + (−(1 − PD) / PD)^(j−2) · (Aj − Aj−1)
    End If
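A runnable sketch of this special case (Python; the function name, convergence tolerance and toy numbers are this sketch's own). The alternating sign of the corrections, which makes successive approximations bracket the answer, is written explicitly:

```python
import numpy as np

def revised_tail_default(T, L_bar, PD, x, tol=1e-12, max_iter=200):
    """Successive approximations to T_bar(L_bar - x) for a corporate
    exposure with default probability PD and deterministic LGD x (in loss
    units).  `T` is the full-portfolio tail function, T(L) = Pr[loss > L].
    """
    t_hat = T(L_bar - x)                  # first approximation (Equation 4)
    a_prev = T(L_bar - x)                 # A_1
    for j in range(2, max_iter):
        if PD <= 0.5:                     # g_max is the {no default} outcome
            a = T(L_bar - j * x)
            t_new = t_hat + (-PD / (1 - PD)) ** (j - 1) * (a - a_prev)
        else:                             # g_max is the {default} outcome
            a = T(L_bar + (j - 2) * x)
            t_new = t_hat + (-(1 - PD) / PD) ** (j - 2) * (a - a_prev)
        if abs(t_new - t_hat) < tol:      # last two approximations agree
            return t_new
        t_hat, a_prev = t_new, a
    return t_hat

# Toy check: eliminate a PD 0.1 / LGD 2 exposure from a 3-exposure portfolio.
p_rev = np.zeros(5)                       # the other two exposures:
p_rev[[0, 1, 3, 4]] = [0.76, 0.04, 0.19, 0.01]   # PD 0.2/LGD 3, PD 0.05/LGD 1
p_full = np.convolve(p_rev, [0.9, 0.0, 0.1])

def T(L):
    """Full-portfolio tail, T(L) = Pr[loss > L]."""
    return p_full[max(L + 1, 0):].sum()
```

For $\bar{L} = 3$ this converges in two corrections to the exact revised tail $\bar{T}(1) = 0.2$, since the full-portfolio tail at negative levels already equals the revised one there.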
REFERENCES
Andersen, L., Sidenius, J., and Basu, S. (2003). All your hedges in one basket. Risk, November 2003.

Glasserman, P. (2006). Measuring marginal risk contributions in credit portfolios. Journal of
Computational Finance, Vol. 9, No. 2.

Kwiatkowski, J. and Burridge, J. (2008). Accurate allocation of risk capital in credit portfolios.
Journal of Credit Risk, Vol. 4, No. 1, pp. 21-46, Spring 2008.