Lecture 4:
Markov Chain Theory and
CreditMetrics™
Prepared by
Bambang Hermanto, Ph.D.
Lecture outlines
• The credit quality migration probability
• The concepts of discrete-time Markov chain process
• The definitions and properties of discrete-time Markov chain process
• Discrete-time Markov chain states classification
• Continuous-time Markov chain process
• Computing the transition probabilities in continuous-time Markov chain process
• Survival rate and hazard rate function
• Transition probability matrix estimation with the Cohort approach
• Transition probability matrix estimation with the Duration approach
• CreditMetrics™ framework
Session 1
part 1
Markov Chain Theory
(Discrete Time)
The concepts of discrete-time Markov chain
process
Example: tossing a coin three times
The probability of getting Head (H) on the first toss, Tail (T) on the second toss, and Tail (T) on the third toss is:
P(HTT) = P(H) P(T) P(T) = (0.5)(0.5)(0.5) = 0.125
The concepts of discrete-time Markov chain
process
Probability of brand use in month 3 if in month 1 a customer uses the product with brand A (probability tree):

Month 1: Brand A
Month 2: Brand A with probability 0.6, or Brand B with probability 0.4
Month 3 (paths starting from Brand A in month 1):
• A → A → A: 0.6 × 0.6 = 0.36
• A → A → B: 0.6 × 0.4 = 0.24
• A → B → A: 0.4 × 0.2 = 0.08
• A → B → B: 0.4 × 0.8 = 0.32

The probability that a consumer will use the product with brand B in month 3, given that he uses brand A in month 1, is 0.24 + 0.32 = 0.56.
The concepts of discrete-time Markov chain
process
Transition matrix (from the example):

                         Next month
                      Brand A   Brand B
Current    Brand A      0.6       0.4
month      Brand B      0.2       0.8

The transition matrix above is derived from the conditional probability concept. In the example above, 0.6 is the probability of using brand A next month given that brand A is used this month.
The concepts of discrete-time Markov chain
process
Probability of switching brands by the third month if brand A is used in the first month:

Starting from brand A in month 1, the state vector is (1, 0). Then:

after one month: $(1, 0)\,P = (0.6, 0.4)$
after two months: $(0.6, 0.4)\,P = (0.44, 0.56)$

so the probability of using brand B in month 3 is 0.56. Equivalently, the two-step transition matrix is

$$P^{2} = \begin{pmatrix} 0.6 & 0.4 \\ 0.2 & 0.8 \end{pmatrix}\begin{pmatrix} 0.6 & 0.4 \\ 0.2 & 0.8 \end{pmatrix} = \begin{pmatrix} 0.44 & 0.56 \\ 0.28 & 0.72 \end{pmatrix}$$

so that $P^{2}_{A,A} = 0.44$, $P^{2}_{A,B} = 0.56$, $P^{2}_{B,A} = 0.28$, $P^{2}_{B,B} = 0.72$ (states ordered Brand A, Brand B).
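As a quick numerical check of this two-step calculation, a minimal sketch (the matrix values are the ones from the example above):

```python
import numpy as np

# One-step brand transition matrix: rows = current month, columns = next month
P = np.array([[0.6, 0.4],    # from brand A
              [0.2, 0.8]])   # from brand B

start = np.array([1.0, 0.0])        # customer uses brand A in month 1
month2 = start @ P                  # (0.6, 0.4)
month3 = month2 @ P                 # (0.44, 0.56)
print(month3[1])                    # probability of brand B in month 3: 0.56
print(np.linalg.matrix_power(P, 2)) # two-step matrix [[0.44 0.56], [0.28 0.72]]
```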
The concepts of discrete-time Markov chain
process
Let us consider an economic or physical system S with
m possible states, represented by the set I :
I= {1, 2,…,m}
Let the system S evolve randomly in discrete time (t= 0,
1, 2,…,n,…), and let Xn be the r.v. representing the
state of the system S at time n.
If Xn =i, then the process is said to be in state i at time
n.
Suppose that whenever the process is in state i, there
is a fixed probability Pij that it will next be in state j.
$$P(X_{n+1} = j \mid X_n = i, X_{n-1} = i_{n-1}, \dots, X_1 = i_1, X_0 = i_0) = P_{ij}$$
for all states i0,i1,…,in-1, i, j and all n≥0.
The concepts of discrete-time Markov chain
process
Such a stochastic process is known as a Markov chain.
In a Markov chain, the conditional distribution of any future state X_{n+1}, given the past states X_0, X_1, …, X_{n-1} and the present state X_n, is independent of the past states and depends only on the present state.
The value Pij represents the probability that the
process will, when in state i, next make a transition
into state j.
The concepts of discrete-time Markov chain
process
Since probabilities are nonnegative and since the
process must make a transition into some state, we
have that
$$P_{ij} \ge 0, \quad i, j \ge 0; \qquad \sum_{j=0}^{\infty} P_{ij} = 1, \quad i = 0, 1, 2, \dots$$

Let P denote the matrix of one-step transition probabilities P_{ij}, so that

$$P = \begin{pmatrix} P_{00} & P_{01} & P_{02} & \cdots \\ P_{10} & P_{11} & P_{12} & \cdots \\ \vdots & \vdots & \vdots & \\ P_{i0} & P_{i1} & P_{i2} & \cdots \\ \vdots & \vdots & \vdots & \end{pmatrix}$$

Note: when the one-step transition probabilities are independent of the time variable n, the Markov chain is said to have stationary transition probabilities.
The concepts of discrete-time Markov chain
process
Example (transforming a process into a Markov
chain)
• Suppose that whether or not it rains today depends on
previous weather conditions through the last two days.
That is, suppose that if it has rained for the past two
days, then it will rain tomorrow with probability 0.7; if it
rained today but not yesterday, then it will rain tomorrow
with probability 0.5; if it rained yesterday but not today,
then it will rain tomorrow with probability 0.4; if it has not
rained in the past two days, then it will rain tomorrow
with probability 0.2.
Questions:
• Define the possible states in this situation
• Construct the transition matrix for this scenario
The concepts of discrete-time Markov chain
process
Answers:
• If we let the state at time n depend only on whether or not it is raining at time n, then the above situation is not a Markov chain. Why not?
• To transform the above situation into a Markov chain, we may define:
• State 0 if it rained both today and yesterday
• State 1 if it rained today but not yesterday
• State 2 if it rained yesterday but not today
• State 3 if it did not rain either yesterday or today
The concepts of discrete-time Markov chain
process
Answers:
• A four-state Markov chain having the transition probability matrix:

$$P = \begin{pmatrix} 0.7 & 0 & 0.3 & 0 \\ 0.5 & 0 & 0.5 & 0 \\ 0 & 0.4 & 0 & 0.6 \\ 0 & 0.2 & 0 & 0.8 \end{pmatrix}$$
The concepts of discrete-time Markov chain
process
Chapman-Kolmogorov equations
• After defining P_{ij}, we now define the m-step (time period) transition probabilities P_{ij}^{n,n+m} as the probability that a process in state i at time n will be in state j after m additional transitions (where P_{ij}^{1} = P_{ij}). That is:

$$P_{ij}^{n,n+m} = P(X_{n+m} = j \mid X_n = i), \quad n, m \ge 0; \; i, j \ge 0$$

• The Chapman-Kolmogorov equations provide a method for computing these m-step transition probabilities. The equations are:

$$P_{ij}^{n+m} = \sum_{k=0}^{\infty} P_{ik}^{n} P_{kj}^{m} \quad \text{for all } n, m \ge 0; \; i, j \ge 0$$
The concepts of discrete-time Markov chain
process
Chapman-Kolmogorov equations
• Summing over all intermediate states k yields the probability that the process will be in state j after n + m transitions. Formally, we have:

$$P_{ij}^{n+m} = P(X_{n+m} = j \mid X_0 = i) = \sum_{k=0}^{\infty} P(X_{n+m} = j, X_n = k \mid X_0 = i)$$
$$= \sum_{k=0}^{\infty} P(X_{n+m} = j \mid X_n = k, X_0 = i)\, P(X_n = k \mid X_0 = i) = \sum_{k=0}^{\infty} P_{kj}^{m} P_{ik}^{n}$$

• If we let P^{(n)} denote the matrix of n-step transition probabilities P_{ij}^{n}, then the equation above asserts that

$$P^{(n+m)} = P^{(n)} \cdot P^{(m)}$$
The concepts of discrete-time Markov chain
process
Chapman-Kolmogorov equations
• Suppose the one-step transition probability matrix is given by (weather: rain (0) and no rain (1)):

$$P = \begin{pmatrix} 0.7 & 0.3 \\ 0.4 & 0.6 \end{pmatrix}$$

• The probability that it will rain four days from today, given that it is raining today, is $P^{4}_{00}$:

$$P^{(2)} = P \cdot P = \begin{pmatrix} 0.7 & 0.3 \\ 0.4 & 0.6 \end{pmatrix}\begin{pmatrix} 0.7 & 0.3 \\ 0.4 & 0.6 \end{pmatrix} = \begin{pmatrix} 0.61 & 0.39 \\ 0.52 & 0.48 \end{pmatrix}$$

$$P^{(4)} = P^{(2)} \cdot P^{(2)} = \begin{pmatrix} 0.61 & 0.39 \\ 0.52 & 0.48 \end{pmatrix}\begin{pmatrix} 0.61 & 0.39 \\ 0.52 & 0.48 \end{pmatrix} = \begin{pmatrix} 0.5749 & 0.4251 \\ 0.5668 & 0.4332 \end{pmatrix}$$

so $P^{4}_{00} = 0.5749$.
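The same four-step probability can be checked numerically; a minimal sketch using the matrix above:

```python
import numpy as np

P = np.array([[0.7, 0.3],   # from state 0 (rain)
              [0.4, 0.6]])  # from state 1 (no rain)

P4 = np.linalg.matrix_power(P, 4)
print(P4[0, 0])  # probability of rain four days from today given rain today: 0.5749
```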
The definitions and properties of
discrete-time Markov chain process
Definition 2.1 The random sequence (X_n, n ∈ ℕ) is a Markov chain if for all j_0, j_1, …, j_n ∈ I:

$$P(X_n = j_n \mid X_0 = j_0, X_1 = j_1, \dots, X_{n-1} = j_{n-1}) = P(X_n = j_n \mid X_{n-1} = j_{n-1})$$

(provided this probability has meaning).
Definition 2.2 A Markov chain (X_n, n ≥ 0) is homogeneous if these probabilities do not depend on n, and non-homogeneous otherwise.
For the moment, we will only consider the homogeneous case, for which we write:

$$P(X_n = j \mid X_{n-1} = i) = p_{ij}$$

and we introduce the matrix P defined as P = [p_{ij}].
The definitions and properties of
discrete-time Markov chain process
Definition 2.2
The elements of the matrix P have the following properties:
• $p_{ij} \ge 0$ for all $i, j \in I$
• $\sum_{j \in I} p_{ij} = 1$ for all $i \in I$
A matrix P satisfying these two conditions is called a Markov
matrix or a transition matrix.
Definition 2.3 A Markov matrix P is regular if there exists a
positive integer k, such that all the elements of the matrix P(k)
are strictly positive.
The definitions and properties of
discrete-time Markov chain process
Definition 2.3 (example)
If

$$P = \begin{pmatrix} 0.5 & 0.5 \\ 1 & 0 \end{pmatrix}$$

we have

$$P^{(2)} = \begin{pmatrix} 0.75 & 0.25 \\ 0.5 & 0.5 \end{pmatrix}$$

so that P is regular.
The transition graph associated to P is given in Figure
below.
The definitions and properties of
discrete-time Markov chain process
Definition 2.3 (example)
If

$$P = \begin{pmatrix} 1 & 0 \\ 0.75 & 0.25 \end{pmatrix}$$

P is not regular, because for any integer k, $P_{12}^{(k)} = 0$.
The transition graph associated to P is given in figure
below.
The definitions and properties of
discrete-time Markov chain process
Let i∈I, and let d(i) be the greatest common divisor of the set of
integers n, such that Pii(n)>0
Definition 2.4 If d(i) >1, the state i is said to be periodic with
period d(i). If d(i) =1, then state i is aperiodic.
Clearly, if pii > 0, then i is aperiodic. However, the converse is
not necessarily true.
Remark: If P is regular, then all the states are aperiodic.
Discrete-time Markov chain state
classification
Definition 2.5 A Markov chain is said to be irreducible if there
exists only one equivalence class.
The Markov chain is said to be irreducible if there is only one
class, that is, if all states communicate with each other.
Clearly, if P is regular, the Markov chain is both irreducible and
aperiodic. Such a Markov chain is said to be ergodic.
Discrete-time Markov chain state
classification
Definition 2.6 A subset E of the state space I is said to be
closed if:
$$\sum_{j \in E} p_{ij} = 1 \quad \text{for all } i \in E$$
It can be shown that every essential class is minimally closed.
See Chung (1960).
For further explanation of the Markov chain state classification definitions, see Appendix 1.
Session 1
part 2
Markov Chain Theory
(Continuous Time)
Continuous-time Markov chain process
Suppose we have a continuous-time stochastic
process {X(t), t≥0} taking on values in the set of
nonnegative integers.
We say that the process {X(t), t ≥ 0} is a continuous-time Markov chain if for all s, t ≥ 0 and nonnegative integers i, j, x(u), 0 ≤ u < s,

$$P\{X(s+t) = j \mid X(s) = i, X(u) = x(u), 0 \le u < s\} = P\{X(s+t) = j \mid X(s) = i\}$$
Continuous-time Markov chain process
In other words, a continuous-time Markov chain is a stochastic process having the Markovian property that the conditional distribution of the future X(s+t), given the present X(s) and the past X(u), 0 ≤ u < s, depends only on the present and is independent of the past. If, in addition,

$$P\{X(s+t) = j \mid X(s) = i\}$$

is independent of s, then the continuous-time Markov chain is said to have stationary or homogeneous transition probabilities.
Continuous-time Markov chain process
A continuous-time Markov chain can also be defined as a stochastic process having the property that each time it enters state i:
(i) the amount of time it spends in that state before making a transition into a different state is exponentially distributed with mean, say, 1/v_i;
(ii) when the process leaves state i, it next enters state j with some probability, say P_{ij}. Of course, the P_{ij} must satisfy

$$P_{ii} = 0 \text{ for all } i, \qquad \sum_{j} P_{ij} = 1 \text{ for all } i$$
Continuous-time Markov chain process
Example:
Suppose that a continuous-time Markov chain enters state i
at some time, say time 0, and suppose that the process
does not leave state i (that is, a transition does not occur)
during the next ten minutes.
What is the probability that the process will not leave state i
during the following five minutes?
Solution:
• Now since the process is in state i at time 10 it follows, by
Markovian property, that the probability that it remains in that
state during the interval [10,15] is just the (conditional)
probability that it stays in state i for at least five minutes.
Continuous-time Markov chain process
• Solution:
• That is, if we let Ti denote the amount of time that the process
stays in state i before making a transition into a different state,
then
$$P\{T_i > 15 \mid T_i > 10\} = P\{T_i > 5\}$$

or, in general, by the same reasoning,

$$P\{T_i > s + t \mid T_i > s\} = P\{T_i > t\} \quad \text{for all } s, t \ge 0$$
• Hence, the random variable Ti is memoryless and must thus be
exponentially distributed.
Continuous-time Markov chain process
The Kolmogorov differential equations
• For any pair of states i and j let

$$q_{ij} = v_i P_{ij}$$

• Since v_i is the rate at which the process makes a transition when in state i and P_{ij} is the probability that this transition is into state j, it follows that q_{ij} is the rate, when in state i, at which the process makes a transition into state j.
• The quantities q_{ij} are called the instantaneous transition rates.
• Since

$$v_i = \sum_j v_i P_{ij} = \sum_j q_{ij} \qquad \text{and} \qquad P_{ij} = \frac{q_{ij}}{\sum_j q_{ij}}$$

it follows that specifying the instantaneous transition rates determines the parameters of the process.
Continuous-time Markov chain process
The Kolmogorov differential equations
• Let

$$P_{ij}(t) = P\{X(s+t) = j \mid X(s) = i\}$$

represent the probability that a process presently in state i will be in state j a time t later.
• We shall attempt to derive a set of differential equations for these transition probabilities P_ij(t); for that, we will first need the following two lemmas.
• Lemma 1

$$\text{(a)} \;\; \lim_{h \to 0} \frac{1 - P_{ii}(h)}{h} = v_i \qquad\qquad \text{(b)} \;\; \lim_{h \to 0} \frac{P_{ij}(h)}{h} = q_{ij} \;\; \text{when } i \ne j$$
Continuous-time Markov chain process
Proof of Lemma 1, part (a):
• Note that since the amount of time until a transition occurs is exponentially distributed, it follows that the probability of two or more transitions in a time h is o(h).
• Thus 1 − P_ii(h), the probability that a process in state i at time 0 will not be in state i at time h, equals the probability that a transition occurs within time h plus something small compared to h:

$$1 - P_{ii}(h) = v_i h + o(h)$$

Thus part (a) is proven.
Continuous-time Markov chain process
Proof of Lemma 1, part (b):
• Note that P_ij(h), the probability that a process goes from state i to state j in a time h, equals the probability that a transition occurs in this time multiplied by the probability that the transition is into state j, plus something small compared to h:

$$P_{ij}(h) = h\, v_i P_{ij} + o(h) = h\, q_{ij} + o(h)$$

Thus, part (b) is proven.
Continuous-time Markov chain process
The Kolmogorov differential equations
• Lemma 2

$$P_{ij}(s+t) = \sum_{k=0}^{\infty} P_{ik}(s) P_{kj}(t) \quad \text{for all } s \ge 0, \; t \ge 0$$
Continuous-time Markov chain process
Proof of Lemma 2:
• In order for the process to go from state i to state j in time s + t, it must be in some state k at time s, and thus

$$P_{ij}(s+t) = P\{X(s+t) = j \mid X(0) = i\} = \sum_{k=0}^{\infty} P\{X(s+t) = j, X(s) = k \mid X(0) = i\}$$
$$= \sum_{k=0}^{\infty} P\{X(s+t) = j \mid X(s) = k, X(0) = i\} \cdot P\{X(s) = k \mid X(0) = i\}$$
$$= \sum_{k=0}^{\infty} P\{X(s+t) = j \mid X(s) = k\} \cdot P\{X(s) = k \mid X(0) = i\} = \sum_{k=0}^{\infty} P_{kj}(t) P_{ik}(s)$$

Thus, Lemma 2 is proven.
Continuous-time Markov chain process
The Kolmogorov differential equations
• The set of equations in Lemma 2 is known as the Chapman-Kolmogorov equations.
• From Lemma 2, taking the small interval h before time t (i.e., the time h + t), we obtain

$$P_{ij}(h+t) - P_{ij}(t) = \sum_{k=0}^{\infty} P_{ik}(h) P_{kj}(t) - P_{ij}(t) = \sum_{k \ne i} P_{ik}(h) P_{kj}(t) - \left[1 - P_{ii}(h)\right] P_{ij}(t)$$

and thus

$$\lim_{h \to 0} \frac{P_{ij}(h+t) - P_{ij}(t)}{h} = \lim_{h \to 0} \left\{ \sum_{k \ne i} \frac{P_{ik}(h)}{h} P_{kj}(t) - \frac{1 - P_{ii}(h)}{h} P_{ij}(t) \right\}$$
Continuous-time Markov chain process
The Kolmogorov differential equations
• Now, assuming that we can interchange the limit and the summation in the above and applying Lemma 1, we obtain

$$P'_{ij}(t) = \sum_{k \ne i} q_{ik} P_{kj}(t) - v_i P_{ij}(t)$$

• It turns out that the above interchange can indeed be justified and, hence, we have the following theorem.
• Theorem 1 (Kolmogorov's backward equations). For all states i, j and times t ≥ 0,

$$P'_{ij}(t) = \sum_{k \ne i} q_{ik} P_{kj}(t) - v_i P_{ij}(t)$$
Continuous-time Markov chain process
The Kolmogorov differential equations
• Another set of differential equations, different from the backward equations and known as Kolmogorov's forward equations, is derived as follows.
• From the Chapman-Kolmogorov equations (Lemma 2), now taking the small interval h after time t (i.e., the time t + h), we obtain

$$P_{ij}(t+h) - P_{ij}(t) = \sum_{k=0}^{\infty} P_{ik}(t) P_{kj}(h) - P_{ij}(t) = \sum_{k \ne j} P_{ik}(t) P_{kj}(h) - \left[1 - P_{jj}(h)\right] P_{ij}(t)$$

and thus

$$\lim_{h \to 0} \frac{P_{ij}(t+h) - P_{ij}(t)}{h} = \lim_{h \to 0} \left\{ \sum_{k \ne j} P_{ik}(t) \frac{P_{kj}(h)}{h} - \frac{1 - P_{jj}(h)}{h} P_{ij}(t) \right\}$$
Continuous-time Markov chain process
The Kolmogorov differential equations
• And, assuming that we can interchange limit with summation, we obtain by Lemma 1 that

$$P'_{ij}(t) = \sum_{k \ne j} q_{kj} P_{ik}(t) - v_j P_{ij}(t)$$

• Unfortunately, we cannot always justify the interchange of limit and summation, and thus the above is not always valid.
• However, it does hold in most models, including all birth and death processes and all finite-state models.
• We thus have Theorem 2 (Kolmogorov's forward equations): under suitable regularity conditions,

$$P'_{ij}(t) = \sum_{k \ne j} q_{kj} P_{ik}(t) - v_j P_{ij}(t)$$
Computing the transition probabilities in
continuous-time Markov chain process
For any pair of states i and j let

$$r_{ij} = \begin{cases} q_{ij} & \text{if } i \ne j \\ -v_i & \text{if } i = j \end{cases}$$

Using this notation, we can rewrite the Kolmogorov backward equations,

$$P'_{ij}(t) = \sum_{k \ne i} q_{ik} P_{kj}(t) - v_i P_{ij}(t)$$

and the forward equations,

$$P'_{ij}(t) = \sum_{k \ne j} q_{kj} P_{ik}(t) - v_j P_{ij}(t)$$
Computing the transition probabilities in
continuous-time Markov chain process
as follows:

$$P'_{ij}(t) = \sum_{k} r_{ik} P_{kj}(t) \quad \text{(backward equations)}$$

$$P'_{ij}(t) = \sum_{k} r_{kj} P_{ik}(t) \quad \text{(forward equations)}$$
This representation is especially revealing when we introduce
matrix notation.
Computing the transition probabilities in
continuous-time Markov chain process
Define the matrices R, P(t), and P’(t) by letting the element
in row i, column j of these matrices be, respectively, rij,
Pij(t), and P’ij(t).
Since the backward equations say that the element in row
i, column j of the matrix P’(t) can be obtained by multiplying
the i-th row of the matrix R by the j-th column of the matrix
P(t), it is equivalent to the matrix equation
$$P'(t) = R\,P(t)$$

Similarly, the forward equations can be written as

$$P'(t) = P(t)\,R$$
Computing the transition probabilities in
continuous-time Markov chain process
Now, just as the solution of the scalar differential equation f'(t) = c f(t) (or, equivalently, f'(t)/f(t) = c) is f(t) = f(0) e^{ct}, it can be shown that the solution of the matrix differential equations is given by

$$P(t) = P(0)\, e^{Rt}$$

Since P(0) = I (the identity matrix), this yields

$$P(t) = e^{Rt}$$

where the matrix e^{Rt} is defined by

$$e^{Rt} = \sum_{n=0}^{\infty} R^{n} \frac{t^{n}}{n!}$$
Computing the transition probabilities in
continuous-time Markov chain process
Notes:
(i) The matrix R contains both positive and negative
elements (remember the off-diagonal elements are the qij
while the i-th diagonal element is –vi)
(ii) We often have to compute many of the terms in the
infinite sum (above equation) to arrive at a good
approximation.
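The truncated series above can be evaluated directly; a minimal sketch, with a made-up two-state generator R (not a matrix from the lecture):

```python
import numpy as np

def transition_matrix(R, t, n_terms=30):
    """Approximate P(t) = exp(R t) by truncating the series sum_n (R t)^n / n!."""
    P = np.eye(R.shape[0])
    term = np.eye(R.shape[0])
    for n in range(1, n_terms):
        term = term @ (R * t) / n   # builds (R t)^n / n! recursively
        P += term
    return P

# Hypothetical generator: off-diagonal elements are q_ij, the i-th diagonal element is -v_i
R = np.array([[-0.3,  0.3],
              [ 0.1, -0.1]])
print(transition_matrix(R, t=1.0))  # each row sums to 1, as a transition matrix should
```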
Session 1
part 3
Estimation of Transition
Probability Matrix
(Cohort and Duration Approach)
The credit quality migration probability
The credit quality migration probability can be derived within the Markov chain framework, which divides into:
• Discrete-time Markov chain process (Cohort approach)
• Continuous-time Markov chain process
   – Homogeneous-time model (Duration or transition-intensity approach)
   – Inhomogeneous-time model
In this session we discuss the Markov chain definitions and properties that are useful in deriving our model of migration probability.
Cohort approach
Cohort definition:
• A cohort is a group of people (elements or observation units) who share a common characteristic or experience within a defined period (e.g., are born, leave a rating, lose their job, etc.)
• The comparison group may be the general population from which the cohort is drawn; alternatively, subgroups within the cohort may be compared with each other
• In credit risk (transition matrix estimation), the common characteristic is defined as the probability of leaving one state for another state in the rating system
• It is sometimes called the frequency approach because the probability of leaving a rating is calculated from the number of people (elements or observation units) that leave
Cohort approach
Let:
• p_ij(Δt) be the probability of migrating from grade i to j over a horizon (or sampling interval) Δt (e.g., Δt = 1 year). Suppose there are N_i firms in rating category i at the beginning of the year and N_ij of them have migrated to grade j by year-end.

An estimate of the transition probability p_ij (Δt = 1 year) is:

$$\hat{P}_{ij} = \frac{N_{ij}}{N_i} \quad \text{for } i \ne j$$

For more than one observation period (e.g., 5 years of observation):
• Use the arithmetic mean of the yearly estimates to get the transition probability for the next one year:

$$\bar{P}_{ij} = \frac{1}{T} \sum_{t=1}^{T} \hat{P}_{ij}(t) \quad \text{for } i \ne j$$

where T is the number of one-year observation periods.
Duration approach
Time-homogeneous continuous-time Markov chain
• Also called the Duration or transition-intensity approach
• Follows the Markov chain framework in continuous, homogeneous time
• Homogeneous time means the transition intensities do not vary with the time period t (they are independent of time)
• The transition probability matrix is obtained as the matrix exponential of the generator matrix (Λ):

$$P(t) = \exp(\Lambda t), \quad t \ge 0$$

where the matrix exponential of the generator matrix is defined by the series:

$$e^{\Lambda t} = \sum_{k=0}^{\infty} \frac{(\Lambda t)^{k}}{k!} = I + \frac{\Lambda t}{1!} + \frac{(\Lambda t)^{2}}{2!} + \dots$$
Duration approach
The generator matrix Λ
• Example:

$$\Lambda = \begin{pmatrix}
\lambda_{AAA,AAA} & \lambda_{AAA,AA} & \lambda_{AAA,A} & \cdots & \lambda_{AAA,D} \\
\lambda_{AA,AAA} & \lambda_{AA,AA} & \lambda_{AA,A} & \cdots & \lambda_{AA,D} \\
\lambda_{A,AAA} & \lambda_{A,AA} & \lambda_{A,A} & \cdots & \lambda_{A,D} \\
\vdots & \vdots & \vdots & & \vdots \\
\lambda_{D,AAA} & \lambda_{D,AA} & \lambda_{D,A} & \cdots & \lambda_{D,D}
\end{pmatrix}$$

• The generator matrix is indexed by the discrete rating states; the transition probability matrix is then obtained from it via the matrix exponential (exponential function of Λ).
Duration approach
The generator matrix Λ
• Λ is a K × K matrix
• Its off-diagonal elements are obtained with the maximum likelihood estimator (see, for example, Küchler and Sørensen, 1997):

$$\hat{\lambda}_{i,j} = \frac{N_{i,j}(T)}{\int_0^T Y_i(t)\, dt}$$

where the denominator measures the total time the observation units spend in rating i, and:
• Y_i(s) is the number of firms in rating class i at time s, and
• N_ij(T) is the total number of transitions over the period from i to j, where i ≠ j.
Duration approach
The generator matrix Λ
• The entries of the generator Λ satisfy:
• First condition: $\lambda_{i,j} \ge 0$ for $i \ne j$
• Second condition: $\lambda_{i,i} = -\sum_{j \ne i} \lambda_{i,j}$
• Third condition: $\sum_{j} \lambda_{i,j} = 0$ for every row i
Example
An illustration of the calculation for the Cohort and Duration approaches
Example
Illustration:
• To illustrate the estimators, consider a rating system consisting of two non-default rating categories A and B and a default category D.
• Assume that we observe the history of 20 firms over one year, of which 10 start in category A and 10 in category B.
• Assume that over the year of observation, one A-rated firm changes its rating to category B after one month and stays there for the rest of the year.
• Assume that over the same period, one B-rated firm is upgraded after two months and remains in A for the rest of the period, and a firm which started in B defaults after six months and stays in default for the remaining part of the period.
Solution: Transition probability matrix
estimation with Cohort approach
Migration counts over the year:

From \ To      A      B      D      Total
A              9      1      0      10
B              1      8      1      10
D              0      0      0      0
Total          10     9      1

Or, stated in probability form:

From \ To      A        B        D        Total
A              0.900    0.100    0.000    1.000
B              0.100    0.800    0.100    1.000
D              0.000    0.000    0.000    0.000
Total          1.000    0.900    0.100
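A small sketch of the cohort estimator applied to the counts in the table above:

```python
import numpy as np

# Year-end migration counts: rows = rating at start (A, B, D), columns = rating at year-end
counts = np.array([[9, 1, 0],
                   [1, 8, 1],
                   [0, 0, 0]])

start = counts.sum(axis=1)  # firms per rating at the start of the year: [10, 10, 0]
with np.errstate(divide="ignore", invalid="ignore"):
    P_hat = np.where(start[:, None] > 0, counts / start[:, None], 0.0)
print(P_hat)  # [[0.9 0.1 0. ], [0.1 0.8 0.1], [0. 0. 0. ]]
```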
Solution: Transition probability matrix
estimation with Duration approach
• Duration approach
The generator matrix elements are obtained from the observed transitions and the time spent in each rating; for example:

$$\hat{\lambda}_{B,A} = \frac{N_{B,A}(1)}{\int_0^1 Y_B(t)\, dt} = \frac{1}{8 + \tfrac{2}{12} + \tfrac{11}{12} + \tfrac{6}{12}} = 0.10435$$

(eight firms spend the whole year in B, the upgraded firm spends 2 months in B, the downgraded firm 11 months, and the defaulting firm 6 months).

Generator matrix (rows and columns ordered A, B, D):

$$\hat{\Lambda} = \begin{pmatrix} -0.10084 & 0.10084 & 0 \\ 0.10435 & -0.20870 & 0.10435 \\ 0 & 0 & 0 \end{pmatrix}$$
Solution: Transition probability matrix
estimation with Duration approach
Duration approach
• Applying the matrix exponential to the generator matrix gives the estimated one-year transition matrix (rows and columns ordered A, B, D):

$$\hat{P}(1) = e^{\hat{\Lambda}} = \begin{pmatrix} 0.90867 & 0.08657 & 0.00475 \\ 0.08959 & 0.81607 & 0.09434 \\ 0 & 0 & 1 \end{pmatrix}$$
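A sketch reproducing the duration-approach result above with SciPy's matrix exponential (the generator entries are the estimates from the previous slide):

```python
import numpy as np
from scipy.linalg import expm

# Estimated generator for ratings A, B, D (rows sum to zero, D is absorbing)
L = np.array([[-0.10084,  0.10084,  0.0],
              [ 0.10435, -0.20870,  0.10435],
              [ 0.0,      0.0,      0.0]])

P1 = expm(L)             # one-year transition matrix exp(Lambda * 1)
print(np.round(P1, 5))   # approx. [[0.90867 0.08657 0.00475], [0.08959 0.81607 0.09434], [0. 0. 1.]]
```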
Session 2
part 1
Application:
Portfolio Credit Risk and the
CreditMetrics™ Methodology
adapted from the CreditMetrics Group
Portfolio credit-risk
management
Definition:
• The management of the aggregated risk, resulting from credit events, of all credit-sensitive exposures, taking diversification and mitigation into consideration.

Diversification
• If I have a big exposure to alternative energy companies, what is the chance that they will default together with traditional energy companies?

Risk mitigation
• Netting
• Collateralisation, guarantees, …
• Credit derivatives

Examples:
• If the PD of bond A is 5% and of bond B is 6%, what is the chance that both will fail together?
• If I have a loan collateralised by 100%, or covered by a CDS, what are the chances that I will lose both the loan and the protection together?
Sample applications of credit
risk management
Regulatory requirement
• Basel II, Solvency II: a proxy to best market practices
Identifying concentration risk
Portfolio based pricing
• If both companies have the same credit rating, which one should I take? Which one should I give a better price: the alternative or the traditional energy company?
What is the profitability (or cost) of a client relationship?
• We often have to sell at market price; what does it mean for our P&L?
What is the most cost-effective hedging strategy?
• Diversification? Collateralisation? Securitisation? Derivatives?
What are our asset allocation options to optimise our risk/return ratio?
• By industry, credit rating, currency
Do you know what you have?
• Lahde Capital made 1,000% in 2007 (over $20b) by doing their own credit pricing
Different Approaches to
Portfolio Credit Risk
Merton-based
• Bottom-up / microeconomic causal model of default
• CreditMetrics (RiskMetrics)
• PortfolioManager (KMV)
Actuarial
• Loss distribution techniques from the insurance industry
• Top-down / no causal model of default
• CreditRisk+ (CSFB)
Econometric
• PD as a function of a logit transformation of indices of macroeconomic factors
• Bottom-up / macroeconomic causal model of default
• CreditPortfolioView (McKinsey)
CreditMetrics Methodology
Horizon value: modelling the credit risk associated with credit defaults and migration
• Hull & White model
• use of transition matrices
• calculation of horizon value
• P&L calculation
Factor model: from asset returns we define credit migration
Correlation: modelling the credit risk correlation
• Modelling joint probabilities
Statistical (VaR) analysis: capturing credit risk
Distribution of portfolio values
at horizon date
Expected Loss
• Expected Loss = Current Value − Expected Horizon Value
Value at Risk
• Specify a VaR confidence level (e.g., 99%)
• Economic (risk) capital is the amount the portfolio could lose with 1% probability
Methodology overview – 3 main
steps
Step 1 - Specify possible credit
rating states and probabilities of
obligors
Transition Matrix
Today's Rating   Next Year's Rating (%)
                 AAA     AA      A       BBB     BB      B       CCC     Default
AAA              91.35   8.00    0.70    0.10    0.05    0.01    0.01    0.01
AA               0.70    91.03   7.47    0.60    0.10    0.07    0.02    0.01
A                0.10    2.34    91.54   5.08    0.61    0.26    0.01    0.05
BBB              0.02    0.30    5.65    87.98   4.75    1.05    0.10    0.15
BB               0.01    0.11    0.06    7.77    81.77   7.95    0.85    1.00
B                0.00    0.05    0.25    0.05    7.00    83.50   3.75    5.00
CCC              0.00    0.01    0.10    0.30    2.59    12.00   65.00   20.00
Default          0.00    0.00    0.00    0.00    0.00    0.00    0.00    100.00
Step 2 – Revalue instruments in
each future credit state
Use credit spreads to revalue in non-default states
Use recovery rate estimate to revalue in default state
Incorporate any collateral, guarantee, etc.
E.g., for a senior unsecured corporate bond (Industrial) we may use…
Present Value and Horizon value
Non-default case
• Bond: BBB, 6-year maturity, 6% annual coupon
• Step 1: cash flows
• Step 2: mark-to-market (MTM) at horizon
• Horizon Value = Risk-Free Value − Expected Loss
Default case
• Recovery rate simulation
Hull-White
Credit Risk Adjusted Value
• Default-risky bond price = Risk-free bond price − PV of Expected Loss
where
• PV of Expected Loss for each cash flow = Risk-free discount factor × PD × LGD
where
• LGD = Forward risk-free price − Mean recovery rate × Recovery claim in default
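A minimal sketch of this adjustment for a single bond, assuming hypothetical cash flows, discount factors, default probabilities, recovery inputs and forward prices (none of these numbers come from the lecture):

```python
# Hypothetical inputs for a 3-year annual-coupon bond (face 100, coupon 6%)
cash_flows = [6.0, 6.0, 106.0]               # cash flow at the end of each year
discount = [0.97, 0.94, 0.91]                # risk-free discount factor per date
pd_in_year = [0.02, 0.02, 0.02]              # (risk-neutral) default probability per year
recovery_rate = 0.45                         # assumed mean recovery rate
claim = [100.0, 100.0, 100.0]                # recovery claim in default (e.g. face value)
fwd_riskfree_price = [108.0, 104.0, 100.0]   # forward risk-free bond price at each date

risk_free_price = sum(cf * df for cf, df in zip(cash_flows, discount))

# PV of expected loss = sum over dates of DF * PD * LGD,
# with LGD = forward risk-free price - mean recovery rate * recovery claim
pv_expected_loss = sum(
    df * pd * (fwd - recovery_rate * c)
    for df, pd, fwd, c in zip(discount, pd_in_year, fwd_riskfree_price, claim)
)

risky_price = risk_free_price - pv_expected_loss
print(round(risk_free_price, 2), round(pv_expected_loss, 2), round(risky_price, 2))
```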
Valuation at Horizon
All exposures to each obligor
valued in consistent framework
Step 3 - Obligor credit correlations
based on obligor mappings to
country/industry indices
Joint migration probabilities (no
simulation)
Factor Model: obligor
correlations
Correlations driven by asset
value distributions
Assume a connection between asset value and credit rating
(premise behind Merton model)
Transition probabilities (from TM) give us asset return
“thresholds”
Factors give obligor correlations,
which give rise to joint credit
migration behavior
Monte Carlo simulations then let us
calculate portfolio statistics
(Figure: rating states AAA, AA, A, BBB, BB, B, CCC, D on both axes)
Monte Carlo Simulation
We simulate the
possible credit
migrations of each
obligor based on their
transition probabilities
and on the pair-wise
asset correlations
between the obligors
Monte Carlo Simulation
In each rating state, we know the value of all instruments tied to that particular obligor.
For each trial, we sum up the value of the instruments tied to each obligor.
This gives us the portfolio value distribution.
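A compact sketch of this simulation step for two obligors; the migration probabilities, state values, and asset correlation below are illustrative assumptions rather than the lecture's data:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Hypothetical one-year migration probabilities for one obligor,
# ordered from worst to best outcome: [default, downgrade, stay, upgrade]
probs = np.array([0.02, 0.08, 0.85, 0.05])
values = np.array([45.0, 96.0, 101.0, 103.0])   # hypothetical instrument value in each end state

# Asset-return thresholds: the lowest asset returns map to default (Merton premise)
thresholds = norm.ppf(np.cumsum(probs))          # last entry is effectively +inf

rho = 0.3                                        # assumed pairwise asset correlation
chol = np.linalg.cholesky(np.array([[1.0, rho], [rho, 1.0]]))

n_trials = 100_000
z = rng.standard_normal((n_trials, 2)) @ chol.T  # correlated asset returns for two obligors
states = np.searchsorted(thresholds, z)          # bucket each return into a rating state
portfolio = values[states].sum(axis=1)           # portfolio value per trial

expected = portfolio.mean()
var_99 = expected - np.percentile(portfolio, 1)  # loss exceeded with 1% probability
print(round(expected, 2), round(var_99, 2))
```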
Overview – 3 main steps
Result: portfolio distribution at
horizon
Simulation summary
Distribution of Horizon Values
Analysis Framework
Session 2
part 2
CreditGrades
adapted from the CreditMetrics Group
The mechanics of implied rating
calculation
Why implied ratings?
• They provide a more timely indicator of credit risk than external ratings
Implied ratings can be calculated from market data:
• Bond spread data
• CDS data
• Equity data plus balance sheet information
Once implied ratings are known, we map them to objective default probabilities
Calculate implied ratings using
a structural model
First, let’s consider the structural model approach
RiskMetrics has implemented a structural model, CreditGrades™, that is based on market observables.
Moreover, unlike typical model implementations, the outputs are market-implied (i.e., risk-neutral) measures.
CreditGrades model
CreditGrades Model
Equity-to-credit model
• Input: based on well-understood, freely available market data
• Output: credit spread (a risk-neutral measure)
Asset-based model
• Assets follow a standard Brownian motion
Default threshold uncertainty
• The recovery rate is lognormal
First-passage problem
• Default is modelled as a knock-out barrier
Closed-form solution for any tenor
• Transparent formula
CreditGrades model mechanics
Thanks For
Your Attention
Appendix 1
Extension:
Markov chain state
classification
Discrete-time Markov chain state
classification
Let i∈I, and let d(i) be the greatest common divisor of the set of
integers n, such that Pii(n)>0
Definition 3.1 If d(i) >1, the state i is said to be periodic with
period d(i). If d(i) =1, then state i is aperiodic.
Clearly, if pii > 0, then i is aperiodic. However, the converse is
not necessarily true.
Remark: If P is regular, then all the states are aperiodic.
Definition 3.2 A Markov chain whose states are all aperiodic is
called an aperiodic Markov chain.
From now on, we will have only Markov chains of this type.
Discrete-time Markov chain state
classification
Definition 3.3 A state i is said to lead to state j (written i ► j) if there exists a positive integer n such that P_ij^(n) > 0. We write i ↛ j when i does not lead to j.
Definition 3.4 States i and j are said to communicate if i ► j and j ► i, or if j = i; we write i ◄► j. Two states i and j that are accessible to each other are thus said to communicate.
Note that any state communicates with itself since, by definition,

$$P_{ii}^{0} = P(X_0 = i \mid X_0 = i) = 1$$
Discrete-time Markov chain state
classification
Definition 3.4
The relation of communication satisfies the following three
properties:
i) state i communicates with state i, all i ≥0
ii) if state i communicates with state j, then state j
communicates with state i
iii) if state i communicates with state j, and state j
communicates with state k, then state i
communicates with state k.
By the Chapman-Kolmogorov equations, we have that

$$P_{ik}^{n+m} = \sum_{r=0}^{\infty} P_{ir}^{n} P_{rk}^{m} \ge P_{ij}^{n} P_{jk}^{m} > 0$$
Discrete-time Markov chain state
classification
Definition 3.4
Two states that communicate are said to be in the same class.
It is an easy consequence of (i), (ii), and (iii) that any two
classes of states are either identical or disjoint.
Definition 3.5 A state i is said to be essential if it
communicates with every state it leads to; otherwise it is called
inessential.
The relation ◄► defines an equivalence relation over the state
space I resulting in a partition of I . The equivalence class
containing state i is represented by C(i).
It is easy to show that if the state i is essential (inessential), then all the elements of the class C(i) are essential (inessential); see Chung (1960).
Discrete-time Markov chain state
classification
Definition 3.6 A Markov chain is said to be irreducible if
there exists only one equivalence class.
The Markov chain is said to be irreducible if there is only
one class, that is, if all states communicate with each
other.
Clearly, if P is regular, the Markov chain is both irreducible
and aperiodic. Such a Markov chain is said to be ergodic.
Definition 3.7 A subset E of the state space I is said to be
closed if:
$$\sum_{j \in E} p_{ij} = 1 \quad \text{for all } i \in E$$
It can be shown that every essential class is minimally
closed. See Chung (1960).
Discrete-time Markov chain state
classification
Example:
•
Consider the Markov chain consisting of the three states 0,1,2
and having transition probability matrix:
$$P = \begin{pmatrix} 1/2 & 1/2 & 0 \\ 1/2 & 1/4 & 1/4 \\ 0 & 1/3 & 2/3 \end{pmatrix}$$

• Question: verify that this Markov chain is irreducible.
• Answer: it is possible to go from state 0 to state 2 since

0 → 1 → 2

that is, one way of getting from state 0 to state 2 is to go from state 0 to state 1 (with probability 1/2) and then go from state 1 to state 2 (with probability 1/4).
Discrete-time Markov chain state
classification
Example:
•
Consider the Markov chain consisting of the four states
0,1,2,3 and having transition probability matrix:
$$P = \begin{pmatrix} 1/2 & 1/2 & 0 & 0 \\ 1/2 & 1/2 & 0 & 0 \\ 1/4 & 1/4 & 1/4 & 1/4 \\ 0 & 0 & 0 & 1 \end{pmatrix}$$

• Question: verify that this Markov chain is reducible.
Discrete-time Markov chain state
classification
Definition 3.8 For given states i and j, with X_0 = i, we can define the r.v. τ_ij, called the first passage time to state j, as follows:

$$\tau_{ij} = \begin{cases} n & \text{if } X_{\nu} \ne j, \; 0 < \nu < n, \; X_n = j \\ \infty & \text{if } X_{\nu} \ne j \text{ for all } \nu > 0 \end{cases}$$

τ_ij is said to be the hitting time of the singleton {j}, starting from state i at time 0.
For any state i we let f_i denote the probability that, starting in state i, the process will ever reenter state i. We define

$$f_{ij}^{\,n} = P(\tau_{ij} = n \mid X_0 = i) \qquad \text{and} \qquad f_{ij} = P(\tau_{ij} < \infty \mid X_0 = i)$$
Discrete-time Markov chain state
classification
Definition 3.8 (continued)
We also have:

$$f_{ij} = \sum_{n \ge 1} f_{ij}^{\,n}, \qquad 1 - f_{ij} = P(\tau_{ij} = \infty \mid X_0 = i)$$

The elements f_ij^n can readily be computed by induction, using the following relations:

$$f_{ij}^{\,1} = p_{ij}, \qquad p_{ij}^{(n)} = \sum_{v=1}^{n-1} f_{ij}^{\,v}\, p_{jj}^{(n-v)} + f_{ij}^{\,n}, \quad n \ge 2$$
Discrete-time Markov chain state
classification
Definition 3.8 (continued)
Let:

$$m_{ij} = E(\tau_{ij} \mid X_0 = i)$$

with the possibility of an infinite mean. The value of m_ij is given by:

$$m_{ij} = \sum_{n \ge 1} n\, f_{ij}^{\,n} + \infty \cdot (1 - f_{ij})$$

(with the convention ∞ · 0 = 0). If i = j, then m_ii is called the first passage time mean or the mean recurrence time of state i.
Discrete-time Markov chain state
classification
Definition 3.9 A state i is said to be transient (recurrent) if
the renewal process associated with its successive return
times to i is transient (recurrent).
A direct consequence of this definition is that:

i is transient ⟺ f_ii < 1
i is recurrent ⟺ f_ii = 1

A recurrent state i is said to be null (positive) if m_ii = ∞ (m_ii < ∞). It can be shown that if the state space I is finite, then we can only have positive recurrent states.
This classification leads to the decomposition theorem (see
Chung (1960)).
Discrete-time Markov chain state
classification
Let T be the set of all transient states, and let C be a recurrent class.
For all j, k ∈ C,

$$f_{ij} = f_{ik}$$

Labeling this common value as f_{i,C}, the probabilities (f_{i,C}, i ∈ T) satisfy the linear system:

$$f_{i,C} = \sum_{k \in T} p_{ik}\, f_{k,C} + \sum_{k \in C} p_{ik}, \quad i \in T$$

Definition 3.10 The probability f_{i,C} (above) is called the absorption probability in class C, starting from state i.
If class C is recurrent:

$$f_{i,C} = \begin{cases} 1 & \text{if } i \in C \\ 0 & \text{if } i \text{ is recurrent and } i \notin C \end{cases}$$
Discrete-time Markov chain state
classification
Example:
• Let the Markov chain consisting of the states 0, 1, 2, 3 have the transition probability matrix

$$P = \begin{pmatrix} 0 & 0 & 1/2 & 1/2 \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{pmatrix}$$

• Question: determine which states are transient and which are recurrent.
• Solution: it is a simple matter to check that all states communicate and hence, since this is a finite chain, all states must be recurrent.
Discrete-time Markov chain state
classification
Example:
•
Consider the Markov chain having states 0,1,2,3,4 and
the transition probability matrix

$$P = \begin{pmatrix} 1/2 & 1/2 & 0 & 0 & 0 \\ 1/2 & 1/2 & 0 & 0 & 0 \\ 0 & 0 & 1/2 & 1/2 & 0 \\ 0 & 0 & 1/2 & 1/2 & 0 \\ 1/4 & 1/4 & 0 & 0 & 1/2 \end{pmatrix}$$

• Question: determine which states are transient and which are recurrent.
Appendix 2
Survival rate and Hazard
rate function
Survival rate and Hazard rate function
System life as function of component lives
• For a continuous distribution G, we define λ(t), the failure rate function of G, by:

$$\lambda(t) = \frac{g(t)}{\bar{G}(t)}, \qquad \text{where } g(t) = \frac{dG(t)}{dt} \text{ and } \bar{G}(t) = 1 - G(t)$$

• If G is the distribution of the lifetime of an item, then λ(t) represents the probability intensity that a t-year-old item will fail.
We say that G is an increasing failure rate (IFR) distribution if
λ(t) is an increasing function of t.
Similarly, we say that G is a decreasing failure rate (DFR)
distribution if λ(t) is a decreasing function of t.
Survival rate and Hazard rate function
System life as function of component lives
• Recall that the failure rate function of a distribution F(t) having density f(t) = F'(t) is defined by:

$$\lambda(t) = \frac{f(t)}{1 - F(t)}$$

• By integrating both sides of the above, we obtain:

$$\int_0^t \lambda(s)\, ds = \int_0^t \frac{f(s)}{1 - F(s)}\, ds = -\log\left(1 - F(t)\right)$$

so that the survival function is

$$\bar{F}(t) = 1 - F(t) = e^{-\Lambda(t)}, \qquad \text{where } \Lambda(t) = \int_0^t \lambda(s)\, ds$$

The function Λ(t) is called the hazard function of the distribution F.
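As a simple illustration of these relations (a standard special case, not taken from the slides): for an exponentially distributed lifetime with constant intensity λ,

$$\lambda(t) = \frac{\lambda e^{-\lambda t}}{e^{-\lambda t}} = \lambda, \qquad \Lambda(t) = \lambda t, \qquad \bar{F}(t) = e^{-\lambda t}$$

so the survival (non-default) probability over a horizon t decays exponentially at the hazard rate, which is the same role the exponential holding times and intensities play in the continuous-time (duration) approach.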