Statistica Applicata - Italian Journal of Applied Statistics Vol. 22 (3)
CHOICES ABOUT PROBABILITY DISTRIBUTIONS AND
SYSTEMIC-FINANCIAL RISK
Angiola Contini1
Institute of Economic Policy, Università Cattolica del Sacro Cuore, Milan, Italy.
1 Angiola Contini, email: [email protected]
Abstract. The paper begins by reminding readers that, until the subprime crisis of 2007, there was widespread agreement on the self-regulatory power of financial markets, the so-called market efficiency hypothesis. Assuming the consequent normal behaviour of market products, the price set by the market was considered the best index of their value, and expected risks could be rated. Thus, through portfolio insurance at various risk levels to compensate losses, it became theoretically possible to devise control charts to undo the effects of financial irregularities. The subprime crisis made it clear to investors, if it was still necessary, that financial investments require choices under uncertainty. The paper illustrates how theories of choice under uncertainty evolved into choices about probability distributions and which hypotheses underlie them. It also shows the difficulties of choosing the probability distributions and thus of assessing risk, whether individual or market risk, since each option involves views regarding many different aspects. The literature on the causes of the subprime crisis suggests that, notwithstanding impressive mathematical and statistical achievements, the undervaluation of the correlation among different assets, subjects and institutions in the financial market leaves practical and theoretical difficulties in fixing risk thresholds.
Keywords: Subprime crisis, Extreme value theory, Choice of failure limits.
1. FINANCIAL MARKETS AND THE CONFIDENCE IN MARKET SELF-REGULATION
During the last twenty years, transactions in financial markets have grown more rapidly than world GDP: in 2007 world gross product amounted to 65 trillion US dollars, while the Over The Counter (OTC) derivatives market amounted to 596 trillion US dollars, more than eight times as much (Philippon, 2008).
Accompanying this growth, until the subprime crisis, there was also widespread agreement on the self-regulatory power of financial markets, the so-called market efficiency hypothesis. Moreover, in the US the hypothesis that, the more markets grow, the more their behaviour approaches "normal behaviour", was the basis of the Commodity Futures Modernization Act of 2000, according to which the market of these products is self-regulated. Thus, once their expected risk has been rated, the price they command is the best index of their value.
This paper illustrates how choices under uncertainty have been transformed into choices about probability distributions and which hypotheses underlie them. It also illustrates the difficulties that arise when these different choices are used to assess individual or systemic risk, since each option involves a different view about the weight to be given to different aspects. Although great results have been achieved in mathematical coherence and statistical finesse, nothing so far guarantees that it is possible to measure either risk or conflicting opinions in terms of a common unit. This point is all the more relevant the greater the awareness of the links between financial markets and the real economy, and the greater the concern about the burden that unguarded financial rules, or the absence of them, impose upon economic performance.
In general terms, to choose means to decide on one among different elements. In economics the term came to mean "rational choice", so much so that Lionel Robbins defined economics as "the science of rational choice" (Robbins, 1932; Hammond, 1997). Between 1940 and 1960 the term "rational choice" was equipped with a logical and mathematical apparatus that is still in use nowadays, although not universally accepted, and it has been subsumed under the more general branch of decision theory.
In decision theory the focus is on rational behaviour, and the utmost care is taken in defining what the word rational means (Montesano, 2005). Choice is over alternatives, and certainty and uncertainty are a matter of objective or estimated knowledge of the consequences of the alternatives. To choose under uncertainty means that some elements of the set of alternatives are either not known or not available.
When dealing with choice under uncertainty, it is necessary to define both the meaning of uncertainty and the preference order. The order of alternatives may be quite relevant, so much so that another strand of rational choice theory makes it the mainstay of the preference-based theory of rational choice (Dhongde and Pattanaik, 2010). However, this approach also accepts the definition of uncertainty as a characteristic of possibility pertaining to future or unknown events.
The term possibility may represent both the degree to which evidence supports beliefs and the degree to which one believes something. The former refers to the classical meaning of probability, the ratio of the number of occurrences of an event over a large number of trials; the latter to the degree of confidence in the occurrence of an event.
Nowadays the expression "degree of confidence" is intended as the ratio between the probability that an event occurs and the sum of the probabilities for and against its occurrence; in the sense of Keynes, however, the expression should be understood as "the balance between the amounts of relevant knowledge and relevant ignorance" (Basili and Zappia, 2010). Thus, the term possibility can have a more pervasive meaning, signifying both that knowledge is incomplete and that the list of possible states or events is not exhaustive (Vercelli, 2011).
When the occurrence is the return or loss following an investment, if the probabilities of each occurrence are known, as well as the individual's preferences over them, a von Neumann-Morgenstern utility function can be defined on all possible alternatives and an "expected utility" function constructed (von Neumann and Morgenstern, 1944; Marschak, 1950). If actual probabilities are not known, either a subjective probability (Anscombe and Aumann, 1963) can be used, or the expected utility concept can be abandoned altogether and the alternatives judged by their known or presumed distribution.
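As an illustration, the following is a minimal Python sketch of the expected-utility comparison just described; the logarithmic utility function and the two lotteries are illustrative assumptions, not taken from the paper.

```python
# A minimal sketch of the expected-utility construction described above.
# The utility function and the lottery probabilities are illustrative assumptions.
import math

def utility(wealth: float) -> float:
    """A hypothetical concave (risk-averse) von Neumann-Morgenstern utility."""
    return math.log(wealth)

def expected_utility(lottery) -> float:
    """Expected utility of a lottery given as (probability, outcome) pairs."""
    return sum(p * utility(x) for p, x in lottery)

# Two alternative investments with known outcome probabilities.
safe  = [(1.0, 100.0)]                   # certain outcome
risky = [(0.5, 150.0), (0.5, 60.0)]      # more spread, similar expected value

print(expected_utility(safe), expected_utility(risky))
```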
The assumption is that if “enough” observations are collected, their
empirical distribution will approach the “true” distribution. Moreover, when
returns and losses regard assets in the financial market, the more the number of
assets grows, the more their returns and losses can be considered as random
variables.
A hypothetical portfolio can thus be constructed, made up of all the different assets existing in the market, whose mean is the mean of the returns and whose volatility is the standard deviation of the returns of the assets in the market; this portfolio constitutes the market benchmark, and the ratio between mean and standard deviation constitutes the market risk.
The distance (Markowitz, 1952) between the benchmark portfolio and any other portfolio in the market can be evaluated and each portfolio ranked; the price is set using the market risk premium (the difference between the benchmark mean and the risk-free rate of return) times the amount of risk (β) of the single portfolio (the ratio between single portfolio volatility and market volatility). This is the basis of the Capital Asset Pricing Model in its two main versions: Value at Risk and Default Probability.
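A minimal Python sketch of the pricing rule just described follows, with β computed, as in the text, as the ratio between the single portfolio's volatility and the market volatility; the simulated return series and the risk-free rate are illustrative assumptions.

```python
# A minimal sketch of the pricing rule described above.  The simulated returns
# and the risk-free rate are illustrative assumptions, not data from the paper.
import numpy as np

rng = np.random.default_rng(0)
market_returns = rng.normal(0.05, 0.15, size=1000)                      # benchmark portfolio
asset_returns = 0.8 * market_returns + rng.normal(0.0, 0.05, size=1000)
risk_free = 0.02

# Beta as defined in the text: single portfolio volatility over market volatility.
beta = asset_returns.std() / market_returns.std()
market_risk_premium = market_returns.mean() - risk_free                 # benchmark mean minus risk-free rate
expected_return = risk_free + beta * market_risk_premium

print(f"beta = {beta:.2f}, expected return = {expected_return:.3f}")
```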
2. NON NORMAL DISTRIBUTIONS OF RETURNS AND LOSSES:
DEALING WITH THRESHOLDS AND TAILS
Since, over time, both the independence and the normality assumptions about returns have been criticised (Levy, 1980), the model has been changed by considering, instead of returns themselves, the probability distribution function of returns or losses.
In the case of non-normal distributions, however, changing the measure of risk would not be enough; therefore the set of likely distributions has been enlarged to consider either the probability of events of predetermined size, such as expected loss (EL), or the probability of events whose frequency and size cannot be evaluated but which can still be considered as endowed with certain characteristics, as is the case with unexpected losses of given magnitude.
It is also possible to consider returns and losses connected or interacting with each other. They can be treated as composite variables in which the risk weight of each component is valued separately with its own probability distribution and then summed up to obtain the typical behaviour of the composite variable. Assuming a constant index of correlation between the variables, the probability is calculated that the group moves away from typical behaviour. This is the rationale for the use of a compound index to compare the amount of risk of different portfolios and to forecast the probability of given amounts of loss, as is the case in a host of financial and insurance products.
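A minimal sketch, under illustrative assumptions, of the compound construction just described: each component's risk weight is valued separately and then combined under a constant correlation index.

```python
# A minimal sketch of combining separately valued components under a constant
# correlation index.  The component volatilities and the correlation value are
# illustrative assumptions.
import numpy as np

component_sd = np.array([2.0, 3.0, 1.5])   # risk weight of each component
rho = 0.3                                  # assumed constant correlation

n = len(component_sd)
corr = np.full((n, n), rho)
np.fill_diagonal(corr, 1.0)

cov = np.outer(component_sd, component_sd) * corr
composite_sd = np.sqrt(cov.sum())          # volatility of the composite variable

print(f"composite volatility = {composite_sd:.3f}")
```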
In order to rank probability distributions according to the confidence over their characteristics, it is necessary to decide which are the most important features to consider. In some cases the most relevant characteristic is considered to be the distance from the typical (most frequent) event in a given reference interval.
If it is possible, through historical data, to state that an event (occurrence or loss) of exceptional magnitude occurs only once in a specific time interval [0, t], the Poisson distribution can be used. To use the Poisson distribution implies confidence both that the events are independent and that exceptional events are mixed with the more frequent events in the same proportion.
Assume that the time interval is divided into n sub-intervals of "small" width $\Delta t = \tau$, so that $n = t/\tau$, and that the probability of one occurrence is $p(\tau)$, with $\lim_{\tau \to 0} p(\tau)/\tau = \lambda > 0$. The expected number of occurrences, $E(N_t)$, which is estimated by the arithmetic mean of the number of observed occurrences, is equal to the probability p times the number of sub-intervals n, that is
$$E(N_t) = p(\tau) \cdot (t/\tau) \cong \lambda t.$$
Therefore, the probability of a large number k of occurrences (X), different from the expected value $E(N_t)$, diminishes the faster, the greater the distance of k from it:
$$P(X = k) = e^{-E(N_t)} \frac{[E(N_t)]^k}{k!}.$$
According to this logic, using size intervals instead of time intervals, if in a portfolio the amounts of losses are ordered by size, each size with its own average number of losses, it is possible to consider the average number of losses, $E(N_{L_t})$, in each order of magnitude. Then, supposing that losses in each size class are independent of losses in other classes, the Poisson distribution can be used to calculate the probability of finding a number of losses k different from the average:
$$P(N_{L_t} = k) = \frac{[E(N_{L_t})]^k}{k!}\, e^{-E(N_{L_t})}.$$
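A minimal sketch of the Poisson calculation above; the average of 2.0 losses in the size class is an illustrative assumption.

```python
# A minimal sketch of the Poisson probability above: the chance of observing
# k losses in a size class whose average loss count is E(N_L).
from math import exp, factorial

def poisson_pmf(k: int, expected_count: float) -> float:
    """P(N = k) for a Poisson variable with mean expected_count."""
    return exp(-expected_count) * expected_count**k / factorial(k)

expected_losses = 2.0                      # illustrative average for one size class
for k in range(6):
    print(k, round(poisson_pmf(k, expected_losses), 4))
```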
In other cases, instead of calculating the average amount in each order of magnitude, the probability distribution is constructed. Since, in distributions with fat tails, it is possible to establish a loss threshold in each order of magnitude, assets are divided into n blocks and the blocks ordered according to the amount of expected loss. In each block the maximum value of the loss, $X_{\max}$, can be changed into the ratio between its distance from the block mean, μ, and the block standard deviation, σ, forming a new distribution:
$$\left( \frac{X_{1\max} - \mu_1}{\sigma_1}, \frac{X_{2\max} - \mu_2}{\sigma_2}, \ldots, \frac{X_{n\max} - \mu_n}{\sigma_n} \right).$$
This distribution, too, has an exponential form. In this way each portfolio can
be insured against losses, either by covering the whole value of the portfolio, CDS
(Credit Default Swaps), or by covering only selected segments (tranches) of it,
CDO (Collateralised Debt Obligations) (Elizalde, 2006).
Dealers specialise by segments or tranches, dividing assets into groups (floor, equity, first floor, second, third, senior) according to the specific expected shortfall of each tranche and insuring risk tranche by tranche.
Each tranche bears only its own risk, whose value can be defined between the maximum and minimum expected shortfall for that tranche. For example, the "floor" tranche will not bear any loss as long as returns are positive or zero and, in case of loss, will bear only 3% of it. Thus, in case of loss, the insurers of that tranche will pay up to 3% of losses; insurers of the first floor (the equity tranche, from 3 to 7%) will not pay until losses exceed 3%, but will then pay from that value up to 7%. Finally, the insurers of the higher tranches (second, third and senior) will cover from 7 to 10%, from 10 to 15% and from 15 to 30%.
Each insurance bears only its own losses, disregarding lower and higher values. For example, if the portfolio insured with the tranche system loses 9% of its value, the floor tranche will bear 3% of it, the first (equity) tranche the following 4% and the second tranche the remaining 2%. The insurance premium, accordingly, will decrease going up the tranches, since the premium of the lowest tranche will be equal to the value of the asset and the lowest tranche will be the first to repay losses, while dealers of the higher tranches will pay after all the others. We notice that a form of "quality control" has thus been introduced through the insurance: it is as if, in a production process, we could automatically repair the defective items. Here the insurance guarantees that the anomalous results are compensated.
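A minimal sketch of the tranche allocation just described, using the attachment and detachment percentages quoted in the text; the function name is hypothetical.

```python
# A minimal sketch of the tranche loss allocation described above: each tranche
# absorbs only the slice of the portfolio loss that falls between its
# attachment and detachment points (percentages taken from the text).
tranches = [
    ("floor",  0.00, 0.03),
    ("equity", 0.03, 0.07),
    ("second", 0.07, 0.10),
    ("third",  0.10, 0.15),
    ("senior", 0.15, 0.30),
]

def allocate_loss(portfolio_loss: float) -> dict:
    """Split a portfolio loss (as a fraction of value) across the tranches."""
    return {name: max(0.0, min(portfolio_loss, upper) - lower)
            for name, lower, upper in tranches}

# With a 9% portfolio loss: floor bears 3%, equity 4%, second 2%.
print(allocate_loss(0.09))
```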
If n is the total number of assets composing a portfolio; $N_t$ the number of assets bearing a loss; and 100, $\mu_1, \mu_2, \ldots, \mu_n$ the percentages of loss borne by tranche $b_k$ (which takes value $b_1$ when the percentage is between 100 and $\mu_1$, $b_2$ when it is between $\mu_1$ and $\mu_2$, and so on); then the loss of the portfolio is calculated as the sum of the losses borne by the assets surviving in each position:
$$\lambda_t = (n - N_t) \sum_{k=1}^{N_t} b_k.$$
Writing $(n - N_t) = n(1 - N_t/n)$ and supposing n much greater than $N_t$, the ratio $N_t/n$ can be considered nearly 0, and therefore the portfolio loss can be taken as simply proportional to the sum of the losses. Since it is assumed that the sum is dominated by large changes, which occur with very low probability, increasing the number of assets reduces risk by more than it adds to expected loss.
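A minimal sketch of the loss measure above, together with the approximation $(n - N_t) \approx n$ when n is much greater than $N_t$; the tranche loss values $b_k$ are illustrative assumptions.

```python
# A minimal sketch of the loss intensity above, with the approximation
# (n - N_t) ~ n when the number of losing assets N_t is small relative to n.
def portfolio_loss(n_assets: int, losing_assets: int, tranche_losses) -> float:
    """lambda_t = (n - N_t) * sum of the tranche losses b_k."""
    return (n_assets - losing_assets) * sum(tranche_losses)

b = [0.03, 0.04, 0.02]          # losses borne tranche by tranche (illustrative)
exact = portfolio_loss(1000, 3, b)
approx = 1000 * sum(b)          # n >> N_t, so (n - N_t) is close to n
print(exact, approx)
```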
Another basic element of the system of insurance by tranches (Whetten and Adelson, 2005) is the assumption that, provided the portfolio is well diversified, the value of the expected shortfall of any two assets inside the same tranche is the same, independently of the ratio between other assets. This amounts to assuming that the correlation value between two losses in each segment does not depend upon the number of losses already existing among other assets, once a well-diversified portfolio has identified each grade of default probability and each obligor has been assigned to his grade. Moreover, by the efficient market hypothesis, given an even flow of information through the market, no manager would keep a not-well-diversified portfolio, and so the market self-regulates.
Under this assumption, the subject who pays for the insurance can put the insured portfolio on the credit side of his balance sheet and use it as an asset in a fund called a Special Purpose Vehicle (SPV), which can also contain other portfolios. This new enlarged portfolio is composed of tranches, which bear losses in different percentages, and has an expected loss which is still the same as the expected loss of the assets composing it, but a distribution with fatter tails.
The fund manager of the SPV can either sell the insured assets by their
different loss forbearance or move the higher segments, collected in super
tranches, on the positive side of the balance sheet, where they can become the
warrant of new insurance contracts and other financial deals. Firms dealing with
these contracts, too, can trade their shares and issue their equities, called synthetic
CDOs, in the financial market.
In order to value the risk of artificial assets, several models have been
proposed (Gordy, 2000; Embrechts et al., 2003), such as Copula models and
Hazard Rate models, but, together with the growth of the trade of risky and
insurance assets, scholars have become more concerned about the definition of
rare events (Donnelly and Embrechts, 2010), about the weight of factors different
from size of losses and about the possibility of measuring sequences not
completely defined.
3. PROBABILITY OF RARE EVENTS
The measurement of the probability of losses by the distance from the point of default does not take due account of unexpected losses. For unexpected losses it is rarely possible to measure actual occurrences, since they have very low probability (from $10^{-6}$ onward) and a magnitude that can be much greater than most observed values.
By having recourse to Extreme Value Theory (EVT) (Bensalah, 2000) it is possible to consider the probability of exceptional events through Peaks over Threshold or Hazard Rate models. In these models it is neither necessary to depend on the occurrence of other losses or defaults, nor to consider the number and size of losses previously occurred, since it is accepted that losses may be linked to each other in different ways, resulting in different probability distributions.
EVT concerns the asymptotic stability of the distribution of extreme values drawn from any distribution. Considering losses as independent, identically distributed random variables, whose distribution is not necessarily known, they can be ordered by size and grouped into a (quite large) number n of blocks, each with nearly m elements, mean μ and standard deviation $\sigma_n$. The maximum amounts of each block become the arguments of a new function: $M_n = \max(X_1, \ldots, X_n)$. The Generalized Extreme Value distribution describes the limit distribution of these maxima, suitably standardised to stabilise scale and position as n grows.
Thus the probability that a value exceeds a given threshold approaches an
asymptotic cumulative probability distribution function:
$$P\left[\frac{M_n - \mu}{\sigma_n} \le x\right] \to H_\xi(x) \quad \text{as } n \to \infty.$$
The interest lies in the spacing of rare events. Gnedenko and Kolmogorov (1954) demonstrated that this function allows predictions about the rate of occurrence of extreme events to be reduced to the consideration of the behaviour of only one parameter, representing the thickness of the tail of the exponential distribution.
This new distribution, $H_\xi(x)$, is independent both of the underlying distribution and of the number of occurrences and describes the limit distribution of the generalised extreme values:
$$H_{\xi,\mu,\sigma}(x) = \begin{cases} \exp\left\{-\left[1 + \xi\left(\dfrac{x-\mu}{\sigma}\right)\right]^{-1/\xi}\right\} & \text{if } \xi \neq 0, \\[2ex] \exp\left[-e^{-(x-\mu)/\sigma}\right] & \text{if } \xi = 0. \end{cases}$$
If it can be supposed that the values are indeed random, independently and
identically distributed, the probability distribution function may follow only three
different forms, according to the value of the parameter ξ, which models the
thickness of the tail.
The parameter ξ represents the rate of diminution of the probability of exceptional events and may be: 1) negative, in case of a sudden stop in the occurrence of exceptional events; 2) positive, in case of a slow diminution; or 3) zero, in case of an exponential diminution of the rate of increase.
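A minimal sketch of the Generalized Extreme Value cumulative distribution function written out from the formula above, covering the three regimes of the parameter ξ; the evaluation point and parameter values are illustrative assumptions.

```python
# A minimal sketch of the GEV cumulative distribution function H_{xi,mu,sigma}
# as given in the text, for the three regimes of the shape parameter xi.
import numpy as np

def gev_cdf(x: float, xi: float, mu: float = 0.0, sigma: float = 1.0) -> float:
    """H_{xi,mu,sigma}(x) as defined above."""
    z = (x - mu) / sigma
    if xi == 0.0:                       # Gumbel case: exponential tail decay
        return float(np.exp(-np.exp(-z)))
    t = 1.0 + xi * z
    if t <= 0.0:                        # outside the support for this xi
        return 0.0 if xi > 0 else 1.0
    return float(np.exp(-t ** (-1.0 / xi)))

for xi in (-0.3, 0.0, 0.3):             # short tail, exponential tail, fat tail
    print(xi, round(gev_cdf(2.0, xi), 4))
```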
In this way EVT makes it possible to separate the movements of the changes of different assets from the movements of the whole portfolio, once due care and attention is given to the composition of the portfolio (the position of each type of asset) and to the relations between the different assets composing it (Embrechts, 2000).
The majority of financial models of risk endorse the so-called "weakest link" hypothesis, the idea that the strength of a structure is a function of the strength of its weakest element. If the limit load is the sum of the contributions of the N different elements of the blocks composing a portfolio, the distribution of the loads may be assumed to be marginally Gaussian by the effect of the Central Limit Theorem. It can also be shown that, if the joint distribution is a multivariate Gaussian distribution, for very large N EVT still holds and the distribution decays approximately as $e^{-\theta N}$, with θ, the coefficient of variation $\sigma_n/\mu$, decreasing with size ($\theta \to 1/\sqrt{N}$).
4. THE IMPORTANCE OF THE AMOUNT OF CORRELATION
Unfortunately, the decrease of θ holds true only for definite values of the correlation
between variables (Blake and Lindsay, 1973).
In fact, the probability distribution function of the forbearance of a structure, composed of different parts, may gradually change due to the effects of
increasing size and correlation (Bazant and Pang, 2007). Moreover, the measure
of the correlation length has an influence as well and may change the exponential
function into a stretched exponential.
Therefore, it is questionable both that the distance of the failure threshold
increases with N and that the distribution indeed converges to one of the three
universal forms of EVT, since at large N, numerical simulations do not help,
because nothing guarantees that errors do not grow with them.
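A minimal Monte Carlo sketch, under illustrative assumptions, of the point being made: with positively correlated Gaussian contributions, the probability that the total load exceeds a threshold can be far larger than under independence, so the exponential decay cannot be taken for granted.

```python
# A minimal sketch of why the amount of correlation matters: exceedance
# probability of the total load of N elements, estimated by simulation for
# independent and for equicorrelated Gaussian contributions (all values are
# illustrative assumptions).
import numpy as np

rng = np.random.default_rng(2)
N, trials, threshold = 100, 20_000, 120.0
rho = 0.2                                    # assumed constant pairwise correlation

# Independent contributions, mean 1, sd 1.
indep_sum = rng.normal(1.0, 1.0, size=(trials, N)).sum(axis=1)

# Equicorrelated contributions built from a shared common factor.
common = rng.normal(0.0, 1.0, size=(trials, 1))
idio = rng.normal(0.0, 1.0, size=(trials, N))
corr_sum = (1.0 + np.sqrt(rho) * common + np.sqrt(1 - rho) * idio).sum(axis=1)

print("P(exceed) independent :", (indep_sum > threshold).mean())
print("P(exceed) correlated  :", (corr_sum > threshold).mean())
```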
Statisticians have addressed the problem of the changing value of θ, called
the persistence problem (Majumdar, 1999), considering the probability that, in a
large quantity of random variables, collected from different distributions (each
with zero mean and known variance), all the values are positive.
This same problem has been studied by applied mathematicians as "the one-sided barrier problem", while in the physics of solids and statistical mechanics it has been studied in order to evaluate the limit load that can be borne by specific structures or materials.
Exact asymptotic results for the failure distribution have been derived only in
some particular cases, using very detailed models of different sizes and shapes of
the failures of materials.
The conclusion is that either these distributions do not fall into any of the EV
statistics universal forms or that, when adequately transformed, they converge to
them very slowly and then only under the condition of constancy of the ratio
between the rate of change of the failure limit and the rate of change of the size of
the system to which the failure is applied (Manzato et al., 2012).
These considerations should warn against the practice of fitting samples of data to the forms of EV distributions in order to extrapolate failure limits of financial structures; more than that, they should prompt a careful watch over the making of the said structures, since, as financial markets grow both larger and more interconnected, stress influences changes in a more disorderly way.
Nothing, in other terms, guarantees that mathematical models, however theoretically sound and however carefully adapted to financial markets, hold when predicting failure limits (failure probability can change from $10^{-6}$ to $10^{-3}$ for very large structures, and the modelling of interactions would require an exceedingly large amount of calculation).
The debate about the type of utility function to be used (Meyer, 2007), the type of probability distribution, and the very meaning of the term "uncertainty" is still ongoing. The value to be given to the term "uncertainty" and the definition of risk are of practical importance when it is necessary to regulate the activity of financial intermediation.
Once risk is defined as “the maximum amount of loss that is possible to bear
upon one’s assets”, the amount and type of assets to be kept as warrant is
determined as well. The definition, however, rests upon the assumptions regarding
both the position of extreme events in the probability distribution of losses and the
position of the financial traders in the market.
The rule is that subjects in the financial market set aside a given amount
against risk of loss, an amount determined by the subjects’ assets, liabilities and
maximum expected shortfall (Bassi et al., 1998).
This rule is based upon the assumption that it is possible to calculate the risk of each portfolio; but when it is the risk of the market that is to be considered, the word changes its meaning according to a quality, the liquidity of the market, that was not considered when the market was "less risky" (Acerbi and Scandolo, 2008).
It is true that the assets/liabilities ratio may have different weight for different subjects, but when it is time to set thresholds, it is necessary to consider also the degree of correlation among the different risk segments, the order of magnitude of the insurance, and the influence that either external subjects or external occurrences may bear (correlation, concentration, contagion and context).
An accepted criterion (Artzner et al., 1999) is that, when a portfolio is a sum of other portfolios, in order to have a view of the total risk, the risk of the enlarged portfolio may be at most equal to the sum of the individual risks. This coherence criterion follows the hypothesis that, as the market grows and the number of connections of each financial trader increases, so does the strength of the system, diminishing individual risk (Gabrieli, 2011).
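A minimal sketch of the coherence (subadditivity) criterion just stated, using expected shortfall on simulated losses as the risk measure; the loss distributions are illustrative assumptions.

```python
# A minimal sketch of the subadditivity check: the risk of the combined
# portfolio should be at most the sum of the individual risks.  Expected
# shortfall on simulated lognormal losses is used as the risk measure.
import numpy as np

rng = np.random.default_rng(3)

def expected_shortfall(losses: np.ndarray, alpha: float = 0.95) -> float:
    """Mean loss beyond the alpha-quantile of the loss distribution."""
    var = np.quantile(losses, alpha)
    return losses[losses >= var].mean()

loss_a = rng.lognormal(0.0, 1.0, size=100_000)
loss_b = rng.lognormal(0.0, 1.0, size=100_000)

combined = expected_shortfall(loss_a + loss_b)
separate = expected_shortfall(loss_a) + expected_shortfall(loss_b)
print(f"ES(A+B) = {combined:.2f}  <=  ES(A) + ES(B) = {separate:.2f}")
```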
The point is precisely whether the degree of correlation of insolvencies can be considered constant. Under the market efficiency hypothesis, both returns and losses should be random; however, even if it were possible to measure and manage the risk of single institutions, this hypothesis needs reconsidering when insolvency is a threat to the whole system.
5. SYSTEMIC RISK
In the last few years the scenario has changed, because the growth of the amount of derivatives used as insurance devices, together with the growth of the default probability, has been accompanied by the diminishing reserves of the would-be financial counterparts.
However, the weight given to the probability may change according to the size
of the set of observations. Even when observers have decided time intervals and
degree of correlation among the observations, the probability measure might still
change together with the observers’ information.
Thresholds may depend upon reaching a critical value, such as a critical
shock magnitude or a critical length of time, but it may happen that a threshold
crossing be due to circumstances hitherto neglected or to changes in other points
of the system.
In this case a decision has to be taken about which element to consider as
forerunner. An example is the problem of setting debt limits for financial
institutions: the problem is whether to fix the amount of guarantees against
possible shortfalls or to measure institutions according to their stress resistance;
in other words, to consider the point at which risk becomes systemic.
Systemic risk (Acharya et al., 2010) is defined as the risk of relevant losses
happening together due to common factors, to contagion from losses of own debtors
or to liquidity shocks blocking a strategic point of the credit system.
The relevance that it is possible to assign to each of the causes depends on
the way of modelling interactions. The weight can be put on the link between real
and financial sector of the economic system (Cont and Kan, 2011).
Assuming that the financial sector's behaviour mirrors that of the real sector, it can be measured, through $\lambda_t = (n - N_t)\sum_{k=1}^{N_t} b_k$, how fast the financial sector, after a shock, returns to its stationary state. To do this, a factor $e^{Y_t}$ is joined to the loss intensity measure, linked to $Y_t$, the distance of the economic system from its trend:
$$\lambda_t = e^{Y_t} (n - N_t) \sum_{k=1}^{N_t} b_k.$$
Thus, loss correlation can change, since the weight of the factor $e^{Y_t}$ varies according to the situation of the real economic system.
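A minimal sketch of the adjusted loss intensity above; the values of $Y_t$ and of the tranche losses $b_k$ are illustrative assumptions.

```python
# A minimal sketch of the adjusted loss intensity: the same loss measure scaled
# by exp(Y_t), where Y_t is the distance of the real economy from its trend.
import math

def loss_intensity(n_assets: int, losing_assets: int, tranche_losses, y_t: float) -> float:
    """lambda_t = exp(Y_t) * (n - N_t) * sum(b_k)."""
    return math.exp(y_t) * (n_assets - losing_assets) * sum(tranche_losses)

b = [0.03, 0.04, 0.02]
for y in (-0.1, 0.0, 0.1):        # economy below trend, on trend, above trend
    print(y, round(loss_intensity(1000, 3, b, y), 2))
```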
Another way of measuring systemic risk is to use graph theory: considering links among financial firms as a measure of their importance, the weight of an element in the financial system is proportional to the number of its connections with other elements (Cont et al., 2010).
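A minimal sketch, under an illustrative interbank network, of the graph-theoretic idea just described: each institution's weight is proportional to its number of connections.

```python
# A minimal sketch of degree-based systemic importance on a hypothetical
# interbank network (the exposures below are illustrative assumptions).
exposures = {                      # directed links: lender -> borrowers
    "A": ["B", "C", "D"],
    "B": ["C"],
    "C": ["A", "D"],
    "D": [],
}

degree = {bank: 0 for bank in exposures}
for lender, borrowers in exposures.items():
    degree[lender] += len(borrowers)           # outgoing links
    for borrower in borrowers:
        degree[borrower] += 1                  # incoming links

total = sum(degree.values())
weights = {bank: d / total for bank, d in degree.items()}
print(weights)
```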
To measure systemic risk, therefore, it would be necessary to measure not only the asset-over-liability ratio at the appropriate scale, but also the structure of interrelations between financial institutions in the financial market on one side, and the links between financial and real markets in the economy on the other.
To do this would require more than a formal examination of the balance sheets of financial intermediaries; to guarantee against systemic risk, greater attention should be given to the institutions more liable to spread losses, as well as to the institutions which will defend themselves from losses in case of a general deterioration of the economic situation.
To the question of whether the most connected institutions, or the ones with the fastest connection growth, will be the institutions more liable to spread losses, the theory of graphs does not give a univocal answer (Haldane, 2009). Moreover, there are different positions about risk coverage as well: too strict edge requirements could make financial institutions overcautious in granting credit but, on the other side, leaving financial institutions discretionary power under general guidelines has not proved sufficient (Thurner, 2011).
6. RECONSIDERING THE NATURE OF UNCERTAINTY
When the size, time and consequences of events are uncertain, scholars take two strands of positions: one is grounded in the opinion that collecting and appropriately organising the available information allows forming an estimate of the behaviour of their probability distribution.
Another strand, instead, stresses that uncertainty cannot be reduced to probability, whether objective or subjective, because when uncertainty is taken as "non-complete knowledge" the term may span any meaning from total ignorance to equal probability.
In the first case, if there are conjectures, guesses or feelings about the unknown events, it is possible to build an opinion function, to order, inside a belief function, the agents' opinions about the possible value of a variable, and afterwards to compare the different belief functions with an ideal opinion function (Minh Ha-Duong, 2002).
It is possible to construct a possibility distribution, because Dempster-Shafer theory allows combining data sets taken at different times by different subjects, each with its own weight of probability, plausibility or belief.
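A minimal sketch of Dempster's rule of combination, which underlies the Dempster-Shafer approach mentioned above; the two subjects' mass functions over a {low, high} risk frame are illustrative assumptions.

```python
# A minimal sketch of Dempster's rule of combination for two mass functions
# defined on frozensets of hypotheses (the masses below are illustrative).
from itertools import product

def combine(m1: dict, m2: dict) -> dict:
    """Combine two mass functions, renormalising away conflicting evidence."""
    combined, conflict = {}, 0.0
    for (a, pa), (b, pb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + pa * pb
        else:
            conflict += pa * pb            # mass assigned to contradictory evidence
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

low, high, either = frozenset({"low"}), frozenset({"high"}), frozenset({"low", "high"})
m_subject1 = {low: 0.6, either: 0.4}       # first subject's beliefs
m_subject2 = {high: 0.5, either: 0.5}      # second subject's beliefs
print(combine(m_subject1, m_subject2))
```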
Thus, ultimately, uncertainty comes to regard the measure of credibility to
be given to an opinion function or to the distance from it; however, this solution
does not take proper care of conflicting opinions, since the suggested way of
solving conflicts is to measure the distance of different opinions from the best or
worst measure previously established (Sentz, 2002), but nothing is said about how
to judge their difference.
The other strand of thought argues that, even if it is possible to measure the
relevance quotient, representing the relevance of yet unknown over known events,
this is valid only insofar as “they belong to a single set, of magnitude measurable
in term of a common unit” (Keynes, 1921). In fact, following Keynes’ argument,
it would be more appropriate to discuss about probability, possibility or relevance
quotient of propositions and not about events, consequences or happenings.
As procedures go on including corrections and generalisations, the argument
still shifts from the relevance of the results obtained to the appropriateness of the
choices made about parameters, assumptions and frames. Some scholars study
how to detect changes in the probabilistic properties of a process (Kolmogorov et
al., 1990), while others (Binmore, 2006) are in doubt about the possibility of
reducing uncertainty under the realm of mathematical coherence.
Acknowledgements
I wish to thank professor Angelo Zanella, who encouraged me to present this paper
and my two referees for many important and useful suggestions. Besides, I heartily
thank my colleagues and friends Carlo Beretta, Vito Moramarco, Carsten Nielsen,
Marie-Louise Schmitz, Marta Spreafico, Luciano Venturini and Paola Villa, who
read, commented, suggested more appropriate terms and in general helped greatly
to improve the form of the paper. Franca Fassini solved many of my difficulties
concerning computer use. All this notwithstanding, errors and omissions are my own only.
REFERENCES
Acerbi, C. and Scandolo, G. (2008). Liquidity Risk Theory and Coherent Measures of Risk. http://
ssrn.com/abstract=1048322.
Acharya, V., Pedersen, L.H., Philippon, T. and Richardson, M. (2010). Measuring Systemic Risk.
Working Paper 10-02. Stern School of Business, New York University.
www.clevelandfed.org/research/workpaper/2010/wp1002.pdf.
Anscombe, F. and Aumann, R. (1963). A definition of subjective probability. The Annals of Mathematical Statistics, (34): 199-205.
Artzner, P., Delbaen, F., Eber, J.M. and Heath, D. (1999). Coherent measures of risk. Mathematical
Finance, (9): 203-228.
Basili, M. and Zappia, C. (2010). Ambiguity and uncertainty in Ellsberg and Shackle. Cambridge
Journal of Economics, (34): 449-474.
Bassi, F., Embrechts, P. and Kafetzaki, M. (1998). Risk management and quantile estimation. In:
Adler, R., Feldman, R. and Taqqu, M. (Eds.), A practical guide to heavy tails. Birkhauser,
Boston: 111-130.
Bazant, Z. and Pang, S.D. (2007). Activation energy based EV statistics and size effect. Journal of
the Mechanics and Physics of Solids, (55): 91-131.
Bensalah, Y. (2000). Steps in Applying EVT to Finance. Bank of Canada Working Paper n. 2000-20.
www.bankofcanada.ca/wp-content/uploads/2010/01/wp00-20.pdf.
Binmore, K. (2006). Rational decisions in large worlds. Paper presented to the ADRES Conference.
Marseille. http://else.econ.ucl.ac.uk/papers/uploaded/266.pdf.
Blake, I. and Lindsay, W. (1973). Level-crossing problems for random processes. IEEE Transactions
on Information Theory, (19): 295-315.
Cont, R., Moussa, A. and Minca, A. (2010). Measuring default contagion and systemic risk: Insights
from network models. Paper presented to Swissquote Conference 28-29 October. Ecole
Polytechnique Fédérale de Lausanne.
http://sfi.epfl.ch/files/content/sites/sfi/files/shared/Swissquote
%20Conference%202010/Cont_talk.pdf.
Cont, R. and Kan, Y.H. (2011). Dynamic hedging of portfolio credit derivatives. SIAM Journal of
Financial Mathematics, (2): 112-140.
Dhongde, S. and Pattanaik, P. (2010). Preference, choice, and rationality: Amartya Sen’s critique of
the theory of rational choice in economics. In: Morris, C. (Ed.), Amartya Sen. Cambridge
University Press, New York: 13-39.
Donnelly, C. and Embrechts, P. (2010). The devil is in the tail. ASTIN Bulletin, (40): 1-33.
Elizalde, A. (2006). Credit risk models. IV: Understanding and pricing CDOs. CEMFI Working
Paper No. 0608. ftp://ftp.cemfi.es/wp/06/0608.pdf.
Embrechts, P. (2000). Extreme value theory: Potential and limitations as an integrated risk management tool. Derivatives Use, Trading & Regulation, (6), 449-456.
Embrechts, P., Frey, R. and McNeil, A. (2003). Credit Risk Models: An Overview. ETH. Zurich.
Gabrieli, S. (2011). The Microstructure of the Money Market before and after the Financial Crisis:
A Network Perspective. CEIS Tor Vergata. Research Paper Series n. 181.
Gnedenko, B. and Kolmogorov, A. (1954). Limit Distributions for Sums of Independent Random Variables. Addison-Wesley, London.
Gordy, M. (2000). A comparative anatomy of credit risk models. Journal of Banking and Finance,
(24): 119-149.
Haldane, A. (2009). Rethinking the Financial Network. Federal Reserve Bank of Chicago 12th Annual
International Banking Conference.
Hammond, P. (1997). Rationality in economics. Rivista Internazionale di Scienze Sociali, (105): 247-288.
Keynes, J.M. (1921). A Treatise on Probability. Macmillan, London; reprinted in Keynes, J.M. (1983), The Collected Writings of J. M. Keynes, vol. VIII, Macmillan, London.
Kolmogorov, A.N., Prokhorov, Y.V. and Shiryaev, A.N. (1990). Probabilistic-statistical methods of
detecting spontaneously occurring effects. Proceedings of the Steklov Institute of Mathematics,
1-21.
Levy, H. (1980). The CAPM and the investment horizon. Journal of Portfolio Management, (7): 32-40.
Majumdar, S. (1999). Persistence in Non-equilibrium Systems. http://arxiv.org/pdf/cond-mat/
9907407.pdf.
Manzato, C., Shekhawat, A., Nukala, P., Alava, M., Sethna, J. and Zapperi, S. (2012). Fracture strength
of disordered media: universality, interactions and tail asymptotics. Physical Review Letters,
(108), 065504.
Markowitz, H. (1952). Portfolio selection. Journal of Finance, (7): 77-91.
Marschak, J. (1950). Rational behavior, uncertain prospects and measurable utility. Econometrica,
(18): 111-141.
Meyer, J. (2007). Representing risk preferences as expected utility-based decision models. SCC-76
Meeting. March 15-17. Gulf Shores. Alabama.
Minh Ha-Duong (2002). Uncertainty theory and complex systems scenarios analysis. CASOS 2002.
June 21-23. Pittsburgh.
Montesano, A. (2005). La nozione di razionalità in economia. Rivista italiana degli economisti, (10):
23-42.
Philippon, T. (2008). The evolution of the financial industry. Mimeo, New York University.
Robbins, L. (1932). Essay on the Nature and Significance of Economic Science. Macmillan & Co,
London.
Sentz, K. (2002). Combination of Evidence in Dempster-Shafer Theory.
http://www.sandia.gov/epistemic/Reports/SAND2002-0835.pdf.
Thurner, S. (2011). Systemic financial risk: agent based models to understand the leverage cycle on
national scales and its consequences IFP/WKP/FGS(2011)1.
http://www.futurict.eu/sites/default/files/Report%20commissioned%20by%20the
%20OECD%20on%20Systemic%20Risk,%20by%20Stefan%20Thurner.pdf.
Vercelli, A. (2011). Weight of argument and economic decisions. In: Marzetti, S. and Scazzieri, R.
(Eds.), Fundamental Uncertainty. Palgrave Macmillan, Basingstoke.
Von Neumann, J. and Morgenstern, O. (1944). Theory of Games and Economic Behaviour. Princeton
University Press, Princeton.
Whetten, M. and Adelson, M. (2005). CDOs-squared demystified. Nomura Securities International,
Inc. http://www.securitization.net/pdf/Nomura/CDO-Squared_4Feb05.pdf.