Discussion Paper 2010/08, Amsterdam School of Economics, Department of Quantitative Economics, Roetersstraat 11, 1018 WB Amsterdam, The Netherlands (www.feb.uva.nl/ke/UvA-Econometrics)

Saddlepoint Approximation of Expected Shortfall for Transformed Means

Simon A. Broda^{a,*} and Marc S. Paolella^{b,c}

^a Department of Quantitative Economics, University of Amsterdam, The Netherlands
^b Swiss Banking Institute, University of Zurich, Switzerland
^c Swiss Finance Institute

December 2010

Abstract

Expected shortfall, as a coherent risk measure, has received a substantial amount of attention in the literature recently. For many distributions of practical interest, however, it cannot be obtained in explicit form, and numerical techniques must be employed. The present manuscript derives a saddlepoint approximation for the expected shortfall associated with certain random variables that permit a stochastic representation in terms of some underlying random variables possessing a moment generating function. The new approximation can be evaluated quickly and reliably, and provides excellent accuracy. The doubly noncentral t distribution is considered as an example.

Key Words: Expected Shortfall, Noncentral t Distribution, Saddlepoint Approximation

* Address correspondence to Simon Broda, Department of Quantitative Economics, 1018 WB Amsterdam, The Netherlands, or email [email protected]. Part of the research of M. Paolella has been carried out within the National Centre of Competence in Research "Financial Valuation and Risk Management" (NCCR FINRISK), which is a research program supported by the Swiss National Science Foundation.

1 Introduction

The question of how to adequately measure risk is central to risk management, and has seen much controversy lately: beginning with the seminal paper of Artzner, Delbaen, Eber, and Heath (1999), an enormous body of literature has accumulated which criticizes the most widely used risk measure, Value at Risk (VaR), for its failure to be coherent. In particular, VaR is not subadditive, thus violating one of the four axioms defining a coherent risk measure. The expected shortfall, as a coherent alternative, appears to be emerging as a new standard.

Suppose the return on a financial position is represented by a continuous random variable X with density f_X(x). Then the expected shortfall (ES) is defined, for a given confidence level q ∈ (0, 1), as

\[
\mathrm{ES}^{(q)}(X) \equiv -\mathrm{E}\left[X \mid X \le x_q\right] = -\frac{1}{q}\int_{-\infty}^{x_q} x f_X(x)\,dx, \tag{1}
\]

where x_q is the corresponding VaR, i.e., the 100q% quantile of X; see Acerbi and Tasche (2002a,b). Often q is set to 1%. Note that here, we have defined the expected shortfall in terms of relative loss, as is common.

For many distributions of interest, (1) must be evaluated numerically. This time-consuming task potentially precludes the use of certain distributions in computationally demanding tasks such as mean-ES portfolio allocation, so that a reliable approximation is desirable. The saddlepoint approximation, introduced next, is a natural candidate.
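As a point of reference for the approximations developed below, the following minimal sketch evaluates definition (1) directly by numerical quadrature; this is the brute-force route that the saddlepoint machinery is designed to avoid. The standard normal serves purely as an assumed test case, since its expected shortfall has the closed form φ(z_q)/q; the function names are illustrative and the code is not part of the original analysis.

```python
# Brute-force evaluation of ES(q) = -(1/q) * int_{-inf}^{x_q} x f(x) dx, cf. (1).
import numpy as np
from scipy import stats
from scipy.integrate import quad

def es_by_quadrature(dist, q):
    """Expected shortfall at level q for a frozen scipy.stats distribution."""
    x_q = dist.ppf(q)                                  # the 100q% quantile (VaR)
    integral, _ = quad(lambda x: x * dist.pdf(x), -np.inf, x_q)
    return -integral / q

q = 0.01
es_num   = es_by_quadrature(stats.norm(), q)
es_exact = stats.norm.pdf(stats.norm.ppf(q)) / q       # closed form for N(0,1)
print(es_num, es_exact)                                # the two should agree closely
```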
Saddlepoint approximations for densities result from expanding the Fourier-Mellin integral of the moment generating function (hereafter mgf) in an asymptotic series. Since their introduction by Daniels (1954), they have been found to deliver excellent accuracy in countless applications; see Butler (2007), Broda and Paolella (2011), and the references therein. The cumulative distribution function could be approximated by numerically integrating the approximate density; however, Lugannani and Rice (1980) derived a direct approximation to the distribution function which circumvents this need. Both Daniels's and Lugannani and Rice's approximations require that the random variable to which the approximation is to be applied possess an mgf. Daniels and Young (1991) have generalized these results to marginal distributions of nonlinear functions of bivariate random vectors; see also Jing and Robinson (1994). Concerning the expected shortfall as defined in (1) above, the result of Martin (2006) can be used whenever the moment generating function of X exists. For certain problems, however, X does not possess an mgf, but may permit a stochastic representation in terms of random variables that do. The present paper derives a saddlepoint approximation that is applicable in this setting.

At first glance, it might perhaps appear as if the most natural route to such a result were via a routine application of asymptotic expansions. This is certainly possible (if a bit tedious), and would lead to a result which generalizes that of Martin (2006) in much the same way Daniels and Young's result generalizes that of Lugannani and Rice. However, careful scrutiny of the results available in the univariate setting reveals that an alternative strategy may lead to an approximation that is both more accurate and simpler to compute. This is the approach taken here.

The plan is as follows. Section 2 reviews some useful results from the theory of saddlepoint approximations. Section 3 demonstrates an alternative method of proof for some known saddlepoint approximations to expected shortfall, and motivates the new approximation given in Section 4. Section 5 applies the approximation to the noncentral t distribution and concludes.

2 Saddlepoint Approximations for Densities and Tail Probabilities

Consider the mean of n independent and identically distributed random variables, $\bar X = n^{-1}\sum_{i=1}^{n} X_i$, and suppose that each X_i possesses a density, and that its cumulant generating function K_X(t) converges in a nonvanishing interval containing the origin. The cumulant generating function is the natural logarithm of the mgf, which is, in turn, defined as $M_X(t) \equiv \mathrm{E}\,e^{tX}$ whenever the expectation exists. It generates the moments of a distribution, in the sense that its jth derivative evaluated at zero equals the jth moment of X. The usefulness of the mgf in the context of risk management derives from the fact that the mgf corresponding to the convolution of two distributions (viz., the mgf of a sum of two independent random variables) is given by the product of the individual mgfs. Hence, the mgf possesses a tractable form in many cases in which the distribution function does not.

If the distribution of X̄ is absolutely continuous, as we shall assume throughout, then its density f_n(x̄) can be recovered from the mgf via the Fourier-Mellin integral

\[
f_n(\bar x) = \frac{n}{2\pi i}\int_{-i\infty}^{i\infty} e^{n\left(K_X(t)-t\bar x\right)}\,dt. \tag{2}
\]

Similar expressions exist for the distribution function and the expected shortfall. Typically, the integral in (2) permits no analytical solution, and numerical quadrature must be used. An alternative is to approximate the integral by means of a truncated asymptotic series: Daniels (1954) derived the asymptotic expansion

\[
f_n(\bar x) = g_n(\bar x)\left[\sum_{k=0}^{m}\frac{a_k}{n^k} + O\!\left(n^{-(m+1)}\right)\right], \tag{3}
\]

where

\[
g_n(\bar x) = \sqrt{\frac{n}{2\pi K_X''(\hat t\,)}}\; e^{-n\hat w^2/2},
\qquad
\hat w = \operatorname{sgn}(\hat t\,)\sqrt{2\hat t\bar x - 2K_X(\hat t\,)},
\]

and the saddlepoint $\hat t = \hat t(\bar x)$ solves $\bar x = K_X'(\hat t\,)$. In most applications, this equation has to be solved numerically for each x̄ at which the density is to be approximated.
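To illustrate the quantities entering (3), the following minimal sketch computes the leading-order saddlepoint density g_n(x̄) for the mean of n unit exponentials, an assumed test case in which the saddlepoint equation has the explicit solution t̂ = 1 − 1/x̄ and the exact (gamma) density is available for comparison; the example and all names in it are illustrative only.

```python
# Leading-order saddlepoint density g_n(x) from (3) for the mean of n Exp(1) variables.
import numpy as np
from scipy import stats

def K(t):   return -np.log1p(-t)          # cgf of Exp(1): K_X(t) = -log(1 - t)
def K2(t):  return 1.0 / (1.0 - t)**2     # second derivative K''_X(t)

def spa_density(x, n):
    t_hat = 1.0 - 1.0 / x                             # solves x = K'_X(t)
    w_hat = np.sign(t_hat) * np.sqrt(2 * (t_hat * x - K(t_hat)))
    return np.sqrt(n / (2 * np.pi * K2(t_hat))) * np.exp(-n * w_hat**2 / 2)

n, x = 5, 1.4
print(spa_density(x, n))                              # saddlepoint approximation
print(stats.gamma(a=n, scale=1 / n).pdf(x))           # exact density of the mean
```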
The first two coefficients in the expansion (3) are

\[
a_0 = 1
\qquad\text{and}\qquad
a_1 = \frac{1}{8}\hat\lambda_4 - \frac{5}{24}\hat\lambda_3^2,
\]

where $\hat\lambda_j = K_X^{(j)}(\hat t\,)/K_X''(\hat t\,)^{j/2}$, and the truncated series is referred to as the saddlepoint approximation to f_n.

A uniform asymptotic expansion for the distribution function of X̄ has been derived in Lugannani and Rice (1980). It is given by

\[
F_n(\bar x) = \Phi\!\left(\hat w n^{1/2}\right)
+ \sqrt{\frac{1}{2\pi n}}\; e^{-n\hat w^2/2}\left[\sum_{k=0}^{m}\frac{b_k}{n^k} + O\!\left(n^{-(m+1)}\right)\right], \tag{4}
\]

where Φ is the cdf of the standard normal distribution. The first two coefficients in the expansion (4) are

\[
b_0 = \frac{1}{\hat w} - \frac{1}{\hat u}
\qquad\text{and}\qquad
b_1 = \frac{1}{\hat u^3} - \frac{1}{\hat w^3} + \frac{\hat\lambda_3}{2\hat u^2} - \frac{a_1}{\hat u},
\]

where $\hat u = \hat t\sqrt{K_X''(\hat t\,)}$ and a_1 is as in (3).

Lugannani and Rice derived expansion (4) using complex variable methods. It can, however, also be obtained from (3) by means of the following result of Temme (1982), which we shall use frequently.

Theorem 1 (Temme, 1982). Suppose Ψ_n(ζ) is analytic in a strip containing the real axis in its interior and such that, for some fixed λ_k and ω,

\[
\frac{d^k}{d\zeta^k}\Psi_n(\zeta) = O\!\left(\zeta^{\lambda_k} e^{\omega\zeta^2}\right)
\quad\text{as } \operatorname{Re}\zeta \to \infty.
\]

If Ψ_n(ζ) permits the asymptotic expansion

\[
\Psi_n(\zeta) = \sum_{k=0}^{m}\frac{\psi_k(\zeta)}{n^k} + O\!\left(\Psi_n(\zeta)\,n^{-(m+1)}\right),
\]

where the ψ_k(ζ) are analytic in ζ and do not depend on n, then

\[
H_n(\eta) = \sqrt{\frac{n}{2\pi}}\int_{-\infty}^{\eta} e^{-n\zeta^2/2}\,\Psi_n(\zeta)\,d\zeta
= \Phi\!\left(\eta n^{1/2}\right)\left[\sum_{k=0}^{m}\frac{A_k}{n^k} + O\!\left(n^{-(m+1)}\right)\right]
+ \sqrt{\frac{1}{2\pi n}}\; e^{-n\eta^2/2}\left[\sum_{k=0}^{m}\frac{B_k(\eta)}{n^k} + O\!\left(n^{-(m+1)}\right)\right],
\]

where the A_k are the coefficients in the asymptotic expansion $H_n(\infty) = \sum_{k=0}^{\infty} A_k/n^k$, $B_{-1}(\eta) = 0$, and

\[
\eta B_k(\eta) = A_k - \psi_k(\eta) + \frac{d}{d\eta}B_{k-1}(\eta),
\qquad k \in \{0, 1, \dots\}.
\]

3 Saddlepoint Approximations for Expected Shortfall

The first step to approximating the expected shortfall of X̄ is to find an approximation to its quantiles x̄_q. This can be achieved by truncating the series for F_n(x̄) after k terms, yielding the approximation $\hat F_n^k(\bar x)$, say, and numerically solving

\[
\hat F_n^k\!\left(\hat{\bar x}_q^k\right) = q, \qquad q \in (0, 1). \tag{5}
\]

It then remains to find an approximation to integrals of the form

\[
I_n(c) = \int_{-\infty}^{c} \bar x f_n(\bar x)\,d\bar x. \tag{6}
\]

Martin (2006) shows that

\[
I_n(c) = \Phi\!\left(\hat w_c n^{1/2}\right)\mu
+ \sqrt{\frac{1}{2\pi n}}\; e^{-n\hat w_c^2/2}\left[\frac{\mu}{\hat w_c} - \frac{c}{\hat u_c} + O\!\left(n^{-1}\right)\right], \tag{7}
\]

where $\mu = K_X'(0)$, and hatted quantities with a subscript c correspond to those in (3) and (4), but evaluated at c rather than x̄. Martin derived his approximation using the same technique employed by Lugannani and Rice in obtaining their expansion of the distribution function. The same result is obtained by applying Theorem 1 to the integral in (6) and retaining only the leading term. Incorporating the second term, which is derived in Appendix A, one has

\[
I_n(c) = \Phi\!\left(\hat w_c n^{1/2}\right)\mu
+ \sqrt{\frac{1}{2\pi n}}\; e^{-n\hat w_c^2/2}\left[\frac{\mu}{\hat w_c} - \frac{c}{\hat u_c}
+ \frac{1}{n}\left(\frac{c}{\hat u_c^3} - \frac{\mu}{\hat w_c^3} + \frac{c\hat\lambda_{3,c}}{2\hat u_c^2} - \frac{c a_1}{\hat u_c} - \frac{1}{\hat t_c\hat u_c}\right)
+ O\!\left(n^{-2}\right)\right], \tag{8}
\]

where a_1 is as in (3).
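For a generic cumulant generating function supplied as callables, the first-order approximation (7) takes only a few lines of code; the sketch below finds the saddlepoint numerically, as described above, and uses the exponential mean once more as an assumed test case. Function names and the bracketing interval for the root search are illustrative choices.

```python
# Martin's first-order approximation (7) to I_n(c) for a generic cgf.
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

def martin_tail_integral(c, n, K, K1, K2, t_lo, t_hi):
    """First-order approximation (7) to I_n(c) = int_{-inf}^c x f_n(x) dx."""
    mu    = K1(0.0)                                    # mean, K'_X(0)
    t_hat = brentq(lambda t: K1(t) - c, t_lo, t_hi)    # saddlepoint for x̄ = c
    w_hat = np.sign(t_hat) * np.sqrt(2 * (t_hat * c - K(t_hat)))
    u_hat = t_hat * np.sqrt(K2(t_hat))
    return (norm.cdf(w_hat * np.sqrt(n)) * mu
            + np.sqrt(1 / (2 * np.pi * n)) * np.exp(-n * w_hat**2 / 2)
              * (mu / w_hat - c / u_hat))

# Example: mean of n unit exponentials, K_X(t) = -log(1 - t), for t < 1.
K, K1, K2 = (lambda t: -np.log1p(-t),
             lambda t: 1 / (1 - t),
             lambda t: 1 / (1 - t)**2)
print(martin_tail_integral(c=0.7, n=5, K=K, K1=K1, K2=K2, t_lo=-50.0, t_hi=0.99))
```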
Butler and Wood (2004), via a completely different method of proof, obtained an approximation for the mgf, and its logarithmic derivatives, of the truncated random variable ${}_a\bar X_b$ with density $f_n(\bar x)\,\mathbb{I}_{(a,b)}(\bar x)/\left(F_n(b) - F_n(a)\right)$, where, as before, f_n and F_n are the density and distribution function of X̄, respectively. Setting a = −∞, b = c, and evaluating their approximation for the logarithmic derivative at zero produces the following approximation:

\[
I_n(c) = \Phi\!\left(\hat w_c n^{1/2}\right)\mu
+ \sqrt{\frac{1}{2\pi n}}\; e^{-n\hat w_c^2/2}\left[\frac{\mu}{\hat w_c} - \frac{c}{\hat u_c}
+ \frac{1}{n}\left(\frac{c}{\hat w_c^3} - \frac{\mu}{\hat w_c^3} - \frac{1}{\hat t_c\hat u_c}\right)
+ O\!\left(n^{-1}\right)\right]. \tag{9}
\]

We illustrate the relative merits of approximations (7), (8) and (9) by considering one copy (i.e., n = 1) of a Normal Inverse Gaussian (NIG) random variable with parameters (α, β, δ, µ) = (1, −1/2, 2, 0). The NIG belongs to the family of Generalized Hyperbolic (GHyp) distributions. These, and the NIG in particular, have been very successfully applied to financial returns data; see, e.g., Eberlein and Keller (1995), Barndorff-Nielsen (1997) and Broda and Paolella (2009). As shown in the latter paper, the saddlepoint approximation for the NIG is explicit, and the relevant quantities are

\[
\hat t_c = \frac{z\alpha}{\bar y} - \beta, \qquad z = \frac{c-\mu}{\delta}, \qquad \bar y = \sqrt{1+z^2}, \qquad
K_X''(\hat t_c) = \frac{\bar y^3\delta}{\alpha},
\]
\[
\hat w_c = \operatorname{sgn}(\hat t_c)\left[2\delta\!\left(\bar y\alpha - z\beta - (\alpha^2-\beta^2)^{1/2}\right)\right]^{1/2},
\qquad
\hat\lambda_{3,c} = 3z(\bar y\alpha\delta)^{-1/2},
\qquad
\hat\lambda_{4,c} = 3(1+5z^2)(\bar y\alpha\delta)^{-1}.
\]

The results are shown in Figure 1. As expected, the second-order approximation (8) clearly dominates the first-order approximation (7), with under 10% error across the entire support. It is also observed that, despite being formally accurate to the same order, approximation (9) cannot compete even with (7).

[Figure 1: Approximations to I_n(c) (left panel) and relative error in percent (right panel), comparing Martin's approximation, the second-order SPA, the Butler and Wood approximation, and the exact values.]

This picture changes dramatically if, instead of the tail integral I_n(c), one considers its normalized version, viz., the expected shortfall: the approximation is

\[
\mathrm{ES}^{(q)}(X) \sim -\frac{I_n^{\ell}\!\left(\hat{\bar x}_q^k\right)}{\hat F_n^k\!\left(\hat{\bar x}_q^k\right)},
\]

where ℓ = 1 refers to approximation (7), ℓ = 2 to approximation (8), and ℓ = 3 to approximation (9), and $\hat{\bar x}_q^k$ is determined according to (5) with k = 1 if ℓ ∈ {1, 3} and k = 2 if ℓ = 2. In other words, the order of the approximation to the distribution function used in determining the quantile is chosen to agree with that of the approximation to I_n. The approximations are illustrated in Figure 2. Despite the fact that (9) is only an expansion to order n^{−3/2}, the resulting approximation to the expected shortfall is comparable to that based on (8), which is $O(n^{-5/2})$.

[Figure 2: Approximations (left panel) and relative error in percent (right panel) for the expected shortfall at level q, for the same four methods.]

The reason for this is the following. In order to benefit from error cancellation, one would ideally approximate the expected shortfall by

\[
\mathrm{ES}^{(q)}(X) \sim -\int_{-\infty}^{\hat{\bar x}_q^k} \bar x \hat f_n^{\ell}(\bar x)\,d\bar x
\left[\int_{-\infty}^{\hat{\bar x}_q^k} \hat f_n^{\ell}(\bar x)\,d\bar x\right]^{-1},
\]

that is, use the same approximation $\hat f_n^{\ell}(\bar x)$ in both the numerator and denominator. If the first-order approximation to the distribution function, $\hat F_n^1(\hat{\bar x}_q^k)$, is used as the denominator, this amounts to approximating the numerator integral by applying Theorem 1 to

\[
\int_{-\infty}^{\hat{\bar x}_q^k} \bar x\,\frac{d}{d\bar x}\hat F_n^1(\bar x)\,d\bar x.
\]

Now, differentiating the distribution function approximation yields

\[
\frac{d}{d\bar x}\hat F_n^1(\bar x) = g_n(\bar x)\left(1 + \frac{c_1}{n}\right),
\qquad\text{where}\qquad
c_1 = \frac{1}{\hat u^2} + \frac{\hat\lambda_3}{2\hat u} - \frac{\hat u}{\hat w^3}.
\]

A simple way of seeing that this approach leads to approximation (9) is to replace a_1 with $c_1 + O(n^{-1})$ in the second-order expansion (8).
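The following sketch ties the pieces of this section together for the NIG example: it evaluates the first-order distribution function approximation, solves (5) for the quantile, and forms the expected shortfall ratio based on (9) with n = 1, using the explicit NIG quantities given above. The bracketing interval for the root search and all names are our own illustrative choices, and the NIG mean is taken as $K_X'(0) = \mu + \delta\beta/\sqrt{\alpha^2-\beta^2}$.

```python
# ES approximation for the NIG(1, -1/2, 2, 0) example, based on (9) with n = 1.
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

alpha, beta, delta, mu = 1.0, -0.5, 2.0, 0.0
gam    = np.sqrt(alpha**2 - beta**2)
mean_X = mu + delta * beta / gam                   # K'_X(0)

def spa_quantities(c):
    z    = (c - mu) / delta
    ybar = np.sqrt(1 + z**2)
    t_hat = z * alpha / ybar - beta                # explicit saddlepoint
    K2    = ybar**3 * delta / alpha                # K''_X(t_hat)
    w_hat = np.sign(t_hat) * np.sqrt(2 * delta * (ybar * alpha - z * beta - gam))
    u_hat = t_hat * np.sqrt(K2)
    return t_hat, w_hat, u_hat

def F1(c):                                         # first-order cdf approximation (4)
    _, w, u = spa_quantities(c)
    return norm.cdf(w) + norm.pdf(w) * (1 / w - 1 / u)

def I3(c):                                         # tail integral via (9), n = 1
    t, w, u = spa_quantities(c)
    return (norm.cdf(w) * mean_X
            + norm.pdf(w) * (mean_X / w - c / u
                             + c / w**3 - mean_X / w**3 - 1 / (t * u)))

q   = 0.01
x_q = brentq(lambda c: F1(c) - q, -40.0, mean_X - 0.5)   # quantile from (5)
print(x_q, -I3(x_q) / F1(x_q))                           # VaR and ES(q) approximation
```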
4 Expected Shortfall for Transformed Means

The approximations given in the foregoing section are limited to random variables which possess an mgf. This section derives an approximation for the case where the random variable of interest does not possess an mgf, but can be represented as a smooth function of two underlying random variables that do. The derivation is motivated by the following observation: as demonstrated above for the NIG distribution, the expected shortfall approximation based on (9) provides similar accuracy to that based on (8), despite the fact that the former only involves quantities which are needed anyway for the saddlepoint approximation to the distribution function (4). Note that prior to computing the expected shortfall, one needs to find the relevant quantile of the loss distribution, and thus evaluate the approximate distribution function. Therefore, computing the expected shortfall via approximation (9), unlike using (8), entails no extra effort.

To fix ideas, let X = (X_1, X_2) be a bivariate random vector having a density and a joint cumulant generating function, and denote the latter by K_X(t). Denote by K_X'(t) and K_X''(t) its gradient and Hessian, respectively, and by X̄ = (X̄_1, X̄_2) the mean of X from a sample of n observations. Consider a smooth bijection g mapping the support of Y onto the support of X̄, such that X̄ = g(Y) = (g_1(Y), g_2(Y))', and denote its inverse by h, so that Y = (Y_1, Y_2) = h(X̄) = (h_1(X̄), h_2(X̄))'. We wish to obtain an approximation for the expected shortfall of Y_1 which is analogous to (9); that is, we will approximate

\[
I_n(c) = \int_{-\infty}^{c} y_1 f_n(y_1)\,dy_1 \tag{10}
\]

by replacing f_n(y_1) with the first derivative of the appropriate distribution function approximation and then applying Theorem 1.

Our starting point is the following result from Daniels and Young (1991). Let $\nabla_{y_i}g(y) = (\partial g_1/\partial y_i,\, \partial g_2/\partial y_i)'$, i ∈ {1, 2}, and let $J_g(y) = \partial g/\partial y$ denote the Jacobian of g. The density and distribution function of Y_1 are

\[
f_n(y_1) = \sqrt{\frac{n}{2\pi\tilde K}}\; e^{-n\tilde w^2/2}\left[1 + O\!\left(n^{-1}\right)\right]
\]

and

\[
F_n(y_1) = \Phi\!\left(\tilde w n^{1/2}\right)
+ \sqrt{\frac{1}{2\pi n}}\; e^{-n\tilde w^2/2}\left[\frac{1}{\tilde w} - \frac{1}{\tilde u} + O\!\left(n^{-1}\right)\right], \tag{11}
\]

respectively, where

\[
\tilde w = \operatorname{sgn}(y_1 - \alpha)\sqrt{2\left[\tilde t\,'g(\tilde y) - K_X(\tilde t\,)\right]},
\qquad \tilde y = (y_1, \tilde y_2), \qquad \alpha = h_1(\mu), \qquad \mu = K_X'(0),
\]
\[
\tilde K = \frac{\det K_X''(\tilde t\,)}{\det^2 J_g(\tilde y)}
\left[\nabla_{y_2}g(\tilde y)'\, K_X''(\tilde t\,)^{-1}\, \nabla_{y_2}g(\tilde y) + \tilde t\,'\nabla_{y_2}^2 g(\tilde y)\right],
\qquad
\tilde u = \tilde t\,'\nabla_{y_1}g(\tilde y)\,\tilde K^{1/2},
\]

and, for each value of y_1, t̃ and ỹ_2 solve the system

\[
K_X'(\tilde t\,) = g(\tilde y),
\qquad
\tilde t\,'\nabla_{y_2}g(\tilde y) = 0.
\]

The next step is to differentiate the expansion for F_n. This yields

\[
f_n(y_1) = \sqrt{\frac{n}{2\pi\tilde K}}\; e^{-n\tilde w^2/2}
\left[1 - \frac{1}{n}\left(\frac{\tilde u}{\tilde w^3} + \tilde K^{1/2}\frac{d}{dy_1}\frac{1}{\tilde u}\right) + O\!\left(n^{-1}\right)\right].
\]

As shown in Appendix B, substituting this into (10) and applying Theorem 1 yields

\[
I_n(c) = \Phi\!\left(\tilde w_c n^{1/2}\right)\left(A_0 + \frac{A_1}{n}\right)
+ \sqrt{\frac{1}{2\pi n}}\; e^{-n\tilde w_c^2/2}
\left[\frac{A_0}{\tilde w_c} - \frac{c}{\tilde u_c}
+ \frac{1}{n}\left(\frac{A_1}{\tilde w_c} + \frac{c}{\tilde w_c^3} - \frac{A_0}{\tilde w_c^3}
- \frac{1}{\tilde t_c'\nabla_{y_1}g(\tilde y_c)\,\tilde u_c}\right)
+ O\!\left(n^{-1}\right)\right], \tag{12}
\]

where, as before, tilded quantities with a subscript c are evaluated at c rather than y_1, and A_0 and A_1 are the coefficients in the asymptotic expansion of E[Y_1]. Denoting the elements of the Hessian of h_1(X̄) evaluated at µ by $h^{ij}$ and the elements of $K_X''(0)$ by $\kappa_{ij}$, these are $A_0 = \alpha$ and $A_1 = h^{ij}\kappa_{ij}/2$; see Hurt (1976). Here we have used the summation convention, i.e., indices appearing both as a subscript and as a superscript are to be summed over.
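In general, the system $K_X'(\tilde t\,) = g(\tilde y)$, $\tilde t\,'\nabla_{y_2}g(\tilde y) = 0$ has to be solved numerically for each value of y_1. The sketch below does so for an illustrative choice of our own: X_1 ∼ N(0, 1) and X_2 ∼ χ²_n independent, with g(y) = (y_1y_2, ny_2²)', so that Y_1 is an ordinary Student's t and the solution ỹ_2 = √(n/(n + y_1²)) is available in closed form as a check. This setup is an assumption made for illustration, not necessarily the representation used in Section 5.

```python
# Solving the saddlepoint system for a fixed y1, for an illustrative g and cgf:
# X1 ~ N(0,1), X2 ~ chi^2_n independent, g(y) = (y1*y2, n*y2^2).
import numpy as np
from scipy.optimize import root

n, y1 = 5.0, -2.0

def system(v):
    t1, t2, y2 = v
    eq1 = t1 - y1 * y2                         # dK_X/dt1 = g1(y)
    eq2 = n - n * y2**2 * (1 - 2 * t2)         # dK_X/dt2 = g2(y), denominator cleared
    eq3 = t1 * y1 + 2 * n * y2 * t2            # t' grad_{y2} g(y) = 0
    return [eq1, eq2, eq3]

sol = root(system, x0=[0.0, -0.1, 0.8])
t1, t2, y2 = sol.x
print(y2, np.sqrt(n / (n + y1**2)))            # numerical vs. closed-form solution
```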
For practical purposes, saddlepoint approximations are often applied with n = 1, incorporating the convolution into the mgf itself. Following that practice, we have the following result.

Theorem 2. Let X = (X_1, X_2) be a bivariate random vector possessing a density and joint cumulant generating function K_X(t). Let g be a smooth bijection such that X = g(Y) = (g_1(Y), g_2(Y))', with inverse Y = (Y_1, Y_2) = h(X) = (h_1(X), h_2(X))'. Then

\[
\mathrm{E}\!\left[Y_1 \mid Y_1 < c\right] \approx (\alpha + A_1)
+ \frac{\phi(\tilde w_c)}{\hat F^1(c)}
\left[(c-\alpha)\left(\frac{1}{\tilde w_c^3} - \frac{1}{\tilde u_c}\right)
+ \frac{A_1}{\tilde u_c}
- \frac{1}{\tilde t_c'\nabla_{y_1}g(\tilde y_c)\,\tilde u_c}\right], \tag{13}
\]

where

\[
\tilde w_c = \operatorname{sgn}(c-\alpha)\sqrt{2\left[\tilde t_c' g(\tilde y_c) - K_X(\tilde t_c)\right]},
\qquad \tilde y_c = (c, \tilde y_{2,c}), \qquad \alpha = h_1(\mu), \qquad \mu = K_X'(0),
\qquad A_1 = h^{ij}\kappa_{ij}/2,
\]

$h^{ij}$ and $\kappa_{ij}$ are the elements of the Hessian of h_1(X) at µ and of $K_X''(0)$, respectively,

\[
\tilde K_c = \frac{\det K_X''(\tilde t_c)}{\det^2 J_g(\tilde y_c)}
\left[\nabla_{y_2}g(\tilde y_c)'\, K_X''(\tilde t_c)^{-1}\, \nabla_{y_2}g(\tilde y_c) + \tilde t_c'\nabla_{y_2}^2 g(\tilde y_c)\right],
\qquad
\tilde u_c = \tilde t_c'\nabla_{y_1}g(\tilde y_c)\,\tilde K_c^{1/2},
\]
\[
\hat F^1(c) = \Phi(\tilde w_c) + \phi(\tilde w_c)\left(\tilde w_c^{-1} - \tilde u_c^{-1}\right),
\]

and t̃_c and ỹ_{2,c} solve the system

\[
K_X'(\tilde t_c) = g(\tilde y_c),
\qquad
\tilde t_c'\nabla_{y_2}g(\tilde y_c) = 0.
\]

5 Application

In this section, we apply our approximation to the doubly noncentral t distribution, which has recently found application as a model for financial returns; see Harvey and Siddique (1999), Broda and Paolella (2007), Paolella (2010), and the references therein. Despite the excellent fit it offers, the widespread adoption of this distribution has been hampered by the computational difficulties associated with it: neither the density nor the distribution function can be expressed in closed form. This limitation is overcome by the saddlepoint approximations for the density and distribution function derived in Broda and Paolella (2007). The resulting approximations are explicit and hence trivially computed, while offering extraordinary accuracy. This benefit is inherited by the expected shortfall calculation as well.

Apart from location and scale parameters (which, without loss of generality, can be taken to be zero and one, respectively), the doubly noncentral t distribution has three parameters: the numerator noncentrality µ, the denominator noncentrality θ, and the degrees of freedom, n. The relevant quantities entering the approximations are

\[
\tilde t_c'\nabla_{y_1}g(\tilde y_c) = \tilde t_{1,c}\,\tilde y_{2,c},
\qquad
\tilde K_c = \frac{(c^2 + 2n\tilde t_{2,c})(2n\nu^2 + 4\theta\nu^3) + 4n^2\tilde y_{2,c}^2}{\left(2n\tilde y_{2,c}^2\right)^2},
\qquad
\alpha = \frac{\mu}{\sqrt{1+\theta/n}},
\]
\[
\tilde w_c = \operatorname{sgn}(c-\alpha)
\sqrt{2\tilde t_{1,c}c\tilde y_{2,c} + 2n\tilde t_{2,c}\tilde y_{2,c}^2 - 2\mu\tilde t_{1,c} - \tilde t_{1,c}^2 - n\log\nu - 2\theta\nu\tilde t_{2,c}},
\]

where $\nu = (1 - 2\tilde t_{2,c})^{-1}$, $\tilde t_{1,c} = -\mu + c\tilde y_{2,c}$, $\tilde t_{2,c} = -c\tilde t_{1,c}/(2n\tilde y_{2,c})$, and, with

\[
a_3 = c^4 + 2nc^2 + n^2, \qquad a_2 = -2c^3\mu - 2cn\mu, \qquad a_1 = c^2\mu^2 - nc^2 - n^2 - \theta n, \qquad a_0 = cn\mu,
\]
\[
c_2 = \frac{a_2}{a_3}, \qquad c_1 = \frac{a_1}{a_3}, \qquad c_0 = \frac{a_0}{a_3}, \qquad
q = \frac{c_1}{3} - \frac{c_2^2}{9}, \qquad
r = \frac{c_1c_2 - 3c_0}{6} - \frac{c_2^3}{27}, \qquad
m = q^3 + r^2, \qquad s_{1,2} = \left(r \pm \sqrt{m}\right)^{1/3},
\]
\[
\tilde y_{2,c} = \sqrt{-4q}\,\cos\!\left(\frac{1}{3}\cos^{-1}\!\left(\frac{r}{\sqrt{-q^3}}\right)\right) - \frac{c_2}{3},
\]

and it is easily seen that $A_1 = 3\alpha(n+2\theta)/\left(4(n+\theta)^2\right)$. With these, approximation (13) can be computed.

For illustration, we choose the set of parameters (µ, θ) = (−10, 10), which corresponds to a heavily left-skewed distribution. For n ∈ {2, 4, 8, 16}, Figure 3 displays the accuracy of (13). The exact values (the solid line in the graph) are obtained from the relationship

\[
I_n(c) = cF_n(c) - \int_{-\infty}^{c} F_n(\bar x)\,d\bar x, \qquad c < 0, \tag{14}
\]

which follows from (6) upon integration by parts, and where the exact distribution function F_n is computed via the algorithm described in Reeve (1986). The accuracy of the approximation clearly improves with larger n, as expected, but is already quite good for n as low as 2 (note that for n = 1, the doubly noncentral t does not possess a mean, and thus the expected shortfall is not defined). When the t distribution is fitted to actual financial returns data, the degrees of freedom parameter is typically estimated between 4 and 8. The approximation is certainly accurate enough in this range for all practical purposes.
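To make the application concrete, the following sketch transcribes the quantities listed above into code for (µ, θ, n) = (−10, 10, 8), as in Figure 3, obtains the quantile from the first-order distribution function approximation as in (5), and evaluates (13). It is a hedged illustration based on our reading of the formulas, with our own variable names and root-search bracket, and is not the authors' implementation.

```python
# Approximation (13) for the doubly noncentral t with mu = -10, theta = 10, n = 8.
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

mu, theta, n = -10.0, 10.0, 8.0
alpha = mu / np.sqrt(1 + theta / n)
A1 = 3 * alpha * (n + 2 * theta) / (4 * (n + theta)**2)

def spa_quantities(c):
    # root y2 of the cubic a3*y^3 + a2*y^2 + a1*y + a0 = 0 (trigonometric form)
    a3 = c**4 + 2 * n * c**2 + n**2
    a2 = -2 * c**3 * mu - 2 * c * n * mu
    a1 = c**2 * mu**2 - n * c**2 - n**2 - theta * n
    a0 = c * n * mu
    c2, c1, c0 = a2 / a3, a1 / a3, a0 / a3
    qq = c1 / 3 - c2**2 / 9
    rr = (c1 * c2 - 3 * c0) / 6 - c2**3 / 27
    y2 = np.sqrt(-4 * qq) * np.cos(np.arccos(rr / np.sqrt(-qq**3)) / 3) - c2 / 3
    # saddlepoint quantities
    t1 = -mu + c * y2
    t2 = -(c * t1) / (2 * n * y2)
    nu = 1 / (1 - 2 * t2)
    tdg1 = t1 * y2                                     # t~' grad_{y1} g
    Ktil = (((c**2 + 2 * n * t2) * (2 * n * nu**2 + 4 * theta * nu**3)
             + 4 * n**2 * y2**2) / (2 * n * y2**2)**2)
    u = tdg1 * np.sqrt(Ktil)
    w = np.sign(c - alpha) * np.sqrt(2 * t1 * c * y2 + 2 * n * t2 * y2**2
                                     - 2 * mu * t1 - t1**2
                                     - n * np.log(nu) - 2 * theta * nu * t2)
    return w, u, tdg1

def F1(c):                                             # first-order cdf approximation
    w, u, _ = spa_quantities(c)
    return norm.cdf(w) + norm.pdf(w) * (1 / w - 1 / u)

def cond_mean(c):                                      # right-hand side of (13)
    w, u, tdg1 = spa_quantities(c)
    return (alpha + A1) + norm.pdf(w) / F1(c) * (
        (c - alpha) * (1 / w**3 - 1 / u) + A1 / u - 1 / (tdg1 * u))

q   = 0.01
t_q = brentq(lambda c: F1(c) - q, -40.0, -8.0)         # approximate quantile
print(t_q, -cond_mean(t_q))                            # VaR and expected shortfall
```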
[Figure 3: Approximations to the expected shortfall of a doubly noncentral t with µ = −10, θ = 10, and n = 2, 4, 8, 16 (top left to bottom right); each panel plots E[T | T < t_q] against q for the exact values and the saddlepoint approximation (SPA).]

References

Acerbi, C. and D. Tasche (2002a): "Expected Shortfall: A Natural Coherent Alternative to Value at Risk," Economic Notes, 31, 379-388.

Acerbi, C. and D. Tasche (2002b): "On the Coherence of Expected Shortfall," Journal of Banking and Finance, 26, 1487-1503.

Artzner, P., F. Delbaen, J.-M. Eber, and D. Heath (1999): "Coherent Measures of Risk," Mathematical Finance, 9, 203-228.

Barndorff-Nielsen, O. E. (1997): "Normal Inverse Gaussian Distributions and Stochastic Volatility Modelling," Scandinavian Journal of Statistics, 24, 1-13.

Broda, S. and M. S. Paolella (2007): "Saddlepoint Approximations for the Doubly Noncentral t Distribution," Computational Statistics & Data Analysis, 51, 2907-2918.

Broda, S. A. and M. S. Paolella (2009): "CHICAGO: A Fast and Accurate Method for Portfolio Risk Calculation," Journal of Financial Econometrics, 7, 412-436.

Broda, S. A. and M. S. Paolella (2011): "Saddlepoint Approximations: A Review and Some New Applications," in Handbook of Computational Statistics, ed. by J. E. Gentle, W. K. Härdle, and Y. Mori, Berlin: Springer, 2nd ed.

Butler, R. W. (2007): Saddlepoint Approximations with Applications, Cambridge: Cambridge University Press.

Butler, R. W. and A. T. A. Wood (2004): "Saddlepoint Approximation for Moment Generating Functions of Truncated Random Variables," Annals of Statistics, 32, 2712-2730.

Daniels, H. E. (1954): "Saddlepoint Approximation in Statistics," Annals of Mathematical Statistics, 25, 631-650.

Daniels, H. E. and G. A. Young (1991): "Saddlepoint Approximation for the Studentized Mean, with an Application to the Bootstrap," Biometrika, 78, 169-179.

Eberlein, E. and U. Keller (1995): "Hyperbolic Distributions in Finance," Bernoulli, 1, 281-299.

Harvey, C. R. and A. Siddique (1999): "Autoregressive Conditional Skewness," Journal of Financial and Quantitative Analysis, 34, 465-487.

Hurt, J. (1976): "Asymptotic Expansions of Functions of Statistics," Applications of Mathematics, 21, 444-456.

Jing, B. and J. Robinson (1994): "Saddlepoint Approximations for Marginal and Conditional Probabilities of Transformed Variables," Annals of Statistics, 22, 1115-1132.

Lugannani, R. and S. O. Rice (1980): "Saddlepoint Approximations for the Distribution of Sums of Independent Random Variables," Advances in Applied Probability, 12, 475-490.

Martin, R. (2006): "The Saddlepoint Method and Portfolio Optionalities," Risk Magazine, 19, 93-95.

Paolella, M. S. (2010): "ALRIGHT: Asymmetric Large-Scale (I)GARCH with Hetero-Tails," Tech. rep., available at SSRN.

Reeve, C. P. (1986): "An Algorithm for Computing the Doubly Noncentral t C.D.F. to a Specified Accuracy," SED Note N-86-5, National Bureau of Standards, Statistical Engineering Division.

Temme, N. M. (1982): "The Uniform Asymptotic Expansion of a Class of Integrals Related to Cumulative Distribution Functions," SIAM Journal on Mathematical Analysis, 13, 239-253.
A Proof of (8)

Replacing f_n(x̄) in (6) with its second-order saddlepoint approximation and applying Theorem 1 yields

\[
\begin{aligned}
I_n(c) &= \int_{-\infty}^{c} \bar x f_n(\bar x)\,d\bar x
= \sqrt{\frac{n}{2\pi}}\int_{-\infty}^{c} \frac{\bar x}{\sqrt{K_X''(\hat t\,)}}\; e^{-n\hat w^2/2}\left[1 + \frac{a_1}{n} + O\!\left(n^{-2}\right)\right]d\bar x \\
&= \sqrt{\frac{n}{2\pi}}\int_{-\infty}^{\hat w_c} \frac{\bar x}{\sqrt{K_X''(\hat t\,)}}\,\frac{d\bar x}{d\hat w}\; e^{-n\hat w^2/2}\left[1 + \frac{a_1}{n} + O\!\left(n^{-2}\right)\right]d\hat w
= \sqrt{\frac{n}{2\pi}}\int_{-\infty}^{\hat w_c} e^{-n\hat w^2/2}\,\Psi_n(\hat w)\,d\hat w,
\end{aligned} \tag{15}
\]

where

\[
\Psi_n(\hat w) \equiv \frac{\bar x}{\sqrt{K_X''(\hat t\,)}}\,\frac{d\bar x}{d\hat w}\left[1 + \frac{a_1}{n} + O\!\left(n^{-2}\right)\right]
= \frac{\hat w\bar x}{\hat u}\left[1 + \frac{a_1}{n} + O\!\left(n^{-2}\right)\right]
= \psi_0(\hat w) + \frac{\psi_1(\hat w)}{n} + O\!\left(n^{-2}\right),
\]

with

\[
\psi_0(\hat w) \equiv \frac{\hat w\bar x}{\hat u}
\qquad\text{and}\qquad
\psi_1(\hat w) \equiv \frac{\hat w\bar x}{\hat u}\left(\frac{1}{8}\hat\lambda_4 - \frac{5}{24}\hat\lambda_3^2\right).
\]

Equation (15) is in the form of Theorem 1. The coefficients in the expansion evaluate to

\[
B_0(\hat w_c) = \frac{\mu}{\hat w_c} - \frac{c}{\hat u_c}
\]

and, using $A_1 = 0$ (because $\mathrm{E}[\bar X] = \mu$ exactly), $d\bar x/d\hat w = \hat w/\hat t$, and $d\hat u/d\hat w = \hat w/\hat u + \hat w\hat\lambda_3/2$,

\[
\begin{aligned}
\hat w_c B_1(\hat w_c) &= A_1 - \psi_1(\hat w_c) + \frac{d}{d\hat w_c}B_0(\hat w_c) \\
&= -\frac{\hat w_c c}{\hat u_c}\left(\frac{1}{8}\hat\lambda_4 - \frac{5}{24}\hat\lambda_3^2\right)
- \frac{\mu}{\hat w_c^2} - \frac{1}{\hat u_c}\frac{dc}{d\hat w_c} + \frac{c}{\hat u_c^2}\frac{d\hat u_c}{d\hat w_c} \\
&= -\frac{\hat w_c c}{\hat u_c}\left(\frac{1}{8}\hat\lambda_4 - \frac{5}{24}\hat\lambda_3^2\right)
- \frac{\mu}{\hat w_c^2} - \frac{\hat w_c}{\hat t_c\hat u_c}
+ \frac{c}{\hat u_c^2}\left(\frac{\hat w_c}{\hat u_c} + \frac{\hat w_c\hat\lambda_3}{2}\right) \\
\Leftrightarrow\qquad
B_1(\hat w_c) &= -\frac{c}{\hat u_c}\left(\frac{1}{8}\hat\lambda_4 - \frac{5}{24}\hat\lambda_3^2\right)
- \frac{\mu}{\hat w_c^3} - \frac{1}{\hat t_c\hat u_c} + \frac{c}{\hat u_c^3} + \frac{c\hat\lambda_3}{2\hat u_c^2},
\end{aligned}
\]

in agreement with (8).

B Proof of (12)

Replacing f_n(y_1) in (10) with the first derivative of the tail probability approximation (11) and applying Theorem 1 yields

\[
\begin{aligned}
I_n(c) &= \int_{-\infty}^{c} y_1 f_n(y_1)\,dy_1
= \sqrt{\frac{n}{2\pi}}\int_{-\infty}^{c} \frac{y_1}{\sqrt{\tilde K}}\; e^{-n\tilde w^2/2}
\left[1 - \frac{1}{n}\left(\frac{\tilde u}{\tilde w^3} + \tilde K^{1/2}\frac{d}{dy_1}\frac{1}{\tilde u}\right) + O\!\left(n^{-1}\right)\right]dy_1 \\
&= \sqrt{\frac{n}{2\pi}}\int_{-\infty}^{\tilde w_c} \frac{y_1}{\sqrt{\tilde K}}\,\frac{dy_1}{d\tilde w}\; e^{-n\tilde w^2/2}
\left[1 - \frac{1}{n}\left(\frac{\tilde u}{\tilde w^3} + \tilde K^{1/2}\frac{d}{dy_1}\frac{1}{\tilde u}\right) + O\!\left(n^{-1}\right)\right]d\tilde w
= \sqrt{\frac{n}{2\pi}}\int_{-\infty}^{\tilde w_c} e^{-n\tilde w^2/2}\,\Psi_n(\tilde w)\,d\tilde w,
\end{aligned} \tag{16}
\]

where, using $dy_1/d\tilde w = \tilde w/\bigl(\tilde t\,'\nabla_{y_1}g(\tilde y)\bigr) = \tilde w\tilde K^{1/2}/\tilde u$,

\[
\Psi_n(\tilde w) \equiv \frac{y_1}{\sqrt{\tilde K}}\,\frac{dy_1}{d\tilde w}
\left[1 - \frac{1}{n}\left(\frac{\tilde u}{\tilde w^3} + \tilde K^{1/2}\frac{d}{dy_1}\frac{1}{\tilde u}\right) + O\!\left(n^{-1}\right)\right]
= \frac{\tilde w y_1}{\tilde u}\left[1 - \frac{1}{n}\left(\frac{\tilde u}{\tilde w^3} + \tilde K^{1/2}\frac{d}{dy_1}\frac{1}{\tilde u}\right) + O\!\left(n^{-1}\right)\right]
= \psi_0(\tilde w) + \frac{\psi_1(\tilde w)}{n} + O\!\left(n^{-1}\right),
\]

with

\[
\psi_0(\tilde w) \equiv \frac{\tilde w y_1}{\tilde u}
\qquad\text{and}\qquad
\psi_1(\tilde w) \equiv -\left(\frac{y_1}{\tilde w^2} + \frac{\tilde w y_1}{\tilde u}\tilde K^{1/2}\frac{d}{dy_1}\frac{1}{\tilde u}\right).
\]

Again, Equation (16) is in the form of Theorem 1, with the coefficients in the expansion given by

\[
B_0(\tilde w_c) = \frac{A_0}{\tilde w_c} - \frac{c}{\tilde u_c}
\]

and

\[
\begin{aligned}
\tilde w_c B_1(\tilde w_c) &= A_1 - \psi_1(\tilde w_c) + \frac{d}{d\tilde w_c}B_0(\tilde w_c) \\
&= A_1 + \frac{c}{\tilde w_c^2} + \frac{\tilde w_c c}{\tilde u_c}\tilde K_c^{1/2}\frac{d}{dc}\frac{1}{\tilde u_c}
- \frac{A_0}{\tilde w_c^2} - \frac{\tilde w_c\tilde K_c^{1/2}}{\tilde u_c^2}
- \frac{\tilde w_c c}{\tilde u_c}\tilde K_c^{1/2}\frac{d}{dc}\frac{1}{\tilde u_c} \\
&= A_1 + \frac{c}{\tilde w_c^2} - \frac{A_0}{\tilde w_c^2} - \frac{\tilde w_c\tilde K_c^{1/2}}{\tilde u_c^2},
\end{aligned}
\]

where the two terms involving $d(1/\tilde u_c)/dc$ cancel. Dividing by $\tilde w_c$ and using $\tilde u_c = \tilde t_c'\nabla_{y_1}g(\tilde y_c)\tilde K_c^{1/2}$, so that $\tilde K_c^{1/2}/\tilde u_c^2 = 1/\bigl(\tilde t_c'\nabla_{y_1}g(\tilde y_c)\,\tilde u_c\bigr)$, yields the second-order coefficient appearing in (12).
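As an independent spot check of the coefficients derived in Appendix A, the following sketch evaluates the second-order approximation (8) for the mean of n unit exponentials, where λ̂_3 = 2, λ̂_4 = 6, a_1 = −1/12, µ = 1, and the exact value of I_n(c) is available in closed form; the two printed numbers should agree closely. The check and its test values are ours, not part of the paper.

```python
# Numerical check of the second-order formula (8) for the mean of n Exp(1) variables.
import numpy as np
from scipy.stats import norm, gamma

def I_second_order(c, n):
    t  = 1 - 1 / c                          # saddlepoint: c = K'_X(t)
    K  = -np.log1p(-t)                      # K_X(t) = -log(1 - t)
    w  = np.sign(t) * np.sqrt(2 * (t * c - K))
    u  = t / (1 - t)                        # u = t * sqrt(K''_X(t))
    lam3, lam4, mu = 2.0, 6.0, 1.0
    a1 = lam4 / 8 - 5 * lam3**2 / 24        # = -1/12
    b  = (c / u**3 - mu / w**3 + c * lam3 / (2 * u**2)
          - c * a1 / u - 1 / (t * u))       # the O(1/n) bracket in (8)
    return (norm.cdf(w * np.sqrt(n)) * mu
            + np.sqrt(1 / (2 * np.pi * n)) * np.exp(-n * w**2 / 2)
              * (mu / w - c / u + b / n))

n, c = 5, 0.6
print(I_second_order(c, n))                              # approximation (8)
print(gamma(a=n + 1, scale=1 / n).cdf(c))                # exact I_n(c) for this case
```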