Asymptotic Optimal Strategy for Portfolio Optimization in a Slowly Varying Stochastic Environment

Jean-Pierre Fouque*   Ruimeng Hu†

March 9, 2016

Abstract

In this paper, we study the portfolio optimization problem with general utility functions when the return and volatility of the underlying asset are slowly varying. An asymptotically optimal strategy is provided within a specific class of admissible controls under this problem setup. Specifically, we first establish a rigorous first-order approximation of the value function associated with a fixed zeroth-order suboptimal trading strategy, which is given by the heuristic argument in [J.-P. Fouque, R. Sircar and T. Zariphopoulou, Mathematical Finance, 2016]. Then, we show that this zeroth-order suboptimal strategy is asymptotically optimal within a specific family of admissible trading strategies. Finally, we show that our assumptions are satisfied by a particular fully solvable model.

Keywords: Portfolio allocation, stochastic volatility, regular perturbation, asymptotic optimality

1 Introduction

The portfolio optimization problem was first introduced and studied in the continuous-time framework in Merton [1969, 1971], which provided explicit solutions on how to trade stocks and/or how to consume so as to maximize one's utility, with risky assets following the Black-Scholes-Merton model (that is, geometric Brownian motions with constant returns and constant volatilities), and when the utility function is of specific types (for instance, Constant Relative Risk Aversion (CRRA)). Following these pioneering works, additional constraints were added to this model to mimic real-life investments. These include transaction costs, originally considered by Magill and Constantinides [1976] with a user's guide by Guasoni and Muhle-Karbe [2013], and investments under drawdown constraints, for instance by Grossman and Zhou [1993], Cvitanic and Karatzas [1995] and Elie and Touzi [2008], just to name a few.
In the meantime, more general models for risky assets have also been considered: Cox and Huang [1989] and Karatzas et al. [1987] first studied the incomplete-market case, Zariphopoulou [1999] studied the case where the drift and volatility terms are nonlinear functions of the asset price, and Chacko and Viceira [2005] gave a closed-form solution under a particular one-factor stochastic volatility model. Recently, multiscale factor models for risky assets were considered in the portfolio optimization problem in Fouque et al. [2016], where return and volatility are driven by fast and slow factors. Specifically, the authors heuristically derived the asymptotic approximation to the value function and the optimal strategy for general utility functions. In this paper, we focus on a risky asset modeled with a slowly varying stochastic factor only, and the reason is twofold. Firstly, the slow factor is particularly important in long-term investment, because the effect of the fast factor is approximately averaged out over long horizons, as studied in [Fouque et al., 2016, Section 2]. Secondly, the analysis under a model with a fast mean-reverting stochastic factor requires singular asymptotic techniques and more technical details in combining the fast and slow factors; it will therefore be presented in another paper in preparation (Fouque and Hu [2016]).

*Department of Statistics & Applied Probability, University of California, Santa Barbara, CA 93106-3110, [email protected]. Work supported by NSF grant DMS-1409434.
†Department of Statistics & Applied Probability, University of California, Santa Barbara, CA 93106-3110, [email protected].

We describe the model below, with the dynamics of the underlying asset and of the slowly varying factor denoted by S_t and Z_t respectively:

    dS_t = μ(Z_t) S_t dt + σ(Z_t) S_t dW_t,            (1.1)
    dZ_t = δ c(Z_t) dt + √δ g(Z_t) dW_t^Z,             (1.2)

where the standard Brownian motions W_t and W_t^Z are correlated with |ρ| < 1:
    d⟨W, W^Z⟩_t = ρ dt.

Assumptions on the coefficients μ(z), σ(z), c(z), g(z) of the model will be specified in Section 2.4. In (1.2), δ is a small positive parameter that characterizes the slow variation of the process Z. Note that Z_t = Z_{δt}^(1), where the diffusion process Z^(1) has the following infinitesimal generator, denoted by M:

    M = (1/2) g(z)² ∂_zz + c(z) ∂_z.            (1.3)

We refer to Fouque et al. [2011] for more details on this model, where asymptotic results in the limit δ → 0 are derived for linear problems of option pricing.

Denote by X_t^π the wealth process associated with the Markovian strategy π: under this strategy, the amount of money π(t, x, z) is invested in the stock at time t when the stock price is x and the level of the slow factor Z_t is z, with the remaining money held in the money market earning a risk-free interest rate r. Assuming that the portfolio is self-financing, X_t^π follows

    dX_t^π = π(t, X_t^π, Z_t) dS_t/S_t + r(X_t^π − π(t, X_t^π, Z_t)) dt
           = (rX_t^π + π(t, X_t^π, Z_t)(μ(Z_t) − r)) dt + π(t, X_t^π, Z_t)σ(Z_t) dW_t.

For simplicity and without loss of generality, we assume r = 0 for the rest of the paper, and then

    dX_t^π = π(t, X_t^π, Z_t)μ(Z_t) dt + π(t, X_t^π, Z_t)σ(Z_t) dW_t.            (1.4)

An investor aims at finding an optimal strategy π which maximizes her expected terminal utility E[U(X_T^π)], where U(x) is in a general class of utility functions. Denote by V^δ(t, x, z) the value function

    V^δ(t, x, z) = sup_{π ∈ A(t,x,z)} E[U(X_T^π) | X_t^π = x, Z_t = z],            (1.5)

where the supremum is taken over all admissible strategies

    A(t, x, z) = {π : X_s^π in (1.4) stays nonnegative ∀s ≥ t, given X_t^π = x and Z_t = z}.            (1.6)

The Hamilton-Jacobi-Bellman (HJB) equation for V^δ is given by

    V_t^δ + δMV^δ + max_π [ (1/2)σ²(z)π² V_xx^δ + π(μ(z)V_x^δ + √δ ρg(z)σ(z)V_xz^δ) ] = 0.            (1.7)

As in Fouque et al. [2016], we shall assume that V^δ is the unique classical solution of (1.7).
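The model (1.1)-(1.2) is straightforward to simulate, which is useful for numerical experiments later in the paper. Below is a minimal Euler-type sketch; the specific coefficient choices (affine μ, exponential σ, and an Ornstein-Uhlenbeck-type slow factor) are illustrative assumptions for the sketch, not taken from the paper.

```python
import numpy as np

# Simulation sketch of (1.1)-(1.2): log-Euler step for S (keeps S_t > 0)
# and an Euler step for the slow factor Z. Coefficients are hypothetical.
def simulate(s0=100.0, z0=0.0, delta=0.01, rho=-0.5, T=1.0, n=1000, seed=0):
    rng = np.random.default_rng(seed)
    dt = T / n
    mu = lambda z: 0.05 + 0.1 * z            # hypothetical return coefficient
    sigma = lambda z: 0.2 * np.exp(0.5 * z)  # hypothetical volatility coefficient
    c = lambda z: -z                         # OU-type drift for the slow factor
    g = lambda z: 1.0
    S = np.empty(n + 1); Z = np.empty(n + 1)
    S[0], Z[0] = s0, z0
    for k in range(n):
        dW = rng.normal(0.0, np.sqrt(dt))
        dB = rng.normal(0.0, np.sqrt(dt))
        dWZ = rho * dW + np.sqrt(1 - rho**2) * dB   # enforces d<W, W^Z>_t = rho dt
        S[k + 1] = S[k] * np.exp((mu(Z[k]) - 0.5 * sigma(Z[k])**2) * dt
                                 + sigma(Z[k]) * dW)
        Z[k + 1] = Z[k] + delta * c(Z[k]) * dt + np.sqrt(delta) * g(Z[k]) * dWZ
    return S, Z
```

With δ = 0.01, the simulated slow factor barely moves over [0, T], which is exactly the regime the asymptotic analysis exploits.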
Maximizing in π and plugging in the optimizer gives the following nonlinear equation:

    V_t^δ + δMV^δ − (λ(z)V_x^δ + √δ ρg(z)V_xz^δ)² / (2V_xx^δ) = 0,            (1.8)
    V^δ(T, x, z) = U(x),

where the optimizer (optimal control) is given in feedback form by

    π* = − λ(z)V_x^δ / (σ(z)V_xx^δ) − √δ ρg(z)V_xz^δ / (σ(z)V_xx^δ),

and the Sharpe ratio is λ(z) = μ(z)/σ(z). The HJB equation (1.7) is fully nonlinear and not explicitly solvable in general. In Fouque et al. [2016], a regular perturbation approach is used to derive an approximation of V^δ up to first order; namely, the value function V^δ is formally expanded as

    V^δ = v^(0) + √δ v^(1) + δ v^(2) + · · · ,            (1.9)

with v^(0) and v^(1) identified by asymptotic equations. It is also observed in [Fouque et al., 2016, Section 3.2.1] that the zeroth-order suboptimal strategy

    π^(0)(t, x, z) = − (λ(z)/σ(z)) · v_x^(0)(t, x, z)/v_xx^(0)(t, x, z)            (1.10)

not only gives the optimal value up to the principal term v^(0), but also up to the first-order √δ correction v^(0) + √δ v^(1).

Main result. We first prove that the first-order approximation of E[U(X_T^{π^(0)})] is v^(0) + √δ v^(1), where X_T^{π^(0)} is the terminal value of the wealth process associated with the strategy π^(0) given by (1.10), and U(x) is in a general class of utility functions with precise assumptions given in Section 2.3. Then we establish the optimality of π^(0) in the class of admissible controls of the form π̃^0(t, x, z) + δ^α π̃^1(t, x, z), with precise properties given in Section 4. We also solve and analyze a concrete model that satisfies all the assumptions, in order to demonstrate that they are reasonable. We remark that the optimality of π^(0) in the full class of controls (1.6) remains open.

Organization of the paper. In Section 2, we briefly review the classical Merton problem and the heuristic results in Fouque et al. [2016]. We also list all the assumptions needed for our theoretical proofs.
In Section 3, we apply the regular perturbation technique to the value function associated with the strategy π^(0), and we prove its first-order accuracy. Then we show that π^(0) is asymptotically optimal in a smaller class of admissible controls in Section 4. A fully solvable example with a closed-form solution is presented in Section 5, which satisfies all the assumptions listed in Section 2. We make concluding remarks in Section 6.

2 Preliminaries and assumptions

In this section, we review the classical Merton problem, summarize the heuristic results in Fouque et al. [2016], and list the assumptions on the utility function and the state processes needed for later proofs.

2.1 Merton problem with constant coefficients

We first discuss the case of μ and σ being constant in (1.1), which plays a crucial role in interpreting the leading-order value function v^(0) in (1.9) and in the analysis of the regular perturbation. This problem has been widely studied and completely solved, and we start with some background results. Let X_t be the solution to

    dX_t = π*(t, X_t)μ dt + π*(t, X_t)σ dW_t,            (2.1)

where π*(t, x) is the optimal trading strategy; then X_t stays nonnegative up to time T, and

    ∫_0^T |σπ*(t, X_t)|² dt < ∞, almost surely.            (2.2)

We refer to [Karatzas and Shreve, 1998, Chapter 3] for details. Following the notation of Fouque et al. [2016], we denote by M(t, x; λ) the Merton value function; one then also has the following regularity results.

Proposition 2.1.
If the investor's utility U(x) is C²(0, ∞), strictly increasing, strictly concave, and satisfies the Inada and Asymptotic Elasticity conditions

    U′(0+) = ∞,  U′(∞) = 0,  AE[U] := lim_{x→∞} x U′(x)/U(x) < 1,

then the Merton value function is strictly increasing and strictly concave in the wealth variable x, decreasing in the time variable t, and is the unique C^{1,2}([0, T] × R⁺) solution to the HJB equation

    M_t + sup_π [ (1/2)σ²π² M_xx + μπM_x ] = M_t − (1/2)λ² M_x²/M_xx = 0,  M(T, x; λ) = U(x),            (2.3)

where λ = μ/σ is the constant Sharpe ratio. It is also continuously differentiable with respect to λ = μ/σ, and π* = −(λ/σ) M_x/M_xx.

For the proof we refer to [Fouque et al., 2016, Section 2.1], where these properties were stated and where it was also mentioned that they have been established primarily using the Fenchel-Legendre transformation. The following relation between partial derivatives of M(t, x; λ) is provided in [Fouque et al., 2016, Lemma 3.2].

Lemma 2.2. The Merton value function M(t, x; λ) satisfies the "Vega-Gamma" relation

    M_λ = −(T − t)λR² M_xx,            (2.4)

where

    R(t, x; λ) = − M_x(t, x; λ)/M_xx(t, x; λ)            (2.5)

is the risk-tolerance function.

Note that R(t, x; λ) is continuous and strictly positive due to the regularity, concavity and monotonicity of M(t, x; λ). As introduced in Fouque et al. [2016], we recall the notation

    D_k = R(t, x; λ)^k ∂_x^k,  k = 1, 2, · · · ,            (2.6)
    L_{t,x}(λ) = ∂_t + (1/2)λ² D_2 + λ² D_1.            (2.7)

Note that the coefficients of L_{t,x}(λ) depend on R(t, x; λ), and hence on M(t, x; λ), and that the Merton PDE (2.3) can be rewritten as

    L_{t,x}(λ)M(t, x; λ) = 0.            (2.8)

The following proposition regarding the linear operator L_{t,x}(λ) will be used repeatedly in Sections 3 and 4.

Proposition 2.3. Let L_{t,x}(λ) be the operator defined in (2.7), and assume that the utility function U(x) satisfies the conditions in Proposition 2.1 and that U(0+) = 0 (or is finite). Then

    L_{t,x}(λ)u(t, x; λ) = 0,  u(T, x; λ) = U(x),            (2.9)

has a unique nonnegative solution.

Proof.
First, observe that M(t, x; λ) is a solution of (2.9). To show uniqueness, we use the following transformation from Fouque et al. [2016]:

    ξ = − log M_x(t, x; λ) + (1/2)λ²(T − t),  t′ = t,            (2.10)

which is one-to-one since the Jacobian −M_xx/M_x stays positive. Define w(t′, ξ; λ) = u(t, x; λ); then w solves

    Hw = w_{t′} + (1/2)λ² w_{ξξ} = 0,  w(T, ξ; λ) = U(I(e^{−ξ})).

Uniqueness of the nonnegative solution then follows from classical results for the heat equation [John, 1982, Chapter 7.1(d)].

2.2 Existing results under a slowly varying stochastic factor

In this subsection, we summarize the results in Fouque et al. [2016] that will be used in later sections. Recall the formal expansion of V^δ:

    V^δ = v^(0) + √δ v^(1) + δ v^(2) + · · · .

Then:

(i) The leading-order term v^(0) is the unique solution to

    v_t^(0) − (1/2)λ(z)² (v_x^(0))²/v_xx^(0) = 0,  v^(0)(T, x, z) = U(x),            (2.11)

and is the Merton value function associated with the current (or frozen) Sharpe ratio λ(z) = μ(z)/σ(z):

    v^(0)(t, x, z) = M(t, x; λ(z)).            (2.12)

(ii) The first-order correction v^(1) solves the linear PDE

    v_t^(1) + (1/2)λ(z)² (v_x^(0)/v_xx^(0))² v_xx^(1) − λ(z)² (v_x^(0)/v_xx^(0)) v_x^(1) = ρλ(z)g(z) (v_x^(0)/v_xx^(0)) v_xz^(0),  v^(1)(T, x, z) = 0.

Using the notation (2.6)-(2.7), v^(1) satisfies

    L_{t,x}(λ(z))v^(1) = −ρλ(z)g(z)D_1 v_z^(0),  v^(1)(T, x, z) = 0.            (2.13)

The linear equation (2.13) with zero terminal condition v^(1)(T, x, z) = 0 has a unique solution. As a consequence, the equation

    L_{t,x}(λ(z))u(t, x, z) = 0,  u(T, x, z) = 0,            (2.14)

has the unique solution u ≡ 0.

(iii) By the "Vega-Gamma" relation stated in Lemma 2.2, the z-derivative of the leading-order term v^(0) satisfies

    v_z^(0) = −(T − t)λ(z)λ′(z)D_2 v^(0),            (2.15)

and v^(1) is explicitly given in terms of v^(0) by

    v^(1) = −(1/2)(T − t)ρλ(z)g(z) v_x^(0) v_xz^(0)/v_xx^(0).            (2.16)

2.3 Assumptions on the utility U(x)

Assumption 2.4.
Throughout the paper, we make the following assumptions on the utility U(x):

(i) U(x) is C⁷(0, ∞), strictly increasing, strictly concave, and satisfies the following conditions (Inada and Asymptotic Elasticity):

    U′(0+) = ∞,  U′(∞) = 0,  AE[U] := lim_{x→∞} x U′(x)/U(x) < 1.            (2.17)

(ii) U(0+) is finite. Without loss of generality, we assume U(0+) = 0.

(iii) Denote by R(x) the risk tolerance,

    R(x) := −U′(x)/U′′(x).            (2.18)

Assume that R(0) = 0, that R(x) is strictly increasing and R′(x) < ∞ on [0, ∞), and that there exists K ∈ R⁺ such that for x ≥ 0 and 2 ≤ i ≤ 5,

    |∂_x^i R^i(x)| ≤ K.            (2.19)

(iv) Define the inverse function of the marginal utility U′(x) as I : R⁺ → R⁺, I(y) = U′^(−1)(y), and assume that, for some positive α, I(y) satisfies the polynomial growth condition

    I(y) ≤ α + κ y^{−α}.            (2.20)

Note that the risk tolerance R(x) given by (2.18) is in fact the risk-tolerance function R(t, x; λ) at the terminal time T, and that assumption (2.19), although made for 2 ≤ i ≤ 5, also holds for i = 1 as a consequence of the following lemma.

Lemma 2.5 (Källblad and Zariphopoulou [2014], Proposition 14). Assume that the risk tolerance R(x) satisfies: R(0) = 0, R(x) is strictly increasing and R′(x) < ∞ on [0, ∞), and there exists K ∈ R⁺ such that

    |∂_x² R²(x)| ≤ K;

then

    R′(x) ≤ C  and  R(x) ≤ Cx,            (2.21)

where C = √(K/2).

Lemma 2.6. The Asymptotic Elasticity condition (2.17) is implied by the following condition: R(x) ≤ Cx.

Proof. It follows directly from Proposition B.3 in Schachermayer [2004], which we recall here for completeness: if lim inf_{x→+∞} AP[U] = a > 0, then AE[U] ≤ (1 − a)⁺, where AP[U] is the Arrow-Pratt risk aversion given by

    AP[U] = −x U′′(x)/U′(x).            (2.22)

Now the lemma follows from

    a = lim inf_{x→+∞} x/R(x) ≥ lim inf_{x→+∞} x/(Cx) = 1/C > 0.

Remark 2.7. Assumption 2.4 (ii) is a sufficient assumption; in fact, there are cases where U(0+) is not finite but our main Theorem 3.1 still holds.
For example, the power utility U(x) = x^γ/γ with γ < 0, and the logarithmic utility U(x) = log(x). For the first case, the fully nonlinear accuracy problem is completely solved in Fouque et al. [2016] by a distortion transformation, which linearizes the problem.

By expanding ∂_x^i R^i(x) in (2.19) and using Lemma 2.5, it is easily shown that Assumption 2.4 (iii) is equivalent to the following conditions on the risk tolerance R(x):

a) R(0) = 0, and R(x) is strictly increasing on [0, ∞); and

b) |R^j(x) ∂_x^{j+1} R(x)| ≤ K, ∀0 ≤ j ≤ 4.

Proposition 2.8. The following classes of utility functions satisfy Assumption 2.4:

(i) Averages of powers: U(x) = ∫_E x^y ν(dy), where ν(dy) is a finite positive measure whose support E is compact, contained in [0, 1), and ν({0}) = 0. Two special cases are:

a) the power utility U(x) = (1/γ)x^γ, with γ ∈ (0, 1);

b) the mixture of power utilities U(x) = c₁x^{γ₁}/γ₁ + c₂x^{γ₂}/γ₂, with γ₁, γ₂ ∈ (0, 1) and c₁, c₂ > 0.

In both cases, ν(dy) is a counting measure of point(s) in [0, 1).

(ii) U(x) is given by the positive inverse of the marginal utility I(y) = U′^(−1)(y) : R⁺ → R⁺,

    I(y) = ∫_0^N y^{−s} ν(ds),            (2.23)

with ν finite and positive on a compact support (N < +∞). This is Example 18 in Källblad and Zariphopoulou [2014].

The proof of Proposition 2.8 is left to Appendix A.

Remark 2.9. In the first class of utilities, 1 ∉ E in general, unless further assumptions are prescribed on ν(dy). For instance, if ν(dy) = dy and E = [0, 1], then AE[U] = lim_{x→+∞} (ln(x) − 1)/ln(x) = 1, which does not satisfy (2.17).

Remark 2.10. For the power utility U(x) = x^γ/γ, the Arrow-Pratt risk aversion (2.22) is constant and the risk-tolerance function (2.5) is linear:

    AP[U] = −x U′′(x)/U′(x) = 1 − γ,  R(x) = x/(1 − γ).

By contrast, general utilities, such as the mixture of two powers

    U^Mix(x) = c₁ x^{γ₁}/γ₁ + c₂ x^{γ₂}/γ₂,  0 < γ₁ ≤ γ₂ < 1,

produce nonlinear risk-aversion functions,

    AP[U^Mix] = (c₁(1 − γ₁)x^{γ₁−γ₂} + c₂(1 − γ₂)) / (c₁x^{γ₁−γ₂} + c₂),            (2.24)

as well as nonlinear risk tolerances,

    R(x) = x (c₁x^{γ₁−γ₂} + c₂) / (c₁(1 − γ₁)x^{γ₁−γ₂} + c₂(1 − γ₂))
         ∼ x/(1 − γ₂) as x → ∞,  and  ∼ x/(1 − γ₁) as x → 0.            (2.25)

This is illustrated in Figure 1. Therefore, working with general utilities enables us to model a nonlinear relation between the relative risk aversion and the wealth (middle plot of Figure 1), and makes our model closer to results from empirical studies on how AP[U] varies with wealth.

[Figure 1: Mixture of power utilities with γ₁ = 0.25, γ₂ = 0.75 and c₁ = c₂ = 1/2. Panels: utilities, Arrow-Pratt risk aversion, and risk tolerance, each plotted against wealth x.]

2.4 Assumptions on the state processes (X_t^{π^(0)}, S_t, Z_t)

Note that z is only a parameter in the function v^(0)(t, x, z) given by (2.11), and for fixed z and t, v^(0) is a concave function that has a linear upper bound. For t = 0, there exists a function G(z) such that

    v^(0)(0, x, z) ≤ G(z) + x,  ∀(x, z) ∈ R⁺ × R.

Assumption 2.11. We make the following assumptions on the state processes (X_t^{π^(0)}, S_t, Z_t):

(i) For any starting points (s, z) and fixed δ, the system of stochastic differential equations (1.1)-(1.2) has a unique strong solution (S_t, Z_t). Moreover, λ(z) is a C³(R) function, g(z) is a C²(R) function, and the coefficients g(z), c(z), λ(z) as well as their derivatives g′(z), g′′(z), λ′(z), λ′′(z) and λ′′′(z) are at most polynomially growing.

(ii) The process Z^(1) with infinitesimal generator M defined in (1.3) admits moments of any order uniformly in t ≤ T:

    sup_{t≤T} E[|Z_t^(1)|^k] ≤ C(T, k).            (2.26)

(iii) The process G(Z·) is in L²([0, T] × Ω) uniformly in δ, i.e.,

    E_{(0,z)}[ ∫_0^T G²(Z_s) ds ] ≤ C₁(T, z),            (2.27)

where C₁(T, z) is independent of δ and Z_s follows (1.2) with Z₀ = z.

(iv) The wealth process X·^{π^(0)} is in L²([0, T] × Ω) uniformly in δ, i.e.,

    E_{(0,x,z)}[ ∫_0^T (X_s^{π^(0)})² ds ] ≤ C₂(T, x, z),            (2.28)

where C₂(T, x, z) is independent of δ and X_s^{π^(0)} follows

    dX_s^{π^(0)} = π^(0)(s, X_s^{π^(0)}, Z_s)μ(Z_s) ds + π^(0)(s, X_s^{π^(0)}, Z_s)σ(Z_s) dW_s,  s > 0;  X₀^{π^(0)} = x.            (2.29)

Remark 2.12. Note that in Assumption 2.11 (i), the term "polynomially growing" is interpreted in different ways depending on the domain of g(z), c(z) and λ(z). If a function h(z) : R → R, for instance when Z is an Ornstein-Uhlenbeck process, then polynomial growth means that there exist an integer k and a > 0 such that

    |h(z)| ≤ a(1 + |z|^k).

Otherwise, if h(z) : R⁺ → R, for example when Z is a Cox-Ingersoll-Ross process, then it means that there exist k ∈ N and a > 0 such that

    |h(z)| ≤ a(1 + z^k + z^{−k}).

In Assumption 2.11 (iii), if the diffusion process Z has exponential moments, then at most exponential growth of G(z) ensures (2.27). An explicit example will be given in Section 5.

Now we are ready to present the following estimate, which will be used in the rest of the paper.

Lemma 2.13. Under Assumption 2.11 (iii)-(iv), the process v^(0)(·, X·^{π^(0)}, Z·) is in L²([0, T] × Ω) uniformly in δ, i.e., ∀(t, x, z) ∈ [0, T] × R⁺ × R:

    E_{(t,x,z)}[ ∫_t^T (v^(0)(s, X_s^{π^(0)}, Z_s))² ds ] ≤ C₃(T, x, z),            (2.30)

where v^(0)(t, x, z) is defined in Section 2.2 and satisfies equation (2.11).

Proof. It follows by the straightforward computation

    E_{(t,x,z)}[ ∫_t^T (v^(0)(s, X_s^{π^(0)}, Z_s))² ds ]
      ≤ E_{(t,x,z)}[ ∫_t^T (v^(0)(0, X_s^{π^(0)}, Z_s))² ds ]
      ≤ E_{(t,x,z)}[ ∫_t^T (G(Z_s) + X_s^{π^(0)})² ds ]
      ≤ 2 E_{(t,x,z)}[ ∫_t^T G²(Z_s) ds ] + 2 E_{(t,x,z)}[ ∫_t^T (X_s^{π^(0)})² ds ]
      ≤ 2 E_{(0,z)}[ ∫_0^T G²(Z_s) ds ] + 2 E_{(0,x,z)}[ ∫_0^T (X_s^{π^(0)})² ds ]
      ≤ 2 (C₁(T, z) + C₂(T, x, z)) =: C₃(T, x, z),
where we have successively used the monotonicity (decreasing property) of v^(0) in t, the concavity of v^(0) in x, and Assumptions 2.11 (iii)-(iv).

3 First-order approximation of the value function V^{π^(0),δ}

In this section, we take the strategy π^(0) = −(λ(z)/σ(z)) v_x^(0)/v_xx^(0) as given, and analyze the value function associated with it by perturbation methods. Assume π^(0) is admissible, and recall the dynamics of the wealth process associated with the strategy π^(0) and of the slow factor Z_t:

    dX_t^{π^(0)} = π^(0)(t, X_t^{π^(0)}, Z_t)μ(Z_t) dt + π^(0)(t, X_t^{π^(0)}, Z_t)σ(Z_t) dW_t,
    dZ_t = δc(Z_t) dt + √δ g(Z_t) dW_t^Z.

Then one defines the value function as the expected utility of terminal wealth:

    V^{π^(0),δ}(t, x, z) = E[ U(X_T^{π^(0)}) | X_t^{π^(0)} = x, Z_t = z ],            (3.1)

where U(·) is a general utility function satisfying Assumption 2.4. Our main result in this section is:

Theorem 3.1. Under Assumptions 2.4 and 2.11, the residual function E(t, x, z) defined by

    E(t, x, z) := V^{π^(0),δ}(t, x, z) − v^(0)(t, x, z) − √δ v^(1)(t, x, z)

is of order δ. In other words, ∀(t, x, z) ∈ [0, T] × R⁺ × R, there exists a constant C such that |E(t, x, z)| ≤ Cδ, where C may depend on (t, x, z) but not on δ.

We recall that a function f^δ(t, x, z) is of order δ^k, denoted by f^δ(t, x, z) ∼ O(δ^k), if ∀(t, x, z) ∈ [0, T] × R⁺ × R there exists C such that |f^δ(t, x, z)| ≤ Cδ^k, where C may depend on (t, x, z) but not on δ. Similarly, we denote f^δ(t, x, z) ∼ o(δ^k) if lim sup_{δ→0} |f^δ(t, x, z)|/δ^k = 0.

3.1 Estimates of the risk-tolerance function R(t, x; λ(z)) and the leading-order term v^(0)

In this subsection, we derive several properties of the risk-tolerance function R(t, x; λ(z)) which will be needed in the proof of Theorem 3.1. Some of the proofs involve lengthy calculations, which we put in the appendix.

Proposition 3.2. Let I : R⁺ → R⁺ be the inverse of the marginal utility, and assume it satisfies the growth condition in Assumption 2.4 (iv).
Also, define H : R × [0, T] × R → R⁺ by

    M_x(t, H(x, t, λ(z)); λ(z)) = exp{ −x − (1/2)λ²(z)(T − t) },            (3.2)

where M(t, x; λ(z)) is the Merton value function. Then:

(i) For each λ(z), H(x, t, λ(z)) is the unique solution to the heat equation

    H_t + (1/2)λ²(z)H_xx = 0,            (3.3)

with the terminal condition H(x, T, λ(z)) = I(e^{−x}).

(ii) Moreover, for each t ∈ [0, T] and λ(z) ∈ R, H(x, t, λ(z)) is strictly increasing and of full range:

    lim_{x→−∞} H(x, t, λ(z)) = 0  and  lim_{x→∞} H(x, t, λ(z)) = ∞.            (3.4)

(iii) Define the inverse function H^(−1)(y, t, λ(z)) : R⁺ × [0, T] × R → R by H(H^(−1)(y, t, λ(z)), t, λ(z)) = y. Then, for (t, x, z) ∈ [0, T] × R⁺ × R, the risk-tolerance function R(t, x; λ(z)) is given by

    R(t, x; λ(z)) = H_x( H^(−1)(x, t, λ(z)), t, λ(z) ).            (3.5)

Proof. Similar results under constant λ with multiple assets are presented in [Källblad and Zariphopoulou, 2014, Propositions 4 and 6], and here we generalize the statement to our case. Direct computation shows that H(x, t, λ(z)) satisfies (3.3). The existence and uniqueness of the solution to (3.3) follow from the nonnegativity of H(x, t, λ(z)). The function H is strictly increasing in x since H_x(x, t, λ(z)) solves the same heat equation (3.3) with the positive terminal condition H_x(x, T, λ(z)) = −e^{−x} I′(e^{−x}) > 0; the standard comparison principle then gives the positivity of H_x(x, t, λ(z)) for earlier times t < T. Equation (3.4) follows by using the definition of H(x, t, λ(z)) in (3.2) and the fact that the value function M(t, x; λ(z)) satisfies the Inada conditions for all t < T (see Karatzas et al. [1987] for more details). Therefore the inverse function H^(−1)(y, t, λ(z)) is well defined for any y ≥ 0, and (3.5) is implied by (3.2) and (2.5).

Proposition 3.3.
Suppose the risk tolerance R(x) = −U′(x)/U′′(x) is strictly increasing for all x in [0, ∞) (this is part of Assumption 2.4 (iii)). Then, for each t ∈ [0, T) and λ(z) ∈ R, the risk-tolerance function R(t, x; λ(z)) is strictly increasing in the wealth variable x.

Proof. The proof follows the idea of the constant-λ case presented in [Källblad and Zariphopoulou, 2014, Proposition 9], which we generalize to the case of variable λ(z) here. Using the relation (3.5), when t = T we deduce

    R′(x) = [ H_xx(y, T, λ(z)) / H_x(y, T, λ(z)) ] evaluated at y = H^(−1)(x, T, λ(z)).

Since R(x) and H(x, T, λ(z)) are strictly increasing, we conclude that H(x, T, λ(z)) is strictly convex, namely H_xx(x, T, λ(z)) > 0. A standard comparison argument gives the positivity of H_xx(x, t, λ(z)) for t < T. Therefore, ∀t ∈ [0, T], R_x(t, x; λ(z)) satisfies

    R_x(t, x; λ(z)) = [ H_xx(y, t, λ(z)) / H_x(y, t, λ(z)) ] evaluated at y = H^(−1)(x, t, λ(z)),  which is > 0,

and R(t, x; λ(z)) is strictly increasing in x.

Proposition 3.4. Under Assumption 2.4, the risk-tolerance function R(t, x; λ(z)) satisfies: ∀0 ≤ j ≤ 4, ∃K_j > 0 such that ∀(t, x, z) ∈ [0, T) × R⁺ × R,

    |R^j(t, x; λ(z)) ∂_x^{j+1} R(t, x; λ(z))| ≤ K_j.            (3.6)

Equivalently, ∀1 ≤ j ≤ 5 there exists K̃_j > 0 such that ∀(t, x, z) ∈ [0, T) × R⁺ × R,

    |∂_x^j R^j(t, x; λ(z))| ≤ K̃_j.

Moreover, for (t, x, z) ∈ [0, T) × R⁺ × R,

    R(t, x; λ(z)) ≤ K₀x.            (3.7)

Proof. The proof of Proposition 3.4 is left to Appendix B.

Proposition 3.5. The risk-tolerance function R(t, x; λ(z)) satisfies the relation

    R_λ = (T − t)λ(z)R² R_xx.            (3.8)

Proof. Differentiating (2.15) with respect to x gives

    v_xz^(0) = (T − t)λλ′ (R_x v_x^(0) + R v_xx^(0)),
    v_xxz^(0) = (T − t)λλ′ (R_xx v_x^(0) + 2R_x v_xx^(0) + R v_xxx^(0)).

The definition (2.5) of R(t, x; λ(z)) and equation (2.12) imply

    R_x = −1 + v_x^(0) v_xxx^(0) / (v_xx^(0))².

Differentiating (2.5) with respect to z and using the above three equations produces

    R_z = ( −v_xx^(0) v_xz^(0) + v_x^(0) v_xxz^(0) ) / (v_xx^(0))²
        = (T − t)λλ′ [ R_x R − R + R² R_xx − 2R_x R + R(R_x + 1) ]
        = (T − t)λλ′ R² R_xx.

Then the chain-rule relation R_z = R_λ λ′(z) implies the conclusion.

Proposition 3.6. Under Assumption 2.4 (iii) and Assumption 2.11, there exist functions d_{ij}(z) and d̃_{ij}(z), at most polynomially growing, such that the following inequalities are satisfied:

    |v_z^(0)(t, x, z)| ≤ d₀₁(z)v^(0)(t, x, z),    |v_zz^(0)(t, x, z)| ≤ d₀₂(z)v^(0)(t, x, z),
    |v_xz^(0)(t, x, z)| ≤ d₁₁(z)v_x^(0)(t, x, z),    |v_xzz^(0)(t, x, z)| ≤ d₁₂(z)v_x^(0)(t, x, z),
    |v_xzzz^(0)(t, x, z)| ≤ d₁₃(z)v_x^(0)(t, x, z),
    |v_xxz^(0)(t, x, z)| ≤ d₂₁(z)|v_xx^(0)(t, x, z)|,    |v_xxzz^(0)(t, x, z)| ≤ d₂₂(z)|v_xx^(0)(t, x, z)|,
    |R_z(t, x; λ(z))| ≤ d̃₀₁(z)R(t, x; λ(z)),    |R_zz(t, x; λ(z))| ≤ d̃₀₂(z)R(t, x; λ(z)),
    |R_xz(t, x; λ(z))| ≤ d̃₁₁(z).

Proof. The proof of Proposition 3.6 is given in Appendix C, where we use Propositions 3.4 and 3.5.

3.2 Proof of Theorem 3.1

The value function V^{π^(0),δ} defined in (3.1) satisfies the following PDE:

    V_t^{π^(0),δ} + δMV^{π^(0),δ} + (1/2)σ²(z)(π^(0))² V_xx^{π^(0),δ} + π^(0)μ(z)V_x^{π^(0),δ} + √δ ρg(z)σ(z)π^(0) V_xz^{π^(0),δ} = 0,            (3.9)
    V^{π^(0),δ}(T, x, z) = U(x).

In the regime of small δ, this is a regular perturbation problem, and the natural expansion takes the form

    V^{π^(0),δ} = v^{π^(0),(0)} + √δ v^{π^(0),(1)} + δ v^{π^(0),(2)} + · · · ,

where v^{π^(0),(0)} is the leading-order term and v^{π^(0),(1)} is the first-order √δ correction of V^{π^(0),δ}. Collecting terms of O(1) in (3.9) gives the linear parabolic equation satisfied by v^{π^(0),(0)}:

    v_t^{π^(0),(0)} + (1/2)σ²(z)(π^(0))² v_xx^{π^(0),(0)} + π^(0)μ(z)v_x^{π^(0),(0)} = 0,            (3.10)
    v^{π^(0),(0)}(T, x, z) = U(x).

Using the linear operator L_{t,x} defined in (2.7) and π^(0) defined in (1.10), (3.10) becomes

    L_{t,x}(λ(z))v^{π^(0),(0)} = 0,  v^{π^(0),(0)}(T, x, z) = U(x).

By (2.8) and (2.12), we know that v^(0) is a solution to (3.10), and the uniqueness result obtained in Proposition 2.3 implies v^{π^(0),(0)} ≡ v^(0). Next, collecting terms of O(√δ) yields

    v_t^{π^(0),(1)} + (1/2)σ²(z)(π^(0))² v_xx^{π^(0),(1)} + π^(0)μ(z)v_x^{π^(0),(1)} + ρπ^(0)g(z)σ(z)v_xz^{π^(0),(0)} = 0,            (3.11)
    v^{π^(0),(1)}(T, x, z) = 0.

Replacing v^{π^(0),(0)} by v^(0) in the above equation gives the same equation (2.13) as for v^(1). As mentioned in Section 2.2, (2.13) has a unique solution v^(1); thus v^{π^(0),(1)} ≡ v^(1). Therefore, the heuristic expansion of V^{π^(0),δ} up to first order is identified as

    V^{π^(0),δ} = v^(0) + √δ v^(1) + · · · .

Recall the residual function E(t, x, z) introduced in Theorem 3.1,

    E = V^{π^(0),δ} − v^(0) − √δ v^(1).

Subtracting (3.10) and (3.11) from (3.9) and using the relations v^{π^(0),(0)} ≡ v^(0) and v^{π^(0),(1)} ≡ v^(1), one has

    E_t + (1/2)σ(z)²(π^(0))² E_xx + π^(0)μ(z)E_x + δME + √δ ρσ(z)g(z)π^(0) E_xz
      + δM(v^(0) + √δ v^(1)) + δρσ(z)g(z)π^(0) v_xz^(1) = 0,            (3.12)
    E(T, x, z) = 0.

The Feynman-Kac formula gives the following probabilistic representation of E(t, x, z):

    E(t, x, z) = δ E_{(t,x,z)}[ ∫_t^T ( Mv^(0)(s, X_s^{π^(0)}, Z_s) + √δ Mv^(1)(s, X_s^{π^(0)}, Z_s) + ρσ(Z_s)g(Z_s)π^(0) v_xz^(1)(s, X_s^{π^(0)}, Z_s) ) ds ]
               := δ I + δ^{3/2} II + δρ III,

where E_{(t,x,z)}[·] = E[· | X_t = x, Z_t = z] and

    I := E_{(t,x,z)}[ ∫_t^T ( c(Z_s)v_z^(0)(s, X_s^{π^(0)}, Z_s) + (1/2)g²(Z_s)v_zz^(0)(s, X_s^{π^(0)}, Z_s) ) ds ],            (3.13)
    II := E_{(t,x,z)}[ ∫_t^T ( c(Z_s)v_z^(1)(s, X_s^{π^(0)}, Z_s) + (1/2)g²(Z_s)v_zz^(1)(s, X_s^{π^(0)}, Z_s) ) ds ],            (3.14)
    III := E_{(t,x,z)}[ ∫_t^T λ(Z_s)g(Z_s)R(s, X_s^{π^(0)}; λ(Z_s))v_xz^(1)(s, X_s^{π^(0)}, Z_s) ds ].            (3.15)

In order to show that E is of order δ, it suffices to show that I, II and III are uniformly bounded in δ. The derivation of these bounds is given in Appendix D.
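To make the objects of Section 3.1 concrete: for power utility U(x) = x^γ/γ one has I(y) = y^{−1/(1−γ)}, and the function H of Proposition 3.2 admits a closed form that can be checked against the heat equation (3.3) numerically. The explicit formula below is worked out for this sketch (under the power-utility assumption), not quoted from the paper.

```python
import numpy as np

# For U(x) = x**gamma/gamma, I(y) = y**(-1/(1-gamma)), and one can verify that
#   H(x, t) = exp( x/(1-gamma) + lam**2*(T-t)/(2*(1-gamma)**2) )
# solves H_t + (lam**2/2) H_xx = 0 with H(x, T) = I(exp(-x)) = exp(x/(1-gamma)).
def H(x, t, lam=0.4, gamma=0.5, T=1.0):
    return np.exp(x / (1 - gamma) + lam**2 * (T - t) / (2 * (1 - gamma)**2))

def heat_residual(x=0.3, t=0.5, lam=0.4, gamma=0.5, T=1.0, h=1e-4):
    # Finite-difference check of the heat equation (3.3).
    H_t = (H(x, t + h, lam, gamma, T) - H(x, t - h, lam, gamma, T)) / (2 * h)
    H_xx = (H(x + h, t, lam, gamma, T) - 2 * H(x, t, lam, gamma, T)
            + H(x - h, t, lam, gamma, T)) / h**2
    return H_t + 0.5 * lam**2 * H_xx
```

Consistent with (3.5), this H gives back the linear risk tolerance R(t, x; λ) = x/(1 − γ) of the power-utility case, since H_x = H/(1 − γ).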
4 Asymptotic optimality of π^(0)

For a fixed choice of (π̃⁰, π̃¹, α > 0), we introduce the family of admissible trading strategies A₀(t, x, z)[π̃⁰, π̃¹, α] defined by

    A₀(t, x, z)[π̃⁰, π̃¹, α] = { π̃⁰ + δ^α π̃¹ }_{0≤δ≤1}.            (4.1)

Further conditions will be given in Assumption 4.1. The goal of this section is to show that the strategy π^(0) defined in (1.10) asymptotically outperforms every family A₀(t, x, z)[π̃⁰, π̃¹, α], as precisely stated in our main Theorem 4.5 in Section 4.3. Denote by Ṽ^δ the value function associated with the trading strategy π := π̃⁰ + δ^α π̃¹ ∈ A₀(t, x, z)[π̃⁰, π̃¹, α]:

    Ṽ^δ = E[ U(X_T^π) | X_t^π = x, Z_t = z ],            (4.2)

where X_t^π is the wealth process following the strategy π, and Z_t is slowly varying with the same δ:

    dX_t^π = π(t, X_t^π, Z_t)μ(Z_t) dt + π(t, X_t^π, Z_t)σ(Z_t) dW_t,            (4.3)
    dZ_t = δc(Z_t) dt + √δ g(Z_t) dW_t^Z.            (4.4)

We need to compare Ṽ^δ with V^{π^(0),δ} defined in (3.1), for which we have established the first-order approximation v^(0) + √δ v^(1) in Theorem 3.1. This comparison is asymptotic in δ up to order √δ, and our first step is to obtain the corresponding approximation for Ṽ^δ. This is done heuristically in Section 4.1 in the two cases π̃⁰ ≡ π^(0) and π̃⁰ ≢ π^(0), and depending on the value of the parameter α. The proof of accuracy is given in Section 4.2. The asymptotic optimality of π^(0) is obtained in Section 4.3.

Assumption 4.1. For a fixed choice of (π̃⁰, π̃¹, α > 0), we require:

(i) The whole family (in δ) of strategies {π̃⁰ + δ^α π̃¹} is contained in A(t, x, z);

(ii) The functions π̃⁰(t, x, z) and π̃¹(t, x, z) are continuous on [0, T] × R⁺ × R;

(iii) Let (X̃_s^{t,x})_{t≤s≤T} be the solution to

    dX̃_s = μ(z)π̃⁰(s, X̃_s, z) ds + σ(z)π̃⁰(s, X̃_s, z) dW_s,            (4.5)

starting at x at time t. By (i), X̃_s^{t,x} is nonnegative, and we further assume that it has full support R⁺ for any t < s ≤ T.

Remark 4.2.
Notice that π^(0) defined in (1.10) is continuous on [0, T] × R⁺ × R; thus it is natural to require that π̃⁰ and π̃¹ have the same regularity as π^(0), which is (ii). Regarding (iii): from Section 2, π^(0) is the optimal trading strategy for the Merton problem when δ = 0, in which case Z_t is frozen at its initial position z. The associated wealth process X̂_s^{t,x} starting at x at time t is the solution to

    dX̂_s = μ(z)π^(0)(s, X̂_s, z) ds + σ(z)π^(0)(s, X̂_s, z) dW_s,  X̂_t = x.

Then, from [Källblad and Zariphopoulou, 2014, Proposition 7], one has

    X̂_s^{t,x} = H( H^(−1)(x, t, λ(z)) + λ²(z)(s − t) + λ(z)(W_s − W_t), s, λ(z) ),

where H : R × [0, T] × R → R⁺ is defined in Proposition 3.2 and is of full range. Consequently, X̂_s^{t,x} has full support R⁺, and thus it is natural to require that X̃_s^{t,x} also has full support R⁺, which is (iii).

Remark 4.3. We have A₀(t, x, z)[π̃⁰, π̃¹, 0] = A₀(t, x, z)[π̃⁰ + π̃¹, 0, α], so that it is enough to consider α > 0.

4.1 Heuristic expansion of the value function Ṽ^δ

We look for an expansion of the value function Ṽ^δ defined in (4.2) of the form

    Ṽ^δ = ṽ^(0) + δ^α ṽ^α + δ^{2α} ṽ^{2α} + · · · + δ^{nα} ṽ^{nα} + √δ ṽ^(1) + · · · ,            (4.6)

where n is the largest integer such that nα < 1/2. Note that in the case α > 1/2, n is simply zero. In the derivation, we are interested in identifying the zeroth-order term ṽ^(0) and the first nonzero term up to order √δ. The term following ṽ^(0) will depend on the value of α. Denote by L the infinitesimal generator of the state processes (X_t^π, Z_t) given by (4.3)-(4.4):

    L := δM + (1/2)σ²(z)(π̃⁰ + δ^α π̃¹)² ∂_xx + (π̃⁰ + δ^α π̃¹)μ(z)∂_x + √δ ρg(z)σ(z)(π̃⁰ + δ^α π̃¹)∂_xz;

then the value function Ṽ^δ defined in (4.2) satisfies

    ∂_t Ṽ^δ + LṼ^δ = 0,  Ṽ^δ(T, x, z) = U(x).            (4.7)

Collecting terms of order one yields the equation satisfied by ṽ^(0):

    ṽ_t^(0) + (1/2)σ²(z)(π̃⁰)² ṽ_xx^(0) + μ(z)π̃⁰ ṽ_x^(0) = 0,            (4.8)
    ṽ^(0)(T, x, z) = U(x).

The order of approximation will depend on whether π̃⁰ is identical to π^(0) or not.

4.1.1 Case π̃⁰ ≡ π^(0)

In this case, from the definition (1.10) of π^(0), equation (4.8) becomes (2.9), which is also satisfied by v^(0) by (2.12). By Proposition 2.3, we deduce ṽ^(0) ≡ v^(0). To identify the term of next order, one needs to discuss the cases one by one:

(i) α = 1/2. The next-order term is ṽ^(1) and it satisfies

    ṽ_t^(1) + (1/2)σ²(π^(0))² ṽ_xx^(1) + π^(0)μ(z)ṽ_x^(1) + π^(0)ρg(z)σ(z)ṽ_xz^(0) + π̃¹( σ²(z)π^(0) ṽ_xx^(0) + μ(z)ṽ_x^(0) ) = 0,
    ṽ^(1)(T, x, z) = 0.

It reduces to equation (2.13) since we have the relations ṽ^(0) = v^(0) and

    σ²(z)π^(0) ṽ_xx^(0) = −μ(z)ṽ_x^(0),            (4.9)

from the definition (1.10) of π^(0). From Section 2.2 item (ii), v^(1) is the unique solution to (2.13), and therefore we obtain ṽ^(1) ≡ v^(1).

(ii) α > 1/2. The next order is O(δ^{1/2}). By collecting all terms of order δ^{1/2}, we again obtain that ṽ^(1) satisfies (2.13), and ṽ^(1) ≡ v^(1).

(iii) α < 1/2. The next-order correction is O(δ^α). Collecting all terms of order δ^α in (4.7) yields

    ṽ_t^α + (1/2)σ²(z)(π^(0))² ṽ_xx^α + π^(0)μ(z)ṽ_x^α + π̃¹( σ²(z)π^(0) ṽ_xx^(0) + μ(z)ṽ_x^(0) ) = 0,            (4.10)
    ṽ^α(T, x, z) = 0.

The last two terms cancel via the relation (4.9), and (4.10) becomes (2.14), which has only the trivial solution, namely ṽ^α ≡ 0. Therefore, we need to identify the next non-vanishing term.

• 1/4 < α < 1/2. The next order is O(δ^{1/2}), and ṽ^(1) satisfies

    ṽ_t^(1) + (1/2)σ²(z)(π^(0))² ṽ_xx^(1) + π^(0)μ(z)ṽ_x^(1) + ρπ^(0)g(z)σ(z)ṽ_xz^(0) = 0,
    ṽ^(1)(T, x, z) = 0.

This coincides with (2.13), and we deduce ṽ^(1) = v^(1).

• α = 1/4. The next order is O(δ^{1/2}), and the PDE satisfied by ṽ^(1) becomes

    ṽ_t^(1) + (1/2)σ²(z)(π^(0))² ṽ_xx^(1) + π^(0)μ(z)ṽ_x^(1) + (1/2)σ²(z)(π̃¹)² ṽ_xx^(0) + π^(0)ρg(z)σ(z)ṽ_xz^(0) = 0,            (4.11)
    ṽ^(1)(T, x, z) = 0,

which will be used later when we compare v^(1) and ṽ^(1).

• 0 < α < 1/4.
The next order is of O(δ^{2α}) since 2α < 1/2, and

ṽ_t^{2α} + (1/2) σ²(z) (π^(0))² ṽ_xx^{2α} + π^(0) μ(z) ṽ_x^{2α} + (1/2) σ²(z) (π̃^1)² ṽ_xx^(0) = 0,  ṽ^{2α}(T,x,z) = 0.  (4.12)

The Feynman–Kac formula gives

ṽ^{2α}(t,x,z) = E[ ∫_t^T (1/2) σ²(z) (π̃^1)²(s, X̃_s, z) ṽ_xx^(0)(s, X̃_s, z) ds | X̃_t = x ],  (4.13)

with X̃_s following (4.5). Notice that, for fixed z, if the source term (1/2) σ²(z) (π̃^1)² ṽ_xx^(0) is identically zero after some time t_1, then ṽ^{2α}(t_1, x, z) is zero. Therefore, further analysis is needed in order to find the first non-zero term after ṽ^(0) at the point (t,x,z). Note that both σ(z) and (-ṽ_xx^(0)) are strictly positive (ṽ^(0) = v^(0) is strictly concave); hence π̃^1 is the problematic term. Accordingly, we define

t_1(z) = inf{ t ∈ [0,T] : π̃^1(u,x,z) = 0, ∀(u,x) ∈ [t,T] × R^+ },

where we use the convention inf ∅ = T. Based on t_1(z), the following two regions are defined:

K_1 = { (t,x,z) : 0 ≤ t < t_1(z), x ∈ R^+, z ∈ R },  (4.14)
C_1 = { (t,x,z) : t_1(z) ≤ t ≤ T, x ∈ R^+, z ∈ R },  (4.15)

which form a partition of [0,T] × R^+ × R.

– For any (t,x,z) ∈ K_1, since t < t_1(z), there exists a point (t′, x′, z) ∈ [t, t_1(z)) × R^+ × {z} such that π̃^1(t′,x′,z) ≠ 0. By continuity of π̃^1, there exist η > 0 and a set A := [t′, t′+ε] × [x′, x′+ε] with 0 < ε < t_1(z) - t′ such that |π̃^1| ≥ η on A × {z}. By (4.13), and denoting by μ_s the distribution of X̃_s^{t,x}, we deduce that

ṽ^{2α}(t,x,z) ≤ (1/2) σ²(z) ∫_{x′}^{x′+ε} ∫_{t′}^{t′+ε} (π̃^1)²(s,y,z) ṽ_xx^(0)(s,y,z) ds μ_s(dy)
≤ -(1/2) σ²(z) η² ∫_{x′}^{x′+ε} ∫_{t′}^{t′+ε} [-ṽ_xx^(0)(s,y,z)] ds μ_s(dy)
≤ -(1/2) σ²(z) η² inf_A [-ṽ_xx^(0)(s,y,z)] ∫_{t′}^{t′+ε} ( ∫_{x′}^{x′+ε} μ_s(dy) ) ds < 0.  (4.16)

The conclusion ṽ^{2α}(t,x,z) < 0 follows from ṽ^(0) ≡ v^(0), strict concavity and continuity of v^(0), and the full-support assumption on the distribution μ_s of X̃_s^{t,x}.

– For any (t,x,z) ∈ C_1, equation (4.12) becomes (2.14) (since π̃^1 ≡ 0 in C_1), and consequently ṽ^{2α}(t,x,z) ≡ 0. Therefore, we need to analyze the next order term. Recall that n is the largest integer such that nα < 1/2, and we are in the case 0 < α < 1/4.

∗ If n = 2, collecting terms of order δ^{1/2} and using the facts that ṽ^{2α} ≡ 0 in C_1 and ṽ^α ≡ 0 yields (2.13) for ṽ^(1), and therefore ṽ^(1) = v^(1).

∗ For n ≥ 3, namely when the next order is δ^{3α} and α < 1/6, ṽ^{3α} satisfies

ṽ_t^{3α} + (1/2) σ²(z) (π^(0))² ṽ_xx^{3α} + π^(0) μ(z) ṽ_x^{3α} + σ²(z) π^(0) π̃^1 ṽ_xx^{2α} + μ(z) π̃^1 ṽ_x^{2α} = 0,  ṽ^{3α}(T,x,z) = 0.  (4.17)

Notice that in the above PDE, z is simply a parameter. For fixed z, on the region [t_1(z), T] × R^+, ṽ^{2α}(t,x,z) ≡ 0 and the above equation reduces to (2.14) again. Therefore, ṽ^{3α}(t,x,z) ≡ 0 in the region C_1. Repeating this argument until ṽ^{nα}, we obtain

ṽ^{iα}(t,x,z) ≡ 0,  2 ≤ i ≤ n,  ∀(t,x,z) ∈ C_1,

and, as in the case n = 2, we conclude ṽ^(1) = v^(1).

We summarize the above discussion in the following table:

Table 1: Expansion of Ṽ^δ when π̃^0 ≡ π^(0).

  Value of α      | Expansion                            | Remark
  α = 1/2         | v^(0) + √δ v^(1)                     |
  α > 1/2         | v^(0) + √δ v^(1)                     |
  1/4 < α < 1/2   | v^(0) + √δ v^(1)                     |
  α = 1/4         | v^(0) + √δ ṽ^(1)                     | ṽ^(1) satisfies equation (4.11)
  0 < α < 1/4     | Region K_1: v^(0) + δ^{2α} ṽ^{2α}    | ṽ^{2α} satisfies (4.12) and (4.16)
                  | Region C_1: v^(0) + √δ v^(1)         |

4.1.2 Case π̃^0 ≢ π^(0)

Recall that the leading order term ṽ^(0) satisfies (4.8):

ṽ_t^(0) + (1/2) σ²(z) (π̃^0)² ṽ_xx^(0) + π̃^0 μ(z) ṽ_x^(0) = 0,  ṽ^(0)(T,x,z) = U(x).

For z ∈ R, we introduce

t_0(z) = inf{ t ≥ 0 : π̃^0(u,x,z) ≡ π^(0)(u,x,z), ∀(u,x) ∈ [t,T] × R^+ },  inf ∅ = T.

Define the regions:

K = { (t,x,z) : 0 ≤ t < t_0(z), x ∈ R^+, z ∈ R },  (4.18)
C = { (t,x,z) : t_0(z) ≤ t ≤ T, x ∈ R^+, z ∈ R }.  (4.19)

We claim that in the region K, π̃^0 and π^(0) differ, while in the region C, ṽ^(0) ≡ v^(0) and we need to identify the next non-vanishing term.
In order to compare v^(0) and ṽ^(0), we rewrite the equation (2.11) satisfied by v^(0) as

v_t^(0) + (1/2) σ²(z) (π̃^0)² v_xx^(0) + π̃^0 μ(z) v_x^(0) - (1/2) σ²(z) (π̃^0 - π^(0))² v_xx^(0) = 0,

where we have used the relation -σ²(z) π^(0) v_xx^(0) = μ(z) v_x^(0). Now, let f(t,x,z) be the difference of the two leading order terms: f(t,x,z) = v^(0)(t,x,z) - ṽ^(0)(t,x,z). It satisfies

f_t + (1/2) σ²(z) (π̃^0)² f_xx + π̃^0 μ(z) f_x - (1/2) σ²(z) (π̃^0 - π^(0))² v_xx^(0) = 0,  f(T,x,z) = 0.

By the Feynman–Kac formula, one has

f(t,x,z) = -E[ ∫_t^T (1/2) σ²(z) (π̃^0 - π^(0))²(s, X̃_s, z) v_xx^(0)(s, X̃_s, z) ds | X̃_t = x ],  (4.20)

where X̃_s follows (4.5). Using the argument given in Section 4.1.1 for the case 0 < α < 1/4, we deduce that the right-hand side in (4.20) is strictly positive. Consequently f(t,x,z) > 0, and

ṽ^(0)(t,x,z) < v^(0)(t,x,z),  ∀(t,x,z) ∈ K.  (4.21)

Thus, in that case, the next term will not play a role when comparing Ṽ^δ and V^{π^(0),δ} = v^(0) + √δ v^(1) + O(δ). For any (t,x,z) ∈ C, since we have π̃^0 ≡ π^(0) on C, we can apply here the whole discussion of Section 4.1.1 (on the partition {C ∩ K_1, C ∩ C_1} in the case 0 < α < 1/4). The expansion results are summarized in the table:

Table 2: Expansion of Ṽ^δ when π̃^0 ≢ π^(0).

  Region    | Value of α      | Expansion                 | Remark
  K         | all             | ṽ^(0)                     | ṽ^(0) satisfies (4.8) and (4.21)
  C         | α = 1/2         | v^(0) + √δ v^(1)          |
  C         | α > 1/2         | v^(0) + √δ v^(1)          |
  C         | 1/4 < α < 1/2   | v^(0) + √δ v^(1)          |
  C         | α = 1/4         | v^(0) + √δ ṽ^(1)          | ṽ^(1) satisfies equation (4.11)
  C ∩ K_1   | 0 < α < 1/4     | v^(0) + δ^{2α} ṽ^{2α}     | ṽ^{2α} satisfies (4.12) and (4.16)
  C ∩ C_1   | 0 < α < 1/4     | v^(0) + √δ v^(1)          |

4.2 Accuracy of Approximations

In order to make the above expansions rigorous, we need the additional assumptions listed in Appendix E. They are technical integrability conditions, uniform in δ, on the strategies in the class A_0(t,x,z)[π̃^0, π̃^1, α] defined in (4.1) and on their associated wealth processes.

Proposition 4.4.
Under Assumptions 2.4 (i)-(ii), 4.1 and E.1, we obtain the accuracy results collected in Table 3, where the meaning of O is as in Theorem 3.1.

Table 3: Accuracy of approximations of Ṽ^δ.

  Case            | Region    | Value of α      | Approximation            | Accuracy
  π̃^0 ≡ π^(0)    |           | α = 1/2         | v^(0) + √δ v^(1)         | O(δ)
                  |           | α > 1/2         | v^(0) + √δ v^(1)         | O(δ)
                  |           | 1/4 < α < 1/2   | v^(0) + √δ v^(1)         | O(δ^{2α})
                  |           | α = 1/4         | v^(0) + √δ ṽ^(1)         | O(δ^{3/4})
                  | K_1       | 0 < α < 1/4     | v^(0) + δ^{2α} ṽ^{2α}    | O(δ^{3α∧(1/2)})
                  | C_1       | 0 < α < 1/4     | v^(0) + √δ v^(1)         | O(δ)
  π̃^0 ≢ π^(0)    | K         | all             | ṽ^(0)                    | O(δ^{α∧(1/2)})
                  | C         | α = 1/2         | v^(0) + √δ v^(1)         | O(δ)
                  | C         | α > 1/2         | v^(0) + √δ v^(1)         | O(δ)
                  | C         | 1/4 < α < 1/2   | v^(0) + √δ v^(1)         | O(δ^{2α})
                  | C         | α = 1/4         | v^(0) + √δ ṽ^(1)         | O(δ^{3/4})
                  | C ∩ K_1   | 0 < α < 1/4     | v^(0) + δ^{2α} ṽ^{2α}    | O(δ^{3α∧(1/2)})
                  | C ∩ C_1   | 0 < α < 1/4     | v^(0) + √δ v^(1)         | O(δ)

Proof. Recall that Ṽ^δ satisfies

Ṽ_t^δ + δMṼ^δ + (1/2) σ²(z) (π̃^0 + δ^α π̃^1)² Ṽ_xx^δ + (π̃^0 + δ^α π̃^1) μ(z) Ṽ_x^δ + √δ ρ g(z) σ(z) (π̃^0 + δ^α π̃^1) Ṽ_xz^δ = 0,  Ṽ^δ(T,x,z) = U(x).

The proofs of accuracy for the approximations given in Tables 1 and 2 are quite standard, and we sketch them following the order of Table 3. In each case, E denotes the difference between Ṽ^δ and its approximation. We start with the case π̃^0 ≡ π^(0).

(i) α = 1/2. Subtracting equations (2.11) and (2.13) from (4.7), we obtain the PDE satisfied by E(t,x,z):

E_t + LE + δM(v^(0) + √δ v^(1)) + (δ/2) σ²(z) (π̃^1)² ( v_xx^(0) + √δ v_xx^(1) ) + δ σ²(z) π^(0) π̃^1 v_xx^(1) + δ μ(z) π̃^1 v_x^(1) + δ ρ g(z) σ(z) ( π^(0) v_xz^(1) + π̃^1 v_xz^(0) + √δ π̃^1 v_xz^(1) ) = 0,  E(T,x,z) = 0.

Then, the Feynman–Kac formula produces

E(t,x,z) = δ E_{(t,x,z)}[ ∫_t^T ( M v^(0) + (1/2) σ²(Z_s) (π̃^1)² v_xx^(0) )(s, X_s^π, Z_s) ds ]
+ δ^{3/2} E_{(t,x,z)}[ ∫_t^T ( M v^(1) + (1/2) σ²(Z_s) (π̃^1)² v_xx^(1) )(s, X_s^π, Z_s) ds ]
+ δ E_{(t,x,z)}[ ∫_t^T ( σ²(Z_s) π^(0) π̃^1 v_xx^(1) + μ(Z_s) π̃^1 v_x^(1) )(s, X_s^π, Z_s) ds ]
+ δ ρ E_{(t,x,z)}[ ∫_t^T ( g(Z_s) σ(Z_s) π^(0) v_xz^(1) + g(Z_s) σ(Z_s) π̃^1 v_xz^(0) )(s, X_s^π, Z_s) ds ]
+ δ^{3/2} ρ E_{(t,x,z)}[ ∫_t^T g(Z_s) σ(Z_s) π̃^1 v_xz^(1)(s, X_s^π, Z_s) ds ].

Under Assumption E.1 (ia), one has E = O(δ).

(ii) α > 1/2.
Similarly, we have

E_t + LE + δM(v^(0) + √δ v^(1)) + (δ^{2α}/2) σ²(z) (π̃^1)² ( v_xx^(0) + √δ v_xx^(1) ) + δ^{1/2+α} σ²(z) π^(0) π̃^1 v_xx^(1) + δ^{1/2+α} μ(z) π̃^1 v_x^(1) + δ ρ g(z) σ(z) ( π^(0) v_xz^(1) + δ^{α-1/2} π̃^1 v_xz^(0) + δ^α π̃^1 v_xz^(1) ) = 0,  E(T,x,z) = 0.

By the Feynman–Kac formula and Assumption E.1 (ia), we deduce E = O(δ).

(iii) 1/4 < α < 1/2. We have

E_t + LE + δM(v^(0) + √δ v^(1)) + (δ^{2α}/2) σ²(z) (π̃^1)² ( v_xx^(0) + √δ v_xx^(1) ) + δ^{1/2+α} σ²(z) π^(0) π̃^1 v_xx^(1) + δ^{1/2+α} μ(z) π̃^1 v_x^(1) + δ^{1/2+α} ρ g(z) σ(z) ( δ^{1/2-α} π^(0) v_xz^(1) + π̃^1 v_xz^(0) + √δ π̃^1 v_xz^(1) ) = 0,  E(T,x,z) = 0,

and by Assumption E.1 (ia), we have E = O(δ^{2α}).

(iv) α = 1/4. Subtracting equations (2.11) and (4.11) from (4.7) yields

E_t + LE + δM(v^(0) + √δ ṽ^(1)) + (δ^{3/4}/2) σ²(z) ( 2π^(0) π̃^1 + δ^{1/4} (π̃^1)² ) ṽ_xx^(1) + δ^{3/4} μ(z) π̃^1 ṽ_x^(1) + δ^{3/4} ρ g(z) σ(z) ( π̃^1 v_xz^(0) + √δ π̃^1 ṽ_xz^(1) + δ^{1/4} π^(0) ṽ_xz^(1) ) = 0,  E(T,x,z) = 0,

and Assumption E.1 (ic) implies E = O(δ^{3/4}).

(v) 0 < α < 1/4. In the region K_1, subtracting (2.11) and (4.12) from (4.7) produces

E_t + LE + δM(v^(0) + δ^{2α} ṽ^{2α}) + (δ^{3α}/2) σ²(z) ( 2π^(0) π̃^1 + δ^α (π̃^1)² ) ṽ_xx^{2α} + δ^{3α} μ(z) π̃^1 ṽ_x^{2α} + √δ ρ g(z) σ(z) ( π^(0) + δ^α π̃^1 ) ( v_xz^(0) + δ^{2α} ṽ_xz^{2α} ) = 0,  E(T,x,z) = 0,

and by Assumption E.1 (ib), one concludes that E = O(δ^{3α∧(1/2)}).

In the complementary region C_1, E satisfies

E_t + LE + δM(v^(0) + √δ v^(1)) + (δ^{2α}/2) σ²(z) (π̃^1)² ( v_xx^(0) + √δ v_xx^(1) ) + δ^{1/2+α} σ²(z) π^(0) π̃^1 v_xx^(1) + δ^{1/2+α} μ(z) π̃^1 v_x^(1) + √δ ρ g(z) σ(z) ( √δ π^(0) v_xz^(1) + δ^α π̃^1 v_xz^(0) + δ^{α+1/2} π̃^1 v_xz^(1) ) = 0,  E(T,x,z) = 0.

Note that in the region C_1, π̃^1 ≡ 0, and the above equation reduces to

E_t + LE + δM(v^(0) + √δ v^(1)) + δ ρ g(z) σ(z) π^(0) v_xz^(1) = 0,  E(T,x,z) = 0,

and then Assumption E.1 (ib) implies E = O(δ).

Now, we turn to the case π̃^0 ≢ π^(0). In the region K, we know by (4.21) that ṽ^(0) < v^(0). Therefore, Ṽ^δ - v^(0) is asymptotically of order one and negative.
Thus, the next term will not play a role and we define E = Ṽ^δ - ṽ^(0). Subtracting equation (4.8) from (4.7) gives

E_t + LE + δMṽ^(0) + √δ ( π̃^0 + δ^α π̃^1 ) ρ g(z) σ(z) ṽ_xz^(0) + (1/2) σ²(z) δ^{2α} (π̃^1)² ṽ_xx^(0) + δ^α π̃^1 ( σ²(z) π̃^0 ṽ_xx^(0) + μ(z) ṽ_x^(0) ) = 0,  E(T,x,z) = 0.

By Assumption E.1 (ii), we conclude that E = O(δ^{α∧(1/2)}).

Remark that π̃^0 ≡ π^(0) in the region C. Therefore, the whole analysis of the case π̃^0 ≡ π^(0) applies here, except that in the case 0 < α < 1/4 the accuracy results hold on the partition {C ∩ K_1, C ∩ C_1} of C. This completes the proof.

4.3 Asymptotic Optimality

Our main result in this section is the following:

Theorem 4.5. For fixed (t,x,z) and any family of trading strategies A_0(t,x,z)[π̃^0, π̃^1, α], we have

lim_{δ→0} ( Ṽ^δ(t,x,z) - V^{π^(0),δ}(t,x,z) ) / √δ ≤ 0.  (4.22)

That is, the strategy π^(0), which generates V^{π^(0),δ}, performs asymptotically better, up to order √δ, than the family π̃^0 + δ^α π̃^1 which generates Ṽ^δ. Additionally, if π̃^0 ≢ π^(0), then in the region K defined by (4.18) the strategy π^(0) performs asymptotically better at order one:

lim_{δ→0} Ṽ^δ(t,x,z) = ṽ^(0)(t,x,z) < v^(0)(t,x,z) = lim_{δ→0} V^{π^(0),δ}(t,x,z).  (4.23)

Proof. To compare the asymptotic performance of π^(0) with the family of trading strategies A_0(t,x,z)[π̃^0, π̃^1, α], we essentially compare the approximations of Ṽ^δ summarized in Table 3 with the first order approximation v^(0) + √δ v^(1) of V^{π^(0),δ} obtained in Theorem 3.1. In each case in Table 3 where the approximation of Ṽ^δ is v^(0) + √δ v^(1), it is easy to check that (4.22) is satisfied and the limit is zero. The remaining five cases are: (a) π̃^0 ≡ π^(0) and α = 1/4; (a′) π̃^0 ≢ π^(0), in C, and α = 1/4; (b) π̃^0 ≡ π^(0) and 0 < α < 1/4, in the region K_1; (b′) π̃^0 ≢ π^(0), in C ∩ K_1, and 0 < α < 1/4; and (c) π̃^0 ≢ π^(0), in the region K.
(a) In the case π̃^0 ≡ π^(0) and α = 1/4, the approximation of Ṽ^δ up to order √δ is v^(0) + √δ ṽ^(1), and it suffices to show that ṽ^(1) ≤ v^(1) for all (t,x,z) ∈ [0,T] × R^+ × R. Let f(t,x,z) be the difference f(t,x,z) = v^(1)(t,x,z) - ṽ^(1)(t,x,z). Subtracting (4.11) from (2.13) produces

f_t + (1/2) σ²(z) (π^(0))² f_xx + π^(0) μ(z) f_x - (1/2) σ²(z) (π̃^1)² v_xx^(0) = 0,  f(T,x,z) = 0,

and the representation

f(t,x,z) = -E[ ∫_t^T (1/2) σ²(z) (π̃^1)²(s, X̃_s, z) v_xx^(0)(s, X̃_s, z) ds | X̃_t = x ],

where X̃_t follows (4.5). The concavity of v^(0) implies f(t,x,z) ≥ 0, and therefore (4.22) holds.

(b) In the case π̃^0 ≡ π^(0) and 0 < α < 1/4, the approximation of Ṽ^δ is v^(0) + δ^{2α} ṽ^{2α} + o(δ^{3α∧(1/2)}), where ṽ^{2α} is strictly negative by (4.16). Consequently,

lim_{δ→0} ( Ṽ^δ(t,x,z) - V^{π^(0),δ}(t,x,z) ) / √δ = lim_{δ→0} ( δ^{2α} ṽ^{2α} - √δ v^(1) + o(δ^{3α∧(1/2)}) ) / √δ = -∞,

and (4.22) holds.

(c) In the case π̃^0 ≢ π^(0) and (t,x,z) ∈ K, the approximation of Ṽ^δ is ṽ^(0) + o(1), and (4.21) shows that ṽ^(0) is strictly less than v^(0). Thus, we deduce (4.23).

The proof for case (a′) (resp. (b′)) is essentially the same as in (a) (resp. (b)), but in the region C (resp. C ∩ K_1).

5 A Fully-Solvable Example

In this section, we consider a model studied in Chacko and Viceira [2005], where explicit solutions are derived for the consumption problem over infinite horizon, and in Fouque et al. [2016], where expansions for the terminal wealth problem are derived and accuracy of the approximation is proved under power utility with one factor. Our goal is to show that this model satisfies the various assumptions we have made in this paper and, therefore, to justify that they are reasonable. The underlying asset S_t and the slowly varying factor Z_t are modeled by:

dS_t = μ S_t dt + √(1/Z_t) S_t dW_t,  (5.1)
dZ_t = δ (m - Z_t) dt + √δ β √(Z_t) dW_t^Z,  (5.2)

with β > 0 and μ > 0. The standard Feller condition β² ≤ 2m is assumed to ensure that Z_t stays positive.
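As an aside, the dynamics (5.1)-(5.2) are straightforward to simulate. The following minimal sketch (not from the paper; all parameter values are illustrative) uses an Euler scheme with correlated Gaussian increments and a crude reflection at zero to keep the discretized factor Z_t positive:

```python
import math
import random

def simulate_paths(mu=0.08, m=1.0, beta=0.5, delta=0.1, rho=-0.3,
                   s0=100.0, z0=1.0, horizon=1.0, n_steps=1000, seed=42):
    """Euler scheme for  dS = mu*S dt + sqrt(1/Z)*S dW,
    dZ = delta*(m - Z) dt + sqrt(delta)*beta*sqrt(Z) dW^Z,
    with corr(dW, dW^Z) = rho.  Returns (S_T, Z_T)."""
    rng = random.Random(seed)
    dt = horizon / n_steps
    s, z = s0, z0
    for _ in range(n_steps):
        g1 = rng.gauss(0.0, 1.0)
        # build the correlated driver of Z from g1 and an independent Gaussian
        g2 = rho * g1 + math.sqrt(1.0 - rho**2) * rng.gauss(0.0, 1.0)
        s += mu * s * dt + math.sqrt(1.0 / z) * s * math.sqrt(dt) * g1
        z += delta * (m - z) * dt + math.sqrt(delta) * beta * math.sqrt(z) * math.sqrt(dt) * g2
        z = abs(z)  # reflection: crude positivity fix for the discretized CIR-type factor
    return s, z
```

Note that for δ = 0 the factor is frozen at z0, which is exactly the Merton regime in which π^(0) is derived.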
In this example, we consider power utilities:

U(x) = x^γ / γ,  0 < γ < 1,

for which Assumption 2.4 is satisfied by Proposition 2.8. This model fits in the class of models (1.1)-(1.2) by identifying the coefficients μ(z), σ(z), c(z) and g(z) as follows:

μ(z) = μ,  σ(z) = √(1/z),  c(z) = m - z,  g(z) = β√z.

For Assumption 2.11 (i)-(ii) (with state space (0,∞)), we notice that (Z_t) is the unique strong solution to (5.2), and it has finite moments of any order uniformly in δ ≤ 1 and t ≤ T; see for instance [Fouque et al., 2011, Chapter 3]. The process (S_t) is given by:

S_t = S_0 exp( ∫_0^t ( μ - 1/(2Z_s) ) ds + ∫_0^t √(1/Z_s) dW_s ).

For Assumption 2.11 (iii)-(iv), we first solve (2.11) to obtain v^(0) and π^(0):

v_t^(0) - (μ² z / 2) (v_x^(0))² / v_xx^(0) = 0,  v^(0)(T,x,z) = x^γ / γ.

The ansatz v^(0)(t,x,z) = (x^γ/γ) f(t,z) gives the following ODE for f(t,z):

f_t(t,z) + ( μ² γ / (2(1-γ)) ) z f(t,z) = 0,  f(T,z) = 1,

which admits the explicit solution

f(t,z) = e^{ (μ²γ / (2(1-γ))) z (T-t) }.

By Proposition 2.3, we deduce the unique solution

v^(0)(t,x,z) = (x^γ/γ) e^{ (μ²γ / (2(1-γ))) z (T-t) }.

Consequently, the zeroth order strategy π^(0) and the risk tolerance function R(t,x;λ(z)) are given by

π^(0)(t,x,z) = μ x z / (1-γ),  and  R(t,x;λ(z)) = x / (1-γ).  (5.3)

Note that in this case, the relations on the derivatives of v^(0) in Proposition 3.6 can be verified by direct computation. The verification of Assumption 2.11 (iii)-(iv) will be presented in the next two sections.

5.1 Integrability of the Process G(Z·)

As in Andersen and Piterbarg [2007], one can compute the left-hand sides of (2.27) and (2.28) by solving Riccati equations. For convenience and notation, we recall the classical result:

Lemma 5.1. Let y(τ) be the solution to the constant coefficient Riccati equation:

y′(τ) = q_0 + q_1 y(τ) + q_2 y(τ)²,  y(0) = 0.
Then y(τ) is given by one of the following forms, depending on the sign of Δ = q_1² - 4 q_0 q_2:

(i) Δ > 0:

y(τ) = α⁻ (1 - e^{-ατ}) / ( 1 - (α⁻/α⁺) e^{-ατ} ),  (5.4)

where α = √Δ, α⁺ = (-q_1 + α)/(2q_2), α⁻ = (-q_1 - α)/(2q_2).

(ii) Δ < 0:

y(τ) = q_0 sin(bτ) / ( b cos(bτ) - a sin(bτ) ),  (5.5)

where a = q_1/2, b = √(-Δ)/2.

(iii) Δ = 0:

y(τ) = a² τ / ( q_2 (1 - aτ) ).  (5.6)

As mentioned in Section 2.4, v^(0)(0,x,z) is a concave function, and it has a linear upper bound G(z) + x. To obtain G(z), we derive: ∀x_0 ∈ R^+,

v^(0)(0,x,z) ≤ v^(0)(0,x_0,z) + ∂_x v^(0)(0,x_0,z)(x - x_0)
= (x_0^γ/γ) e^{ (μ²γz/(2(1-γ))) T } - x_0^{γ-1} e^{ (μ²γz/(2(1-γ))) T } x_0 + x_0^{γ-1} e^{ (μ²γz/(2(1-γ))) T } x
= (1/γ - 1) x_0^γ e^{ (μ²γz/(2(1-γ))) T } + x_0^{γ-1} e^{ (μ²γz/(2(1-γ))) T } x.

Let x_0 = e^{ (μ²γz/(2(1-γ)²)) T } so that the coefficient in front of x is 1, and G(z) can be chosen as:

G(z) = (1/γ - 1) x_0^γ e^{ (μ²γz/(2(1-γ))) T } = (1/γ - 1) e^{ (μ²γz/(2(1-γ)²)) T }.

Then, we shall show that

G(Z·) = (1/γ - 1) e^{ (μ²γT/(2(1-γ)²)) Z· } ∈ L²([0,T] × Ω),  (5.7)

uniformly in δ. We have

E_{(0,z)}[ ∫_0^T G²(Z_s) ds ] = (1/γ - 1)² ∫_0^T f^δ(0,z;s) ds,  (5.8)

where

f^δ(t,z;s) = E[ e^{w Z_s} | Z_t = z ],  with w = μ²γT/(1-γ)²,

solves

f_t^δ + (δ/2) β² z f_zz^δ + δ(m - z) f_z^δ = 0,  t ∈ [0,s),  f^δ(s,z;s) = e^{wz}.  (5.9)

This equation admits the solution

f^δ(t,z;s) = e^{ wz + A^δ(s-t) z + B^δ(s-t) },  (5.10)

where A^δ(τ) satisfies the Riccati equation:

A^δ(τ)′ = (δ/2) β² A^δ(τ)² + ( δβ²w - δ ) A^δ(τ) + ( (δ/2) β² w² - δw ),  τ ∈ (0,s],  A^δ(0) = 0,  (5.11)-(5.12)

and B^δ(τ) solves

B^δ(τ)′ = δ m ( w + A^δ(τ) ),  B^δ(0) = 0.  (5.13)

In this case, the discriminant Δ = δ² is positive, and A^δ(τ) follows case (i) in Lemma 5.1:

A^δ(τ) = -w (1 - e^{-δτ}) / ( 1 - ( w/(w - 2/β²) ) e^{-δτ} ),  τ ∈ [0, τ*(δ)),  (5.14)

where [0, τ*(δ)) is the maximal domain on which A^δ(τ) stays finite. It remains to show that A^δ(τ) and B^δ(τ) are uniformly bounded in (δ,τ) ∈ [0, δ̄] × [0,T] for some δ̄ ≤ 1. Note that the boundedness of B^δ(τ) is a consequence of that of A^δ(τ) via equation (5.13).
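For reference, the three closed forms of Lemma 5.1 are easy to check numerically. The sketch below (illustrative, not from the paper) evaluates (5.4)-(5.6), assuming q_2 ≠ 0, and verifies y′ = q_0 + q_1 y + q_2 y² with a centered finite difference:

```python
import math

def riccati_solution(q0, q1, q2, tau):
    """Closed-form solution of y' = q0 + q1*y + q2*y^2, y(0) = 0 (Lemma 5.1, q2 != 0)."""
    disc = q1 * q1 - 4.0 * q0 * q2
    if disc > 0:                                   # case (i), eq. (5.4)
        alpha = math.sqrt(disc)
        a_plus = (-q1 + alpha) / (2.0 * q2)
        a_minus = (-q1 - alpha) / (2.0 * q2)
        e = math.exp(-alpha * tau)
        return a_minus * (1.0 - e) / (1.0 - (a_minus / a_plus) * e)
    elif disc < 0:                                 # case (ii), eq. (5.5)
        a, b = q1 / 2.0, math.sqrt(-disc) / 2.0
        return q0 * math.sin(b * tau) / (b * math.cos(b * tau) - a * math.sin(b * tau))
    else:                                          # case (iii), eq. (5.6)
        a = q1 / 2.0
        return a * a * tau / (q2 * (1.0 - a * tau))

def riccati_residual(q0, q1, q2, tau, h=1e-5):
    """|y'(tau) - (q0 + q1*y + q2*y^2)| with y' from a centered difference."""
    lhs = (riccati_solution(q0, q1, q2, tau + h)
           - riccati_solution(q0, q1, q2, tau - h)) / (2.0 * h)
    y = riccati_solution(q0, q1, q2, tau)
    return abs(lhs - (q0 + q1 * y + q2 * y * y))
```

For instance, (q_0, q_1, q_2) = (1, 0, -1) falls in case (i) and reproduces y(τ) = tanh(τ), while (1, 0, 1) gives tan(τ) in case (ii).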
Since A^δ(τ) is continuous on (0,1] × [0, τ*(δ)), it suffices to show that i) there exists δ̄ such that τ*(δ) > T for δ ≤ δ̄, and ii) lim_{δ→0} A^δ(τ) exists. To this end, we examine the following cases:

(a) w < 2/β². The denominator of (5.14) stays above 1, τ*(δ) = ∞, and lim_{δ→0} A^δ(τ) = 0.

(b) w > 2/β². Here τ*(δ) = -(1/δ) ln( (w - 2/β²)/w ), lim_{δ→0} τ*(δ) = ∞, and lim_{δ→0} A^δ(τ) = 0.

(c) w = 2/β². This case gives the trivial solution A^δ(τ) ≡ 0.

In all cases, A^δ(τ) is uniformly bounded on [0, δ̄] × [0,T]. Denoting by C(T) the uniform bound,

|A^δ(τ)| ≤ C(T),  ∀(δ,τ) ∈ [0, δ̄] × [0,T],

then, following (5.13), we obtain a uniform bound for B^δ(τ):

|B^δ(τ)| ≤ δ m (w + C(T)) T ≤ m (w + C(T)) T.

Therefore, combined with (5.8) and (5.10), we deduce that Assumption 2.11 (iii) is satisfied.

5.2 Moments of the Wealth Process X_t^{π^(0)}

First, using the explicit formula (5.3) for π^(0), equation (2.29) becomes

dX_s^{π^(0)} = ( μ² Z_s / (1-γ) ) X_s^{π^(0)} ds + ( μ √(Z_s) / (1-γ) ) X_s^{π^(0)} dW_s,  s ≥ t.  (5.15)

In order to control E_{(0,x,z)}[ ∫_0^T X_s² ds ], we introduce

f^δ(t,x,z;s) = E[ (X_s^{π^(0)})² | X_t = x, Z_t = z ],

which solves

f_t^δ + ( μ²z / (2(1-γ)²) ) x² f_xx^δ + ( μ²z / (1-γ) ) x f_x^δ + δ(m - z) f_z^δ + (δ/2) β² z f_zz^δ + √δ ρ ( μβ / (1-γ) ) z x f_xz^δ = 0,  f^δ(s,x,z;s) = x².  (5.16)

The solution is of the form

f^δ(t,x,z;s) = x² e^{ A^δ(s-t) z + B^δ(s-t) },  (5.17)

where A^δ(τ) satisfies the Riccati equation:

A^δ(τ)′ = (δ/2) β² A^δ(τ)² + ( 2√δ ρ μ β / (1-γ) - δ ) A^δ(τ) + (3 - 2γ) μ² / (1-γ)²,  τ ∈ (0,s],  A^δ(0) = 0,  (5.18)

and B^δ(τ) solves

B^δ(τ)′ = δ m A^δ(τ),  B^δ(0) = 0.  (5.19)

By a similar argument to the one used in Section 5.1, the verification of the uniform bound

E_{(0,x,z)}[ ∫_0^T X_s² ds ] = ∫_0^T f^δ(0,x,z;s) ds ≤ C_2(T,x,z)

reduces to i) there exists δ̄ such that τ*(δ) > T for δ ≤ δ̄ (recall that τ*(δ) is defined to be the explosion time), and ii) lim_{δ→0} A^δ(τ) exists. The details are given in Appendix F.
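The uniform-in-δ boundedness used above can also be observed numerically. The sketch below (illustrative parameters, not from the paper) integrates the Riccati system (5.18)-(5.19) with a fourth-order Runge-Kutta step and confirms that A^δ and B^δ stay bounded on [0,T], with A^δ(T) approaching the δ = 0 value q_0 T:

```python
import math

def integrate_riccati(delta, mu=0.08, beta=0.5, gamma=0.5, rho=-0.3,
                      m=1.0, T=1.0, n=2000):
    """RK4 integration of (5.18)-(5.19):
        A' = (delta/2)*beta^2*A^2
             + (2*sqrt(delta)*rho*mu*beta/(1-gamma) - delta)*A
             + (3 - 2*gamma)*mu^2/(1-gamma)^2,   A(0) = 0,
        B' = delta*m*A,                          B(0) = 0.
    Returns (A(T), B(T))."""
    q2 = 0.5 * delta * beta ** 2
    q1 = 2.0 * math.sqrt(delta) * rho * mu * beta / (1.0 - gamma) - delta
    q0 = (3.0 - 2.0 * gamma) * mu ** 2 / (1.0 - gamma) ** 2
    f = lambda a: q0 + q1 * a + q2 * a * a
    h = T / n
    a = b = 0.0
    for _ in range(n):
        k1 = f(a)
        k2 = f(a + 0.5 * h * k1)
        k3 = f(a + 0.5 * h * k2)
        k4 = f(a + h * k3)
        da = (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        b += delta * m * (a + 0.5 * da) * h   # midpoint update of B along the A step
        a += da
    return a, b

# As delta -> 0, A' -> q0, so A(T) -> q0*T and B(T) -> 0.
```

For the parameters above, q_0 = 2·(0.08)²/0.25 = 0.0512, so A(T) ≈ 0.0512 for small δ, and no explosion occurs before T = 1.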
6 Conclusion

In this paper, we have considered the portfolio allocation problem in the context of a slowly varying stochastic environment, for an investor who maximizes her expected terminal utility within a general class of utility functions. We proved that the zeroth order strategy identified in Fouque et al. [2016] is in fact asymptotically optimal up to the first order within a specific class of strategies. We have made precise the assumptions needed to rigorously establish this asymptotic optimality. These assumptions bear on the coefficients of the model, on the utility function, and on the zeroth order value function, that is, the solution to the classical Merton problem with constant coefficients. Finally, we analyzed a fully solvable example in order to demonstrate that our assumptions are reasonable.

In an ongoing work, we are establishing the same type of results in the case of a fast varying stochastic environment and, ultimately, in the case of a model with two factors, one slow and one fast. We also plan to analyze the effect of the first order correction in the strategy on the second order correction of the value function. Our analysis deals with classical solutions of the partial differential equations involved in the problem: HJB equations for the value functions and linear equations with source terms for the corrections. A full optimality result would require working with viscosity solutions, and that is also part of our future research.

A Proof of Proposition 2.8

Proof of (i). Without loss of generality, we assume E = [a,b] ⊂ [0,1). Notice that a can be zero, but b is strictly less than 1. Define f(x,y) = x^y. Since ∂_x^7 f(x,y) is continuous on [x_0 - δ, x_0 + δ] × E for every x_0 ∈ (0,∞), and ∂_x^7 f(x,y) is integrable on E,

U(x) = ∫_E f(x,y) ν(dy)

is C^7(0,∞). Moreover, we have

U^{(i)}(x) = ∫_E ∂_x^i f(x,y) ν(dy),  for i ≤ 7.  (A.1)

The monotonicity and concavity follow from the signs of U′(x) and U″(x) in (A.1).
U(0+) = 0 follows from the Dominated Convergence Theorem (DCT). We have:

lim_{x→0+} U′(x) ≥ lim_{x→0+} ∫_{a+δ}^b y x^{y-1} ν(dy) ≥ lim_{x→0+} (a+δ) (1/x)^{1-b} ν([a+δ, b]) = +∞, for a given δ;

lim_{x→+∞} U′(x) = lim_{x→+∞} ∫_a^b y (1/x)^{1-y} ν(dy) ≤ lim_{x→+∞} b (1/x)^{1-b} ν([a,b]) = 0;

AE[U] = lim_{x→+∞} x ∫_a^b y x^{y-1} ν(dy) / ∫_a^b x^y ν(dy) = lim_{x→+∞} ∫_a^b y x^y ν(dy) / ∫_a^b x^y ν(dy) ≤ b < 1,

which shows the Inada and Asymptotic Elasticity conditions (2.17).

To show that Assumption 2.4 (iii) is satisfied, we follow Remark 2.7 and prove the following: a) R(0) = 0 and R(x) is strictly increasing on [0,∞); and b) |R^j(x) ∂_x^{j+1} R(x)| ≤ K, ∀0 ≤ j ≤ 4. For convenience, we introduce the short-hand notation

⟨f(y)⟩_x = ∫_E f(y) x^y ν(dy),

and in the sequel, we shall omit the subscript x when there is no confusion. Following from (A.1) and using the short-hand notation, R(x) is given by

R(x) = ∫_E y x^{y-1} ν(dy) / ∫_E y(1-y) x^{y-2} ν(dy) = x ⟨y⟩ / ⟨y(1-y)⟩.  (A.2)

Since 1-y is bounded between 1-b and 1-a, we deduce

x/(1-a) ≤ R(x) ≤ x/(1-b),  (A.3)

and obtain R(0) = 0 by letting x → 0. Taking the derivative in (A.2) gives

R′(x) = ( ⟨y(y+1)⟩⟨y(1-y)⟩ - ⟨y⟩⟨y²(1-y)⟩ ) / ⟨y(1-y)⟩².

The positivity of R′(x) on [0,∞) follows from

R′(x) = ( ⟨y⟩⟨y³⟩ + ⟨y⟩² - ⟨y²⟩² - ⟨y⟩⟨y²⟩ ) / ⟨y(1-y)⟩² ≥ ( ⟨y⟩² - ⟨y⟩⟨y²⟩ ) / ⟨y(1-y)⟩² = ⟨y⟩ / ⟨y(1-y)⟩ ≥ ⟨y⟩ / ((1-a)⟨y⟩) = 1/(1-a),

where we have used ⟨y⟩⟨y³⟩ ≥ ⟨y²⟩² (Cauchy–Schwarz). Thus, R′(x) is bounded below by 1/(1-a) on [0,∞), and consequently R(x) is strictly increasing for x ≥ 0. To show R′(x) ≤ K, we derive the upper bound as follows:

R′(x) ≤ ⟨y(y+1)⟩⟨y(1-y)⟩ / ⟨y(1-y)⟩² ≤ (b+1)(1-a)⟨y⟩² / ( (1-b)²⟨y⟩² ) = (b+1)(1-a)/(1-b)².  (A.4)

To show |R(x)R″(x)| ≤ K, we first compute R″(x):

R″(x) = (1/x) [ ⟨y²(y+1)⟩/⟨y(1-y)⟩ - ( ⟨y²⟩⟨y²(1-y)⟩ + ⟨y⟩⟨y³(1-y)⟩ + ⟨y²(1-y)⟩⟨y(y+1)⟩ ) / ⟨y(1-y)⟩² + 2⟨y⟩⟨y²(1-y)⟩² / ⟨y(1-y)⟩³ ].

Then, the upper and lower bounds of R(x)R″(x) are computed as follows. Using ⟨y²(y+1)⟩ ≤ (b+1)⟨y⟩, ⟨y²(1-y)⟩ ≤ ⟨y(1-y)⟩ and ⟨y(1-y)⟩ ≥ (1-b)⟨y⟩,

R(x)R″(x) ≤ ( ⟨y⟩/⟨y(1-y)⟩ ) ( ⟨y²(y+1)⟩/⟨y(1-y)⟩ + 2⟨y⟩⟨y²(1-y)⟩²/⟨y(1-y)⟩³ ) ≤ (b+1)/(1-b)² + 2/(1-b)² = (b+3)/(1-b)²;

and using ⟨y²(1-y)⟩ ≤ (1-a)⟨y²⟩, ⟨y³(1-y)⟩ ≤ (1-a)⟨y³⟩ ≤ (1-a)⟨y⟩ and ⟨y(y+1)⟩ ≤ (1+b)⟨y⟩,

R(x)R″(x) ≥ -⟨y⟩ ( (1-a)⟨y²⟩² + (1-a)⟨y⟩⟨y³⟩ + (1-a)(1+b)⟨y²⟩⟨y⟩ ) / ⟨y(1-y)⟩³ ≥ -(1-a)(b+3)/(1-b)³.

Combining the above bounds, we have the desired result |R(x)R″(x)| ≤ K. Similar arguments work for j = 2, 3, 4 by straightforward calculations.

The last step is to show the growth condition of I(y) = U′^{(-1)}(y). For y ≥ U′(1) = ∫_E y ν(dy), we have I(y) ≤ 1. In the other case, y = U′(x) ≤ U′(1) with x ≥ 1, and

U′(x) ≤ x^{b-1} ∫_E y ν(dy),  ∀x ≥ 1,
⇒ U′( ( t / ∫_E y ν(dy) )^{1/(b-1)} ) ≤ t,  ∀t ≤ ∫_E y ν(dy),
⇒ I(y) ≤ κ y^{-α},  ∀y ≤ ∫_E y ν(dy),

where κ = ( 1/∫_E y ν(dy) )^{1/(b-1)} is a constant depending solely on ν(dy), and α = 1/(1-b) > 1. Combining the two cases, we have I(y) ≤ α + κ y^{-α}.

Proof of (ii). This class of utility functions is defined via the inverse of marginal utility I(y), from which U(x) can be recovered by:

U(x) = ∫_0^x U′(t) dt = ∫_0^x I^{(-1)}(t) dt.  (A.5)

Then U(0+) = 0 is automatically satisfied. By definition of I(y), I(y) ∈ C^∞(0,∞), and so is U(x). The strict monotonicity and strict concavity are given by:

U′(x) = I^{(-1)}(x) > 0,  U″(x) = 1/I′(I^{(-1)}(x)) = ( -∫_0^N s (I^{(-1)}(x))^{-s-1} ν(ds) )^{-1} < 0.

By DCT, one has

I(+∞) = lim_{y→+∞} ∫_0^N y^{-s} ν(ds) = 0,
I(0) = lim_{y→0} ∫_0^N y^{-s} ν(ds) ≥ lim_{y→0} ∫_δ^N y^{-s} ν(ds) ≥ lim_{y→0} y^{-δ} ν([δ,N]) = +∞,
AE[U] = lim_{x→+∞} x U′(x)/U(x) = lim_{x→+∞} ( U′(x) + xU″(x) )/U′(x) = 1 - lim_{x→+∞} x/R(x) ≤ 1 - 1/N,

where we have used the fact that R′(x) ≤ N derived below (so that R(x) ≤ Nx).

From Proposition 3.2, H(x,T,λ(z)) = I(e^{-x}) = ∫_0^N e^{xs} ν(ds), and the risk tolerance R(x) is given by:

R(x) = -U′(x)/U″(x) = H_x( H^{(-1)}(x,T,λ(z)), T, λ(z) ) = ∫_0^N s e^{ H^{(-1)}(x,T,λ(z)) s } ν(ds).

The fact that R(0) = 0 follows by DCT and H^{(-1)}(0+, T, λ(z)) = -∞.
R(x) has bounded derivative, since

R′(x) = H_xx(H^{(-1)}(x,T,λ(z)), T, λ(z)) / H_x(H^{(-1)}(x,T,λ(z)), T, λ(z)) = ∫_0^N e^{H^{(-1)}(x,T,λ(z))s} s² ν(ds) / ∫_0^N e^{H^{(-1)}(x,T,λ(z))s} s ν(ds) ≤ N.

R(x) is strictly increasing, since the numerator stays positive for x > 0, i.e., 0 < R′(x) ≤ N. To show |R(x)R″(x)| ≤ K, one needs

R(x)R″(x) + (R′(x))² = (1/2) (R²(x))″ = H_xxx(H^{(-1)}(x,T,λ(z)), T, λ(z)) / H_x(H^{(-1)}(x,T,λ(z)), T, λ(z)) = ∫_0^N e^{H^{(-1)}(x,T,λ(z))s} s³ ν(ds) / ∫_0^N e^{H^{(-1)}(x,T,λ(z))s} s ν(ds) ≤ N².

Since (R²(x))″ and R′(x) are bounded, so is R(x)R″(x). Similar arguments work for j = 2, 3, 4 by using the following identities:

R²R‴ + R′³ + 4RR′R″ = ∂_x⁴H(H^{(-1)}(x,T,λ(z)), T, λ(z)) / H_x(H^{(-1)}(x,T,λ(z)), T, λ(z)) = ∫_0^N e^{H^{(-1)}(x,T,λ(z))s} s⁴ ν(ds) / ∫_0^N e^{H^{(-1)}(x,T,λ(z))s} s ν(ds) ≤ N³,

R³R^{(4)} + 7R²R′R‴ + R′⁴ + 11RR′²R″ + 4R²R″² = ∂_x⁵H(H^{(-1)}(x,T,λ(z)), T, λ(z)) / H_x(H^{(-1)}(x,T,λ(z)), T, λ(z)) ≤ N⁴,

R⁴R^{(5)} + 11R³R′R^{(4)} + 15R³R″R‴ + 32R²R′²R‴ + 34R²R′R″² + 26RR′³R″ + R′⁵ = ∂_x⁶H(H^{(-1)}(x,T,λ(z)), T, λ(z)) / H_x(H^{(-1)}(x,T,λ(z)), T, λ(z)) ≤ N⁵.

Notice that I(y) satisfies the polynomial growth condition due to the following: denote κ = ν([0,N]); if y ≥ 1, then I(y) ≤ κ, otherwise, when y < 1, I(y) ≤ κ y^{-N}. Therefore, combining the two cases and defining α = max{N, κ} yields the inequality (2.20).

B Proof of Proposition 3.4

To show (3.6), we use the relation (3.5) between the risk tolerance function R(t,x;λ(z)) and the function H(x,t,λ(z)) (which is defined in Proposition 3.2), namely

R(t,x;λ(z)) = H_x( H^{(-1)}(x,t,λ(z)), t, λ(z) ).  (B.1)

Differentiating (B.1) with respect to x, and letting t = T, produces:

R′(x) = [ H_xx(y,T,λ(z)) / H_x(y,T,λ(z)) ]_{y=H^{(-1)}(x,T,λ(z))}.

Using R′(x) ≤ C = √(K/2) (proved in Proposition 2.5) gives, for all x, z ∈ R,

|H_xx(x,T,λ(z))| ≤ √(K/2) H_x(x,T,λ(z)).
Notice that for fixed λ(z), both H_xx(x,t,λ(z)) and H_x(x,t,λ(z)) satisfy the heat equation (3.3). The Comparison Principle ensures that the inequality is preserved for t < T, i.e., for (x,t,z) ∈ R × [0,T] × R,

|H_xx(x,t,λ(z))| ≤ √(K/2) H_x(x,t,λ(z)).

Using (B.1) again, we obtain:

∂_x R(t,x;λ(z)) = [ H_xx(y,t,λ(z)) / H_x(y,t,λ(z)) ]_{y=H^{(-1)}(x,t,λ(z))} ≤ √(K/2) := K_0.  (B.2)

Thus, we have shown (3.6) with j = 0. To complete the proof in the case j = 1, we first obtain a relation between derivatives of R(t,x;λ(z)) and derivatives of H(x,t,λ(z)):

R_x²(t,x;λ(z)) + R R_xx(t,x;λ(z)) = (1/2) ∂_x² R²(t,x;λ(z)) = [ H_xxx(y,t,λ(z)) / H_x(y,t,λ(z)) ]_{y=H^{(-1)}(x,t,λ(z))}.  (B.3)

Let t = T in the above identity; then the middle quantity reduces to (1/2) ∂_x² R²(x) and is bounded by K/2 as assumed in (2.19), and so is the ratio of H_xxx(x,T,λ(z)) over H_x(x,T,λ(z)) for all x ∈ R. The standard Comparison Principle applies, and the ratio remains bounded for t < T. This results in the boundedness of R_x²(t,x;λ(z)) + R R_xx(t,x;λ(z)). Combining with (B.2), we achieve:

|R R_xx(t,x;λ(z))| ≤ K/2 + K_0² := K_1.

To deal with j = 2, we first obtain the identity:

R² R_xxx(t,x;λ(z)) + R_x³(t,x;λ(z)) + 4R R_x R_xx(t,x;λ(z)) = [ ∂_x⁴H(y,t,λ(z)) / H_x(y,t,λ(z)) ]_{y=H^{(-1)}(x,t,λ(z))}.  (B.4)

At terminal time T, each term on the left-hand side is bounded (cf. Remark 2.7); therefore, the right-hand side is bounded. Then a similar argument based on the Comparison Principle gives the following estimate:

|R² R_xxx(t,x;λ(z))| ≤ K_2.
The remaining cases are completed by replacing (B.4) by the following identities

R³R^{(4)} + 7R²R_xR_xxx + R_x⁴ + 11RR_x²R_xx + 4R²R_xx² = [ ∂_x⁵H(y,t,λ(z)) / H_x(y,t,λ(z)) ]_{y=H^{(-1)}(x,t,λ(z))},

R⁴R^{(5)} + 11R³R_xR^{(4)} + 15R³R_xxR_xxx + 32R²R_x²R_xxx + 34R²R_xR_xx² + 26RR_x³R_xx + R_x⁵ = [ ∂_x⁶H(y,t,λ(z)) / H_x(y,t,λ(z)) ]_{y=H^{(-1)}(x,t,λ(z))},

(where all derivatives of R are evaluated at (t,x;λ(z))) and repeating the same argument. The bounds K̃_j are easily obtained by expanding ∂_x^j R^j(t,x;λ(z)) and using (3.6). To show (3.7), we notice that R(t,x;λ(z)) is shown to be strictly increasing in Proposition 3.3, and that R′(t,x;λ(z)) ≤ K_0; integrating both sides with respect to x yields the desired result.

C Proof of Proposition 3.6

In the sequel, to shorten the notation, we shall omit the arguments of v^(0)(t,x,z) and its derivatives, as well as the arguments of the risk tolerance function R(t,x;λ(z)), and simply write R when there is no confusion (note that this is not the risk tolerance R(x) introduced in Assumption 2.4 (iii)). In what follows, we repeatedly use the results in Proposition 3.4 and the concavity of v^(0).

(1) Recall the "Vega-Gamma" relation in (2.15). A direct calculation gives:

|v_z^(0)| = (T-t) |λ(z)λ′(z)| R v_x^(0) ≤ (T-t) |λ(z)λ′(z)| K_0 x v_x^(0) ≤ K_0 (T-t) |λ(z)λ′(z)| v^(0) = d_01(z) v^(0),

where d_01(z) = K_0 (T-t) |λ(z)λ′(z)|.

(2) Differentiating (2.15) with respect to x brings:

|v_xz^(0)| = (T-t) |λ(z)λ′(z)| | R_x v_x^(0) + R v_xx^(0) | ≤ (T-t) |λ(z)λ′(z)| ( K_0 v_x^(0) + v_x^(0) ) = (K_0+1)(T-t) |λ(z)λ′(z)| v_x^(0) = d_11(z) v_x^(0),

where d_11(z) = (K_0+1)(T-t) |λ(z)λ′(z)|.
(3) Differentiating $v_{xz}^{(0)}$ with respect to $x$ again produces
$$v_{xxz}^{(0)} = (T-t)\lambda(z)\lambda'(z)\left(R_{xx}v_x^{(0)} + 2R_xv_{xx}^{(0)} + Rv_{xxx}^{(0)}\right) = (T-t)\lambda(z)\lambda'(z)\left(-R_{xx}Rv_{xx}^{(0)} + 2R_xv_{xx}^{(0)} - (R_x+1)v_{xx}^{(0)}\right),$$
so that
$$\left|v_{xxz}^{(0)}\right| \le (T-t)\left|\lambda(z)\lambda'(z)\right|(K_1+K_0+1)\left|v_{xx}^{(0)}\right| = d_{21}(z)\left|v_{xx}^{(0)}\right|,$$
where $d_{21}(z) = (K_1+K_0+1)(T-t)\left|\lambda(z)\lambda'(z)\right|$.

(4) The results in Proposition 3.5 give
$$|R_z| = (T-t)\left|\lambda(z)\lambda'(z)\right|\left|R_{xx}R^2\right| \le (T-t)\left|\lambda(z)\lambda'(z)\right|K_1R = \widetilde d_{01}(z)R,$$
where $\widetilde d_{01}(z) = K_1(T-t)\left|\lambda(z)\lambda'(z)\right|$.

(5) Differentiating equation (2.15) with respect to $z$ yields
$$v_{zz}^{(0)} = (T-t)\left((\lambda(z)\lambda'(z))'Rv_x^{(0)} + \lambda(z)\lambda'(z)R_zv_x^{(0)} + \lambda(z)\lambda'(z)Rv_{xz}^{(0)}\right),$$
so that
$$\left|v_{zz}^{(0)}\right| \le (T-t)\left(\left|(\lambda(z)\lambda'(z))'\right| + \left|\lambda(z)\lambda'(z)\right|\left(\widetilde d_{01}(z)+d_{11}(z)\right)\right)Rv_x^{(0)} \le d_{02}(z)\,v^{(0)},$$
where $d_{02}(z) = K_0(T-t)\left(\left|(\lambda(z)\lambda'(z))'\right| + \left|\lambda(z)\lambda'(z)\right|\left(\widetilde d_{01}(z)+d_{11}(z)\right)\right)$.

(6) Differentiating equation (3.8) with respect to $x$ gives
$$|R_{xz}| = (T-t)\left|\lambda(z)\lambda'(z)\right|\left|\left(R^2R_{xx}\right)_x\right| = (T-t)\left|\lambda(z)\lambda'(z)\right|\left|R_{xxx}R^2 + 2RR_xR_{xx}\right| \le (T-t)\left|\lambda(z)\lambda'(z)\right|\left(K_2+2K_1K_0\right) = \widetilde d_{11}(z),$$
where $\widetilde d_{11}(z) = (K_2+2K_1K_0)(T-t)\left|\lambda(z)\lambda'(z)\right|$.

(7) Differentiating $v_{xz}^{(0)}$ given in (2) with respect to $z$, one has
$$v_{xzz}^{(0)} = (T-t)\left[(\lambda(z)\lambda'(z))'\left(R_xv_x^{(0)}+Rv_{xx}^{(0)}\right) + \lambda(z)\lambda'(z)\left(R_{xz}v_x^{(0)}+R_xv_{xz}^{(0)}+R_zv_{xx}^{(0)}+Rv_{xxz}^{(0)}\right)\right],$$
so that
$$\left|v_{xzz}^{(0)}\right| \le (T-t)\left\{\left|(\lambda(z)\lambda'(z))'\right|(K_0+1) + \left|\lambda(z)\lambda'(z)\right|\left(\widetilde d_{11}(z)+K_0d_{11}(z)+\widetilde d_{01}(z)+d_{21}(z)\right)\right\}v_x^{(0)} = d_{12}(z)\,v_x^{(0)},$$
where $d_{12}(z) = (T-t)\left\{\left|(\lambda(z)\lambda'(z))'\right|(K_0+1) + \left|\lambda(z)\lambda'(z)\right|\left(\widetilde d_{11}(z)+K_0d_{11}(z)+\widetilde d_{01}(z)+d_{21}(z)\right)\right\}$.
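The pointwise identity used in step (3) above can be verified symbolically; the sketch below is ours, not part of the paper. Here $v$ stands in for $v^{(0)}$ and the risk tolerance is defined, as in the paper, by $R = -v_x/v_{xx}$.

```python
import sympy as sp

x = sp.symbols('x')
v = sp.Function('v')(x)
vx, vxx, vxxx = (sp.diff(v, x, k) for k in (1, 2, 3))

# Risk tolerance R = -v_x / v_xx, with v standing in for v^(0).
R = -vx/vxx
Rx, Rxx = sp.diff(R, x), sp.diff(R, x, 2)

# Step (3):  R_xx v_x + 2 R_x v_xx + R v_xxx  =  (-R R_xx + R_x - 1) v_xx
lhs = Rxx*vx + 2*Rx*vxx + R*vxxx
rhs = (-R*Rxx + Rx - 1)*vxx
print(sp.expand(lhs - rhs) == 0)  # True
```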
(8) Differentiating (3.8) with respect to $z$ gives
$$|R_{zz}| = (T-t)\left|(\lambda\lambda')'R^2R_{xx} + \lambda\lambda'\left(R^2R_{xxz}+2RR_zR_{xx}\right)\right| \le (T-t)\left\{\left|(\lambda\lambda')'\right|K_1 + \left|\lambda\lambda'\right|\left(\widetilde K_1(z)+2K_1\widetilde d_{01}(z)\right)\right\}R = \widetilde d_{02}(z)R,$$
where $\widetilde d_{02}(z) = (T-t)\left\{\left|(\lambda(z)\lambda'(z))'\right|K_1 + \left|\lambda(z)\lambda'(z)\right|\left(\widetilde K_1(z)+2K_1\widetilde d_{01}(z)\right)\right\}$. Here $\widetilde K_1(z)$ is the bound of $RR_{xxz}$, i.e., for all $(t,x,z)\in[0,T]\times\mathbb{R}^+\times\mathbb{R}$,
$$|R_{xxz}R| \le \widetilde K_1(z), \tag{C.1}$$
and is computed by differentiating (3.8) with respect to $x$ twice:
$$|R_{xxz}R| = (T-t)\left|\lambda\lambda'\right|\left|R_{xxxx}R^3 + 4R^2R_xR_{xxx} + 2RR_x^2R_{xx} + 2R^2R_{xx}^2\right| \le (T-t)\left|\lambda\lambda'\right|\left(K_3+4K_0K_2+2K_0^2K_1+2K_1^2\right) = \widetilde K_1(z).$$

(9) Differentiating $v_{xzz}^{(0)}$ with respect to $x$ again, we obtain
$$v_{xxzz}^{(0)} = (T-t)\left[(\lambda\lambda')'\left(R_xv_x^{(0)}+Rv_{xx}^{(0)}\right)_x + \lambda\lambda'\left(R_{xz}v_x^{(0)}+R_xv_{xz}^{(0)}+R_zv_{xx}^{(0)}+Rv_{xxz}^{(0)}\right)_x\right],$$
and therefore
\begin{align*}
\left|v_{xxzz}^{(0)}\right| &\le (T-t)\left|(\lambda\lambda')'\right|\left|R_{xx}v_x^{(0)}+2R_xv_{xx}^{(0)}+Rv_{xxx}^{(0)}\right| \\
&\quad + (T-t)\left|\lambda\lambda'\right|\left|R_{xxz}v_x^{(0)}+2R_{xz}v_{xx}^{(0)}+R_{xx}v_{xz}^{(0)}+2R_xv_{xxz}^{(0)}+R_zv_{xxx}^{(0)}+Rv_{xxxz}^{(0)}\right| \\
&\le (T-t)\left|(\lambda\lambda')'\right|(K_1+3K_0+1)\left|v_{xx}^{(0)}\right| \\
&\quad + (T-t)\left|\lambda\lambda'\right|\left(\widetilde K_1(z)+2\widetilde d_{11}(z)+d_{11}(z)K_1+2d_{21}(z)K_0+\widetilde d_{01}(z)(K_0+1)+\widetilde K_{31}(z)\right)\left|v_{xx}^{(0)}\right| \\
&= d_{22}(z)\left|v_{xx}^{(0)}\right|,
\end{align*}
where
$$d_{22}(z) = (T-t)\Big\{\left|(\lambda(z)\lambda'(z))'\right|(K_1+3K_0+1) + \left|\lambda(z)\lambda'(z)\right|\Big(\widetilde K_1(z)+2\widetilde d_{11}(z)+d_{11}(z)K_1+2d_{21}(z)K_0$$
$$\qquad\qquad +\,\widetilde d_{01}(z)(K_0+1)+\widetilde K_{31}(z)\Big)\Big\}.$$
During the derivation we have used the inequalities
$$\left|R_{xx}v_x^{(0)}\right| = R_{xx}R\left|v_{xx}^{(0)}\right| \le K_1\left|v_{xx}^{(0)}\right|, \tag{C.2}$$
$$\left|Rv_{xxx}^{(0)}\right| = (R_x+1)\left|v_{xx}^{(0)}\right| \le (K_0+1)\left|v_{xx}^{(0)}\right|, \tag{C.3}$$
$$\left|Rv_{xxxz}^{(0)}\right| \le \widetilde K_{31}(z)\left|v_{xx}^{(0)}\right|. \tag{C.4}$$
To obtain the last one, we first claim
$$R^2v_{xxxx}^{(0)} = R_{xx}v_x^{(0)} - (R_x+1)v_{xx}^{(0)} + 2(R_x+1)^2v_{xx}^{(0)}, \qquad \left|R^2v_{xxxx}^{(0)}\right| \le \left(K_1+2K_0^2+3K_0+1\right)\left|v_{xx}^{(0)}\right|,$$
and then the inequality (C.4) follows from
\begin{align*}
Rv_{xxxz}^{(0)} &= (T-t)\lambda(z)\lambda'(z)\left(RR_{xxx}v_x^{(0)} + 3RR_{xx}v_{xx}^{(0)} + 3RR_xv_{xxx}^{(0)} + R^2v_{xxxx}^{(0)}\right), \\
\left|Rv_{xxxz}^{(0)}\right| &\le (T-t)\left|\lambda\lambda'\right|\left(K_2 + 3K_1 + 3K_0(K_0+1) + K_1 + 2K_0^2 + 3K_0 + 1\right)\left|v_{xx}^{(0)}\right| = \widetilde K_{31}(z)\left|v_{xx}^{(0)}\right|,
\end{align*}
where $\widetilde K_{31}(z) = (T-t)\left|\lambda(z)\lambda'(z)\right|\left(K_2+4K_1+5K_0^2+6K_0+1\right)$.

(10) Finally, we differentiate $v_{xzz}^{(0)}$ with respect to $z$ again and obtain
$$v_{xzzz}^{(0)} = (T-t)\left\{(\lambda\lambda')''\left(R_xv_x^{(0)}+Rv_{xx}^{(0)}\right) + 2(\lambda\lambda')'\left(R_xv_x^{(0)}+Rv_{xx}^{(0)}\right)_z + \lambda\lambda'\left(R_{xz}v_x^{(0)}+R_xv_{xz}^{(0)}+R_zv_{xx}^{(0)}+Rv_{xxz}^{(0)}\right)_z\right\}.$$
Taking absolute values on both sides,
\begin{align*}
\left|v_{xzzz}^{(0)}\right| &\le (T-t)\Big\{\left|(\lambda\lambda')''\right|(K_0+1)v_x^{(0)} + 2\left|(\lambda\lambda')'\right|\left|R_{xz}v_x^{(0)}+R_xv_{xz}^{(0)}+R_zv_{xx}^{(0)}+Rv_{xxz}^{(0)}\right| \\
&\qquad + \left|\lambda\lambda'\right|\left|R_{xzz}v_x^{(0)}+2R_{xz}v_{xz}^{(0)}+R_xv_{xzz}^{(0)}+R_{zz}v_{xx}^{(0)}+2R_zv_{xxz}^{(0)}+Rv_{xxzz}^{(0)}\right|\Big\} \\
&\le (T-t)v_x^{(0)}\Big\{\left|(\lambda\lambda')''\right|(K_0+1) + 2\left|(\lambda\lambda')'\right|\left(\widetilde d_{11}(z)+K_0d_{11}(z)+\widetilde d_{01}(z)+d_{21}(z)\right) \\
&\qquad + \left|\lambda\lambda'\right|\left(\widetilde K_{12}(z)+2\widetilde d_{11}(z)d_{11}(z)+K_0d_{12}(z)+\widetilde d_{02}(z)+2\widetilde d_{01}(z)d_{21}(z)+d_{22}(z)\right)\Big\}.
\end{align*}
The bound $d_{13}(z)$ is easy to identify from the above inequality, in which $\widetilde K_{12}(z)$ is a bound, uniform in $t$ and $x$, such that $|R_{xzz}| \le \widetilde K_{12}(z)$. Such a $\widetilde K_{12}(z)$ exists by the following derivation:
\begin{align}
R_{xzz} &= (T-t)(\lambda\lambda')'\left(R^2R_{xxx}+2RR_xR_{xx}\right) \notag \\
&\quad + (T-t)\lambda\lambda'\left(R^2R_{xxxz}+2RR_zR_{xxx}+2RR_xR_{xxz}+2R_xR_zR_{xx}+2RR_{xz}R_{xx}\right). \tag{C.5}
\end{align}
Every term is bounded (by a function of $z$) by the previous results, except $R^2R_{xxxz}$.
For this term, we derive
$$\left|R^2R_{xxxz}\right| = (T-t)\left|\lambda\lambda'\right|\left|R^4R^{(5)} + 8R^3R_{xx}R_{xxx} + 6R^2R_x^2R_{xxx} + 6R^3R_xR^{(4)} + 6R^2R_xR_{xx}^2\right| \le (T-t)\left|\lambda\lambda'\right|\left(K_4+8K_1K_2+6K_0^2K_2+6K_0K_3+6K_0K_1^2\right).$$

D Proof of Boundedness for Theorem 3.1

We first analyze term I in (3.13). The boundedness of the $z$-derivatives of $v^{(0)}$ is given by Proposition 3.6. To bound the $L^2$ norm of $v^{(0)}(\cdot, X_\cdot^{\pi^{(0)}}, Z_\cdot)$ we rely on Lemma 2.13. In the following we omit the arguments of $v^{(0)}(s, X_s^{\pi^{(0)}}, Z_s)$ and its derivatives.
$$\mathrm{I} = \mathbb{E}_{(t,x,z)}\left[\int_t^T c(Z_s)v_z^{(0)} + \frac{1}{2}g^2(Z_s)v_{zz}^{(0)}\,\mathrm{d}s\right] =: \mathrm{I}^{(1)} + \frac{1}{2}\mathrm{I}^{(2)}.$$
\begin{align*}
\mathrm{I}^{(1)} &\le \mathbb{E}_{(t,x,z)}\left[\int_t^T \left|c(Z_s)v_z^{(0)}\right|\mathrm{d}s\right] \le \mathbb{E}_{(t,x,z)}\left[\int_t^T \left|c(Z_s)d_{01}(Z_s)\right| v^{(0)}\,\mathrm{d}s\right] \\
&\le \mathbb{E}_{(t,z)}\left[\int_t^T c^2(Z_s)d_{01}^2(Z_s)\,\mathrm{d}s\right]^{1/2} \mathbb{E}_{(t,x,z)}\left[\int_t^T \left(v^{(0)}\right)^2\mathrm{d}s\right]^{1/2} \le C(T,z)C_3(T,x,z).
\end{align*}
In the calculation above, $v_z^{(0)}$ is replaced by its bound $d_{01}(z)v^{(0)}$ derived in Proposition 3.6. By the Cauchy--Schwarz inequality, it then suffices to bound two expectations. For the first one, we have used the facts that $c(z)$ and $d_{01}(z)$ have at most polynomial growth and that $Z_t$ admits moments of any order uniformly in $\delta$. Lemma 2.13 gives the bound for the second expectation.

The bounds of the remaining terms are obtained by the same procedure, and we will sketch the calculations without detailed explanation. Also, in what follows, we omit the arguments of $R(s, X_s^{\pi^{(0)}}; \lambda(Z_s))$.
\begin{align*}
\mathrm{I}^{(2)} &\le \mathbb{E}_{(t,x,z)}\left[\int_t^T g^2(Z_s)\left|v_{zz}^{(0)}\right|\mathrm{d}s\right] \le \mathbb{E}_{(t,x,z)}\left[\int_t^T g^2(Z_s)d_{02}(Z_s)v^{(0)}\,\mathrm{d}s\right] \\
&\le \mathbb{E}_{(t,z)}\left[\int_t^T g^4(Z_s)d_{02}^2(Z_s)\,\mathrm{d}s\right]^{1/2} \mathbb{E}_{(t,x,z)}\left[\int_t^T \left(v^{(0)}\right)^2\mathrm{d}s\right]^{1/2} \le C(T,z)C_3(T,x,z).
\end{align*}
Term II in (3.14) and term III in (3.15) contain derivatives in $z$ of $v^{(1)}$. To deal with them, we recall the following relation between $v^{(1)}$ and $v^{(0)}$ given by equation (2.16):
$$v^{(1)} = -\frac{1}{2}(T-t)\rho\lambda(z)g(z)\frac{v_x^{(0)}v_{xz}^{(0)}}{v_{xx}^{(0)}} = \frac{1}{2}(T-t)\rho\lambda(z)g(z)Rv_{xz}^{(0)}.$$
Differentiating the above equation with respect to $z$, we are able to rewrite $v_{xz}^{(1)}$, $v_z^{(1)}$ and $v_{zz}^{(1)}$ in terms of the risk tolerance function $R$ and the leading order term $v^{(0)}$.
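The estimation pattern repeated throughout this appendix is the Cauchy--Schwarz splitting $\mathbb{E}[\int |YW|\,\mathrm{d}s] \le \mathbb{E}[\int Y^2\,\mathrm{d}s]^{1/2}\mathbb{E}[\int W^2\,\mathrm{d}s]^{1/2}$, with one factor depending only on $Z$ and the other controlled by Lemma 2.13. A small numerical illustration of the splitting (ours, with arbitrary synthetic samples standing in for the two integrands) is:

```python
import random

# Illustrative only: Cauchy-Schwarz as used repeatedly in Appendix D,
# E|YW| <= (E Y^2)^{1/2} (E W^2)^{1/2}. Y, W are synthetic stand-ins.
random.seed(0)
Y = [random.gauss(0, 1) for _ in range(100000)]
W = [random.gauss(0, 2) for _ in range(100000)]

n = len(Y)
lhs = sum(abs(a*b) for a, b in zip(Y, W))/n
rhs = (sum(a*a for a in Y)/n)**0.5 * (sum(b*b for b in W)/n)**0.5
print(lhs <= rhs)  # True (the inequality holds for any finite sample)
```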
Then, as before, the derivations are mainly based on Proposition 3.6 and Lemma 2.13, and are given as follows:
\begin{align*}
\mathrm{II} &= \mathbb{E}_{(t,x,z)}\left[\int_t^T c(Z_s)v_z^{(1)} + \frac{1}{2}g^2(Z_s)v_{zz}^{(1)}\,\mathrm{d}s\right] \\
&= \frac{1}{2}(T-t)\rho\,\mathbb{E}_{(t,x,z)}\left[\int_t^T c(Z_s)\left(\lambda(Z_s)g(Z_s)Rv_{xz}^{(0)}\right)_z\mathrm{d}s\right] + \frac{1}{4}(T-t)\rho\,\mathbb{E}_{(t,x,z)}\left[\int_t^T g^2(Z_s)\left(\lambda(Z_s)g(Z_s)Rv_{xz}^{(0)}\right)_{zz}\mathrm{d}s\right] \\
&=: \frac{1}{2}(T-t)\rho\,\mathrm{II}^{(1)} + \frac{1}{4}(T-t)\rho\,\mathrm{II}^{(2)}.
\end{align*}
\begin{align*}
\mathrm{II}^{(1)} &= \mathbb{E}_{(t,x,z)}\left[\int_t^T c(Z_s)\left(\lambda(Z_s)g(Z_s)\right)_zRv_{xz}^{(0)}\,\mathrm{d}s\right] + \mathbb{E}_{(t,x,z)}\left[\int_t^T c(Z_s)\lambda(Z_s)g(Z_s)\left(R_zv_{xz}^{(0)}+Rv_{xzz}^{(0)}\right)(s,X_s^{\pi^{(0)}},Z_s)\,\mathrm{d}s\right] \\
&=: \mathrm{II}^{(1,1)} + \mathrm{II}^{(1,2)} + \mathrm{II}^{(1,3)},
\end{align*}
where the three terms are uniformly bounded since
\begin{align*}
\left|\mathrm{II}^{(1,1)}\right| &\le \mathbb{E}_{(t,x,z)}\left[\int_t^T \left|c(Z_s)(\lambda(Z_s)g(Z_s))_z\right| K_0X_s^{\pi^{(0)}}d_{11}(Z_s)v_x^{(0)}\,\mathrm{d}s\right] \\
&\le K_0\,\mathbb{E}_{(t,z)}\left[\int_t^T c^2(Z_s)d_{11}^2(Z_s)(\lambda(Z_s)g(Z_s))_z^2\,\mathrm{d}s\right]^{1/2}\mathbb{E}_{(t,x,z)}\left[\int_t^T \left(v^{(0)}\right)^2\mathrm{d}s\right]^{1/2} \le C(T,z)C_3(T,x,z),
\end{align*}
\begin{align*}
\left|\mathrm{II}^{(1,2)}\right| &\le \mathbb{E}_{(t,x,z)}\left[\int_t^T \left|c(Z_s)\lambda(Z_s)g(Z_s)\right|\widetilde d_{01}(Z_s)R\,d_{11}(Z_s)v_x^{(0)}\,\mathrm{d}s\right] \\
&\le K_0\,\mathbb{E}_{(t,z)}\left[\int_t^T c^2(Z_s)\widetilde d_{01}^2(Z_s)d_{11}^2(Z_s)\lambda^2(Z_s)g^2(Z_s)\,\mathrm{d}s\right]^{1/2}\mathbb{E}_{(t,x,z)}\left[\int_t^T \left(v^{(0)}\right)^2\mathrm{d}s\right]^{1/2} \le C(T,z)C_3(T,x,z),
\end{align*}
\begin{align*}
\left|\mathrm{II}^{(1,3)}\right| &\le \mathbb{E}_{(t,x,z)}\left[\int_t^T \left|c(Z_s)\lambda(Z_s)g(Z_s)\right| K_0X_s^{\pi^{(0)}}d_{12}(Z_s)v_x^{(0)}\,\mathrm{d}s\right] \\
&\le K_0\,\mathbb{E}_{(t,z)}\left[\int_t^T c^2(Z_s)d_{12}^2(Z_s)\lambda^2(Z_s)g^2(Z_s)\,\mathrm{d}s\right]^{1/2}\mathbb{E}_{(t,x,z)}\left[\int_t^T \left(v^{(0)}\right)^2\mathrm{d}s\right]^{1/2} \le C(T,z)C_3(T,x,z).
\end{align*}
\begin{align*}
\mathrm{II}^{(2)} &= \mathbb{E}_{(t,x,z)}\left[\int_t^T g^2(Z_s)(\lambda(Z_s)g(Z_s))_{zz}Rv_{xz}^{(0)}\,\mathrm{d}s\right] + 2\,\mathbb{E}_{(t,x,z)}\left[\int_t^T g^2(Z_s)(\lambda(Z_s)g(Z_s))_z\left(R_zv_{xz}^{(0)}+Rv_{xzz}^{(0)}\right)\mathrm{d}s\right] \\
&\quad + \mathbb{E}_{(t,x,z)}\left[\int_t^T g^2(Z_s)\lambda(Z_s)g(Z_s)\left(R_{zz}v_{xz}^{(0)}+2R_zv_{xzz}^{(0)}+Rv_{xzzz}^{(0)}\right)(s,X_s^{\pi^{(0)}},Z_s)\,\mathrm{d}s\right] \\
&=: \mathrm{II}^{(2,1)} + 2\mathrm{II}^{(2,2)} + 2\mathrm{II}^{(2,3)} + \mathrm{II}^{(2,4)} + 2\mathrm{II}^{(2,5)} + \mathrm{II}^{(2,6)}.
\end{align*}
\begin{align*}
\left|\mathrm{II}^{(2,1)}\right| &\le \mathbb{E}_{(t,x,z)}\left[\int_t^T g^2(Z_s)\left|(\lambda(Z_s)g(Z_s))_{zz}\right| K_0X_s^{\pi^{(0)}}d_{11}(Z_s)v_x^{(0)}\,\mathrm{d}s\right] \\
&\le K_0\,\mathbb{E}_{(t,z)}\left[\int_t^T g^4(Z_s)d_{11}^2(Z_s)(\lambda(Z_s)g(Z_s))_{zz}^2\,\mathrm{d}s\right]^{1/2}\mathbb{E}_{(t,x,z)}\left[\int_t^T \left(v^{(0)}\right)^2\mathrm{d}s\right]^{1/2} \le C(T,z)C_3(T,x,z).
\end{align*}
\begin{align*}
\left|\mathrm{II}^{(2,2)}\right| &\le \mathbb{E}_{(t,x,z)}\left[\int_t^T g^2(Z_s)\left|(\lambda(Z_s)g(Z_s))_z\right| \widetilde d_{01}(Z_s)R\,d_{11}(Z_s)v_x^{(0)}\,\mathrm{d}s\right] \\
&\le K_0\,\mathbb{E}_{(t,z)}\left[\int_t^T g^4(Z_s)\widetilde d_{01}^2(Z_s)d_{11}^2(Z_s)(\lambda(Z_s)g(Z_s))_z^2\,\mathrm{d}s\right]^{1/2}\mathbb{E}_{(t,x,z)}\left[\int_t^T \left(v^{(0)}\right)^2\mathrm{d}s\right]^{1/2} \le C(T,z)C_3(T,x,z),
\end{align*}
\begin{align*}
\left|\mathrm{II}^{(2,3)}\right| &\le \mathbb{E}_{(t,x,z)}\left[\int_t^T g^2(Z_s)\left|(\lambda(Z_s)g(Z_s))_z\right| K_0X_s^{\pi^{(0)}}d_{12}(Z_s)v_x^{(0)}\,\mathrm{d}s\right] \\
&\le K_0\,\mathbb{E}_{(t,z)}\left[\int_t^T g^4(Z_s)d_{12}^2(Z_s)(\lambda(Z_s)g(Z_s))_z^2\,\mathrm{d}s\right]^{1/2}\mathbb{E}_{(t,x,z)}\left[\int_t^T \left(v^{(0)}\right)^2\mathrm{d}s\right]^{1/2} \le C(T,z)C_3(T,x,z),
\end{align*}
\begin{align*}
\left|\mathrm{II}^{(2,4)}\right| &\le \mathbb{E}_{(t,x,z)}\left[\int_t^T g^2(Z_s)\left|\lambda(Z_s)g(Z_s)\right| \widetilde d_{02}(Z_s)R\,d_{11}(Z_s)v_x^{(0)}\,\mathrm{d}s\right] \\
&\le K_0\,\mathbb{E}_{(t,z)}\left[\int_t^T g^4(Z_s)\widetilde d_{02}^2(Z_s)d_{11}^2(Z_s)\lambda^2(Z_s)g^2(Z_s)\,\mathrm{d}s\right]^{1/2}\mathbb{E}_{(t,x,z)}\left[\int_t^T \left(v^{(0)}\right)^2\mathrm{d}s\right]^{1/2} \le C(T,z)C_3(T,x,z),
\end{align*}
\begin{align*}
\left|\mathrm{II}^{(2,5)}\right| &\le \mathbb{E}_{(t,x,z)}\left[\int_t^T g^2(Z_s)\left|\lambda(Z_s)g(Z_s)\right| \widetilde d_{01}(Z_s)R\,d_{12}(Z_s)v_x^{(0)}\,\mathrm{d}s\right] \\
&\le K_0\,\mathbb{E}_{(t,z)}\left[\int_t^T g^4(Z_s)\widetilde d_{01}^2(Z_s)d_{12}^2(Z_s)\lambda^2(Z_s)g^2(Z_s)\,\mathrm{d}s\right]^{1/2}\mathbb{E}_{(t,x,z)}\left[\int_t^T \left(v^{(0)}\right)^2\mathrm{d}s\right]^{1/2} \le C(T,z)C_3(T,x,z),
\end{align*}
\begin{align*}
\left|\mathrm{II}^{(2,6)}\right| &\le \mathbb{E}_{(t,x,z)}\left[\int_t^T g^2(Z_s)\left|\lambda(Z_s)g(Z_s)\right| K_0X_s^{\pi^{(0)}}d_{13}(Z_s)v_x^{(0)}\,\mathrm{d}s\right] \\
&\le K_0\,\mathbb{E}_{(t,z)}\left[\int_t^T g^4(Z_s)d_{13}^2(Z_s)\lambda^2(Z_s)g^2(Z_s)\,\mathrm{d}s\right]^{1/2}\mathbb{E}_{(t,x,z)}\left[\int_t^T \left(v^{(0)}\right)^2\mathrm{d}s\right]^{1/2} \le C(T,z)C_3(T,x,z).
\end{align*}
Similarly, the term III defined in (3.15) becomes
\begin{align*}
\mathrm{III} &= \mathbb{E}_{(t,x,z)}\left[\int_t^T \lambda(Z_s)g(Z_s)R\left(\frac{1}{2}(T-t)\rho\lambda(Z_s)g(Z_s)Rv_{xz}^{(0)}\right)_{xz}\mathrm{d}s\right] \\
&= \frac{1}{2}(T-t)\rho\,\mathbb{E}_{(t,x,z)}\left[\int_t^T \lambda(Z_s)g(Z_s)R\,\partial_x\left((\lambda(Z_s)g(Z_s))'Rv_{xz}^{(0)} + \lambda(Z_s)g(Z_s)\left(R_zv_{xz}^{(0)}+Rv_{xzz}^{(0)}\right)\right)\mathrm{d}s\right] \\
&= \frac{1}{2}(T-t)\rho\,\mathbb{E}_{(t,x,z)}\left[\int_t^T \lambda(Z_s)g(Z_s)(\lambda(Z_s)g(Z_s))'\left(RR_xv_{xz}^{(0)} + R^2v_{xxz}^{(0)}\right)\mathrm{d}s\right] \\
&\quad + \frac{1}{2}(T-t)\rho\,\mathbb{E}_{(t,x,z)}\left[\int_t^T \lambda^2(Z_s)g^2(Z_s)\left(RR_{xz}v_{xz}^{(0)} + RR_zv_{xxz}^{(0)} + RR_xv_{xzz}^{(0)} + R^2v_{xxzz}^{(0)}\right)\mathrm{d}s\right] \\
&=: \frac{1}{2}(T-t)\rho\left(\mathrm{III}^{(1)}+\mathrm{III}^{(2)}\right) + \frac{1}{2}(T-t)\rho\left(\mathrm{III}^{(3)}+\mathrm{III}^{(4)}+\mathrm{III}^{(5)}+\mathrm{III}^{(6)}\right).
\end{align*}
Now we analyze them one by one:
\begin{align*}
\left|\mathrm{III}^{(1)}\right| &\le \mathbb{E}_{(t,x,z)}\left[\int_t^T \left|\lambda(Z_s)g(Z_s)(\lambda(Z_s)g(Z_s))'RR_xv_{xz}^{(0)}\right|\mathrm{d}s\right] \\
&\le \mathbb{E}_{(t,x,z)}\left[\int_t^T \left|\lambda(Z_s)g(Z_s)(\lambda(Z_s)g(Z_s))'\right| K_0X_s^{\pi^{(0)}}K_0\,d_{11}(Z_s)v_x^{(0)}\,\mathrm{d}s\right] && \text{(Propositions 3.4 and 3.6)} \\
&\le K_0^2\,\mathbb{E}_{(t,z)}\left[\int_t^T \left(\lambda(Z_s)g(Z_s)(\lambda(Z_s)g(Z_s))'d_{11}(Z_s)\right)^2\mathrm{d}s\right]^{1/2}\mathbb{E}_{(t,x,z)}\left[\int_t^T \left(X_s^{\pi^{(0)}}v_x^{(0)}\right)^2\mathrm{d}s\right]^{1/2} && \text{(Cauchy--Schwarz)} \\
&\le C(T,z)\,\mathbb{E}_{(t,x,z)}\left[\int_t^T \left(v^{(0)}(s,X_s^{\pi^{(0)}},Z_s)\right)^2\mathrm{d}s\right]^{1/2} && \text{(bounded moments of $Z_s$, concavity of $v^{(0)}$)} \\
&\le C(T,z)C_3(T,x,z). && \text{(Lemma 2.13)}
\end{align*}
\begin{align*}
\left|\mathrm{III}^{(2)}\right| &\le \mathbb{E}_{(t,x,z)}\left[\int_t^T \left|\lambda(Z_s)g(Z_s)(\lambda(Z_s)g(Z_s))'R\cdot Rv_{xxz}^{(0)}\right|\mathrm{d}s\right] \le \mathbb{E}_{(t,x,z)}\left[\int_t^T \left|\lambda(Z_s)g(Z_s)(\lambda(Z_s)g(Z_s))'\right| CX_s^{\pi^{(0)}}d(Z_s)\,R\left|v_{xx}^{(0)}\right|\mathrm{d}s\right] \\
&\le C\,\mathbb{E}_{(t,z)}\left[\int_t^T \left(\lambda(Z_s)g(Z_s)(\lambda(Z_s)g(Z_s))'d(Z_s)\right)^2\mathrm{d}s\right]^{1/2}\mathbb{E}_{(t,x,z)}\left[\int_t^T \left(X_s^{\pi^{(0)}}v_x^{(0)}\right)^2\mathrm{d}s\right]^{1/2} \le C(T,z)C_3(T,x,z),
\end{align*}
where $C$ denotes a generic constant and $d(Z_s)$ a generic function of polynomial growth, both of which may change from line to line. The same chain of estimates applies to the remaining terms:
\begin{align*}
\left|\mathrm{III}^{(3)}\right| &\le \mathbb{E}_{(t,x,z)}\left[\int_t^T \lambda^2(Z_s)g^2(Z_s)\left|RR_{xz}v_{xz}^{(0)}\right|\mathrm{d}s\right] \le \mathbb{E}_{(t,x,z)}\left[\int_t^T \lambda^2(Z_s)g^2(Z_s)CX_s^{\pi^{(0)}}d^2(Z_s)v_x^{(0)}\,\mathrm{d}s\right] \\
&\le C\,\mathbb{E}_{(t,z)}\left[\int_t^T \lambda^4(Z_s)g^4(Z_s)d^4(Z_s)\,\mathrm{d}s\right]^{1/2}\mathbb{E}_{(t,x,z)}\left[\int_t^T \left(X_s^{\pi^{(0)}}v_x^{(0)}\right)^2\mathrm{d}s\right]^{1/2} \le C(T,z)C_3(T,x,z),
\end{align*}
\begin{align*}
\left|\mathrm{III}^{(4)}\right| &\le \mathbb{E}_{(t,x,z)}\left[\int_t^T \lambda^2(Z_s)g^2(Z_s)\left|RR_zv_{xxz}^{(0)}\right|\mathrm{d}s\right] \le \mathbb{E}_{(t,x,z)}\left[\int_t^T \lambda^2(Z_s)g^2(Z_s)CX_s^{\pi^{(0)}}d(Z_s)\,R\,d(Z_s)\left|v_{xx}^{(0)}\right|\mathrm{d}s\right] \le C(T,z)C_3(T,x,z),
\end{align*}
\begin{align*}
\left|\mathrm{III}^{(5)}\right| &\le \mathbb{E}_{(t,x,z)}\left[\int_t^T \lambda^2(Z_s)g^2(Z_s)\left|RR_xv_{xzz}^{(0)}\right|\mathrm{d}s\right] \le \mathbb{E}_{(t,x,z)}\left[\int_t^T \lambda^2(Z_s)g^2(Z_s)CX_s^{\pi^{(0)}}d^2(Z_s)v_x^{(0)}\,\mathrm{d}s\right] \le C(T,z)C_3(T,x,z).
\end{align*}
\begin{align*}
\left|\mathrm{III}^{(6)}\right| &\le \mathbb{E}_{(t,x,z)}\left[\int_t^T \lambda^2(Z_s)g^2(Z_s)\left|R\cdot Rv_{xxzz}^{(0)}\right|\mathrm{d}s\right] \le \mathbb{E}_{(t,x,z)}\left[\int_t^T \lambda^2(Z_s)g^2(Z_s)CX_s^{\pi^{(0)}}d^2(Z_s)\,R\left|v_{xx}^{(0)}\right|\mathrm{d}s\right] \\
&\le C\,\mathbb{E}_{(t,z)}\left[\int_t^T \lambda^4(Z_s)g^4(Z_s)d^4(Z_s)\,\mathrm{d}s\right]^{1/2}\mathbb{E}_{(t,x,z)}\left[\int_t^T \left(X_s^{\pi^{(0)}}v_x^{(0)}\right)^2\mathrm{d}s\right]^{1/2} \le C(T,z)C_3(T,x,z).
\end{align*}

E Assumptions in Section 4.2

This set of assumptions is used in deriving Proposition 4.4, where we establish the accuracy of the approximation of $\widetilde V^\delta$ summarized in Table 3.

Assumption E.1. Let $\mathcal{A}_0(t,x,z)\left[\widetilde\pi^0,\widetilde\pi^1,\alpha\right]$ be the family of trading strategies defined in (4.1). Recall that $X^\pi$ is the wealth generated by the strategy $\pi = \widetilde\pi^0 + \delta^\alpha\widetilde\pi^1$ as defined in (4.3). In order to condense the notation, we systematically omit the argument $(s,X_s^\pi,Z_s)$ in what follows. According to the different cases, we further require:

(i) If $\widetilde\pi^0 \equiv \pi^{(0)}$:

(a) If $\alpha > 1/4$, the following quantities are uniformly bounded in $\delta$:
$$\mathbb{E}_{(t,x,z)}\left[\int_t^T Mv^{(0)}\,\mathrm{d}s\right], \quad \mathbb{E}_{(t,x,z)}\left[\int_t^T Mv^{(1)}\,\mathrm{d}s\right], \quad \mathbb{E}_{(t,x,z)}\left[\int_t^T \sigma^2(Z_s)\left(\widetilde\pi^1\right)^2v_{xx}^{(0)}\,\mathrm{d}s\right],$$
$$\mathbb{E}_{(t,x,z)}\left[\int_t^T \sigma^2(Z_s)\left(\widetilde\pi^1\right)^2v_{xx}^{(1)}\,\mathrm{d}s\right], \quad \mathbb{E}_{(t,x,z)}\left[\int_t^T \sigma^2(Z_s)\pi^{(0)}\widetilde\pi^1v_{xx}^{(1)}\,\mathrm{d}s\right], \quad \mathbb{E}_{(t,x,z)}\left[\int_t^T \mu(Z_s)\widetilde\pi^1v_x^{(1)}\,\mathrm{d}s\right],$$
$$\mathbb{E}_{(t,x,z)}\left[\int_t^T g(Z_s)\sigma(Z_s)\pi^{(0)}v_{xz}^{(1)}\,\mathrm{d}s\right], \quad \mathbb{E}_{(t,x,z)}\left[\int_t^T g(Z_s)\sigma(Z_s)\widetilde\pi^1v_{xz}^{(0)}\,\mathrm{d}s\right], \quad \mathbb{E}_{(t,x,z)}\left[\int_t^T g(Z_s)\sigma(Z_s)\widetilde\pi^1v_{xz}^{(1)}\,\mathrm{d}s\right].$$
Here, recall that $v^{(0)}$ and $v^{(1)}$ are the leading order term and the first order correction of $V^\delta$, as well as of $V^{\pi^{(0)},\delta}$, and they satisfy (2.11) and (2.13) respectively.
(b) In the case $0 < \alpha < 1/4$: if $(t,x,z)\in K_1$, we need
$$\mathbb{E}_{(t,x,z)}\left[\int_t^T Mv^{(0)}\,\mathrm{d}s\right], \quad \mathbb{E}_{(t,x,z)}\left[\int_t^T M\widetilde v^{2\alpha}\,\mathrm{d}s\right], \quad \mathbb{E}_{(t,x,z)}\left[\int_t^T \sigma^2(Z_s)\left(\widetilde\pi^1\right)^2\widetilde v_{xx}^{2\alpha}\,\mathrm{d}s\right], \quad \mathbb{E}_{(t,x,z)}\left[\int_t^T \sigma^2(Z_s)\pi^{(0)}\widetilde\pi^1\widetilde v_{xx}^{2\alpha}\,\mathrm{d}s\right],$$
$$\mathbb{E}_{(t,x,z)}\left[\int_t^T \mu(Z_s)\widetilde\pi^1\widetilde v_x^{2\alpha}\,\mathrm{d}s\right], \quad \mathbb{E}_{(t,x,z)}\left[\int_t^T g(Z_s)\sigma(Z_s)\pi^{(0)}v_{xz}^{(0)}\,\mathrm{d}s\right], \quad \mathbb{E}_{(t,x,z)}\left[\int_t^T g(Z_s)\sigma(Z_s)\widetilde\pi^1\widetilde v_{xz}^{2\alpha}\,\mathrm{d}s\right],$$
$$\mathbb{E}_{(t,x,z)}\left[\int_t^T g(Z_s)\sigma(Z_s)\pi^{(0)}\widetilde v_{xz}^{2\alpha}\,\mathrm{d}s\right], \quad \mathbb{E}_{(t,x,z)}\left[\int_t^T g(Z_s)\sigma(Z_s)\widetilde\pi^1v_{xz}^{(0)}\,\mathrm{d}s\right]$$
to be uniformly bounded in $\delta$, where $\widetilde v^{2\alpha}$ is the coefficient of $\delta^{2\alpha}$ in the expansion of $\widetilde V^\delta$ in the case $0 < \alpha < 1/4$, and satisfies the linear PDE (4.12). Otherwise, if $(t,x,z)\in C_1$, we only need
$$\mathbb{E}_{(t,x,z)}\left[\int_t^T Mv^{(0)}\,\mathrm{d}s\right], \quad \mathbb{E}_{(t,x,z)}\left[\int_t^T Mv^{(1)}\,\mathrm{d}s\right], \quad \mathbb{E}_{(t,x,z)}\left[\int_t^T g(Z_s)\sigma(Z_s)\pi^{(0)}v_{xz}^{(1)}\,\mathrm{d}s\right]$$
to be uniformly bounded.

(c) For the critical case $\alpha = 1/4$, we need all the assumptions in part (i)(b) with $\alpha$ replaced by $1/4$, except the sixth one, which is not needed. The term $\widetilde v^{2\alpha}$ then becomes $\widetilde v^{(1)}$, which is the coefficient of $\sqrt\delta$ in the expansion and satisfies (4.11).

(ii) If $\widetilde\pi^0 \not\equiv \pi^{(0)}$:

(a) For $(t,x,z)$ in the region $K$, the following quantities need to be uniformly bounded in $\delta$:
$$\mathbb{E}_{(t,x,z)}\left[\int_t^T M\widetilde v^{(0)}\,\mathrm{d}s\right], \quad \mathbb{E}_{(t,x,z)}\left[\int_t^T g(Z_s)\sigma(Z_s)\left(\widetilde\pi^0+\delta^\alpha\widetilde\pi^1\right)\widetilde v_{xz}^{(0)}\,\mathrm{d}s\right], \quad \mathbb{E}_{(t,x,z)}\left[\int_t^T \sigma^2(Z_s)\left(\widetilde\pi^1\right)^2\widetilde v_{xx}^{(0)}\,\mathrm{d}s\right],$$
$$\mathbb{E}_{(t,x,z)}\left[\int_t^T \sigma^2(Z_s)\widetilde\pi^0\widetilde\pi^1\widetilde v_{xx}^{(0)}\,\mathrm{d}s\right], \quad \mathbb{E}_{(t,x,z)}\left[\int_t^T \mu(Z_s)\widetilde\pi^1\widetilde v_x^{(0)}\,\mathrm{d}s\right],$$
where $\widetilde v^{(0)}$ is the leading order solution that satisfies (4.8).

(b) For $(t,x,z)$ in the region $C$, where $\widetilde\pi^0$ and $\pi^{(0)}$ are identical, the requirements are the same as in part (i) with $(t,x,z)$ restricted to $C$.

F Uniform Bound for $A^\delta(\tau)$, Solution of (5.18)

In order to apply Lemma 5.1 to (5.18), we identify $q_0$, $q_1$, $q_2$ as follows and compute $\Delta$:
$$q_0 = \frac{(3-2\gamma)\mu^2}{2(1-\gamma)^2}, \qquad q_1 = \frac{2\sqrt\delta\,\rho\mu\beta}{1-\gamma} - \delta, \qquad q_2 = \delta\beta^2, \tag{F.1}$$
$$\Delta = \frac{2\delta\beta^2\mu^2\left(2\rho^2-3+2\gamma\right)}{(1-\gamma)^2} - 4\delta^{3/2}\frac{\rho\mu\beta}{1-\gamma} + \delta^2. \tag{F.2}$$
Note that if $\Delta > 0$, by Lemma 5.1(i), we have $\alpha_+\alpha_- > 0$, and $\alpha_+ + \alpha_-$ has the sign of $-\rho$ if $\rho \ne 0$ and is positive otherwise.
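The expression (F.2) for the discriminant can be double-checked against the definition $\Delta = q_1^2 - 4q_0q_2$; the sketch below (ours, not part of the paper) does this symbolically.

```python
import sympy as sp

delta, rho, mu, beta, gamma = sp.symbols('delta rho mu beta gamma', positive=True)

# (F.1): coefficients of the Riccati equation (5.18)
q0 = (3 - 2*gamma)*mu**2/(2*(1 - gamma)**2)
q1 = 2*sp.sqrt(delta)*rho*mu*beta/(1 - gamma) - delta
q2 = delta*beta**2

# (F.2): Delta = q1^2 - 4*q0*q2
Delta = sp.expand(q1**2 - 4*q0*q2)
target = (2*delta*beta**2*mu**2*(2*rho**2 - 3 + 2*gamma)/(1 - gamma)**2
          - 4*delta**sp.Rational(3, 2)*rho*mu*beta/(1 - gamma) + delta**2)
print(sp.simplify(Delta - target) == 0)  # True
```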
It remains to prove (i) that there exists $\bar\delta$ such that $\tau^\star(\delta) > T$ for $\delta \le \bar\delta$ (recall that $\tau^\star(\delta)$ is defined to be the explosion time), and (ii) that $\lim_{\delta\to 0}A^\delta(\tau)$ exists. This is done in the various cases depending on the values of the exponent $\gamma$ of the power utility and the correlation coefficient $\rho$. Denote by $\delta_0$ a constant such that, when $\delta < \delta_0$, the first term of $\Delta$ is dominant and, in case the first term is zero, the second term is dominant. Recalling that $0 < \gamma < 1$, we now examine the following cases.

A) $\frac{1}{2} \le \gamma < 1$.

a) $-1 \le \rho < -\sqrt{1.5-\gamma}$. If $\gamma = \frac{1}{2}$, this case is empty. In this case, $\Delta$ is positive, $A^\delta(\tau)$ is of type (5.4), and $\tau^\star(\delta) = \infty$ for $\delta\in(0,\delta_0]$. The last claim follows from $\alpha_+ > \alpha_- > 0$ and $1 - \frac{\alpha_-}{\alpha_+}e^{-\alpha\tau} > 0$ for all $\tau\in[0,T]$. Therefore we can define $\bar\delta$ to be $\delta_0$, and the limit of $A^\delta(\tau)$ is obtained by a straightforward calculation:
$$A^\delta(\tau) = \frac{-q_1-\alpha}{2q_2}\cdot\frac{\alpha\tau+O(\delta)}{1-\frac{-q_1-\alpha}{-q_1+\alpha}\left(1-\alpha\tau+O(\delta)\right)} \tag{F.3}$$
$$= \frac{(-q_1-\alpha)(-q_1+\alpha)\left(\alpha\tau+O(\delta)\right)}{2q_2\left(2\alpha+O(\delta)\right)} \to q_0\tau, \quad \text{as } \delta\to 0. \tag{F.4}$$

b) $\rho = -\sqrt{1.5-\gamma}$. In this situation the first term in $\Delta$ vanishes and the second term is dominant, which still gives the positivity of $\Delta$. So this case is similar to the previous one: positive $\Delta$, $\alpha_+ > \alpha_- > 0$ and $\tau^\star(\delta) = \infty$ for $\delta \le \delta_0 =: \bar\delta$, except for a slight difference in computing $\lim_{\delta\to 0}A^\delta(\tau) = q_0\tau$.

c) $-\sqrt{1.5-\gamma} < \rho < 0$. In cases c)--f), $\Delta$ is negative, thus $A^\delta(\tau)$ is of type (5.5). In addition, the discussions in e)--f) are parallel to cases c) and d). For $\rho$ negative, we have $a = q_1/2$ negative, and $\tau^\star(\delta) = \frac{1}{b}\left(\pi + \arctan\frac{b}{a}\right)$. Now we further restrict the value of $\delta$ to $(0,\bar\delta]$, so that $\tau^\star(\delta) > T$. In other words, for sufficiently small $\delta \le \bar\delta$, $A^\delta(\tau)$ and $B^\delta(\tau)$ still stay finite on $[0,T]$. To this end, it suffices to show $\lim_{\delta\to 0}\tau^\star(\delta) = +\infty$:
$$\lim_{\delta\to 0}\arctan\frac{b}{a} = \arctan\left(\frac{\sqrt{2(3-2\gamma-2\rho^2)}}{2\rho}\right), \tag{F.5}$$
$$\lim_{\delta\to 0}\tau^\star(\delta) = \lim_{\delta\to 0}\frac{\pi+\arctan(b/a)}{b} = +\infty. \tag{F.6}$$
Therefore $A^\delta(\tau)$ is continuous on $[0,T]\times(0,\bar\delta]$, and the problem is reduced to showing the finiteness of $\lim_{\delta\to 0}A^\delta(\tau)$. A straightforward calculation yields
$$\lim_{\delta\to 0}A^\delta(\tau) = \lim_{\delta\to 0}q_0\,\frac{b\tau+O(\delta^{3/2})}{b+O(\delta)} = q_0\tau. \tag{F.7}$$

d) $\rho = 0$. If $\rho = 0$, $a = -\delta/2$ is still negative, and we still have $\tau^\star(\delta) = \frac{1}{b}\left(\pi + \arctan\frac{b}{a}\right)$. The argument here is the same as in case c) except for the calculations in (F.5) and (F.6):
$$\lim_{\delta\to 0}\arctan\frac{b}{a} = \lim_{\delta\to 0}\arctan\left(-\delta^{-1/2}\,\frac{\beta\mu}{1-\gamma}\sqrt{2(3-2\gamma)}\right) = -\pi/2, \tag{F.8}$$
$$\lim_{\delta\to 0}\tau^\star(\delta) = \lim_{\delta\to 0}\frac{\pi+\arctan(b/a)}{b} = +\infty, \tag{F.9}$$
where the last step follows since the numerator converges to $\pi/2$ and the denominator converges to $0$.

e) $0 < \rho < \sqrt{1.5-\gamma}$. Now both $a$ and $b$ are positive, and $\tau^\star(\delta) = \frac{1}{b}\arctan\frac{b}{a}$. A calculation similar to (F.5) and (F.6) yields $\lim_{\delta\to 0}\tau^\star(\delta) = +\infty$. So we can still choose $\bar\delta$ such that $\tau^\star(\delta) > T$ for $\delta\in(0,\bar\delta]$. The problem is then again reduced to the finiteness of $\lim_{\delta\to 0}A^\delta(\tau)$, which is proved by (F.7).

f) $\rho = \sqrt{1.5-\gamma}$. In this case the first term of $\Delta$ vanishes and the second term is dominant. Therefore $\Delta$ is negative, and the solutions $A^\delta(\tau)$ and $B^\delta(\tau)$ are of type (5.5). This case is similar to case e), where both $a$ and $b$ are positive and $\tau^\star(\delta) = \frac{1}{b}\arctan\frac{b}{a}$, except for the calculations in (F.5) and (F.6):
$$\frac{b}{a} = \frac{\sqrt{-\Delta}}{q_1} = \frac{2\sqrt{\frac{\rho\mu\beta}{1-\gamma}}\,\delta^{3/4}+O(\delta^{5/4})}{\frac{2\rho\mu\beta}{1-\gamma}\sqrt\delta-\delta} = \sqrt{\frac{1-\gamma}{\rho\mu\beta}}\,\delta^{1/4}+O(\delta^{3/4}), \tag{F.11}$$
$$\lim_{\delta\to 0}\tau^\star(\delta) = \lim_{\delta\to 0}\frac{1}{b}\arctan\frac{b}{a} = +\infty. \tag{F.12}$$
The last step holds since $b$ is of order $\delta^{3/4}$ and $\arctan\frac{b}{a}$ is of order $\delta^{1/4}$.

g) $\sqrt{1.5-\gamma} < \rho \le 1$. If $\gamma = \frac{1}{2}$, this case is empty. In this case, $\Delta$ becomes positive again, and $A^\delta(\tau)$ is of type (5.4). However, here $\alpha_- < \alpha_+ < 0$, and $\tau^\star(\delta) = -\frac{1}{\alpha}\ln\frac{\alpha_+}{\alpha_-}$. If we can pick $\bar\delta$ such that $\tau^\star(\delta) > T$ for $\delta \le \bar\delta$, then, combining this with the computation of $\lim_{\delta\to 0}A^\delta(\tau)$ in a), we can still conclude that $A^\delta(\tau)$ is bounded on $[0,T]\times[0,\bar\delta]$.
Such a $\bar\delta$ exists since
$$\tau^\star(\delta) = -\frac{1}{\alpha}\ln\left(\frac{-q_1+\alpha}{-q_1-\alpha}\right) = \frac{\kappa+O(\delta^{1/2})}{\alpha} \to +\infty, \quad \text{as } \delta\to 0, \tag{F.13}$$
where $\kappa$ is a positive constant free of $\delta$ defined as $\kappa = \lim_{\delta\to 0}\left|\ln\frac{\alpha_+}{\alpha_-}\right|$.

B) $0 < \gamma < \frac{1}{2}$, where $\Delta$ is always negative.

a) $-1 \le \rho < 0$. The discussion here is the same as in A)c).

b) $\rho = 0$. The discussion here is the same as in A)d).

c) $0 < \rho \le 1$. The discussion here is the same as in A)e).

We have shown that in all cases $A^\delta(\tau)$ is uniformly bounded on $[0,\bar\delta]\times[0,T]$, say by $C(T)$. Then, again via (5.19), we obtain the uniform bound for $B^\delta(\tau)$:
$$B^\delta(\tau) \le \delta m C(T)T \le mC(T)T.$$
This concludes the verification of Assumption 2.11(iv).

References

L. B. Andersen and V. V. Piterbarg. Moment explosions in stochastic volatility models. Finance and Stochastics, 11(1):29–50, 2007.

G. Chacko and L. M. Viceira. Dynamic consumption and portfolio choice with stochastic volatility in incomplete markets. Review of Financial Studies, 18(4):1369–1402, 2005.

J. C. Cox and C. Huang. Optimal consumption and portfolio policies when asset prices follow a diffusion process. Journal of Economic Theory, 49:33–83, 1989.

J. Cvitanic and I. Karatzas. On portfolio optimization under "drawdown" constraints. IMA Volumes in Mathematics and its Applications, 65:35–35, 1995.

R. Elie and N. Touzi. Optimal lifetime consumption and investment under a drawdown constraint. Finance and Stochastics, 12:299–330, 2008.

J.-P. Fouque and R. Hu. Asymptotic methods for portfolio optimization problem in multiscale stochastic environments, 2016. In preparation.

J.-P. Fouque, G. Papanicolaou, R. Sircar, and K. Solna. Multiscale Stochastic Volatility for Equity, Interest-Rate and Credit Derivatives. Cambridge University Press, 2011.

J.-P. Fouque, R. Sircar, and T. Zariphopoulou. Portfolio optimization & stochastic volatility asymptotics. Mathematical Finance, 2016.

S. J. Grossman and Z. Zhou. Optimal investment strategies for controlling drawdowns. Mathematical Finance, 3:241–276, 1993.

P. Guasoni and J. Muhle-Karbe.
Portfolio choice with transaction costs: a user's guide. In Paris–Princeton Lectures on Mathematical Finance 2013, pages 169–201. Springer, 2013.

F. John. Partial Differential Equations, volume 1 of Applied Mathematical Sciences. Springer-Verlag, New York, 1982.

S. Källblad and T. Zariphopoulou. Qualitative analysis of optimal investment strategies in log-normal markets. Available at SSRN 2373587, 2014.

I. Karatzas and S. E. Shreve. Methods of Mathematical Finance. Springer Science & Business Media, 1998.

I. Karatzas, J. P. Lehoczky, and S. E. Shreve. Optimal portfolio and consumption decisions for a "small investor" on a finite horizon. SIAM Journal on Control and Optimization, 25(6):1557–1586, 1987.

M. J. Magill and G. M. Constantinides. Portfolio selection with transactions costs. Journal of Economic Theory, 13:245–263, 1976.

R. C. Merton. Lifetime portfolio selection under uncertainty: the continuous-time case. Review of Economics and Statistics, 51:247–257, 1969.

R. C. Merton. Optimum consumption and portfolio rules in a continuous-time model. Journal of Economic Theory, 3(4):373–413, 1971.

W. Schachermayer. Portfolio optimization in incomplete financial markets. Cattedra Galileiana, 2004.

T. Zariphopoulou. Optimal investment and consumption models with non-linear stock dynamics. Mathematical Methods of Operations Research, 50(2):271–296, 1999.