Rice formula for a random field from $\mathbb{R}^d$ to $\mathbb{R}^d$: a complete proof

Jean-Marc Azaïs, Laboratoire de Statistique et Probabilités, Université Paul Sabatier, Toulouse, France. [email protected]
Mario Wschebor, Centro de Matemática, Facultad de Ciencias, Universidad de la República, Montevideo, Uruguay. [email protected]

March 28, 2006

Let $I$ be a $d$-dimensional compact manifold and $X : I \to \mathbb{R}$ a Gaussian process with regular paths defined on some probability space $(\Omega, \mathcal{A}, P)$. Our purpose here is to give a complete proof of Rice's formula for the moments of the number of roots $N_u^Z(I)$ of the equation $Z(t) = u$ on the set $I$, where $\{Z(t) : t \in I\}$ is an $\mathbb{R}^d$-valued Gaussian field, $I$ is a subset of $\mathbb{R}^d$ and $u$ a given point in $\mathbb{R}^d$. We have also included the proof of the formula for higher moments, which in fact follows easily from that for the first moment. This text follows closely the proof in Azaïs and Wschebor (2005), where it is not given in complete form.

It should be pointed out that the validity of Rice's formula for Lebesgue-almost every $u \in \mathbb{R}^d$ is easy to prove (Brillinger, 1972), but this is insufficient for a number of standard applications. For example, assume $X : I \to \mathbb{R}$ is a real-valued random process and one wants to compute the moments of the number of critical points of $X$. Then one must take for $Z$ the random field $Z(t) = X'(t)$, and the formula one needs is for the precise value $u = 0$, so that a formula valid only for almost every $u$ does not solve the problem.

We use the following notations:

If $Z$ is a smooth function $U \to \mathbb{R}^d$, $U$ a subset of $\mathbb{R}^d$, its successive derivatives are denoted $Z'$, $Z''$, ..., $Z^{(k)}$ and considered respectively as linear, bilinear, ..., $k$-linear forms on $\mathbb{R}^d$.

$\dot I$, $\partial I$ and $\bar I$ are respectively the interior, the boundary and the closure of the set $I$.

If $\xi$ is a random vector with values in $\mathbb{R}^d$, we denote by $p_\xi(x)$ the value of the density of $\xi$ at the point $x$, by $E(\xi)$ its expectation and by $\operatorname{Var}(\xi)$ its variance-covariance matrix, whenever they exist. $\lambda$ is Lebesgue measure.
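Before the formal statements, a quick numerical sanity check may help fix ideas. In dimension $d = 1$, for the stationary Gaussian process $Z(t) = \xi\cos t + \eta\sin t$ with $\xi, \eta$ i.i.d. standard normal, the spectral moments are $\lambda_0 = \lambda_2 = 1$ and Rice's formula gives $E\,N_u^Z([0, 2\pi]) = 2e^{-u^2/2}$. The Monte Carlo sketch below (the process, grid sizes and function names are our illustrative choices, not part of the text) compares this value with crossing counts on a discretized grid.

```python
import numpy as np

# d = 1 sanity check for Rice's formula. For Z(t) = xi*cos(t) + eta*sin(t),
# xi, eta i.i.d. N(0,1), the spectral moments are lambda_0 = lambda_2 = 1,
# so Rice's formula gives E N_u^Z([0, 2*pi]) = 2 * exp(-u**2 / 2).

def crossings(path, u):
    """Count sign changes of path - u, i.e. u-level crossings on the grid."""
    s = np.sign(path - u)
    return int(np.sum(s[:-1] != s[1:]))

def mc_mean_crossings(u, n_paths=20000, n_grid=2000, seed=0):
    rng = np.random.default_rng(seed)
    t = np.linspace(0.0, 2 * np.pi, n_grid)
    xi, eta = rng.standard_normal((2, n_paths))
    paths = np.outer(xi, np.cos(t)) + np.outer(eta, np.sin(t))
    return float(np.mean([crossings(p, u) for p in paths]))

for u in (0.0, 1.0):
    print(u, mc_mean_crossings(u), 2 * np.exp(-u**2 / 2))
```

For this particular process the check can also be done by hand: the path is $R\cos(t - \varphi)$ with $R^2 = \xi^2 + \eta^2$, so the number of $u$-crossings on $[0, 2\pi]$ is $2\cdot\mathbf{1}_{\{R > |u|\}}$, whose expectation is $2P(R^2 > u^2) = 2e^{-u^2/2}$, in agreement with the formula.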
In all formulas below, when we write $E[g(\xi) \,/\, \eta]$, where the pair of random vectors $(\xi, \eta)$ has a joint Gaussian distribution and $g$ is some function, we mean the version of the conditional expectation obtained from the Gaussian regression of $\xi$ on $\eta$.

Theorem 1. Let $Z : I \to \mathbb{R}^d$, $I$ a compact subset of $\mathbb{R}^d$, be a random field and $u \in \mathbb{R}^d$. Assume that:
A0: $Z$ is Gaussian;
A1: $t \mapsto Z(t)$ is a.s. of class $C^1$;
A2: for each $t \in I$, $Z(t)$ has a non-degenerate distribution (i.e. $\operatorname{Var}(Z(t)) \succ 0$);
A3: $P(\{\exists t \in \dot I,\ Z(t) = u,\ \det(Z'(t)) = 0\}) = 0$;
A4: $\lambda(\partial I) = 0$.
Then
\[
E\big(N_u^Z(I)\big) = \int_I E\big(|\det(Z'(t))| \,/\, Z(t) = u\big)\, p_{Z(t)}(u)\, dt, \tag{1}
\]
and both members are finite.

Theorem 2. Let $k \ge 2$ be an integer. Assume the same hypotheses as in Theorem 1, except that A2 is replaced by
A'2: for $t_1, ..., t_k \in I$ pairwise different values of the parameter, the distribution of $(Z(t_1), ..., Z(t_k))$ does not degenerate in $(\mathbb{R}^d)^k$.
Then
\[
E\Big(N_u^Z(I)\big(N_u^Z(I) - 1\big) \cdots \big(N_u^Z(I) - k + 1\big)\Big)
= \int_{I^k} E\Big(\prod_{j=1}^k |\det(Z'(t_j))| \,/\, Z(t_1) = \cdots = Z(t_k) = u\Big)\, p_{Z(t_1),...,Z(t_k)}(u, ..., u)\, dt_1 \cdots dt_k, \tag{2}
\]
where both members may be infinite.

Remark. Note that Theorem 1 (resp. Theorem 2) remains valid, except for the finiteness of the expectation in Theorem 1, if $I$ is open and hypotheses A0, A1, A2 (resp. A'2) and A3 are verified. This follows immediately from the above statements. A standard extension argument shows that (1) holds true if one replaces $I$ by any Borel subset of $\dot I$.

Sufficient conditions for hypothesis A3 to hold are given by the next proposition.

Proposition 3. Let $Z : I \to \mathbb{R}^d$, $I$ a compact subset of $\mathbb{R}^d$, be a random field with paths of class $C^1$ and $u \in \mathbb{R}^d$. Assume that:
• $p_{Z(t)}(x) \le C$ for all $t \in I$ and $x$ in some neighbourhood of $u$;
• at least one of the two following hypotheses is satisfied:
a) a.s. $t \mapsto Z(t)$ is of class $C^2$;
b) $\alpha(\delta) = \sup_{t \in I,\, x \in V(u)} P\big(|\det(Z'(t))| < \delta \,/\, Z(t) = x\big) \to 0$ as $\delta \to 0$, where $V(u)$ is some neighbourhood of $u$.
Then A3 holds true.

Proof.
If condition a) holds true, the result is Lemma 5 in Cucker and Wschebor (2003). To prove it under condition b), assume with no loss of generality that $I = [0,1]^d$ and that $u = 0$. Put
\[
G_I = \big\{\exists t \in I,\ Z(t) = 0,\ \det(Z'(t)) = 0\big\}.
\]
Choose $\varepsilon > 0$, $\eta > 0$; there exists a positive number $M$ such that
\[
P(E_M) = P\Big(\sup_{t \in I} \|Z'(t)\| > M\Big) \le \varepsilon.
\]
Denote by $\omega_{\det}$ the modulus of continuity of $|\det(Z'(\cdot))|$ and choose $m$ large enough so that
\[
P(F_{m,\eta}) = P\Big(\omega_{\det}\Big(\frac{\sqrt d}{m}\Big) \ge \eta\Big) \le \varepsilon.
\]
Consider the partition of $I$ into $m^d$ small cubes with sides of length $1/m$. Let $C_{i_1...i_d}$ be such a cube and $t_{i_1...i_d}$ its centre ($1 \le i_1, ..., i_d \le m$). Then
\[
P(G_I) \le P(E_M) + P(F_{m,\eta}) + \sum_{1 \le i_1,...,i_d \le m} P\big(G_{C_{i_1...i_d}} \cap E_M^c \cap F_{m,\eta}^c\big). \tag{3}
\]
When the event in the term corresponding to $i_1...i_d$ of the last sum occurs, we have
\[
|Z_j(t_{i_1...i_d})| \le \frac{M\sqrt d}{m}, \qquad j = 1, ..., d,
\]
where $Z_j$ denotes the $j$-th coordinate of $Z$, and
\[
\big|\det\big(Z'(t_{i_1...i_d})\big)\big| < \eta.
\]
So, if $m$ is chosen sufficiently large so that $V(0)$ contains the ball centred at $0$ with radius $\frac{M\sqrt d}{m}$, one has
\[
P(G_I) \le 2\varepsilon + m^d \Big(\frac{2M\sqrt d}{m}\Big)^d C\,\alpha(\eta).
\]
Since $\varepsilon$ and $\eta$ are arbitrarily small, the result follows.

Lemma 4. With the notations of Theorem 1, suppose that A1 and A4 hold true and that $p_{Z(t)}(x) \le C$ for all $t \in I$ and $x$ in some neighbourhood of $u$. Then $P\big(N_u^Z(\partial I) \ne 0\big) = 0$.

Proof: We use the notation of Proposition 3, with the same definition of $E_M$, except that we do not suppose that $I = [0,1]^d$. Since $\partial I$ has zero measure, for each positive integer $m$ it can be covered by $h(m)$ cubes $C_1, ..., C_{h(m)}$ with centres $t_1, ..., t_{h(m)}$ and side lengths $s_1, ..., s_{h(m)}$ smaller than $1/m$, such that
\[
\sum_{i=1}^{h(m)} (s_i)^d \to 0 \quad \text{as } m \to +\infty.
\]
So,
\[
P\big(N_u^Z(\partial I) \ne 0\big) \le P(E_M) + \sum_{i=1}^{h(m)} P\big(\{N_u^Z(C_i) \ne 0\} \cap E_M^c\big)
\le \varepsilon + \sum_{i=1}^{h(m)} P\Big(|Z_j(t_i) - u_j| \le \frac{M\sqrt d}{2}\, s_i\ \ \forall\, j = 1, ..., d\Big)
\le \varepsilon + C \sum_{i=1}^{h(m)} \big(\sqrt d\, M s_i\big)^d.
\]
This gives the result.

Lemma 5. Let $Z : I \to \mathbb{R}^d$, $I$ a compact subset of $\mathbb{R}^d$, be a $C^1$ function and $u$ a point in $\mathbb{R}^d$.
Assume that:
a) $\inf_{t \in Z^{-1}(\{u\})} \lambda_{\min}\big(Z'(t)\big) \ge \Delta > 0$;
b) $\omega_{Z'}(\eta) < \Delta/d$, where $\omega_{Z'}$ is the continuity modulus of $Z'$, defined as the maximum of the continuity moduli of its entries, and $\eta$ is a positive number.
Then, if $t_1, t_2$ are two distinct roots of the equation $Z(t) = u$ such that the segment $[t_1, t_2]$ is contained in $I$, the Euclidean distance between $t_1$ and $t_2$ is greater than $\eta$.

Recall that $\lambda_{\min}\big(Z'(t)\big)$ is the inverse of $\|\,(Z'(t))^{-1}\|$.

Proof: Set $\tilde\eta = \|t_1 - t_2\|$, $v = \frac{t_1 - t_2}{\|t_1 - t_2\|}$. Using the mean value theorem, for $i = 1, ..., d$ there exists $\xi_i \in [t_1, t_2]$ such that
\[
\big(Z'(\xi_i)\, v\big)_i = 0.
\]
Thus
\[
\big|\big(Z'(t_1)\, v\big)_i\big| = \big|\big(Z'(t_1)\, v\big)_i - \big(Z'(\xi_i)\, v\big)_i\big| \le \sum_{k=1}^d \big|Z'(t_1)_{ik} - Z'(\xi_i)_{ik}\big|\, |v_k| \le \omega_{Z'}(\tilde\eta) \sum_{k=1}^d |v_k| \le \omega_{Z'}(\tilde\eta)\, \sqrt d.
\]
In conclusion,
\[
\Delta \le \lambda_{\min}\big(Z'(t_1)\big) \le \|Z'(t_1)\, v\| \le \omega_{Z'}(\tilde\eta)\, d,
\]
which implies $\tilde\eta > \eta$, since otherwise $\omega_{Z'}(\tilde\eta) \le \omega_{Z'}(\eta) < \Delta/d$ would give a contradiction.

Proof of Theorem 1: Consider a continuous non-decreasing function $F$ such that
\[
F(x) = 0 \ \text{ for } x \le 1/2, \qquad F(x) = 1 \ \text{ for } x \ge 1.
\]
Let $\Delta$ and $\eta$ be positive real numbers. Define the random function
\[
\alpha_{\Delta,\eta}(u) = F\Big(\frac{1}{2\Delta} \inf_{s \in I}\big[\lambda_{\min}\big(Z'(s)\big) + \|Z(s) - u\|\big]\Big) \times \Big(1 - F\Big(\frac d\Delta\, \omega_{Z'}(\eta)\Big)\Big). \tag{4}
\]
If $\alpha_{\Delta,\eta}(u) > 0$ and $N_u^Z(I_{-\eta})$ does not vanish, conditions a) and b) in Lemma 5 are satisfied, where
\[
I_{-\eta} = \{t \in I : \|t - s\| \ge \eta \ \text{for all } s \notin I\}.
\]
Hence, in each ball with diameter $\eta/2$ centred at a point in $I_{-\eta}$ there is at most one root of the equation $Z(t) = u$, and a compactness argument shows that $N_u^Z(I_{-\eta})$ is bounded by a constant $C(\eta, I)$ depending only on $\eta$ and on the set $I$.

Take now any real-valued non-random continuous function $f : \mathbb{R}^d \to \mathbb{R}$ with compact support. Because of the coarea formula (Federer, 1969, Th. 3.2.3), since a.s. $Z$ is Lipschitz and $\alpha_{\Delta,\eta}(u)\, f(u)$ is integrable:
\[
\int_{\mathbb{R}^d} f(u)\, N_u^Z(I_{-\eta})\, \alpha_{\Delta,\eta}(u)\, du = \int_{I_{-\eta}} |\det(Z'(t))|\, f\big(Z(t)\big)\, \alpha_{\Delta,\eta}\big(Z(t)\big)\, dt.
\]
Taking expectations on both sides,
\[
\int_{\mathbb{R}^d} f(u)\, E\big(N_u^Z(I_{-\eta})\, \alpha_{\Delta,\eta}(u)\big)\, du = \int_{\mathbb{R}^d} f(u) \Big[\int_{I_{-\eta}} E\big(|\det(Z'(t))|\, \alpha_{\Delta,\eta}(u) \,/\, Z(t) = u\big)\, p_{Z(t)}(u)\, dt\Big]\, du.
\]
It follows that the two functions
(i) $E\big(N_u^Z(I_{-\eta})\, \alpha_{\Delta,\eta}(u)\big)$,
(ii) $\int_{I_{-\eta}} E\big(|\det(Z'(t))|\, \alpha_{\Delta,\eta}(u) \,/\, Z(t) = u\big)\, p_{Z(t)}(u)\, dt$
coincide Lebesgue-almost everywhere as functions of $u$.

Let us prove that both functions are continuous, hence equal for every $u \in \mathbb{R}^d$. Fix $u = u_0$ and let us show that the function in (i) is continuous at $u = u_0$. Consider the random variable inside the expectation sign in (i). Almost surely, there is no point $t$ in $Z^{-1}(\{u_0\})$ such that $\det(Z'(t)) = 0$. By the local inversion theorem, $Z(\cdot)$ is invertible in some neighbourhood of each point belonging to $Z^{-1}(\{u_0\})$, and the distance from $Z(t)$ to $u_0$ is bounded below by a positive number for $t \in I_{-\eta}$ outside the union of these neighbourhoods. This implies that, a.s., as a function of $u$, $N_u^Z(I_{-\eta})$ is constant in some (random) neighbourhood of $u_0$. On the other hand, it is clear from its definition that the function $u \mapsto \alpha_{\Delta,\eta}(u)$ is continuous and bounded. We may now apply dominated convergence as $u \to u_0$, since $N_u^Z(I_{-\eta})\, \alpha_{\Delta,\eta}(u)$ is bounded by a constant that does not depend on $u$.

For the continuity of (ii), it is enough to prove that, for each $t \in I$, the conditional expectation in the integrand is a continuous function of $u$. Note that the random variable $|\det(Z'(t))|\, \alpha_{\Delta,\eta}(u)$ is a functional defined on $\{(Z(s), Z'(s)) : s \in I\}$. Perform a Gaussian regression of $\{(Z(s), Z'(s)) : s \in I\}$ with respect to the random vector $Z(t)$, that is, write
\[
Z(s) = Y^t(s) + \alpha^t(s)\, Z(t),
\]
\[
Z'_j(s) = Y_j^t(s) + \beta_j^t(s)\, Z(t), \qquad j = 1, ..., d,
\]
where $Z'_j(s)$ ($j = 1, ..., d$) denote the columns of $Z'(s)$, $Y^t(s)$ and $Y_j^t(s)$ are Gaussian vectors independent of $Z(t)$ for each $s \in I$, and the regression matrices $\alpha^t(s)$, $\beta_j^t(s)$ ($j = 1, ..., d$) are continuous functions of $s, t$ (take into account A2). Replacing in the conditional expectation, we are now able to get rid of the conditioning, and using the fact that the moments of the supremum of an a.s. bounded Gaussian process are finite, the continuity in $u$ follows by dominated convergence.

So, now we fix $u \in \mathbb{R}^d$ and let $\eta \downarrow 0$, then $\Delta \downarrow 0$, in that order, both in (i) and (ii). For (i) one can use Beppo Levi's theorem. Note that almost surely
\[
N_u^Z(I_{-\eta}) \uparrow N_u^Z(\dot I) = N_u^Z(I),
\]
where the last equality follows from Lemma 4. On the other hand, the same Lemma 4 together with A3 imply that, almost surely,
\[
\inf_{s \in I}\big[\lambda_{\min}\big(Z'(s)\big) + \|Z(s) - u\|\big] > 0,
\]
so that the first factor in the right-hand member of (4) increases to 1 as $\Delta$ decreases to zero. Hence, by Beppo Levi's theorem:
\[
\lim_{\Delta \downarrow 0}\, \lim_{\eta \downarrow 0}\, E\big(N_u^Z(I_{-\eta})\, \alpha_{\Delta,\eta}(u)\big) = E\big(N_u^Z(I)\big).
\]
For (ii), the argument is somewhat more involved, as we shall see below. Once this is done, to finish the proof, remark that standard Gaussian calculations show the finiteness of the right-hand member of (1).

So, let us turn to (ii). Fix $u$ for the remainder of the proof, and let us introduce some additional notation.

1. We denote by $M^t(s)$ the matrix having as $j$-th column $Y_j^t(s) + \beta_j^t(s)\, u$. Clearly, $M^t(s)$ is uniformly continuous as a function of the pair $s, t \in I$.

2. We change the notation slightly: instead of $\alpha_{\Delta,\eta}(u)$ we write $\alpha_{\Delta,\eta}(u, Z, I)$ to recall that $\alpha$ depends also on $Z$ and $I$.

We know, with the notation just introduced, that
\[
E\big(N_u^Z(I_{-\eta})\, \alpha_{\Delta,\eta}(u, Z, I)\big) = \int_{I_{-\eta}} E\big(|\det(Z'(t))|\, \alpha_{\Delta,\eta}(u, Z, I) \,/\, Z(t) = u\big)\, p_{Z(t)}(u)\, dt
= \int_{I_{-\eta}} E\Big(|\det(M^t(t))|\, \zeta\, \Big(1 - F\Big(\frac d\Delta\, \omega_M(\eta)\Big)\Big)\Big)\, p_{Z(t)}(u)\, dt,
\]
where
\[
\zeta = F\Big(\frac{1}{2\Delta} \min_{s \in I}\big[\lambda_{\min}\big(M^t(s)\big) + \|Y^t(s) + \alpha^t(s)\, u - u\|\big]\Big).
\]
There is no problem in passing to the limit as $\eta \downarrow 0$. We get:
\[
E\big(N_u^Z(I)\, \beta_\Delta(u, Z, I)\big) = \int_I E\big(|\det(M^t(t))|\, \zeta\big)\, p_{Z(t)}(u)\, dt, \tag{5}
\]
where
\[
\beta_\Delta(u, Z, I) = F\Big(\frac{1}{2\Delta} \min_{s \in I}\big[\lambda_{\min}\big(Z'(s)\big) + \|Z(s) - u\|\big]\Big).
\]
In the left-hand member of (5) we use that $\beta_\Delta(u, Z, I) \uparrow 1$ almost surely as $\Delta \downarrow 0$, as we have seen. Let us look at the right-hand member.
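As an aside, the Gaussian regression used in the continuity argument above can be illustrated numerically: for a jointly Gaussian pair $(\xi, \eta)$, writing $\xi = Y + \alpha\eta$ with $\alpha = \operatorname{Cov}(\xi, \eta)\operatorname{Var}(\eta)^{-1}$ makes $Y$ uncorrelated with, hence independent of, $\eta$, which is exactly what allows the conditioning to be removed. The covariance matrix in the sketch below is an arbitrary illustrative choice, not taken from the text.

```python
import numpy as np

# Gaussian regression: for jointly Gaussian (xi, eta), write
#   xi = Y + alpha @ eta,   alpha = Cov(xi, eta) @ Var(eta)^{-1}.
# Then Y is uncorrelated with eta and, by joint Gaussianity, independent of
# it, so E[g(xi) | eta = y] = E[g(Y + alpha @ y)]. The covariance matrix
# below is an arbitrary positive-definite example.

rng = np.random.default_rng(1)
cov = np.array([[2.0, 0.8, 0.3],
                [0.8, 1.5, 0.5],
                [0.3, 0.5, 1.0]])        # coordinates: (xi_1, xi_2, eta)
sample = rng.multivariate_normal(np.zeros(3), cov, size=200_000)
xi, eta = sample[:, :2], sample[:, 2:]

alpha = cov[:2, 2:] @ np.linalg.inv(cov[2:, 2:])   # regression matrix
Y = xi - eta @ alpha.T                             # regression residual

# Empirical cross-covariance between Y and eta should be near zero.
cross_cov = np.cov(np.hstack([Y, eta]).T)[:2, 2]
print(cross_cov)
```

The same computation with $\eta = Z(t)$ and $\xi = (Z(s), Z'(s))$ is what produces the fields $Y^t(s)$, $Y_j^t(s)$ and the regression matrices $\alpha^t(s)$, $\beta_j^t(s)$ of the proof.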
Let $\varepsilon > 0$ and consider the functions
\[
\overline\gamma^{\,t}(s) = \lambda_{\max}\big(M^t(s)^T M^t(s)\big) = \max_{\|x\|=1} x^T M^t(s)^T M^t(s)\, x,
\]
\[
\underline\gamma^{\,t}(s) = \lambda_{\min}\big(M^t(s)^T M^t(s)\big) = \min_{\|x\|=1} x^T M^t(s)^T M^t(s)\, x,
\]
which are obviously continuous as functions of the pair $s, t \in I$. Also, using standard Gaussian bounds, one can see that $\sup_{t,s \in I} |\det(M^t(s))| \in L^1$. Choose now:
• $\lambda > 0$ large enough so that
\[
E\Big(\sup_{t \in I} |\det(M^t(t))|\ \mathbf{1}_{\{\sup_{s,t \in I} \overline\gamma^{\,t}(s) > \lambda\}}\Big) < \varepsilon;
\]
• $\delta > 0$ small enough so that
\[
E\Big(\sup_{t \in I} |\det(M^t(t))|\ \mathbf{1}_{\{\omega_{\underline\gamma}(\delta) \ge \frac{\varepsilon^2}{2\lambda^{d-1}}\}}\Big) < \varepsilon,
\]
where $\omega_{\underline\gamma}(\delta)$ denotes the continuity modulus of $\underline\gamma$ on $I \times I$.

Assume now that instead of the compact cube $I$ we consider a compact cube $J \subset I$ such that $\operatorname{diam}(J) < \delta$. We apply (5) with $J$ instead of $I$:
\[
E\big(N_u^Z(J)\, \beta_\Delta(u, Z, J)\big) \ge \int_J E\big(|\det(M^t(t))|\, \zeta\, \mathbf{1}_{F_\varepsilon}\big)\, p_{Z(t)}(u)\, dt, \tag{6}
\]
where we denote
\[
F_\varepsilon = \Big\{\sup_{s,t \in I} \overline\gamma^{\,t}(s) \le \lambda,\ \ \omega_{\underline\gamma}(\delta) < \frac{\varepsilon^2}{2\lambda^{d-1}},\ \ |\det(M^t(t))| > \varepsilon\Big\}.
\]
We have:
\[
\omega \in F_\varepsilon \Longrightarrow |\det(M^t(t))| = \Big[\det\big(M^t(t)^T M^t(t)\big)\Big]^{1/2} \le \Big[\underline\gamma^{\,t}(t)\, \lambda^{d-1}\Big]^{1/2}
\Longrightarrow \underline\gamma^{\,t}(t) \ge \frac{|\det(M^t(t))|^2}{\lambda^{d-1}} > \frac{\varepsilon^2}{\lambda^{d-1}},
\]
and since in the integral in the right-hand member of (6) one has $\|s - t\| < \delta$, it follows that
\[
\omega \in F_\varepsilon \Longrightarrow \underline\gamma^{\,t}(s) > \frac{\varepsilon^2}{2\lambda^{d-1}}\ \ \forall\, s, t \in J \Longrightarrow \forall\, t \in J,\ \min_{s \in J}\big[\lambda_{\min}\big(M^t(s)\big) + \|Y^t(s) + \alpha^t(s)\, u - u\|\big] > 0.
\]
Hence,
\[
\omega \in F_\varepsilon \Longrightarrow F\Big(\frac{1}{2\Delta} \min_{s \in J}\big[\lambda_{\min}\big(M^t(s)\big) + \|Y^t(s) + \alpha^t(s)\, u - u\|\big]\Big) \uparrow 1 \ \text{ as } \Delta \downarrow 0,
\]
and passing to the limit in (6):
\[
E\big(N_u^Z(J)\big) \ge \int_J E\big(|\det(M^t(t))|\, \mathbf{1}_{F_\varepsilon}\big)\, p_{Z(t)}(u)\, dt = \int_J E\big(|\det(M^t(t))|\big)\, p_{Z(t)}(u)\, dt - R(\varepsilon),
\]
where
\[
R(\varepsilon) = \int_J E\big(|\det(M^t(t))|\, \mathbf{1}_{F_\varepsilon^C}\big)\, p_{Z(t)}(u)\, dt.
\]
Now, recalling the definition of $F_\varepsilon$:
\[
0 \le E\big(|\det(M^t(t))|\, \mathbf{1}_{F_\varepsilon^C}\big) \le E\big(|\det(M^t(t))|\, \mathbf{1}_{\{\sup_{s,t \in I} \overline\gamma^{\,t}(s) > \lambda\}}\big) + E\big(|\det(M^t(t))|\, \mathbf{1}_{\{\omega_{\underline\gamma}(\delta) \ge \frac{\varepsilon^2}{2\lambda^{d-1}}\}}\big) + E\big(|\det(M^t(t))|\, \mathbf{1}_{\{|\det(M^t(t))| \le \varepsilon\}}\big) < 3\varepsilon.
\]
So,
\[
E\big(N_u^Z(J)\big) \ge \int_J E\big(|\det(M^t(t))|\big)\, p_{Z(t)}(u)\, dt - 3\varepsilon \int_J p_{Z(t)}(u)\, dt. \tag{7}
\]
To finish, perform a partition of the original cube $I$ into small cubes $I_k$, each having diameter smaller than $\delta$. It is then clear, using additivity, that
\[
E\big(N_u^Z(I)\big) \ge \int_I E\big(|\det(M^t(t))|\big)\, p_{Z(t)}(u)\, dt - 3\varepsilon \int_I p_{Z(t)}(u)\, dt. \tag{8}
\]
Since $\varepsilon$ can be chosen arbitrarily small, it follows that
\[
E\big(N_u^Z(I)\big) \ge \int_I E\big(|\det(Z'(t))| \,/\, Z(t) = u\big)\, p_{Z(t)}(u)\, dt.
\]
The converse inequality is immediate, since $\alpha_{\Delta,\eta} \le 1$.

Remark on the proof. One should pay attention to the fact that, in the last part of the proof, one cannot pass to the limit as $\varepsilon \downarrow 0$ in (7), since the set $J$ must have diameter smaller than $\delta$ and $\delta$ depends on $\varepsilon$. But one can do so in (8).

Proof of Theorem 2: For each $\delta > 0$, define the domain
\[
D_{k,\delta}(I) = \big\{(t_1, ..., t_k) \in I^k : \|t_i - t_j\| \ge \delta \ \text{if } i \ne j,\ i, j = 1, ..., k\big\}
\]
and the process $\widetilde Z$, defined for $(t_1, ..., t_k) \in D_{k,\delta}(I)$ by
\[
\widetilde Z(t_1, ..., t_k) = \big(Z(t_1), ..., Z(t_k)\big).
\]
It is clear that $\widetilde Z$ satisfies the hypotheses of Theorem 1 for every value $(u, ..., u) \in (\mathbb{R}^d)^k$. So,
\[
E\Big(N^{\widetilde Z}_{(u,...,u)}\big(D_{k,\delta}(I)\big)\Big) = \int_{D_{k,\delta}(I)} E\Big(\prod_{j=1}^k |\det(Z'(t_j))| \,/\, Z(t_1) = \cdots = Z(t_k) = u\Big)\, p_{Z(t_1),...,Z(t_k)}(u, ..., u)\, dt_1 \cdots dt_k. \tag{9}
\]
To finish, let $\delta \downarrow 0$, note that $N_u^Z(I)\big(N_u^Z(I) - 1\big) \cdots \big(N_u^Z(I) - k + 1\big)$ is the monotone limit of $N^{\widetilde Z}_{(u,...,u)}\big(D_{k,\delta}(I)\big)$, and that the diagonal
\[
D_k(I) = \big\{(t_1, ..., t_k) \in I^k : t_i = t_j \ \text{for some pair } i \ne j\big\}
\]
has zero Lebesgue measure in $(\mathbb{R}^d)^k$.

Remark. It is easy to adapt the proofs of Theorems 1 and 2 to certain classes of non-Gaussian processes. For example, the statement of Theorem 1 remains valid if one replaces hypotheses A0 and A2 respectively by the following B0 and B2:

B0: $Z(t) = H(Y(t))$ for $t \in I$, where $Y : I \to \mathbb{R}^n$ is a Gaussian process with $C^1$ paths such that for each $t \in I$, $Y(t)$ has a non-degenerate distribution, and $H : \mathbb{R}^n \to \mathbb{R}^d$ is a $C^1$ function.

B2: for each $t \in I$, $Z(t)$ has a density $p_{Z(t)}$ which is continuous as a function of $(t, u)$.

Note that B0 and B2 together imply that $n \ge d$.
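To see hypothesis B0 at work in the simplest case $n = d = 1$: if $H$ is strictly increasing, the $u$-level roots of $Z = H(Y)$ are exactly the $H^{-1}(u)$-level roots of $Y$, so the two counts coincide path by path and Rice's formula for $Z$ reduces to the Gaussian one for $Y$. The sketch below checks this on a discretized grid; the choice $H(y) = y + y^3$ and all names are ours, for illustration only.

```python
import numpy as np

# Hypothesis B0 with n = d = 1: Z(t) = H(Y(t)), Y Gaussian, H strictly
# increasing and C^1. Then {Z(t) = u} = {Y(t) = H^{-1}(u)}, so the u-level
# crossing count of Z equals the H^{-1}(u)-level crossing count of Y,
# path by path. Illustrative choice: H(y) = y + y**3.

def H(y):
    return y + y**3

def H_inv(u, lo=-10.0, hi=10.0, iters=200):
    # Bisection; the root is unique because H is strictly increasing.
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if H(mid) < u else (lo, mid)
    return 0.5 * (lo + hi)

def crossings(path, level):
    s = np.sign(path - level)
    return int(np.sum(s[:-1] != s[1:]))

rng = np.random.default_rng(2)
t = np.linspace(0.0, 2 * np.pi, 2000)
u = 1.5
v = H_inv(u)                                   # H(v) = u
mismatches = 0
for _ in range(500):
    xi, eta = rng.standard_normal(2)
    y = xi * np.cos(t) + eta * np.sin(t)       # Gaussian path
    mismatches += int(crossings(H(y), u) != crossings(y, v))
print(mismatches)
```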
The only change to be introduced in the proof of the theorem is in the continuity of (ii), where the regression is performed on $Y(t)$ instead of $Z(t)$.

Similarly, the statement of Theorem 2 remains valid if we replace A0 by B0 and add the requirement that the joint density of $(Z(t_1), ..., Z(t_k))$ be a continuous function of $t_1, ..., t_k, u$ for pairwise different $t_1, ..., t_k$.

Now consider a process $X$ from $I$ to $\mathbb{R}$ and define
\[
M^X_{u,1}(I) = \#\big\{t \in I : X(\cdot) \ \text{has a local maximum at the point } t,\ X(t) > u\big\},
\]
\[
M^X_{u,2}(I) = \#\big\{t \in I : X'(t) = 0,\ X(t) > u\big\}.
\]
The problem of writing Rice formulae for the factorial moments of these random variables can be considered as a particular case of the previous one, and the proofs are the same, mutatis mutandis. For further use, we state as a theorem Rice's formula for the expectation. For brevity we do not state the analogue of Theorem 2, which holds true similarly.

Theorem 6. Let $X : I \to \mathbb{R}$, $I$ a compact subset of $\mathbb{R}^d$, be a random field. Let $u \in \mathbb{R}$ and define $M^X_{u,i}(I)$, $i = 1, 2$, as above. For each $d \times d$ real symmetric matrix $M$, we put
\[
\delta_1(M) := |\det(M)|\, \mathbf{1}_{M \prec 0}, \qquad \delta_2(M) := |\det(M)|.
\]
Assume:
A0: $X$ is Gaussian;
A''1: a.s. $t \mapsto X(t)$ is of class $C^2$;
A''2: for each $t \in I$, $(X(t), X'(t))$ has a non-degenerate distribution in $\mathbb{R}^1 \times \mathbb{R}^d$;
A''3: either a.s. $t \mapsto X(t)$ is of class $C^3$, or
\[
\alpha(\delta) = \sup_{t \in I,\, x' \in V(0)} P\big(|\det(X''(t))| < \delta \,/\, X'(t) = x'\big) \to 0
\]
as $\delta \to 0$, where $V(0)$ denotes some neighbourhood of $0$;
A4: $\partial I$ has zero Lebesgue measure.
Then, for $i = 1, 2$:
\[
E\big(M^X_{u,i}(I)\big) = \int_u^{+\infty} dx \int_I E\big(\delta_i(X''(t)) \,/\, X(t) = x,\ X'(t) = 0\big)\, p_{X(t),X'(t)}(x, 0)\, dt,
\]
and both members are finite.

References

Azaïs, J-M.; Wschebor, M. "On the Distribution of the Maximum of a Gaussian Field with d Parameters." Annals of Applied Probability, Vol. 15, No. 1A, 254-278, 2005.
Brillinger, D. R. "On the number of solutions of systems of random equations." The Annals of Mathematical Statistics, 43, 534-540, 1972.
Cucker, F.; Wschebor, M. "On the Expected Condition Number of Linear Programming Problems." Numerische Mathematik, 94, 3, 419-478, 2003.
Federer, H. Geometric Measure Theory. Springer-Verlag, New York, 1969.