Hindawi Publishing Corporation
Journal of Probability and Statistics
Volume 2014, Article ID 403764, 12 pages
http://dx.doi.org/10.1155/2014/403764

Research Article

A General Result on the Mean Integrated Squared Error of the Hard Thresholding Wavelet Estimator under $\alpha$-Mixing Dependence

Christophe Chesneau

Laboratoire de Mathématiques Nicolas Oresme, Université de Caen, BP 5186, 14032 Caen Cedex, France

Correspondence should be addressed to Christophe Chesneau; [email protected]

Received 10 April 2013; Accepted 22 October 2013; Published 19 January 2014

Academic Editor: Zhidong Bai

Copyright © 2014 Christophe Chesneau. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

We consider the estimation of an unknown function $f$ for weakly dependent data ($\alpha$-mixing) in a general setting. Our contribution is theoretical: we prove that a hard thresholding wavelet estimator attains a sharp rate of convergence under the mean integrated squared error (MISE) over Besov balls without imposing overly restrictive assumptions on the model. Applications are given for two types of inverse problems, namely deconvolution density estimation and density estimation in a GARCH-type model; both improve existing results in this dependent context. A further application concerns the regression model with random design.

1. Introduction

A general nonparametric problem is adopted: we aim to estimate an unknown function $f$ from $n$ random variables $Y_1, \ldots, Y_n$ drawn from a strictly stationary stochastic process $(Y_t)_{t \in \mathbb{Z}}$. We suppose that $(Y_t)_{t \in \mathbb{Z}}$ has a weak dependence structure; the $\alpha$-mixing case is considered. This kind of dependence naturally appears in numerous models such as Markov chains, GARCH-type models, and discretely observed diffusions (see, e.g., [1–3]). The problems where $f$ is the density of $Y_1$ or a regression function have received a lot of attention. A partial list of related works includes Robinson [4], Roussas [5, 6], Truong and Stone [7], Tran [8], Masry [9, 10], Masry and Fan [11], Bosq [12], and Liebscher [13].

For an efficient estimation of $f$, many methods can be considered, the most popular being based on kernels, splines, and wavelets. In this note we deal with wavelet methods, which were introduced in the i.i.d. setting by Donoho and Johnstone [14, 15] and Donoho et al. [16, 17]. These methods enjoy remarkable local adaptivity against discontinuities and spatially varying degrees of oscillation. Complete reviews and discussions on wavelets in statistics can be found in, for example, Antoniadis [18] and Härdle et al. [19]. In the context of $\alpha$-mixing dependence, various wavelet methods have been developed for a wide variety of nonparametric problems. Recent contributions include Leblanc [20], Tribouley and Viennet [21], Masry [22], Patil and Truong [23], Doosti et al. [24], Doosti and Niroumand [25], Doosti et al. [26], Cai and Liang [27], Niu and Liang [28], Benatia and Yahia [29], Chesneau [30–32], Chaubey and Shirazi [33], and Abbaszadeh and Emadi [34].

In the general dependent setting described above, we provide a theoretical contribution to the performance of a wavelet estimator based on hard thresholding. This nonlinear wavelet procedure has the feature of being fully adaptive and efficient over a large class of functions $f$ (see, e.g., [14–17, 35]).
Following the spirit of Kerkyacharian and Picard [36], we determine assumptions on $(Y_t)_{t \in \mathbb{Z}}$ and on the wavelet basis which ensure that the considered estimator attains a fast rate of convergence under the MISE over Besov balls. The obtained rate of convergence often corresponds to the near optimal one in the minimax sense for the standard i.i.d. case. The originality of our result is to be general and sharp: it can be applied to nonparametric models of different natures and it improves some existing results. This fact is illustrated by the density deconvolution estimation problem and the density estimation problem in a GARCH-type model, improving ([30], Proposition 5.1) and ([31], Theorem 2), respectively. A last part is devoted to the regression model with random design; the obtained result completes the one of Patil and Truong [23].

The organization of this note is as follows. In the next section we describe the considered wavelet setting. The hard thresholding estimator and its rate of convergence under the MISE over Besov balls are presented in Section 3. Applications of our general result are given in Section 4. The proofs are carried out in Section 5.

2. Wavelets and Besov Balls

In this section we introduce some notations corresponding to wavelets and Besov balls.

2.1. Wavelet Basis. We consider the wavelet basis on $[0,1]$ constructed from the Daubechies wavelets db2N with $N \ge 1$ (see, e.g., [37]). A brief description of this basis is given below. Let $\phi$ and $\psi$ be the initial wavelet functions of the family db2N. These functions have the particularity of being compactly supported and of belonging to the class $\mathcal{C}^a$ for $N > 5a$. For any $j \ge 0$, we set $\Lambda_j = \{0, \ldots, 2^j - 1\}$ and, for $k \in \Lambda_j$,

$$\phi_{j,k}(x) = 2^{j/2}\phi(2^j x - k), \qquad \psi_{j,k}(x) = 2^{j/2}\psi(2^j x - k). \quad (1)$$

With appropriate treatments at the boundaries, there exists an integer $\tau$ such that, for any integer $\ell \ge \tau$, the collection $\mathcal{B} = \{\phi_{\ell,k},\ k \in \Lambda_\ell;\ \psi_{j,k},\ j \in \mathbb{N} - \{0, \ldots, \ell-1\},\ k \in \Lambda_j\}$ is an orthonormal basis of $\mathbb{L}^2([0,1])$, where

$$\mathbb{L}^2([0,1]) = \Big\{ h : [0,1] \to \mathbb{R};\ \|h\|_2 = \Big(\int_0^1 |h(x)|^2\,dx\Big)^{1/2} < \infty \Big\}. \quad (2)$$

For any integer $\ell \ge \tau$ and $h \in \mathbb{L}^2([0,1])$, we have the following wavelet expansion:

$$h(x) = \sum_{k \in \Lambda_\ell} \alpha_{\ell,k}\phi_{\ell,k}(x) + \sum_{j=\ell}^{\infty}\sum_{k \in \Lambda_j} \beta_{j,k}\psi_{j,k}(x), \qquad x \in [0,1], \quad (3)$$

where $\alpha_{j,k}$ and $\beta_{j,k}$ denote the wavelet coefficients of $h$ defined by

$$\alpha_{j,k} = \int_0^1 h(x)\phi_{j,k}(x)\,dx, \qquad \beta_{j,k} = \int_0^1 h(x)\psi_{j,k}(x)\,dx. \quad (4)$$

Technical details can be found in, for example, Cohen et al. [38] and Mallat [39].

Remark 1. We have chosen a wavelet basis on $[0,1]$ to fix the notations; a wavelet basis on another interval can be considered in the rest of the study without affecting the results.

2.2. Besov Balls. In the main result of this paper, we investigate the MISE rate of the proposed estimator by assuming that the unknown function of interest $f$ belongs to a wide class of functions: the Besov class. Its definition in terms of wavelet coefficients is the following. We say that $f \in B^s_{p,r}(M)$ with $s > 0$, $p, r \ge 1$ and $M > 0$ if and only if there exists a constant $C > 0$ such that the wavelet coefficients of $f$ given by (4) satisfy

$$2^{\tau(1/2 - 1/p)}\Big(\sum_{k \in \Lambda_\tau} |\alpha_{\tau,k}|^p\Big)^{1/p} + \Big(\sum_{j=\tau}^{\infty}\Big(2^{j(s + 1/2 - 1/p)}\Big(\sum_{k \in \Lambda_j}|\beta_{j,k}|^p\Big)^{1/p}\Big)^r\Big)^{1/r} \le C, \quad (5)$$

with the usual modifications if $p = \infty$ or $r = \infty$. Note that, for particular choices of $s$, $p$, and $r$, $B^s_{p,r}(M)$ contains the classical Hölder and Sobolev balls (see, e.g., [40] and [19]).
3. Statistical Framework, Estimator and Result

3.1. Statistical Framework. As mentioned in Section 1, a nonparametric estimation setting as general as possible is adopted: we aim to estimate an unknown function $f \in \mathbb{L}^2([0,1])$ from $n$ random variables (or vectors) $Y_1, \ldots, Y_n$ drawn from a strictly stationary stochastic process $(Y_t)_{t \in \mathbb{Z}}$ defined on a probability space $(\Omega, \mathcal{A}, \mathbb{P})$. We suppose that $(Y_t)_{t \in \mathbb{Z}}$ has an $\alpha$-mixing dependence structure with exponential decay rate; that is, there exist two constants $\gamma > 0$ and $c > 0$ such that

$$\sup_{m \ge 1}\big(e^{cm}\alpha_m\big) \le \gamma, \quad (6)$$

where $\alpha_m = \sup_{(A,B) \in \mathcal{F}^Y_{-\infty,0} \times \mathcal{F}^Y_{m,\infty}} |\mathbb{P}(A \cap B) - \mathbb{P}(A)\mathbb{P}(B)|$, $\mathcal{F}^Y_{-\infty,0}$ is the $\sigma$-algebra generated by the random variables (or vectors) $\ldots, Y_{-1}, Y_0$, and $\mathcal{F}^Y_{m,\infty}$ is the $\sigma$-algebra generated by $Y_m, Y_{m+1}, \ldots$. The $\alpha$-mixing condition is reasonably weak; it is satisfied by a wide variety of models including Markov chains, GARCH-type models, and discretely observed diffusions (see, for instance, [1–3, 41]). The considered estimator for $f$ is presented below.

3.2. Estimator. We define the hard thresholding wavelet estimator $\hat f$ by

$$\hat f(x) = \sum_{k \in \Lambda_{j_0}}\hat\alpha_{j_0,k}\phi_{j_0,k}(x) + \sum_{j=j_0}^{j_1}\sum_{k\in\Lambda_j}\hat\beta_{j,k}\,1_{\{|\hat\beta_{j,k}| \ge \kappa\lambda_n\}}\psi_{j,k}(x), \quad (7)$$

where

$$\hat\alpha_{j,k} = \frac{1}{n}\sum_{i=1}^n q(\phi_{j,k}, Y_i), \qquad \hat\beta_{j,k} = \frac{1}{n}\sum_{i=1}^n q(\psi_{j,k}, Y_i), \quad (8)$$

$1$ is the indicator function, $\kappa > 0$ is a large enough constant, $j_0$ is the integer satisfying

$$2^{j_0} = [\tau \ln n], \quad (9)$$

where $[a]$ denotes the integer part of $a$, $j_1$ is the integer satisfying

$$2^{j_1} = \Big[\Big(\frac{n}{(\ln n)^3}\Big)^{1/(2\delta+1)}\Big], \quad (10)$$

and

$$\lambda_n = 2^{\delta j}\sqrt{\frac{\ln n}{n}}. \quad (11)$$

Here it is supposed that there exists a function $q : \mathbb{L}^2([0,1]) \times Y_1(\Omega) \to \mathbb{C}$ such that

(H1) for $\gamma \in \{\phi, \psi\}$, any integer $j \ge \tau$ and $k \in \Lambda_j$,
$$\mathbb{E}\big(q(\gamma_{j,k}, Y_1)\big) = \int_0^1 f(x)\gamma_{j,k}(x)\,dx, \quad (12)$$
where $\mathbb{E}$ denotes the expectation;

(H2) there exist two constants $C > 0$ and $\delta \ge 0$ satisfying, for $\gamma \in \{\phi, \psi\}$, any integer $j \ge \tau$ and $k \in \Lambda_j$,
(i) $\sup_{x \in Y_1(\Omega)} |q(\gamma_{j,k}, x)| \le C 2^{\delta j} 2^{j/2}$,
(ii) $\mathbb{E}(|q(\gamma_{j,k}, Y_1)|^2) \le C 2^{2\delta j}$,
(iii) for any $m \in \{1, \ldots, n-1\}$,
$$\big|\mathrm{Cov}\big(q(\gamma_{j,k}, Y_{m+1}), q(\gamma_{j,k}, Y_1)\big)\big| \le C 2^{2\delta j} 2^{-j}, \quad (13)$$
where $\mathrm{Cov}$ denotes the covariance, that is, $\mathrm{Cov}(U, V) = \mathbb{E}(U\overline{V}) - \mathbb{E}(U)\mathbb{E}(\overline{V})$, and $\overline{V}$ denotes the complex conjugate of $V$.

For well-known nonparametric models in the i.i.d. setting, hard thresholding wavelet estimators and important related results can be found in, for example, Donoho and Johnstone [14, 15], Donoho et al. [16, 17], Delyon and Juditsky [35], Kerkyacharian and Picard [36], and Fan and Koo [42]. In the $\alpha$-mixing context, $\hat f$ defined by (7) is a general and improved version of the estimator considered in Chesneau [30, 31]. The main differences are the presence of the tuning parameter $\delta$ and the global definition of the function $q$, which offers numerous possibilities of application; three of them are explored in Section 4.

Comments on the Assumptions. Assumption (H1) ensures that the estimators in (8) are unbiased for $\alpha_{j,k}$ and $\beta_{j,k}$ given by (4), whereas (H2) governs their good performance; see Proposition 10. These assumptions are not too restrictive. For instance, if we consider the standard density estimation problem where $(Y_t)_{t\in\mathbb{Z}}$ are i.i.d. random variables with bounded density $f$, the function $q(\gamma, x) = \gamma(x)$ satisfies (H1) and (H2) with $\delta = 0$ (note that, thanks to the independence of $(Y_t)_{t\in\mathbb{Z}}$, the covariance term in (H2)-(iii) is zero). The technical details are given in Donoho et al. [17]. Lemma 2 describes a simple situation in which assumption (H2)-(iii) is satisfied.

Lemma 2. We make the following assumptions.

(F1) Let $u$ be the density of $Y_1$ and let $u_{(Y_1, Y_{m+1})}$ be the density of $(Y_1, Y_{m+1})$ for any $m \in \mathbb{Z}$. We suppose that there exists a constant $C > 0$ such that
$$\sup_{m \in \{1,\ldots,n-1\}}\ \sup_{(x,y)\in Y_1(\Omega)\times Y_{m+1}(\Omega)} \big|u_{(Y_1,Y_{m+1})}(x,y) - u(x)u(y)\big| \le C. \quad (14)$$

(F2) There exist two constants $C > 0$ and $\delta \ge 0$ satisfying, for $\gamma \in \{\phi, \psi\}$, any integer $j \ge \tau$ and $k \in \Lambda_j$,
$$\int_{Y_1(\Omega)} |q(\gamma_{j,k}, x)|\,dx \le C 2^{\delta j} 2^{-j/2}. \quad (15)$$

Then, under (F1) and (F2), (H2)-(iii) is satisfied.
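To fix ideas, the following minimal sketch (our own illustration, not code from the paper) implements (7)-(8) in the standard density case just described, that is, $q(\gamma, x) = \gamma(x)$ and $\delta = 0$. For simplicity it uses the Haar basis instead of the Daubechies db2N basis assumed in the paper, and the level and threshold choices only mimic (9)-(11) up to integer rounding; all names and defaults are ours.

```python
import numpy as np

# Minimal sketch of the hard thresholding estimator (7)-(8) for i.i.d. density
# estimation with q(gamma, x) = gamma(x) and delta = 0, using the Haar basis.

def phi_jk(x, j, k):
    # father wavelet: 2^{j/2} phi(2^j x - k), with phi = 1_[0,1)
    t = 2.0 ** j * np.asarray(x, dtype=float) - k
    return 2.0 ** (j / 2) * ((t >= 0) & (t < 1))

def psi_jk(x, j, k):
    # mother wavelet: 2^{j/2} psi(2^j x - k), with psi = 1_[0,1/2) - 1_[1/2,1)
    t = 2.0 ** j * np.asarray(x, dtype=float) - k
    return 2.0 ** (j / 2) * (((t >= 0) & (t < 0.5)).astype(float)
                             - ((t >= 0.5) & (t < 1)).astype(float))

def hard_threshold_density(Y, kappa=1.0):
    """Return x -> f_hat(x) as in (7), with 2^{j0} ~ ln n, 2^{j1} ~ n/(ln n)^3
    (delta = 0) and threshold kappa * sqrt(ln n / n)."""
    n = len(Y)
    j0 = max(0, int(np.log2(np.log(n))))
    j1 = max(j0, int(np.log2(n / np.log(n) ** 3)))
    lam = np.sqrt(np.log(n) / n)
    alpha = {k: np.mean(phi_jk(Y, j0, k)) for k in range(2 ** j0)}          # (8)
    beta = {(j, k): np.mean(psi_jk(Y, j, k))
            for j in range(j0, j1 + 1) for k in range(2 ** j)}              # (8)

    def f_hat(x):
        out = sum(a * phi_jk(x, j0, k) for k, a in alpha.items())
        out += sum(b * psi_jk(x, j, k) for (j, k), b in beta.items()
                   if abs(b) >= kappa * lam)                                # hard thresholding
        return out

    return f_hat

# Example: Y = np.random.default_rng(0).beta(2, 5, 2000)
# f_hat = hard_threshold_density(Y); print(f_hat(np.linspace(0, 1, 5)))
```

Only the per-observation quantity $q(\gamma_{j,k}, Y_i)$ changes from one model to another; the applications of Section 4 amount to different choices of this quantity, the thresholding skeleton staying the same.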
The assumption (H1) ensures that (8) are unbiased estimators for đđ,đ and đđ,đ given by (4), whereas (H2) is related to their good performance. See Proposition 10. These assumptions are not too restrictive. For instance, if we consider the standard density estimation problem where (đđĄ )đĄâZ are i.i.d. random variables with bounded density đ, the function đ(đž, đĽ) = đž(đĽ) satisfies (H1) and (H2) with đ = 0 (note that, thanks to the independence of (đđĄ )đĄâZ , the covariance term in (H2)-(iii) is zero). The technical details are given in Donoho et al. [17]. Lemma 2 describes a simple situation in which assumption (H2)-(iii) is satisfied. 3.3. Result. Theorem 3 determines the rate of convergence attained by đĚ under the MISE over Besov balls. Theorem 3. We consider the general statistical setting described in Section 3.1. Let đĚ be (7) under (H1) and (H2). đ Suppose that đ â đľđ,đ (đ) with đ ⼠1, {đ ⼠2 and đ â (0, đ)}, or {đ â [1, 2) and đ â ((2đ + 1)/đ, đ)}. Then there exists a constant đś > 0 such that ln đ 2đ /(2đ +2đ+1) óľŠ2 óľŠ . ) E (óľŠóľŠóľŠóľŠđĚ â đóľŠóľŠóľŠóľŠ2 ) ⤠đś( đ (16) The rate of convergence â((ln đ)/đ)2đ /(2đ +2đ+1) â is often the near optimal one in the minimax sense for numerous statistical problems in a i.i.d. setting (see, e.g., [19, 43]). Moreover, note that Theorem 3 is flexible; the assumptions on (đđĄ )đĄâZ , related to the definition of đ in (H1) and (H2), are mild. In the next section, this flexibility is illustrated for three sophisticated nonparametric estimation problems: the density deconvolution estimation problem, the density estimation problem in a GARCH-type model, and the regression function estimation in the regression model with random design. 4. Applications 4.1. Density Deconvolution. Let (đđĄ )đĄâZ be a strictly stationary stochastic process such that đđĄ = đđĄ + đđĄ , đĄ â Z, (17) where (đđĄ )đĄâZ is a strictly stationary stochastic process with unknown density đ and (đđĄ )đĄâZ is a strictly stationary stochastic process with known density đ. It is supposed that đđĄ and đđĄ are independent for any đĄ â Z and (đđĄ )đĄâZ is a đź-mixing process with exponential decay rate (see Section 3.1 for a precise definition). Our aim is to estimate đ via đ1 , . . . , đđ from (đđĄ )đĄâZ . Some related works are Masry [44], Kulik [45], Comte et al. [46], and Van Zanten and Zareba [47]. We formulate the following assumptions. 4 Journal of Probability and Statistics (G1) The support of đ is [0, 1]. (18) (i) exactly the rate of convergence attained by the hard thresholding wavelet estimator, (ii) the near optimal rate of convergence in the minimax sense. (G3) Let đ˘ be the density of đ1 . We suppose that there exists a constant đś > 0 such that The details can be found in Fan and Koo [42]. Thus, Theorem 4 can be viewed as an extension of this existing result to the weak dependent case. (G2) There exists a constant đś > 0 such that supđ (đĽ) ⤠đś < â. đĽâR sup đ˘ (đĽ) ⤠đś. (19) đĽâR (G4) For any đ â Z, let đ˘(đ1 ,đđ+1 ) be the density of (đ1 , đđ+1 ). We suppose that there exists a constant đś > 0 such that sup sup đ˘(đ1 ,đđ+1 ) (đĽ, đŚ) ⤠đś. (20) đâZ (đĽ,đŚ)âR2 (G5) For any integrable function đž, we define its Fourier transform by F (đž) (đĽ) = ⍠â ââ đž (đŚ) đâđđĽđŚ đđŚ, đĽ â R. 
4.2. GARCH-Type Model. We consider the strictly stationary stochastic process $(Y_t)_{t\in\mathbb{Z}}$ where, for any $t \in \mathbb{Z}$,

$$Y_t = X_t^2\,\xi_t, \quad (26)$$

$(X_t^2)_{t\in\mathbb{Z}}$ is a strictly stationary stochastic process with unknown density $f$, and $(\xi_t)_{t\in\mathbb{Z}}$ is a strictly stationary stochastic process with known density $g$. It is supposed that $X_t^2$ and $\xi_t$ are independent for any $t \in \mathbb{Z}$ and that $(Y_t)_{t\in\mathbb{Z}}$ is an $\alpha$-mixing process with exponential decay rate (see Section 3.1 for a precise definition). Our aim is to estimate $f$ from $Y_1, \ldots, Y_n$. Some related works are Comte et al. [46] and Chesneau [31]. We formulate the following assumptions.

(J1) There exists a positive integer $\delta$ such that
$$g(x) = \frac{1}{(\delta-1)!}(-\ln x)^{\delta-1}, \qquad x \in [0,1]. \quad (27)$$
Let us remark that $g$ is the density of $\prod_{i=1}^{\delta} U_i$, where $U_1, \ldots, U_\delta$ are $\delta$ i.i.d. random variables with common distribution $\mathcal{U}([0,1])$.

(J2) The support of $f$ is $[0,1]$ and $f \in \mathbb{L}^2([0,1])$.

(J3) Let $u$ be the density of $Y_1$. We suppose that there exists a constant $C > 0$ such that
$$\sup_{x\in\mathbb{R}} u(x) \le C. \quad (28)$$

(J4) For any $m \in \mathbb{Z}$, let $u_{(Y_1,Y_{m+1})}$ be the density of $(Y_1, Y_{m+1})$. We suppose that there exists a constant $C > 0$ such that
$$\sup_{m\in\mathbb{Z}}\ \sup_{(x,y)\in\mathbb{R}^2} u_{(Y_1,Y_{m+1})}(x,y) \le C. \quad (29)$$

We are now in the position to present the result.

Theorem 5. We consider the model (26). Suppose that (J1)-(J4) are satisfied. Let $\hat f$ be defined as in (7) with

$$q(\gamma, x) = T_\delta(\gamma)(x), \quad (30)$$

where, for any positive integer $\ell$, $T(\gamma)(x) = (x\gamma(x))'$ and $T_\ell(\gamma)(x) = T(T_{\ell-1}(\gamma))(x)$ (with $T_1 = T$), and with $\delta$ equal to the integer appearing in (J1). Suppose that $f \in B^s_{p,r}(M)$ with $r \ge 1$, $\{p \ge 2$ and $s \in (0,N)\}$ or $\{p \in [1,2)$ and $s \in ((2\delta+1)/p, N)\}$. Then there exists a constant $C > 0$ such that

$$\mathbb{E}\big(\|\hat f - f\|_2^2\big) \le C\Big(\frac{\ln n}{n}\Big)^{2s/(2s+2\delta+1)}. \quad (31)$$

Theorem 5 significantly improves ([31], Theorem 2) in terms of rate of convergence; we gain an exponent 1/2.
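The remark following (J1) can be verified numerically. The short sketch below (our own check; the value $\delta = 3$ and the sample size are arbitrary choices) compares a histogram of the product of $\delta$ independent $\mathcal{U}([0,1])$ variables with the density $g$ of (27).

```python
import numpy as np
from math import factorial

# Monte Carlo check of the remark after (J1): the product of delta independent
# U([0,1]) variables has density g(x) = (-ln x)^(delta-1)/(delta-1)! on (0,1].
rng = np.random.default_rng(1)
delta = 3
prod = rng.uniform(size=(200_000, delta)).prod(axis=1)

hist, edges = np.histogram(prod, bins=np.linspace(0.0, 1.0, 101), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
g = (-np.log(centers)) ** (delta - 1) / factorial(delta - 1)

# empirical and theoretical densities agree away from the singularity at 0
print(np.max(np.abs(hist[centers > 0.05] - g[centers > 0.05])))
```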
4.3. Nonparametric Regression Model. We consider the strictly stationary stochastic process $(Y_t)_{t\in\mathbb{Z}}$ where, for any $t \in \mathbb{Z}$,

$$Y_t = (W_t, X_t), \qquad W_t = f(X_t) + \xi_t, \quad (32)$$

$(X_t)_{t\in\mathbb{Z}}$ is a strictly stationary stochastic process with unknown density $h$, $(\xi_t)_{t\in\mathbb{Z}}$ is a strictly stationary centered stochastic process, and $f$ is the unknown regression function. It is supposed that $X_t$ and $\xi_t$ are independent for any $t \in \mathbb{Z}$ and that $(Y_t)_{t\in\mathbb{Z}}$ is an $\alpha$-mixing process with exponential decay rate (see Section 3.1 for a precise definition). Our aim is to estimate $f$ from $Y_1, \ldots, Y_n$. Applications of this problem can be found in Härdle [48]. Wavelet methods can be found in Patil and Truong [23], Doosti et al. [24], Doosti et al. [26], and Doosti and Niroumand [25]. We formulate the following assumptions.

(K1) The supports of $h$ and $f$ are $[0,1]$, and $h, f \in \mathbb{L}^2([0,1])$.

(K2) $\xi_1(\Omega)$ is bounded.

(K3) There exists a constant $C > 0$ such that
$$\sup_{x\in[0,1]}|f(x)| \le C. \quad (33)$$

(K4) There exist two constants $c_* > 0$ and $C > 0$ such that
$$c_* \le \inf_{x\in[0,1]} h(x), \qquad \sup_{x\in[0,1]} h(x) \le C. \quad (34)$$

(K5) Let $u$ be the density of $Y_1$. We suppose that there exists a constant $C > 0$ such that
$$\sup_{x\in\mathbb{R}\times[0,1]} u(x) \le C. \quad (35)$$

(K6) For any $m \in \mathbb{Z}$, let $u_{(Y_1,Y_{m+1})}$ be the density of $(Y_1, Y_{m+1})$. We suppose that there exists a constant $C > 0$ such that
$$\sup_{m\in\mathbb{Z}}\ \sup_{(x,y)\in(\mathbb{R}\times[0,1])\times(\mathbb{R}\times[0,1])} u_{(Y_1,Y_{m+1})}(x,y) \le C. \quad (36)$$

We are now in the position to present the result.

Theorem 6. We consider the model (32). Suppose that (K1)-(K6) are satisfied. Let $\hat f$ be the truncated ratio estimator

$$\hat f(x) = \frac{\hat v(x)}{\hat h(x)}\,1_{\{|\hat h(x)| \ge c_*/2\}}, \quad (37)$$

where

(i) $\hat v$ is defined as in (7) with
$$q(\gamma, (x, x_*)) = x\,\gamma(x_*) \quad (38)$$
and $\delta = 0$,

(ii) $\hat h$ is defined as in (7) with
$$q(\gamma, x) = \gamma(x), \quad (39)$$
$X_t$ instead of $Y_t$, and $\delta = 0$,

(iii) $c_*$ is the constant defined in (K4).

Suppose that $fh \in B^s_{p,r}(M)$ and $h \in B^s_{p,r}(M)$ with $r \ge 1$, $\{p \ge 2$ and $s \in (0,N)\}$ or $\{p \in [1,2)$ and $s \in (1/p, N)\}$. Then there exists a constant $C > 0$ such that

$$\mathbb{E}\big(\|\hat f - f\|_2^2\big) \le C\Big(\frac{\ln n}{n}\Big)^{2s/(2s+1)}. \quad (40)$$

The estimator (37) is derived by combining the procedure of Patil and Truong [23] with the truncation approach of Vasiliev [49]. Theorem 6 completes Patil and Truong [23] in terms of rates of convergence under the MISE over Besov balls.

Remark 7. Assumption (K2) can be relaxed by using a strategy different from the one developed in Theorem 6; some technical elements are given in Chesneau [32].
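For concreteness, here is a minimal numerical sketch (our own illustration, not the paper's code) of the truncated ratio estimator (37). It reuses the Haar functions `phi_jk` and `psi_jk` and the level and threshold choices of the sketch given in Section 3.2, and it takes the lower bound $c_*$ of (K4) as a user-supplied constant.

```python
import numpy as np

# Sketch of the truncated ratio estimator (37): v_hat uses q(gamma,(w,x)) = w*gamma(x)
# (coefficients of v = f*h), h_hat uses q(gamma,x) = gamma(x) (design density h);
# both with delta = 0. Requires phi_jk and psi_jk from the earlier sketch.

def thresholded_expansion(q_phi, q_psi, j0, j1, lam, kappa):
    """Build x -> estimate from empirical coefficients (8); q_phi(j, k) and
    q_psi(j, k) return the per-observation values q(gamma_{j,k}, Y_i)."""
    alpha = {k: np.mean(q_phi(j0, k)) for k in range(2 ** j0)}
    beta = {(j, k): np.mean(q_psi(j, k))
            for j in range(j0, j1 + 1) for k in range(2 ** j)}

    def estimate(x):
        out = sum(a * phi_jk(x, j0, k) for k, a in alpha.items())
        out += sum(b * psi_jk(x, j, k) for (j, k), b in beta.items()
                   if abs(b) >= kappa * lam)
        return out
    return estimate

def regression_estimator(X, W, c_star, kappa=1.0):
    n = len(X)
    j0 = max(0, int(np.log2(np.log(n))))
    j1 = max(j0, int(np.log2(n / np.log(n) ** 3)))
    lam = np.sqrt(np.log(n) / n)
    v_hat = thresholded_expansion(lambda j, k: W * phi_jk(X, j, k),
                                  lambda j, k: W * psi_jk(X, j, k), j0, j1, lam, kappa)
    h_hat = thresholded_expansion(lambda j, k: phi_jk(X, j, k),
                                  lambda j, k: psi_jk(X, j, k), j0, j1, lam, kappa)

    def f_hat(x):
        num, den = v_hat(x), h_hat(x)
        return np.where(np.abs(den) >= c_star / 2,
                        num / np.where(den == 0, 1.0, den), 0.0)
    return f_hat
```

The truncation by $1_{\{|\hat h(x)| \ge c_*/2\}}$ keeps the ratio stable where the estimated design density is small, in the spirit of Vasiliev [49].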
Conclusion. Considering the weakly dependent case for the observations, we have proved a general result on the rate of convergence attained by a hard thresholding wavelet estimator under the MISE over Besov balls. This result is flexible; it can be applied to a wide class of statistical models. Moreover, the obtained rate of convergence is sharp; it can correspond to the near optimal one in the minimax sense for the standard i.i.d. case. Some recent results on sophisticated statistical problems are improved. Thanks to its flexibility, the perspectives of application of our theoretical result in other contexts are numerous.

5. Proofs

In this section, $C$ denotes any constant that does not depend on $j$, $k$, and $n$. Its value may change from one term to another and may depend on $\phi$ or $\psi$.

5.1. Key Lemmas. Let us present two lemmas which will be used in the proofs. Lemma 8 provides a sharp covariance inequality under the $\alpha$-mixing condition.

Lemma 8 (see [50]). Let $(Y_t)_{t\in\mathbb{Z}}$ be a strictly stationary $\alpha$-mixing process with mixing coefficients $\alpha_m$, $m \ge 0$, and let $h$ and $k$ be two measurable functions. Let $p > 0$ and $q > 0$ satisfying $1/p + 1/q < 1$ be such that $\mathbb{E}(|h(Y_1)|^p)$ and $\mathbb{E}(|k(Y_1)|^q)$ exist. Then there exists a constant $C > 0$ such that

$$\big|\mathrm{Cov}\big(h(Y_1), k(Y_{m+1})\big)\big| \le C\,\alpha_m^{1-1/p-1/q}\,\big(\mathbb{E}(|h(Y_1)|^p)\big)^{1/p}\big(\mathbb{E}(|k(Y_1)|^q)\big)^{1/q}. \quad (41)$$

Lemma 9 below presents a concentration inequality for $\alpha$-mixing processes.

Lemma 9 (see [13]). Let $(Y_t)_{t\in\mathbb{Z}}$ be a strictly stationary process with $m$th strongly mixing coefficient $\alpha_m$, $m \ge 0$, let $n$ be a positive integer, let $h : \mathbb{R} \to \mathbb{C}$ be a measurable function and, for any $t \in \mathbb{Z}$, set $U_t = h(Y_t)$. We assume that $\mathbb{E}(U_1) = 0$ and that there exists a constant $M > 0$ satisfying $|U_1| \le M$. Then, for any $m \in \{1, \ldots, [n/2]\}$ and $\lambda > 0$, we have

$$\mathbb{P}\Big(\frac{1}{n}\Big|\sum_{i=1}^n U_i\Big| \ge \lambda\Big) \le 4\exp\Big(-\frac{\lambda^2 n}{16(D_m/m + \lambda M m/3)}\Big) + 32\frac{M}{\lambda}\,n\,\alpha_m, \quad (42)$$

where

$$D_m = \max_{\ell\in\{1,\ldots,2m\}}\mathrm{Var}\Big(\sum_{i=1}^{\ell}U_i\Big). \quad (43)$$

5.2. Intermediary Results

Proof of Lemma 2. Using a standard expression of the covariance together with (F1) and (F2), we obtain

$$\big|\mathrm{Cov}\big(q(\gamma_{j,k}, Y_{m+1}), q(\gamma_{j,k}, Y_1)\big)\big| = \Big|\int_{Y_1(\Omega)}\int_{Y_1(\Omega)} q(\gamma_{j,k},x)\,\overline{q(\gamma_{j,k},y)}\,\big(u_{(Y_1,Y_{m+1})}(x,y)-u(x)u(y)\big)\,dx\,dy\Big|$$
$$\le \int_{Y_1(\Omega)}\int_{Y_1(\Omega)}|q(\gamma_{j,k},x)|\,|q(\gamma_{j,k},y)|\,\big|u_{(Y_1,Y_{m+1})}(x,y)-u(x)u(y)\big|\,dx\,dy \le C\Big(\int_{Y_1(\Omega)}|q(\gamma_{j,k},x)|\,dx\Big)^2 \le C 2^{2\delta j}2^{-j}. \quad (44)$$

This ends the proof of Lemma 2.

Proposition 10 establishes probability and moment inequalities satisfied by the estimators (8).

Proposition 10. Let $\hat\alpha_{j,k}$ and $\hat\beta_{j,k}$ be defined as in (8) under (H1) and (H2), let $j_0$ be (9), and let $j_1$ be (10).

(a) There exists a constant $C > 0$ such that, for any $j \in \{j_0, \ldots, j_1\}$ and $k \in \Lambda_j$,
$$\mathbb{E}\big(|\hat\alpha_{j,k}-\alpha_{j,k}|^2\big) \le C 2^{2\delta j}\frac{1}{n}, \quad (45)$$
$$\mathbb{E}\big(|\hat\beta_{j,k}-\beta_{j,k}|^2\big) \le C 2^{2\delta j}\frac{1}{n}. \quad (46)$$

(b) There exists a constant $C > 0$ such that, for any $j \in \{j_0, \ldots, j_1\}$ and $k \in \Lambda_j$,
$$\mathbb{E}\big(|\hat\beta_{j,k}-\beta_{j,k}|^4\big) \le C 2^{4\delta j}. \quad (47)$$

(c) Let $\lambda_n$ be defined as in (11). There exists a constant $C > 0$ such that, for $\kappa$ large enough, any $j \in \{j_0, \ldots, j_1\}$ and $k \in \Lambda_j$, we have
$$\mathbb{P}\Big(|\hat\beta_{j,k}-\beta_{j,k}| \ge \frac{\kappa\lambda_n}{2}\Big) \le C\frac{1}{n^4}. \quad (48)$$

Proof of Proposition 10. (a) Using (H1) and the stationarity of $(Y_t)_{t\in\mathbb{Z}}$, we obtain

$$\mathbb{E}\big(|\hat\beta_{j,k}-\beta_{j,k}|^2\big) = \mathrm{Var}(\hat\beta_{j,k}) = \frac{1}{n^2}\sum_{v=1}^{n}\mathrm{Var}\big(q(\psi_{j,k},Y_v)\big) + \frac{2}{n^2}\sum_{v=2}^{n}\sum_{\ell=1}^{v-1}\mathrm{Re}\Big(\mathrm{Cov}\big(q(\psi_{j,k},Y_v), q(\psi_{j,k},Y_\ell)\big)\Big)$$
$$= \frac{1}{n}\mathrm{Var}\big(q(\psi_{j,k},Y_1)\big) + \frac{2}{n^2}\sum_{m=1}^{n-1}(n-m)\,\mathrm{Re}\Big(\mathrm{Cov}\big(q(\psi_{j,k},Y_{m+1}), q(\psi_{j,k},Y_1)\big)\Big)$$
$$\le \frac{1}{n}\Big(\mathbb{E}\big(|q(\psi_{j,k},Y_1)|^2\big) + 2\sum_{m=1}^{n-1}\big|\mathrm{Cov}\big(q(\psi_{j,k},Y_{m+1}), q(\psi_{j,k},Y_1)\big)\big|\Big). \quad (49)$$

By (H2)-(ii) we get

$$\mathbb{E}\big(|q(\psi_{j,k},Y_1)|^2\big) \le C 2^{2\delta j}. \quad (50)$$

For the covariance term, note that

$$\sum_{m=1}^{n-1}\big|\mathrm{Cov}\big(q(\psi_{j,k},Y_{m+1}), q(\psi_{j,k},Y_1)\big)\big| = A + B, \quad (51)$$

where

$$A = \sum_{m=1}^{[(\ln n)/c]-1}\big|\mathrm{Cov}\big(q(\psi_{j,k},Y_{m+1}), q(\psi_{j,k},Y_1)\big)\big|, \qquad B = \sum_{m=[(\ln n)/c]}^{n-1}\big|\mathrm{Cov}\big(q(\psi_{j,k},Y_{m+1}), q(\psi_{j,k},Y_1)\big)\big|. \quad (52)$$

It follows from (H2)-(iii) and $2^{-j} \le 2^{-j_0} \le 2(\ln n)^{-1}$ that

$$A \le C 2^{2\delta j}2^{-j}\Big[\frac{\ln n}{c}\Big] \le C 2^{2\delta j}. \quad (53)$$
The Davydov inequality described in Lemma 8 with $p = q = 4$, (H2)-(i)-(ii), and $2^j \le 2^{j_1} \le n$ give

$$B \le C\sum_{m=[(\ln n)/c]}^{n-1}\sqrt{\alpha_m}\,\sqrt{\mathbb{E}\big(|q(\psi_{j,k},Y_1)|^4\big)} \le C 2^{\delta j}2^{j/2}\sqrt{\mathbb{E}\big(|q(\psi_{j,k},Y_1)|^2\big)}\sum_{m=[(\ln n)/c]}^{n-1}e^{-cm/2} \le C 2^{2\delta j}\sqrt{n}\,e^{-(\ln n)/2} \le C 2^{2\delta j}. \quad (54)$$

Thus

$$\sum_{m=1}^{n-1}\big|\mathrm{Cov}\big(q(\psi_{j,k},Y_{m+1}), q(\psi_{j,k},Y_1)\big)\big| \le C 2^{2\delta j}. \quad (55)$$

Putting (49), (50), and (55) together, the first point of (a) is proved. The proof of the second point is identical with $\phi$ instead of $\psi$.

(b) Thanks to (H2)-(i), we have $|\hat\beta_{j,k}| \le \sup_{x\in Y_1(\Omega)}|q(\psi_{j,k},x)| \le C 2^{\delta j}2^{j/2}$. It follows from the triangular inequality and $|\beta_{j,k}| \le \|f\|_2 \le C$ that

$$|\hat\beta_{j,k}-\beta_{j,k}| \le |\hat\beta_{j,k}| + |\beta_{j,k}| \le C 2^{\delta j}2^{j/2}. \quad (56)$$

This inequality and the second result of (a) yield

$$\mathbb{E}\big(|\hat\beta_{j,k}-\beta_{j,k}|^4\big) \le C 2^{2\delta j}2^{j}\,\mathbb{E}\big(|\hat\beta_{j,k}-\beta_{j,k}|^2\big) \le C 2^{4\delta j}2^{j}\frac{1}{n}. \quad (57)$$

Using $2^j \le 2^{j_1} \le n$, the proof of (b) is completed.

(c) We will use the Liebscher inequality described in Lemma 9. Let us set

$$U_i = q(\psi_{j,k}, Y_i) - \mathbb{E}\big(q(\psi_{j,k}, Y_1)\big). \quad (58)$$

We have $\mathbb{E}(U_1) = 0$ and, by (H2)-(i) and $2^j \le 2^{j_1} \le n/(\ln n)^3$,

$$|U_1| \le 2\sup_{x\in Y_1(\Omega)}|q(\psi_{j,k},x)| \le C 2^{\delta j}2^{j/2} \le C 2^{\delta j}\sqrt{\frac{n}{(\ln n)^3}} \quad (59)$$

(so one can take $M = C 2^{\delta j}\sqrt{n/(\ln n)^3}$). Proceeding as for the proofs of the bounds in (a), for any integer $\ell \le C\ln n$ and since $2^{-j} \le 2^{-j_0} \le 2(\ln n)^{-1}$, we show that

$$\mathrm{Var}\Big(\sum_{i=1}^{\ell}U_i\Big) = \mathrm{Var}\Big(\sum_{i=1}^{\ell}q(\psi_{j,k},Y_i)\Big) \le C 2^{2\delta j}\big(\ell + \ell^2 2^{-j}\big) \le C 2^{2\delta j}\ell. \quad (60)$$

Therefore

$$D_m = \max_{\ell\in\{1,\ldots,2m\}}\mathrm{Var}\Big(\sum_{i=1}^{\ell}U_i\Big) \le C 2^{2\delta j}m. \quad (61)$$

Owing to Lemma 9 applied with $U_1, \ldots, U_n$, $\lambda = \kappa\lambda_n/2$, $M = C2^{\delta j}\sqrt{n/(\ln n)^3}$, $m$ proportional to $\ln n$ (with a large enough proportionality constant), and the bound (61), we obtain

$$\mathbb{P}\Big(|\hat\beta_{j,k}-\beta_{j,k}| \ge \frac{\kappa\lambda_n}{2}\Big) \le C\Big(\exp\Big(-C\frac{\kappa^2 2^{2\delta j}\ln n}{2^{2\delta j}(1+\kappa)}\Big) + \frac{M}{\lambda_n}\,n\,\alpha_m\Big) \le C\big(n^{-C\kappa^2/(1+\kappa)} + n^{2}e^{-cm}\big). \quad (62)$$

Taking $\kappa$ large enough, the last term is bounded by $C/n^4$. This completes the proof of (c), and of Proposition 10.

Proof of Theorem 3. Theorem 3 can be proved by combining arguments of ([36], Theorem 5.1) and ([51], Theorem 4.2); it is also close to ([30], Proof of Theorem 2). The interested reader can find the details below. We consider the following wavelet decomposition of $f$:

$$f(x) = \sum_{k\in\Lambda_{j_0}}\alpha_{j_0,k}\phi_{j_0,k}(x) + \sum_{j=j_0}^{\infty}\sum_{k\in\Lambda_j}\beta_{j,k}\psi_{j,k}(x), \quad (63)$$

where $\alpha_{j_0,k} = \int_0^1 f(x)\phi_{j_0,k}(x)\,dx$ and $\beta_{j,k} = \int_0^1 f(x)\psi_{j,k}(x)\,dx$. Using the orthonormality of the wavelet basis $\mathcal{B}$, the MISE of $\hat f$ can be expressed as

$$\mathbb{E}\big(\|\hat f - f\|_2^2\big) = R + S + T, \quad (64)$$

where

$$R = \sum_{k\in\Lambda_{j_0}}\mathbb{E}\big(|\hat\alpha_{j_0,k}-\alpha_{j_0,k}|^2\big), \qquad S = \sum_{j=j_0}^{j_1}\sum_{k\in\Lambda_j}\mathbb{E}\big(\big|\hat\beta_{j,k}1_{\{|\hat\beta_{j,k}|\ge\kappa\lambda_n\}}-\beta_{j,k}\big|^2\big), \qquad T = \sum_{j=j_1+1}^{\infty}\sum_{k\in\Lambda_j}\beta_{j,k}^2. \quad (65)$$

Let us now investigate sharp upper bounds for $R$, $T$, and $S$ successively.

Upper Bound for $R$. The point (a) of Proposition 10, $2^{j_0} \le C\ln n$, and $2s/(2s+2\delta+1) < 1$ yield

$$R \le C 2^{j_0}2^{2\delta j_0}\frac{1}{n} \le C\frac{(\ln n)^{2\delta+1}}{n} \le C\Big(\frac{\ln n}{n}\Big)^{2s/(2s+2\delta+1)}. \quad (66)$$
Upper Bound for $T$.

(i) For $r \ge 1$ and $p \ge 2$, we have $f \in B^s_{p,r}(M) \subseteq B^s_{2,\infty}(M)$. Using $2s/(2s+2\delta+1) < 2s/(2\delta+1)$, we obtain

$$T \le C\sum_{j=j_1+1}^{\infty}2^{-2js} \le C 2^{-2j_1 s} \le C\Big(\frac{(\ln n)^3}{n}\Big)^{2s/(2\delta+1)} \le C\Big(\frac{\ln n}{n}\Big)^{2s/(2s+2\delta+1)}. \quad (67)$$

(ii) For $r \ge 1$ and $p \in [1,2)$, we have $f \in B^s_{p,r}(M) \subseteq B^{s+1/2-1/p}_{2,\infty}(M)$. The condition $s > (2\delta+1)/p$ implies that $(s+1/2-1/p)/(2\delta+1) > s/(2s+2\delta+1)$. Thus

$$T \le C\sum_{j=j_1+1}^{\infty}2^{-2j(s+1/2-1/p)} \le C 2^{-2j_1(s+1/2-1/p)} \le C\Big(\frac{(\ln n)^3}{n}\Big)^{2(s+1/2-1/p)/(2\delta+1)} \le C\Big(\frac{\ln n}{n}\Big)^{2s/(2s+2\delta+1)}. \quad (68)$$

Hence, for $r \ge 1$, $\{p \ge 2$ and $s > 0\}$ or $\{p \in [1,2)$ and $s > (2\delta+1)/p\}$, we have

$$T \le C\Big(\frac{\ln n}{n}\Big)^{2s/(2s+2\delta+1)}. \quad (69)$$

Upper Bound for $S$. Adopting the notation $\hat D_{j,k} = \hat\beta_{j,k}-\beta_{j,k}$, $S$ can be written as

$$S = \sum_{i=1}^{4}S_i, \quad (70)$$

where

$$S_1 = \sum_{j=j_0}^{j_1}\sum_{k\in\Lambda_j}\mathbb{E}\big(|\hat D_{j,k}|^2 1_{\{|\hat\beta_{j,k}|\ge\kappa\lambda_n,\,|\beta_{j,k}|<\kappa\lambda_n/2\}}\big), \qquad S_2 = \sum_{j=j_0}^{j_1}\sum_{k\in\Lambda_j}\mathbb{E}\big(|\hat D_{j,k}|^2 1_{\{|\hat\beta_{j,k}|\ge\kappa\lambda_n,\,|\beta_{j,k}|\ge\kappa\lambda_n/2\}}\big),$$
$$S_3 = \sum_{j=j_0}^{j_1}\sum_{k\in\Lambda_j}\mathbb{E}\big(\beta_{j,k}^2 1_{\{|\hat\beta_{j,k}|<\kappa\lambda_n,\,|\beta_{j,k}|\ge 2\kappa\lambda_n\}}\big), \qquad S_4 = \sum_{j=j_0}^{j_1}\sum_{k\in\Lambda_j}\mathbb{E}\big(\beta_{j,k}^2 1_{\{|\hat\beta_{j,k}|<\kappa\lambda_n,\,|\beta_{j,k}|< 2\kappa\lambda_n\}}\big). \quad (71)$$

Upper Bound for $S_1 + S_3$. Owing to the inequalities $1_{\{|\hat\beta_{j,k}|<\kappa\lambda_n,\,|\beta_{j,k}|\ge2\kappa\lambda_n\}} \le 1_{\{|\hat D_{j,k}|>\kappa\lambda_n/2\}}$, $1_{\{|\hat\beta_{j,k}|\ge\kappa\lambda_n,\,|\beta_{j,k}|<\kappa\lambda_n/2\}} \le 1_{\{|\hat D_{j,k}|>\kappa\lambda_n/2\}}$, and $1_{\{|\hat\beta_{j,k}|<\kappa\lambda_n,\,|\beta_{j,k}|\ge2\kappa\lambda_n\}} \le 1_{\{|\beta_{j,k}|\le2|\hat D_{j,k}|\}}$, the Cauchy-Schwarz inequality, and the points (b) and (c) of Proposition 10, we have

$$S_1 + S_3 \le C\sum_{j=j_0}^{j_1}\sum_{k\in\Lambda_j}\mathbb{E}\big(|\hat D_{j,k}|^2 1_{\{|\hat D_{j,k}|>\kappa\lambda_n/2\}}\big) \le C\sum_{j=j_0}^{j_1}\sum_{k\in\Lambda_j}\big(\mathbb{E}(|\hat D_{j,k}|^4)\big)^{1/2}\Big(\mathbb{P}\Big(|\hat D_{j,k}|>\frac{\kappa\lambda_n}{2}\Big)\Big)^{1/2}$$
$$\le C\frac{1}{n^2}\sum_{j=j_0}^{j_1}2^{j(1+2\delta)} \le C\frac{1}{n^2}2^{j_1(1+2\delta)} \le C\frac{1}{n} \le C\Big(\frac{\ln n}{n}\Big)^{2s/(2s+2\delta+1)}. \quad (72)$$

Upper Bound for $S_2$. It follows from the point (a) of Proposition 10 that

$$S_2 \le \sum_{j=j_0}^{j_1}\sum_{k\in\Lambda_j}\mathbb{E}\big(|\hat D_{j,k}|^2\big)1_{\{|\beta_{j,k}|\ge\kappa\lambda_n/2\}} \le C\frac{1}{n}\sum_{j=j_0}^{j_1}2^{2\delta j}\sum_{k\in\Lambda_j}1_{\{|\beta_{j,k}|>\kappa\lambda_n/2\}}. \quad (73)$$

Let us now introduce the integer $j_*$ defined by

$$2^{j_*} = \Big[\Big(\frac{n}{\ln n}\Big)^{1/(2s+2\delta+1)}\Big]. \quad (74)$$

Note that $j_* \in \{j_0, \ldots, j_1\}$ for $n$ large enough. Then $S_2$ can be bounded as

$$S_2 \le S_{2,1} + S_{2,2}, \quad (75)$$

where

$$S_{2,1} = C\frac{1}{n}\sum_{j=j_0}^{j_*}2^{2\delta j}\sum_{k\in\Lambda_j}1_{\{|\beta_{j,k}|>\kappa\lambda_n/2\}}, \qquad S_{2,2} = C\frac{1}{n}\sum_{j=j_*+1}^{j_1}2^{2\delta j}\sum_{k\in\Lambda_j}1_{\{|\beta_{j,k}|>\kappa\lambda_n/2\}}. \quad (76)$$

On the one hand we have

$$S_{2,1} \le C\frac{\ln n}{n}\sum_{j=j_0}^{j_*}2^{j(1+2\delta)} \le C\frac{\ln n}{n}2^{j_*(1+2\delta)} \le C\Big(\frac{\ln n}{n}\Big)^{2s/(2s+2\delta+1)}. \quad (77)$$

On the other hand, we have the following.

(i) For $r \ge 1$ and $p \ge 2$, the Markov inequality and $f \in B^s_{p,r}(M) \subseteq B^s_{2,\infty}(M)$ yield

$$S_{2,2} \le C\frac{1}{n}\sum_{j=j_*+1}^{j_1}2^{2\delta j}\frac{1}{\lambda_n^2}\sum_{k\in\Lambda_j}\beta_{j,k}^2 \le C\frac{1}{\ln n}\sum_{j=j_*+1}^{\infty}\sum_{k\in\Lambda_j}\beta_{j,k}^2 \le C\sum_{j=j_*+1}^{\infty}2^{-2js} \le C 2^{-2j_*s} \le C\Big(\frac{\ln n}{n}\Big)^{2s/(2s+2\delta+1)}. \quad (78)$$

(ii) For $r \ge 1$, $p \in [1,2)$ and $s > (2\delta+1)/p$, owing to the Markov inequality, $f \in B^s_{p,r}(M)$, and $(2s+2\delta+1)(2-p)/2 + (s+1/2-1/p+\delta-2\delta/p)p = 2s$, we get

$$S_{2,2} \le C\frac{1}{n}\sum_{j=j_*+1}^{j_1}2^{2\delta j}\frac{1}{\lambda_n^p}\sum_{k\in\Lambda_j}|\beta_{j,k}|^p \le C\Big(\frac{\ln n}{n}\Big)^{(2-p)/2}\sum_{j=j_*+1}^{\infty}2^{\delta j(2-p)}2^{-j(s+1/2-1/p)p}$$
$$\le C\Big(\frac{\ln n}{n}\Big)^{(2-p)/2}2^{-j_*(s+1/2-1/p+\delta-2\delta/p)p} \le C\Big(\frac{\ln n}{n}\Big)^{2s/(2s+2\delta+1)}. \quad (79)$$

Therefore, for $r \ge 1$, $\{p \ge 2$ and $s > 0\}$ or $\{p \in [1,2)$ and $s > (2\delta+1)/p\}$, we have

$$S_2 \le C\Big(\frac{\ln n}{n}\Big)^{2s/(2s+2\delta+1)}. \quad (80)$$

Upper Bound for $S_4$. We have

$$S_4 \le \sum_{j=j_0}^{j_1}\sum_{k\in\Lambda_j}\beta_{j,k}^2\, 1_{\{|\beta_{j,k}|<2\kappa\lambda_n\}}. \quad (81)$$

Let $j_*$ be the integer (74). Then $S_4$ can be bounded as

$$S_4 \le S_{4,1} + S_{4,2}, \quad (82)$$

where

$$S_{4,1} = \sum_{j=j_0}^{j_*}\sum_{k\in\Lambda_j}\beta_{j,k}^2\,1_{\{|\beta_{j,k}|<2\kappa\lambda_n\}}, \qquad S_{4,2} = \sum_{j=j_*+1}^{j_1}\sum_{k\in\Lambda_j}\beta_{j,k}^2\,1_{\{|\beta_{j,k}|<2\kappa\lambda_n\}}. \quad (83)$$

On the one hand, we have

$$S_{4,1} \le C\sum_{j=j_0}^{j_*}2^{j}\lambda_n^2 = C\frac{\ln n}{n}\sum_{j=j_0}^{j_*}2^{j(1+2\delta)} \le C\frac{\ln n}{n}2^{j_*(1+2\delta)} \le C\Big(\frac{\ln n}{n}\Big)^{2s/(2s+2\delta+1)}. \quad (84)$$

On the other hand, we have the following.

(i) For $r \ge 1$ and $p \ge 2$, since $f \in B^s_{p,r}(M) \subseteq B^s_{2,\infty}(M)$, we have

$$S_{4,2} \le \sum_{j=j_*+1}^{\infty}\sum_{k\in\Lambda_j}\beta_{j,k}^2 \le C\sum_{j=j_*+1}^{\infty}2^{-2js} \le C 2^{-2j_*s} \le C\Big(\frac{\ln n}{n}\Big)^{2s/(2s+2\delta+1)}. \quad (85)$$

(ii) For $r \ge 1$, $p \in [1,2)$ and $s > (2\delta+1)/p$, the Markov inequality, $f \in B^s_{p,r}(M)$, and $(2s+2\delta+1)(2-p)/2 + (s+1/2-1/p+\delta-2\delta/p)p = 2s$ imply that

$$S_{4,2} \le C\sum_{j=j_*+1}^{\infty}\lambda_n^{2-p}\sum_{k\in\Lambda_j}|\beta_{j,k}|^p \le C\Big(\frac{\ln n}{n}\Big)^{(2-p)/2}\sum_{j=j_*+1}^{\infty}2^{\delta j(2-p)}2^{-j(s+1/2-1/p)p}$$
$$\le C\Big(\frac{\ln n}{n}\Big)^{(2-p)/2}2^{-j_*(s+1/2-1/p+\delta-2\delta/p)p} \le C\Big(\frac{\ln n}{n}\Big)^{2s/(2s+2\delta+1)}. \quad (86)$$

So, for $r \ge 1$, $\{p \ge 2$ and $s > 0\}$ or $\{p \in [1,2)$ and $s > (2\delta+1)/p\}$, we have

$$S_4 \le C\Big(\frac{\ln n}{n}\Big)^{2s/(2s+2\delta+1)}. \quad (87)$$
Putting (70), (72), (80), and (87) together, for $r \ge 1$, $\{p \ge 2$ and $s > 0\}$ or $\{p \in [1,2)$ and $s > (2\delta+1)/p\}$, we obtain

$$S \le C\Big(\frac{\ln n}{n}\Big)^{2s/(2s+2\delta+1)}. \quad (88)$$

Combining (64), (66), (69), and (88), we complete the proof of Theorem 3.

Proof of Theorem 4. The proof of Theorem 4 is a direct application of Theorem 3: under (G1)-(G5), the function $q$ defined by (24) satisfies (H1), see ([42], equation (2)), and (H2): (i) see ([42], Lemma 6), (ii) see ([42], equation (11)), and (iii) see ([30], Proof of Proposition 6.1), with $\delta$ equal to the constant appearing in (G5).

Proof of Theorem 5. The proof of Theorem 5 is a consequence of Theorem 3: under (J1)-(J4), the function $q$ defined by (30) satisfies (H1) and (H2): (i)-(ii) see ([31], Proposition 1) and (iii) see ([52], equation (26)), with $\delta$ equal to the integer appearing in (J1).

Proof of Theorem 6. Set $v(x) = f(x)h(x)$. Following the methodology of [49], we have

$$\hat f(x) - f(x) = A(x) - B(x), \quad (89)$$

where

$$A(x) = \frac{1}{\hat h(x)}\Big(\hat v(x) - v(x) + f(x)\big(h(x)-\hat h(x)\big)\Big)1_{\{|\hat h(x)|\ge c_*/2\}} \quad (90)$$

and

$$B(x) = f(x)\,1_{\{|\hat h(x)|< c_*/2\}}. \quad (91)$$

It follows from $\{|\hat h(x)| < c_*/2\}\cap\{|h(x)| > c_*\} \subseteq \{|\hat h(x)-h(x)| > c_*/2\}$, (K3), (K4), and the Markov inequality that

$$|B(x)| \le C\, 1_{\{|\hat h(x)-h(x)|>c_*/2\}} \le C\,|\hat h(x)-h(x)|. \quad (92)$$

The triangular inequality yields

$$|A(x)| \le C\big(|\hat v(x)-v(x)| + |\hat h(x)-h(x)|\big). \quad (93)$$

The elementary inequality $(a+b)^2 \le 2(a^2+b^2)$ implies that

$$\mathbb{E}\big(\|\hat f - f\|_2^2\big) \le C\Big(\mathbb{E}\big(\|\hat v - v\|_2^2\big) + \mathbb{E}\big(\|\hat h - h\|_2^2\big)\Big). \quad (94)$$

We now bound these two MISEs via Theorem 3.

Upper Bound for the MISE of $\hat v$. Under (K1)-(K6), the function $q$ defined by (38) satisfies the following.

(H1) with $v$ instead of $f$: since $X_1$ and $\xi_1$ are independent with $\mathbb{E}(\xi_1) = 0$,

$$\mathbb{E}\big(q(\gamma_{j,k}, Y_1)\big) = \mathbb{E}\big(W_1\gamma_{j,k}(X_1)\big) = \mathbb{E}\big(f(X_1)\gamma_{j,k}(X_1)\big) = \int_0^1 f(x)\gamma_{j,k}(x)h(x)\,dx = \int_0^1 v(x)\gamma_{j,k}(x)\,dx. \quad (95)$$

(H2): (i)-(ii)-(iii) with $\delta = 0$:

(i) since $W_1(\Omega)$ is bounded thanks to (K2) and (K3), say $|W_1| \le m$ with $m > 0$, we have

$$\sup_{(x,x_*)\in Y_1(\Omega)}\big|q(\gamma_{j,k},(x,x_*))\big| = \sup_{(x,x_*)\in[-m,m]\times[0,1]}|x\,\gamma_{j,k}(x_*)| \le m\sup_{x_*\in[0,1]}|\gamma_{j,k}(x_*)| \le C 2^{j/2}, \quad (96)$$

(ii) using the boundedness of $W_1(\Omega)$ and then (K4), we have

$$\mathbb{E}\big(|q(\gamma_{j,k},Y_1)|^2\big) = \mathbb{E}\big(W_1^2(\gamma_{j,k}(X_1))^2\big) \le C\,\mathbb{E}\big((\gamma_{j,k}(X_1))^2\big) = C\int_0^1(\gamma_{j,k}(x))^2 h(x)\,dx \le C\int_0^1(\gamma_{j,k}(x))^2\,dx = C, \quad (97)$$

(iii) using the boundedness of $W_1(\Omega)$ and making the change of variables $y = 2^j x - k$, we obtain

$$\int_{Y_1(\Omega)}\big|q(\gamma_{j,k},(x,x_*))\big|\,dx\,dx_* \le \Big(\int_{-m}^{m}|x|\,dx\Big)\Big(\int_0^1|\gamma_{j,k}(x_*)|\,dx_*\Big) = m^2\int_0^1|\gamma_{j,k}(x)|\,dx \le C 2^{-j/2}. \quad (98)$$

We conclude by applying Lemma 2 with $\delta = 0$: (K5) and (K6) imply (F1), and the previous inequality implies (F2).
Therefore, assuming that $v \in B^s_{p,r}(M)$ with $r \ge 1$, $\{p \ge 2$ and $s \in (0,N)\}$ or $\{p \in [1,2)$ and $s \in (1/p, N)\}$, Theorem 3 proves the existence of a constant $C > 0$ satisfying

$$\mathbb{E}\big(\|\hat v - v\|_2^2\big) \le C\Big(\frac{\ln n}{n}\Big)^{2s/(2s+1)}. \quad (99)$$

Upper Bound for the MISE of $\hat h$. Under (K1)-(K6), proceeding as for the previous point, we show that the function $q$ defined by (39) satisfies (H1) with $h$ instead of $f$ and $X_t$ instead of $Y_t$, and (H2): (i)-(ii)-(iii) with $\delta = 0$. Therefore, assuming that $h \in B^s_{p,r}(M)$ with $r \ge 1$, $\{p \ge 2$ and $s \in (0,N)\}$ or $\{p \in [1,2)$ and $s \in (1/p, N)\}$, Theorem 3 proves the existence of a constant $C > 0$ satisfying

$$\mathbb{E}\big(\|\hat h - h\|_2^2\big) \le C\Big(\frac{\ln n}{n}\Big)^{2s/(2s+1)}. \quad (100)$$

Combining (94), (99), and (100), we end the proof of Theorem 6.

Conflict of Interests

The author declares that there is no conflict of interests regarding the publication of this paper.

Acknowledgment

The author is thankful to the reviewers for their comments, which have helped in improving the presentation.

References

[1] P. Doukhan, Mixing: Properties and Examples, vol. 85 of Lecture Notes in Statistics, Springer, New York, NY, USA, 1994.
[2] M. Carrasco and X. Chen, "Mixing and moment properties of various GARCH and stochastic volatility models," Econometric Theory, vol. 18, no. 1, pp. 17–39, 2002.
[3] R. C. Bradley, Introduction to Strong Mixing Conditions, Volumes 1, 2, 3, Kendrick Press, 2007.
[4] P. M. Robinson, "Nonparametric estimators for time series," Journal of Time Series Analysis, vol. 4, pp. 185–207, 1983.
[5] G. G. Roussas, "Nonparametric estimation in mixing sequences of random variables," Journal of Statistical Planning and Inference, vol. 18, no. 2, pp. 135–149, 1987.
[6] G. G. Roussas, "Nonparametric regression estimation under mixing conditions," Stochastic Processes and their Applications, vol. 36, no. 1, pp. 107–116, 1990.
[7] Y. K. Truong and C. J. Stone, "Nonparametric function estimation involving time series," The Annals of Statistics, vol. 20, pp. 77–97, 1992.
[8] L. H. Tran, "Nonparametric function estimation for time series by local average estimators," The Annals of Statistics, vol. 21, pp. 1040–1057, 1993.
[9] E. Masry, "Multivariate local polynomial regression for time series: uniform strong consistency and rates," Journal of Time Series Analysis, vol. 17, no. 6, pp. 571–599, 1996.
[10] E. Masry, "Multivariate regression estimation: local polynomial fitting for time series," Stochastic Processes and their Applications, vol. 65, no. 1, pp. 81–101, 1996.
[11] E. Masry and J. Fan, "Local polynomial estimation of regression functions for mixing processes," Scandinavian Journal of Statistics, vol. 24, no. 2, pp. 165–179, 1997.
[12] D. Bosq, Nonparametric Statistics for Stochastic Processes: Estimation and Prediction, vol. 110 of Lecture Notes in Statistics, Springer, New York, NY, USA, 1998.
[13] E. Liebscher, "Estimation of the density and the regression function under mixing conditions," Statistics and Decisions, vol. 19, no. 1, pp. 9–26, 2001.
[14] D. L. Donoho and I. M. Johnstone, "Ideal spatial adaptation by wavelet shrinkage," Biometrika, vol. 81, no. 3, pp. 425–455, 1994.
[15] D. L. Donoho and I. M. Johnstone, "Adapting to unknown smoothness via wavelet shrinkage," Journal of the American Statistical Association, vol. 90, pp. 1200–1224, 1995.
[16] D. L. Donoho, I. M. Johnstone, G. Kerkyacharian, and D. Picard, "Wavelet shrinkage: asymptopia?" Journal of the Royal Statistical Society B, vol. 57, pp. 301–369, 1995.
[17] D. L. Donoho, I. M. Johnstone, G. Kerkyacharian, and D. Picard, "Density estimation by wavelet thresholding," The Annals of Statistics, vol. 24, no. 2, pp. 508–539, 1996.
[18] A. Antoniadis, "Wavelets in statistics: a review (with discussion)," Journal of the Italian Statistical Society, vol. 6, pp. 97–144, 1997.
[19] W. Härdle, G. Kerkyacharian, D. Picard, and A. Tsybakov, Wavelets, Approximation and Statistical Applications, vol. 129 of Lecture Notes in Statistics, Springer, New York, NY, USA, 1998.
[20] F. Leblanc, "Wavelet linear density estimator for a discrete-time stochastic process: $L_p$-losses," Statistics and Probability Letters, vol. 27, no. 1, pp. 71–84, 1996.
[21] K. Tribouley and G. Viennet, "$L_p$ adaptive density estimation in a β-mixing framework," Annales de l'Institut Henri Poincaré B, vol. 34, no. 2, pp. 179–208, 1998.
[22] E. Masry, "Wavelet-based estimation of multivariate regression functions in Besov spaces," Journal of Nonparametric Statistics, vol. 12, no. 2, pp. 283–308, 2000.
[23] P. N. Patil and Y. K. Truong, "Asymptotics for wavelet based estimates of piecewise smooth regression for stationary time series," Annals of the Institute of Statistical Mathematics, vol. 53, no. 1, pp. 159–178, 2001.
[24] H. Doosti, M. Afshari, and H. A. Niroumand, "Wavelets for nonparametric stochastic regression with mixing stochastic process," Communications in Statistics, vol. 37, no. 3, pp. 373–385, 2008.
[25] H. Doosti and H. A. Niroumand, "Multivariate stochastic regression estimation by wavelet methods for stationary time series," Pakistan Journal of Statistics, vol. 27, no. 1, pp. 37–46, 2009.
[26] H. Doosti, M. S. Islam, Y. P. Chaubey, and P. Góra, "Two-dimensional wavelets for nonlinear autoregressive models with an application in dynamical system," Italian Journal of Pure and Applied Mathematics, no. 27, pp. 39–62, 2010.
[27] J. Cai and H. Liang, "Nonlinear wavelet density estimation for truncated and dependent observations," International Journal of Wavelets, Multiresolution and Information Processing, vol. 9, no. 4, pp. 587–609, 2011.
[28] S. Niu and H. Liang, "Nonlinear wavelet estimation of conditional density under left-truncated and α-mixing assumptions," International Journal of Wavelets, Multiresolution and Information Processing, vol. 9, no. 6, pp. 989–1023, 2011.
[29] F. Benatia and D. Yahia, "Nonlinear wavelet regression function estimator for censored dependent data," Journal Afrika Statistika, vol. 7, no. 1, pp. 391–411, 2012.
[30] C. Chesneau, "On the adaptive wavelet deconvolution of a density for strong mixing sequences," Journal of the Korean Statistical Society, vol. 41, no. 4, pp. 423–436, 2012.
[31] C. Chesneau, "Wavelet estimation of a density in a GARCH-type model," Communications in Statistics, vol. 42, no. 1, pp. 98–117, 2013.
[32] C. Chesneau, "On the adaptive wavelet estimation of a multidimensional regression function under alpha-mixing dependence: beyond the standard assumptions on the noise," Commentationes Mathematicae Universitatis Carolinae, vol. 4, pp. 527–556, 2013.
[33] Y. P. Chaubey and E. Shirazi, "On MISE of a nonlinear wavelet estimator of the regression function based on biased data under strong mixing," Communications in Statistics, in press.
[34] M. Abbaszadeh and M. Emadi, "Wavelet density estimation and statistical evidences role for a GARCH model in the weighted distribution," Applied Mathematics, vol. 4, no. 2, pp. 10–416, 2013.
[35] B. Delyon and A. Juditsky, "On minimax wavelet estimators," Applied and Computational Harmonic Analysis, vol. 3, no. 3, pp. 215–228, 1996.
[36] G. Kerkyacharian and D. Picard, "Thresholding algorithms, maxisets and well-concentrated bases," Test, vol. 9, no. 2, pp. 283–344, 2000.
[37] I. Daubechies, Ten Lectures on Wavelets, SIAM, 1992.
[38] A. Cohen, I. Daubechies, and P. Vial, "Wavelets on the interval and fast wavelet transforms," Applied and Computational Harmonic Analysis, vol. 1, no. 1, pp. 54–81, 1993.
[39] S. Mallat, A Wavelet Tour of Signal Processing: The Sparse Way, with contributions from Gabriel Peyré, Elsevier/Academic Press, Amsterdam, The Netherlands, 3rd edition, 2009.
[40] Y. Meyer, Ondelettes et Opérateurs, Hermann, Paris, France, 1990.
[41] V. Genon-Catalot, T. Jeantheau, and C. Larédo, "Stochastic volatility models as hidden Markov models and statistical applications," Bernoulli, vol. 6, no. 6, pp. 1051–1079, 2000.
[42] J. Fan and J. Koo, "Wavelet deconvolution," IEEE Transactions on Information Theory, vol. 48, no. 3, pp. 734–747, 2002.
[43] A. B. Tsybakov, Introduction à l'Estimation Non-Paramétrique, Springer, New York, NY, USA, 2004.
[44] E. Masry, "Strong consistency and rates for deconvolution of multivariate densities of stationary processes," Stochastic Processes and their Applications, vol. 47, no. 1, pp. 53–74, 1993.
[45] R. Kulik, "Nonparametric deconvolution problem for dependent sequences," Electronic Journal of Statistics, vol. 2, pp. 722–740, 2008.
[46] F. Comte, J. Dedecker, and M. L. Taupin, "Adaptive density deconvolution with dependent inputs," Mathematical Methods of Statistics, vol. 17, no. 2, pp. 87–112, 2008.
[47] H. van Zanten and P. Zareba, "A note on wavelet density deconvolution for weakly dependent data," Statistical Inference for Stochastic Processes, vol. 11, no. 2, pp. 207–219, 2008.
[48] W. Härdle, Applied Nonparametric Regression, Cambridge University Press, Cambridge, UK, 1990.
[49] V. A. Vasiliev, "One investigation method of a ratios type estimators," in Proceedings of the 16th IFAC Symposium on System Identification, pp. 1–6, Brussels, Belgium, July 2012.
[50] Y. Davydov, "The invariance principle for stationary processes," Theory of Probability and Its Applications, vol. 15, no. 3, pp. 498–509, 1970.
[51] C. Chesneau, "Wavelet estimation via block thresholding: a minimax study under $L^p$-risk," Statistica Sinica, vol. 18, no. 3, pp. 1007–1024, 2008.
[52] C. Chesneau and H. Doosti, "Wavelet linear density estimation for a GARCH model under various dependence structures," Journal of the Iranian Statistical Society, vol. 11, no. 1, pp. 1–21, 2012.