Hindawi Publishing Corporation, Journal of Function Spaces, Volume 2015, Article ID 579853, 10 pages. http://dx.doi.org/10.1155/2015/579853

Research Article: On the Null Space Property of $\ell_q$-Minimization for $0 < q \le 1$ in Compressed Sensing

Yi Gao,$^{1,2}$ Jigen Peng,$^{1}$ Shigang Yue,$^{3}$ and Yuan Zhao$^{1,3}$

$^1$ School of Mathematics and Statistics, Xi'an Jiaotong University, Xi'an, Shaanxi 710049, China
$^2$ School of Mathematics and Information Science, Beifang University of Nationalities, Yinchuan, Ningxia 750021, China
$^3$ School of Computer Science, University of Lincoln, Lincoln LN6 7TS, UK

Correspondence should be addressed to Yi Gao; [email protected]

Received 18 December 2014; Revised 1 March 2015; Accepted 2 March 2015. Academic Editor: Yuri Latushkin

Copyright © 2015 Yi Gao et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

The paper discusses the relationship between the null space property (NSP) and $\ell_q$-minimization in compressed sensing. Several versions of the null space property, namely the $\ell_q$ stable NSP, the $\ell_q$ robust NSP, and the $\ell_{q,p}$ robust NSP for $0 < p \le q < 1$, built on the standard $\ell_q$ NSP, are proposed, and their equivalent forms are derived. Consequently, reconstruction results for $\ell_q$-minimization can be derived easily under the NSP condition and its equivalent form. Finally, the $\ell_q$ NSP is extended to the $\ell_q$-synthesis modeling and the mixed $\ell_2/\ell_q$-minimization, which deal with dictionary-based sparse signals and block sparse signals, respectively.

1. Introduction

Compressed sensing has drawn extensive attention since it was proposed in 2006 [1-4]. It is known as a new revolution in signal processing because it realizes sampling and compression of a signal at the same time.
Its fundamental idea is to recover a high-dimensional signal from a remarkably small number of measurements by using the sparsity of signals. An $N$-dimensional real-world signal is called sparse if the number of its nonzero coefficients under some representation is much smaller than the signal length $N$. Let $\|x\|_0$ denote the number of nonzero representation coefficients of the signal $x$; if $\|x\|_0 \le s$, we say the signal $x$ is $s$-sparse. Here we tentatively call $\|x\|_0$ the $\ell_0$ norm (in fact, it is not a norm, because it obviously does not satisfy the positive homogeneity of a norm). Let $\Sigma_s$ denote the set of all $s$-sparse signals: $\Sigma_s = \{x \in R^N : \|x\|_0 \le s\}$. Given the observed data $y \in R^m$, we wish to recover the signal $x \in R^N$ from the linear system

\[ y = Ax, \]  (1)

where $A$ is an $m \times N$ ($m < N$) real matrix, known as the measurement matrix. Generally speaking, when $m < N$ the underdetermined system $y = Ax$ has infinitely many solutions; in other words, without additional information it is impossible to recover the signal. Compressed sensing, however, focuses on the sparsest solution, whose mathematical modeling is

\[ \min_{x \in R^N} \|x\|_0 \quad \text{s.t.} \quad y = Ax. \]  (2)

We call the minimization (2) the $\ell_0$ modeling. Unfortunately, the $\ell_0$-minimization (2) is a nonconvex and NP-hard optimization problem [5]. To conquer the hardness, one seeks another tractable mathematical modeling to replace it, the $\ell_p$ ($p > 0$) modeling:

\[ \min_{x \in R^N} \|x\|_p \quad \text{s.t.} \quad y = Ax, \]  (3)

where

\[ \|x\|_p = \Big( \sum_{i=1}^{N} |x_i|^p \Big)^{1/p}, \ 0 < p < \infty; \qquad \|x\|_\infty = \max_{i=1,\dots,N} |x_i|. \]  (4)

For $1 \le p \le \infty$, $\|x\|_p$ is a norm, while for $0 < p < 1$ it is a quasinorm: it does not satisfy the triangle inequality but only the $p$-triangle inequality

\[ \|x + y\|_p^p \le \|x\|_p^p + \|y\|_p^p, \quad x, y \in R^N. \]
(5)

It is natural to consider the $\ell_2$ modeling:

\[ \min_{x \in R^N} \|x\|_2 \quad \text{s.t.} \quad y = Ax. \]  (6)

To one's disappointment, the $\ell_2$ modeling is unable to exactly recover sparse signals, even $1$-sparse vectors. For example, take the measurement matrix

\[ A = \begin{bmatrix} 1 & 0 & 0 & -1 \\ 0 & 1 & 0 & -1 \\ 0 & 0 & 1 & -1 \end{bmatrix}, \]  (7)

and try to recover $x$ from the measurement vector $y = Ax = [1, 0, 0]^T$. The solutions of this system are $x = [1+t, t, t, t]^T$, $t \in R$. Let $f(t) = (1+t)^2 + 3t^2$; then $\min_{t \in R} f(t) = f(-1/4) = 3/4 < 1$. But $x = [3/4, -1/4, -1/4, -1/4]^T$ is not sparse; in fact, $[1, 0, 0, 0]^T$ is the sparsest solution. Moreover, for any $p > 1$, although the $\ell_p$ modeling is also a convex optimization problem, it cannot guarantee the sparsest solution; that is, the $\ell_p$ modeling for $p > 1$ is also unable to exactly recover sparse signals. Sparsity plays a key role in such recovery, and the problem is which $p$ one can select to obtain the sparsest solution exactly. Candès et al. gave the following $\ell_1$ modeling [2, 3, 6-8]:

\[ \min_{x \in R^N} \|x\|_1 \quad \text{s.t.} \quad y = Ax. \]  (8)

The $\ell_1$-minimization (8) is a convex optimization problem and can be transformed into a linear program, so it is tractable. A natural question is whether the solutions of (8) are equivalent to those of (2), or what kind of measurement matrices guarantee the equivalence of the solutions of the two problems. Candès and Tao [2] proposed the Restricted Isometry Property (RIP), a milestone in sparse information processing. We say that a matrix $A$ satisfies the RIP of order $s$ if there exists a constant $\delta_s \in [0, 1)$ such that, for any $x \in \Sigma_s$,

\[ (1 - \delta_s) \|x\|_2^2 \le \|Ax\|_2^2 \le (1 + \delta_s) \|x\|_2^2. \]  (9)

We call the smallest constant $\delta_s$ satisfying this inequality the Restricted Isometry Constant (RIC).
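The failure of the $\ell_2$ modeling in the example above, and the contrast with an $\ell_q$ quasinorm for $q < 1$, can be checked numerically. The sketch below (NumPy assumed; it is an illustration, not part of the original paper) scans the solution line $x(t) = (1+t, t, t, t)$ and compares where $\|x(t)\|_2$ and $\|x(t)\|_{1/2}$ are minimized:

```python
import numpy as np

def lq(x, q):
    """The lq functional of (4): a norm for q >= 1, a quasinorm for 0 < q < 1."""
    return (np.abs(x) ** q).sum() ** (1.0 / q)

# Measurement matrix (7); the solutions of A x = y with y = A e1 = (1,0,0)^T
# form the line x(t) = (1 + t, t, t, t), t in R.
A = np.array([[1.0, 0.0, 0.0, -1.0],
              [0.0, 1.0, 0.0, -1.0],
              [0.0, 0.0, 1.0, -1.0]])
y = np.array([1.0, 0.0, 0.0])

# Minimum-l2-norm solution A^+ y = (3/4, -1/4, -1/4, -1/4): dense, not sparse.
x_l2 = np.linalg.pinv(A) @ y

# Scan the solution line: l2 is minimized at t = -1/4 (the dense solution),
# while the l_{1/2} quasinorm is minimized at t = 0, i.e., at the sparsest
# solution e1 = (1, 0, 0, 0).
ts = np.linspace(-1.0, 1.0, 2001)
sols = np.array([[1 + t, t, t, t] for t in ts])
t_l2 = ts[np.argmin([lq(x, 2.0) for x in sols])]
t_lhalf = ts[np.argmin([lq(x, 0.5) for x in sols])]
assert np.allclose(x_l2, [0.75, -0.25, -0.25, -0.25])
assert np.isclose(t_l2, -0.25) and abs(t_lhalf) < 1e-9
```

The grid search is only for illustration; it makes visible the claim that the $\ell_q$ quasinorm with $q < 1$ favors the sparsest point of the affine solution set, while $\ell_2$ does not.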
Candès and Tao showed, for instance, that any $s$-sparse vector is exactly recovered as soon as the measurement matrix $A$ satisfies the RIP with $\delta_{2s} \le 0.4142$ [9], in which case the solution of (8) is equivalent to that of (2). Later the RIC bound was improved: $\delta_{2s} \le 0.4531$ by Foucart and Lai [10], $\delta_{2s} \le 0.4721$ by Cai et al. [11], $\delta_{2s} \le 0.4931$ by Mo and Li [12], and especially $\delta_{2s} \le 0.5746$ by Zhou et al. [13]. The $\ell_1$ modeling is thus a seemingly perfect selection: it is convex and it exactly recovers sparse signals. What, then, about $\ell_q$-minimization for $0 < q < 1$? Remarkably, $\|x\|_q$ can also yield the sparse solutions on the hyperplane $y = Ax$, with sparsity promoted more and more strongly as $q$ decreases below $1$ [4]. This naturally arouses strong interest, and some important results have been obtained. Compared with the $\ell_1$ modeling, a sufficiently sparse signal can be recovered perfectly with the $\ell_q$ ($0 < q < 1$) modeling under less restrictive RIP requirements than those needed to guarantee perfect recovery with $q = 1$ [14]. Compared with the $\ell_0$ modeling, empirical evidence strongly indicates that solving the $\ell_q$ ($0 < q < 1$) modeling takes much less time than solving the case $q = 0$ [15]. These surprising phenomena undoubtedly inspire more people to study the $\ell_q$ ($0 < q < 1$) modeling, although the $\ell_q$-minimization problem for $0 < q < 1$ is also NP-hard in general [16]. Reconstruction of sparse signals via the $\ell_q$ ($0 < q < 1$) modeling has been considered in a series of papers [10, 17-20]. However, it was not clear whether there exists a hidden relationship between the $\ell_q$ ($0 < q < 1$) modeling and the $\ell_0$ modeling. Recently, [21] showed that the solution of the $\ell_q$ ($0 < q < 1$) problem (3) is also equivalent to that of the $\ell_0$ problem (2) when $q$ is smaller than a definite constant $C_{A,y}$ depending only on $A$ and $y$, which reveals the relationship behind these interesting phenomena.
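For small matrices, the RIC of (9) can be computed by exhaustive search over supports, since $\delta_s$ is the largest deviation of the eigenvalues of $A_S^T A_S$ from $1$ over all supports $S$ with $|S| = s$. The sketch below (NumPy assumed; brute force is exponential in $N$ and is for illustration only) makes the definition concrete:

```python
import numpy as np
from itertools import combinations

def ric(A, s):
    """Brute-force Restricted Isometry Constant of order s:
    delta_s = max over |S| = s of max |eigvals(A_S^T A_S) - 1|."""
    N = A.shape[1]
    delta = 0.0
    for S in combinations(range(N), s):
        G = A[:, S].T @ A[:, S]
        w = np.linalg.eigvalsh(G)          # eigenvalues in ascending order
        delta = max(delta, max(abs(w[0] - 1.0), abs(w[-1] - 1.0)))
    return delta

# Orthonormal columns give a perfect restricted isometry: delta_s = 0.
assert ric(np.eye(4), 2) == 0.0

# A Gaussian matrix with unit-norm columns: delta_1 is essentially 0
# (column norms are exactly 1), and delta_s is nondecreasing in s.
rng = np.random.default_rng(0)
B = rng.standard_normal((6, 8))
B /= np.linalg.norm(B, axis=0)
assert ric(B, 1) < 1e-12
assert ric(B, 2) <= ric(B, 3)
```

Computing the exact RIC this way is itself combinatorial, which is consistent with the fact that certifying the RIP is hard in general; the snippet is only meant to make inequality (9) tangible on toy sizes.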
The null space property (NSP) was introduced when checking the equivalence of the solutions of the $\ell_0$ modeling and the $\ell_1$ modeling. When we inspect the existence of solutions of the system $y = Ax$, $x \in R^N$, we often judge via the null space of the matrix $A$ using linear algebra; it is therefore natural to propose the null space property. We now introduce its definition [6, 8, 22]. Given a subset $S$ of $[N] := \{1, 2, \dots, N\}$ and a vector $v \in R^N$, we denote by $v_S$ the vector that coincides with $v$ on $S$ and vanishes on the complementary set $S^C := [N] \setminus S$.

Definition 1. For any set $S \subset [N]$ with $\mathrm{card}(S) \le s$, a matrix $A \in R^{m \times N}$ is said to satisfy the null space property of order $s$ if

\[ \|v_S\|_1 < \|v_{S^C}\|_1 \]  (10)

for all $v \in \mathrm{Ker}\,A \setminus \{0\}$.

The idea of the null space had already appeared in the study of best $L_1$ approximation [23], but the name was first used by Cohen et al. in [22]; Donoho and Huo [6] and Gribonval and Nielsen [8] developed the idea and gave the following theorem.

Theorem 2. Given a matrix $A \in R^{m \times N}$, every $s$-sparse vector $x \in R^N$ is the unique solution of the $\ell_1$-minimization with $y = Ax$ if and only if $A$ satisfies the null space property of order $s$.

This theorem not only gives an existence condition for the solution of the $\ell_1$ modeling but also demonstrates that this solution solves the $\ell_0$ modeling. That is, for every $y = Ax$ with $s$-sparse $x$, the solution of the $\ell_1$-minimization problem (8) actually solves the $\ell_0$-minimization problem (2) when the measurement matrix $A$ satisfies the null space property of order $s$. Indeed, assume that every $s$-sparse vector $x$ is recovered via $\ell_1$-minimization from $y = Ax$. Let $z$ be a minimizer of the $\ell_0$-minimization with $y = Ax$; then $\|z\|_0 \le \|x\|_0$, so that $z$ is also $s$-sparse.
But since every $s$-sparse vector is the unique $\ell_1$-minimizer, it follows that $z = x$ [16]. Like the RIP, the null space property is a necessary and sufficient condition for exact reconstruction of the signal via $\ell_1$-minimization, which is very important in theory. Many works [7, 8, 16, 24-26], especially [16], focused on the null space property for the $\ell_1$-minimization and gave the stable null space property, the robust null space property, and characterizations of the solutions of the $\ell_1$-minimization problem (8) under these properties. But as for the null space property for $\ell_q$-minimization with $0 < q < 1$, only a few papers [18, 27] have touched upon it. This paper focuses on the different types of null space property for $\ell_q$-minimization with $0 < q < 1$.

The remainder of the paper is organized as follows. In Section 2, based on the standard $\ell_q$ null space property, we give the definition of the $\ell_q$ stable null space property and derive its equivalent form; we then discuss the approximation of the solutions of the $\ell_q$-minimization with $Ax = Az$. In Section 3, we further consider the $\ell_q$ robust null space property and the $\ell_{q,p}$ robust null space property for $0 < p \le q < 1$, respectively; using these properties, we effectively characterize reconstruction results for the $\ell_q$-minimization when the observations are corrupted by noise. In Section 4, motivated by many practical scenarios, we extend the $\ell_q$ null space property to the $\ell_q$-synthesis modeling and the mixed $\ell_2/\ell_q$-minimization, which deal with dictionary-based sparse signals and block sparse signals, respectively. Finally, we relegate the proofs of the main results, that is, Theorems 7, 13, 18, and 26, to the Appendix.

2. $\ell_q$ Stable Null Space Property

In this section we discuss the $\ell_q$ stable null space property; for this purpose, we first introduce the $\ell_q$ null space property [8, 16].

Definition 3.
For any set $S \subset [N]$ with $\mathrm{card}(S) \le s$, a matrix $A \in R^{m \times N}$ is said to satisfy the $\ell_q$ null space property of order $s$ if

\[ \|v_S\|_q < \|v_{S^C}\|_q \]  (11)

for all $v \in \mathrm{Ker}\,A \setminus \{0\}$; here $0 < q < 1$, and unless a special statement is made we always suppose $0 < q < 1$ in what follows.

Definition 3 generalizes Definition 1. This definition is important because it characterizes the existence and uniqueness of the solution of the $\ell_q$-minimization (3) for $0 < q < 1$.

Theorem 4. Given a matrix $A \in R^{m \times N}$, every $s$-sparse vector $x \in R^N$ is the unique solution of the $\ell_q$-minimization with $y = Ax$ if and only if $A$ satisfies the $\ell_q$ null space property of order $s$.

The proof of Theorem 4 is implied in [8] and easily found in [18]. In more realistic scenarios, however, we can only claim that the vectors are close to sparse vectors rather than exactly sparse. In such cases, we would like to recover a vector $x \in R^N$ with an error controlled by its distance to $s$-sparse vectors. This property is usually referred to as the stability of the reconstruction scheme with respect to sparsity defect [16]. To discuss this property better, we give the definition of the $\ell_q$ stable null space property.

Definition 5. For any set $S \subset [N]$ with $\mathrm{card}(S) \le s$, a matrix $A \in R^{m \times N}$ is said to satisfy the $\ell_q$ stable null space property of order $s$ with constant $0 < \rho < 1$ if

\[ \|v_S\|_q \le \rho^{1/q} \|v_{S^C}\|_q \]  (12)

for all $v \in \mathrm{Ker}\,A$.

Remark 6. Formula (12) is often replaced by $\|v_S\|_q^q \le \rho \|v_{S^C}\|_q^q$ with constant $0 < \rho < 1$.

Obviously, Definition 5 is stronger than Definition 3 if $v \in \mathrm{Ker}\,A \setminus \{0\}$. But the following theorem demonstrates why it is important to introduce Definition 5: the $\ell_q$ stable null space property characterizes the distance between two vectors $x \in R^N$ and $z \in R^N$ satisfying $Az = Ax$.

Theorem 7.
For any set $S \subset [N]$ with $\mathrm{card}(S) \le s$, the matrix $A \in R^{m \times N}$ satisfies the $\ell_q$ stable null space property of order $s$ with constant $0 < \rho < 1$ if and only if

\[ \|z - x\|_q^q \le \frac{1+\rho}{1-\rho} \left( \|z\|_q^q - \|x\|_q^q + 2 \|x_{S^C}\|_q^q \right) \]  (13)

for all vectors $x, z \in R^N$ with $Az = Ax$.

We defer the proof of this theorem to the Appendix. From this theorem we obtain the main stability result as follows.

Corollary 8. Suppose that a matrix $A \in R^{m \times N}$ satisfies the $\ell_q$ stable null space property of order $s$ with constant $0 < \rho < 1$; then for any $x \in R^N$, a solution $x^*$ of the $\ell_q$-minimization with $y = Ax$ approximates the vector $x$ with $\ell_q$-error

\[ \|x - x^*\|_q \le \left[ \frac{2(1+\rho)}{1-\rho} \right]^{1/q} \sigma_s(x)_q, \]  (14)

where

\[ \sigma_s(x)_q := \inf_{z \in \Sigma_s} \|x - z\|_q, \]  (15)

which denotes the error of best $s$-term approximation to $x$ with respect to the $\ell_q$-quasinorm. Obviously, if $x \in \Sigma_s$, then $\sigma_s(x)_q = 0$.

Proof. Take $S$ to be a set of $s$ largest absolute coefficients of $x$, so that $\sigma_s(x)_q = \|x_{S^C}\|_q$. If $x^*$ is a minimizer of the $\ell_q$-minimization, then $\|x^*\|_q \le \|x\|_q$ with $Ax^* = Ax$. Taking $z = x^*$ in inequality (13),

\[ \|x - x^*\|_q^q \le \frac{1+\rho}{1-\rho} \left( \|x^*\|_q^q - \|x\|_q^q + 2\sigma_s(x)_q^q \right) \le \frac{2(1+\rho)}{1-\rho} \sigma_s(x)_q^q, \]  (16)

which is the desired inequality.

By Corollary 8, using $\sigma_s(x)_q = 0$ for $x \in \Sigma_s$, we have the following corollary.

Corollary 9.
Suppose that a matrix $A \in R^{m \times N}$ satisfies the $\ell_q$ stable null space property of order $s$ with constant $0 < \rho < 1$; then every $s$-sparse vector $x \in R^N$ can be exactly recovered by the $\ell_q$-minimization with $y = Ax$.

Compared with Theorem 4, the $\ell_q$ stable null space property is only a sufficient condition to exactly recover sparse vectors via $\ell_q$-minimization.

Corollary 10. Suppose that a matrix $A \in R^{m \times N}$ satisfies the $\ell_q$ stable null space property of order $s$ with constant $0 < \rho < 1$; then for any $0 < p < q < 1$ and any $x \in B_p^N := \{x \in R^N : \|x\|_p \le 1\}$, a solution $x^*$ of the $\ell_q$-minimization with $y = Ax$ approximates the vector $x$ with $\ell_q$-error

\[ \|x - x^*\|_q \le \left[ \frac{2(1+\rho)}{1-\rho} \right]^{1/q} \frac{C_{p,q}}{s^{1/p - 1/q}} \|x\|_p, \]  (17)

where $C_{p,q} = [\alpha^\alpha (1-\alpha)^{1-\alpha}]^{1/p} \le 1$, $\alpha = p/q$.

Proof. This result can be derived from the following inequality [16]:

\[ \sigma_s(x)_q \le \frac{C_{p,q}}{s^{1/p - 1/q}} \|x\|_p \]  (18)

for any $0 < p < q$ and $x \in R^N$.

Remark 11. Under the condition of Corollary 10, choose $p = 1/3$, $q = 1/2$, and $\rho = 1/2$, and let $\|x\|_p \le 9/48$; then

\[ \|x - x^*\|_{1/2} \le \frac{1}{s}. \]  (19)

Choose $p = 1/4$, $q = 1/2$, and $\rho = 1/2$, and let $\|x\|_p \le 4/9$; then

\[ \|x - x^*\|_{1/2} \le \frac{1}{s^2}. \]  (20)

3. $\ell_q$ Robust Null Space Property

In realistic situations, it is also inconceivable to measure a signal $x \in R^N$ with infinite precision. This means that the measurement vector $y \in R^m$ is only an approximation of the vector $Ax \in R^m$, with

\[ \|y - Ax\|_2 \le \varepsilon \]  (21)

for some $\varepsilon \ge 0$. In this case, the reconstruction scheme should be required to output a vector $x^* \in R^N$ whose distance to the original vector $x \in R^N$ is controlled by the measurement error $\varepsilon \ge 0$. This property is usually referred to as the robustness of the reconstruction scheme with respect to measurement error [16]. We are going to investigate the following modeling:

\[ \min_{x \in R^N} \|x\|_q \quad \text{s.t.} \quad \|y - Ax\|_2 \le \varepsilon. \]  (22)

Definition 12. For any set $S \subset [N]$ with $\mathrm{card}(S) \le s$, a matrix $A \in R^{m \times N}$ is said to satisfy the $\ell_q$ robust null space property of order $s$ with constants $0 < \rho < 1$ and $\gamma > 0$ if

\[ \|v_S\|_q^q < \rho \|v_{S^C}\|_q^q + \gamma \|Av\|_2 \]  (23)

for all $v \in R^N$.

We see that Definition 12 is broader than Definition 5, because it does not require that $v \in \mathrm{Ker}\,A$. Obviously, if $v \in \mathrm{Ker}\,A$, the $\ell_q$ robust null space property implies the $\ell_q$ stable null space property. Similar to Theorem 7, we also give an equivalent form of the $\ell_q$ robust null space property.

Theorem 13.
For any set $S \subset [N]$ with $\mathrm{card}(S) \le s$, the matrix $A \in R^{m \times N}$ satisfies the $\ell_q$ robust null space property of order $s$ with constants $0 < \rho < 1$ and $\gamma > 0$ if and only if

\[ \|z - x\|_q^q \le \frac{1+\rho}{1-\rho} \left( \|z\|_q^q - \|x\|_q^q + 2 \|x_{S^C}\|_q^q \right) + \frac{2\gamma}{1-\rho} \|A(z - x)\|_2 \]  (24)

for all vectors $x, z \in R^N$.

The proof of Theorem 13 is deferred to the Appendix, and the spirit of this theorem is the following corollary.

Corollary 14. Suppose that a matrix $A \in R^{m \times N}$ satisfies the $\ell_q$ robust null space property of order $s$ with constants $0 < \rho < 1$ and $\gamma > 0$; then for any $x \in R^N$, a solution $x^*$ of the $\ell_q$-minimization with $y = Ax + e$ and $\|e\|_2 \le \varepsilon$ approximates the vector $x$ with $\ell_q$-error

\[ \|x - x^*\|_q^q \le \frac{2(1+\rho)}{1-\rho} \sigma_s(x)_q^q + \frac{4\gamma}{1-\rho} \varepsilon. \]  (25)

Proof. Noting that $\|A(z - x)\|_2 \le 2\varepsilon$, the rest of the proof is similar to that of Corollary 8.

Remark 15. When $\varepsilon = 0$, inequality (25) reduces to the conclusion of Corollary 8.

Remark 16. Using the inequality $(a^q + b^q)^{1/q} \le 2^{1/q-1}(a + b)$ for $a, b \ge 0$ and $0 < q \le 1$, inequality (25) can be modified to

\[ \|x - x^*\|_q \le 2^{1/q-1} \left[ \left( \frac{2(1+\rho)}{1-\rho} \right)^{1/q} \sigma_s(x)_q + \left( \frac{4\gamma}{1-\rho}\,\varepsilon \right)^{1/q} \right] = 2^{2/q-1} \left( \frac{1+\rho}{1-\rho} \right)^{1/q} \sigma_s(x)_q + 2^{3/q-1} \left( \frac{\gamma}{1-\rho}\,\varepsilon \right)^{1/q}, \]  (26)

and if we set $\|e\|_2 \le \varepsilon^q$, $0 < q \le 1$, then we can get the following inequality:

\[ \|x - x^*\|_q \le C_{\rho,q}\, \sigma_s(x)_q + C_{\rho,q,\gamma}\, \varepsilon, \]  (27)

where $C_{\rho,q} = 2^{2/q-1} ((1+\rho)/(1-\rho))^{1/q}$ and $C_{\rho,q,\gamma} = 2^{3/q-1} (\gamma/(1-\rho))^{1/q}$.
Inequality (27) or (14) is the same as that of Theorem 3.1 in [10], up to the constant, when we only consider $y = Ax$, but we used a different approach; the latter is based on the measurement matrix $A$ satisfying

\[ \gamma_{2t} - 1 < 4(\sqrt{2} - 1) \left( \frac{t}{s} \right)^{1/q - 1/2} \]  (28)

for some integer $t \ge s$, where $\gamma_{2t} := \beta_{2t}^2 / \alpha_{2t}^2$, while $\alpha_s$ and $\beta_s$ are the best constants such that the measurement matrix $A$ satisfies

\[ \alpha_s \|x\|_2 \le \|Ax\|_2 \le \beta_s \|x\|_2, \quad x \in \Sigma_s. \]  (29)

If we add the robust term, inequality (27) is better than that of Theorem 3.1 in [10].

The rest of this section focuses on another robust null space property, the $\ell_{q,p}$ robust null space property for $0 < p \le q \le 1$. In the following definition, we use the two quasinorms $\ell_q$ and $\ell_p$ rather than a single quasinorm.

Definition 17. Given $0 < p \le q \le 1$, for any set $S \subset [N]$ with $\mathrm{card}(S) \le s$, a matrix $A \in R^{m \times N}$ is said to satisfy the $\ell_{q,p}$ robust null space property of order $s$ with constants $0 < \rho < 1$ and $\gamma > 0$ if

\[ \|v_S\|_q^q < \frac{\rho}{s^{q/p - 1}} \|v_{S^C}\|_p^p + \frac{\gamma}{s^{q/p - 1}} \|Av\|_2 \]  (30)

for all $v \in R^N$.

Obviously, when $p = q < 1$, Definition 17 reduces to Definition 12, so we will consider the case $p < q < 1$.

Theorem 18. Given $0 < p < q \le 1$, for any set $S \subset [N]$ with $\mathrm{card}(S) \le s$, suppose that the matrix $A \in R^{m \times N}$ satisfies the $\ell_{q,p}$ robust null space property of order $s$ with constants $0 < \rho < 1$ and $\gamma > 0$; then, for any $x, z \in R^N$ with $\|z - x\|_p \le 1$, one has

\[ \|z - x\|_q^q \le \frac{C_1}{s^{q/p - 1}} \left( \|z\|_p^p - \|x\|_p^p + 2 \|x_{S^C}\|_p^p \right) + \frac{C_2}{s^{q/p - 1}} \|A(z - x)\|_2, \]  (31)

where $C_1 = (1+\rho)^2/(1-\rho)$ and $C_2 = (3+\rho)\gamma/(1-\rho)$.

We defer the proof of this theorem to the Appendix; in the proof, we will see that the condition $\|z - x\|_p \le 1$, $x, z \in R^N$, is necessary in Theorem 18.

Corollary 19.
Given $0 < p < q \le 1$, suppose that a matrix $A \in R^{m \times N}$ satisfies the $\ell_{q,p}$ robust null space property of order $s$ with constants $0 < \rho < 1$ and $\gamma > 0$; then for any $x \in R^N$, a solution $x^*$ of the $\ell_q$-minimization with $y = Ax + e$ and $\|e\|_2 \le \varepsilon$ approximates the vector $x$ with $\ell_q$-error

\[ \|x - x^*\|_q^q \le \frac{2C_1}{s^{q/p - 1}} \sigma_s(x)_p^p + \frac{2C_2}{s^{q/p - 1}} \varepsilon, \]  (32)

where $C_1$ and $C_2$ are the same as those of Theorem 18.

Remark 20. In Corollary 19, if we set $\varepsilon = 0$, then the $\ell_q$-error is controlled by $(1/s^{q/p-1})\, \sigma_s(x)_p^p$; that is,

\[ \|x - x^*\|_q^q \le \frac{C}{s^{q/p - 1}} \sigma_s(x)_p^p. \]  (33)

Although the quasinorm also satisfies $\|\cdot\|_q \le \|\cdot\|_p$ for $0 < p < q < 1$, the term $(1/s^{q/p-1})\, \sigma_s(x)_p^p$ is not worse than $\sigma_s(x)_q^q$, because the reconstruction $\ell_q$-error decays at the rate $s^{q/p-1}$.

4. Extensions

In this section, we discuss two extensions of the $\ell_q$ null space property: one is the null space property of the $\ell_q$-synthesis modeling, and the other is the $\ell_q$ block null space property of the mixed $\ell_2/\ell_q$-minimization.

4.1. Null Space Property of the $\ell_q$-Synthesis Modeling. The techniques above hold for signals which are sparse in the standard coordinate basis or sparse with respect to some other orthonormal basis. However, in practical examples there are numerous signals which are sparse in an overcomplete dictionary $D$ rather than in an orthonormal basis; see [27-32]. Here $D$ is an $n \times N$ ($n < N$) matrix that is often rather coherent in applications. We also call $D$ a frame, in the sense that the columns of $D$ form a finite frame. In this setting, the signal $x \in R^n$ can be represented as $x = D\theta$, where $\theta$ is an $s$-sparse vector in $R^N$. Such signals are called dictionary-sparse signals or frame-sparse signals [33]. When the dictionary $D$ is specified, the signals are also called $D$-sparse signals.
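A tiny numerical example makes the distinction concrete. The sketch below (NumPy assumed; the $2 \times 3$ frame is a toy assumption, not taken from the paper) builds a signal that is dense in the standard basis yet $1$-sparse in an overcomplete dictionary $D$:

```python
import numpy as np

# A hypothetical 2 x 3 frame: the standard basis of R^2 plus one extra
# unit-norm atom along the diagonal direction.
D = np.array([[1.0, 0.0, 1.0 / np.sqrt(2)],
              [0.0, 1.0, 1.0 / np.sqrt(2)]])

theta = np.array([0.0, 0.0, 1.0])   # 1-sparse coefficient vector
x = D @ theta                       # x = D * theta is 1-sparse in D

# x has no zero coordinate, so it is not sparse in the standard basis,
# yet it admits a 1-sparse representation in the dictionary D.
assert np.count_nonzero(theta) == 1
assert np.count_nonzero(x) == 2
assert np.allclose(x, [1.0 / np.sqrt(2), 1.0 / np.sqrt(2)])
```

This is exactly the situation the synthesis formulation below targets: sparsity lives in the coefficient vector $\theta$, not in the coordinates of $x$.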
Many signals naturally possess frame-sparse coefficients, such as radar images (Gabor frames), cartoon-like images (curvelets), and images with directional features (shearlets) [32-35]. The $\ell_1$-synthesis modeling [34] is defined by

\[ \hat{\theta} = \arg\min_{\theta \in R^N} \|\theta\|_1 \quad \text{s.t.} \quad y = AD\theta, \qquad \hat{x} = D\hat{\theta}. \]  (34)

The method is called the $\ell_1$-synthesis modeling due to the second, synthesizing step. It is related to the $\ell_1$-analysis modeling, which finds the solution $\hat{x}$ directly by solving the problem [32]

\[ \hat{x} = \arg\min_{x \in R^n} \|D^* x\|_1 \quad \text{s.t.} \quad y = Ax, \]  (35)

where $D^*$ denotes the transpose of $D$. Empirical studies show that the $\ell_1$-synthesis modeling often provides good recovery; however, it is fundamentally distinct from the $\ell_1$-analysis modeling. The geometry of the two problems was analyzed in [28], where it was shown that, because their geometrical structures exhibit substantially different properties, there is a large gap between the two formulations. This theoretical gap was also demonstrated by numerical simulations in [28], which showed that the two methods perform very differently on large families of signals. Recently, [33] elaborated on why the $\ell_1$-synthesis was introduced and on the relationship between the two methods; it appears that the $\ell_1$-synthesis is the more thorough method, the $\ell_1$-analysis being a subproblem of the $\ell_1$-synthesis via sparse duals. Besides, [33] built up a framework for the $\ell_1$-synthesis method and proposed a dictionary-based null space property, which is the first sufficient and necessary condition for the success of the $\ell_1$-synthesis method.
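After the standard split $\theta = u - w$ with $u, w \ge 0$, the $\ell_1$-synthesis problem (34) is a linear program and can be handed to an off-the-shelf LP solver. A sketch (SciPy assumed; the $2 \times 3$ frame and trivial measurements are toy assumptions, not from the paper):

```python
import numpy as np
from scipy.optimize import linprog

def l1_synthesis(A, D, y):
    """Solve min ||theta||_1 s.t. y = A D theta as a linear program
    (theta = u - w, u, w >= 0), then synthesize x = D theta."""
    M = A @ D
    N = M.shape[1]
    c = np.ones(2 * N)                      # minimize sum(u) + sum(w)
    A_eq = np.hstack([M, -M])               # M (u - w) = y
    res = linprog(c, A_eq=A_eq, b_eq=y,
                  bounds=[(0, None)] * (2 * N), method="highs")
    theta = res.x[:N] - res.x[N:]
    return theta, D @ theta

D = np.array([[1.0, 0.0, 1.0 / np.sqrt(2)],
              [0.0, 1.0, 1.0 / np.sqrt(2)]])
A = np.eye(2)                               # trivial measurements, illustration only
x0 = D @ np.array([0.0, 0.0, 1.0])          # a signal that is 1-sparse in D
theta, x_hat = l1_synthesis(A, D, x0)

# l1-synthesis picks out the 1-sparse representation (0, 0, 1).
assert np.allclose(x_hat, x0, atol=1e-8)
assert np.isclose(theta[2], 1.0, atol=1e-8)
```

The same LP machinery does not apply to the $\ell_q$-synthesis modeling for $q < 1$, which is nonconvex; the snippet only illustrates the $q = 1$ case discussed here.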
In this subsection, we only propose the $\ell_q$-synthesis modeling and give the null space property of this minimization; we do not elaborate on the relationship between the $\ell_q$-synthesis modeling and the $\ell_q$-analysis modeling. The $\ell_q$-synthesis modeling is as follows:

\[ \hat{\theta} = \arg\min_{\theta \in R^N} \|\theta\|_q \quad \text{s.t.} \quad y = AD\theta, \qquad \hat{x} = D\hat{\theta}. \]  (36)

Given a frame $D \in R^{n \times N}$, $D^{-1}(B)$ denotes the preimage of the set $B$ under the frame $D$. We introduce the following notation [33]:

\[ D\Sigma_s = \{ x \in R^n : \exists\, \theta \ \text{such that} \ x = D\theta, \ \|\theta\|_0 \le s \}. \]  (37)

Definition 21. Fix a dictionary $D \in R^{n \times N}$; for any set $S \subset [N]$ with $\mathrm{card}(S) \le s$, a matrix $A \in R^{m \times n}$ is said to satisfy the $\ell_q$ null space property of the frame $D$ of order $s$ if for any $v \in D^{-1}(\mathrm{Ker}\,A \setminus \{0\})$ there exists $u \in \mathrm{Ker}\,D$ such that

\[ \|v_S + u\|_q < \|v_{S^C}\|_q. \]  (38)

This null space property is abbreviated to $D$-NSP.

Here a natural question arises: why not simply require $AD$ to satisfy the $\ell_q$ null space property? The major difference is that the $D$-NSP essentially takes the minimum of $\|v_S + u\|_q$ over all $u \in \mathrm{Ker}\,D$; with $u = 0$, the $D$-NSP degenerates into $AD$ having the $\ell_q$ null space property. Therefore this new condition on the $\ell_q$-synthesis modeling is weaker than $AD$ having the $\ell_q$ null space property. The following theorem asserts that the null space property of a frame $D$ of order $s$ is a sufficient and necessary condition for the $\ell_q$-synthesis modeling (36) to successfully recover all the $D$-sparse signals with sparsity $s$.

Theorem 22. Fix a dictionary $D \in R^{n \times N}$ and let $A \in R^{m \times n}$; for any $x_0 \in D\Sigma_s$ one has $\hat{x} = x_0$, where $\hat{x}$ is the signal reconstructed from $y = Ax_0$ using the $\ell_q$-synthesis modeling, if and only if $A$ satisfies the $\ell_q$ null space property of the frame $D$ of order $s$.

Proof. Combining the proof of Theorem 4.2 in [33] with the proof of Theorem 4 (Lemma 2.2 in [18]), the content adds very little to the work.
Here, the proof is omitted.

Remark 23. The null space property of the $\ell_q$-analysis modeling is implied in [27].

4.2. Block Null Space Property of the Mixed $\ell_2/\ell_q$-Minimization. Conventional compressed sensing only considers the sparsity that the signal $x$ has at most $s$ nonzero elements, which can appear anywhere in the vector; it does not take into account any further structure. However, in many practical scenarios, the unknown signal $x$ is not only sparse but also exhibits additional structure, in the form that the nonzero elements are aligned in blocks rather than being arbitrarily spread throughout the vector. These signals are referred to as block sparse signals and arise in various applications, for example, DNA microarrays [36], equalization of sparse communication channels [37], and color imaging [38]. To define block sparsity, it is necessary to introduce some further notation. Suppose that $x \in R^N$ is split into $M$ blocks, $x[1], x[2], \dots, x[M]$, of lengths $n_1, n_2, \dots, n_M$, respectively; that is,

\[ x = [\underbrace{x_1, \dots, x_{n_1}}_{x[1]},\ \underbrace{x_{n_1+1}, \dots, x_{n_1+n_2}}_{x[2]},\ \dots,\ \underbrace{x_{N-n_M+1}, \dots, x_N}_{x[M]}]^T, \]  (39)

and $N = \sum_{i=1}^{M} n_i$. A vector $x \in R^N$ is called block $s$-sparse over $I = \{n_1, n_2, \dots, n_M\}$ if $x[i]$ is nonzero for at most $s$ indices $i$. When $n_i = 1$ for each $i$, block sparsity reduces to the conventional definition of a sparse vector. Denote

\[ \|x\|_{2,0} = \sum_{i=1}^{M} I(\|x[i]\|_2 > 0), \]  (40)

where $I(\|x[i]\|_2 > 0)$ is the indicator function that takes the value $1$ if $\|x[i]\|_2 > 0$ and $0$ otherwise. So a block $s$-sparse vector $x$ can be defined by $\|x\|_{2,0} \le s$, and $\|x\|_0 = \sum_{i=1}^{M} \|x[i]\|_0$. Here, to avoid confusion with the notation above, we especially emphasize that $s$ now denotes the number of nonzero blocks of the signal out of $M$, not the number of nonzero entries out of $N$ as in the preceding sections.
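The block notation (39)-(40) is easy to make concrete. A minimal sketch (NumPy assumed; the block lengths and signal are illustrative choices, not from the paper):

```python
import numpy as np

def split_blocks(x, lengths):
    """Split x into blocks x[1], ..., x[M] of the given lengths, as in (39)."""
    idx = np.cumsum(lengths)[:-1]
    return np.split(np.asarray(x, dtype=float), idx)

def norm_2_0(x, lengths):
    """||x||_{2,0} of (40): the number of nonzero blocks."""
    return sum(np.linalg.norm(b) > 0 for b in split_blocks(x, lengths))

# A 6-dimensional signal with blocks of lengths (2, 2, 2): one active block.
x = [0.0, 0.0, 3.0, -4.0, 0.0, 0.0]
assert norm_2_0(x, [2, 2, 2]) == 1     # block 1-sparse ...
assert np.count_nonzero(x) == 2        # ... though 2-sparse entrywise
```

The example also shows why the two sparsity counts must not be confused: the same vector is $1$-sparse in blocks but $2$-sparse in entries.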
To recover a block sparse signal, similarly to the standard $\ell_0$-minimization, one pursues the sparsest block sparse vector with the following mixed $\ell_2/\ell_0$ modeling [39-41]:

\[ \min_{x \in R^N} \|x\|_{2,0} \quad \text{s.t.} \quad y = Ax. \]  (41)

But the mixed $\ell_2/\ell_0$-minimization problem is also NP-hard. It is natural that one uses the mixed $\ell_2/\ell_1$-minimization to replace the mixed $\ell_2/\ell_0$ model [39-42]:

\[ \min_{x \in R^N} \|x\|_{2,1} \quad \text{s.t.} \quad y = Ax, \]  (42)

where

\[ \|x\|_{2,1} = \sum_{i=1}^{M} \|x[i]\|_2. \]  (43)

To investigate the performance of this method, Eldar and Mishali [40] proposed the definition of the block Restricted Isometry Property (block RIP) of a measurement matrix $A \in R^{m \times N}$. We say that a matrix $A$ satisfies the block RIP over $I = \{n_1, n_2, \dots, n_M\}$ of order $s$ with positive constant $\delta_{s|I}$ if, for every block $s$-sparse vector $x \in R^N$ over $I$,

\[ (1 - \delta_{s|I}) \|x\|_2^2 \le \|Ax\|_2^2 \le (1 + \delta_{s|I}) \|x\|_2^2. \]  (44)

Obviously, the block RIP is a natural extension of the standard RIP, but it is a less stringent requirement compared with the standard RIP. Besides, the number of measurements required to satisfy the block RIP is smaller than that required for the standard RIP [43]. Eldar and Mishali [40] proved that the mixed $\ell_2/\ell_1$-minimization can exactly recover any block $s$-sparse signal when the measurement matrix $A$ satisfies the block RIP with $\delta_{2s|I} < 0.414$. Recently, Lin and Li [43] improved the sufficient condition to $\delta_{2s|I} < 0.4931$ and established another sufficient condition $\delta_{s|I} < 0.307$ for exact recovery. There are also a number of works based on non-RIP analysis that characterize the theoretical performance of the mixed $\ell_2/\ell_1$-minimization, such as block coherence [39], strong group sparsity [44], and null space characterization [42]. Based on the previous discussion of the performance of the $\ell_q$-minimization, it is also natural to make an ongoing effort to extend the $\ell_q$-minimization to the setting of block sparse signal recovery.
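Like the standard RIC, the block RIC $\delta_{s|I}$ of (44) can be computed by brute force on toy sizes, and comparing it with the standard RIC illustrates why the block RIP is the less stringent requirement: unions of $s$ blocks form only a subset of all supports of the corresponding size. A sketch (NumPy assumed; illustration only):

```python
import numpy as np
from itertools import combinations

def block_ric(A, lengths, s):
    """Brute-force block RIC delta_{s|I} of (44): max deviation of the
    eigenvalues of A_T^T A_T from 1 over all unions T of s blocks."""
    starts = np.concatenate(([0], np.cumsum(lengths)))
    delta = 0.0
    for S in combinations(range(len(lengths)), s):
        cols = np.concatenate([np.arange(starts[i], starts[i + 1]) for i in S])
        w = np.linalg.eigvalsh(A[:, cols].T @ A[:, cols])
        delta = max(delta, max(abs(w[0] - 1.0), abs(w[-1] - 1.0)))
    return delta

rng = np.random.default_rng(1)
A = rng.standard_normal((8, 6)) / np.sqrt(8)
lengths = [2, 2, 2]

# Any 2 blocks here span 4 coordinates, and those supports are a subset of
# all 4-element supports, so the block RIC of order 2 never exceeds the
# standard RIC of order 4 (block lengths all 1 recovers the standard RIC).
d_block = block_ric(A, lengths, 2)
d_std = block_ric(A, [1] * 6, 4)
assert d_block <= d_std + 1e-12
```

This containment of supports is the elementary reason behind the "less stringent" comparison quoted above; the quantitative measurement counts are in [43].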
Therefore, the $\ell_2/\ell_q$-minimization was proposed in [38, 45, 46]. Consider

\[ \min_{x \in R^N} \|x\|_{2,q} \quad \text{s.t.} \quad y = Ax, \]  (45)

where

\[ \|x\|_{2,q} = \Big( \sum_{i=1}^{M} \|x[i]\|_2^q \Big)^{1/q}. \]  (46)

In [38, 45, 46], some numerical experiments demonstrated that fewer measurements are needed for exact recovery when $0 < q < 1$ than when $q = 1$. Moreover, exact recovery conditions based on the block restricted $q$-isometry property have also been studied [46]. In this subsection, we extend the $\ell_q$ null space property to block sparse signals. For an $M$-block signal $x \in R^N$ structured as in (39), we take $S \subset \{1, 2, \dots, M\}$, and by $S^C$ we mean the complement of the set $S$ with respect to $\{1, 2, \dots, M\}$; that is, $S^C = \{1, 2, \dots, M\} \setminus S$.

Definition 24. For any set $S \subset \{1, 2, \dots, M\}$ with $\mathrm{card}(S) \le s$, a matrix $A \in R^{m \times N}$ is said to satisfy the $\ell_q$ block null space property over $I = \{n_1, n_2, \dots, n_M\}$ of order $s$ if

\[ \|v_S\|_{2,q} < \|v_{S^C}\|_{2,q} \]  (47)

for all $v \in \mathrm{Ker}\,A \setminus \{0\}$, where $v_S$ denotes the vector equal to $v$ on the block index set $S$ and zero elsewhere.

Remark 25. Inequality (47) can be replaced by $\|v_S\|_{2,q}^q < \|v_{S^C}\|_{2,q}^q$ or $\|v_S\|_{2,q}^q < (1/2) \|v\|_{2,q}^q$ for every set $S \subset \{1, 2, \dots, M\}$ with $\mathrm{card}(S) \le s$.

Using Definition 24, we can easily characterize the existence and uniqueness of the solution of the mixed $\ell_2/\ell_q$-minimization (45).

Theorem 26. Given a matrix $A \in R^{m \times N}$, every block $s$-sparse vector $x \in R^N$ is the unique solution of the mixed $\ell_2/\ell_q$-minimization with $y = Ax$ if and only if $A$ satisfies the $\ell_q$ block null space property of order $s$.

We defer the proof of Theorem 26 to the Appendix. So far, we have only extended the standard $\ell_q$ NSP to the $\ell_q$-synthesis modeling and the mixed $\ell_2/\ell_q$-minimization, respectively. The extension of the stable $\ell_q$ NSP and the robust $\ell_q$ NSP, especially the discussion of block sparse signals via the mixed $\ell_2/\ell_q$-minimization, is left for our future work.
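The mixed quasinorm (46) and the block-level check (47) can both be coded directly. The sketch below (NumPy assumed; the null space vector is a toy assumption chosen so that, when the null space is one-dimensional, checking the single basis vector is exact because (47) is invariant under scaling of $v$):

```python
import numpy as np
from itertools import combinations

def norm_2_q(x, lengths, q):
    """Mixed l2/lq quasinorm (46): (sum_i ||x[i]||_2^q)^(1/q)."""
    idx = np.cumsum(lengths)[:-1]
    blocks = np.split(np.asarray(x, dtype=float), idx)
    return sum(np.linalg.norm(b) ** q for b in blocks) ** (1.0 / q)

def block_nsp_holds(v, lengths, s, q):
    """Check (47) for one null space vector v: ||v_S||_{2,q} < ||v_{S^C}||_{2,q}
    for all block index sets S with card(S) <= s (exact if Ker A = span{v})."""
    idx = np.cumsum(lengths)[:-1]
    a = [np.linalg.norm(b) ** q for b in np.split(np.asarray(v, dtype=float), idx)]
    total = sum(a)
    return all(sum(a[i] for i in S) < total - sum(a[i] for i in S)
               for S in combinations(range(len(a)), s))

assert np.isclose(norm_2_q([3.0, 4.0, 0.0, 0.0], [2, 2], 1.0), 5.0)

# Hypothetical null space vector with three equal-energy blocks: one block
# is beaten by the other two, but two blocks tie with the remaining one,
# so the lq block NSP holds at order 1 and fails at order 2.
v = [1.0, 1.0, 1.0, 1.0, 1.0, 1.0]
assert block_nsp_holds(v, [2, 2, 2], 1, q=0.5)
assert not block_nsp_holds(v, [2, 2, 2], 2, q=0.5)
```

By Theorem 26, for a matrix with this null space the mixed $\ell_2/\ell_q$-minimization would uniquely recover every block $1$-sparse vector but not every block $2$-sparse one; the exhaustive check is exponential in $M$ and is for illustration only.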
Appendix

In this section, we provide the proofs of Theorems 7, 13, 18, and 26. We will need the following lemma.

Lemma A.1. Given a set $S \subset [N]$ and vectors $x, z \in \mathbb{R}^N$,

$$\|(x - z)_{S^C}\|_p^p \le \|z\|_p^p - \|x\|_p^p + \|(x - z)_S\|_p^p + 2\|x_{S^C}\|_p^p. \quad (A.1)$$

In particular, if $x^*$ is a solution of the $\ell_p$-minimization with $y = Ax$, then

$$\|(x - x^*)_{S^C}\|_p^p \le \|(x - x^*)_S\|_p^p + 2\|x_{S^C}\|_p^p. \quad (A.2)$$

Proof. Inequality (A.1) follows easily from the inequalities

$$\|(x - z)_{S^C}\|_p^p \le \|x_{S^C}\|_p^p + \|z_{S^C}\|_p^p,$$
$$\|x\|_p^p = \|x_{S^C}\|_p^p + \|x_S\|_p^p \le \|x_{S^C}\|_p^p + \|(x - z)_S\|_p^p + \|z_S\|_p^p. \quad (A.3)$$

Inequality (A.2) then follows by taking $z = x^*$ and noting that $\|x^*\|_p^p \le \|x\|_p^p$.

Proof of Theorem 7. For any set $S \subset [N]$ with $\operatorname{card}(S) \le s$, let us first assume that the matrix $A \in \mathbb{R}^{m \times N}$ satisfies the $\ell_p$ stable null space property of order $s$ with constant $0 < \rho < 1$. For $x, z \in \mathbb{R}^N$ with $Az = Ax$, since $v = z - x \in \operatorname{Ker} A$, the $\ell_p$ stable null space property yields

$$\|v_S\|_p^p \le \rho \|v_{S^C}\|_p^p. \quad (A.4)$$

From Lemma A.1, we get

$$\|v_{S^C}\|_p^p \le \|z\|_p^p - \|x\|_p^p + \|v_S\|_p^p + 2\|x_{S^C}\|_p^p \le \|z\|_p^p - \|x\|_p^p + \rho\|v_{S^C}\|_p^p + 2\|x_{S^C}\|_p^p. \quad (A.5)$$

Since $0 < \rho < 1$,

$$\|v_{S^C}\|_p^p \le \frac{1}{1 - \rho}\left(\|z\|_p^p - \|x\|_p^p + 2\|x_{S^C}\|_p^p\right). \quad (A.6)$$

Using (12), we derive

$$\|v\|_p^p = \|v_{S^C}\|_p^p + \|v_S\|_p^p \le (1 + \rho)\|v_{S^C}\|_p^p \le \frac{1 + \rho}{1 - \rho}\left(\|z\|_p^p - \|x\|_p^p + 2\|x_{S^C}\|_p^p\right), \quad (A.7)$$

which is the desired inequality.
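Lemma A.1 holds for arbitrary vectors, so it can be sanity-checked numerically. A small sketch with $p = 0.5$ (the vectors and the set $S$ are made-up illustrative data):

```python
def lp_pow(v, p):
    """||v||_p^p = sum_i |v_i|^p, the p-th power of the l_p quasinorm."""
    return sum(abs(t) ** p for t in v)

def restrict(v, S):
    """v_S: the vector equal to v on the index set S and zero elsewhere."""
    return [t if i in S else 0.0 for i, t in enumerate(v)]

p, S = 0.5, {0, 1}
x = [1.0, -2.0, 0.3, 0.0]
z = [0.5, -1.0, 0.0, 0.2]
d = [xi - zi for xi, zi in zip(x, z)]   # x - z
Sc = set(range(len(x))) - S             # complement of S
lhs = lp_pow(restrict(d, Sc), p)        # ||(x - z)_{S^C}||_p^p
rhs = (lp_pow(z, p) - lp_pow(x, p)
       + lp_pow(restrict(d, S), p) + 2 * lp_pow(restrict(x, Sc), p))
print(lhs <= rhs)  # True, as guaranteed by (A.1)
```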
Conversely, let us assume that the matrix $A$ satisfies (13) for all vectors $x, z \in \mathbb{R}^N$ with $Az = Ax$. Given a vector $v \in \operatorname{Ker} A$, since $Av_{S^C} = A(-v_S)$, let $x = -v_S$ and $z = v_{S^C}$; then $x_{S^C} = 0$, so we can get from (13)

$$\|v\|_p^p = \|v_{S^C} + v_S\|_p^p \le \frac{1 + \rho}{1 - \rho}\left(\|v_{S^C}\|_p^p - \|v_S\|_p^p\right), \quad (A.8)$$

and this can be written as

$$(1 - \rho)\left(\|v_{S^C}\|_p^p + \|v_S\|_p^p\right) \le (1 + \rho)\left(\|v_{S^C}\|_p^p - \|v_S\|_p^p\right); \quad (A.9)$$

therefore, we can get

$$\|v_S\|_p^p \le \rho\|v_{S^C}\|_p^p, \quad (A.10)$$

that is, $\|v_S\|_p \le \rho^{1/p}\|v_{S^C}\|_p$.

Proof of Theorem 13. For any set $S \subset [N]$ with $\operatorname{card}(S) \le s$, let us assume that the matrix $A \in \mathbb{R}^{m \times N}$ satisfies the $\ell_p$ robust null space property of order $s$ with constants $0 < \rho < 1$ and $\gamma > 0$. For $x, z \in \mathbb{R}^N$, setting $v = z - x$, the robust $\ell_p$ null space property and Lemma A.1 yield

$$\|v_S\|_p^p \le \rho\|v_{S^C}\|_p^p + \gamma\|Av\|_2, \qquad \|v_{S^C}\|_p^p \le 2\|x_{S^C}\|_p^p + \|z\|_p^p - \|x\|_p^p + \|v_S\|_p^p, \quad (A.11)$$

and combining these two inequalities, we can get

$$\|v_{S^C}\|_p^p \le \frac{1}{1 - \rho}\left(2\|x_{S^C}\|_p^p + \|z\|_p^p - \|x\|_p^p\right) + \frac{\gamma}{1 - \rho}\|Av\|_2. \quad (A.12)$$

Using (23), we derive

$$\|v\|_p^p = \|v_{S^C}\|_p^p + \|v_S\|_p^p \le (1 + \rho)\|v_{S^C}\|_p^p + \gamma\|Av\|_2 \le \frac{1 + \rho}{1 - \rho}\left(2\|x_{S^C}\|_p^p + \|z\|_p^p - \|x\|_p^p\right) + \frac{\gamma(1 + \rho)}{1 - \rho}\|Av\|_2 + \gamma\|Av\|_2 = \frac{1 + \rho}{1 - \rho}\left(2\|x_{S^C}\|_p^p + \|z\|_p^p - \|x\|_p^p\right) + \frac{2\gamma}{1 - \rho}\|Av\|_2, \quad (A.13)$$

which is the desired inequality. The proof of sufficiency is similar to that of Theorem 7 and is omitted here.

Proof of Theorem 18. Let us now assume that the matrix $A$ satisfies the $\ell_{q,p}$ robust null space property with constants $0 < \rho < 1$ and $\tau > 0$. For $x, z \in \mathbb{R}^N$ with $\|z - x\|_p \le 1$, choosing $S$ as an index set of the $s$ largest entries of $z - x$ and noticing inequality (18), we get

$$\|z - x\|_q^q = \|(z - x)_S\|_q^q + \|(z - x)_{S^C}\|_q^q = \|(z - x)_S\|_q^q + \|(z - x) - (z - x)_S\|_q^q \le \|(z - x)_S\|_q^q + \left(\frac{1}{s^{1/p - 1/q}}\|z - x\|_p\right)^q, \quad (A.14)$$

and, applying (24) to $z - x$,

$$\|(z - x)_S\|_q^q \le \frac{\tau}{s^{q/p - 1}}\|z - x\|_p^p + \frac{\gamma}{s^{q/p - 1}}\|A(z - x)\|_2.$$

Since $\|z - x\|_p \le 1$ gives $\|z - x\|_p^q \le \|z - x\|_p^p$, combining these two inequalities and applying Theorem 13, we can get

$$\|z - x\|_q^q \le \frac{\tau + 1}{s^{q/p - 1}}\|z - x\|_p^p + \frac{\gamma}{s^{q/p - 1}}\|A(z - x)\|_2 \le \frac{(\tau + 1)^2}{(1 - \rho)s^{q/p - 1}}\left(2\|x_{S^C}\|_p^p + \|z\|_p^p - \|x\|_p^p\right) + \frac{2\gamma(\tau + 1)}{(1 - \rho)s^{q/p - 1}}\|A(z - x)\|_2 + \frac{\gamma}{s^{q/p - 1}}\|A(z - x)\|_2 \le \frac{(\tau + 1)^2}{(1 - \rho)s^{q/p - 1}}\left(2\|x_{S^C}\|_p^p + \|z\|_p^p - \|x\|_p^p\right) + \frac{\gamma(\tau + 3)}{(1 - \rho)s^{q/p - 1}}\|A(z - x)\|_2. \quad (A.15)$$

Proof of Theorem 26. In order to prove Theorem 26, we will need the following triangle inequality for $0 < p \le 1$:

$$\|x + y\|_{2,p}^p \le \|x\|_{2,p}^p + \|y\|_{2,p}^p, \quad x, y \in \mathbb{R}^N. \quad (A.16)$$

Case 1. We consider the simplest case. Suppose that both $x$ and $y$ are split into $M$ blocks, $x = \{x[1], x[2], \ldots, x[M]\}^T$ with lengths $\mathcal{I} = \{d_1, d_2, \ldots, d_M\}$ and $y = \{y[1], y[2], \ldots, y[M]\}^T$ with lengths $\mathcal{J} = \{e_1, e_2, \ldots, e_M\}$, respectively. We also assume that $x[i]$ and $y[i]$ ($i = 1, 2, \ldots, M$) have the same number of entries; that is, $d_i = e_i$ ($i = 1, 2, \ldots, M$).
Then inequality (A.16) is easily obtained by noticing that $\|x + y\|_{2,p}^p = \sum_{i=1}^{M} \|x[i] + y[i]\|_2^p$ and using the fact that the triangle inequality for $\|\cdot\|_2$, together with the subadditivity of $t \mapsto t^p$ for $0 < p \le 1$, gives

$$\|x[i] + y[i]\|_2^p \le \|x[i]\|_2^p + \|y[i]\|_2^p, \quad i = 1, 2, \ldots, M. \quad (A.17)$$

Case 2. Suppose that $x$ is split into $M$ blocks with lengths $\mathcal{I} = \{d_1, d_2, \ldots, d_M\}$, but $y$ is split into $L$ blocks with lengths $\mathcal{J} = \{e_1, e_2, \ldots, e_L\}$; we may assume $M < L$. Then, among the top $M$ blocks, there may exist blocks of $y$ with more entries than the corresponding blocks of $x$. Without loss of generality, suppose that the first block has this property, that is, $e_1 > d_1$; then we simply append $e_1 - d_1$ zeros to the block $x[1]$, so that $x[1]$ and $y[1]$ have the same number of entries. By Case 1, inequality (A.17) still holds, and

$$\sum_{i=1}^{M} \|x[i] + y[i]\|_2^p \le \sum_{i=1}^{M} \|x[i]\|_2^p + \sum_{i=1}^{M} \|y[i]\|_2^p. \quad (A.18)$$

The same device handles the case in which, among the top $M$ blocks, some blocks of $x$ have more entries than the corresponding blocks of $y$. Finally, appending $L - M$ zero-blocks to $x$, inequality (A.18) can be written as

$$\sum_{i=1}^{L} \|x[i] + y[i]\|_2^p \le \sum_{i=1}^{L} \|x[i]\|_2^p + \sum_{i=1}^{L} \|y[i]\|_2^p, \quad (A.19)$$

which is the desired inequality.

Now we give the proof of Theorem 26. Let us first assume that every block $s$-sparse vector $x \in \mathbb{R}^N$ is the unique solution of the mixed $\ell_2/\ell_p$-minimization with $y = Ax$, and suppose that $x$ is split into $M$ blocks with size set $\mathcal{I} = \{d_1, d_2, \ldots, d_M\}$. Given a fixed index set $S \subset \{1, 2, \ldots, M\}$ with $\operatorname{card}(S) \le s$, for any $v \in \operatorname{Ker} A \setminus \{0\}$, since $v = v_S + v_{S^C}$ satisfies $Av = 0$, we have $Av_S = A(-v_{S^C})$.
Since $v_S$ is block $s$-sparse and $v_S \ne -v_{S^C}$ (otherwise $v = 0$), the assumed uniqueness gives

$$\|v_S\|_{2,p} < \|-v_{S^C}\|_{2,p} = \|v_{S^C}\|_{2,p}. \quad (A.20)$$

Conversely, let us assume that the block null space property relative to $S$ holds. Suppose that $x$ is a block $s$-sparse vector with $M$ blocks, $x = \{x[1], x[2], \ldots, x[M]\}^T$ with lengths $\mathcal{I} = \{d_1, d_2, \ldots, d_M\}$; if we let $S := \{i : x[i] \ne 0, \, i = 1, 2, \ldots, M\}$, then $\operatorname{card}(S) \le s$. Let further $z \ne x$ be such that $Az = Ax$; without loss of generality, suppose that $z$ is split into $M$ blocks matching $x$, $z = \{z[1], z[2], \ldots, z[M]\}^T$ with the same lengths $\mathcal{I}$. Then $v = x - z \in \operatorname{Ker} A \setminus \{0\}$ and

$$\|x\|_{2,p}^p \le \|x_S - z_S\|_{2,p}^p + \|z_S\|_{2,p}^p = \|v_S\|_{2,p}^p + \|z_S\|_{2,p}^p < \|v_{S^C}\|_{2,p}^p + \|z_S\|_{2,p}^p = \|-z_{S^C}\|_{2,p}^p + \|z_S\|_{2,p}^p = \sum_{i \in S^C} \|z[i]\|_2^p + \sum_{i \in S} \|z[i]\|_2^p = \|z\|_{2,p}^p, \quad (A.21)$$

so we have $\|x\|_{2,p} < \|z\|_{2,p}$. This establishes the required minimality of $\|x\|_{2,p}$. Here, we have made use of inequality (A.16).

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The authors would like to thank the anonymous reviewers for their insightful comments and valuable suggestions. This work was supported by the National Natural Science Foundation of China (NSFC) under Grant no. 11131006, by a Marie Curie International Research Staff Exchange Scheme Fellowship within the 7th European Community Framework Programme, via EYE2E (269118) and LIVCODE (295151), and in part by the Science Research Project of Ningxia Higher Education Institutions under Grant no. NGY20140147.

References

[1] E. J. Candès, J. Romberg, and T.
Tao, "Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information," IEEE Transactions on Information Theory, vol. 52, no. 2, pp. 489–509, 2006.
[2] E. J. Candès and T. Tao, "Decoding by linear programming," IEEE Transactions on Information Theory, vol. 51, no. 12, pp. 4203–4215, 2005.
[3] E. J. Candès and T. Tao, "Near-optimal signal recovery from random projections: universal encoding strategies?" IEEE Transactions on Information Theory, vol. 52, no. 12, pp. 5406–5425, 2006.
[4] D. L. Donoho, "Compressed sensing," IEEE Transactions on Information Theory, vol. 52, no. 4, pp. 1289–1306, 2006.
[5] B. K. Natarajan, "Sparse approximate solutions to linear systems," SIAM Journal on Computing, vol. 24, no. 2, pp. 227–234, 1995.
[6] D. L. Donoho and X. Huo, "Uncertainty principles and ideal atomic decomposition," IEEE Transactions on Information Theory, vol. 47, no. 7, pp. 2845–2862, 2001.
[7] M. Elad and A. M. Bruckstein, "A generalized uncertainty principle and sparse representation in pairs of bases," IEEE Transactions on Information Theory, vol. 48, no. 9, pp. 2558–2567, 2002.
[8] R. Gribonval and M. Nielsen, "Sparse representations in unions of bases," IEEE Transactions on Information Theory, vol. 49, no. 12, pp. 3320–3325, 2003.
[9] E. J. Candès, "The restricted isometry property and its implications for compressed sensing," Comptes Rendus Mathematique, vol. 346, no. 9-10, pp. 589–592, 2008.
[10] S. Foucart and M. J. Lai, "Sparsest solutions of underdetermined linear systems via $\ell_q$-minimization for $0 < q \le 1$," Applied and Computational Harmonic Analysis, vol. 26, pp. 395–407, 2009.
[11] T. T. Cai, L. Wang, and G. Xu, "Shifting inequality and recovery of sparse signals," IEEE Transactions on Signal Processing, vol. 58, no. 3, pp. 1300–1308, 2010.
[12] Q. Mo and S. Li, "New bounds on the restricted isometry constant $\delta_{2k}$," Applied and Computational Harmonic Analysis, vol. 31, no. 3, pp. 460–468, 2011.
[13] S. Zhou, L.
Kong, and N. Xiu, "New bounds for RIC in compressed sensing," Journal of the Operations Research Society of China, vol. 1, no. 2, pp. 227–237, 2013.
[14] R. Saab and O. Yilmaz, "Sparse recovery by non-convex optimization—instance optimality," Applied and Computational Harmonic Analysis, vol. 29, no. 1, pp. 30–48, 2010.
[15] R. Chartrand, "Exact reconstruction of sparse signals via nonconvex minimization," IEEE Signal Processing Letters, vol. 14, no. 10, pp. 707–710, 2007.
[16] S. Foucart and H. Rauhut, A Mathematical Introduction to Compressive Sensing, Springer Science+Business Media, New York, NY, USA, 2013.
[17] M. Davies and R. Gribonval, "Restricted isometry constants where $\ell_p$ sparse recovery can fail for $0 < p \le 1$," IEEE Transactions on Information Theory, vol. 55, no. 5, pp. 2203–2214, 2009.
[18] S. Foucart, A. Pajor, H. Rauhut, and T. Ullrich, "The Gelfand widths of $\ell_p$-balls for $0 < p \le 1$," Journal of Complexity, vol. 26, pp. 629–640, 2010.
[19] S. Li and J. H. Lin, "Compressed sensing with coherent tight frames via $\ell_q$-minimization for $0 < q \le 1$," Inverse Problems and Imaging, http://arxiv.org/abs/1105.3299.
[20] Q. Sun, "Recovery of sparsest signals via $\ell_q$-minimization," Applied and Computational Harmonic Analysis, vol. 32, pp. 329–341, 2012.
[21] J. Peng, S. Yue, and H. Li, "NP/CMP equivalence: a phenomenon hidden among sparsity models for information processing," http://arxiv.org/abs/1501.02018.
[22] A. Cohen, W. Dahmen, and R. DeVore, "Compressed sensing and best $k$-term approximation," Journal of the American Mathematical Society, vol. 22, no. 1, pp. 211–231, 2009.
[23] A. Pinkus, On L1-Approximation, Cambridge University Press, Cambridge, UK, 1989.
[24] B. S. Kashin and V. N. Temlyakov, "A remark on compressed sensing," Mathematical Notes, vol. 82, no. 5-6, pp. 748–755, 2007.
[25] M. J. Lai and Y. Liu, "The null space property for sparse recovery from multiple measurement vectors," Applied and Computational Harmonic Analysis, vol. 30, no. 3, pp.
402–406, 2011.
[26] Z. Q. Xu, "Compressed sensing," Chinese Science: Maths, vol. 42, no. 9, pp. 865–877, 2012.
[27] A. Aldroubi, X. Chen, and A. M. Powell, "Perturbations of measurement matrices and dictionaries in compressed sensing," Applied and Computational Harmonic Analysis, vol. 33, no. 2, pp. 282–291, 2012.
[28] M. Elad, P. Milanfar, and R. Rubinstein, "Analysis versus synthesis in signal priors," Inverse Problems, vol. 23, no. 3, pp. 947–968, 2007.
[29] R. Gribonval and M. Nielsen, "Highly sparse representations from dictionaries are unique and independent of the sparseness measure," Applied and Computational Harmonic Analysis, vol. 22, no. 3, pp. 335–355, 2007.
[30] H. Rauhut, K. Schnass, and P. Vandergheynst, "Compressed sensing and redundant dictionaries," IEEE Transactions on Information Theory, vol. 54, no. 5, pp. 2210–2219, 2008.
[31] M. Elad, Sparse and Redundant Representations: From Theory to Applications in Signal and Image Processing, Springer, New York, NY, USA, 2010.
[32] E. J. Candès, Y. C. Eldar, D. Needell, and P. Randall, "Compressed sensing with coherent and redundant dictionaries," Applied and Computational Harmonic Analysis, vol. 31, no. 1, pp. 59–73, 2011.
[33] X. Chen, H. Wang, and R. Wang, "A null space analysis of the $\ell_1$-synthesis method in dictionary-based compressed sensing," Applied and Computational Harmonic Analysis, vol. 37, pp. 492–515, 2014.
[34] Y. Liu, S. Li, T. Mi, H. Lei, and W. Yu, "Performance analysis of $\ell_1$-synthesis with coherent frames," in Proceedings of the IEEE International Symposium on Information Theory (ISIT '12), pp. 2042–2046, July 2012.
[35] Y. Liu, T. Mi, and S. Li, "Compressed sensing with general frames via optimal-dual-based $\ell_1$-analysis," IEEE Transactions on Information Theory, vol. 58, no. 7, pp. 4201–4214, 2012.
[36] F. Parvaresh, H. Vikalo, S. Misra, and B.
Hassibi, "Recovering sparse signals using sparse measurement matrices in compressed DNA microarrays," IEEE Journal of Selected Topics in Signal Processing, vol. 2, no. 3, pp. 275–285, 2008.
[37] S. F. Cotter and B. D. Rao, "Sparse channel estimation via matching pursuit with application to equalization," IEEE Transactions on Communications, vol. 50, no. 3, pp. 374–377, 2002.
[38] A. Majumdar and R. K. Ward, "Compressed sensing of color images," Signal Processing, vol. 90, no. 12, pp. 3122–3127, 2010.
[39] Y. C. Eldar, P. Kuppinger, and H. Bölcskei, "Block-sparse signals: uncertainty relations and efficient recovery," IEEE Transactions on Signal Processing, vol. 58, no. 6, pp. 3042–3054, 2010.
[40] Y. C. Eldar and M. Mishali, "Robust recovery of signals from a structured union of subspaces," IEEE Transactions on Information Theory, vol. 55, no. 11, pp. 5302–5316, 2009.
[41] E. Elhamifar and R. Vidal, "Block-sparse recovery via convex optimization," IEEE Transactions on Signal Processing, vol. 60, no. 8, pp. 4094–4107, 2012.
[42] M. Stojnic, F. Parvaresh, and B. Hassibi, "On the reconstruction of block-sparse signals with an optimal number of measurements," IEEE Transactions on Signal Processing, vol. 57, no. 8, pp. 3075–3085, 2009.
[43] J. H. Lin and S. Li, "Block sparse recovery via mixed $\ell_2/\ell_1$ minimization," Acta Mathematica Sinica, vol. 29, no. 7, pp. 1401–1412, 2013.
[44] J. Huang and T. Zhang, "The benefit of group sparsity," The Annals of Statistics, vol. 38, no. 4, pp. 1978–2004, 2010.
[45] Y. Wang, J. Wang, and Z. B. Xu, "On recovery of block-sparse signals via mixed $\ell_2/\ell_q$ ($0 < q \le 1$) norm minimization," EURASIP Journal on Advances in Signal Processing, vol. 2013, article 76, 17 pages, 2013.
[46] Y. Wang, J. Wang, and Z. B. Xu, "Restricted $p$-isometry properties of nonconvex block-sparse compressed sensing," Signal Processing, vol. 104, pp. 188–196, 2014.