Dense Subsets of Pseudorandom Sets

O. Reingold (Weizmann Institute), L. Trevisan (UC Berkeley), M. Tulsiani (UC Berkeley), S. Vadhan (Harvard University)

Progressions in Subsets of Integers

Theorem (Szemerédi 1975): Any set A of δN integers in {1, . . . , N} contains a length-k AP if N is large enough.

Theorem (Green-Tao 2004): The set of primes in {1, . . . , N} contains a length-k AP if N is large enough.

Green-Tao showed that a property of dense subsets of the integers (having progressions) also holds for the primes.

The Green-Tao Proof

Thm 1: There is a pseudorandom set R ⊆ {1, . . . , N} such that the primes have constant density in R.

Thm 2: If R is a pseudorandom subset of {1, . . . , N} and D is a dense subset of R, i.e. |D| ≥ δ|R|, then D contains a length-k AP.

[Figure: R inside {1, . . . , N}; R contains the primes 2, 3, 5, . . . , and among them the length-10 AP 199, 409, 619, 829, 1039, 1249, 1459, 1669, 1879, 2089.]
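As a concrete check, the length-10 progression of primes shown above has common difference 210 and can be verified by brute force; this is only an illustrative sketch in plain Python, not part of the original talk's material.

```python
def is_ap(seq):
    """True iff seq is an arithmetic progression (constant common difference)."""
    gaps = {b - a for a, b in zip(seq, seq[1:])}
    return len(gaps) == 1

def is_prime(n):
    """Trial division; adequate for the numbers appearing on the slides."""
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

# The length-10 progression of primes from the slides: 199, 409, ..., 2089,
# with common difference 210.
prime_ap = [199 + 210 * i for i in range(10)]
assert is_ap(prime_ap) and all(is_prime(p) for p in prime_ap)
```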
Proof of Theorem 2

If D is dense in a pseudorandom set R (|D| ≥ δ|R|), then there is a dense model set M (|M| ≥ δN) indistinguishable from D. M must contain length-k APs (Szemerédi); since M is indistinguishable from D, so does D.

[Figure: D inside R containing the AP 199, 409, . . . , 2089; the model M inside {1, . . . , N} containing the AP 21, 51, 81, 111, 141, 171, 201, 231, 261, 291.]

“A dense subset of a pseudorandom set has a dense model.” Can we prove this in general?

Abstracting out...

A finite universe X (e.g. {1, . . . , N}, {0, 1}^n).

A family of distinguishers F = {f : X → {0, 1}} (e.g. circuits of size s).

Distributions A and B are ε-indistinguishable by F if for all f ∈ F, |Ef(A) − Ef(B)| ≤ ε.

R is ε-pseudorandom if R is ε-indistinguishable from U_X (uniform on X).

A is δ-dense in B if P(A = x) ≤ (1/δ) P(B = x) for all x. (E.g. B = U_X and A uniform on δ|X| elements, so that P(A = x) = 1/(δ|X|).)

What should a “Dense Model Theorem” be?

D is δ-dense in R, and R is ε-pseudorandom w.r.t. F.
⇓
There is M δ-dense in U_X, ε-indistinguishable from D by F.

Equivalently (in contrapositive form):

Every M δ-dense in U_X is ε-distinguishable from D by F.
⇓
R is ε-distinguishable from U_X by F.
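These definitions are concrete enough to execute on a toy universe. The sketch below (function names and the tiny example are illustrative, not from the talk) computes the best distinguishing advantage of a small family and checks δ-density, with distributions written as dictionaries mapping elements to probabilities.

```python
def adv(family, A, B):
    """Max advantage |E f(A) - E f(B)| over a family of 0/1 distinguishers."""
    def ef(f, P):
        return sum(p for x, p in P.items() if f(x))
    return max(abs(ef(f, A) - ef(f, B)) for f in family)

def is_dense(A, B, delta):
    """A is delta-dense in B if P(A = x) <= (1/delta) P(B = x) for all x."""
    return all(pa <= B.get(x, 0) / delta for x, pa in A.items())

# Toy universe X = {0,...,7}: D uniform on the even elements is 1/2-dense in U_X.
X = range(8)
U = {x: 1 / 8 for x in X}
D = {x: 1 / 4 for x in X if x % 2 == 0}
family = [lambda x: x % 2 == 0, lambda x: x < 4]
```

With F a family of circuits of size s, `adv` is exactly the ε in the definitions above.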
In general the parameters degrade under the reduction: if D is δ-dense in R and R is ε′-pseudorandom w.r.t. F′, then there is M δ-dense in U_X that is ε-indistinguishable from D by F. Equivalently: if every M δ-dense in U_X is ε-distinguishable from D by F, then R is ε′-distinguishable from U_X by F′. The relation between (ε, ε′) and (F, F′) depends on the reduction.

The Results

Theorem (Tao-Ziegler 2006): Suppose that for every M δ-dense in U_X, some function in F ε-distinguishes M and D. Then there is a function h : X → {0, 1} of the form h(x) = g(f1(x), . . . , fk(x)), with fi ∈ F and k = poly(1/ε, 1/δ), such that |Eh(R) − Eh(U_X)| ≥ poly(ε, δ). The combining function g has complexity exp(k).

Theorem (RTTV 2007): Suppose that for every M δ-dense in U_X, some function in F ε-distinguishes M and D. Then there is a function h : X → {0, 1} of the form h(x) = g(f1(x), . . . , fk(x)), with fi ∈ F and k = poly(1/ε, log 1/δ), such that |Eh(R) − Eh(U_X)| ≥ Ω(εδ). The combining function g has complexity O(k).

The Proof

Switching the quantifiers:

∀M ∃f : Ef(D) − Ef(M) ≥ ε
⇒ ∃f ∀M : Ef(D) − Ef(M) ≥ ε,

where f : X → [0, 1] is now a convex combination of functions from F.

Proof: min-max.
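The quantifier switch is von Neumann's min-max theorem applied to the zero-sum game in which one player picks a δ-dense model M and the other picks a distinguisher f ∈ F. In LaTeX notation (a sketch of the statement being invoked, with conv(F) denoting convex combinations of functions from F):

```latex
\min_{M \,\delta\text{-dense}} \; \max_{f \in \mathcal{F}} \;
  \bigl( \mathbb{E} f(D) - \mathbb{E} f(M) \bigr)
\;=\;
\max_{\bar{f} \in \mathrm{conv}(\mathcal{F})} \; \min_{M \,\delta\text{-dense}} \;
  \bigl( \mathbb{E} \bar{f}(D) - \mathbb{E} \bar{f}(M) \bigr)
\;\ge\; \varepsilon
```

Both strategy sets, the δ-dense distributions and the mixtures of distinguishers, are convex and compact, which is exactly what the min-max theorem needs.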
Getting a threshold distinguisher:

Ef(D) − Ef(M) ≥ ε ⇒ ∃ t ∈ (0, 1) : P(f(D) ≥ t) − P(f(M) ≥ t) ≥ ε.

Proof: for a [0, 1]-valued Z, EZ is the average of P(Z ≥ t) over t ∈ (0, 1), so a gap of ε between the expectations forces a gap of ε at some threshold. In fact, a slightly stronger statement holds:

∃ t : P(f(D) ≥ t + ε/2) − P(f(M) ≥ t) ≥ ε/2.

The Proof (contd...)

Using the distinguisher for R: let S be the set of δ|X| elements where f is maximized.
Then:

P(f(D) ≥ t + ε/2) − P(f(U_S) ≥ t) ≥ ε/2
⇒ P(f(R) ≥ t + ε/2) − P(f(U_X) ≥ t) ≥ εδ/2.

[Figure: thresholds t and t + ε/2 on the values of f; S compared against D with a gap of ε/2, and (X \ S) compared against (R \ D).]

The Proof (almost done now...)

Getting few functions (Chernoff bound): f is a distribution over functions such that

P(f(R) ≥ t + ε/2) − P(f(U_X) ≥ t) ≥ εδ/2.

Sample k = poly(1/ε, log 1/δ) functions f1, . . . , fk. Then

P( (1/k) Σi fi(R) ≥ t + ε/4 ) − P( (1/k) Σi fi(U_X) ≥ t + ε/4 ) ≥ εδ/4.

Note that we combine f1, . . . , fk only as a linear threshold function. Complexity = O(k).

The Green-Tao proof (Iterative Partitioning)

Partition X into pieces. To get M, pick whole pieces according to the density of D in each piece. If D is distinguishable from M, the partition can be refined; the pseudorandomness of R bounds the number of refinement steps.

Smuggling techniques in the other direction

We adapt the Green-Tao proof technique to prove Impagliazzo’s hardcore lemma: if a function f : X → {0, 1} is hard to compute correctly on more than a 1 − δ fraction of inputs from X, then there is a set H ⊆ X with |H| ≥ δ|X| such that f is “very hard” to compute on H. Iterative partitioning gives a circuit for computing H.
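The model-building step of the iterative-partitioning argument, "pick whole pieces according to the density of D in the piece", can be sketched as follows; the function name and toy data are illustrative, not from the talk.

```python
def model_from_partition(D, partition):
    """Dense model M from a partition of X: each piece gets total mass
    equal to the fraction of D lying in that piece, spread uniformly
    over the whole piece."""
    M = {}
    for piece in partition:
        weight = len(piece & D) / len(D)  # fraction of D in this piece
        for x in piece:
            M[x] = weight / len(piece)
    return M

# Toy example: X = {0,...,7} split into two pieces, D = {0, 1, 4}.
partition = [frozenset({0, 1, 2, 3}), frozenset({4, 5, 6, 7})]
M = model_from_partition({0, 1, 4}, partition)
```

On any union of whole pieces, M and D carry exactly the same probability mass, so distinguishers that are constant on the pieces cannot tell them apart; refining the partition handles the rest.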
Further questions

All this is good in theory... but how can it be applied?

- Pseudoentropy ⇔ density in a pseudorandom distribution.
- A new proof of the regularity lemma for subgraphs of expanders: take the uniform distribution on the edges of the complete graph; expanders are pseudorandom w.r.t. cuts.
- And? Other applications of “ergodic arguments” in complexity theory?
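"Expanders are pseudorandom w.r.t. cuts" can be made concrete at toy scale: take the distinguishers to be cuts and compare each cut's edge fraction in a graph against its edge fraction in the complete graph. A brute-force sketch, with a small random graph standing in for an expander (all names and parameters illustrative):

```python
import itertools
import random

random.seed(1)

n = 12
vertices = range(n)
# A toy "pseudorandom" graph: include each possible edge with probability 1/2.
edges = [(u, v) for u, v in itertools.combinations(vertices, 2)
         if random.random() < 0.5]

def cut_fraction(edge_list, S):
    """Fraction of edges crossing the cut (S, V \\ S)."""
    return sum((u in S) != (v in S) for u, v in edge_list) / len(edge_list)

complete = list(itertools.combinations(vertices, 2))

# Distinguishers = cuts: the graph is eps-pseudorandom w.r.t. cuts if every
# cut carries roughly the same edge fraction as in the complete graph.
eps = max(abs(cut_fraction(edges, set(S)) - cut_fraction(complete, set(S)))
          for r in range(1, n)
          for S in itertools.combinations(vertices, r))
```

For a true expander the expander mixing lemma bounds eps in terms of the spectral gap; here the random graph just illustrates what the distinguisher family looks like.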