(80630) Analytical Methods in Combinatorics and Computer Science Nov. 9, 2005
Lecture 2
Lecturer: Ehud Friedgut
Scribe: David Arnon
We have seen a number of examples of natural questions we hope Fourier analysis can help us solve. Let us start with a few basic definitions that will give us the needed tools.
1 Fourier Expansion, definitions

1.1 Measure
Let us define a measure on {0,1}^n. Suppose that for p ∈ [0,1] we randomly select a vector x ∈ {0,1}^n s.t. for every i ∈ [n] we set x_i = 1 with probability p and x_i = 0 otherwise. The resulting measure µ_p will be:

∀x ∈ {0,1}^n:   µ_p(x) = p^{Σx_i} (1 − p)^{n − Σx_i}

∀A ⊆ {0,1}^n:   µ_p(A) = Σ_{x∈A} µ_p(x)

An equivalent way of looking at the same measure will be to define for every i ∈ [n] a measure µ_p^i on {0,1} s.t. µ_p^i(1) = p and µ_p^i(0) = 1 − p. Using these measures we get that µ_p = µ_p^1 × µ_p^2 × ··· × µ_p^n is a measure on {0,1} × ··· × {0,1} that is equivalent to the measure defined above. For p = 1/2 we get µ_{1/2}, which is the uniform distribution.
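As a quick numerical sanity check (my own illustration, not part of the notes; the helper name `mu_p` is hypothetical), the product formula indeed defines a probability measure:

```python
from itertools import product

# Illustration (not from the lecture): the weights mu_p(x) on {0,1}^n
# should sum to 1, since each coordinate is an independent Bernoulli(p) bit.
n, p = 4, 0.3

def mu_p(x):
    """mu_p(x) = p^(sum of x_i) * (1 - p)^(n - sum of x_i)."""
    k = sum(x)
    return p ** k * (1 - p) ** (n - k)

total = sum(mu_p(x) for x in product((0, 1), repeat=n))
assert abs(total - 1.0) < 1e-12  # mu_p is a probability measure
```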
1.2 Inner Product
Looking at the functions f : {0,1}^n → C we use the following notation for the average value of the function:

∫f = ∫f(x)dx = 2^{−n} Σ_{x∈{0,1}^n} f(x) = E[f(x)]

∫f(x)dµ(x) = Σ_{x∈{0,1}^n} f(x)µ(x) = E_{x∼µ}[f(x)]

Using this notation we can define an inner product of two functions:

⟨f, g⟩ = ∫f·ḡ = E[f ḡ],

which equals E[f g] if we are dealing with real functions.
Now for every 1 ≤ p < ∞ we can define the p-norm, |f|_p = (∫|f|^p)^{1/p}. As p → ∞ the large values of f dominate this expression, so we also define |f|_∞ = max(|f|).

Note that for real functions we get the familiar relation |f|_2^2 = ∫f^2 = E[f^2] = ⟨f, f⟩.
1.3 The Fourier Basis Functions
Look at the space of functions L_2({0,1}^n) = {f | f : {0,1}^n → C}. The regular basis for this space is {f_y}_{y∈{0,1}^n} where

f_y(x) = 2^{n/2} if x = y,   f_y(x) = 0 if x ≠ y.

The size of this basis is 2^n and it is orthonormal, i.e. ⟨f_x, f_y⟩ = δ_{xy}.
We would like to construct a different orthonormal basis for this space. We start by looking at the dictator functions χ_{1}, χ_{2}, ..., χ_{n} where:

χ_{i} : {0,1}^n → C,   χ_{i}(x) = (−1)^{x_i} = { 1 if x_i = 0;  −1 if x_i = 1 }

Or, equivalently, if we identify the vector x with the set S = {i | x_i = 1}:

χ_{i}(S) = { 1 if i ∉ S;  −1 if i ∈ S } = (−1)^{|{i}∧S|}
We also refer to 1 as the constant function 1(x) = 1. For every i ∈ [n] we get

• ⟨χ_{i}, 1⟩ = E[χ_{i} · 1] = E[χ_{i}] = 0 (as χ_{i} is balanced)

• ⟨χ_{i}, χ_{i}⟩ = E[χ_{i}^2] = E[1] = 1.

So, the set {1, χ_{i}} is an orthonormal basis for L_2({0,1}) (which has dimension 2), where one should imagine this {0,1} being on the "i'th place":

                 i
                 ↓
{0,1} × ··· × {0,1} × ··· × {0,1}
By taking all the possible products of these sets we get an orthonormal basis for L_2({0,1}^n). Since the 1 functions will not alter the product, this is equivalent to all the products of the form

χ_{i1} ⊗ χ_{i2} ⊗ ··· ⊗ χ_{ik}

where χ_{i} ⊗ χ_{j} = χ_{i} · χ_{j}; note that (χ_{i} · χ_{j})(T) = (−1)^{|{i,j}∧T|}.

For every S ⊆ [n] we define χ_S = Π_{i∈S} χ_{i}. We have:

χ_S(T) = (−1)^{|S∧T|}

In this notation the basis we found is {χ_S}_{S⊆[n]}.
Claim 1 ∀S, T ⊆ [n], χ_S · χ_T = χ_{S△T}
Proof
(χ_S · χ_T)(R) = (−1)^{|S∧R|} · (−1)^{|T∧R|} = (−1)^{2|(S∧T)∧R|} · (−1)^{|(S△T)∧R|} = (−1)^{|(S△T)∧R|} = χ_{S△T}(R)
Claim 2 ∀S ⊆ [n], χ_S is linear, i.e. ∀x, y ∈ {0,1}^n, χ_S(x + y) = χ_S(x)χ_S(y).

Proof
χ_S(x + y) = χ_S(X△Y)¹ = (−1)^{|S∧(X△Y)|} = (−1)^{|S∧X|} · (−1)^{|S∧Y|} = χ_S(X) · χ_S(Y).
Since every χ_{i} depends only on x_i, which in our setting is chosen independently, we have that the collection χ_{1}, χ_{2}, ..., χ_{n} is independent. As a result we have for every S ≠ ∅:

E[χ_S] = E[Π_{i∈S} χ_{i}] = Π_{i∈S} E[χ_{i}] = Π_{i∈S} 0 = 0.
Claim 3 ∀S, T ⊆ [n], ⟨χ_S, χ_T⟩ = δ_{ST}

Proof
⟨χ_S, χ_T⟩ = E[χ_S χ_T] = E[χ_{S△T}] = Π_{i∈S△T} E[χ_{i}] = { 0 if S△T ≠ ∅;  1 if S△T = ∅ } = { 0 if S ≠ T;  1 if S = T }
So, we have that {χ_S}_{S⊆[n]} is an orthonormal set of functions of size 2^n and so it is a basis for L_2({0,1}^n). This is the Fourier Basis.
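A small brute-force check of this orthonormality (my own sketch; the helper names `chi` and `inner` are hypothetical):

```python
from itertools import product

# Illustration: {chi_S} is an orthonormal family under the uniform
# measure on {0,1}^n, verified exhaustively for small n.
n = 3
points = list(product((0, 1), repeat=n))
subsets = list(product((0, 1), repeat=n))  # S ⊆ [n] as 0/1 indicator vectors

def chi(S, x):
    """chi_S(x) = (-1)^{|S ∧ X|}."""
    return (-1) ** sum(s & xi for s, xi in zip(S, x))

def inner(f, g):
    """<f, g> = E[f g] under the uniform measure on {0,1}^n."""
    return sum(f(x) * g(x) for x in points) / len(points)

for S in subsets:
    for T in subsets:
        expected = 1.0 if S == T else 0.0
        assert inner(lambda x: chi(S, x), lambda x: chi(T, x)) == expected
```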
Remark We can look at Z_2^n as an abelian group with the operation (x + y)_i = x_i ⊕ y_i. Equivalently, in the set notation we look at the group 2^{[n]} = {S}_{S⊆[n]} with the operation S + T = S△T. We call the homomorphisms from this group into C the characters of the group. Since for every S, T, R ⊆ [n] we have χ_R(S)χ_R(T) = χ_R(S△T) = χ_R(S + T), the functions χ_R are homomorphisms and so are the characters of the group 2^{[n]}.

We can look at {χ_S}_{S⊆[n]} as a group with the operation χ_S · χ_T = χ_{S△T}. As one can see, this group is isomorphic to 2^{[n]}.

This is a special case of the more general rule: for every finite abelian group A, the collection of homomorphisms into C (i.e., the set of characters) is a group isomorphic to A. The characters in that group are an orthonormal basis for the space of functions f : A → C, with the uniform measure.
2 The Fourier Expansion
Since {χ_S}_{S⊆[n]} is a basis, every function² f : {0,1}^n → R can be written as:

f = Σ_{S⊆[n]} f̂(S)χ_S

This is the Fourier Expansion of f. The coefficients f̂(S) are called the Fourier coefficients.

¹ For convenience we'll denote by X the set of non-zero positions in vector x: X = {i | x_i = 1}.
² Actually for every f : {0,1}^n → C, but from now on we concentrate on real functions.
Claim 4 ∀S ⊆ [n], f̂(S) = ⟨f, χ_S⟩

Proof ⟨f, χ_S⟩ = ⟨Σ_{T⊆[n]} f̂(T)χ_T, χ_S⟩ = Σ_T f̂(T)⟨χ_T, χ_S⟩ = Σ_T f̂(T)δ_{ST} = f̂(S)
Claim 5 ∀f, g : {0,1}^n → R,   ⟨f, g⟩ = Σ_S f̂(S)ĝ(S)

Proof
⟨f, g⟩ = ⟨Σ_S f̂(S)χ_S, Σ_T ĝ(T)χ_T⟩ = Σ_S Σ_T f̂(S)ĝ(T)⟨χ_S, χ_T⟩ = Σ_S Σ_T f̂(S)ĝ(T)δ_{ST} = Σ_S f̂(S)ĝ(S)
Corollary 6 (Parseval's Equality) |f|_2^2 = E[f^2] = ⟨f, f⟩ = Σ_S f̂^2(S)

Corollary 7 For any f : {0,1}^n → {1, −1} we have Σ_S f̂^2(S) = 1, so we can look at these f̂^2(S) as a probability distribution.
The expectation and variance of f both have simple formulas using the Fourier coefficients:

• E[f] = ⟨f, 1⟩ = ⟨f, χ_∅⟩ = f̂(∅)

• Var[f] = E[f^2] − E^2[f] = Σ_S f̂^2(S) − f̂^2(∅) = Σ_{S≠∅} f̂^2(S)
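These formulas are easy to verify numerically. Here is a sketch of my own (the example function, 3-bit majority, and the helper names `chi` and `hat` are hypothetical, not from the notes):

```python
from itertools import product

# Illustration: Fourier-expand 3-bit majority and verify f_hat(S) = <f, chi_S>,
# f_hat(empty set) = E[f], and Parseval's equality.
n = 3
points = list(product((0, 1), repeat=n))

def chi(S, x):
    return (-1) ** sum(s & xi for s, xi in zip(S, x))

def maj(x):
    return 1 if sum(x) >= 2 else 0

def hat(f, S):
    """f_hat(S) = <f, chi_S> = E[f * chi_S] under the uniform measure."""
    return sum(f(x) * chi(S, x) for x in points) / len(points)

coeffs = {S: hat(maj, S) for S in product((0, 1), repeat=n)}
mean = sum(maj(x) for x in points) / len(points)
energy = sum(maj(x) ** 2 for x in points) / len(points)

assert coeffs[(0, 0, 0)] == mean                                   # E[f] = f_hat(empty)
assert abs(sum(c * c for c in coeffs.values()) - energy) < 1e-12   # Parseval
```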
2.1 Translation
Let us look at the following function:

f_{⊕y}(x) = f(x ⊕ y) = f(x + y)

This function is the translation by y ∈ {0,1}^n of the function f. We have the following relation between the Fourier coefficients of the translated function f_{⊕y} and the coefficients of the original function f:

Claim 8 f̂_{⊕y}(S) = χ_S(y)f̂(S)
Proof
f̂_{⊕y}(S) = ⟨f_{⊕y}, χ_S⟩ = ∫ f(x + y)χ_S(x)dx   [substituting z = x + y, and −y = y]
= ∫ f(z)χ_S(z − y)dz = ∫ f(z)χ_S(z + y)dz   [χ_S linear]
= ∫ f(z)χ_S(z)χ_S(y)dz = χ_S(y) ∫ f(z)χ_S(z)dz = χ_S(y)⟨f, χ_S⟩ = χ_S(y)f̂(S)
2.2 Convolutions
We define the convolution of the functions f and g:

(f ∗ g)(x) = E_y[f(y)g(x − y)] = E_y[f(y)g(x + y)] = ∫ f(y)g(x + y)dy   [using −y = y]

The convolution operation replaces the value f gives to a point x by an "average value"³ of f for the surroundings of x. The exact weight of every element in these surroundings is determined by the function g.
Claim 9 (f ∗ g)^(S) = f̂(S)ĝ(S)

Proof
(f ∗ g)^(S) = ⟨f ∗ g, χ_S⟩ = ∫_x (f ∗ g)(x)χ_S(x)dx = ∫_x ∫_y f(y)g(x + y)dy · χ_S(x)dx =
∫_y f(y) ∫_x g(x + y)χ_S(x)dx dy   [substituting z = x + y]   = ∫_y f(y) ∫_z g(z)χ_S(z + y)dz dy =
∫_y f(y)χ_S(y) ∫_z g(z)χ_S(z)dz dy = ĝ(S) ∫_y f(y)χ_S(y)dy = ĝ(S)f̂(S)
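The convolution rule of Claim 9 can be checked by direct summation on random functions (my own sketch; the names `hat`, `xor`, `conv` are hypothetical):

```python
from itertools import product
import random

# Illustration of Claim 9: the Fourier transform turns convolution into a
# pointwise product, (f*g)^(S) = f_hat(S) * g_hat(S).
random.seed(0)
n = 3
points = list(product((0, 1), repeat=n))

def chi(S, x):
    return (-1) ** sum(s & xi for s, xi in zip(S, x))

def hat(f, S):
    return sum(f[x] * chi(S, x) for x in points) / len(points)

def xor(x, y):
    """x + y in {0,1}^n, i.e. coordinatewise XOR (note -y = y here)."""
    return tuple(a ^ b for a, b in zip(x, y))

f = {x: random.random() for x in points}
g = {x: random.random() for x in points}
# (f*g)(x) = E_y[f(y) g(x + y)]
conv = {x: sum(f[y] * g[xor(x, y)] for y in points) / len(points) for x in points}

for S in product((0, 1), repeat=n):
    assert abs(hat(conv, S) - hat(f, S) * hat(g, S)) < 1e-12
```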
To summarize, we got the following useful relations:

• χ_S(T) = (−1)^{|S∧T|} = χ_T(S)

• f̂_{⊕y}(S) = χ_S(y)f̂(S)

• (f ∗ g)^(S) = f̂(S)ĝ(S)

• (f + g)^(S) = f̂(S) + ĝ(S) (which is true for every set of coefficients).

3 Influence
As we have already seen, the influence of a variable x_i is defined to be the probability that changing this variable⁴ will alter the result of the function f : {0,1}^n → {0,1}:

I_i(f) = Pr_x[f(x) ≠ f(x ⊕ e_i)]

³ Note that we say "average value" only because it's a convenient way to think about it. In fact g is not restricted to be a distribution function, so the convolution could be any linear combination of the surrounding values.
⁴ We denote x with its i'th bit inverted by x ⊕ e_i or x + e_i.
We denote the total influence of the function f by I(f):

I(f) = Σ_i I_i(f)
What can the Fourier expansion teach us about the influences of the function? To answer that question let us define for a boolean function f : {0,1}^n → {0,1} the function f_i:

f_i = f − f_{⊕e_i}

f_i(x) = f(x) − f_{⊕e_i}(x) = f(x) − f(x ⊕ e_i) = { −1 if f(x ⊕ e_i) > f(x);  0 if f(x ⊕ e_i) = f(x);  1 if f(x ⊕ e_i) < f(x) }

As we can see, f_i equals 0 for x's that are not affected by flipping the i'th variable and 1/−1 for x's that are affected by the i'th variable. This leads to the following observation:
Observation 10 I_i(f) = Pr_x[f_i(x) ≠ 0] = E[f_i^2]
We also have the following relation between the Fourier coefficients of f_i and those of f:

Claim 11 f_i = 2 · Σ_{S∋i} f̂(S)χ_S

Proof f̂_i(S) = f̂(S) − f̂_{⊕e_i}(S). By Claim 8 we know how to calculate f̂_{⊕e_i}(S):

f̂_{⊕e_i}(S) = χ_S(e_i)f̂(S) = { −f̂(S) if i ∈ S;  f̂(S) if i ∉ S }   ⟹   f̂_i(S) = { 2f̂(S) if i ∈ S;  0 if i ∉ S }

and the claim follows.
Using these tools we can now prove a connection between the influence of a variable and the Fourier coefficients related to sets that contain this variable:

Theorem 12 I_i(f) = 4 · Σ_{S∋i} f̂^2(S)

Proof Using Observation 10, Parseval's Equality and Claim 11 we get:

I_i(f) = E[f_i^2] = Σ_S f̂_i^2(S) = Σ_{S∋i} (2f̂(S))^2 = 4 Σ_{S∋i} f̂^2(S)
This means that the influence of a variable x_i is directly proportional to the sum of (the squares of) the Fourier coefficients related to sets that contain i. Summing over all the indexes we get:

Corollary 13 I(f) = 4 Σ_S f̂^2(S)|S|

Proof Σ_i I_i(f) = Σ_i 4 · Σ_{S∋i} f̂^2(S) = 4 Σ_S Σ_{i∈S} f̂^2(S) = 4 Σ_S f̂^2(S)|S|
When dealing with functions for which E[f^2] = Σ_S f̂^2(S) is a constant (e.g. balanced boolean functions), in order to get a large I(f) the coefficients of the large sets must contain the bulk of the weight.

Observation 14 For f : {0,1}^n → {0,1} s.t. E[f] = 1/2 we have I(f) ≥ 1.
Proof Since f is boolean we have f^2 = f. This allows us to calculate the variance of f:

Var[f] = E[f^2] − E^2[f] = E[f](1 − E[f]) = (1/2) · (1/2) = 1/4.

Now we have:

I(f) = 4 Σ_S f̂^2(S)|S| ≥ 4 Σ_{S≠∅} f̂^2(S) = 4 Var[f] = 4 · (1/4) = 1.
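Theorem 12 and Corollary 13 are easy to confirm numerically; here is my own sketch on 3-bit majority (an illustrative choice, with hypothetical helper names):

```python
from itertools import product

# Illustration: the combinatorial influence Pr[f(x) != f(x + e_i)] matches
# 4 * sum_{S : i in S} f_hat(S)^2 (Theorem 12), and the total influence
# matches 4 * sum_S f_hat(S)^2 |S| (Corollary 13), for 3-bit majority.
n = 3
points = list(product((0, 1), repeat=n))
subsets = list(product((0, 1), repeat=n))

def chi(S, x):
    return (-1) ** sum(s & xi for s, xi in zip(S, x))

def maj(x):
    return 1 if sum(x) >= 2 else 0

def hat(S):
    return sum(maj(x) * chi(S, x) for x in points) / len(points)

def flip(x, i):
    return tuple(b ^ 1 if j == i else b for j, b in enumerate(x))

total = 0.0
for i in range(n):
    comb = sum(maj(x) != maj(flip(x, i)) for x in points) / len(points)
    fourier = 4 * sum(hat(S) ** 2 for S in subsets if S[i] == 1)
    assert abs(comb - fourier) < 1e-12   # Theorem 12
    total += comb
assert abs(total - 4 * sum(hat(S) ** 2 * sum(S) for S in subsets)) < 1e-12  # Corollary 13
```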
As an immediate corollary we have that the dictator function has the lowest sum of influences possible for a balanced boolean function. The following theorem states that, furthermore, the dictator function (or its negation) is the only such function to have I(f) = 1.

Theorem 15 If f is a balanced boolean function with I(f) = 1 then f(x) ≡ x_i (or f(x) ≡ 1 − x_i) for some i ∈ [n].
Proof We already saw that I(f) = 4 Σ_S f̂^2(S)|S| ≥ 4 Σ_{S≠∅} f̂^2(S) = 1. For this inequality to hold as an equality we must have for every S ≠ ∅ that f̂^2(S)|S| = f̂^2(S). This is possible only if f̂(S) = 0 or |S| = 1. So, the only non-zero coefficients belong to sets s.t. |S| ≤ 1, and the Fourier expansion of f is:

f = (1/2) · 1 + Σ_{i=1}^n c_i χ_{i}

Since f̂(∅) = E[f] = 1/2 and f is boolean, we must have at least one c_i ≠ 0, but is it possible that more than one is non-zero? The answer is no. Suppose that for i_0 ≠ j_0 we have c_{i_0}, c_{j_0} ≠ 0. Then, since f^2 = f, we have that:

f = f^2 = ((1/2) + Σ_{i=1}^n c_i χ_{i})^2 = b_0 + Σ_i b_i χ_{i} + 2 Σ_{i<j} c_i c_j χ_{i,j}

So we have that f̂({i_0, j_0}) = 2 c_{i_0} c_{j_0} ≠ 0, which is impossible since it belongs to a set of size 2.

So we have f = 1/2 + c_i χ_{i}. Since c_i^2 + 1/4 = E[f^2] = E[f] = 1/2, we have that c_i = ±1/2. So, f = 1/2 − (1/2)χ_{i} = x_i or f = 1/2 + (1/2)χ_{i} = 1 − x_i.
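For small n this theorem can be confirmed by exhaustive search (my own sketch, not part of the notes; `total_influence` and the other names are hypothetical):

```python
from itertools import product

# Illustration of Theorem 15 for n = 3: among all balanced boolean functions
# on {0,1}^3, those with total influence exactly 1 are precisely the
# dictators x_i and anti-dictators 1 - x_i.
n = 3
points = list(product((0, 1), repeat=n))

def flip(x, i):
    return tuple(b ^ 1 if j == i else b for j, b in enumerate(x))

def total_influence(vals):
    return sum(
        sum(vals[x] != vals[flip(x, i)] for x in points) / len(points)
        for i in range(n)
    )

dictators = [{x: x[i] for x in points} for i in range(n)]
dictators += [{x: 1 - x[i] for x in points} for i in range(n)]

minimizers = []
for bits in product((0, 1), repeat=len(points)):
    vals = dict(zip(points, bits))
    if 2 * sum(bits) == len(points) and abs(total_influence(vals) - 1) < 1e-12:
        minimizers.append(vals)

assert len(minimizers) == 2 * n
assert all(v in dictators for v in minimizers)
```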
4 Noise Operator
We already saw that the convolution operator changes the value of f at a point x to a linear combination of the values of the "neighbors" of x. We now investigate such an operator, this time with a clearly defined distribution over the values in the surroundings.
Given a function f : {0,1}^n → R and ε, p s.t.

• 0 ≤ ε ≤ 1
• 0 ≤ p ≤ 1/2
• ε = 1 − 2p, i.e. p = (1 − ε)/2

we define an operator T_ε:

T_ε f : {0,1}^n → R

T_ε f(x) = E_{y∼µ_p}[f(x + y)] = Σ_y f(x + y) p^{Σy_i}(1 − p)^{n − Σy_i}
y
The natural way to think about Tε f (x) is to think of the following process:
Flip every bit in x with probability p or, equivalently, with probability ε leave the bit
unchanged, otherwise (w.p. 1 − ε) uniformly select the bit out of {0, 1}.
The resulting vector is x + y. Calculate f (x + y) in the new point. The expectancy of the
resulting value is Tε f (x).
A few simple examples:

• For ε = 1:  T_1 f = f
• For ε = 0:  T_0 f = E[f]  (a constant)
From the linearity of expectation we have that T_ε is a linear operator:

• T_ε(f + g) = T_ε(f) + T_ε(g)
• T_ε(af) = a · T_ε(f)

Since T_ε is a linear operator we can talk about its eigenvectors. The following claim states that for every S ⊆ [n], χ_S is an eigenvector of T_ε with the eigenvalue ε^{|S|}.
Claim 16 T_ε(χ_S) = ε^{|S|} χ_S.

Proof For every x we have:

T_ε χ_S(x) = E_{y∼µ_p}[χ_S(x + y)] = E_{y∼µ_p}[χ_S(x)χ_S(y)] = χ_S(x) E_{y∼µ_p}[χ_S(y)] = χ_S(x) Σ_y Pr_{y∼µ_p}[y](−1)^{|S∧Y|} =

χ_S(x) (Pr_{y∼µ_p}[|S∧Y| is even] − Pr_{y∼µ_p}[|S∧Y| is odd]) =

χ_S(x) ( Σ_{k=0, k even}^{|S|} (|S| choose k) p^k (1 − p)^{|S|−k} − Σ_{k=1, k odd}^{|S|} (|S| choose k) p^k (1 − p)^{|S|−k} ) =

χ_S(x) Σ_{k=0}^{|S|} (−1)^k (|S| choose k) p^k (1 − p)^{|S|−k} = χ_S(x) Σ_{k=0}^{|S|} (|S| choose k) (−p)^k (1 − p)^{|S|−k} =

χ_S(x) ((1 − p) − p)^{|S|} = χ_S(x) ε^{|S|}
As we have seen, we can look at L_2({0,1}^n) as a cross product of {1, χ_{i}} for all i ∈ [n], where every {1, χ_{i}} is the Fourier basis for the functions on the i'th place in the cross product {0,1}^n.

An equivalent way to define T_ε is to define for each of those {1, χ_{i}} bases an operator T_ε^i, which is defined in a similar way to T_ε:

• T_ε^i(1) = 1
• T_ε^i(χ_{i}) = p · (χ_{i})_{⊕e_i} + (1 − p) · (χ_{i})_{⊕0} = p(−χ_{i}) + (1 − p)χ_{i} = ε · χ_{i}

In this notation we have:

T_ε = T_ε^1 ⊗ T_ε^2 ⊗ ··· ⊗ T_ε^n

This gives an easy alternative way to calculate the eigenfunctions and eigenvalues of T_ε. More about this in lecture 4.
We can now calculate the Fourier expansion of T_ε(f):

Corollary 17 T_ε(f) = Σ_S f̂(S) ε^{|S|} χ_S

Proof T_ε(f) = T_ε(Σ_S f̂(S)χ_S) = Σ_S f̂(S)T_ε(χ_S) = Σ_S f̂(S)ε^{|S|}χ_S
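The eigenvalue statement can be checked by direct summation over y ∼ µ_p (my own sketch; `hat`, `mu_p`, `T` are hypothetical names):

```python
from itertools import product
import random

# Illustration of Claim 16 / Corollary 17: applying T_eps (with p = (1-eps)/2)
# multiplies the S-th Fourier coefficient by eps^{|S|}.
random.seed(1)
n, eps = 3, 0.6
p = (1 - eps) / 2
points = list(product((0, 1), repeat=n))

def chi(S, x):
    return (-1) ** sum(s & xi for s, xi in zip(S, x))

def hat(f, S):
    return sum(f[x] * chi(S, x) for x in points) / len(points)

def mu_p(y):
    k = sum(y)
    return p ** k * (1 - p) ** (n - k)

def xor(x, y):
    return tuple(a ^ b for a, b in zip(x, y))

def T(f):
    """T_eps f(x) = E_{y ~ mu_p}[f(x + y)]."""
    return {x: sum(mu_p(y) * f[xor(x, y)] for y in points) for x in points}

f = {x: random.random() for x in points}
Tf = T(f)
for S in product((0, 1), repeat=n):
    assert abs(hat(Tf, S) - eps ** sum(S) * hat(f, S)) < 1e-12
```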
Observation 18 A linear operator R is a convolution iff all the characters are its eigenvectors.

Proof
⇒: Assume R is a convolution, i.e. R(χ_S) = χ_S ∗ g for some g. By Claim 9:
R(χ_S) = χ_S ∗ g = Σ_T χ̂_S(T)ĝ(T)χ_T = Σ_T ĝ(T) · δ_{ST} · χ_T = ĝ(S)χ_S.

⇐: Assume that every χ_S is an eigenvector of R with a matching eigenvalue λ_S. Look at the function g = Σ_S λ_S χ_S. By Claim 9:
f ∗ g = Σ_S f̂(S)ĝ(S)χ_S = Σ_S f̂(S)λ_S χ_S = Σ_S f̂(S)R(χ_S) = R(Σ_S f̂(S)χ_S) = R(f)
As an immediate corollary we have that Tε is a convolution.
Observation 19 E[T_ε(f)] = E[f].

Our original definition of T_ε was that of a weighted average, hence we expect that the resulting function will have the same average as the original one. Indeed:

Proof
E[T_ε(f)] = ∫_x T_ε(f)(x)dµ_{1/2}(x) = ∫_x ∫_y f(x + y)dµ_p(y)dµ_{1/2}(x) = ∫_y (∫_x f(x + y)dµ_{1/2}(x)) dµ_p(y) = ∫_y E[f] dµ_p(y) = E[f]
Remark This observation is the special case T_0 T_ε = T_0 of the more general rule T_{ε_1} T_{ε_2} = T_{ε_1 ε_2}, which is also easy to prove.
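The semigroup rule from the remark can also be confirmed numerically (my own sketch; the helper `T` is a hypothetical name):

```python
from itertools import product
import random

# Illustration of the rule T_{e1} T_{e2} = T_{e1*e2}, by direct summation.
random.seed(2)
n = 3
points = list(product((0, 1), repeat=n))

def xor(x, y):
    return tuple(a ^ b for a, b in zip(x, y))

def T(eps, f):
    """T_eps f(x) = E_{y ~ mu_p}[f(x + y)] with p = (1 - eps)/2."""
    p = (1 - eps) / 2
    def mu(y):
        k = sum(y)
        return p ** k * (1 - p) ** (n - k)
    return {x: sum(mu(y) * f[xor(x, y)] for y in points) for x in points}

f = {x: random.random() for x in points}
e1, e2 = 0.8, 0.5
lhs = T(e1, T(e2, f))
rhs = T(e1 * e2, f)
assert all(abs(lhs[x] - rhs[x]) < 1e-12 for x in points)
```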
5 KKL Theorem
In the example of the dictator function we saw that the influence of a single variable can be as high as 1. In the Tribes function all the influences were much smaller: Θ(log n / n). For a constant function all the influences are zero, but what about a balanced function? Is it possible to find a balanced boolean function with smaller maximum influence?

The answer is no, as was conjectured by Linial and Ben-Or and proven in 1988 by Kahn, Kalai and Linial. So, according to the KKL theorem the Tribes function has the smallest⁵ maximum influence among all the balanced boolean functions.

In the proof of the KKL theorem we will use the following claim without a proof. The proof will be given in the next lecture.
Claim 20 For 0 < ε < 1 and a function f s.t. Range(f) ⊆ {−1, 0, 1} we have:

|T_ε f|_2 ≤ |f|_{1+ε^2}

Since |f|_2 ≤ 1 and 2/(1+ε^2) > 1, this claim shows that |T_ε f|_2 will be smaller than |f|_2, so the operation of smoothing will decrease the 2-norm of the function.
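Claim 20 can at least be sanity-checked numerically on random {−1, 0, 1}-valued functions (my own sketch, not a proof; `norm` and the other names are hypothetical):

```python
from itertools import product
import random

# Illustration of Claim 20 (hypercontractivity for functions with values in
# {-1, 0, 1}): |T_eps f|_2 <= |f|_{1+eps^2}, checked on random functions.
random.seed(3)
n, eps = 4, 0.7
p = (1 - eps) / 2
points = list(product((0, 1), repeat=n))

def xor(x, y):
    return tuple(a ^ b for a, b in zip(x, y))

def mu_p(y):
    k = sum(y)
    return p ** k * (1 - p) ** (n - k)

def norm(f, q):
    """|f|_q = (E |f|^q)^(1/q) under the uniform measure."""
    return (sum(abs(f[x]) ** q for x in points) / len(points)) ** (1 / q)

violations = 0
for _ in range(100):
    f = {x: random.choice((-1, 0, 1)) for x in points}
    Tf = {x: sum(mu_p(y) * f[xor(x, y)] for y in points) for x in points}
    if norm(Tf, 2) > norm(f, 1 + eps ** 2) + 1e-9:
        violations += 1
assert violations == 0
```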
Theorem 21 [KKL, 1988] For all 0 < v < 1 and f : {0,1}^n → {0,1} s.t. Var(f) = v we have:

max_{i∈[n]} I_i(f) ≥ [1 + o(1)] v · ln(n)/n
Proof
We'll list a few facts that will help us in the proof:

A. v = Var[f] = Σ_{S≠∅} f̂^2(S)

B. ∀i ∈ [n], I_i(f) = |f_i|_2^2 = 4 Σ_{S∋i} f̂^2(S)   (Theorem 12)

C. I = I(f) = 4 Σ_S f̂^2(S)|S|   (Corollary 13)

D. f_i = 2 · Σ_{S∋i} f̂(S)χ_S   (Claim 11)

E. (T_ε(f))^(S) = f̂(S)ε^{|S|}   (Corollary 17)

F. v/2 ≤ Σ_{0<|S|≤I/2v} f̂^2(S)

Proof (of F): By C, I ≥ 4 Σ_{|S|>I/2v} f̂^2(S)|S| ≥ (4I/2v) · Σ_{|S|>I/2v} f̂^2(S), so v/2 ≥ Σ_{|S|>I/2v} f̂^2(S). Using fact A the result follows.

⁵ Up to a constant.
For every i ∈ [n], let us look at the operation of T_ε on the function f_i:

|T_ε f_i|_2^2 =(E) Σ_S ((T_ε(f_i))^(S))^2 = Σ_S f̂_i^2(S)ε^{2|S|} =(D) 4 · Σ_{S∋i} f̂^2(S)ε^{2|S|}

For every i ∈ [n], f_i(x) ∈ {−1, 0, 1}, so we can use Claim 20:

I_i(f)^{2/(1+ε^2)} =(B) |f_i|_{1+ε^2}^2 ≥ |T_ε f_i|_2^2 = 4 · Σ_{S∋i} f̂^2(S)ε^{2|S|}
Summing over all the i ∈ [n] we get:

Σ_i I_i(f)^{2/(1+ε^2)} ≥ 4 · Σ_S f̂^2(S) · ε^{2|S|} · |S| ≥ 4 · ( Σ_{0<|S|≤I/2v} f̂^2(S) ) · ε^{I/v} · (I/2v) · (1 + o(1))    (1)

By fact F the last sum is greater than v/2, and so we have:

Σ_i I_i(f)^{2/(1+ε^2)} ≥ I · ε^{I/v} (1 + o(1))    (2)
Remark: At this point, if we plug in ε = 1/2, say, and assume that all influences are less than log(n)/n, then we immediately get that the left hand side is a negative power of n, which can only be explained if I is of order log n, so we are essentially done. We only push on a bit farther in order to get the nice constant (i.e. 1). However, we actually need this observation for bootstrapping: for larger values of ε which are closer to 1 we need to know that I is large in order to justify the fact that the expression ε^{2|S|} · |S| attains its minimum for the largest |S| in the sum, a fact that we used in Equation (1) above. Also note that for any fixed ε we need n to be sufficiently large, which is why we added (1 + o(1)) to the RHS.
Assume now that I_i(f) < c · v · (ln n)/n for every i. We will prove that c > 1 + o(1). As a result of this assumption we have two easy facts:

G. I = Σ_i I_i(f) < c · v · ln n.

H. Σ_i I_i(f)^{2/(1+ε^2)} ≤ I · (c · v · ln n / n)^{(1−ε^2)/(1+ε^2)}

Proof (of H): As is easy to prove, the sum Σ_i α_i^β under the constraints 0 ≤ α_i ≤ α, β > 1 and Σ_i α_i = T achieves its maximum when all the weight T is concentrated in T/α elements, each with weight α, for a total of (T/α) · α^β = T · α^{β−1}.

Since Σ_i I_i(f) = I, for every i I_i(f) < c · v · (ln n)/n, and 2/(1+ε^2) > 1, we have that:

Σ_i I_i(f)^{2/(1+ε^2)} ≤ I · (c · v · ln n / n)^{2/(1+ε^2) − 1} = I · (c · v · ln n / n)^{(1−ε^2)/(1+ε^2)}
Picking up from equation (2):

I · (c · v · ln n / n)^{(1−ε^2)/(1+ε^2)} ≥(H) Σ_i I_i(f)^{2/(1+ε^2)} ≥(2) I · ε^{I/v}(1 + o(1)) >(G) I · ε^{c ln n}(1 + o(1))

(c · v · ln n / n)^{(1−ε^2)/(1+ε^2)} > ε^{c ln n}(1 + o(1))

Taking ln:

((1−ε^2)/(1+ε^2)) · (ln(c · v · ln n) − ln(n)) > c · ln n · ln ε · (1 + o(1))

Both sides are negative; multiplying by −1 and rearranging:

((1−ε^2)/(1+ε^2)) · (ln(n) − ln(c · v · ln n)) < c · ln n · ln(1/ε) · (1 + o(1))

c > ((1−ε^2) / ((1+ε^2) ln(1/ε))) · (1 − ln(c · v · ln n)/ln(n)) · (1 + o(1))

Since ln(c · v · ln n)/ln(n) = o(1), this gives

c > ((1−ε^2) / ((1+ε^2) ln(1/ε))) · (1 + o(1)).

As this is true for every ε < 1, and as a quick calculation verifies that (1−ε^2)/((1+ε^2) ln(1/ε)) → 1 as ε → 1, we are done.
References

[KKL] J. Kahn, G. Kalai and N. Linial, The influence of variables on Boolean functions, in Proc. 29th Annual Symposium on Foundations of Computer Science, 68–80, 1988.