
Theorem 8.19 If algorithm $A$ has absolute approximation ratio $R_A$, then the shifting algorithm has absolute approximation ratio $kR_A/(k+1)$.
Proof. Let $N$ be the number of disks in some optimal solution. Since $A$ yields $R_A$-approximations, the number of disks returned by our algorithm for partition $P_i$ is bounded by $(1/R_A)\sum_{j \in P_i} N_j$, where $N_j$ is the optimal number of disks needed to cover the points in vertical strip $j$ of partition $P_i$ and where $j$ ranges over all such strips.
Let $O_i$ be the number of disks in the optimal solution that cover points in two adjacent strips of partition $P_i$. Our observation can be rewritten as $\sum_{j \in P_i} N_j \le N + O_i$. Because each partition has a different set of adjacent strips and each partition is shifted from the previous one by a full disk diameter, none of the disks that cover points in adjacent strips of $P_i$ can cover points in adjacent strips of $P_j$, for $i \ne j$. Thus the total number of disks that cover points in adjacent strips of any partition is at most $N$, the total number of disks in an optimal solution. Hence we can write $\sum_{i=1}^{k} O_i \le N$. By summing our first inequality over all $k$ partitions and substituting this second inequality, we obtain
$\sum_{i=1}^{k} \sum_{j \in P_i} N_j \le (k+1) \cdot N$
and therefore
$\min_{1 \le i \le k} \sum_{j \in P_i} N_j \le \frac{1}{k} \sum_{i=1}^{k} \sum_{j \in P_i} N_j \le \frac{k+1}{k} \cdot N.$
Using now our first bound, we conclude that the number of disks returned by the shifting algorithm is bounded by $(1/R_A) \cdot \frac{k+1}{k} \cdot N$, so that the shifting algorithm has absolute approximation ratio $kR_A/(k+1)$, as desired.
This result generalizes easily to coverage by uniform convex
shapes other than disks, with suitable modifications regarding the
effective diameter of the shape.
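To make the strategy concrete, here is a minimal Python sketch of the shifting algorithm analyzed above; the names (shifting_cover, cover_group) and the subroutine interface are assumptions for illustration, with cover_group standing in for algorithm $A$ applied to one group of $k$ consecutive strips of width equal to the disk diameter $d$.

from collections import defaultdict
from math import floor

def shifting_cover(points, d, k, cover_group):
    # points: iterable of (x, y); d: disk diameter; k: shifting parameter;
    # cover_group(pts): algorithm A, returns a list of disks covering pts.
    best = None
    for shift in range(k):                           # the k partitions P_1, ..., P_k
        groups = defaultdict(list)
        for p in points:
            strip = floor(p[0] / d)                  # elementary strip of width d
            groups[(strip + shift) // k].append(p)   # k strips per group, shifted by d per partition
        disks = []
        for pts in groups.values():                  # run A independently on each group
            disks.extend(cover_group(pts))
        if best is None or len(disks) < len(best):
            best = disks                             # keep the cheapest of the k covers
    return best

The algorithm simply pays the extra factor $(k+1)/k$ for the best of the $k$ partitions, which is exactly what the proof bounds.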
Theorem 8.20 There is an approximation scheme for Disk Covering such that, for every natural number $k$, the scheme provides an absolute approximation ratio of $(k/(k+1))^2$ and runs in $O(k^4 n^{O(k^2)})$ time.
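A quick sanity check of this ratio, under the (assumed) reading that the scheme applies the shifting strategy of Theorem 8.19 once in each dimension, with an exact algorithm ($R_A = 1$) used inside each resulting $k \times k$ square:
$R \;=\; \frac{k \cdot 1}{k+1} \cdot \frac{k \cdot 1}{k+1} \;=\; \left(\frac{k}{k+1}\right)^{2}.$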
8.3.4 Fixed Ratio Approximations
There are a very large number of problems that have some fixed-ratio approximation and thus belong to APX but do not appear to belong to PTAS, although they obey the necessary condition of simplicity. Examples include Vertex Cover, Maximum Cut, and the most basic problem of all, namely Maximum 3SAT (Max3SAT), the optimization version of 3SAT.
Theorem 8.21 MaxkSAT has a $2^{-k}$-approximation.
Proof. Consider the following simple algorithm (a code sketch of it follows the list).
• Assign to each remaining clause $c_i$ the weight $2^{-|c_i|}$; thus every unassigned literal left in a clause halves the weight of that clause. (The weight of a clause is inversely proportional to the number of ways in which it could be satisfied.)
• Pick any variable $x$ that appears in some remaining clause. Set $x$ to true if the sum of the weights of the clauses in which $x$ appears as an uncomplemented literal exceeds the sum of the weights of the clauses in which it appears as a complemented literal; set it to false otherwise.
• Update the clauses and their weights and repeat until all clauses have been satisfied or reduced to falsehood.
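Here is a minimal Python sketch of this greedy procedure; the names (satisfy_greedy, clauses, assignment) are ours, and clauses are represented as sets of nonzero integers, with $+v$ standing for variable $v$ and $-v$ for its negation.

def satisfy_greedy(clauses):
    # Return a truth assignment (dict: variable -> bool) leaving at most
    # the sum of 2**-|c| over all clauses c unsatisfied, as shown below.
    remaining = [set(c) for c in clauses]       # clauses not yet satisfied or falsified
    assignment = {}

    def weight(c):
        return 2.0 ** -len(c)                   # 2^-|c|: halves with every remaining literal

    while remaining:
        # pick any variable occurring in some remaining clause
        x = abs(next(iter(next(iter(remaining)))))
        pos = sum(weight(c) for c in remaining if x in c)     # weight of clauses containing  x
        neg = sum(weight(c) for c in remaining if -x in c)    # weight of clauses containing -x
        value = pos >= neg                      # choose the side of larger total weight
        assignment[x] = value
        sat_lit, unsat_lit = (x, -x) if value else (-x, x)

        updated = []
        for c in remaining:
            if sat_lit in c:
                continue                        # clause satisfied: drop it
            c.discard(unsat_lit)                # clause loses a literal: its weight doubles
            if c:
                updated.append(c)               # empty clause = reduced to falsehood: drop it
        remaining = updated
    return assignment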
We claim that this algorithm leaves at most $m \cdot 2^{-k}$ unsatisfied clauses (where $m$ is the number of clauses in the instance); since the best that any algorithm could do is to satisfy all $m$ clauses, the theorem follows from this claim. Note that $m \cdot 2^{-k}$ is exactly the total weight of the $m$ clauses of length $k$ in the original instance; thus our claim is that the number of clauses left unsatisfied by the algorithm is bounded by $\sum_{i=1}^{m} 2^{-|c_i|}$, the total weight of the clauses in the instance.
To prove our claim we use induction on the number of clauses. With a single clause, the algorithm clearly returns a satisfying truth assignment and thus meets the bound. Assume that the algorithm meets the bound on all instances of $m$ or fewer clauses and consider an instance of $m+1$ clauses. Let $x$ be the first variable set by the algorithm; denote by $m_t$ the number of clauses satisfied by the assignment, by $m_f$ the number of clauses losing a literal as a result of the assignment, and by $m_u = m + 1 - m_t - m_f$ the number of clauses unaffected by the assignment. Also let $w_{m+1}$ denote the total weight of all the clauses in the original instance, $w_t$ the total weight of the clauses satisfied by the assignment, $w_u$ the total weight of the unaffected clauses, and $w_f$ the total weight of the clauses losing a literal, measured before the loss of that literal; thus we can write $w_{m+1} = w_t + w_u + w_f$. Because we must have had $w_t \ge w_f$ in order to assign $x$ as we did, we can write $w_{m+1} = w_t + w_u + w_f \ge w_u + 2w_f$. The remaining $m + 1 - m_t = m_u + m_f$ clauses now have total weight $w_u + 2w_f$, because the weight of every clause that loses a literal doubles. By the inductive hypothesis, our algorithm leaves at most $w_u + 2w_f$ clauses unsatisfied among these clauses and thus also in the original instance; since we have, as noted above, $w_{m+1} \ge w_u + 2w_f$, our claim is proved.
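As a quick concrete check of the bound, the following snippet runs the satisfy_greedy sketch given earlier on a tiny Max3SAT instance (variables the algorithm never needed to set are read as false); with $m = 4$ clauses of length 3 the bound is $4 \cdot 2^{-3} < 1$, so the returned assignment must satisfy every clause.

clauses = [{1, 2, 3}, {-1, 2, -3}, {1, -2, 3}, {-1, -2, -3}]
a = satisfy_greedy(clauses)
unsat = sum(1 for c in clauses
            if not any((lit > 0) == a.get(abs(lit), False) for lit in c))
assert unsat <= 4 * 2 ** -3     # hence unsat == 0: all four clauses are satisfied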
Definition 8.12 Let Π1 and Π2 be two problems in NPO. We say that Π1 PTAS-reduces to Π2 if there exist three functions, f, g, and h, such that
• for any instance x of Π1, f(x) is an instance of Π2 and is computable in time polynomial in |x|;
• for any instance x of Π1, any solution y for instance f(x) of Π2, and any rational precision requirement ε (expressed as a fraction), g(x, y, ε) is a solution for x and is computable in time polynomial in |x| and |y|;
• h is a computable injective function on the set of rationals in the interval [0, 1);
• for any instance x of Π1, any solution y for instance f(x) of Π2, and any precision requirement ε (expressed as a fraction), if the value of y obeys the precision requirement h(ε), then the value of g(x, y, ε) obeys the precision requirement ε.
Proposition 8.4
• PTAS-reductions are reflexive and transitive.
• If Π1 PTAS-reduces to Π2 and Π2 belongs to APX (respectively, PTAS), then Π1 belongs to APX (respectively, PTAS); a code sketch of this transfer follows.
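The second claim can be made concrete with a minimal typed Python sketch; the names here (transfer_scheme, scheme1, scheme2) are assumptions for illustration, while the three maps f, g, h are exactly those of Definition 8.12.

from typing import Callable, TypeVar

X1 = TypeVar("X1")  # instances of Pi_1
X2 = TypeVar("X2")  # instances of Pi_2
Y1 = TypeVar("Y1")  # solutions of Pi_1
Y2 = TypeVar("Y2")  # solutions of Pi_2

def transfer_scheme(
    f: Callable[[X1], X2],                  # instance map
    g: Callable[[X1, Y2, float], Y1],       # solution map
    h: Callable[[float], float],            # precision map
    scheme2: Callable[[X2, float], Y2],     # approximation scheme for Pi_2
) -> Callable[[X1, float], Y1]:
    # Build an approximation scheme for Pi_1 out of one for Pi_2.
    def scheme1(x: X1, eps: float) -> Y1:
        y = scheme2(f(x), h(eps))   # solve the transformed instance to precision h(eps)
        return g(x, y, eps)         # map back; by Definition 8.12 this meets precision eps
    return scheme1

The same composition with a fixed-ratio algorithm in place of scheme2 gives the APX half of the claim.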
Definition 8.13 The class OPTNP is exactly the class of problems that PTAS-reduce to Max3SAT.
Theorem 8.22 The Maximum Weighted Satisfiability (MaxWSAT) problem has the same instances as Satisfiability, with the addition of a weight function mapping each variable to a natural number; the objective is to find a satisfying truth assignment that maximizes the total weight of the true variables. An instance of the Maximum Bounded Weighted Satisfiability problem is an instance of MaxWSAT with a bound W such that the sum of the weights of all variables in the instance lies in the interval [W, 2W].
• Maximum Weighted Satisfiability is NPO-complete.
• Maximum Bounded Weighted Satisfiability is APX-complete.
Proof. Let Π be a problem in NPO and let M be a nondeterministic machine that, for each instance of Π, guesses a solution, checks that it is feasible, and computes its value. If the guess fails, M halts with a 0 on the tape; otherwise it halts with the value of the solution, written in binary and “in reverse,” with its least significant bit on square 1 and increasingly significant bits to the right of that position. By the definition of NPO, M runs in polynomial time. For M and any instance x, the construction used in the proof of Cook’s theorem yields a Boolean formula of polynomial size that describes exactly those computation paths of M on input x and guess y that lead to a nonzero answer. We assign a weight of 0 to all variables used in the construction, except for those that denote that a tape square contains the character 1 at the end of the computation, and that only for squares to the right of position 0; to the variable for square $i$ we assign weight $2^{i-1}$. That is, only the tape squares that contain a 1 in the binary representation of the value of the solution for x count toward the weight of the MaxWSAT solution, so that the weight of a satisfying assignment equals the value of the corresponding solution.
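A tiny illustration of this weighting (names assumed): with weight $2^{i-1}$ on the variable asserting that square $i$ holds a 1 at the end of the computation, the MaxWSAT weight of a satisfying assignment is just the binary value M left on squares 1, 2, ...

def value_from_assignment(one_squares):
    # one_squares: the set of square indices i >= 1 whose "holds a 1" variable is true
    return sum(2 ** (i - 1) for i in one_squares)

assert value_from_assignment({1, 3}) == 5    # binary 101 read least-significant-bit first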
This transformation between instances can easily be
carried out in polynomial time; a solution for the original problem
can be recovered by looking at the assignment of the variables
describing the initial guess; and the precision mapping function h
is just the identity.
Definition 8.14 Let Π1 and Π2 be two maximization problems; denote the value of an optimal solution for an instance x by opt(x). A gap-preserving reduction from Π1 to Π2 is a polynomial-time map f from instances of Π1 to instances of Π2, together with two pairs of functions, (c1, r1) and (c2, r2), such that r1 and r2 return values no smaller than 1 and the following implications hold:
$\mathrm{opt}(x) \ge c_1(x) \Rightarrow \mathrm{opt}(f(x)) \ge c_2(f(x))$
$\mathrm{opt}(x) \le c_1(x)/r_1(x) \Rightarrow \mathrm{opt}(f(x)) \le c_2(f(x))/r_2(f(x))$
Theorem 8.23 For each problem Π in NP, there is a polynomial-time map f from instances of Π to instances of Max3SAT and a fixed ε > 0 such that, for any instance x of Π, the following implications hold (where |f(x)| denotes the number of clauses of f(x)):
x is a “yes” instance $\Rightarrow \mathrm{opt}(f(x)) = |f(x)|$
x is a “no” instance $\Rightarrow \mathrm{opt}(f(x)) < (1 - \varepsilon)\,|f(x)|$
Proof. The gist of the alternative characterization of NP is that a “yes” instance of a problem in NP has a certificate that can be verified probabilistically in polynomial time by inspecting only a constant number of bits of the certificate, chosen with the help of a logarithmic number of random bits. If x is a “yes” instance, then the verifier accepts it with probability 1 (that is, it accepts no matter what the random bits are); otherwise, the verifier rejects it with probability at least 1/2 (i.e., at least half of the random bit sequences lead to rejection).
Since Π is in NP, a “yes” instance of size n has a certificate that can be verified in polynomial time with the help of at most $c_1 \log n$ random bits and by reading at most $c_2$ bits from the certificate. All $2^{c_2}$ possible outcomes that can result from looking up these $c_2$ bits can be examined. Each outcome determines a computation path; some paths lead to acceptance and some to rejection, each in at most a polynomial number of steps. Because there is a constant number of paths and each path is of polynomial length, we can examine all of these paths, determine which are accepting and which rejecting, and write a formula of constant size that describes the accepting paths in terms of the bits of the certificate read during the computation. This formula is a disjunction of at most $2^{c_2}$ conjuncts, where each conjunct describes one path and thus has at most $c_2$ literals; it can be rewritten in 3SAT form with $k$ clauses, where $k$ is a constant that depends only on $c_2$. Each such formula is satisfiable if and only if the $c_2$ bits of the certificate examined under the chosen sequence of random bits can assume values that lead the verifier to accept its input. We can then take all $n^{c_1}$ such formulae, one for each sequence of random bits, and place them into a single large conjunction; the resulting Max3SAT instance $f(x)$ thus has $kn^{c_1}$ clauses. This conjunction is satisfiable if and only if there exists a certificate such that, for each choice of $c_1 \log n$ random bits (i.e., for each choice of the $c_2$ certificate bits to be read), the verifier accepts its input.
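To make the enumeration step concrete, here is a minimal Python sketch that, for one fixed random string, builds the disjunction of accepting outcomes; every name is assumed for illustration, and the verifier is abstracted as a black-box decision on the $c_2$ certificate bits it reads.

from itertools import product
from typing import Callable, List, Tuple

def accepting_dnf(
    positions: List[int],                        # the c_2 certificate positions read
    accepts: Callable[[Tuple[int, ...]], bool],  # verifier's decision on those bits
) -> List[List[Tuple[int, bool]]]:
    # Return a DNF: a list of conjuncts, each a list of (position, required_bit) pairs;
    # a certificate satisfies the DNF exactly when the verifier accepts under this random string.
    dnf = []
    for bits in product((0, 1), repeat=len(positions)):   # the 2^{c_2} possible outcomes
        if accepts(bits):                                 # this outcome is an accepting path
            dnf.append([(p, b == 1) for p, b in zip(positions, bits)])
    return dnf

Converting each such constant-size DNF to 3SAT clauses and conjoining the results over all $n^{c_1}$ random strings yields the instance $f(x)$ used in the proof.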
If the verifier rejects its input, then it does so for at least one half of the possible choices of random bits. Therefore at least one half of the constant-size formulae are unsatisfiable. But then at least one out of every $k$ clauses must be false in each of these $\frac{1}{2}n^{c_1}$ formulae, so that any assignment must leave at least $\frac{1}{2}n^{c_1}$ clauses unsatisfied. Thus if the verifier accepts its input, then all $kn^{c_1}$ clauses are satisfied, whereas, if it rejects its input, then at most $(k - \frac{1}{2})n^{c_1} = (1 - \frac{1}{2k})kn^{c_1}$ clauses can be satisfied. Since $k$ is a fixed constant, we have obtained the desired gap, with $\varepsilon = \frac{1}{2k}$.
Corollary 8.3 No OPTNP-hard problem can be in PTAS unless P
equals NP.
Theorem 8.24 Maximum Bounded Weighted Satisfiability PTAS-reduces to Max3SAT.
Corollary 8.4 OPTNP equals APX.
NP-Hardness of Approximation Schemes.
• If the problem is not p-simple or if its decision version is strongly
NP-complete, then it is not in FPTAS unless P equals NP.
• If the problem is not simple or if it is OPTNP-hard, then it is not in PTAS unless P equals NP.