The complexity of counting graph homomorphisms∗

Martin Dyer and Catherine Greenhill†
Abstract
The problem of counting homomorphisms from a general graph G to a fixed
graph H is a natural generalisation of graph colouring, with important applications in statistical physics. The problem of deciding whether any homomorphism
exists was considered by Hell and Nešetřil. They showed that decision is NP-complete unless H has a loop or is bipartite; otherwise it is in P. We consider the
problem of exactly counting such homomorphisms, and give a similarly complete
characterisation. We show that counting is #P-complete unless every connected
component of H is an isolated vertex without a loop, a complete graph with all
loops present, or a complete unlooped bipartite graph; otherwise it is in P. We
prove further that this remains true when G has bounded degree. In particular,
our theorems provide the first proof of #P-completeness of the partition function
of certain models from statistical physics, such as the Widom–Rowlinson model,
even in graphs of maximum degree 3. Our results are proved using a mixture of
spectral analysis, interpolation and combinatorial arguments.
1 Introduction
Combinatorial counting problems on graphs are important in their own right, and for
their application to statistical physics. In the physics application it is often a weighted
version of the problem which is of interest, corresponding to the partition function of the
associated Gibbs distribution. Exactly counting proper graph colourings (i.e. evaluating
the chromatic polynomial) is a classical problem and its close relative is evaluating the
partition function of the Potts model in statistical physics. See, for example [24]. Here
we consider the complexity of exact counting in a range of models of this type. We show
that polynomial-time algorithms for exact counting are unlikely to exist, other than in
certain “obvious” cases.
Many counting problems can be restated as counting the number of homomorphisms
from the graph of interest G to a particular fixed graph H. The vertices of H correspond
to colours, and the edges show which colours may be adjacent. The graph H may contain
loops. Specifically, let C be a set of k colours, where k is a constant. Let H = (C, EH )
∗School of Computer Studies, University of Leeds, Leeds LS2 9JT, UNITED KINGDOM. Research supported by ESPRIT Working Group RAND2.
†Supported by a Leverhulme Special Research Fellowship.
be a graph with vertex set C. Given a graph G = (V, E) with vertex set V, a map X : V → C is called an H-colouring if
\[
\{X(v), X(w)\} \in E_H \quad \text{for all } \{v, w\} \in E.
\]
In other words, X is a homomorphism from G to H. Let ΩH (G) denote the set of
all H-colourings of G. The counting problem is then to determine |ΩH (G)|. We may
further allow the vertices or edges of H to possess weights. The above homomorphism
viewpoint has been taken previously by many authors, for example [1, 2, 13, 18, 21], and
more recently by Hell and Nešetřil [11], Galluccio, Hell and Nešetřil [9] and Brightwell
and Winkler [3]. The latter consider the relationship between graph homomorphisms
and phase transitions in statistical physics.
Thus, for example, proper k-colourings of G would have H a k-clique with no loops.
The Potts model corresponds to introducing a loop on every vertex and giving these
loops the same positive edge-weight, the original clique edges all having (say) unit weight.
The problem of counting independent sets corresponds to H being a single edge with
one looped vertex. The (vertex) weighted version here is the well-known hardcore lattice
gas model. The looped vertex has weight 1, and the unlooped weight λ > 0. (See,
for example [15, 3, 20, 7].) Another example with physical application is the problem
of counting q-particle Widom–Rowlinson configurations in graphs (see [25, 16]), where
q ≥ 2. This is a particular model of a gas consisting of q types of particles. The graph
corresponding to q-particle Widom–Rowlinson has q + 1 looped vertices, one for each
particle type and one representing empty sites. The latter vertex is joined to all other
vertices. The graph corresponding to the 4-particle Widom–Rowlinson model is shown
in Figure 1.
Figure 1: The graph describing 4-particle Widom–Rowlinson configurations
A further example is the Beach model [5], which is a physical system with more than
one measure of maximal entropy. The graph corresponding to the Beach model is shown
in Figure 2.
Hell and Nešetřil [11] gave a full characterisation of graphs H for which the corresponding decision problem (i.e. “is ΩH (G) empty ?”) is NP-complete. The decision
problem corresponding to H can easily be solved in polynomial time if H has a loop
or is bipartite. Conversely, Hell and Nešetřil [11] showed that if H is loopless and not
bipartite then the decision problem corresponding to H is NP-complete. We will (somewhat unusually) consider the graph which consists of an unlooped isolated vertex v to be a complete bipartite graph with vertex bipartition {v} ∪ ∅. This should be borne in mind when reading Hell and Nešetřil's result above and our Theorem 1.1 below.

Figure 2: The graph describing the Beach model
We consider the complexity of exactly counting H-colourings. Denote by #H the
counting problem which takes as an instance a graph G and returns the number of H-colourings of G, |ΩH (G)|. The problem #H is clearly in #P for every graph H. We will
prove the following.
Theorem 1.1 Let H be a fixed graph. The problem of counting H-colourings of graphs
is #P-complete if H has a connected component which is not a complete graph with
all loops present or a complete bipartite graph with no loops present. Otherwise, the
counting problem is in P.
In particular, Theorem 1.1 provides the first #P-completeness proof for counting configurations in both the Widom-Rowlinson and Beach models. Note further that #P-hardness
of the partition function for any weighted version of the problem can be deduced from
our results below, whenever some suitable version of the underlying counting problem is
#P-complete. For example, #P-hardness of the partition functions of the appropriate
weighted versions of the Widom-Rowlinson and Beach models follows easily.
In physical applications, we are usually interested in graphs of low degree. Therefore,
we will prove further the following theorem.
Theorem 1.2 Let H be a graph such that #H is #P-complete. Then there exists
a constant ∆ such that #H remains #P-complete when restricted to instances with
maximum degree at most ∆.
We can show that the implied ∆ in Theorem 1.2 is three in most important cases. For example, this holds for both the Widom-Rowlinson and Beach models. Unfortunately,
for a technical reason which will emerge, we are not able to assert this in general.
Theorem 1.2 shows that a counting problem which is #P-complete remains #P-complete when restricted to instances with some constant maximum degree. This is
in contrast with the corresponding result for decision problems, obtained by Galluccio,
Hell and Nešetřil [9]. They showed that there exist graphs H with NP-complete decision
problems, such that decision is always a polynomial-time operation when restricted
to graphs with maximum degree 3. Moreover, there does not seem to be any clear
characterisation of graphs for which the corresponding decision problem is polynomial-time, when restricted to graphs with a given constant maximum degree. Indeed, it is
possible that no such characterisation exists.
Theorem 1.1 is proved by using a mixture of algebraic and combinatorial methods. The algebraic tools involve spectral analysis and interpolation, using operations on
graphs called “stretchings” and “thickenings”. This has a flavour of Jaeger, Vertigan
and Welsh’s approach to proving #P-hardness of the Tutte polynomial [14]. In particular, we present a useful result (Lemma 3.4) which we believe is the first interpolation
proof involving eigenvalues. Unfortunately, the algebraic tools do not seem sufficient, by
themselves, to prove Theorem 1.1. Therefore the proof is completed by a combinatorial
case analysis, somewhat analogous to Hell and Nešetřil’s approach to the decision problem. However, for the decision problem it is easy to see that attention can be restricted
to the case where H is connected. In the counting problem this fact is far from obvious,
and forms the content of our central Theorem 4.1.
Two #P-complete problems are the starting points of all our reductions. The first
is the problem of counting proper 3-colourings of graphs. (There is an easy and well-known reduction from this problem to that of counting proper q-colourings of graphs,
for any q ≥ 4.) This 3-colouring problem is #P-complete even when restricted to
bipartite graphs, as shown by Linial [19]. The second “base” problem is that of counting
independent sets in graphs. This problem is #P-complete even when restricted to line
graphs, as shown by Valiant [23].
The plan of the paper is as follows. In the next section we introduce the operations
of stretchings and thickenings. In Section 3 we use these operations to prove some interpolation results. In Section 3.1 we show how the counting problem for a singular weight
matrix can always be solved by reduction from the counting problem corresponding to
a certain nonsingular matrix, possibly by introducing vertex weights. In Section 3.2 we
show that these vertex weights can be ignored in any proof of #P-hardness. In Section 4
we prove Theorem 1.1, by analysing cases. Finally, in Section 5 we prove Theorem 1.2,
and show that all graph homomorphism counting problems are easy on instances with
maximum degree 2.
1.1 Edge weights and vertex weights
The graph H is described by the k × k adjacency matrix A, where Aij = 1 if {i, j} ∈ EH
and Aij = 0 otherwise, for all i, j ∈ C. Other problems involve weighted versions of this
set-up. The weights may be on the vertices of H or on the edges. The edge weights may
be stored in a symmetric matrix A, called a weight matrix, such that Aij = 0 if and only if {i, j} ∉ EH. Our focus throughout the paper is on counting graph homomorphisms
(where all edge weights and all vertex weights equal 1). In the proofs, however, it is
usually more convenient to work with the adjacency matrix of the graph (or with more
general weight matrices). Of course, the weighted problems are of interest in their own
right, as they are used in statistical physics.
Suppose that the vertex weights are {λi }i∈C . We assume that λi > 0 for all i ∈ C.
Let D be the diagonal matrix with λi in the (i, i) position. Thus D is an invertible
diagonal matrix. Let the edge weights be stored in the matrix A. If X ∈ ΩH(G), let
\[
w_A(X) = \prod_{\{v,w\}\in E} A_{X(v)X(w)}, \qquad \tilde w_D(X) = \prod_{v\in V} \lambda_{X(v)}.
\]
Thus wA(X) measures the edge-weighting of the H-colouring X, and w̃D(X) measures the vertex-weighting of X. In total, the H-colouring X has weight wA,D(X), defined by
\[
w_{A,D}(X) = w_A(X)\,\tilde w_D(X) = \prod_{\{v,w\}\in E} A_{X(v)X(w)} \prod_{v\in V} \lambda_{X(v)}.
\]
There is a slight abuse in writing AX(v)X(w) , where {v, w} ∈ E, as the indices of a matrix
are ordered while the endpoints of an edge are not. However, as A is symmetric this
does not cause any harm. We are interested in ZA,D(G), defined by
\[
Z_{A,D}(G) = \sum_{X \in \Omega_H(G)} w_{A,D}(X).
\]
If all vertex weights are equal to 1, we write wA (X) and ZA (G) instead of wA,D (X)
and ZA,D (G) respectively. If in addition all edge weights are equal to 1, then ZA (G) =
|ΩH (G)|.
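For concreteness, the quantities just defined can be computed directly from their definitions by summing over all k^|V| maps X : V → C. The following Python sketch (ours, not part of the original paper; the graph and matrix in the example are hypothetical toy inputs) does exactly this. It is exponential in |V| and is intended only to make the definitions concrete.

```python
import itertools
import numpy as np

def partition_function(A, lam, vertices, edges):
    """Compute Z_{A,D}(G) by brute force over all maps X: V -> C.

    A    : k x k symmetric weight matrix (adjacency matrix of H in the 0-1 case)
    lam  : length-k vector of positive vertex weights (the diagonal of D)
    vertices, edges : the graph G
    """
    k = len(lam)
    total = 0.0
    for X in itertools.product(range(k), repeat=len(vertices)):
        colour = dict(zip(vertices, X))
        w = 1.0
        for v, u in edges:                     # edge weights w_A(X)
            w *= A[colour[v]][colour[u]]
        for v in vertices:                     # vertex weights w~_D(X)
            w *= lam[colour[v]]
        total += w
    return total

# Example: H a single edge with one looped vertex (independent sets / hard-core model).
A = np.array([[1, 1],
              [1, 0]])      # vertex 0 looped, vertex 1 unlooped
lam = [1.0, 1.0]            # all vertex weights 1, so Z_A(G) = |Omega_H(G)|
G_vertices = [0, 1, 2]
G_edges = [(0, 1), (1, 2)]  # a path on three vertices
print(partition_function(A, lam, G_vertices, G_edges))  # 5 independent sets of the path
```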
2 Stretchings and thickenings
Recently, most #P-completeness proofs have used interpolation as the main tool in building polynomial-time reductions. See, for example, [22, 23, 14, 4, 10]. (For some
#P-completeness proofs which do not use interpolation, see [12]). These interpolations
are often designed to preserve desirable properties, such as bounded maximum degree
of a graph. Two tools often used in these proofs are stretchings and thickenings, which
are now described in the graph setting.
Let Pr denote the path with r + 1 vertices u0 , . . . , ur and r edges {ui , ui+1 }, for
0 ≤ i < r, where r ≥ 1. Let G = (V, E) be a given graph. The r-stretch of G, denoted
by Sr G, is obtained by replacing each edge {v, w} in E by a copy of the path Pr , using
the identifications v = u0 , w = ur . We can also define the r-stretch of G with respect
to a subset F ⊆ E of edges of G, denoted by Sr (F ) (G). To form Sr (F ) (G), replace each
edge in F by a copy of Pr .
We seek an expression for ZA(SrG). For i, j ∈ C let
\[
\Omega_H^{(i,j)}(P_r) = \{X \in \Omega_H(P_r) \mid X(u_0) = i,\ X(u_r) = j\}. \tag{1}
\]
It is not difficult to see that these sets form a partition of ΩH(Pr). Let D be a diagonal matrix of positive vertex weights {λi | i ∈ C} and let Π be the diagonal matrix with (i, i) entry equal to √λi. Using induction, one can prove that
\[
\sum_{X \in \Omega_H^{(i,j)}(P_r)} w_{A,D}(X) = \sqrt{\lambda_i \lambda_j}\, (B^r)_{ij} \tag{2}
\]
where B = ΠAΠ. (Note that the proof follows exactly the same steps as the proof of Lemma 3.8, given below.) In the vertex-unweighted case, we have
\[
\sum_{X \in \Omega_H^{(i,j)}(P_r)} w_A(X) = (A^r)_{ij}. \tag{3}
\]
There is a relationship between the graph Ar and random walks on graphs which
are endowed with H-colourings. Consider performing a random walk on the vertices of
a graph G. If (A^r)ij ≠ 0 then it is possible to walk from a vertex coloured i to a vertex
coloured j in exactly r steps.
Using (3), we can express ZA (Sr G) in terms of the entries of Ar , for r ≥ 1.
Corollary 2.1 Let r ≥ 1. Then
\[
Z_A(S_r G) = \sum_{X: V \to C} \prod_{\{v,w\}\in E} (A^r)_{X(v)X(w)} = Z_{A^r}(G).
\]
Proof. Suppose that X ∈ ΩH(SrG). We can think of X as being formed from the following ingredients: we have a map Y : V → C which is the restriction of X to the vertices of G, and for each edge {v, w} ∈ E we have the restriction of X to the path Pr between v and w, with endpoints coloured Y(v) and Y(w). This construction can be reversed, giving a bijection between ΩH(SrG) and
\[
\bigcup_{Y: V \to C}\ \prod_{\{v,w\}\in E} \Omega_H^{(Y(v),Y(w))}(P_r).
\]
Using this bijection, we find that
\[
Z_A(S_r G) = \sum_{X \in \Omega_H(S_r G)} w_A(X)
           = \sum_{Y: V \to C} \prod_{\{v,w\}\in E}\ \sum_{Z \in \Omega_H^{(Y(v),Y(w))}(P_r)} w_A(Z)
           = \sum_{Y: V \to C} \prod_{\{v,w\}\in E} (A^r)_{Y(v)Y(w)},
\]
as claimed. The second equality follows from the bijection and the third equality follows from (3).
Think of Sr as an operation which acts on graphs. Let σr A = Ar , the rth power of
A. Then Corollary 2.1 can be restated as
ZA (Sr G) = Zσr A (G).
We can think of σr as the r-stretch operation for weight matrices. Notice that S1 and
σ1 are both identity maps. Moreover Sr St = Srt and σr σt = σrt for all r, t ≥ 1. The
vertex-weighted form of Corollary 2.1 states that
\[
Z_{A,D}(S_r G) = Z_{A\sigma_{r-1}(DA),\,D}(G)
\]
for any graph G and any diagonal matrix D of vertex weights. This statement is proved using (2).
Now let p ≥ 1. The p-thickening of the graph G, denoted by Tp G, is obtained by
replacing each edge by p copies of itself. This results in a multigraph where each edge
has multiplicity p. We can also define the p-thickening of a graph G with respect to a
subset F ⊆ E of edges of G, denoted by Tp (F ) (G). Form Tp (F ) (G) by replacing each
edge in F by p copies of itself.
Since the endpoints of an edge of G become endpoints of p edges in TpG, we see immediately that
\[
Z_A(T_p G) = \sum_{X: V \to C} \prod_{\{v,w\}\in E} A_{X(v)X(w)}^{\,p}. \tag{4}
\]
Think of Tp as an operation which acts on (multi)graphs. Let τp A denote the matrix
whose (i, j) entry is equal to Aij p . Then ZA (Tp G) = Zτp A (G). We can think of τp as
the p-thickening operation for weight matrices. Notice that T1 and τ1 are both identity
maps. Moreover, Tp Tq = Tpq and τp τq = τpq for all p, q ≥ 1.
Thickenings do not interfere with vertex-weighted problems. Specifically, it is not
difficult to see that
ZA,D (Tp G) = Zτp A,D (G)
for all weight matrices A, invertible diagonal matrices D and graphs G. This gives rise
to a polynomial-time reduction from EVAL(τp A, D) to EVAL(A, D) for any p ≥ 1 which
is polynomially bounded.
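The identities ZA(SrG) = ZσrA(G) and ZA(TpG) = ZτpA(G) are easy to check numerically on small examples. The sketch below (ours; it reuses the hypothetical partition_function helper from the earlier sketch) implements the two graph operations and verifies both identities for a small path and the hard-core matrix used before.

```python
import numpy as np
# Reuses the hypothetical partition_function helper from the sketch in Section 1.1.

def stretch(vertices, edges, r):
    """r-stretch S_r G: replace each edge by a path with r edges (r - 1 new internal vertices)."""
    new_vertices = list(vertices)
    new_edges = []
    fresh = max(vertices) + 1
    for (v, w) in edges:
        path = [v] + list(range(fresh, fresh + r - 1)) + [w]
        fresh += r - 1
        new_vertices.extend(path[1:-1])
        new_edges.extend(zip(path[:-1], path[1:]))
    return new_vertices, new_edges

def thicken(edges, p):
    """p-thickening T_p G: replace each edge by p parallel copies."""
    return [e for e in edges for _ in range(p)]

A = np.array([[1, 1], [1, 0]])              # H = an edge with one looped endpoint, as before
lam = [1.0, 1.0]
V, E = [0, 1, 2], [(0, 1), (1, 2)]          # G is the path on three vertices
r, p = 3, 2

SV, SE = stretch(V, E, r)
assert partition_function(A, lam, SV, SE) == partition_function(np.linalg.matrix_power(A, r), lam, V, E)  # Corollary 2.1
assert partition_function(A, lam, V, thicken(E, p)) == partition_function(A ** p, lam, V, E)              # tau_p A has entries A_ij^p
```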
In some circumstances, we wish to perform the stretching or thickening operation
with respect to some subset of edges only. For the rest of this section, however, we
consider the case where all edges are involved. Now consider how the thickening and
stretching operations interact with each other.
The r-stretch of the p-thickening of G is denoted by SrTpG. By inspection, we see that
\[
Z_A(S_r T_p G) = \sum_{X: V \to C} \prod_{\{v,w\}\in E} \bigl[(A^r)_{X(v)X(w)}\bigr]^{p} = Z_{\tau_p \sigma_r A}(G). \tag{5}
\]
Notice that the thickening and stretching operations are applied to A in the reverse order to that in which they are applied to G. The p-thickening of the r-stretch of G is denoted by TpSrG. Let B = τpA, so that Bij = Aij^p. Then, by inspection, we see that
\[
Z_A(T_p S_r G) = \sum_{X: V \to C} \prod_{\{v,w\}\in E} (B^r)_{X(v)X(w)} = Z_{\sigma_r \tau_p A}(G).
\]
Again, the order of the stretching and thickening operations is reversed for the weight
matrix. For illustration, the graphs S5 T3 e and T3 S5 e are shown in Figure 3, where e is
a single edge.
Figure 3: The graphs S5T3e and T3S5e, where e = {v, w}

We would like to be able to apply arbitrary compositions of the thickening and stretching operations.

Lemma 2.1 Any composition of thickening and stretching operations is equivalent to a composition of the form
\[
S_{r_\ell} T_{p_\ell} S_{r_{\ell-1}} T_{p_{\ell-1}} \cdots S_{r_2} T_{p_2} S_{r_1} T_{p_1},
\]
where ℓ ≥ 1 and ri, pi ≥ 1 for 1 ≤ i ≤ ℓ. In addition, we have
\[
Z_A(S_{r_\ell} T_{p_\ell} S_{r_{\ell-1}} T_{p_{\ell-1}} \cdots S_{r_1} T_{p_1} G) = Z_{\tau_{p_1}\sigma_{r_1}\cdots\tau_{p_{\ell-1}}\sigma_{r_{\ell-1}}\tau_{p_\ell}\sigma_{r_\ell} A}(G).
\]

Proof. The first statement follows since SrSt = Srt, TpTq = Tpq, S1 = id and T1 = id for all r, t, p, q ≥ 1. The second statement can easily be proved by induction on ℓ.
3 Interpolation results
We now describe some interpolation results which are obtained using stretchings and
thickenings. First, let us introduce some notation. Let H be a graph and A a weight
matrix on H. Let D be a nonsingular diagonal matrix of vertex weights. The problems
#H, EVAL(A) and EVAL(A, D) are defined below.
PROBLEM: #H
INSTANCE: A graph G
OUTPUT: |ΩH (G)|

PROBLEM: EVAL(A)
INSTANCE: A graph G
OUTPUT: ZA (G)

PROBLEM: EVAL(A, D)
INSTANCE: A graph G
OUTPUT: ZA,D (G)
If A is the adjacency matrix of the graph H, then the problems #H and EVAL(A)
are identical. Similarly, if D is the identity matrix then the problems EVAL(A) and
EVAL(A, D) are identical.
The operations of stretching and thickening give rise to polynomial-time reductions,
as below.
Lemma 3.1 Suppose that r, p are positive integer constants and A is a weight matrix.
There is a polynomial-time reduction from EVAL(τp σr A) to the problem EVAL(A).
Proof. Let G be an instance of EVAL(A). We can form the graph SrTpG from G in polynomial time, and ZτpσrA(G) = ZA(SrTpG) by (5). This completes the polynomial-time reduction.
This result can be extended to constant length compositions of stretching and thickening operations, using Lemma 2.1. Something slightly more complicated can be said if
vertex weights are in use.
Many polynomial-time reductions involve the following standard interpolation technique, as used in [22, 4]. Although the result is well-known, for completeness we include
a proof.
Lemma 3.2 Let w1, . . . , wr be known distinct nonzero constants. Suppose that we know values f1, . . . , fr such that
\[
f_s = \sum_{i=1}^{r} c_i w_i^{\,s}
\]
for 1 ≤ s ≤ r. The coefficients c1, . . . , cr can be evaluated in polynomial time.
Proof. We can express the equations in matrix form, as
\[
\begin{pmatrix} f_1 \\ f_2 \\ \vdots \\ f_r \end{pmatrix}
=
\begin{pmatrix}
w_1 & w_2 & \cdots & w_r \\
w_1^2 & w_2^2 & \cdots & w_r^2 \\
\vdots & \vdots & \ddots & \vdots \\
w_1^r & w_2^r & \cdots & w_r^r
\end{pmatrix}
\begin{pmatrix} c_1 \\ c_2 \\ \vdots \\ c_r \end{pmatrix}.
\]
The r × r matrix in the above equation is invertible. To see this, divide each column by the entry in the first row of that column. The result is a Vandermonde matrix with distinct columns. This matrix is invertible, hence so is the original matrix. We can invert the matrix in polynomial time, to solve for c1, . . . , cr.
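Lemma 3.2 is a straightforward linear-algebra computation; the following sketch (ours, with hypothetical toy data) sets up the scaled Vandermonde system and solves it. In the reductions the values fs come from an oracle and the wi are known candidate weights; here both are made up for illustration.

```python
import numpy as np

def recover_coefficients(ws, fs):
    """Recover c_1..c_r from f_s = sum_i c_i * w_i^s, s = 1..r, for known distinct nonzero w_i (Lemma 3.2)."""
    r = len(ws)
    M = np.array([[w ** s for w in ws] for s in range(1, r + 1)], dtype=float)  # scaled Vandermonde matrix
    return np.linalg.solve(M, np.array(fs, dtype=float))

# Toy check with hypothetical coefficients c = (2, 5) and weights w = (1, 3).
ws, cs = [1.0, 3.0], [2.0, 5.0]
fs = [sum(c * w ** s for c, w in zip(cs, ws)) for s in (1, 2)]
print(recover_coefficients(ws, fs))   # approximately [2. 5.]
```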
The next two results concern the set of distinct weights of H-colourings of a given
graph. In order to describe this result, we need some notation. Let UA (G) be the set of
all distinct weights of H-colourings of G; that is,
UA (G) = {wA (X) | X ∈ ΩH (G)} .
Suppose that G has m edges and H has k vertices. Let the nonzero entries of A be µ1, . . . , µs. Then UA(G) ⊆ Wm(A), where
\[
W_m(A) = \{\mu_1^{c_1} \cdots \mu_s^{c_s} \mid 0 \le c_i \le m \text{ for } 1 \le i \le s,\ c_1 + \cdots + c_s = m\}.
\]
It is not difficult to show that
\[
|U_A(G)| \le |W_m(A)| \le (m+1)^{k^2}. \tag{6}
\]
Let NA (G, w) be the number of H-colourings of G with weight equal to w, for any real
number w. That is,
NA (G, w) = | {X ∈ ΩH (G) | wA (X) = w} |.
Clearly NA(G, w) = 0 if w ∉ UA(G). For any real number w, denote by EVAL(A, w) the
counting problem which takes as instance a graph G and returns the output NA (G, w).
Lemma 3.3 Let A be a weight matrix for the graph H. There is a polynomial-time
reduction from the problem EVAL(A, w) to the problem EVAL(A), for every real number
w.
Proof. The set Wm(A), defined above, can be constructed explicitly in polynomial time and contains only nonzero entries. Suppose that Wm(A) = {w1, . . . , wt}, so that there are t distinct elements in Wm(A). If w ∉ Wm(A) then NA(G, w) = 0. Assume now that w = wj ∈ Wm(A). For 1 ≤ p ≤ t form the graph TpG, the p-thickening of G. By (6), the integer t is polynomially bounded. For each value of p, the graph TpG can be formed in polynomial time. Therefore we can form all t graphs in polynomial time. Using (4), we see that
\[
Z_A(T_p G) = \sum_{X: V \to C} \prod_{\{v,w\}\in E} A_{X(v)X(w)}^{\,p} = \sum_{w \in W_m(A)} w^p\, N_A(G, w).
\]
The values in Wm (A) are known, distinct and nonzero. Using Lemma 3.2, we can
calculate the coefficients NA (G, w) for all w ∈ Wm (A), in polynomial time. In particular, we know NA (G, wj ), the quantity of interest. This completes the polynomial-time
reduction.
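The shape of this reduction can be illustrated directly: thicken G, query the partition function, and interpolate. The sketch below (ours) reuses the hypothetical helpers partition_function, thicken and recover_coefficients from the earlier sketches, with brute force standing in for the EVAL(A) oracle; the candidate set is built exactly as Wm(A) above.

```python
import itertools
import numpy as np
# Reuses the hypothetical helpers partition_function, thicken and recover_coefficients defined above;
# the brute-force partition_function stands in for the EVAL(A) oracle of the reduction.

def weight_spectrum(A, vertices, edges):
    """Return the candidate weights and the counts N_A(G, w) (zero for unrealised candidates)."""
    m = len(edges)
    mus = sorted(set(A[A != 0]))                       # distinct nonzero entries of A
    # Candidate weights W_m(A): products of m nonzero entries of A.
    ws = sorted(set(np.prod(c) for c in itertools.combinations_with_replacement(mus, m)))
    fs = [partition_function(A, [1.0] * len(A), vertices, thicken(edges, p))
          for p in range(1, len(ws) + 1)]              # Z_A(T_p G) for p = 1..t
    return ws, np.rint(recover_coefficients(ws, fs))   # interpolate as in Lemma 3.2

A = np.array([[1, 2], [2, 3]])                         # a hypothetical weight matrix
ws, counts = weight_spectrum(A, [0, 1], [(0, 1)])
print(dict(zip(ws, counts)))                           # weight 1 once, weight 2 twice, weight 3 once
```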
Corollary 3.1 Let H be a graph and let A be a weight matrix for H. There is a
polynomial-time reduction from #H to EVAL(A).
Proof. Using the reduction of Lemma 3.3, we obtain the value of NA (G, w) for all
w ∈ Wm (A). Summing these values, we obtain |ΩH (G)|.
Corollary 3.2 Suppose that the distinct entries of A are pairwise coprime positive integers, (where 1 is considered to be coprime to all integers). Let S be some subset of these
entries. Let B be the matrix obtained by replacing all entries of A by 1 if they belong to
S, and replacing them by 0 otherwise. Then there is a polynomial-time reduction from
the problem EVAL(B) to the problem EVAL(A).
Proof. Write {µ1, . . . , µr} for the set of distinct entries of A. Let G be a given graph with m edges. The set Wm(A) can be written as
\[
W_m(A) = \Bigl\{\mu_1^{\alpha_1} \cdots \mu_r^{\alpha_r} \Bigm| 0 \le \alpha_i \le m \text{ for } 1 \le i \le r,\ \sum_{i=1}^{r} \alpha_i = m\Bigr\}.
\]
Since the µi are coprime, these weights are distinct. That is, if the equation
\[
\mu_1^{\alpha_1} \cdots \mu_r^{\alpha_r} = \mu_1^{\beta_1} \cdots \mu_r^{\beta_r}
\]
holds for two elements of Wm(A), then (α1, . . . , αr) = (β1, . . . , βr). Define the subset Vm(S, A) of Wm(A) by
\[
V_m(S, A) = \{\mu_1^{\alpha_1} \cdots \mu_r^{\alpha_r} \in W_m(A) \mid \alpha_i = 0 \text{ whenever } \mu_i \notin S\}.
\]
That is, Vm(S, A) is the set of all candidate weights which are coprime to all entries of A \ S. It is not difficult to see that
\[
Z_B(G) = \sum_{w \in V_m(S,A)} N_A(G, w).
\]
The set Vm(S, A) can be computed in polynomial time. This completes the polynomial-time reduction.
Corollary 3.3 Let A be a symmetric matrix with every entry a nonnegative integer,
and let B be the matrix obtained by replacing all the entries of A which do not equal 1
by zero. Then there is a polynomial-time reduction from EVAL(B) to EVAL(A).
Proof. Using the reduction of Lemma 3.3, we can obtain the value of NA(G, 1). But wA(X) ≠ 1 unless all edges in G are given weight 1 by the H-colouring X. This is only possible if none of the edge-weights greater than 1 are used. Hence NA(G, 1) = ZB(G), as required.
Let G = (V, E) be a graph and let A be a weight matrix. For F ⊆ E, define
\[
w_A^{(F)}(X) = \prod_{\{v,w\}\in F} A_{X(v)X(w)}.
\]
Thus wA(X) = wA^(E)(X). The following lemma is very useful, and is proved using interpolation involving the eigenvalues of the matrix A. To the best of our knowledge, this is the first result involving interpolation on eigenvalues.
Lemma 3.4 Let A be a nonsingular symmetric matrix, and let G be a given graph. Let F ⊆ E be a subset of edges of G and let m = |F|. Suppose that we know the values of
\[
f_r(G) = \sum_{X: V \to C} c_A(X)\, w_{\sigma_r A}^{(F)}(X)
\]
for 1 ≤ r ≤ (m + 1)^k, where cA is any function which depends on A but not on r. Then we can evaluate
\[
\sum_{X: V \to C} c_A(X)\, w_I^{(F)}(X)
\]
in polynomial time, where I is the k × k identity matrix.
Proof. Let the eigenvalues of A be α1, . . . , αk. These eigenvalues can be found computationally in polynomial time (indeed, in constant time, since k is a constant). Let L be the diagonal matrix such that Lii = αi for 1 ≤ i ≤ k. Since the matrix A is symmetric, there exists an orthogonal matrix Q such that QLQ^T = A. Now
\[
f_r(G) = \sum_{X: V \to C} c_A(X) \prod_{\{v,w\}\in F} (A^r)_{X(v)X(w)}
       = \sum_{X: V \to C} c_A(X) \prod_{\{v,w\}\in F} (Q L^r Q^T)_{X(v)X(w)}
       = \sum_{X: V \to C} c_A(X) \prod_{\{v,w\}\in F} \sum_{\ell \in C} Q_{X(v)\ell}\, Q_{X(w)\ell}\, \alpha_\ell^{\,r}. \tag{7}
\]
Let S be defined by
\[
S = \{\alpha_1^{c_1} \cdots \alpha_k^{c_k} \mid 0 \le c_i \le m \text{ for } 1 \le i \le k,\ c_1 + \cdots + c_k = m\}.
\]
Since we know the eigenvalues of A explicitly, the set S can be constructed explicitly in polynomial time. We can write
\[
f_r(G) = \sum_{s \in S} a_s\, s^r \tag{8}
\]
for 1 ≤ r ≤ (m + 1)^k, where the as are some (unknown) coefficients. The elements of S are known, distinct and nonzero. Therefore, we can obtain the coefficients as for s ∈ S in polynomial time, using Lemma 3.2. Thus we can calculate Y = Σ_{s∈S} as. But Y is obtained by setting r = 0 in (8). Hence Y is also equal to the value obtained by setting r = 0 in (7), since both (7) and (8) are equations for fr(G). Therefore
\[
Y = \sum_{X: V \to C} c_A(X) \prod_{\{v,w\}\in F} \sum_{\ell \in C} Q_{X(v)\ell}\, Q_{X(w)\ell}
  = \sum_{X: V \to C} c_A(X) \prod_{\{v,w\}\in F} (Q Q^T)_{X(v)X(w)}
  = \sum_{X: V \to C} c_A(X)\, w_I^{(F)}(X).
\]
This completes the proof.
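A small numerical illustration of Lemma 3.4, with cA(X) = 1 and F = E, is given below (ours; it reuses the hypothetical helpers partition_function and recover_coefficients from the earlier sketches). Setting r = 0 replaces A^r by the identity, so the interpolated value should equal ZI(G), which is k raised to the number of connected components of G.

```python
import itertools
import numpy as np
# Reuses the hypothetical helpers partition_function and recover_coefficients from earlier sketches.

A = np.array([[2.0, 1.0], [1.0, 2.0]])       # nonsingular symmetric, eigenvalues 1 and 3
V, E = [0, 1], [(0, 1)]                      # G is a single edge, so m = |F| = 1 with F = E
k, m = len(A), len(E)

# The candidate values S: products of m eigenvalues of A (exponents summing to m).
alphas = np.linalg.eigvalsh(A)
S = sorted(set(round(float(np.prod(c)), 9) for c in itertools.combinations_with_replacement(alphas, m)))

# f_r(G) = Z_{sigma_r A}(G) for r = 1..|S|; interpolate as in (8) and sum the coefficients.
fs = [partition_function(np.linalg.matrix_power(A, r), [1.0] * k, V, E) for r in range(1, len(S) + 1)]
Y = recover_coefficients(S, fs).sum()

# Setting r = 0 turns A^r into the identity, so Y should equal Z_I(G) = k^(number of components).
print(round(Y), partition_function(np.eye(k), [1.0] * k, V, E))   # both equal 2
```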
3.1 Singular weight matrices
In order to apply Lemma 3.4 we need a nonsingular symmetric matrix A. We now give
a series of lemmas which show how to proceed when the adjacency matrix A is singular.
Lemma 3.5 Let A be a symmetric 0-1 matrix which has a pair of linearly dependent columns. Then there exists a symmetric 0-1 matrix A′ with no two linearly dependent columns, and a positive diagonal matrix D, such that the problems EVAL(A′, D) and EVAL(A) are equivalent. Moreover, the matrices A′ and D can be constructed from A in constant time.
Proof. Suppose that A is the adjacency matrix of the graph H. Let H′ be the subgraph of H which is formed from H as follows. First, delete all isolated vertices. Next, suppose that {v1, . . . , vℓ} are vertices of H which have the same set of neighbours as each other. Delete v2, . . . , vℓ from H. Continue until no two vertices have the same set of neighbours, and call the resulting graph H′. Let A′ be the adjacency matrix of H′. Then A′ has no zero columns or repeated columns. Finally, let D be the diagonal matrix with the same number of rows and columns as A′, such that Dii equals the number of vertices in H which have the same neighbours as the ith vertex of H′ (considered as a vertex in H). It is not difficult to see that EVAL(A) and EVAL(A′, D) are equivalent, and that A′ and D can be formed from A in constant time.
The following will be helpful when performing 2-stretches.
Lemma 3.6 Let A be a symmetric matrix, and D be an invertible diagonal matrix. If
ADA has two linearly dependent columns then A has two linearly dependent columns.
Proof. Let Q = ΠA where Π = D^{1/2}. Then ADA = Q^T Q. Let ai, qi denote the ith column of A, Q respectively. Suppose that A has no two linearly dependent columns. It follows that Q has no two linearly dependent columns. Therefore, by the Cauchy–Schwarz inequality,
\[
q_i^T q_j < \sqrt{(q_i^T q_i)(q_j^T q_j)}
\]
whenever i ≠ j. Fix i, j such that 1 ≤ i < j ≤ k. The ith and jth columns of ADA contain a nonsingular 2 × 2 submatrix
\[
\begin{pmatrix} q_i^T q_i & q_i^T q_j \\ q_i^T q_j & q_j^T q_j \end{pmatrix}.
\]
Therefore these columns are not linearly dependent.
We now establish an upper bound on the off-diagonal entries of ADA, when A has no two linearly dependent columns.

Lemma 3.7 Suppose that A is a symmetric 0-1 matrix with no two linearly dependent columns. Let D be a diagonal matrix such that every diagonal entry Dii = λi is positive. Define λmin = min {λi | i ∈ C}, and let
\[
\gamma = \exp\Bigl(-\frac{\lambda_{\min}}{2\,\mathrm{Tr}(D)}\Bigr),
\]
where Tr(D) = Σ_{i∈C} λi. Then
\[
(ADA)_{ij} \le \gamma \sqrt{(ADA)_{ii}\,(ADA)_{jj}}
\]
for all i ≠ j.
Proof. As in the proof of Lemma 3.6, let Π = √D and let Q = ΠA. Then ADA = Q^T Q. Let qi denote the ith column of Q. Define the sets Ni by Ni = {ℓ ∈ C | Aℓi ≠ 0} for all i ∈ C. Then
\[
q_i^T q_j = \sum_{\ell \in N_i \cap N_j} \lambda_\ell
\]
for all i, j ∈ C. Now let i and j be fixed, distinct elements of C. Since A is a 0-1 matrix with no repeated rows, the sets Ni and Nj must be different. Hence, without loss of generality, Ni ∩ Nj is a strict subset of Ni. Therefore
\[
q_i^T q_j \le \sum_{\ell \in N_i} \lambda_\ell - \lambda_{\min} \le q_i^T q_i - \lambda_{\min}.
\]
It follows that
\[
(q_i^T q_j)^2 \le (q_i^T q_i - \lambda_{\min})\, q_i^T q_j
\]
and so
\[
\frac{(q_i^T q_j)^2}{(q_i^T q_i)(q_j^T q_j)} \le 1 - \frac{\lambda_{\min}}{q_i^T q_i} \le 1 - \frac{\lambda_{\min}}{\mathrm{Tr}(D)} \le e^{-\lambda_{\min}/\mathrm{Tr}(D)},
\]
as required.
We can now prove the main result of this section, showing that the p-thickening of
ADA is nonsingular when p is large enough, whenever A has no two linearly dependent
columns.
Theorem 3.1 Let A be a singular weight matrix with no two linearly dependent columns. Let D be a diagonal matrix of positive vertex weights. Let λmin = min {λi | i ∈ C}. The matrix B = τp(ADA) is nonsingular, where
\[
p \ge \lceil 2\,\mathrm{Tr}(D)\log(2k)/\lambda_{\min} \rceil + 1. \tag{9}
\]
Proof. Let A′ = ADA. By Lemma 3.6, ADA has no two linearly dependent columns. We show that the p-thickening of A′ is nonsingular for sufficiently large values of p. Specifically, we prove that the value of p quoted above is large enough.
Consider the determinant of A′. Each term of det(A′) has the form
\[
\pm \prod_{i=1}^{k} A'_{i\theta(i)},
\]
where θ is a permutation on {1, . . . , k}. Let γ be as defined in Lemma 3.7 and let t(θ) = |{i | θ(i) ≠ i}|. Then, using Lemma 3.7,
\[
\prod_{i=1}^{k} A'_{i\theta(i)} \le \gamma^{t(\theta)} \prod_{i=1}^{k} \sqrt{A'_{ii}\, A'_{\theta(i)\theta(i)}} = \gamma^{t(\theta)} \prod_{i=1}^{k} A'_{ii},
\]
with equality holding if and only if t(θ) = 0.
Suppose that p ≥ 1, and consider the p-thickening of A′. Each term of det(τpA′) has the form
\[
\pm \prod_{i=1}^{k} (A'_{i\theta(i)})^{p}
\]
for some permutation θ. Now
\[
|\{\theta \in \mathrm{Sym}(k) \mid t(\theta) = t\}| \le \binom{k}{t}\, t! \le k^{t}
\]
for 0 ≤ t ≤ k. By separating out the identity permutation, and subtracting all other terms, we find that
\[
\det(\tau_p A') \ge \Bigl(\prod_{i=1}^{k} A'_{ii}\Bigr)^{p} - \Bigl(\prod_{i=1}^{k} A'_{ii}\Bigr)^{p} \sum_{t=1}^{k} k^{t} \gamma^{pt}
> \Bigl(\prod_{i=1}^{k} A'_{ii}\Bigr)^{p} \Bigl(1 - \frac{k\gamma^p}{1 - k\gamma^p}\Bigr)
= \Bigl(\prod_{i=1}^{k} A'_{ii}\Bigr)^{p}\, \frac{1 - 2k\gamma^p}{1 - k\gamma^p}.
\]
This quantity is positive whenever 2kγ^p < 1. Rearranging, we find that this holds whenever p > ⌈2Tr(D) log(2k)/λmin⌉. Therefore τpA′ is a nonsingular matrix for p ≥ ⌈2Tr(D) log(2k)/λmin⌉ + 1. This completes the proof.
Note that, in the vertex-unweighted case, it suffices to take p = 2k⌈log(2k)⌉ + 1.
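The bound (9) is easy to check numerically. The sketch below (ours) uses a hypothetical example: A is the adjacency matrix of the path on five vertices, which is singular but has no two linearly dependent columns, and D is the identity.

```python
import math
import numpy as np
# A is the adjacency matrix of the path on five vertices: singular, but no two columns are
# linearly dependent. D is the identity, so all vertex weights are 1 (a hypothetical example).

A = np.array([[0, 1, 0, 0, 0],
              [1, 0, 1, 0, 0],
              [0, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)
D = np.eye(5)
k, lam = len(A), np.diag(D)

p = math.ceil(2 * lam.sum() * math.log(2 * k) / lam.min()) + 1   # the bound (9)
B = (A @ D @ A) ** p                                             # tau_p(ADA): entrywise p-th power

print("det(A) =", round(np.linalg.det(A), 6))                    # 0.0: A is singular
print("p =", p, " det(tau_p(ADA)) nonzero:", abs(np.linalg.det(B)) > 0)   # True
```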
3.2 Ignoring vertex weights
The results of the previous subsection show how to obtain a nonsingular weight matrix
from a singular one, at the cost of introducing vertex weights. In this section we show
that the vertex weights can be ignored in any proof of #P-hardness. Specifically, we
give a polynomial-time reduction from EVAL(A) to EVAL(A, D), for any weight matrix
A and any matrix D of positive vertex weights.
Recall the definition of ΩH (i,j) (Pr ), as given in (1). We can analogously define
ΩH (i,j) (S2 Tp Pr ) = {X ∈ ΩH (S2 Tp Pr ) | X(u0 ) = i, X(ur ) = j} .
Lemma 3.8 Let p ≥ 1 be some constant value. Let A be a symmetric matrix and let D be a diagonal matrix of positive vertex weights {λi | i ∈ C}. Let Π be the diagonal matrix with (i, i) entry equal to √λi for all i ∈ C. Then
\[
\sum_{X \in \Omega_H^{(i,j)}(S_2 T_p P_r)} w_{A,D}(X) = \sqrt{\lambda_i \lambda_j}\, (B^r)_{ij}
\]
where B = Πτp(ADA)Π.
Proof. We prove the result by induction on r, using the fact that
\[
Z_{A,D}(S_2 T_p e) = Z_{\tau_p(ADA),\,D}(e)
\]
for any edge e. Hence the result holds for r = 1. Suppose that the result holds for some r such that r ≥ 1. Writing A′ = τp(ADA), so that B = ΠA′Π, we have
\[
\sum_{X \in \Omega_H^{(i,j)}(S_2 T_p P_{r+1})} w_{A,D}(X)
= \sum_{\ell \in C}\ \sum_{Y \in \Omega_H^{(i,\ell)}(S_2 T_p P_r)} w_{A,D}(Y)\, A'_{\ell j}\, \lambda_j
= \sum_{\ell \in C} \sqrt{\lambda_i \lambda_\ell}\, (B^r)_{i\ell}\, A'_{\ell j}\, \lambda_j
= \sqrt{\lambda_i \lambda_j} \sum_{\ell \in C} (B^r)_{i\ell}\, B_{\ell j}
= \sqrt{\lambda_i \lambda_j}\, (B^{r+1})_{ij},
\]
as required.
Using this device, we may prove that the vertex-unweighted problem is at least as
easy as any vertex-weighted version, for any weight matrix A.
Theorem 3.2 Let A be a symmetric matrix with no two linearly dependent columns.
There is a polynomial-time reduction from EVAL(A) to EVAL(A, D), for any diagonal
matrix D of positive vertex weights.
Proof. Let G be a given instance of EVAL(A), and let D be any diagonal matrix of vertex weights. Define Π to be the diagonal matrix with Πii = √λi for i ∈ C, as in Lemma 3.8. From G, form the (multi)graph Γ with edge bipartition EΓ = E′ ∪ E″, as follows. Take each vertex v ∈ V in turn. Let d(v) denote the degree of v in G. If d(v) ≥ 3 then replace v by three vertices v1, v2, v3. Let {{vi, vj} | 1 ≤ i < j ≤ 3} ⊆ E″, and join each neighbour of v in G to exactly one of v1, v2 or v3, using an edge in E′. This can be done so that dΓ(vi) ≤ d(v), where dΓ(vi) denotes the degree of vi in Γ. If v is a vertex of degree 2, replace v by two vertices v1, v2. Each neighbour of v in G is joined to exactly one of v1, v2 by an edge in E′, and the edge {v1, v2} is placed in E″ with multiplicity two (a double edge). Here ensure that dΓ(vi) = 3 for i = 1, 2. Finally, if v is a vertex of degree 1 then add the loop {v, v} into E″. Consider the degree of v to be 3 in Γ. Note that each vertex in VΓ is the endpoint of exactly two edges in E″ (considering each "end" of the loop added to vertices with degree 1 to be a distinct edge).
Let n be the number of vertices in Γ. Fix p to be the value on the right hand side of (9). Let S2TpSr″Γ denote the graph obtained from Γ by replacing every edge in E″ by S2TpPr. That is, S2TpSr″Γ = S2TpSr^(E″)Γ. Form S2TpSr″Γ for 1 ≤ r ≤ (n + 1)^k. This can be achieved in polynomial time. Let B = Πτp(ADA)Π. For ease of notation, let wA′(X) = wA^(E′)(X) and let wA″(X) = wA^(E″)(X) for all X : V → C. It is not difficult to see that
\[
Z_{A,D}(S_2 T_p S_r'' \Gamma)
= \sum_{X: V_\Gamma \to C} \tilde w_D(X)\, w_A'(X) \times \prod_{\{v,w\}\in E''}\ \sum_{Y \in \Omega_H^{(X(v),X(w))}(S_2 T_p P_r)} \frac{w_{A,D}(Y)}{\lambda_{X(v)}\, \lambda_{X(w)}}
= \sum_{X: V_\Gamma \to C} \tilde w_D(X)\, w_A'(X) \prod_{\{v,w\}\in E''} \frac{(B^r)_{X(v)X(w)}}{\sqrt{\lambda_{X(v)}\, \lambda_{X(w)}}}
= \sum_{X: V_\Gamma \to C} w_A'(X)\, w_{\sigma_r B}''(X). \tag{10}
\]
Here the second equality follows by Lemma 3.8, and the third equality follows as every vertex in Γ is the endpoint of exactly two edges in E″.
The right hand side of (10) is of the form specified in Lemma 3.4, with F = E″. Moreover, it is independent of the vertex weights. The matrix τp(ADA) is nonsingular, by Theorem 3.1. Therefore the matrix B is nonsingular, so Lemma 3.4 applies. If we knew the value of ZA,D(S2TpSr″Γ) for 1 ≤ r ≤ (n + 1)^k, we could calculate the value of Σ_{X: VΓ→C} wA′(X) wI″(X) in polynomial time, by Lemma 3.4. However, this quantity is equal to ZA(G), by inspection. This completes the polynomial-time reduction.
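To make the construction of Γ concrete, the following sketch (ours, with our own naming conventions) performs just the vertex-splitting step of the proof: every vertex of Γ ends up meeting exactly two E″-ends. It does not perform the subsequent S2TpSr″ substitution or the interpolation, and it assumes the input graph is simple with no isolated vertices.

```python
# Our own encoding: vertices of Gamma are pairs (v, copy_index); E1 plays the role of E' and E2 of E''.

def build_gamma(vertices, edges):
    """Split the vertices of G as in the proof of Theorem 3.2 (simple G, no isolated vertices assumed)."""
    V_gamma, E1, E2 = [], [], []
    incident = {v: [e for e in edges if v in e] for v in vertices}
    copies = {}
    for v in vertices:
        d = len(incident[v])
        if d >= 3:
            cs = [(v, 1), (v, 2), (v, 3)]
            E2 += [((v, 1), (v, 2)), ((v, 1), (v, 3)), ((v, 2), (v, 3))]   # a triangle of E''-edges
        elif d == 2:
            cs = [(v, 1), (v, 2)]
            E2 += [((v, 1), (v, 2))] * 2                                   # a double E''-edge
        else:
            cs = [(v, 1)]
            E2 += [((v, 1), (v, 1))]                                       # a loop in E''
        copies[v] = cs
        V_gamma += cs
    for (u, w) in edges:                     # each original edge becomes an E'-edge between one copy of each end
        cu = copies[u].pop() if len(copies[u]) > 1 else copies[u][0]
        cw = copies[w].pop() if len(copies[w]) > 1 else copies[w][0]
        E1.append((cu, cw))
    return V_gamma, E1, E2

V_g, E1, E2 = build_gamma([0, 1, 2, 3], [(0, 1), (0, 2), (0, 3), (1, 2)])
print(len(V_g), len(E1), len(E2))            # 8 4 8: every Gamma-vertex meets exactly two E''-ends
```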
The following result is a corollary of Lemma 3.5 and Theorem 3.2. It shows that we may always "collapse" vertices in H with the same set of neighbours into a single vertex. If the resulting graph gives rise to a #P-complete counting problem, then #H is #P-complete. This result will be used repeatedly in the proof of Theorem 1.1.

Corollary 3.4 Let H be a graph, and let H′ be obtained from H by replacing all vertices with the same neighbourhood structure by a single vertex (with that neighbourhood structure). If #H′ is #P-complete then #H is also #P-complete.

Proof. The graph H′ described above is the one used in Lemma 3.5. Let A, A′ be the adjacency matrices of H and H′, respectively. Then EVAL(A′, D) and EVAL(A) are equivalent for some diagonal matrix D of positive vertex weights, as in Lemma 3.5. Moreover, there is a polynomial-time reduction from EVAL(A′) to EVAL(A′, D), by Theorem 3.2. This completes the proof.
4 The main proof
In this section, we prove Theorem 1.1. First, we describe those graphs H for which the
associated counting problem can be solved in polynomial time. Recall that an isolated
vertex with no loops is considered to be a complete bipartite graph.
Lemma 4.1 Suppose that H is a complete graph with all loops present, or a complete
bipartite graph with no loops. Then the counting problem #H can be solved in polynomial
time.
Proof. If H is an isolated vertex without a loop then ΩH(G) is empty unless G is a collection of isolated vertices, in which case |ΩH(G)| = 1. Suppose that H is the complete graph on k vertices with every loop present, where k ≥ 1. If G has n vertices then |ΩH(G)| = k^n. Finally, suppose that H is the complete bipartite graph with vertex bipartition C1 ∪ C2, and with no loops present. If G is not bipartite then ΩH(G) is empty. Otherwise, assume that G is bipartite with vertex bipartition V1 ∪ V2. Suppose that |Ci| = ki for i = 1, 2 and |Vi| = ni for i = 1, 2. Then it is not difficult to see that, for connected G,
\[
|\Omega_H(G)| = k_1^{n_1} k_2^{n_2} + k_1^{n_2} k_2^{n_1},
\]
and for disconnected G the count is the product of the corresponding quantities over the connected components. This completes the proof.
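The two tractable cases are easily implemented. The sketch below (ours, with our own helper names) handles the complete unlooped bipartite case by 2-colouring each connected component of G; the complete-graph-with-all-loops case is just k^n and is noted in a comment.

```python
from collections import deque
# Our own helper: G is given as (vertices, edges); H is the complete unlooped bipartite graph K_{k1,k2}.

def count_complete_bipartite_colourings(k1, k2, vertices, edges):
    """|Omega_H(G)| when H = K_{k1,k2} with no loops (Lemma 4.1)."""
    adj = {v: [] for v in vertices}
    for u, w in edges:
        adj[u].append(w)
        adj[w].append(u)
    total, side = 1, {}
    for s in vertices:                       # handle each connected component of G separately
        if s in side:
            continue
        side[s] = 0
        n = [1, 0]                           # sizes of the two sides of this component
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for w in adj[u]:
                if w not in side:
                    side[w] = 1 - side[u]
                    n[side[w]] += 1
                    queue.append(w)
                elif side[w] == side[u]:
                    return 0                 # G is not bipartite, so there is no H-colouring
        total *= k1 ** n[0] * k2 ** n[1] + k1 ** n[1] * k2 ** n[0]
    return total

# If H is the complete graph on k vertices with all loops, the count is simply k ** len(vertices).
print(count_complete_bipartite_colourings(1, 1, [0, 1, 2, 3], [(0, 1), (1, 2), (2, 3), (3, 0)]))  # 2
```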
The next result shows that the counting problem #H is #P-complete whenever the counting problem associated with at least one of the connected components of H is #P-complete.
This is a critical result for our proof of Theorem 1.1.
Theorem 4.1 Suppose that H is a graph with connected components H1 , . . . , HT . If
#H` is #P-complete for some ` such that 1 ≤ ` ≤ T , then #H is #P-complete.
Proof. Let A, Aℓ be the adjacency matrices of H, Hℓ respectively, for 1 ≤ ℓ ≤ T. Fix a positive integer r such that Aℓ^r has only positive entries, for 1 ≤ ℓ ≤ T. Note that r can be found in constant time. We show how to perform polynomial-time reductions from EVAL(Aℓ) to EVAL(A), for 1 ≤ ℓ ≤ T. This is sufficient, since at least one of the problems EVAL(Aℓ) is #P-hard, by assumption, and EVAL(A) is clearly in #P.
Let G be a given graph. We wish to calculate the values of ZAℓ(G) in polynomial time, for 1 ≤ ℓ ≤ T. For 1 ≤ s ≤ T, form the graph Gs with edge bipartition E′ ∪ E″ from G as follows. Let v ∈ V be an arbitrary vertex of G. Take s copies of G, placing all these edges in E′. Let {vi, vj} ∈ E″ for 1 ≤ i < j ≤ s, where vi is the copy of v in the ith copy of G. These graphs can be formed from G in polynomial time. Now for 1 ≤ s ≤ T and 1 ≤ p ≤ s^{2k^2}, form the graph (SrTp)″Gs by taking the p-thickening of each edge in E″, and then forming the r-stretch of each of these ps(s − 1)/2 edges. That is, between vi and vj we have p copies of the path Pr, for 1 ≤ i < j ≤ s. These graphs can be formed from Gs in polynomial time. Let ZA(G, c) be defined by
\[
Z_A(G, c) = |\{X \in \Omega_H(G) \mid X(v) = c\}|.
\]
Then
\[
Z_A((S_r T_p)'' G_s)
= \sum_{\ell=1}^{T}\ \sum_{X: \{v_1,\ldots,v_s\} \to C_\ell}\ \prod_{1 \le i \le s} Z_{A_\ell}(G, X(v_i)) \prod_{1 \le i < j \le s} \bigl[(A^r)_{X(v_i)X(v_j)}\bigr]^{p} \tag{11}
\]
\[
= \sum_{w \in W^{(s)}(A)} c_w\, w^{p}, \tag{12}
\]
where W^(s)(A) is defined by
\[
W^{(s)}(A) = \Bigl\{\,\prod_{1 \le i < j \le s} (A^r)_{X(v_i)X(v_j)} \Bigm| X: \{v_1,\ldots,v_s\} \to C_\ell \text{ for some } \ell,\ 1 \le \ell \le T\,\Bigr\} \setminus \{0\}.
\]
The set W^(s)(A) can be formed explicitly in polynomial time. Arguing as in (6), the set W^(s)(A) has at most s^{2k^2} distinct elements, all of which are positive. Suppose that we knew the values of ZA((SrTp)″Gs) for 1 ≤ p ≤ |W^(s)(A)|. Then, by Lemma 3.2, the values cw for w ∈ W^(s)(A) can be found in polynomial time. Adding them, we obtain fs = fs(G) = Σ_{w∈W^(s)(A)} cw. This value is also obtained by putting p = 0 in (12). Equating this to the value obtained by putting p = 0 in (11), we see that
\[
f_s = \sum_{\ell=1}^{T}\ \sum_{X: \{v_1,\ldots,v_s\} \to C_\ell}\ \prod_{1 \le i \le s} Z_{A_\ell}(G, X(v_i)) = \sum_{\ell=1}^{T} Z_{A_\ell}(G)^s.
\]
For ease of notation, let xℓ = ZAℓ(G) for 1 ≤ ℓ ≤ T. We know the values of fs = Σ_{ℓ=1}^{T} xℓ^s for 1 ≤ s ≤ T. Let ψs be the sth elementary symmetric polynomial in the variables x1, . . . , xT, defined by
\[
\psi_s = \sum_{1 \le i_1 < \cdots < i_s \le T} x_{i_1} \cdots x_{i_s}
\]
for 1 ≤ s ≤ T. Now
\[
f_s - \psi_1 f_{s-1} + \cdots + (-1)^{s-1} f_1 \psi_{s-1} + (-1)^{s}\, s\, \psi_s = 0
\]
for 1 ≤ s ≤ T (this is Newton's Theorem; see for example [8, p. 12]). Using these equations, we can evaluate ψs for 1 ≤ s ≤ T in polynomial time. But x1, . . . , xT are the roots of the polynomial
\[
g(z) = z^{T} - \psi_1 z^{T-1} + \cdots + (-1)^{T-1} \psi_{T-1}\, z + (-1)^{T} \psi_T.
\]
Since this is a polynomial with integral coefficients, the roots can be found in polynomial time using the algorithm of Lenstra, Lenstra and Lovász [17]. Thus we obtain the set of values {ZAℓ(G) | 1 ≤ ℓ ≤ T}.
Let N = |{ZAℓ(G) | 1 ≤ ℓ ≤ T}|. If N = 1 then all the values ZAℓ(G) are equal. Thus we know the value of ZAℓ(G) for 1 ≤ ℓ ≤ T, as required. Otherwise, search for a connected graph Γ, with the minimal number of vertices, such that |{ZAℓ(Γ) | 1 ≤ ℓ ≤ T}| = N. We know that Γ exists, since it is a minimal element of a nonempty set with a lower bound (the empty graph), using the partial order on graphs defined by the number of vertices and inclusion. Moreover, Γ depends only on H. Therefore we can find Γ by exhaustive search, in constant time. (This constant may very well be huge, but we are not seeking a practical algorithm.) We also know the values ZAℓ(Γ) for 1 ≤ ℓ ≤ T. Let ∼ be the equivalence relation on {1, . . . , T} such that ZAℓ(Γ) = ZAs(Γ) if and only if ℓ ∼ s. Let π be the partition of {1, . . . , T} consisting of the equivalence classes of ∼. Write π = I1 ∪ · · · ∪ IN and let µj = |Ij| for 1 ≤ j ≤ N. Finally, let µ = max {µj | 1 ≤ j ≤ N}. Assume without loss of generality that j ∈ Ij for 1 ≤ j ≤ N. That is, the first N values of ZAℓ(Γ) form a transversal of the N equivalence classes.
We perform a second reduction, which is an adaptation of the one just described. For 1 ≤ s ≤ µ and 1 ≤ t ≤ N, form the graph G(s,t) with edge bipartition E′ ∪ E″ as follows. Let w be an arbitrary vertex in Γ, and recall the distinguished vertex v in G. Take s copies of G and t copies of Γ, placing all these edges in E′. Let V∗ = {v1, . . . , vs} ∪ {w1, . . . , wt}, where vi is the copy of v in the ith copy of G and wj is the copy of w in the jth copy of Γ. Finally, let E″ be the set of all possible edges between the vertices in V∗. Form the graph (SrTp)″G(s,t) for 1 ≤ p ≤ (s + t)^{2k^2}, 1 ≤ s ≤ µ and 1 ≤ t ≤ N, by replacing each edge in E″ by p copies of the path Pr. Arguing as in the first reduction, the values of ZA((SrTp)″G(s,t)) for 1 ≤ p ≤ (s + t)^{2k^2} can be used to produce the values
\[
f_{(s,t)}(G) = \sum_{\ell=1}^{T} Z_{A_\ell}(G)^s\, Z_{A_\ell}(\Gamma)^t \tag{13}
\]
for 1 ≤ s ≤ µ, 1 ≤ t ≤ N, in polynomial time.
We can rewrite (13) as
\[
f_{(s,t)}(G) = \sum_{j=1}^{N} \Bigl(\,\sum_{\ell \in I_j} Z_{A_\ell}(G)^s\Bigr) Z_{A_j}(\Gamma)^t.
\]
For each fixed value of s, we know the value f(s,t)(G) for 1 ≤ t ≤ N. First suppose that ZAℓ(Γ) ≠ 0 for 1 ≤ ℓ ≤ T. Using Lemma 3.2, we obtain the coefficients c^(s)_j = Σ_{ℓ∈Ij} ZAℓ(G)^s, for 1 ≤ j ≤ N, in polynomial time. We can do this for 1 ≤ s ≤ µ. Now suppose without loss of generality that ZA1(Γ) = 0. Then Lemma 3.2 only guarantees that we can find c^(s)_j for 2 ≤ j ≤ N, in polynomial time. However, we know the set {ZAℓ(G) | 1 ≤ ℓ ≤ T}. Therefore we can form the value c^(s) = Σ_{ℓ=1}^{T} ZAℓ(G)^s in polynomial time, for 1 ≤ s ≤ µ. Then
\[
c^{(s)}_1 = \sum_{\ell \in I_1} Z_{A_\ell}(G)^s = c^{(s)} - \sum_{j=2}^{N} c^{(s)}_j.
\]
Thus in both cases we can find the values of
\[
c^{(s)}_j = \sum_{\ell \in I_j} Z_{A_\ell}(G)^s
\]
for 1 ≤ j ≤ N and 1 ≤ s ≤ µ, in polynomial time. Arguing as above, using Newton's Theorem, we can find the set of values {ZAℓ(G) | ℓ ∈ Ij} for 1 ≤ j ≤ N in polynomial time. If all these values are equal, then we know all the values ZAℓ(G) for ℓ ∈ Ij.
Otherwise, we perform the second reduction again, for the graph HIj = ∪_{ℓ∈Ij} Hℓ. We obtain a tree of polynomial-time reductions, where each internal node has at least two children, and there are at most T leaves. (A leaf is obtained when all the values ZAℓ(G) in the cell of the partition are equal, which will certainly happen when the cell is a singleton set.) There are at most T internal nodes in such a tree. That is, we must perform at most T + 1 reductions in all. This guarantees that we can obtain all the values ZAℓ(G) for 1 ≤ ℓ ≤ T in polynomial time, as required.
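The step that recovers the individual values ZAℓ(G) from the power sums fs can be illustrated as follows (our sketch, with hypothetical values). Newton's identities give the elementary symmetric polynomials, and the values are then read off as the roots of g(z); numpy's numerical root finder is used only as a stand-in for the exact LLL-based step of the proof.

```python
import numpy as np
# Recover the multiset {x_l} = {Z_{A_l}(G)} from the power sums f_s = sum_l x_l^s, s = 1..T.

def values_from_power_sums(f):
    """Given f[s-1] = sum_l x_l^s for s = 1..T, return the multiset {x_1, ..., x_T}."""
    T = len(f)
    psi = [1.0]                                    # psi_0 = 1; Newton: s*psi_s = sum_{i=1}^s (-1)^(i-1) psi_{s-i} f_i
    for s in range(1, T + 1):
        acc = sum((-1) ** (i - 1) * psi[s - i] * f[i - 1] for i in range(1, s + 1))
        psi.append(acc / s)
    # x_1..x_T are the roots of z^T - psi_1 z^{T-1} + ... + (-1)^T psi_T.
    coeffs = [(-1) ** s * psi[s] for s in range(T + 1)]
    return sorted(np.roots(coeffs).real)

xs = [2.0, 3.0, 3.0, 7.0]                          # hypothetical values Z_{A_l}(G)
f = [sum(x ** s for x in xs) for s in range(1, len(xs) + 1)]
print(values_from_power_sums(f))                   # approximately [2., 3., 3., 7.]
```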
Proof of Theorem 1.1.
The remainder of the section is devoted to the proof of Theorem 1.1. Let H be
a graph with adjacency matrix A, and let G be an arbitrary graph. We can assume
that G is connected, and by Theorem 4.1 we can also assume that H is connected. By
Lemma 4.1 we can assume that H is not a complete graph with all loops present, or
a complete bipartite graph with no loops present. The problem #H is clearly in #P,
so it remains to show that it is #P-hard. We do this by demonstrating a series of
polynomial-time reductions from some known #P-hard counting problem to #H.
Case 1. Suppose that H has a loop on every vertex. Then H is not complete. Let G be a given connected graph. Form the graph G′ from G by introducing a new vertex v0 and joining all vertices of G to v0. The graph G′ can be formed from G in polynomial time. Define Hi to be the subgraph of H induced by the set of neighbours of the vertex i in H. Note that i ∈ Hi since H has a loop on every vertex. This implies that Hi is connected for all i ∈ C. Let H′ be the graph with connected components given by the multiset {Hi | i ∈ C}. Then
\[
|\Omega_H(G')| = \sum_{i=1}^{k} |\Omega_{H_i}(G)| = |\Omega_{H'}(G)|.
\]
This gives a polynomial-time reduction from #H′ to #H. Therefore, by Theorem 4.1, it suffices to show that #Hi is #P-hard for some i ∈ C. We will iterate this reduction until a #P-hard problem is obtained.
For i ∈ C let Si = {j ∈ C | {i, j} ∈ EH, i ≠ j}. That is, Si is the set of neighbours of i in H distinct from i. Suppose that there exists a vertex i ∈ C which satisfies the following conditions:
(i) there exists j ∈ C such that {i, j} ∉ EH,
(ii) the subgraph of H induced by Si is not a clique.
Then Hi is a connected graph which (by (i)) is smaller than H. There is still a loop
on every vertex of Hi . By (ii), the graph Hi is not complete. Repeat this process with
H = Hi . In a finite number of steps we reach a graph Hi such that no vertex of Hi
satisfies the given conditions. There is at least one vertex (such as i) attached to all
other vertices. The subgraph of Hi induced by Sj is not a clique, for all such vertices j.
For all other vertices j, the subgraph of Hi induced by Sj is a clique.
Using Corollary 3.4, we can collapse all vertices of Hi with the same neighbourhood
structure down to a single vertex. Suppose that there are now q + 1 vertices in Hi . We
know that q ≥ 2, since the subgraph of Hi induced by Si is not a clique. The graph Hi
encodes the Widom–Rowlinson model of a gas with q particles. Let A be the adjacency
matrix of Hi , and let Iq be the q × q identity matrix. The matrix A is shown below,
together with its square:
\[
A = \begin{pmatrix}
1 & 1 & \cdots & 1 \\
1 & & & \\
\vdots & & I_q & \\
1 & & &
\end{pmatrix},
\qquad
A^2 = \begin{pmatrix}
q+1 & 2 & 2 & \cdots & 2 \\
2 & 2 & 1 & \cdots & 1 \\
2 & 1 & 2 & \cdots & 1 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
2 & 1 & 1 & \cdots & 2
\end{pmatrix}.
\]
There is a polynomial time reduction from EVAL(A2 ) to EVAL(A), using Lemma 3.1
with r = 2 and p = 1. Let B be obtained from A2 by replacing all entries which do not
equal 1 by 0. By Corollary 3.3, there is a polynomial-time reduction from EVAL(B)
to EVAL(A2 ). However, EVAL(B) is the problem of counting proper q-colourings of a
graph. This problem is #P-hard for q ≥ 3.
Finally suppose that q = 2. Then A² has distinct entries 3, 2 and 1, which are coprime. Apply Corollary 3.2 with S = {2}. This shows that there is a polynomial-time reduction from EVAL(B) to EVAL(A²), where B is the matrix shown below, together with its square:
\[
B = \begin{pmatrix} 0 & 1 & 1 \\ 1 & 1 & 0 \\ 1 & 0 & 1 \end{pmatrix},
\qquad
B^2 = \begin{pmatrix} 2 & 1 & 1 \\ 1 & 2 & 1 \\ 1 & 1 & 2 \end{pmatrix}.
\]
By Lemma 3.1, there is a polynomial time reduction from EVAL(B 2 ) to EVAL(B). Now
apply Corollary 3.3 to B 2 . This gives a polynomial-time reduction to EVAL(B 2 ), from
the #P-hard problem of counting proper 3-colourings. Hence EVAL(A2 ) is #P-hard,
completing the proof for Case 1.
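For a concrete picture of this reduction, the sketch below (ours, with a hypothetical q) builds the Widom–Rowlinson adjacency matrix, squares it, and extracts the 0-1 matrix B of Corollary 3.3; the particle vertices of B form an unlooped clique Kq, so EVAL(B) amounts to counting proper q-colourings.

```python
import numpy as np
# Hypothetical illustration of the Case 1 endgame for the q-particle Widom-Rowlinson graph.

def widom_rowlinson_matrix(q):
    """Adjacency matrix: one looped 'empty' vertex joined to q looped particle vertices."""
    A = np.eye(q + 1, dtype=int)             # every vertex carries a loop
    A[0, :] = 1                              # the empty-site vertex is joined to everything
    A[:, 0] = 1
    return A

q = 3
A = widom_rowlinson_matrix(q)
A2 = A @ A
B = (A2 == 1).astype(int)                    # keep only the entries equal to 1, as in Corollary 3.3
print(A2)
print(B)   # the q particle vertices form an unlooped clique K_q (plus an isolated vertex),
           # so EVAL(B) amounts to counting proper q-colourings
```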
Case 2. Now suppose that H has some looped vertices and some unlooped vertices. We
use the same reduction as Case 1. Recall the subgraph Hi of H, induced by the set of
all neighbours of i. Here i ∈ Hi if and only if there is a loop at i. Since H is connected
and contains both looped and unlooped vertices, there is an edge {i, j} ∈ EH such that
i is looped and j is unlooped. Note that the edge {i, j}, together with the loop at i,
describes the #P-hard problem of counting independent sets in graphs. Consider the
graph Hi . This is a connected graph which has some looped vertices and some unlooped
vertices. In addition, at least one looped vertex of Hi is joined to all the vertices of Hi .
Using Corollary 3.4, we can assume that i is the only such vertex.
Now suppose that j and ℓ are both unlooped neighbours of i which are joined by an edge. Then (Hi)ℓ is smaller than Hi, since it does not contain ℓ. It still contains independent sets as a subproblem. Therefore we can replace Hi by (Hi)ℓ. After a finite number of steps we can assume that there are no edges between unlooped vertices in Hi.
Next, suppose that there exists more than one looped vertex in Hi. By the above, if ℓ ≠ i and ℓ has a loop, then ℓ is not joined to all the vertices in Hi. If ℓ is a looped vertex which is joined to an unlooped vertex j, the graph (Hi)ℓ is smaller than Hi and still contains independent sets as a subproblem. Hence we can replace Hi by (Hi)ℓ. After a finite number of steps, we can assume that i is the only looped vertex which is joined to any unlooped vertices.
Suppose that there exists a looped vertex ℓ in Hi such that the subgraph of Hi generated by Sℓ is not a clique (where, recall, Sℓ is the set of all neighbours of ℓ in Hi other than ℓ). Then (Hi)ℓ has a loop on every vertex but is not complete. This problem is #P-hard, by Case 1. Otherwise, the subgraph of Hi generated by Sℓ is a clique, for all looped vertices in Hi other than i. Using Corollary 3.4, we can collapse all vertices with the same neighbourhood structure down to a single vertex. The resulting graph has one unlooped vertex and q looped vertices, where q ≥ 2. There is one looped vertex i which is joined to all the other vertices. (We can think of this as the graph for q-particle Widom–Rowlinson, with the loop removed from one low-degree vertex.) The adjacency matrix A of this graph is shown below, together with its square:
\[
A = \begin{pmatrix}
1 & 1 & 1 & \cdots & 1 & 1 \\
1 & 1 & 0 & \cdots & 0 & 0 \\
1 & 0 & 1 & \cdots & 0 & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\
1 & 0 & 0 & \cdots & 1 & 0 \\
1 & 0 & 0 & \cdots & 0 & 0
\end{pmatrix},
\qquad
A^2 = \begin{pmatrix}
q+1 & 2 & 2 & \cdots & 2 & 1 \\
2 & 2 & 1 & \cdots & 1 & 1 \\
2 & 1 & 2 & \cdots & 1 & 1 \\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\
2 & 1 & 1 & \cdots & 2 & 1 \\
1 & 1 & 1 & \cdots & 1 & 1
\end{pmatrix}.
\]
Let B be obtained from A2 by replacing all entries which do not equal 1 by 0. Using
Corollary 3.3, there is a polynomial-time reduction from EVAL(B) to EVAL(A2 ). But
B describes a graph which has exactly one looped vertex which is joined to all the other
vertices, of which there are at least two.
We have reached the point where H contains exactly one looped vertex, joined to all other vertices, of which there is at least one. Arguing as above, we can assume that there are no edges between the unlooped vertices. Then we can replace all the unlooped vertices by a single vertex, using Corollary 3.4. The resulting graph describes the #P-hard problem of counting independent sets in graphs. Thus #Hi is #P-hard, and so is the original problem #H. This completes the proof in Case 2.
Case 3. Now suppose that H is a bipartite graph with no loops. Let G be a given
connected graph. If G is not bipartite then ΩH (G) = ∅. Therefore we can assume that
G is bipartite, with vertex partition V1 ∪ V2 . We adapt the reduction used in Cases 1
and 2 to the bipartite case, by using two apices instead of one. Specifically, form the graph G′ from G by introducing two new vertices v1, v2, and joining all vertices in Vi to vi, for i = 1, 2. Also let {v1, v2} be an edge in G′. The graph G′ can be formed from G in polynomial time.
Let Ni denote the set of neighbours of i in H, for all i ∈ C. If {i, j} ∈ EH, let Hij be the subgraph of H induced by Ni ∪ Nj. Let H′ be the graph with connected components given by the multiset {Hij | {i, j} ∈ EH}. Then
\[
|\Omega_H(G')| = \sum_{\{i,j\}\in E_H} |\Omega_{H_{ij}}(G)| = |\Omega_{H'}(G)|.
\]
This gives a polynomial-time reduction from #H′ to #H. Therefore, by Theorem 4.1, it suffices to show that #Hij is #P-hard for some {i, j} ∈ EH.
The diameter of a connected graph is the maximum, over all vertices i, j, of the
length of the shortest path between i and j. Since H is bipartite and not complete
it has diameter at least 3. Suppose that the diameter of H is d ≥ 5. Let A be the
adjacency matrix of H, and consider the matrix A3 . Then A3 is the weight matrix of a
bipartite graph H̃ with diameter strictly between 3 and d − 2. There is a polynomial-time reduction from #H̃ to EVAL(A), by Corollary 3.1. After a finite number of steps,
we may assume that H has diameter 3 or 4.
Suppose that H has vertex bipartition C = C1 ∪ C2 where |C1 | = r and |C2 | = s.
Say that i is on the left if i ∈ C1 , otherwise say that i is on the right. Let x ∈ C1 , y ∈ C2
be such that {x, y} ∉ EH. Such a pair exists, since H is not a complete bipartite graph. Since H has diameter 3 or 4, there exist i ∈ C1 and j ∈ C2 such that
\[
x,\ j,\ i,\ y
\]
is a path in H between x and y. Consider the graph Hij. Note that i and j both belong
to Hij , and that Hij is a connected bipartite graph with no loops which is not complete.
Moreover, all vertices on the right in Hij are joined to i, and all vertices on the left in
Hij are joined to j. Using Corollary 3.4, we can assume that i and j are the only vertices
in Hij which satisfy these conditions. Let A be the adjacency matrix of Hij. Then
\[
A = \begin{pmatrix} 0 & B \\ B^T & 0 \end{pmatrix},
\]
where B has the form
\[
B = \begin{pmatrix}
1 & 1 & \cdots & 1 \\
1 & * & \cdots & * \\
\vdots & \vdots & \ddots & \vdots \\
1 & * & \cdots & *
\end{pmatrix}.
\]
Here '∗' stands for entries which we have yet to determine. Note that B is not necessarily square. It is an r × s matrix where r ≥ 2 and s ≥ 2.
If all of the entries marked '∗' are equal to zero, we are done. Otherwise, choose {i′, j′} ∈ EHij such that i′ ≠ i, j′ ≠ j and deg(i′) + deg(j′) is maximal (referring to the degrees of i′ and j′ in Hij). The graph (Hij)i′j′ is smaller than Hij. If it is not a complete bipartite graph, we may work with this graph instead. Using Corollary 3.4, we can collapse all vertices with the same neighbourhood structure into a single vertex. Thus we can assume that B has the form
\[
B = \begin{pmatrix}
1 & 1 & 1 & \cdots & 1 \\
1 & 1 & 0 & \cdots & 0 \\
1 & 0 & * & \cdots & * \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
1 & 0 & * & \cdots & *
\end{pmatrix}.
\]
If all the entries marked '∗' are zero, we are done. Repeat this procedure, which must terminate in a finite number of steps. Finally, apply Corollary 3.4 again, to delete any repeated rows of A. The resulting bipartite graph H has the following form. The vertex bipartition of H is C1 ∪ C2, where C1 = {i1, . . . , ir} and C2 = {j1, . . . , js}, where s ∈ {r, r + 1}. All edges of the form {i1, jℓ}, {iℓ, j1}, {iℓ, jℓ} are present for 1 ≤ ℓ ≤ r, apart from possibly the edge {ir, jr} in the case that s = r. Moreover, r ≥ 2 unless s = r and the edge {ir, jr} is present, in which case r ≥ 3.
Let A be the adjacency matrix of H. The matrix A2 is a block diagonal matrix with
two blocks, one given by BB T and one given by B T B. By Theorem 4.1, it suffices to
show that the problem associated with at least one of these blocks is #P-hard. Using
Corollary 3.3, there is a polynomial-time reduction from EVAL(F ) to EVAL(BB T ),
where F is obtained from BB T by replacing all entries which do not equal 1 by zero.
When s = r but the edge {ir , jr } is absent, the graph corresponding to F is connected
and has a looped vertex and an unlooped vertex. Therefore EVAL(F ) is #P-hard, by
Case 2. Otherwise, the matrix F describes the problem of counting proper (r − 1)-colourings of graphs. This is #P-hard when r ≥ 4.
Suppose now that s = r = 3 and the edge {i3, j3} is present. Then the distinct entries of B² are coprime. Apply Corollary 3.2 with S = {2} to give a polynomial-time reduction from the problem EVAL(F) to EVAL(B²), where
\[
F = \begin{pmatrix} 0 & 1 & 1 \\ 1 & 1 & 0 \\ 1 & 0 & 1 \end{pmatrix}.
\]
The graph corresponding to F is connected and has a looped vertex and an unlooped vertex. Therefore the corresponding counting problem is #P-hard, by Case 2. This completes the proof when s = r = 3 and the edge {i3, j3} is present.
Next suppose that r = 2 and s = 3. Here, the distinct entries of BB^T are coprime. Therefore we can apply Corollary 3.2 with S = {2}. This gives a polynomial-time reduction to EVAL(BB^T) from the problem of counting independent sets in graphs. The latter is #P-hard. Finally, suppose that r = 3 and s = 4. Then B^TB is given by
\[
B^T B = \begin{pmatrix}
3 & 2 & 2 & 1 \\
2 & 2 & 1 & 1 \\
2 & 1 & 2 & 1 \\
1 & 1 & 1 & 1
\end{pmatrix}.
\]
Let F be the matrix obtained from B^TB by replacing all entries which do not equal 2 by 0, and those entries equal to 2 by 1. Using Corollary 3.2 with S = {2} again, there is a polynomial-time reduction from EVAL(F) to EVAL(B^TB). But F describes a graph which has a connected component consisting of both looped and unlooped vertices. This problem is #P-hard, by Theorem 4.1 and Case 2. This completes the proof for Case 3.
Case 4. Finally, suppose that H has no loops and is not bipartite. Note that the decision
problem corresponding to H is NP-complete, by Hell and Nešetřil [11]. (However, this
does not immediately imply that the counting problem is #P-hard.) Recall the reduction
used in Cases 1 and 2. It suffices to show that #Hi is #P-hard for some i ∈ C, where Hi is the subgraph of H induced by the set of neighbours of i in H. Note that i ∉ Hi, as H has no loops. Suppose that some connected component of Hi is not bipartite, for some i ∈ C. (Recall that we are treating isolated vertices as bipartite graphs.) Then we can replace H by Hi, which is smaller than H. After a finite number of steps we can assume that H is a connected, loopless, nonbipartite graph, and that every component of Hi is a complete bipartite graph, for all i ∈ C. Say that H satisfies Property 1 when this holds. (We may assume that every component of Hi is complete bipartite, since otherwise we can apply Case 3.)
Next, let d be the minimum length of an odd cycle in H. Since H is not bipartite,
we know that d ≥ 3. If d ≥ 5 then let A be the adjacency matrix of H, and consider the matrix A^3. Then A^3 is the weight matrix of a graph H̃, which contains all the edges of H together with some new edges. Now H̃ is not bipartite, since it still contains the odd cycle C of minimal length d. Also, H̃ has no loops, as d ≥ 5. Finally, chords have been introduced between vertices of the cycle C at distance 3 along it, so the minimum length of an odd cycle in H̃ is at least 3 and at most d − 2. Thus after a finite number of steps we can assume
that H contains a triangle. If H does not satisfy Property 1 then we can repeat the
entire procedure from the beginning. This can only continue for a finite number of steps,
since the first stage deletes at least one vertex and the second stage introduces at least
one new edge. Thus we can assume that H satisfies Property 1 and contains a triangle.
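The effect of passing from A to A^3 on the minimum odd cycle length is easy to check on examples. The following sketch (ours; the helper odd_girth is our own) applies the step to an odd cycle C_d with d ≥ 5 and confirms that H̃ is loopless and has a strictly shorter odd cycle.

import numpy as np

def odd_girth(M):
    # Shortest closed walk of odd length = shortest odd cycle (None if bipartite).
    n = len(M)
    P = np.eye(n, dtype=np.int64)
    for t in range(1, n + 1):
        P = P @ M
        if t % 2 == 1 and np.trace(P) > 0:
            return t
    return None

d = 7                                          # an odd cycle C_d with d >= 5
A = np.zeros((d, d), dtype=np.int64)
for i in range(d):
    A[i, (i + 1) % d] = A[(i + 1) % d, i] = 1

A3 = np.linalg.matrix_power(A, 3)
assert np.all(np.diag(A3) == 0)                # no loops arise when d >= 5
H_tilde = (A3 != 0).astype(np.int64)           # the graph underlying A^3
print(odd_girth(A), odd_girth(H_tilde))        # 7 3 (in general: at most d - 2)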
Now consider the following reduction. Let G be a given graph, which we can assume
is connected. Form the 2-stretch S2 G of G, by subdividing each edge of G. Join all
the newly formed vertices (the midpoints of the edges of G) to another new vertex v0 .
Denote the resulting graph by G′. Let Ni denote the set of neighbours of i in H, for all i ∈ C. For all i ∈ C, let Vi be the set of vertices
$$
V_i = \bigcup_{\{i,j\} \in E_H} N_j.
$$
So Vi is the vertex i, together with all the neighbours of neighbours of i in H. Let A^(i) be the symmetric matrix with rows and columns in one-one correspondence with Vi, defined by
$$
\big(A^{(i)}\big)_{j\ell} = |N_i \cap N_j \cap N_\ell|.
$$
Finally, let H*i be the graph underlying the matrix A^(i). That is, {j, ℓ} is an edge in H*i if and only if (A^(i))_{jℓ} ≠ 0. Then
$$
|\Omega_H(G')| = \sum_{i=1}^{k} \; \sum_{Y \in \Omega_{H^*_i}(G)} w_{A^{(i)}}(Y).
$$
Let A′ be the block diagonal matrix with k diagonal blocks given by A^(1), . . . , A^(k). The above says that
$$
|\Omega_H(G')| = Z_{A'}(G).
$$
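The matrices A^(i) are straightforward to compute from H. The sketch below (ours; the names are our own, and the triangle H is chosen only for illustration) builds each A^(i) directly from the neighbourhoods Ni.

def neighbourhood_matrices(H_adj):
    # H_adj: 0/1 adjacency matrix of a loopless graph H on colours 0,...,k-1.
    k = len(H_adj)
    N = [set(j for j in range(k) if H_adj[i][j]) for i in range(k)]
    result = {}
    for i in range(k):
        V_i = set()
        for j in N[i]:
            V_i |= N[j]                        # V_i = union of N_j over neighbours j of i
        V_i = sorted(V_i)
        A_i = [[len(N[i] & N[u] & N[v]) for v in V_i] for u in V_i]
        result[i] = (V_i, A_i)
    return result

H_adj = [[0, 1, 1],                            # a triangle on the colours {0, 1, 2}
         [1, 0, 1],
         [1, 1, 0]]
for i, (V_i, A_i) in neighbourhood_matrices(H_adj).items():
    print(i, V_i, A_i)

For the triangle on colours {0, 1, 2}, the output for i = 0 is the matrix [[2, 1, 1], [1, 1, 0], [1, 0, 1]] on V_0 = {0, 1, 2}, exhibiting exactly the features used in the argument below: a loop at i, the edges {i, j} and {i, ℓ}, and no edge {j, ℓ}.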
Thus it suffices to show that EVAL(A′) is #P-hard. By Corollary 3.1, it suffices to show that #H′ is #P-hard, where H′ is the graph underlying the weight matrix A′. Using Theorem 4.1, it suffices to show that #H*i is #P-hard for some i ∈ C.
Recall that H contains a triangle, {i, j, ℓ} say. Consider H*i. Then
$$
\big(A^{(i)}\big)_{ii} = |N_i| \ge 2, \qquad \big(A^{(i)}\big)_{ij} = |N_i \cap N_j| \ge 1, \qquad \big(A^{(i)}\big)_{i\ell} = |N_i \cap N_\ell| \ge 1.
$$
This shows that H*i has a loop at i, and that the edges {i, j} and {i, ℓ} are both present in H*i. Finally, note that {j, ℓ} is not an edge in H*i, for otherwise the graph Hi would contain a triangle, contradicting the fact that H satisfies Property 1. Thus H*i has a connected component containing a looped vertex i and missing an edge {j, ℓ}. This shows that #H*i is #P-hard, by Cases 1 and 2 and Theorem 4.1. This completes the proof for Case 4.
Thus the theorem holds.
5  Bounded degree graphs
Let A be a weight matrix and let D be an invertible diagonal matrix of vertex weights.
Let ∆ ≥ 2 be a constant. Denote by EVAL^(∆)(A, D) the problem EVAL(A, D), restricted to those instances G with maximum degree at most ∆. We restate Theorem 1.2
in terms of A and D. The proof is an extension of the proof given in [4] for k-colourings.
Theorem 5.1 Suppose that A is a symmetric matrix, and D is a diagonal matrix of
positive vertex weights. Then there exists a polynomial-time reduction from EVAL(A, D)
to EVAL^(∆)(A, D) for some constant ∆ ≥ 3.
Proof. Let G = (V, E) be a graph with arbitrary maximum degree. We form a graph Γ = (VΓ, EΓ) from G, such that Γ has maximum degree at most 3. The vertices of Γ are partitioned into two sets VΓ = V′ ∪ V″, and the edges of Γ are partitioned into two sets EΓ = E′ ∪ E″. To form Γ, perform the following operation in turn to each vertex v ∈ V with degree d > 3: replace v by a copy of Pd−3 on vertices v0, . . . , vd−3. All the edges of this path go into E″. Join every neighbour of v in G to exactly one of the new vertices, using an edge in E′, in such a way that each vi has degree 3. All other edges go into E′. Let V″ consist of those vertices which are the endpoint of some edge in E″, and let V′ hold all other vertices. Finally, let U ⊆ V″ be the set of vertices which are the endpoint of exactly one edge in E″. That is, for each v in G with degree d > 3, we have v0, vd−3 ∈ U and v1, . . . , vd−4 ∈ V″ \ U. Let m = |E″|.
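For concreteness, a sketch of this vertex-replacement step follows (ours; the representation of Γ and the function name are our own choices, and the assignment of neighbours to path vertices is one arbitrary valid choice).

def reduce_degree(vertices, edges):
    adj = {v: [] for v in vertices}
    for u, w in edges:
        adj[u].append(w)
        adj[w].append(u)
    E1, E2 = [], []                  # E' and E''
    slots = {}                       # one attachment point per incident edge
    for v in vertices:
        d = len(adj[v])
        if d <= 3:
            slots[v] = [v] * d       # low-degree vertices are left alone
        else:
            path = [(v, t) for t in range(d - 2)]          # a copy of P_{d-3}
            E2.extend((path[t], path[t + 1]) for t in range(d - 3))
            # the two path endpoints each take two neighbours, interior vertices
            # take one, so every path vertex ends up with degree exactly 3
            slots[v] = [path[0], path[0]] + path[1:d - 3] + [path[-1], path[-1]]
    for u, w in edges:
        E1.append((slots[u].pop(), slots[w].pop()))
    return E1, E2

E1, E2 = reduce_degree(range(6), [(0, i) for i in range(1, 6)])   # a star with centre 0
print(E2)                            # the two path edges replacing the degree-5 centre
print(E1)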
Let Π be the diagonal matrix with Πii = √λi, and let p be the least positive integer such that τp(ADA) is nonsingular. Let B = Π τp(ADA) Π. For 1 ≤ r ≤ (m + 1)^k, form S2 Tp Sr″ Γ by replacing each edge in E″ by S2 Tp Pr. Then S2 Tp Sr″ Γ has maximum degree ∆ = 3p, for 1 ≤ r ≤ (m + 1)^k. Let
$$
\widetilde{w}_D'(X) = \prod_{v \in V'} \lambda_{X(v)}.
$$
We will evaluate the function ZA,D(S2 Tp Sr″ Γ) as follows. The first factor is the contribution of the edge weights in E′. The second factor is the contribution of all the vertex weights in VΓ. Then the contribution from the copies of S2 Tp Pr strung along the edges in E″ is recorded, with the vertex weights of the endpoints removed (since they have already been counted). We obtain
$$
\begin{aligned}
Z_{A,D}(S_2 T_p S_r'' \Gamma) &= \sum_{X : V_\Gamma \to C} w_A'(X)\, \widetilde{w}_D(X) \times \prod_{\{v,w\} \in E''} \;\; \sum_{Y \in \Omega_H^{(X(v),X(w))}(S_2 T_p P_r)} w_{A,D}(Y) \big/ \big(\lambda_{X(v)} \lambda_{X(w)}\big) \\
&= \sum_{X : V_\Gamma \to C} w_A'(X)\, \widetilde{w}_D(X) \prod_{\{v,w\} \in E''} \big(B^r\big)_{X(v)X(w)} \Big/ \sqrt{\lambda_{X(v)} \lambda_{X(w)}} \\
&= \sum_{X : V_\Gamma \to C} w_A'(X)\, \widetilde{w}_D'(X)\, \widetilde{w}_\Pi^{(U)}(X) \prod_{\{v,w\} \in E''} \big(B^r\big)_{X(v)X(w)} \\
&= \sum_{X : V_\Gamma \to C} w_A'(X)\, \widetilde{w}_D'(X)\, \widetilde{w}_\Pi^{(U)}(X)\, w_{\sigma_r(B)}''(X). \qquad (14)
\end{aligned}
$$
The second equality follows from Lemma 3.8, and the third equality follows since every vertex in V″ belongs to at most two edges in E″. Hence the contribution from the weights of vertices in V″ \ U disappears, and the contribution from vertices in U is given by the matrix Π. The right hand side of (14) is of the form required by Lemma 3.4, and the matrix B is nonsingular, by choice of p. Therefore Lemma 3.4 applies, with F = E″.
If we knew the values of ZA,D(S2 Tp Sr″ Γ) for 1 ≤ r ≤ (m + 1)^k, we could calculate the value of
$$
\sum_{X : V_\Gamma \to C} w_A'(X)\, \widetilde{w}_D'(X)\, \widetilde{w}_\Pi^{(U)}(X)\, w_I''(X) \qquad (15)
$$
in polynomial time, by Lemma 3.4. But w_I″(X) is zero unless all edges in E″ have endpoints coloured with the same colour. Let d(v) denote the degree of vertex v ∈ V in the graph G. The value in (15) is equal to
$$
\sum_{X : V_\Gamma \to C} w_A'(X)\, \widetilde{w}_D'(X) \prod_{\substack{v \in V_G \\ d(v) > 3}} \lambda_{X(v)} \;=\; \sum_{X : V_G \to C} w_A(X)\, \widetilde{w}_D(X) \;=\; Z_{A,D}(G),
$$
where the first sum is over those X which give all the path vertices which replaced a vertex v of G the common colour X(v).
This completes the polynomial-time reduction.
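We do not restate Lemma 3.4, but the interpolation idea presumably underlying it can be illustrated in miniature. Writing f(r) for the quantity in (14), expanding each (B^r)_{X(v)X(w)} in the eigenvalues of B shows that f(r) is an exponential sum Σ_t c_t θ_t^r in finitely many known nonzero values θ_t (products of eigenvalues of the nonsingular matrix B), and (15) is the value this sum takes at r = 0. The toy sketch below (ours; the numbers are arbitrary stand-ins for eigenvalue products of B) recovers the coefficients, and hence the r = 0 value, from f(1), . . . , f(N) by solving a Vandermonde system.

import numpy as np

theta = np.array([3.0, 2.0, 1.0, -1.0])       # stand-ins for eigenvalue products of B
c_true = np.array([5.0, -2.0, 7.0, 1.0])      # unknown coefficients to be recovered
f = lambda r: float(np.sum(c_true * theta**r))

N = len(theta)
V = np.array([[t**r for t in theta] for r in range(1, N + 1)])   # Vandermonde-type matrix
c = np.linalg.solve(V, np.array([f(r) for r in range(1, N + 1)]))
print(np.allclose(c, c_true), c.sum())        # True, and c.sum() equals f(0) = 11 (up to rounding)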
In the above proof, ∆ = 3p, where p is the least positive integer such that τp(ADA) is nonsingular. When A is nonsingular, we have ∆ = 3. We conjecture that Theorem 1.2 always holds for ∆ = 3, although in the singular case this still requires proof. Note that the Widom–Rowlinson model and the Beach model both have nonsingular adjacency
matrices, so these problems are still #P-hard when restricted to graphs with maximum
degree at most 3.
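For the Widom–Rowlinson model this is immediate to verify. A one-line check (ours), taking the usual three-colour constraint graph in which an "empty" colour is compatible with everything, the two particle colours are incompatible with each other, and every colour is compatible with itself:

import numpy as np

A_WR = np.array([[1, 1, 0],
                 [1, 1, 1],
                 [0, 1, 1]])
print(round(np.linalg.det(A_WR)))   # -1, so A is nonsingular and p = 1, Delta = 3 above

The adjacency matrix of the Beach model can be checked in the same way.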
As mentioned in Section 1, the situation is very different if we consider decision problems rather than counting problems. Theorem 1.2 shows that a #P-complete counting
problem remains #P-complete when restricted to instances with some constant maximum degree. However, Galluccio, Hell and Nešetřil [9] showed that there exist graphs H
with NP-complete decision problems, such that decision is a polynomial-time operation
when restricted to graphs with maximum degree 3. In fact, H can be taken to be a
triangle-free graph with chromatic number 3 (see [6] for details).
Theorem 1.2 only concerns problems which are restricted to maximum degree at
most ∆, for some constant ∆ ≥ 3. We show below that EVAL^(2)(A) is always in P, for
all weight matrices A. The proof involves the path Pr on r + 1 vertices and r edges,
as defined in the previous section. It also involves the cycle Lr , which has r vertices
u1 , . . . , ur and r edges {u1 , ur } ∪ {ui , ui+1 | 1 ≤ i < r}.
Lemma 5.1 Let A be any symmetric matrix, and let D be a nonsingular diagonal matrix. Then EVAL^(2)(A, D) ∈ P. Let Π be the diagonal matrix defined by Πii = √Dii = √λi. Then
$$
Z_{A,D}(P_r) = \sum_{i,j \in C} \big(D(AD)^r\big)_{ij} \qquad\text{and}\qquad Z_{A,D}(L_r) = \sum_{i \in C} \big(B^r\big)_{ii} = \mathrm{Tr}(B^r).
$$
Proof. Let G be any graph with maximum degree at most 2. Then G is a disjoint union of isolated vertices, paths and cycles. Each isolated vertex contributes Σ_{i∈C} λi. We now derive an expression for Z_{A,D}(P_r). Recall the sets Ω_H^{(i,j)}(P_r), defined in (1) for i, j ∈ C, which form a partition of Ω_H(P_r). Then using Lemma 2 we obtain
$$
\begin{aligned}
Z_{A,D}(P_r) &= \sum_{X \in \Omega_H(P_r)} w_{A,D}(X) \\
&= \sum_{i,j \in C} \; \sum_{X \in \Omega_H^{(i,j)}(P_r)} w_{A,D}(X) \\
&= \sum_{i,j \in C} \sqrt{\lambda_i \lambda_j}\, \big(B^r\big)_{ij} \\
&= \sum_{i,j \in C} \big(D(AD)^r\big)_{ij},
\end{aligned}
$$
as stated.
Next we derive an expression for Z_{A,D}(L_r). Consider "cutting" the cycle at the vertex ur, replacing it with two vertices u0 and ur′, with edges {u0, u1} and {ur−1, ur′}. This gives a bijection between Ω_H(L_r) and the union over all colours i ∈ C of Ω_H^{(i,i)}(P_r).
Hence
$$
Z_{A,D}(L_r) = \sum_{X \in \Omega_H(L_r)} w_{A,D}(X) = \sum_{i \in C} \; \sum_{X \in \Omega_H^{(i,i)}(P_r)} w_{A,D}(X)/\lambda_i = \sum_{i \in C} \big(B^r\big)_{ii} = \mathrm{Tr}(B^r),
$$
as stated. We divide by λi in the second equality since u0 and ur′ are identified in Lr.
Suppose that G has n vertices. We can form the matrices {B^r | 1 ≤ r ≤ n} in polynomial time, and use these matrices to calculate Z_{A,D}(·) for each connected component of G. Multiplying these values together, we obtain Z_{A,D}(G) in polynomial time.
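As a final illustration, the algorithm described in the proof can be sketched directly (ours; we take B = ΠAΠ, the symmetric matrix consistent with the identities above, and assume the decomposition of G into components is given).

import numpy as np

def eval_max_degree_two(A, lam, components):
    # components: a list of pieces ('vertex',), ('path', r) or ('cycle', r),
    # where r is the number of edges of the path P_r or cycle L_r.
    A = np.asarray(A, dtype=float)
    lam = np.asarray(lam, dtype=float)
    D = np.diag(lam)
    B = np.diag(np.sqrt(lam)) @ A @ np.diag(np.sqrt(lam))   # B = Pi A Pi (assumed)
    total = 1.0
    for piece in components:
        if piece[0] == 'vertex':
            total *= lam.sum()                                # isolated vertex
        elif piece[0] == 'path':
            total *= (D @ np.linalg.matrix_power(A @ D, piece[1])).sum()
        else:
            total *= np.trace(np.linalg.matrix_power(B, piece[1]))
    return total

# Independent sets: H is a single edge with a loop on one endpoint, all weights 1.
A = [[0, 1], [1, 1]]
print(eval_max_degree_two(A, [1, 1], [('path', 3), ('cycle', 4)]))   # 8 * 7 = 56

The example counts independent sets, giving 8 · 7 = 56 for the disjoint union of a path on four vertices and a 4-cycle.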
References
[1] M. Albertson, P. Catlin, and L. Gibbons, Homomorphisms of 3-chromatic graphs,
II, Congressus Numerantium, 47 (1985), pp. 19–28.
[2] G. Bloom and S. Burr, On unavoidable digraphs in orientations of graphs, Journal
of Graph Theory, 11 (1987), pp. 453–462.
[3] G. R. Brightwell and P. Winkler, Graph homomorphisms and phase transitions,
Journal of Combinatorial Theory, Series B, (To appear).
[4] R. Bubley, M. Dyer, C. Greenhill, and M. Jerrum, On approximately counting
colourings of small degree graphs, SIAM Journal on Computing, 29 (1999), pp. 387–
400.
[5] R. Burton and J. Steif, Nonuniqueness of measures of maximal entropy for subshifts
of finite type, Ergodic Theory and Dynamical Systems, 14, pp. 213–236.
[6] P. A. Dreyer, C. Malon, and J. Nešetřil, Universal H-colorable graphs without a
given configuration, (1999), (Preprint).
[7] M. Dyer and C. Greenhill, On Markov chains for independent sets, (1997),
(Preprint).
[8] H. M. Edwards, Galois Theory, Springer–Verlag, New York, (1984).
[9] A. Galluccio, P. Hell, and J. Nešetřil, The complexity of H-coloring of bounded
degree graphs, Discrete Mathematics, To appear.
[10] C. Greenhill, The complexity of counting colourings and independent sets in sparse
graphs and hypergraphs, Computational Complexity, (1999), (To appear).
[11] P. Hell and J. Nešetřil, On the complexity of H-coloring, Journal of Combinatorial
Theory, Series B, 48 (1990), pp. 92–110.
[12] H. B. Hunt III, M. V. Marathe, V. Radhakrishnan, and R. E. Stearns, The complexity of planar counting problems, SIAM Journal on Computing, 27 (1998), pp. 1142–
1167.
[13] R. W. Irving, NP-completeness of a family of graph-colouring problems, Discrete
Applied Mathematics, 5 (1983), pp. 111–117.
[14] F. Jaeger, D. L. Vertigan, and D. Welsh, On the computational complexity of the
Jones and Tutte polynomials, Mathematical Proceedings of the Cambridge Philosophical Society, 108 (1990), pp. 35–53.
[15] J. Kahn, An entropy approach to the hard-core model on bipartite graphs, (1999),
(Preprint).
[16] J. L. Lebowitz and G. Gallavotti, Phase transitions in binary lattice gases, Journal
of Mathematical Physics, 12 (1971), pp. 1129–1133.
[17] A. K. Lenstra, H. W. Lenstra Jnr., and L. Lovász, Factoring polynomials with
rational coefficients, Mathematische Annalen, 261 (1982), pp. 515–534.
[18] L. A. Levin, Universal sequential search problems, Problems of Information Transmission, 9 (1973), pp. 265–266.
[19] N. Linial, Hard enumeration problems in geometry and combinatorics, SIAM Journal on Algebraic and Discrete Methods, 7 (1986), pp. 331–335.
[20] M. Luby and E. Vigoda, Approximately counting up to four, in Twenty-Ninth Annual Symposium on Theory of Computing, ACM, New York, 1997, pp. 682–687.
[21] H. A. Maurer, J. H. Sudborough, and E. Welzl, On the complexity of the general
colouring problem, Information and Control, 51 (1981), pp. 123–145.
[22] S. P. Vadhan, The complexity of counting in sparse, regular, and planar graphs,
Preprint (Available from http://www-math.mit.edu/∼salil/), (May 1997).
[23] L. G. Valiant, The complexity of enumeration and reliability problems, SIAM Journal on Computing, 8 (1979), pp. 410–421.
[24] D. J. A. Welsh, Complexity: knots, colourings and counting, vol. 186 of London
Mathematical Society Lecture Note Series, Cambridge University Press, Cambridge,
(1993).
[25] B. Widom and J. S. Rowlinson, New model for the study of liquid-vapour phase
transition, The Journal of Chemical Physics, 52 (1970), pp. 1670–1684.