Proceedings of the International Congress of Mathematicians
August 16-24, 1983, Warszawa

L. G. VALIANT

An Algebraic Approach to Computational Complexity
The theory of computational complexity is concerned with identifying
methods for computing solutions to problems in a minimal number of
steps. Despite the diversity of application areas from which such problems
can be drawn this theory has been successful in identifying a small number
of fundamental questions on which a sizable fraction of the field hangs.
A prime example is the P =? NP question of Cook [4], concerning discrete
search problems. Unfortunately the techniques currently known for
proving that particular problems inherently require a certain number of
steps are rudimentary. Hence these fundamental questions appear far
from resolution.
In contrast with our ignorance about the absolute difficulty of computing problems, much is known about their relative difficulty. For example
there are numerous results that relate pairs A and B of problems in the
following way: If there exists a polynomial time algorithm for B (i.e. one
that for some constant $a$ solves $B$ for inputs of size $n$ in $n^a$ steps) then there
exists a polynomial time algorithm for A also. Such results do not depend
on or determine whether the absolute complexities are $n^2$ and $2^n$ or any
other function of n.
Our purpose here is to give a brief discussion of a very strict notion of
reducibility called $p$-projection. Further details can be found in [19] where
it was introduced and in [8, 15, 20]. The remaining references at the end
of this paper describe work relevant to it in various ways.
The remarkable property of this notion of reduction is that in spite
of its demanding and restricted nature numerous natural problems that
superficially look dissimilar can be related by it. It is applicable to a variety
of algebraic structures among which rings of multivariate polynomials and
Boolean algebra are important examples. It can be used to give an account
of the relative difficulty of computational problems almost without
having to define models of computation.
A computation problem, such as that of evaluating the determinant of
a square matrix, is represented by an infinite family of instances of it,
each corresponding to a different number of arguments and indexed by this
number. The family DET will be the set $\{\mathrm{DET}_1, \mathrm{DET}_4, \mathrm{DET}_9, \ldots\}$ where
$\mathrm{DET}_m$ is the $m$-variable polynomial that is the determinant of a $\sqrt{m} \times \sqrt{m}$
matrix. Such a family is always defined with respect to a particular ring
or field from which the constant coefficients are drawn. A second example
is $\mathrm{PERM} = \{\mathrm{PERM}_1, \mathrm{PERM}_4, \mathrm{PERM}_9, \ldots\}$ where $\mathrm{PERM}_m$ is the permanent of a $\sqrt{m} \times \sqrt{m}$ matrix. Recall that the permanent has the same
set of monomials as the determinant but the coefficient of each monomial is now positive.
It is interesting to contrast these two particular problems because
one is easy to compute while the other is apparently hopelessly difficult.
Gaussian elimination methods can be used to compute an $n \times n$ determinant in $O(n^3)$ arithmetic operations while more recent techniques do even
better [5, 16]. On the other hand the best algorithm known for computing
the permanent takes $O(n2^n)$ steps [9, 13]. Even the multiplicative factor of
$n$ in this bound appears difficult to remove.
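For concreteness, a minimal sketch of Ryser's inclusion-exclusion formula [13], which achieves the $O(n2^n)$ bound, is the following (written in Python purely for illustration):

```python
from itertools import combinations

def permanent(a):
    """Permanent of a square matrix `a` (list of lists) via Ryser's
    inclusion-exclusion formula, using O(n * 2^n) arithmetic operations:
    perm(a) = (-1)^n * sum over column subsets S of
              (-1)^|S| * prod_i sum_{j in S} a[i][j]."""
    n = len(a)
    total = 0
    for size in range(1, n + 1):          # the empty subset contributes 0
        for cols in combinations(range(n), size):
            prod = 1
            for row in a:
                prod *= sum(row[j] for j in cols)
            total += (-1) ** size * prod
    return (-1) ** n * total

# The 2x2 case: perm([[a, b], [c, d]]) = a*d + b*c.
assert permanent([[1, 2], [3, 4]]) == 1 * 4 + 2 * 3
```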
The relationships which we explore among such pairs of families are
of the following kind. If $A_i(y_1, \ldots, y_i)$ and $B_j(x_1, \ldots, x_j)$ are polynomials
over the ring $R$ we say that $A_i$ is a projection of $B_j$ if there is a substitution
$\sigma\colon \{x_1, \ldots, x_j\} \to \{y_1, \ldots, y_i\} \cup R$ such that $A_i(y_1, \ldots, y_i)$ is identical to
$B_j(\sigma(x_1), \ldots, \sigma(x_j))$. For example, $A_2(y_1, y_2) = y_1 + y_2$ is the projection of
$\mathrm{DET}_4(x_{11}, x_{12}, x_{21}, x_{22})$ under the substitution $\sigma(x_{11}) = y_1$, $\sigma(x_{12}) = y_2$,
$\sigma(x_{21}) = -1$, and $\sigma(x_{22}) = 1$. Further a family $A$ is a projection of a family
$B$ if for all $A_i \in A$, $A_i$ is a projection of some $B_j \in B$.
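This example can be checked mechanically; the following sketch (assuming the sympy library is available for symbolic algebra) applies the substitution to the $2 \times 2$ determinant and recovers $y_1 + y_2$.

```python
import sympy as sp

y1, y2 = sp.symbols('y1 y2')
x11, x12, x21, x22 = sp.symbols('x11 x12 x21 x22')

# DET_4: the determinant of a 2x2 matrix in four variables.
det4 = sp.Matrix([[x11, x12], [x21, x22]]).det()   # x11*x22 - x12*x21

# The substitution sigma maps each variable to a variable or a ring constant.
sigma = {x11: y1, x12: y2, x21: -1, x22: 1}

# Applying sigma to DET_4 yields A_2(y1, y2) = y1 + y2.
assert sp.expand(det4.subs(sigma)) == y1 + y2
```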
Now it so happens that the two families PERM and DET are projections
of each other. This in itself, however, is of little practical interest since the
definition permits that $\mathrm{PERM}_9$, for example, be the projection of $\mathrm{DET}_j$ only
for enormous values of $j$. Hence we need to add the following quantification. A family $A$ is a $p$-projection of family $B$ if for some constant $a$,
for all $A_i \in A$, $A_i$ is a projection of some $B_j \in B$ with $j \le i^a$.
In the investigations described here the following kind of question
is central: Is PERM a $p$-projection of DET? One aspect of the computational
relevance of this question is immediate. If we could give a positive answer
to this question then we would have a polynomial time algorithm for the
permanent. The algorithm would consist simply of the determinant
algorithm applied after the appropriate initial substitution of variables.
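In outline, if such a substitution existed it could be applied as follows (a hypothetical sketch only: no substitution `sigma` of the required kind is known, and the names `sigma` and `values` are illustrative assumptions):

```python
import numpy as np

def permanent_via_projection(sigma, m, values):
    """Hypothetical: if PERM_n were a p-projection of the m x m determinant
    with m polynomial in n, evaluating the permanent at the assignment
    `values` (a dict mapping variable names to numbers) would reduce to
    filling in an m x m matrix according to the substitution sigma and
    computing one determinant.  `sigma[(i, j)]` is either a variable name
    (a key of `values`) or a ring constant."""
    a = np.empty((m, m))
    for i in range(m):
        for j in range(m):
            image = sigma[(i, j)]
            a[i, j] = values[image] if isinstance(image, str) else image
    return np.linalg.det(a)   # polynomial time, e.g. by Gaussian elimination
```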
Complexity-theoretic results state that the particular question raised
above has broader significance in at least two directions.
First, it can be shown that the permanent exemplifies a large class,
called pD ($p$-definable), of families of polynomials in the sense that all
members of the class are $p$-projections of it [19, 20]. The class is essentially
that of multivariate polynomials in which the degree grows only polynomially in the number of arguments and in which the coefficient of
each monomial is easily computed from the specification of the variables
in the monomial (cf. the permanent and determinant). The degree constraint turns out to be quite natural and we shall assume it in the discussion
to follow. The class pD contains all such families that can be computed
in a polynomial number of steps and, in addition, includes a large number
of other families for which no such fast algorithms are known. Examples
of the latter are most reliability problems such as the following. Consider
a network with nodes $\{1, \ldots, n\}$ where the connection between nodes $i$ and $j$
has probability $p_{ij}$ of not failing. Then the probability REL that nodes 1 and
$n$ are connected to each other is a polynomial in the variables $\{p_{ij}\}$. Further
examples of such $p$-definable families abound as generating functions for
combinatorial problems. For example HC is one such function defined for
the Hamiltonian circuit problem for graphs in a natural way.
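To see why REL is a polynomial in the $p_{ij}$, one can sum over all patterns of surviving and failed connections; the following brute-force sketch (exponential in the number of connections, and assuming sympy for the symbolic variables) makes this explicit for a small network.

```python
import sympy as sp
from itertools import product

def connected(n, alive):
    """Is node n reachable from node 1 using only the connections in `alive`?"""
    reach, frontier = {1}, [1]
    while frontier:
        v = frontier.pop()
        for a, b in alive:
            for u, w in ((a, b), (b, a)):
                if u == v and w not in reach:
                    reach.add(w)
                    frontier.append(w)
    return n in reach

def rel_polynomial(n, edges):
    """REL: the probability that nodes 1 and n are connected, as a polynomial
    in the variables p_ij (connection (i, j) survives independently with
    probability p_ij)."""
    p = {e: sp.Symbol('p_%d%d' % e) for e in edges}
    rel = sp.Integer(0)
    for state in product([0, 1], repeat=len(edges)):
        alive = [e for e, s in zip(edges, state) if s]
        weight = sp.Integer(1)
        for e, s in zip(edges, state):
            weight *= p[e] if s else 1 - p[e]
        if connected(n, alive):
            rel += weight
    return sp.expand(rel)

# A 3-node example with connections (1,2), (2,3), (1,3).
print(rel_polynomial(3, [(1, 2), (2, 3), (1, 3)]))
```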
A family in pD of which every member of pD is a $p$-projection is called
complete for pD. That there should be natural complete problems is not
self-evident. However, it turns out that PERM, HC and REL, and many
others are all complete for pD with respect to appropriate rings. These
families are therefore $p$-projections of each other also. The proofs of these
facts support the stronger statement that these $p$-projections are strict
in the sense that two or more variables are never mapped to one. Hence
these polynomials can be obtained one from the other by simply fixing
some variables as constants and renaming the others.
Hence the importance of the permanent is due essentially to the fact
that a wide variety of other polynomials can be expressed succinctly
in terms of it. Our interest in the permanent versus determinant question
stems from the second fact that the determinant also has a large class of
polynomials that it can efficiently encode and this class is, to a first
approximation, the class of all polynomial families that can be computed
fast. We can conclude therefore that the permanent is a $p$-projection of
the determinant if and only if the permanent and all the other families
complete in pD can be computed fast. Hence a major computational
problem has been reduced to a purely algebraic one.
Unfortunately, there is a huge gap in our current understanding of
the above question. It is known that an $n \times n$ determinant cannot be
mapped to an $n \times n$ permanent if $n$ exceeds two, even if substitutions
by arbitrary linear forms are allowed [12, 18]. The possibility that an
$(n+1) \times (n+1)$ determinant suffices, however, remains open. On the other
hand there is substantial historical evidence that fully exponential growth
is necessary since the contrary would imply fast algorithms for NP-complete
problems and more.
The previously quoted result about the universality of the determinant
for describing easy to compute problems needs one qualification concerning
the model of computation assumed. It states that any polynomial is the
projection of a determinant of size no larger than the minimal formula or
expression for the polynomial. Whether the same result holds if we replace
formula by the more basic model of computation, the straight-line program, remains an important open question. The relationship between
the two measures of complexity is bounded by a growth factor of $n^{\log n}$
(called quasipolynomial) which is much less than the truly exponential
factors (i.e., $\exp(n^{\varepsilon})$ for some $\varepsilon > 0$) which constitute the gaps in our
current knowledge about all the relevant questions.
The class of functions that can be obtained as a projection of a given
function $A$ is a precise description of the class of functions that can be
computed using a chip or program package for A directly, without the
need for further programming. Hence the result for the determinant gives
mathematical meaning to why the determinant, and linear algebra itself,
are such ubiquitous computational tools.
Boolean algebra is an equally fertile ground for carrying out an investigation akin to that described above for the multivariate polynomials.
Here we define a projection to be a substitution of variables by variables,
negations of variables or constants. Reductions by such $p$-projections can
be shown to be sufficient in many cases where only looser notions of
reduction were known previously. Also, they can be shown to hold between
easy to compute functions where such looser notions are meaningless.
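As a toy illustration of such Boolean projections (not one of the examples from the results cited), fixing one argument of the majority-of-three function yields two-variable OR or AND:

```python
def maj3(x1, x2, x3):
    """Majority of three Boolean values (0 or 1)."""
    return (x1 & x2) | (x1 & x3) | (x2 & x3)

# A projection substitutes variables, negated variables, or constants.
# Setting x3 = 1 yields OR(x1, x2); setting x3 = 0 yields AND(x1, x2).
assert all(maj3(a, b, 1) == (a | b) for a in (0, 1) for b in (0, 1))
assert all(maj3(a, b, 0) == (a & b) for a in (0, 1) for b in (0, 1))
```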
Among specific results it can be shown that any polynomial time
computable family is the $p$-projection of a family of linear programming
problems [20]. This provides some explanation of the ubiquity of linear
programming in combinatorial optimization. When we consider parallel
rather than sequential computations, the transitive closure problem is
universal, and this is again supported by much empirical evidence. For
hard to compute combinatorial search problems one can get essentially
the well-known NP-complete class [15, 19]. The algebraic approach
provides an arguably simpler formulation than the now classical theory
using Turing machines. Such questions as P =? NP are shown to be essentially equivalent to questions of whether one fixed family of Boolean
functions is a $p$-projection of another.
A major motivation of studying this very strict notion of reducibility
is the expectation of being able to prove negative results. One such theorem
states that the symmetric Boolean functions are not very expressive in that
there exist functions with polynomial bounded formulae that are not
the $p$-projection of any family of symmetric functions [15]. A slightly
more powerful family is the one for detecting whether an undirected
graph is connected. This has the same shortcoming if the $p$-projections
are restricted to be monotone (i.e., substitutions by negated variables
are disallowed [15]), but becomes $p$-universal under general $p$-projections
[3, 14].
Early work in computer science, such as that of Turing, concentrated
on the notion of uniformity in computation, the notion that a fixed finite
program is a description of potential behaviour on an infinite number of
different inputs. Empirical evidence suggests that this notion may not
be all-important in distinguishing polynomial time from exponential
time computability. In trying to write a fast program for solving the
Travelling Salesman Problem (TSP) it does not appear to make our task
any easier to restrict ourselves to solving instances with exactly five
hundred cities. For this reason in our algebraic theory of Boolean complexity we have excluded this notion of uniformity altogether, and thereby
gained much simplicity. The notion can be added back (e.g., logarithmic
space computable $p$-projection) with no difficulty. At present we do not
believe, however, that the notion of uniformity will be central in ultimately
resolving the important open questions.
We can summarize our approach as one in which the algebraic relationships among the natural computational problems are central and relations
with computational models are almost secondary. We can caricature the
advantage of this by considering again the Travelling Salesman Problem.
On conventional models of computation this problem is always clumsy to
discuss because it involves both real numbers and discrete choices. It
becomes very easy to discuss, however, in the context of an appropriate
algebra. Consider the set of rationals with the two binary operations
of minimization and addition (to correspond to conventional addition and
multiplication respectively). Many combinatorial optimization problems
can be expressed naturally as polynomials in this algebra. TSP is $\mathrm{Min}_i\,\{w_i\}$
where minimization is over all Hamiltonian circuits in the associated
graph and $w_i$ is the sum of the weights on the $i$th such cycle. It turns out
that TSP is complete for $p$-definable polynomials in this algebra. Hence
we have some explanation of the difficulty of TSP among combinatorial
optimization problems without having to mention any specific model of
computation.
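The evaluation of this (min, +) polynomial can be sketched by brute force (illustrative only, and hopelessly slow, since it enumerates all Hamiltonian circuits):

```python
from itertools import permutations

def tsp_min_plus(w):
    """Brute-force evaluation of the (min, +) 'polynomial' for TSP: the
    minimum over all Hamiltonian circuits of the sum of edge weights.
    `w` is an n x n matrix of weights; minimization plays the role of
    addition and ordinary addition plays the role of multiplication."""
    n = len(w)
    best = float('inf')
    # Fix city 0 as the start so each circuit is enumerated once per direction.
    for order in permutations(range(1, n)):
        tour = (0,) + order + (0,)
        cost = sum(w[tour[k]][tour[k + 1]] for k in range(n))
        best = min(best, cost)
    return best

# Example: a 4-city instance.
w = [[0, 1, 5, 4],
     [1, 0, 2, 6],
     [5, 2, 0, 3],
     [4, 6, 3, 0]]
print(tsp_min_plus(w))  # 10, attained by the circuit 0-1-2-3-0
```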
References
[1] Baur W. and Strassen V., The computational complexity of partial derivatives, Theoret. Comput. Sci. 22:3 (1983), pp. 317-332.
[2] Borodin A., von zur Gathen J., and Hopcroft J. E., Fast parallel matrix and gcd computations. In: Proc. 23rd IEEE Symp. on Foundations of Computer Science (1982), pp. 65-71.
[3] Chandra A. K., Stockmeyer L. J., and Vishkin U., A complexity theory for unbounded fan-in parallelism. In: Proc. 23rd IEEE Symp. on Foundations of Computer Science (1982), pp. 1-13.
[4] Cook S. A., The complexity of theorem proving procedures. In: Proc. 3rd ACM Symp. on Theory of Computing (1971), pp. 151-158.
[5] Coppersmith D. and Winograd S., On the asymptotic complexity of matrix multiplication. In: Proc. 22nd IEEE Symp. on Foundations of Computer Science (1981), pp. 82-90.
[6] von zur Gathen J., Parallel algorithms for algebraic problems. In: Proc. 15th ACM Symp. on Theory of Computing (1983), pp. 17-23.
[7] Hyafil L., On the parallel evaluation of multivariate polynomials, SIAM J. Computing 8:2 (1979), pp. 120-123.
[8] Jerrum M. R., On the complexity of evaluating multivariate polynomials, Ph. D.
Thesis, Edinburgh University, 1981.
[9] Jerrum M. R. and Snir M., Some exact complexity results for straight-line computations over semirings, JACM 29:3 (1982), pp. 874-897.
[10] Kalorkoti K. A., A lower bound on the formula size of rational functions. In: Lecture Notes in Computer Science, Vol. 140, Springer-Verlag (1982), pp. 330-338.
[11] Karp R. M., Reducibility among combinatorial problems. In: Complexity of Computer Computations (R. E. Miller and J. W. Thatcher, eds.), Plenum Press, New York (1972).
[12] Marcus M. and Minc H., On the relation between the determinant and the permanent, Illinois J. Math. 5 (1961), pp. 376-381.
[13] Ryser H. J., Combinatorial Mathematics, Carus Math. Monograph no. 14 (1963).
[14] Skyum S., A measure in which Boolean negation is exponentially powerful, Inform. Process. Lett. 17 (1983), pp. 125-128.
[15] Skyum S. and Valiant L. G., A complexity theory based on Boolean algebra. In: Proc. 22nd IEEE Symp. on Foundations of Computer Science (1981), pp. 244-253.
[16] Strassen V., Gaussian elimination is not optimal, Numer. Math. 13 (1969), pp. 354-356.
[17] Strassen V., Vermeidung von Divisionen, J. Reine Angew. Math. 264 (1973), pp. 182-202.
[18] Sturtivant C., Generalised symmetries of polynomials in algebraic complexity. In: Proc. 23rd IEEE Symp. on Foundations of Computer Science (1982), pp. 72-79.
[19] Valiant L. G., Completeness classes in algebra. In: Proc. 11th ACM Symp. on Theory of Computing (1979), pp. 249-261.
[20] Valiant L. G., Reducibility by algebraic projections. In: Logic and Algorithmic, Monograph. Enseign. Math. 30 (1982), pp. 365-380.
[21] Valiant L. G., Skyum S., Berkowitz S., and Rackoff C., Fast parallel computation of polynomials, SIAM J. Computing 12:4 (1983), pp. 641-644.
HARVARD UNIVERSITY
CAMBRIDGE, MA