
Simplifying Polynomial Constraints Over Integers to Make Dependence Analysis More Precise
Vadim Maslov
[email protected], 301-405-2726
William Pugh
[email protected], 301-405-2705
Department of Computer Science
University of Maryland, College Park, MD 20742
Abstract. Why do existing parallelizing compilers and environments fail to parallelize many realistic FORTRAN programs? One of the reasons is that these programs contain a number of linearized array references, such as A(M*N*i+N*j+k) or A(i*(i+1)/2+j). Performing exact dependence analysis for these references requires testing polynomial constraints for integer solutions. Most existing dependence analysis systems, however, restrict themselves to solving affine constraints only, so they have to make worst-case assumptions whenever they encounter a polynomial constraint. In this paper we introduce an algorithm that exactly and efficiently simplifies a class of polynomial constraints which arise in dependence testing.
1 Introduction
In this paper we describe new techniques for simplifying¹ polynomial constraints over integers. This work supersedes our previous work on dependence testing of non-linear subscripts [Mas92] and allows us to handle polynomial constraints that arise in a number of situations. This work is also an extension of the Omega test [Pug92, PW92], the system that simplifies conjunctions of affine constraints over integers and exactly eliminates existentially quantified variables.
Polynomial constraints arise in a number of situations (see also [Mas92]):
- When computing dependences between array references with linearized polynomial subscripts. Polynomial subscripts often appear as a result of generalized induction variable recognition.
- When performing the loop transformation known as symbolic blocking (tiling).
Let's consider some examples of problems with polynomial constraints. All the problems are taken from real-life programs, and simplifying them exactly is crucial for our ability to parallelize these programs.
¹ What is simplification? The set of constraints b that is a result of simplification of the set of constraints a has the following properties: (1) the order of b is lower than the order of a; (2) ideally, b is a set of affine constraints (or False if there are no solutions); (3) b has the same set of solutions as a, that is, b is equivalent to a.
do i=p,p+L-1
do j=q,q+M-1
do k=r,r+N-1
S1:     A(M*N*i+N*j+k) = ...
S2:     ... = A(M*N*i+N*j+k)
enddo
enddo
enddo
MNi_w + Nj_w + k_w = MNi_r + Nj_r + k_r
p ≤ i_w, i_r ≤ p + L − 1
q ≤ j_w, j_r ≤ q + M − 1          (1)
r ≤ k_w, k_r ≤ r + N − 1

p ≤ i_w = i_r ≤ p + L − 1
q ≤ j_w = j_r ≤ q + M − 1
r ≤ k_w = k_r ≤ r + N − 1          (2)
M ≥ 1 ∧ N ≥ 1

Fig. 1. Product of variable and symbolic constant(s)
do i=0,N-1
do j=0,i
S1:     A(i*(i+1)/2 + j) = ...
enddo
do j=0,i
S2:     ... = A(i*(i+1)/2 + j)
enddo
enddo
i_w(i_w + 1)/2 + j_w = i_r(i_r + 1)/2 + j_r
0 ≤ i_w, i_r ≤ N − 1
0 ≤ j_w ≤ i_w                     (3)
0 ≤ j_r ≤ i_r

0 ≤ j_w = j_r ≤ i_w = i_r ≤ N − 1          (4)

Fig. 2. Product of two variables: triangular linearization
1.1 Rectangular symbolic linearization
The program in Figure 1 is one of many loop nests from the oil reservoir simulation program BOAST in the RiCEPS benchmark suite. This is a typical example of a loop nest with linearized references, which occur quite often in real programs; [Mas92] discusses at length why linearization is used.
To be able to parallelize the i, j and k loops, we need to prove that the flow dependence from the statement instance S1[i_w, j_w, k_w] to the statement instance S2[i_r, j_r, k_r] is loop-independent. This dependence is described by constraints (1). In Section 7.1 we demonstrate how our algorithm simplifies (1) to (2).
All existing dependence analysis techniques (that we know of) except one fail to prove that this dependence is loop-independent. Symbolic delinearization [Mas92] can prove this, but it has serious limitations, discussed in Section 8.
1.2 Triangular linearization
Consider the program in Figure 2. Since the one-dimensional array A is a
linearized version of a triangular matrix A, a reference to A(i; j ) is expressed
as A(i*(i+1)/2 + j). Linearized triangular matrices are used quite often in
scientic codes.
Loop i cannot be parallelized unless we know that ow dependence from
S1 [iw ; jw ] to S2 [ir ; jr ] is loop-independent, that is, iw = ir . No existing dependence test (that we know of) can automatically prove this. The problem describing this dependence is (3) and our techniques simplify it to (4), which proves
that the dependence is loop-independent. Since we also know that jw = jr , we
can fuse the two j loops if we need to, and since existing techniques cannot
establish that jw = jr , they cannot perform fusion for this example.
1.3 Code generation for symbolic blocking (tiling)
We also need to simplify polynomial constraints when we perform the loop transformation known as loop blocking (tiling) [AK87]. This transformation is used to improve memory cache use. A detailed description of why polynomial constraints appear in symbolic blocking and how the techniques described in this paper are used to simplify these constraints can be found in [MP94].
2 Our approach
Our basic approach is to try to transform (using factoring techniques described in detail in Section 4) a general polynomial constraint into a conjunction of affine constraints and one of the special forms that we later affinize:

xy ≥ c                 A hyperbolic inequality
xy = c                 A hyperbolic equality
a_x x² + a_y y² ≤ c    An elliptical inequality          (5)
a_x x² + a_y y² = c    An elliptical equality

Here x and y are variables and a_x, a_y and c are constants.
Then we affinize the special form constraints. For example, xy ≥ 5 is equivalent to (the first conjunction of this is shown in Figure 3):

(x ≥ 1 ∧ 2x + y ≥ 7 ∧ x + y ≥ 5 ∧ x + 2y ≥ 7 ∧ y ≥ 1) ∨
(x ≤ −1 ∧ 2x + y ≤ −7 ∧ x + y ≤ −5 ∧ x + 2y ≤ −7 ∧ y ≤ −1)
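As a quick sanity check of this equivalence, here is a small brute-force test (our illustration, not part of the paper's system; branch_pos and branch_neg are our names) comparing the two sides over a box of integer points:

def branch_pos(x, y):
    return x >= 1 and 2*x + y >= 7 and x + y >= 5 and x + 2*y >= 7 and y >= 1

def branch_neg(x, y):
    return x <= -1 and 2*x + y <= -7 and x + y <= -5 and x + 2*y <= -7 and y <= -1

# every integer point in the box agrees with x*y >= 5
for x in range(-50, 51):
    for y in range(-50, 51):
        assert (x * y >= 5) == (branch_pos(x, y) or branch_neg(x, y))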
This is, perhaps, one of the few places where the fact that we are working with integers makes things easier than if we were working with reals (it is not possible to convert polynomial constraints over real variables into affine form). Details of affinization are given in Section 6.
Of course, not all polynomial constraints are of a form that we can factor and affinize. The details of the algorithm that systematically applies factoring and affinization are given in Section 3.
3 Algorithm to simplify polynomial constraints
In this section we present our top-level algorithm to simplify a conjunction of polynomial and affine constraints.
[Figure: the integer points satisfying xy ≥ 5 in the first quadrant, plotted with the hyperbola xy = 5 and the bounding lines x ≥ 1, y ≥ 1, 2x + y ≥ 7, x + y ≥ 5, x + 2y ≥ 7.]

Fig. 3. Affinizing inequality xy ≥ 5
Representing polynomial constraints. We use the following extension of the Omega test framework to represent polynomial constraints. For each product of regular variables that we encounter in a polynomial constraint we create a product variable that represents it. Then we divide the polynomial problem into two parts: (1) the affine part, which is the original problem in which products were replaced with product variables; (2) the product part, which essentially is a definition of the product variables in terms of regular variables. For example, the polynomial problem

Nj_w + k_w = Nj_r + k_r ∧ 1 ≤ Nj_w − Nj_r + N ∧ q ≤ j_w, j_r ≤ q + M − 1

is represented as (the product part uses := for defining product variables):

v_1 + k_w = v_2 + k_r ∧ 1 ≤ v_1 − v_2 + N ∧ q ≤ j_w, j_r ≤ q + M − 1
v_1 := Nj_w,  v_2 := Nj_r
We further classify regular variables as: (1) Affine variables, the variables that do not appear in products. We single them out because we can exactly eliminate them using the Omega test. (2) Semi-affine variables, the regular variables that appear in products. We cannot project them out using the Omega test, because they are involved in polynomial constraints. In the above example the affine variables are k_w, k_r, q, M; the semi-affine variables are j_w, j_r, N; and the product variables are v_1, v_2.
We can simplify the affine part of a polynomial problem using the Omega test. However, when it comes to factoring and affinization, we use the definitions of the product variables from the product part of the problem. Product variables that become unused as a result of affinization and/or factoring are removed.
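The following sympy sketch (ours, not the authors' implementation; split_problem is a hypothetical name) illustrates the representation: it replaces each product of regular variables with a fresh product variable and returns the affine part together with the product-variable definitions:

import sympy as sp

def split_problem(constraints):
    """constraints: sympy relationals (Eq, Le, Ge, ...).
    Returns (affine_part, product_part); product_part maps v_i -> product."""
    products = {}                                  # e.g. N*j_w -> v1
    affine_part = []
    for c in constraints:
        poly = sp.expand(c.lhs - c.rhs)
        for term in poly.as_ordered_terms():
            dep = term.as_independent(*poly.free_symbols)[1]   # drop numeric factor
            if dep.is_Mul or (dep.is_Pow and dep.exp > 1):     # a genuine product
                products.setdefault(dep, sp.Symbol("v%d" % (len(products) + 1)))
        affine_part.append(c.func(poly.subs(products), 0))
    return affine_part, {v: p for p, v in products.items()}

N, jw, jr, kw, kr = sp.symbols("N j_w j_r k_w k_r")
aff, prod = split_problem([sp.Eq(N*jw + kw, N*jr + kr)])
print(aff)    # e.g. [Eq(v1 - v2 + k_w - k_r, 0)]   (numbering depends on term order)
print(prod)   # e.g. {v1: N*j_w, v2: N*j_r}

In this sketch a higher-order product such as N*j_w**2 is replaced wholesale by a single product variable; the paper's system keeps finer-grained definitions.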
Problem SimplifyPolynomial(Problem p) Begin
  Boolean change := True
  Integer n := 0
  p_0 := p
  Do while (problem p_n has polynomial constraints ∧ change)
    change := False
    v := affine variables of p_n
    p_{n+1} := ∃v s.t. p_n                    (s.t. means "such that")
    q_{n+1} := gist p_n given p_{n+1}
    n := n + 1
    If we can determine that p_n is unsatisfiable then Return(False)
    For each constraint c in polynomial constraints of p_n
      Try to factor and affinize constraint c
      If (factoring and/or affinization succeeds) change := True
    EndFor
  EndDo
  p := p_n ∧ q_n ∧ q_{n−1} ∧ ... ∧ q_1
  If polynomial constraints remain in p
    Use affine equalities in p to derive substitutions
    Try to simplify or eliminate polynomial constraints using substitutions
    If this produces additional affine equalities, repeat
  EndIf
  Return(p)
End

Fig. 4. Polynomial constraint simplification algorithm
Algorithm itself. We present the algorithm that simplifies a polynomial problem in Figure 4. The algorithm applies factoring and affinization as many times as it can. Each affinization lowers the order of a polynomial constraint by 1, so finally we either get an affine problem or stop because no affinization or factoring can be done. Thus the algorithm always terminates.
To satisfy the conditions for factoring and affinization we eliminate affine variables that stand in the way of factoring. Basically our goal is to get a polynomial constraint that has fewer variables than the original constraint, to factor out the common term (or to apply more intricate factoring, as in the triangular delinearization example), and to affinize it.
Variables that are removed as a result of projecting out affine variables, and the constraints involving these variables, are memorized in the q_i problems. When simplification is finished, we use the q_i problems to restore the original problem. As restoration goes on, we use the new equalities and inequalities produced by affinization to simplify the restored polynomial constraints.
4 Factoring
We use the following techniques to transform a general polynomial constraint into one of the forms (5). These techniques are described for inequality constraints, but they work equally well for equalities.
Common term. If a factor x occurs in all terms of a constraint except for a constant term, we can factor this constraint. That is, we transform the constraint

Σ_{i=1}^{n} a_i x R_i ≥ c

where a_i and c are integer constants, x is a variable, and each R_i is a product of variables or the constant 1, into

∃y s.t. xy ≥ c ∧ y = Σ_{i=1}^{n} a_i R_i

So we reduce the order of the original polynomial constraint by 1, hopefully making it affine, and we produce a hyperbolic constraint that can be affinized.
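A sympy sketch of common-term factoring (ours; factor_common_term is a hypothetical helper, not part of the Omega test):

import sympy as sp

def factor_common_term(ineq, x, y):
    """ineq: a sympy Ge whose non-constant terms all contain the factor x.
    Returns (hyperbolic constraint, definition of y) or None."""
    lhs = sp.expand(ineq.lhs - ineq.rhs)
    const, rest = lhs.as_independent(*lhs.free_symbols, as_Add=True)
    quotient = sp.cancel(rest / x)
    if x in quotient.free_symbols or not quotient.is_polynomial():
        return None                        # x does not divide every term
    return sp.Ge(x * y, -const), sp.Eq(y, quotient)

M, N, iw, ir, y = sp.symbols("M N i_w i_r y")
hyp, defn = factor_common_term(sp.Ge(M*N*iw - M*N*ir + N, 1), N, y)
print(hyp)    # N*y >= 1
print(defn)   # Eq(y, M*i_w - M*i_r + 1)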
Breaking a quadratic constraint. As a more specialized case, a constraint of the form

a_x² x² + b_x x − a_y² y² − b_y y + c ≥ 0

where a_x > 0, a_y > 0, b_x, b_y and c are known integer constants and x and y are variables, is transformed into the following equivalent constraint (which involves a hyperbolic equality or inequality):

∃α, β s.t.  α = 2a_x² a_y x − 2a_x a_y² y + b_x a_y − b_y a_x
            β = 2a_x² a_y x + 2a_x a_y² y + b_x a_y + b_y a_x
            αβ ≥ a_y² b_x² − a_x² b_y² − 4a_x² a_y² c
If the coefficient of x² (that is, a_x²) is not the square of some integer, we should multiply the whole constraint by a positive integer constant which makes the coefficient of x² a square. If after this the coefficient of y² (that is, a_y²) is not a square, factoring cannot be done in integers, and therefore we give up on this constraint.
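The correctness of this transformation rests on the identity αβ = 4a_x²a_y²(lhs − c) + a_y²b_x² − a_x²b_y². The following small check (ours) confirms the resulting equivalence exhaustively for one sample choice of coefficients:

from itertools import product

# sample coefficients: the x^2 coefficient is ax*ax, the y^2 coefficient ay*ay
ax, ay, bx, by, c = 2, 3, 5, -7, 11
bound = ay*ay*bx*bx - ax*ax*by*by - 4*ax*ax*ay*ay*c
for x, y in product(range(-6, 7), repeat=2):
    lhs = ax*ax*x*x + bx*x - ay*ay*y*y - by*y + c
    alpha = 2*ax*ax*ay*x - 2*ax*ay*ay*y + bx*ay - by*ax
    beta  = 2*ax*ax*ay*x + 2*ax*ay*ay*y + bx*ay + by*ax
    assert (lhs >= 0) == (alpha * beta >= bound)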
Completing the square. A constraint of the form

a_x x² + b_x x + a_y y² + b_y y + c ≤ 0

where a_x > 0, a_y > 0, b_x, b_y and c are known integer constants and x and y are variables, is transformed into the following equivalent set of constraints (involving an elliptical equality or inequality):

∃α, β s.t.  α = 2a_x x + b_x  ∧  β = 2a_y y + b_y  ∧
            a_y α² + a_x β² ≤ a_y b_x² + a_x b_y² − 4a_x a_y c
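An analogous check (ours) for completing the square, using the identity a_yα² + a_xβ² = 4a_xa_y(quadratic part) + a_yb_x² + a_xb_y²:

from itertools import product

ax, ay, bx, by, c = 3, 2, -4, 5, -30
bound = ay*bx*bx + ax*by*by - 4*ax*ay*c
for x, y in product(range(-8, 9), repeat=2):
    lhs = ax*x*x + bx*x + ay*y*y + by*y + c
    alpha, beta = 2*ax*x + bx, 2*ay*y + by
    assert (lhs <= 0) == (ay*alpha*alpha + ax*beta*beta <= bound)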
5 Representing integer division
To simplify a constraint involving integer division we simply transform it into an equivalent polynomial constraint not involving integer division. That is, we transform a constraint L(⌊E/F⌋, ...), where E and F are affine expressions, into the polynomial constraint

∃t, α s.t. L(t, ...) ∧ tF + α = E ∧ 0 ≤ α ≤ F − 1

For example, z = ⌊xy/(2v)⌋ is transformed into ∃t, α s.t. t = 2v ∧ tz + α = xy ∧ 0 ≤ α ≤ t − 1. Then we use our regular techniques to simplify the resulting polynomial constraint.
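A brute-force check (ours) of the encoding for positive divisors: z = ⌊E/F⌋ holds exactly when a remainder r with zF + r = E and 0 ≤ r ≤ F − 1 exists:

def encodes(z, E, F):
    # exists r: z*F + r == E and 0 <= r <= F - 1
    r = E - z * F
    return 0 <= r <= F - 1

for E in range(-20, 21):
    for F in range(1, 8):
        for z in range(-25, 25):
            assert (z == E // F) == encodes(z, E, F)   # Python // is floor division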
6 Affinization

Let's consider the area described by a constraint (we discuss only ≤-inequalities here; inequalities with <, >, or ≥ operators can be converted into ≤-inequalities):

x_b ≤ x ≤ x_e ∧ y ≤ f(x)          (6)

We require this area to be convex, that is, ∀x : x_b ≤ x ≤ x_e ⇒ f″(x) ≤ 0. If it is not so, we can break the segment [x_b, x_e] into several segments such that this requirement is satisfied in each segment, and consider these segments separately. When we have a convex area, the derivative of f(x) steadily decreases as x increases, so we can break the interval [x_b, x_e] into 4 intervals (some of them empty):

x_b ≤ x ≤ x_1      ⇒  1 ≤ f′(x)
x_1 ≤ x ≤ x_0      ⇒  0 ≤ f′(x) ≤ 1
x_0 ≤ x ≤ x_{−1}   ⇒  −1 ≤ f′(x) ≤ 0
x_{−1} ≤ x ≤ x_e   ⇒  f′(x) ≤ −1
Affinization Theorem. If the above conditions are satisfied, the non-affine constraint (6) is equivalent to a conjunction of affine constraints:

⌈x_b⌉ ≤ x ≤ ⌊x_e⌋ ∧ y ≤ ⌊f(x_0)⌋ ∧
⋀_{i=⌈x_b⌉}^{⌈x_1⌉−1} line(⟨i, ⌊f(i)⌋⟩, ⟨i+1, ⌊f(i+1)⌋⟩) ≤ 0 ∧
⋀_{i=⌊f(x_1)⌋}^{⌊f(x_0)⌋−1} line(⟨⌈f_+^{−1}(i)⌉, i⟩, ⟨⌈f_+^{−1}(i+1)⌉, i+1⟩) ≤ 0 ∧          (7)
⋀_{i=⌊f(x_{−1})⌋}^{⌊f(x_0)⌋−1} line(⟨⌊f_−^{−1}(i)⌋, i⟩, ⟨⌊f_−^{−1}(i+1)⌋, i+1⟩) ≥ 0 ∧
⋀_{i=⌊x_{−1}⌋+1}^{⌊x_e⌋} line(⟨i−1, ⌊f(i−1)⌋⟩, ⟨i, ⌊f(i)⌋⟩) ≤ 0

Here the function f_+^{−1}(y) is the inverse of f(x) for x_1 ≤ x ≤ x_0, and f_−^{−1}(y) is the inverse of f(x) for x_0 ≤ x ≤ x_{−1}. The function line(⟨x_1, y_1⟩, ⟨x_2, y_2⟩) gives back an expression that is zero along the straight line passing through the points ⟨x_1, y_1⟩ and ⟨x_2, y_2⟩, positive to the left of that line (as we move from ⟨x_1, y_1⟩ to ⟨x_2, y_2⟩) and negative to the right of that line:

line(⟨x_1, y_1⟩, ⟨x_2, y_2⟩) = (x_2 − x_1)(y − y_1) − (y_2 − y_1)(x − x_1)
In the rest of this section we show how the affinization theorem is applied to hyperbolic and elliptical inequalities.
6.1 Affinizing inequalities

Hyperbolic inequalities with positive constant. To affinize the inequality xy ≥ c, where c ≥ 1, we break the domain of f(x) = c/x into two convex areas: xy ≥ c ≥ 1 ⟺ (x ≤ −1 ∧ y ≤ c/x) ∨ (x ≥ 1 ∧ y ≥ c/x). Applying the affinization theorem to each area, we get that the inequality xy ≥ c ≥ 1 is equivalent to:

(x ≥ 1 ∧ y ≥ 1 ∧
  ⋀_{i=1}^{⌈√c⌉−1} (line(⟨i, ⌈c/i⌉⟩, ⟨i+1, ⌈c/(i+1)⌉⟩) ≥ 0 ∧ line(⟨⌈c/i⌉, i⟩, ⟨⌈c/(i+1)⌉, i+1⟩) ≤ 0)) ∨
(x ≤ −1 ∧ y ≤ −1 ∧
  ⋀_{i=⌊1−√c⌋}^{−1} (line(⟨i, ⌊c/i⌋⟩, ⟨i−1, ⌊c/(i−1)⌋⟩) ≥ 0 ∧ line(⟨⌊c/i⌋, i⟩, ⟨⌊c/(i−1)⌋, i−1⟩) ≤ 0))
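Concretely, the following Python sketch (ours, not the authors' code; line and affinized are our names) generates exactly these chord constraints for xy ≥ c with c ≥ 1 and verifies the equivalence by brute force:

import math

def line(p1, p2, x, y):
    # zero on the line through p1 and p2, positive to its left, negative to its right
    (x1, y1), (x2, y2) = p1, p2
    return (x2 - x1) * (y - y1) - (y2 - y1) * (x - x1)

def affinized(c, x, y):                      # affine form of x*y >= c, c >= 1
    s = math.isqrt(c - 1) + 1                # equals ceil(sqrt(c)) for c >= 1
    pos = x >= 1 and y >= 1 and all(
        line((i, -(-c // i)), (i + 1, -(-c // (i + 1))), x, y) >= 0 and
        line((-(-c // i), i), (-(-c // (i + 1)), i + 1), x, y) <= 0
        for i in range(1, s))                # -(-c // i) is ceil(c/i)
    neg = x <= -1 and y <= -1 and all(
        line((i, c // i), (i - 1, c // (i - 1)), x, y) >= 0 and
        line((c // i, i), (c // (i - 1), i - 1), x, y) <= 0
        for i in range(1 - s, 0))            # c // i is floor(c/i) for i < 0
    return pos or neg

for c in (1, 2, 5, 12):
    for x in range(-30, 31):
        for y in range(-30, 31):
            assert (x * y >= c) == affinized(c, x, y)

For c = 5 the positive branch produces precisely the first disjunct shown in Section 2.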
l m
Hyperbolic inequalities with non-positive constant. The inequality xy 0 is equivalent to (x 0 ^ y 0) _ (x 0 ^ y 0).
If c ?1, then the inequality xy c describes non-convex area between
two hyperbola branches. We transform this inequality to negation of positiveconstant hyperbolic inequality:
xy c :(x0y0 c0) where x0 = ?x; y0 = y; c0 = 1 ? c 1
Negation of conjunct produces disjunction of several constraints.
Elliptical inequalities. Affinizing elliptical inequalities is similar to affinizing hyperbolic inequalities (see details in [MP94]).
6.2 Affinization of equalities

An equality constraint of the form xy = c or a_x x² + a_y y² = c has a finite number of integer solutions. We convert this constraint to affine form by enumerating these solutions.
Hyperbolic equalities. Let's consider the hyperbolic equality xy = c. We replace it with a disjunction of several constraints. For each t = 1 up to ⌊√|c|⌋, if t divides c then we add to the disjunction the constraints:

(x = t ∧ y = c/t) ∨ (x = c/t ∧ y = t) ∨
(x = −t ∧ y = −c/t) ∨ (x = −c/t ∧ y = −t)
For example, the equality xy = 5 is equivalent to: (x = 1 ∧ y = 5) ∨ (x = 5 ∧ y = 1) ∨ (x = −1 ∧ y = −5) ∨ (x = −5 ∧ y = −1).
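The enumeration is easy to express and test in a few lines (our sketch; hyperbolic_equality_disjuncts is our name):

import math

def hyperbolic_equality_disjuncts(c):
    """All integer solutions of x*y = c (c != 0), via divisors t <= sqrt(|c|)."""
    pairs = set()
    for t in range(1, math.isqrt(abs(c)) + 1):
        if c % t == 0:
            pairs |= {(t, c // t), (c // t, t), (-t, -(c // t)), (-(c // t), -t)}
    return pairs

print(sorted(hyperbolic_equality_disjuncts(5)))   # [(-5, -1), (-1, -5), (1, 5), (5, 1)]
for c in (5, -6, 12):
    sols = {(x, y) for x in range(-20, 21) for y in range(-20, 21) if x * y == c}
    assert sols == hyperbolic_equality_disjuncts(c)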
Elliptical equalities. Similar to hyperbolic equalities (see details in [MP94]).
6.3 Number of constraints generated
For hyperbolic inequalities and equalities the number of constraints generated is O(√c), where c is the constant from (5). As our preliminary study shows, c values are usually small, and that means that few constraints need to be generated.
Affinizing a polynomial constraint only over the feasible domain. Often we can further restrict the number of constraints generated if we know that the range of the participating variables is limited. Before generating affine constraints, we find the rectangular bounding box for x and y (the Omega test has this capability):

L_x = min x,  L_y = min y,  U_x = max x,  U_y = max y

Then constraints that do not intersect the bounding box are not generated at all.
For example, if we know that x is non-negative, then xy ≥ 2 is equivalent to the affine set of constraints x ≥ 1 ∧ x + y ≥ 3 ∧ y ≥ 1.
7 Examples
7.1 Rectangular delinearization
We start with computing p_1, the projection of (1) onto the variables involved in products (i_w, i_r, j_w, j_r, M and N), and q_1, everything else:

p_1 ≡  1 − N ≤ MNi_w + Nj_w − MNi_r − Nj_r ≤ N − 1
       1 − M ≤ j_w − j_r ≤ M − 1

We can factor and affinize the polynomial constraints. The constraints in p_1 imply N ≥ 1, so we generate only one branch of the hyperbola:

1 ≤ N ∧ 1 ≤ N(Mi_w − Mi_r + j_w − j_r + 1)   ⟹   1 ≤ Mi_w − Mi_r + j_w − j_r + 1
1 ≤ N ∧ 1 ≤ N(Mi_r − Mi_w + j_r − j_w + 1)   ⟹   1 ≤ Mi_r − Mi_w + j_r − j_w + 1

which yields Mi_w + j_w = Mi_r + j_r (the constraint 1 − M ≤ j_w − j_r ≤ M − 1 is carried along).
We again eliminate all affine variables (N, j_w, j_r):

p_2 ≡  1 − M ≤ Mi_r − Mi_w ≤ M − 1,  that is,  1 ≤ M(i_w − i_r + 1) ∧ 1 ≤ M(i_r − i_w + 1)

Affinizing, we get p_2 ≡ 1 ≤ M ∧ i_w = i_r.
Having reduced p to affine form, we restore the constraints involving the eliminated variables and simplify:

p ≡ p_2 ∧ q_2 ∧ q_1 ≡  i_w = i_r ∧ Mi_w + j_w = Mi_r + j_r
                       MNi_w + Nj_w + k_w = MNi_r + Nj_r + k_r
                       p ≤ i_w, i_r ≤ p + L − 1
                       q ≤ j_w, j_r ≤ q + M − 1
                       r ≤ k_w, k_r ≤ r + N − 1

Substituting i_r for i_w allows us to derive j_w = j_r, which in turn allows us to substitute j_r for j_w, deriving k_w = k_r. Finally we get the constraints (2).
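This conclusion is easy to confirm exhaustively for small loop bounds (our check; p = q = r = 0 is an arbitrary sample choice):

from itertools import product

p, q, r = 0, 0, 0                       # sample lower bounds
for L, M, N in product(range(1, 4), repeat=3):
    for iw, ir in product(range(p, p + L), repeat=2):
        for jw, jr in product(range(q, q + M), repeat=2):
            for kw, kr in product(range(r, r + N), repeat=2):
                collide = M*N*iw + N*jw + kw == M*N*ir + N*jr + kr
                assert collide == (iw == ir and jw == jr and kw == kr)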
7.2 Triangular delinearization
Before applying our algorithm to the problem (3), we convert the integer division by 2 into integer multiplication (writing i_1, j_1 for i_w, j_w and i_2, j_2 for i_r, j_r):

p ≡ ∃t_1, t_2, α_1, α_2 s.t.  2t_1 + α_1 = i_1² + i_1 ∧ 0 ≤ α_1 ≤ 1
                              2t_2 + α_2 = i_2² + i_2 ∧ 0 ≤ α_2 ≤ 1
                              t_1 + j_1 = t_2 + j_2 ∧ 0 ≤ i_1, i_2 ≤ N − 1
                              0 ≤ j_1 ≤ i_1 ∧ 0 ≤ j_2 ≤ i_2

Eliminating t_1, t_2, α_1 and α_2 yields

i_2 + 2j_2 + i_2² ≤ 1 + i_1 + 2j_1 + i_1²  ∧  i_1 + 2j_1 + i_1² ≤ 1 + i_2 + 2j_2 + i_2²
We now compute p_1, the projection of p onto the variables involved in products, and q_1, everything else needed so that p ≡ p_1 ∧ q_1:

p_1 ≡  0 ≤ i_1 ∧ 0 ≤ i_2
       i_2 + i_2² ≤ 1 + 3i_1 + i_1²
       i_1 + i_1² ≤ 1 + 3i_2 + i_2²

q_1 ≡  0 ≤ j_1 ≤ i_1 < N ∧ 0 ≤ j_2 ≤ i_2 < N
       i_2 + 2j_2 + i_2² ≤ 1 + i_1 + 2j_1 + i_1²
       i_1 + 2j_1 + i_1² ≤ 1 + i_2 + 2j_2 + i_2²
We factor the polynomial constraints in p_1:

i_1² + 3i_1 − i_2² − i_2 ≥ 0  ⟺  (i_1 − i_2 + 1)(i_1 + i_2 + 2) ≥ 2
i_2² + 3i_2 − i_1² − i_1 ≥ 0  ⟺  (i_2 − i_1 + 1)(i_2 + i_1 + 2) ≥ 2

and affinize the factored forms:

(i_1 − i_2 + 1)(i_1 + i_2 + 2) ≥ 2  ⟺  i_1 − i_2 + 1 ≥ 1 ∧ i_1 + i_2 + 2 ≥ 1 ∧ (i_1 − i_2 + 1) + (i_1 + i_2 + 2) ≥ 3
(i_2 − i_1 + 1)(i_2 + i_1 + 2) ≥ 2  ⟺  i_2 − i_1 + 1 ≥ 1 ∧ i_2 + i_1 + 2 ≥ 1 ∧ (i_2 − i_1 + 1) + (i_2 + i_1 + 2) ≥ 3
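These factorings are easy to confirm with a computer algebra system; a quick sympy check (ours):

import sympy as sp

i1, i2 = sp.symbols("i1 i2")
lhs1 = i1**2 + 3*i1 - i2**2 - i2
lhs2 = i2**2 + 3*i2 - i1**2 - i1
assert sp.expand((i1 - i2 + 1)*(i1 + i2 + 2) - 2 - lhs1) == 0
assert sp.expand((i2 - i1 + 1)*(i2 + i1 + 2) - 2 - lhs2) == 0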
Replacing these two polynomial constraints with their affine equivalents and simplifying yields p_1 ≡ 0 ≤ i_1 = i_2. Since p_1 is completely affine, we are done. We combine p_1 and q_1 and simplify, yielding:

p ≡ p_1 ∧ q_1 ≡  i_1 = i_2 ∧ 0 ≤ j_2 ≤ i_2 < N ∧ 0 ≤ j_1 ≤ i_1 < N
                 i_2 + 2j_2 + i_2² ≤ 1 + i_1 + 2j_1 + i_1²
                 i_1 + 2j_1 + i_1² ≤ 1 + i_2 + 2j_2 + i_2²

By substituting i_1 for i_2, we get the final result (4):

p ≡  i_2 = i_1 ∧ 0 ≤ j_1, j_2 ≤ i_1 < N ∧ 2j_2 ≤ 1 + 2j_1 ∧ 2j_1 ≤ 1 + 2j_2
  ≡  i_2 = i_1 ∧ j_1 = j_2 ∧ 0 ≤ j_1, j_2 ≤ i_1 < N
8 Related Work
Polynomial constraint simplification vs. delinearization. In this paragraph we compare our polynomial constraint simplification algorithm with symbolic delinearization [Mas92].
First, we prove that our algorithm exactly simplifies all problems that can be handled by symbolic delinearization. The essence of delinearization is transforming the constraints

Ne_1 + e_2 = 0 ∧ 1 − N ≤ e_2 ≤ N − 1    to    e_1 = 0 ∧ e_2 = 0

where e_1, e_2 are expressions and N is a variable. Using our algorithm, we substitute e_2 = −Ne_1 into the inequality 1 − N ≤ e_2 ≤ N − 1. Factoring and simplifying produces N(1 − e_1) ≥ 1 ∧ N(1 + e_1) ≥ 1. Affinizing both inequalities we get e_1 ≤ 0 ∧ e_1 ≥ 0, and therefore e_1 = 0. Substituting this equality into the original problem, we finally prove that e_1 = e_2 = 0.
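The underlying fact is easy to confirm by brute force over a sample range (our check): with N ≥ 1, the constraints Ne_1 + e_2 = 0 ∧ 1 − N ≤ e_2 ≤ N − 1 force e_1 = e_2 = 0.

for N in range(1, 30):
    for e1 in range(-30, 31):
        e2 = -N * e1                     # N*e1 + e2 = 0
        if 1 - N <= e2 <= N - 1:
            assert e1 == 0 and e2 == 0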
Symbolic delinearization has several serious restrictions that are not present in our algorithm:
- It can handle only subscript functions linearized according to FORTRAN rules: a reference A(i_1, i_2, ..., i_n) to an array A(0:D_1, 0:D_2, ..., 0:D_n) is converted to A(i_1 + D_1 i_2 + ... + D_1 ⋯ D_{n−1} i_n). We call this rectangular linearization. Triangular linearization (see Section 1.2), which is used quite often in scientific codes, is not handled.
- Even for the case of rectangular linearization it cannot handle constraints imposed by a triangular iteration space.
Parafrase-2. In [HP91] the authors propose to use a symbolic version of Banerjee's inequalities for dependence testing, but it is known that Banerjee's inequalities do not detect independence in the case of linearized subscript functions [Mas92]. To alleviate the inexactness of Banerjee's inequalities, Haghighat and Polychronopoulos propose to detect monotonically increasing and decreasing subscript functions using the finite differences method [HP93]. When the subscript function is monotonically changing, the reference cannot hit the same memory cell on the next iteration, and therefore no output dependence can exist from the reference to itself. Our induction variable recognition system also can discover that the closed form of an induction variable is monotonically changing, and we are able to use this fact to prove the absence of the output dependence.
However, when monotonicity cannot be proven (as happens, for example, for the program in Figure 2), the Haghighat and Polychronopoulos finite difference method cannot be used, and their techniques can prove neither that the flow dependence in this example is loop-independent nor that the output dependence does not exist.
Other approaches. A number of computer algebra books and papers [KL92, DST88] are devoted to solving polynomial constraints over the complex and real numbers. Since we are interested in polynomial constraints over the integers, we cannot directly use their results. In [PB94] the authors discuss approximation of quadratic constraints with linear constraints, which is similar to our work. The factoring techniques that they describe may be useful within the framework of our algorithm.
9 Conclusion
We have presented an algorithm that exactly simplifies conjunctions of affine and polynomial constraints over integers (polynomial problems). That is, the algorithm produces either an equivalent affine problem or an equivalent polynomial problem whose order is lower than that of the original problem. In the process our algorithm can detect that a polynomial problem has no solutions. If the problem is completely affinized, then the Omega test answers the satisfiability question exactly. Otherwise detection of unsatisfiability is not guaranteed.
Our algorithm is extensible: we can add more sophisticated factoring techniques to it, and using the affinization theorem we can consider affinizing constraints of order 3 and more. We think that extension of our algorithm should be guided by the practical needs of the concrete application.
More experiments are needed to establish that the concrete set of techniques described in this paper is sufficient for affinization of the polynomial problems that arise in parallelizing compiler analyses (or to extend this set accordingly).
References
[AK87] J. R. Allen and K. Kennedy. Automatic translation of Fortran programs to vector form. ACM Transactions on Programming Languages and Systems, 9(4):491–542, October 1987.
[DST88] J. H. Davenport, Y. Siret, and E. Tournier. Computer Algebra: Systems and Algorithms for Algebraic Computation. Academic Press, 1988.
[HP91] M. Haghighat and C. Polychronopoulos. Symbolic dependence analysis for high-performance parallelizing compilers. In Advances in Languages and Compilers for Parallel Processing, August 1991.
[HP93] M. Haghighat and C. Polychronopoulos. Symbolic analysis: A basis for parallelization, optimization and scheduling of programs. In Utpal Banerjee et al., editors, Languages and Compilers for Parallel Computing, LNCS vol. 768. Springer-Verlag, August 1993. Proceedings of the Sixth Annual Workshop on Programming Languages and Compilers for Parallel Computing.
[KL92] Deepak Kapur and Yagati Lakshman. Elimination methods: an introduction. In Bruce Donald, Deepak Kapur, and Joseph Mundy, editors, Symbolic and Numerical Computation for Artificial Intelligence. Academic Press, 1992.
[Mas92] Vadim Maslov. Delinearization: an efficient way to break multiloop dependence equations. In ACM SIGPLAN '92 Conference on Programming Language Design and Implementation, San Francisco, California, June 1992.
[MP94] Vadim Maslov and William Pugh. Simplifying polynomial constraints over integers to make dependence analysis more precise. Technical Report CS-TR-3109.1, Dept. of Computer Science, University of Maryland, College Park, February 1994.
[PB94] Gilles Pesant and Michel Boyer. Linear approximations of quadratic constraints. In Principles and Practice of Constraint Programming Workshop, May 1994.
[Pug92] William Pugh. The Omega test: a fast and practical integer programming algorithm for dependence analysis. Communications of the ACM, 35(8):102–114, August 1992.
[PW92] William Pugh and David Wonnacott. Going beyond integer programming with the Omega test to eliminate false data dependences. Technical Report CS-TR-3191, Dept. of Computer Science, University of Maryland, College Park, December 1992. An earlier version of this paper appeared at the ACM SIGPLAN PLDI'92 conference.