Making a Success out of Early Failures

Abhik Roychoudhury
C. R. Ramakrishnan
I. V. Ramakrishnan
R. C. Sekar

Abstract
Promoting early failure of unsuccessful computations is a powerful optimization that enhances determinism in the evaluation of logic programs. This optimization has two principal components: 1) identifying and extracting conditions that yield information about the eventual success or failure of predicates, and 2) exploiting these conditions to avoid unnecessary computation. For the first part, sophisticated compile-time global analysis methods exist that compute the necessary conditions for a predicate and its constituent clauses to succeed. In this paper we address the other problem, namely the exploitation of necessary conditions. Specifically, we develop a technique for selecting tests from necessary conditions based on an analysis of their costs and benefits. The resulting optimized program does not create any more choice points or failure paths than the original program. More importantly, success path lengths in the optimized program can never grow any longer than the corresponding paths in the original program. We also discuss possible refinements to improve our optimization.
1 Introduction
The ability of logic programming languages such as Prolog to try alternative computation paths is a powerful and attractive feature. At the same time, many predicates yield solutions on only a few of the alternative clauses defining them. In such cases, backtracking through all of the clauses can lead to a significant amount of wasteful computation on paths that eventually fail. To minimize this overhead, failure-bound branches of the computation must be identified as early as possible and avoided, i.e., we want to promote early failure. Promotion of early failure will in turn lead to detection and exploitation of determinacy.
A lot of research effort has focused on automatic detection and exploitation of determinacy. Some of the early approaches rely on the presence of cuts [15, 24], making use of determinacy information made explicit by the programmer. A more general notion of determinacy, namely functionality [7, 8], deems a predicate functional when it can yield at most one distinct answer. The notion of mutual exclusion [17] defines a predicate to be mutually exclusive when at most one of its clauses is ever applicable for computing solutions. An alternative approach is to optimize the backtracking operation itself, thereby reducing its cost. Such optimizations are broadly referred to as shallow backtracking [2, 10, 18, 26].
However, none of these methods can be used to systematically extract determinacy information that is not immediately available within a clause. For extraction of determinacy information from deeper levels of a program, we need a comprehensive approach based on global analyses that compute necessary conditions [4, 5, 23] of the program clauses. At run time, at every backtracking point, only those clauses whose necessary conditions (potentially disjunctions) are satisfiable are selected. Thus the notion of determinacy supported in this paper is one of mutual exclusion. However, in our framework, we explicitly carry around the necessary conditions, so our general notion is not "determinacy" but rather "early failure".

(The first three authors are at: Dept. of Computer Science, SUNY at Stony Brook, Stony Brook, NY 11794-4400. E-mail: [email protected]. R. C. Sekar's address is: Dept. of Computer Science, Iowa State University, Ames, IA 50010. E-mail: [email protected].)
Although necessary-condition based approaches have the potential for improving determinism, effective exploitation of the information contained in necessary conditions is difficult. Naively testing whether a path of computation has the potential to succeed before beginning the search leads to performance degradation due to repeated testing of necessary conditions. Moreover, additional choice points may be created by the introduction of these conditions. In this paper, we develop a systematic framework in which we can reason about the costs and benefits of introducing tests based on necessary conditions. Using this framework, we develop a technique that provides formal guarantees that in the optimized program no new choice points or failure paths are added, and any new tests introduced are paid for by eliminating equivalent tests that exist deeper down in the program.
Promotion of early failure has been studied in other domains as well, in particular for constraint logic programs (CLP) [14] and for bottom-up evaluation of LP/CLP [11, 12, 25]. The major difference between methods like [12, 14] and ours is that our method is conservative: tests are introduced only if they can be paid back, whereas these methods eagerly introduce tests and later remove redundant ones. Section 5 compares our approach with these methods with the help of examples.
The problem addressed by this paper is related to other well-studied research directions such as partial evaluation and program specialization [13, 16, 21, 22]. One important difference between our method and such techniques is that partial evaluation does not give any performance guarantees for the transformed program w.r.t. the original program. Another major difference is that partial evaluators typically do not handle disjunctions: when one computation path is being specialized, the partial evaluator has no knowledge of the other computation paths. Section 5 discusses these issues in detail.
Hence, the main contributions of this paper can be summarized as follows: i) the development of a framework that exploits the determinism exposed by analysis techniques which compute "success patterns", "necessary conditions", etc., while providing formal guarantees; ii) explicit handling of all necessary conditions as disjunctions in both the analysis and the optimization phase. This enables us to perform multivariant specialization, and hence we can obtain substantial speedups while still giving formal guarantees about the transformed program.
The rest of the paper is organized as follows. The following section gives the notations and
terminology used throughout the paper. In Section 3 we present our program transformation.
Theorems for the soundness and termination of our technique as well as performance guarantees
appear in Section 4. Section 5 provides a comparison of our approach with other related work.
Finally, extensions to our technique and possible future work are discussed in Section 6.
2 Notations and Conventions
We use V to denote the set of variables, F the set of uninterpreted function symbols, P the set of predicate symbols, and T the set of terms built in the usual way. We denote variables by X, Y, Z; function symbols by f, g, h; terms by t, u, v, w; predicate symbols by p, q, r, s; and clauses of a predicate by γ, δ. Substitutions are denoted by θ, σ; the notation [X ↦ t] means that X assumes the substitution t. The value of variable X under substitution θ is denoted by θ(X). All these symbols may appear with or without subscripts and superscripts. Lists of variables are denoted by the symbols X̄, Ȳ, Z̄; _ denotes an anonymous variable.
An elementary unification operation is of the form X = f(X̄), where f is an n-ary function symbol in F and the cardinality of X̄ is n. A program is denoted by the symbols P, Q, P', Q', P'', Q'', etc. Each clause in a program is of the form p(X̄) :- q1(X̄1), ..., qn(X̄n), where the unifications in the body are all elementary unification operations. Note that this form does not restrict the set of programs we consider, since any program can be readily transformed into this form. Clauses are denoted by the symbols γ, δ (possibly subscripted).
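The transformation into this normal form can be illustrated with a small sketch (not part of the paper's algorithm; the tuple-based term representation and the fresh-variable naming scheme are assumptions for illustration):

```python
# Illustrative sketch: flattening a nested term into elementary unifications
# of the form X = f(X1, ..., Xn). Terms are tuples ('f', arg1, ...);
# variables are plain strings; fresh variables are named _V1, _V2, ...

def flatten(var, term):
    """Return (lhs, rhs) pairs equating `var` with `term`, one per function symbol."""
    counter = [0]
    def go(var, term):
        if isinstance(term, str):                 # variable-to-variable binding
            return [] if var == term else [(var, term)]
        f, *args = term
        fresh, eqs = [], []
        for a in args:
            if isinstance(a, str):
                fresh.append(a)                   # variable argument stays as-is
            else:
                counter[0] += 1
                v = f"_V{counter[0]}"
                fresh.append(v)
                eqs += go(v, a)                   # recursively flatten nested term
        return [(var, (f, *fresh))] + eqs
    return go(var, term)

# X = f(g(Y), Z) flattens to X = f(_V1, Z) together with _V1 = g(Y):
print(flatten("X", ("f", ("g", "Y"), "Z")))
```

Each resulting pair has a variable on the left and either a variable or a term whose arguments are all variables on the right, matching the elementary form above.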
An elementary constraint has the form p(t1, t2, ..., tn), where each ti ∈ T and p ∉ P. Constraints (denoted by φ and ψ) are built using conjunction and disjunction over elementary constraints. A constraint φ(X̄) is parametrized w.r.t. the variables in X̄. When there is no ambiguity, φ(X̄) may be written as φ. Logical implication (denoted by "⇒"), defined as usual in terms of substitutions, forms a partial order over the set of constraints. Satisfiability of a constraint is the standard notion; e.g., (X = a ∨ X = b) ∧ ground(X) means that for the constraint to be satisfiable, X can take either a or b as a substitution and must be ground. We denote the application of a substitution θ to a constraint φ by φ[θ]; e.g., (X = a ∧ Y = b)[X ↦ a] is (Y = b).
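The application of a substitution to a constraint can be sketched as follows (a hedged illustration, not from the paper; it assumes constraints built from simple equality atoms represented as nested tuples):

```python
# Hedged sketch: applying a substitution to a constraint and simplifying,
# mirroring the example (X = a AND Y = b)[X -> a] = (Y = b).
# Constraints are ('=', var, value), ('and', c1, c2) or ('or', c1, c2);
# the Python booleans True/False stand for the trivially (un)satisfiable constraint.

def apply_subst(c, theta):
    """Apply substitution theta (a dict) to constraint c and simplify."""
    if c is True or c is False:
        return c
    if c[0] == '=':
        _, var, val = c
        if var in theta:
            return theta[var] == val     # becomes a truth value once var is bound
        return c
    op, l, r = c
    l, r = apply_subst(l, theta), apply_subst(r, theta)
    if op == 'and':
        if l is False or r is False:
            return False
        if l is True:
            return r
        if r is True:
            return l
        return ('and', l, r)
    if l is True or r is True:           # op == 'or'
        return True
    if l is False:
        return r
    if r is False:
        return l
    return ('or', l, r)

print(apply_subst(('and', ('=', 'X', 'a'), ('=', 'Y', 'b')), {'X': 'a'}))  # ('=', 'Y', 'b')
```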
We assume familiarity with the standard notions of SLD derivations and SLD derivation trees. We use ω to denote derivations. An answer substitution is the substitution computed for the variables in the goal by a successful derivation. For simplicity of discussion, we consider positive programs without control features (such as cut) or side effects. A Prolog-style strategy of evaluation is assumed.
3 The transformation algorithm
3.1 Preliminary Definitions
Definition 1 (Clause-Condition) A constraint φ is a clause-condition for a clause γ iff the following holds: for all successful derivations ω with answer substitution θω, if γ is used in ω then φ[θω] is true.
Definition 2 (Annotated Program) An annotated program is a set of annotated clauses of the form q(X̄) :- N | B, where q(X̄) is the head, N is the neck, and B is the body. Furthermore, N ≡ (φ, D), where φ is the clause-condition and D is a set of tests.
Definition 3 (Context) Let ω be a derivation and θω be any substitution computed by ω upon reaching a program point ρ. The context at ρ, denoted Cρ, is a constraint such that for all θω, Cρ[θω] is true.

We use the symbols C, C', C'' (possibly subscripted or superscripted) to denote contexts.
Definition 4 (Success-constraint) The success-constraint of a program builtin, say built_in(X̄), denoted sc(built_in(X̄)), is a constraint such that sc(built_in(X̄)) is true whenever built_in(X̄) succeeds.
Transform(P) returns Pt
1. for each clause γ ∈ P do Dγ := ∅   /* Initialize the D value of each clause */
2. Pt := P
3. for each unmarked clause γ ∈ Pt do
4.     while γ is unmarked do
5.         (introduced, Pt) := IntroduceTest(γ, Pt)
6.         if (introduced = false) then
7.             mark γ as "done"; remove the neck of γ
8.     end
9. return Pt

Figure 1: Algorithm Transform
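The control structure of Transform can be rendered as a minimal Python sketch. This is illustrative only: clauses are modeled as dictionaries, and introduce_test is a stub standing in for the real IntroduceTest (which would also generate new specialized clauses, omitted here).

```python
# A minimal Python rendering of the Transform loop of Figure 1. Clauses are
# modeled as dictionaries; introduce_test is a stub standing in for the real
# IntroduceTest (which would also generate specialized clauses, omitted here).

def introduce_test(clause, program):
    """Stub: move one pending neck test into the body, if any remain."""
    if clause['neck']:
        clause['body'].append(clause['neck'].pop())
        return True, program
    return False, program                 # no test could be introduced

def transform(program):
    for clause in program:                # initialize: every clause unmarked
        clause['marked'] = False
    for clause in program:
        while not clause['marked']:
            introduced, program = introduce_test(clause, program)
            if not introduced:
                clause['marked'] = True   # mark "done" and remove the neck
                clause['neck'] = []
    return program

prog = [{'neck': ['t1', 't2'], 'body': ['q(X)']}]
print(transform(prog)[0]['body'])         # ['q(X)', 't2', 't1']
```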
3.2 Technical Exposition
The starting point for our transformation is a program that has been annotated with the necessary conditions for a predicate and its constituent clauses to succeed. In particular, each clause is annotated with a clause-condition that must be satisfied in order for the clause to be used in a successful SLD-derivation. These annotations can be derived using abstract interpretation techniques [3] such as those of [5, 23].
The transformation algorithm is composed of a top-level procedure Transform, which iterates through the program clauses and moves tests into their bodies. The actual movement of tests is performed by IntroduceTest. This procedure uses a function called Select to identify the tests to be introduced, and then invokes AbsorbTest to determine whether the cost of the promoted test can be absorbed further down. If so, the test is introduced; otherwise, an alternative test is selected for introduction. The procedure terminates either when no test can be introduced or when a test is successfully introduced. A detailed description of these procedures is given below.
Transform(P): It takes the annotated program P as input and returns a transformed program, which is a Horn-logic program without the annotations. Transform repeatedly picks a clause from P and attempts to move as many tests from the neck into the body as possible. This may result in the generation of new (annotated) clauses, which are also added to P. The process continues until no more tests can be moved from the neck of any clause. Tests are moved from the neck using IntroduceTest, described below.
IntroduceTest(γ, P): It takes an annotated clause γ in P as input and returns another annotated program with specialized clauses. From γ's clause-condition it first selects a set of mutually exclusive constraints {ψ1, ψ2, ..., ψn} using Select.¹ Each ψi is a conjunction of elementary constraints. Next we extract the corresponding test ti from each ψi, using ExtractTest, and introduce them into the body of γ. Once the bunch of tests is selected, IntroduceTest checks, using AbsorbTest, whether the cost of introducing them will be completely paid back. In this case γ ≡ q(X̄) :- (φ, D) | B is replaced by n specialized clauses of the form γi ≡ q(X̄) :- (φ, D ∪ {ti}) | ti, B'i, where B'i is the specialized body and ti = ExtractTest(ψi). If any new predicate is created due to the specialization of B, then

¹Two constraints are mutually exclusive when they are not simultaneously satisfiable.
IntroduceTest(γ, P) returns (introduced, Pt)
1.  Let γ be of the form qC(X̄) :- (φ, D) | B
2.  introduced := false
3.  while (introduced = false ∧ {ψ1, ..., ψn} := Select(φ, C, D) ≠ nil) do
4.      Q' := nil; introduced := true
5.      for i := 1 to n do
6.          ti := ExtractTest(ψi)
7.          (B', C', T', P') := AbsorbTest(B, C ∧ ψi, {ti}, P)
8.          if T' is empty then
9.              Let δ be the clause qC(X̄) :- (φ, D ∪ {ti}) | ti, B'
10.             Q' := Q' ∪ {δ} ∪ (P' − P)
11.         else
12.             D := D ∪ {ti}
13.             introduced := false; break   /* Select alternative tests */
14.         endif
15.     endfor
16. endwhile
17. if introduced then
18.     Pt := P ∪ Q' − {γ}
19. else
20.     Pt := P
21. return (introduced, Pt)

Figure 2: Algorithm IntroduceTest
the clauses of those newly-created predicates are also generated and put in the transformed program that is returned by IntroduceTest. In case any of the ti's cannot be absorbed, IntroduceTest selects another bunch of mutually exclusive constraints and repeats the process. Finally, if no more tests can be moved from the clause-condition of γ into its body, then IntroduceTest(γ, P) reports failure and Transform picks up another clause from P. We now specify:
Requirements for Select: Whenever Select(φ, C, D) returns a set of constraints {ψ1, ..., ψn}, the following conditions must hold:

Soundness of Specialization: φ ⇒ ⋁(i=1..n) ψi
Avoiding choice points: ∀θ (C[θ] ⇒ ∀i ∀j (i ≠ j ⇒ ¬(ψi ∧ ψj)[θ]))
Compatibility with context: ∀i, (φ ∧ C ∧ ψi) is satisfiable
Non-redundancy: ∀i ((ti ∉ D) ∧ ¬(C ⇒ ψi))
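These four requirements can be checked mechanically in simple cases. The sketch below is ours, under the simplifying assumption that each constraint is the set of values a single variable X may take from a finite domain; it is a brute-force illustration, not the paper's Select.

```python
# Brute-force check of the four requirements on Select's output, under the
# simplifying assumption (ours, for illustration) that each constraint is the
# set of values a single variable X may take from a finite domain.

def check_select(phi, context, candidates, prior_tests):
    """Return True iff candidates = {psi_1, ..., psi_n} meets all four requirements."""
    union = set().union(*candidates) if candidates else set()
    sound = phi <= union                                    # phi => psi_1 v ... v psi_n
    exclusive = all(not (candidates[i] & candidates[j])     # avoiding choice points
                    for i in range(len(candidates))
                    for j in range(i + 1, len(candidates)))
    compatible = all(phi & context & psi                    # satisfiable with phi and C
                     for psi in candidates)
    nonredundant = all(psi not in prior_tests               # not already in D ...
                       and not context <= psi               # ... and not implied by C
                       for psi in candidates)
    return sound and exclusive and compatible and nonredundant

phi = {'a', 'b'}                 # clause-condition: X = a or X = b
ctx = {'a', 'b', 'c'}            # context gives no extra information
print(check_select(phi, ctx, [{'a'}, {'b'}], []))   # True: a sound, exclusive split
print(check_select(phi, ctx, [{'a'}], []))          # False: phi not covered
```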
We now describe AbsorbTest. Rather than performing absorption checks followed by specialization as two separate phases, our design interleaves the two. In our algorithmic description of AbsorbTest, project(φ, X̄) denotes the projection of constraint φ onto the set of variables X̄, and cost(T, θ) denotes the cost of executing a set of tests T under substitution θ.
AbsorbTest(B, C, T, P): It is invoked by IntroduceTest to verify whether the cost of introducing T = {ti} under context C from the neck of a clause in program P into its body B will be completely paid back. As shown in Figure 3, AbsorbTest is defined as a set of equations. We assume that the set of equations defining AbsorbTest is evaluated by an underlying top-down
AbsorbTest(true, C, T, P) = (true, C, T, P)

AbsorbTest((r(X̄), B), C, T, P) = ((r'(X̄'), B'), C'', T'', P'')
    where (r'(X̄'), C', T', P') = AbsorbLit(r(X̄), project(C, X̄), T, P),
          (B', C'', T'', P'') = AbsorbTest(B, C' ∧ C, T', P').

AbsorbClauses([], C, T, P) = ([], false, nil, P)

AbsorbClauses([R|R̄], C, T, P) = ([R'|R̄'], C'', T'', P'')
    where R ≡ (φ, D) | B(R),
          (B'(R), C(R), T(R), P') = AbsorbTest(B(R), C, T, P),
          (R̄', C(R̄), T(R̄), P'') = AbsorbClauses(R̄, C, T, P'),
          R' ≡ (φ, D) | B'(R),  T'' = T(R) ∪ T(R̄),  C'' = C(R) ∨ C(R̄).

AbsorbLit(built_in(X̄), C, T, P) = (built_in'(X̄'), C', T', P)
    where (built_in'(X̄'), T') = Remove(T, built_in(X̄), C),
          C' = C ∧ sc(built_in(X̄)).   /* Success-constraint of builtin is added */

AbsorbLit(r(X̄), C, T, P) = (fail, C, T, P)
    where there is no clause R of r in P s.t. φ ∧ C is satisfiable
    (φ denotes the clause-condition of R).

AbsorbLit(r(X̄), C, T, P) = (rC(X̄'), C(R̄), T(R̄), P(R̄) ∪ R'')
    where R̄ ≠ [] is the list of necks and bodies of those clauses of r in P for which
    φ ∧ C is satisfiable (φ denotes the clause-condition),
          (R̄', C(R̄), T(R̄), P(R̄)) = AbsorbClauses(R̄, C, T, P),
          X̄' = ⋃ over R' ∈ R̄' of depend(X̄, C, B(R')),  R'' = { rC(X̄') :- R'  |  R' ∈ R̄' }.

Figure 3: Definition of AbsorbTest
fixpoint engine, such as XSB [20]. The first equation is applicable when we have a null body, i.e., B = true. When B is nonempty, the second equation is used; it essentially iterates over the literals in B, invoking AbsorbLit for each of them. AbsorbLit(r(X̄), C, T, P) again has to deal with two cases. If r(X̄) is a builtin, then it is specialized and the set of tests not yet absorbed is computed using Remove. For user-defined r(X̄), we identify R̄, the set of clauses of r whose clause-conditions are compatible with C. If no such clauses exist, then we have reached a failure path, and therefore we specialize r(X̄) to fail. Otherwise, the clauses in R̄ are specialized using AbsorbClauses(R̄, C, T, P). This iterates over the clauses in R̄, specializing each of them by invoking AbsorbTest on the body. Thus, for each of the clauses, it finds out which pieces of the test are absorbed. A piece is considered absorbed if and only if it is absorbed in all the clauses of R̄. Once the specialized clause bodies are returned by AbsorbClauses, AbsorbLit computes X̄', the arguments of the specialized literal corresponding to r(X̄), using the notion of a dependent variable set (see [19] for the precise definition). We now specify:
Requirements for Remove: Whenever Remove(T, b(X̄), C) returns (b'(X̄'), T') (where b and b' are program builtins), the following hold:

Soundness of Removal: T' ⊆ T
Soundness of Specialization: ∀θ (C[θ] ⇒ (b'(X̄')θ ⇔ b(X̄)θ))
Soundness of Test Absorption: ∀θ (C[θ] ⇒ cost(T, θ) − cost(T', θ) ≤ cost(b(X̄), θ) − cost(b'(X̄'), θ))
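The test-absorption inequality can be illustrated with a toy cost model. The sketch below is ours: the cost table uses made-up unit weights, and the predicate names (including the specialized memberchk_a) are hypothetical.

```python
# Illustrative sketch of the test-absorption inequality: the cost saved by
# removing tests from T must not exceed the cost saved by specializing the
# builtin. Unit weights and predicate names are made up, purely for illustration.

COST = {'atom(X)': 1, 'X == a': 1, 'memberchk(X, L)': 3, 'memberchk_a(L)': 2}

def absorption_is_sound(tests_before, tests_after, builtin, builtin_spec):
    """Check cost(T) - cost(T') <= cost(b(X)) - cost(b'(X'))."""
    cost = lambda ts: sum(COST[t] for t in ts)
    saved_in_tests = cost(tests_before) - cost(tests_after)
    saved_in_builtin = COST[builtin] - COST[builtin_spec]
    return saved_in_tests <= saved_in_builtin

# Dropping the test 'X == a' is paid for by specializing memberchk/2:
print(absorption_is_sound({'X == a'}, set(), 'memberchk(X, L)', 'memberchk_a(L)'))  # True
```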
4 Termination, Soundness and Performance guarantees
We make use of the usual fixpoint construction approach for proving termination and performance guarantees of our transformation.
4.1 Termination
The termination of the transformation algorithm is guaranteed by the following theorem (proof
appears in [19]).
Theorem 1 Let P be any annotated logic program. Then the computation Transform(P) terminates in a finite number of steps.
4.2 Performance Guarantee
We have a notion of sound program transformation (see [19] for the precise definition) which is based on the concept of a resolution tree: an SLD tree in which the edge between any two nodes N1 and N2 is labelled by the operations that need to be performed in going from N1 to N2 (see [19] for the formal definition). We define Succ(P, G, C) as the set of root-to-leaf success paths in the resolution tree whose answer substitution θ is such that C[θ] is satisfiable. To define a sound program transformation, we define the concept of a similarity mapping. The similarity mapping I[(P,G,C) → (P',G')] is a bijective mapping from Succ(P, G, C) to Succ(P', G', C') (where G' is the specialization of G w.r.t. C) with certain characteristics (see [19] for the complete definition). We now introduce sound program transformation as:

Definition 5 (Sound Transformation) Let P be a program and G be a goal. Then P' is a sound transformation of P over G iff there exists I[(P,G,true) → (P',G)].

Using this definition of sound transformation, we establish that the cost of success paths in the transformed program will be no higher than that of the corresponding paths in the original program. To establish this result we develop the notion of bounded success paths. Informally, a sound transformation P' of P over G is said to bound the success paths of G in P (denoted P' ⪯G P) provided that for any root-to-leaf success path E in Succ(P, G, C) (where C holds whenever G is invoked), we can guarantee that the length of I[(P,G,C) → (P',G)](E) does not exceed the length of E. Our performance guarantee is given by the following theorem (proof appears in [19]).

Theorem 2 (Performance Guarantee) Let P be an annotated program and P' = Transform(P). Then for all qC ∈ PredSet(P) and for all θ, C[θ] ⇒ P' ⪯qC(X̄) P.
5 Related Work
5.1 Notions of determinacy for top-down evaluation of LP
One notion of determinacy was used by Mellish [15], where a predicate is defined to be determinate if any goal involving that predicate can never return "more than one possible solution". Sawamura and Takeshima [24] use the same notion when they define determinacy as "succeeding at most once". In both of these works, the determinacy detection methods are heavily dependent on user-supplied cuts, which essentially encourages a less declarative style of programming.
The dependence on operational constructs like cuts is much reduced in the notion of functionality by Debray and Warren [7]. Cardinality analysis by Braem et al. [1] extends the idea of functionality analysis by estimating the number of answers of predicates. Among the different existing Prolog systems, Mercury has integrated determinacy analysis into its compiler [9]. This notion of determinacy analysis, where again the number of answers of a user-defined predicate is what matters, also crops up in the work of Debray and Hermenegildo on non-failure analysis [6]. Our method fundamentally differs from these methods in that, rather than creating a classification of the program predicates (such as functional/non-functional), we compute the necessary condition of every program clause and then use these conditions selectively to promote early failure/determinacy. This enables us to promote early failure even if a predicate cannot be inferred to be strictly "determinate".
Shallow backtracking methods (studied by Carlsson, Hickey and Mudambi, Van Roy et al., and Zhou et al. [2, 10, 18, 26]) try to make the backtracking operation cheaper when backtracking occurs due to failure of head unification. However, none of these schemes can propagate determinacy information across user-defined predicates, which would be essential for optimizing deep backtracking.
The idea of mutual exclusion by Post [17] is a more operational notion of determinacy, where the concern is the number of applicable clauses rather than the number of answers of a predicate. This notion is closest to our method of promoting determinacy. But here also the authors have resorted to a classification of the program predicates rather than systematically promoting early failure in every clause. Moreover, the works on mutual exclusion analysis do not allow for the extraction of determinacy information hidden under recursive calls, as in Figure 4.²

p([a]).
p([a|X]) :- p(X).

Figure 4: Illustrative example

Only by looking at the recursive call in the second clause can we infer that the second clause succeeds when its only argument is of the form [a,a|Y].
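This inference can be checked executably. The following is a Python model of Figure 4 (ours, for ground list arguments only), confirming that the second clause succeeds exactly on arguments of the form [a,a|Y]:

```python
# A quick executable check (a Python model of Figure 4, ground lists only)
# that the second clause of p/1 can only succeed on arguments starting
# [a, a | _], which is exactly the necessary condition inferred above.

def p(xs):
    """Models: p([a]).  p([a|X]) :- p(X)."""
    if xs == ['a']:
        return True                                      # first clause
    return len(xs) > 0 and xs[0] == 'a' and p(xs[1:])    # second clause

def p_second_clause(xs):
    """Succeeds only via the recursive (second) clause."""
    return len(xs) > 0 and xs[0] == 'a' and p(xs[1:])

print(p_second_clause(['a', 'a']))   # True: matches [a, a | Y]
print(p_second_clause(['a']))        # False: only the first clause applies
```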
5.2 \Early Failure" in other evaluation strategies
The problem of early failure has been studied for CLP by Marriott and Stuckey [14]. Srivastava and Ramakrishnan [25] use constraint propagation techniques to achieve early failure for bottom-up evaluation. Their method, however, suffers from possible non-termination problems. These problems are avoided by Kemp and Stuckey [12], who perform the analysis on an abstract domain.
The techniques employed in [12, 14] start with a source program, generate an intermediate program in which tests are "eagerly" introduced, and finally eliminate tests that can be shown to be redundant. Note that unlike our "conservative" approach, where tests are introduced only if they can be paid back (by the elimination of equivalent tests deeper down), in the eager approach tests can be repeated lower down. However, in the CLP context, the gains achieved through early testing can be substantial. On the other hand, our conservative approach is more appropriate for top-down evaluation of Prolog programs, since redundant testing is indeed a factor that can lead to performance degradation.

²The calling mode of p/1 is (ground).
Moreover, unlike in these works, the necessary conditions we compute are potentially disjunctions, so our specialization is inherently more aggressive. As an example, consider the following program:
p(X,Y) :- X = a, q(X,Y).
r(X,Y) :- X = b, q(X,Y).
q(X,Y) :- X = Y.
Let the calling modes of p/2 and r/2 be (ground, ground). Here q/2 is called in two different contexts, and therefore any method which adds and removes constraints that are true for all possible calling patterns (such as [14]) will not be able to evaluate away q/2. In our method, we compute the context of q(X,Y) as (X = a) ∨ (X = b), and hence generate two specialized versions of q/2, as shown below.
p(X,Y) :- X = a, Y = a, q1.
r(X,Y) :- X = b, Y = b, q2.
q1.
q2.
The calls to q1 and q2 can then be removed straightforwardly.
5.3 Partial Evaluation techniques
In our method, we perform multivariant specialization to pay back the cost of the tests that are pulled up. This has similarities with partial evaluation and program specialization techniques [21, 22]. In general, however, partial evaluators do not give performance guarantees about the transformed program. To concretize this argument, consider a generic partial evaluator like Mixtus. Mixtus's "left propagation of bindings" converts, say, p(X,X), X = term to p(term,term). Note that this operation roughly corresponds to our notion of "pulling up of tests" as far as equality tests are concerned. However, in partial evaluation this is done indiscriminately, potentially leading to duplication of structures and thereby risking performance degradation of the transformed program.
Also, partial evaluators do not handle disjunctions in general. To illustrate the point, consider the example of Figure 4. Any call that selects the first clause will also select the second, although the second clause will eventually fail if the first succeeds. But since the argument of p/1 is of the form [a] or [a|X] and there is a recursive call in the second clause, we can conclude that the second clause always succeeds with an argument of the form [a,a|Y]. This type of reasoning is not possible in a partial evaluation method, since there is no scope for simplifying and combining the necessary conditions of different computation paths. In our method, we are able to perform such reasoning, since we explicitly compute and use the necessary conditions as disjuncts.
Current extensions of partial evaluation, such as conjunctive partial evaluation [13], also do not look at other computation paths when one computation path is being specialized. More recent extensions by Pettorossi et al. [16] allow disjunctive definitions of newly defined predicates and a new transformation rule called the "case-split" rule to achieve greater determinism in the transformed program. Their approach can potentially be used to manipulate different conjuncts in certain cases, particularly when bindings/equality constraints are involved. Our approach allows for explicit manipulation of conjuncts of necessary conditions with general constraints, in particular equalities, arithmetic inequalities, types, etc. Moreover, we provide formal performance guarantees about the transformed program.
6 Discussions
We now discuss some possible refinements to our method. In our approach we considered promoting only tests that do not bind program variables. As discussed earlier, promoting operations that create variable bindings can in general increase the cost of other operations. Clearly, it is possible to allow variable bindings to be created by newly introduced operations in those cases where we can guarantee that there is no increase in execution cost. Whether promoting such safe operations will increase determinism substantially is a question of interest.
Also, in our transformation, a literal in a clause body can get specialized several times, which can potentially cause code blow-up. It is desirable to develop methods that limit the code-size increase by limiting the number of versions of a predicate that can be generated. While it is straightforward with our approach to satisfy prespecified code-size limits, such an approach does not balance the cost of code-size increase against the benefit of introducing many more tests early. It remains an open problem to develop such methods for containing code-size increase using a cost/benefit model.
References
[1] C. Braem, B. Le Charlier, S. Modart, and P. Van Hentenryck. Cardinality analysis of Prolog. In Proceedings of the International Symposium on Logic Programming, pages 457-471, 1994.
[2] M. Carlsson. On the efficiency of optimising shallow backtracking in compiled Prolog. In Proc. of the Sixth International Conference on Logic Programming, 1989.
[3] P. Cousot and R. Cousot. Abstract interpretation and application to logic programs. J. Logic Prog., 13:103-179, 1992.
[4] S. Dawson. Theory and Practice of Deterministic Evaluation of Logic Programs. PhD thesis, State University of New York at Stony Brook, December 1995.
[5] S. Dawson, C.R. Ramakrishnan, I.V. Ramakrishnan, and R.C. Sekar. Extracting determinacy in logic programs. In Proceedings of the 10th ICLP, pages 424-438, 1993.
[6] S. Debray, P. Lopez-Garcia, and M. Hermenegildo. Non-failure analysis of logic programs. In Proceedings of the 1997 ICLP, pages 48-62, 1997.
[7] S. Debray and D.S. Warren. Functional computations in logic programs. ACM TOPLAS, 11(3):451-481, July 1989.
[8] R. Giacobazzi and L. Ricci. Detecting determinate computations by bottom-up abstract interpretation. In Proceedings of the European Symposium on Programming, pages 167-181, 1992.
[9] F. Henderson, Z. Somogyi, and T. Conway. Determinism analysis in the Mercury compiler. In Nineteenth Australasian Computer Science Conference, pages 337-346, 1996.
[10] T. Hickey and S. Mudambi. Global compilation of Prolog. J. Logic Prog., 7:193-230, 1989.
[11] D.B. Kemp, K. Ramamohanarao, I. Balbin, and K. Meenakshi. Propagating constraints in recursive deductive databases. In Proc. of the First North American Conference on Logic Programming, pages 981-998, 1989.
[12] D.B. Kemp and P.J. Stuckey. Optimizing bottom-up evaluation for constraint queries. J. Logic Prog., 26:1-30, January 1996.
[13] M. Leuschel, D. De Schreye, and A. de Waal. A conceptual embedding of folding into partial deduction: Towards a maximal integration. In Proc. of the Joint International Conference and Symposium on Logic Programming, pages 319-332, 1996.
[14] K. Marriott and P.J. Stuckey. The 3 R's of optimizing constraint logic programs: Refinement, removal and reordering. In Proceedings of the 20th POPL, pages 334-344, 1993.
[15] C.S. Mellish. Some global optimizations for a Prolog compiler. J. Logic Prog., 2:43-66, 1985.
[16] A. Pettorossi, M. Proietti, and S. Renault. Reducing nondeterminism while specializing logic programs. In Proc. of the 24th POPL, pages 414-427, 1997.
[17] K. Post. Mutually exclusive rules in logic programming. In Proc. of the 1994 International Symposium on Logic Programming, pages 472-486, 1994.
[18] P. Van Roy, B. Demoen, and Y.D. Willems. Improving the execution speed of compiled Prolog with modes, clause selection and determinism. In Proceedings of TAPSOFT'87, pages 111-125, March 1987.
[19] A. Roychoudhury, C.R. Ramakrishnan, I.V. Ramakrishnan, and R.C. Sekar. Making success out of early failures. Technical Report CS-TR-97/04, State University of New York at Stony Brook, September 1997.
[20] K. Sagonas, T. Swift, and D.S. Warren. The XSB Programmer's Manual, Version 1.6. Dept. of Computer Science, SUNY at Stony Brook, 1996.
[21] D. Sahlin. The Mixtus approach to automatic partial evaluation of full Prolog. In Proc. of the Second North American Conference on Logic Programming, 1990.
[22] D. Sahlin. An Automatic Partial Evaluator for Full Prolog. PhD thesis, Swedish Institute of Computer Science, March 1991.
[23] T. Sato and H. Tamaki. Enumeration of success patterns in logic programs. Theoretical Computer Science, 34:227-240, 1984.
[24] H. Sawamura and T. Takeshima. Recursive unsolvability of determinacy, solvable cases of determinacy and their applications to Prolog optimization. In Proceedings of the 1985 ICLP, pages 200-207, 1985.
[25] D. Srivastava and R. Ramakrishnan. Pushing constraint selections. J. Logic Prog., 16:361-414, 1993.
[26] N. Zhou, T. Takagi, and K. Ushijima. A matching tree oriented abstract machine for Prolog. In ICLP'90, pages 159-173, 1990.