Computing Story Trees

Alfred Correira
Department of Computer Science
The University of Texas at Austin
Austin, Texas 78712
A theory of understanding (parsing) texts as a process of collecting simple textual
propositions into thematically and causally related units is described, based on the concept
of macrostructures as proposed by Kintsch and van Dijk. These macrostructures are
organized into tree hierarchies, and their interrelationships are described in rule-based
story grammars related to the Kowalski logic based on Horn clauses. A procedure for
constructing and synthesizing such trees from semantic network forms is detailed. The
implementation of this procedure is capable of understanding and summarizing any story it
can generate using the same basic control structure.
1. Introduction
One of the most difficult tasks in the field of computational linguistics is that of processing (parsing or
understanding) bodies of connected textual material,
from simple narratives like fairy tales and children's
stories, to complex technical articles like textbooks
and encyclopedia articles. When effective parsers
were created capable of processing single sentences
(Woods, 1970), (Schank, 1975b), (Norman and Rumelhart, 1975), (Winograd, 1972), it was quickly realized that these same techniques were not in themselves
adequate for the larger task of processing sequences of
sentences. The understanding of paragraphs involves
more knowledge than and different knowledge from
that necessary for sentences, and the structures produced by a text parser need not look like the structures
of the sentences parsed individually.
However, the original impetus for current trends in
text processing was the effort to solve problems of
reference at the sentential level, in particular anaphora
and ellipsis (Charniak, 1972). For example, in the
paragraph
John wanted to marry Mary. He asked
her if she would marry him, but she refused.
John threatened to foreclose the mortgage on
the house where Mary's old sick father lived.
They were married in June.
Simple-minded syntactic techniques are generally insufficient to resolve referents of the form of the
"they" in the last sentence above. The human understander - and potentially the computer understander as
well - requires real-world knowledge about threats,
familial ties and marriage to realize that "they" refers
to John and Mary.
Experiments with text processing led to such procedural constructs as frames (Minsky, 1975; Charniak
and Wilks, 1976; Bobrow and Winograd, 1977),
scripts and plans (Schank and Abelson, 1977), focus
spaces (Grosz, 1977), and partitioned networks
(Hendrix, 1976), among others. These efforts involved conceptual structures consisting of large, cognitively unified sets of propositions. They modelled
understanding as a process of filling in or matching the
slots in a particular structure with appropriate entities
derived from input text.
There have also been rule-based approaches to the text processing problem, most notably the template/paraplate notion of Wilks (1975), and the story grammars of Rumelhart (1975). Although both approaches (procedures and rules) have their merits, it is a rule-based approach which will be presented here.
This paper describes a rule-based computational
model for text comprehension, patterned after the
theory of macrostructures proposed by Kintsch and
van Dijk (1978). The rules are notationally and conceptually derived from the Horn clause, especially as
described by Kowalski (1979). Each rule consists of
sets of thematically, causally, or temporally related
propositions. The rules are organized into a network
with the macrostructures becoming more generalized
approaching the root. The resulting structure, called
the Story Tree, represents a set of textual structures.
The process of generating from this network consists of choosing one of the rules to serve as the root of a particular story tree and recursively instantiating its descendants until terminal propositions are produced (Simmons and Correira, 1979). These propositions form the text of the generated story, requiring only a final phase to produce English sentences from the propositions. Conversely, a text is understood if its input sentences, parsed into propositions, can be mapped onto rules and these rules recursively mapped onto more abstract rules until a single node (the root) is achieved. Parsing and generating use the same rules in a similar manner for performing their respective tasks, and the rules lend themselves to a uniform tree structure possessing an inherent summarizing property.
2. Macrostructures and Story Grammar Rules

In this section the fundamental notion of macrostructure, as proposed and used by Kintsch and van Dijk, is presented and then analyzed from a computational, rather than a psychological, standpoint. An effective representation for macrostructures, derived from Horn clauses and organized into story trees, is described, as well as a data base for the representation.

2.1 Macrostructures
Kintsch and van Dijk (1975) present a system for organizing an entire discourse into a hierarchy of macrostructures, which are essentially metapropositions. The lowest level of a discourse textual representation is the set of input propositions that corresponds semantically to the text sentences, clauses and/or phrases. Propositions are conjoined by links of implication: if proposition A implies proposition B, then A and B are connected, and the link is marked with the strength of the connection, ranging from (barely) possible to (absolutely) necessary. The propositions and their connections reside in a text base. A text base can be either explicit, if all the implied information necessary for coherence is made explicit, or implicit, if propositions that can be assumed to be known or implied are omitted. A text is an explicit data base by itself, and all summaries of that text are implicit data bases. A college physics text would have a much more explicit text base than after-dinner conversation. The simple narrative texts examined in this paper have a text base between these two "extremes."
The sense in which "coherence" is used above is not defined precisely. Kintsch and van Dijk argue that coherence, or "semantic well-formedness", in a text requires, for each proposition in the text, that it be linked with one or more preceding propositions. This connection must exist for some reader in some context constrained by conventions for knowledge-sharing and assumption-sharing valid for that person in that context.
The result of this linking is a linear text base which is then mapped into a hierarchical structure in which propositions high in the structure are more likely to be recalled (via summaries) than those low in the structure. At the top of the hierarchy can be found propositions corresponding to rhetorical categories, such as "problem" and "solution," or narrative categories, such as "introduction," "complication," and "resolution."
Kintsch and van Dijk introduce a number of rules for relating these macrostructures to sets of input textual propositions: information reduction (generalization), deletion (of less important propositions), integration (combining events with their pre- and postconditions), and construction (which relates complex propositions to their component sub-propositions). There are two conditions that are always true regarding these macrostructures: a macrostructure must be implied by its subordinate propositions (i.e. encountering the subordinate propositions implies the existence of the macrostructure), and ordered sets of macrostructures collected together form a meaningful summary of the text. Kintsch and van Dijk believe that it is primarily macrostructures that are retained when a text is understood by a human reader and that the macrostructures are created as the text is being processed.
2.2 Macrostructures as Computational Constructs
As evidence in support of their theory, Kintsch and van Dijk present a number of psychological experiments in recall and summary with human subjects using as a text a 1600-word narrative taken from Boccaccio's Decameron (the Rufolo story). As a computational entity, a macrostructure is a node in a story tree whose immediate descendants consist of the subordinate propositions by which the node is implied, and is itself a descendant of the macrostructure it (partially) implies. Every macrostructure in this tree is the root of a derivation tree whose terminals are simple propositions.

Each level of the tree shares the attribute of summarizability, i.e. a summary of the text may be extracted from any level of the tree, becoming less specific as the summary level approaches the root. The lowest level summary is the original text itself; the highest level (the root) is a title for the text.
The ability to give meaningful (coherent) summaries for a text is one attribute of comprehension for that text, and any procedure yielding trees possessing the summary property can be said to partially understand the text. Consideration must also be given to classification schemas and rules for paraphrase, anaphora, and question-answering. Furthermore, given an appropriate data base internalizing the relationships between a macrostructure and its subordinate macrostructures or simple propositions (microstructures) and a summary derived from a story tree, it is possible for a procedure to reconstruct to a certain degree of detail the original text from which the tree was derived. The degree of detail recovered is directly dependent on the relative distance from the nodes forming the summary to the original input propositions (the leaves) of the text tree.
How is this subordinate relationship among propositions to be described formally to a computational process? One simple formulation is in the form of a rule

A <= B,C,D

meaning "you may assert the truth (presence) of macrostructure A if you can find the (nearly) contiguous propositions B, C, and D present in the input text." "Nearly" means that an allowable level of "noise," perhaps in the form of irrelevant side information, may be present between the specified propositions (a problem not addressed here).

This rule form closely resembles in structure and meaning the Horn clause notation. The general clause has the format

C[1],...,C[m] <= A[1],...,A[n]

where C[1],...,C[m] are elementary propositions forming the consequent, and A[1],...,A[n] are elementary propositions forming the antecedent. If the propositions in a clause contain the variables x[1],...,x[i], then the clause has the interpretation

for all x[1],...,x[i], A[1] and ... A[n] implies C[1] or ... C[m]

If the subscript m for a clause is zero or one, then that clause is referred to as a Horn clause. If m=1 and n=0, the Horn clause is called an assertion.

There are several differences between the Kowalski logic and the logic adopted here. One of these has to do with the ordering of the antecedent propositions. In a true Horn clause, the ordering is irrelevant and A <= B,C,D is as good a rule as A <= C,D,B, etc., i.e. the antecedents can be proved in any order. The ordering in the system described here is governed by rules of coherence. For example, the rule:

(TRADINGVOYAGE A RUFOLO IN SHIP TO CYPRUS WITH GOODS) <=
    (BUY A RUFOLO TH SHIP)
    (BUY A RUFOLO TH GOODS)
    (LOAD A RUFOLO TH SHIP WITH GOODS)
    (SAIL A RUFOLO TO CYPRUS WITH GOODS MEANS SHIP)

is a meaningful rule. (Here case notation is used: A for Agent, TH for THeme, etc.) On the other hand,

(TRADINGVOYAGE A RUFOLO IN SHIP TO CYPRUS WITH GOODS) <=
    (SAIL A RUFOLO TO CYPRUS WITH GOODS MEANS SHIP)
    (LOAD A RUFOLO TH SHIP WITH GOODS)
    (BUY A RUFOLO TH GOODS)
    (BUY A RUFOLO TH SHIP)

is nonsensical. The rules of coherence that order the antecedent propositions may involve several criteria: causal connectedness (B causes/is the result of C, which causes/is the result of D), or temporal ordering (B happens before/after C, which happens before/after D), etc. Note that the ordering of antecedent propositions can be tied to textual order within the framework of ordinary Horn clauses by adding extra arguments to the propositions. This has been done in the theory of definite clause grammars (Colmerauer, 1978; Pereira and Warren, 1980).

The above TRADINGVOYAGE rule can be interpreted in either of two ways. If we are given the proposition

(TRADINGVOYAGE A RUFOLO IN SHIP TO CYPRUS WITH GOODS)

in a text, or a summary of a text, we may infer the chain of events

(BUY A RUFOLO TH SHIP)
(BUY A RUFOLO TH GOODS)
(LOAD A RUFOLO TH SHIP WITH GOODS)
(SAIL A RUFOLO TO CYPRUS WITH GOODS MEANS SHIP)

and, if given these events in order in a text, we may infer that Rufolo was performing a TRADINGVOYAGE.

Because of this capability for use in two directions, this rule form can be used both in parsing and generating tasks. A parser can avail itself of these rules to group sets of propositions in a text into units headed by macrostructures (which are the consequents of the rules). These can be further grouped into more generalized macrostructures recursively to yield a story tree. A generator can proceed in the other direction, starting with the top node of a tree and expanding it and its constituents downward recursively (by using the rules that operate on the precondition propositions) to arrive at a tree whose terminals form a coherent text.
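To make the two directions of use concrete, the following small Python sketch (not the paper's LISP implementation; the rule encoding and function names are invented for illustration) stores a rule as a consequent plus an ordered antecedent list and applies it both to recognize a macrostructure in a proposition stream and to expand one into its chain of events:

    # A rule pairs one consequent (macrostructure) with an ordered
    # list of antecedent propositions, as in A <= B, C, D.
    TRADING_VOYAGE = {
        "consequent": ("TRADINGVOYAGE", "RUFOLO"),
        "antecedents": [
            ("BUY", "RUFOLO", "SHIP"),
            ("BUY", "RUFOLO", "GOODS"),
            ("LOAD", "RUFOLO", "SHIP", "GOODS"),
            ("SAIL", "RUFOLO", "CYPRUS"),
        ],
    }

    def recognize(rule, text_propositions):
        """Parsing direction: if the antecedents occur in order (possibly
        separated by tolerated 'noise'), the macrostructure may be asserted."""
        position = 0
        for wanted in rule["antecedents"]:
            try:
                position = text_propositions.index(wanted, position) + 1
            except ValueError:
                return None                    # some antecedent is missing
        return rule["consequent"]

    def expand(rule):
        """Generating direction: replace the macrostructure by the ordered
        chain of events it subsumes."""
        return list(rule["antecedents"])

    text = [
        ("BUY", "RUFOLO", "SHIP"),
        ("BUY", "RUFOLO", "GOODS"),
        ("PRAY", "RUFOLO"),                    # irrelevant side information
        ("LOAD", "RUFOLO", "SHIP", "GOODS"),
        ("SAIL", "RUFOLO", "CYPRUS"),
    ]
    print(recognize(TRADING_VOYAGE, text))     # ('TRADINGVOYAGE', 'RUFOLO')
    print(expand(TRADING_VOYAGE))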
2.3 Extended Horn Clauses
After several early experiments with this rule form on a simple text (the Margie story - Rumelhart, 1975) it was discovered that this simple rule form, although capable of handling the computations necessary for the text at hand, did not bring out several inherent attributes that macrostructures generally have in common. For example, in a TRADINGVOYAGE rule, several divisions can be recognized. In order to go on a TRADINGVOYAGE, Rufolo must have the money for a ship and goods and the necessary knowledge about competition to make a profit. Given these, Rufolo will follow a certain sequence of actions; he will obtain a ship and goods, load the goods onto the ship and then sail to Cyprus. The result of this effort will be that Rufolo is in a position to sell or trade his goods (and perhaps make a profit). Therefore, we can break the TRADINGVOYAGE rule into several natural groupings:
TRADINGVOYAGE RULE:
HEAD: (TRADINGVOYAGE A RUFOLO IN SHIP TO CYPRUS WITH GOODS)
PRE:  (POSSESS A RUFOLO TH WEALTH)
EXP:  (MAKE A RUFOLO TH (CALCULATIONS TYPE MERCHANTS MOD USUAL))
      (BUY A RUFOLO TH SHIP)
      (BUY A RUFOLO TH GOODS)
      (LOAD A RUFOLO TH SHIP WITH GOODS)
      (SAIL A RUFOLO TH SHIP TO CYPRUS)
POST: (MAKE A RUFOLO TH PROFIT)
Structurally, this rule form will be referred to as an "extended" Horn clause (EHC). The first part of the rule is the HEAD of the rule, and represents the macrostructure pattern. The second part is the PREcondition for the rule. The propositions in the precondition are the conditions which must be true, or can be made true, before Rufolo can embark on an episode of TRADINGVOYAGE. The third part is the EXPansion of the rule. If Rufolo goes on a TRADINGVOYAGE, then these are the (probable) actions he will take in doing so. The final part of the rule is the POSTcondition of the rule, which consists of the propositions that will become true upon the successful completion (instantiation) of the TRADINGVOYAGE rule.
The resulting rule form is related conceptually and historically to the notion of a script as developed by Schank and Abelson (1977) (cf. Norman and Rumelhart, 1975). The precondition sets the stage for the invocation of a rule. It describes the setting and the roles of the characters involved in the rule. The expansion consists of the actions normally taken during the invocation of the rule. The postcondition is the result of these actions. When used in a script-like role, a rule is activated when its precondition has been satisfied, and its expansion can then be sequentially instantiated.
A rule can also be used as a plan. A plan is a data structure that suggests actions to be taken in pursuit of some goal. This corresponds to activating a rule according to its postcondition, i.e. employing a rule because its postcondition contains the desired effect. If one has the goal "make money" one might wish to employ the TRADINGVOYAGE rule to achieve it.

This extension of the Horn clause structure serves two purposes. First, by separating the propositions subsumed under a macrostructure into three parts, the EHC rule form renders it unnecessary to label the roles that the individual propositions play in the macrostructure. A macrostructure will usually have preconditions, expansion(s), and a postcondition set, which would have to be labeled (probably functionally) in a simple Horn clause system. Secondly, it serves as a means of separating those propositions which must be true before a rule can be invoked (preconditions) from those whose success or failure follows the invocation of the rule.
A rule may have multiple preconditions if there are several sets of circumstances under which the rule can be invoked. Thus a rule for watching a drive-in movie could have the form

WATCH DRIVE-IN MOVIE RULE:
HEAD: (WATCH A PERSON TH DRIVE-IN MOVIE)
PRE:  (OR ((PERSON IN CAR)
           (CAR IN DRIVE-IN))
          ((SEE A PERSON TH SCREEN)
           (CAN A PERSON TH (READ A PERSON TH LIPS))))
A rule may also have several expansions attached to it, one for each potential instantiation of the rule. Thus if a couple want a child they could employ a rule:

POSSESS RULE:
HEAD: (POSSESS A COUPLE TH CHILD)
EXP:  (OR (CONCEIVE A COUPLE TH CHILD)
          (ADOPT A COUPLE TH CHILD)
          (STEAL A COUPLE TH CHILD)
          (BUY A COUPLE TH CHILD))

where each of the propositions, CONCEIVE, ADOPT, STEAL, BUY expands by a complex rule of the same form as the POSSESS rule.
A rule can have but a single postcondition, composed of simple propositions, since by definition the
postcondition is the set of propositions that are true if
the rule succeeds. If a postcondition could contain an
expandable proposition, then that proposition could
fail independently of the rule for which it is part of the
postcondition - since it could have its own preconditions - thus contradicting the definition of the postcondition.
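As a rough illustration of the data shape described in this section, the following Python sketch (an assumption-laden rendering for this paper's summary, not the actual EHC network representation) records a rule as a head pattern, alternative precondition sets, alternative expansions, and a single postcondition set:

    from dataclasses import dataclass, field
    from typing import List, Tuple

    Proposition = Tuple[str, ...]   # e.g. ("BUY", "A", "RUFOLO", "TH", "SHIP")

    @dataclass
    class EHCRule:
        head: Proposition                                              # the macrostructure pattern
        pre: List[List[Proposition]] = field(default_factory=list)     # alternative PRE sets
        exp: List[List[Proposition]] = field(default_factory=list)     # alternative EXP sequences
        post: List[Proposition] = field(default_factory=list)          # single POST set

    trading_voyage = EHCRule(
        head=("TRADINGVOYAGE", "A", "RUFOLO", "IN", "SHIP", "TO", "CYPRUS"),
        pre=[[("POSSESS", "A", "RUFOLO", "TH", "WEALTH")]],
        exp=[[("BUY", "A", "RUFOLO", "TH", "SHIP"),
              ("BUY", "A", "RUFOLO", "TH", "GOODS"),
              ("LOAD", "A", "RUFOLO", "TH", "SHIP", "WITH", "GOODS"),
              ("SAIL", "A", "RUFOLO", "TH", "SHIP", "TO", "CYPRUS")]],
        post=[("MAKE", "A", "RUFOLO", "TH", "PROFIT")],
    )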
2.4 Indexed Rule Network of Extended Horn Clauses
The rules are stored in memory in a semantic network. However, unlike the usual semantic networks
for case predicates, where the individual nodes
(tokens) are connected by case relations (Simmons and Chester, 1977), the semantic links in the EHC rule network are based on the story tree structure.

Each rule node (or instantiation of one) in the network may have an arc for each part of an EHC rule: precondition, expansion, and postcondition. All the case relations in the head of the proposition are kept in a single list attached to the node as arc-value pairs. Negation is marked by embedding the node in a (NOT OF ...) proposition form. For example, if a point is reached in a narrative where John is expected to kiss Mary but he does not, then the story tree would contain at that point a node that would print as (NOT OF (KISS A JOHN TH MARY)).
Each node in the database represents either a class
object or an instance of a class object. Every class
object points to all of the tokens subsumed by it in the
network. For example, the class object representing
DOG would have pointers to each of its tokens, DOG1, DOG2, DOG3, etc., which represent individual
objects of the class DOG.
The database retrieval functions utilize a kind of
" f u z z y " partial matching to retrieve potential rules to
be applied to a proposition. Partial matching allows
the rule-writer to combine rules that only differ in one
or two minor arc names, but which all share the same
major arc names; only one rule need be written, specifying the major case relations and ignoring the minor
ones (which can be checked for in the preconditions of
the rules). Partial matching allows the generator to
bring more operators to bear at a given point in the
story construction. However, the parser pays the price
for this flexibility by having to examine more alternatives at each point in its parsing where rules apply.
The function which queries the database for the existence of facts (instantiated propositions), uses a "complete" pattern-matching algorithm, since "John ate cake last night" is not deemed a sufficient answer to the question "Who ate all the cake last night at Mary's place?".
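The distinction between the two matching regimes can be sketched as follows in Python; the choice of which case arcs count as "major" is an assumption made for the example, not something the paper specifies:

    def partial_match(pattern, proposition, major_arcs=("A", "TH")):
        """Fuzzy retrieval match: the heads must agree and every major case
        arc mentioned in the pattern must be satisfied; minor or extra arcs
        in the proposition are ignored."""
        if pattern["head"] != proposition["head"]:
            return False
        return all(proposition.get(arc) == value
                   for arc, value in pattern["arcs"].items()
                   if arc in major_arcs)

    def complete_match(pattern, proposition):
        """Fact query: every arc-value pair in the pattern must be present."""
        return (pattern["head"] == proposition["head"] and
                all(proposition.get(arc) == value
                    for arc, value in pattern["arcs"].items()))

    fact  = {"head": "EAT", "arcs": {"A": "JOHN", "TH": "CAKE", "TIME": "LASTNIGHT"}}
    query = {"head": "EAT", "arcs": {"A": "JOHN", "TH": "CAKE", "LOC": "MARYS-PLACE"}}
    print(partial_match(query, fact))    # True  - enough to bring a rule to bear
    print(complete_match(query, fact))   # False - not a sufficient answer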
3. Story Analysis Using Extended Horn Clauses
This section describes the use of the Extended
Horn Clause rule form and illustrates the process of
rule-writing using several paragraphs from the Rufolo
story as an example. The notion of rule failure is also
presented as an integral feature of parsing and generating narrative texts.
3.1 Writing Rules Using Extended Horn Clauses
The EHC rule form divides the propositions subsumed by a macrostructure into three categories: the preconditions, the expansions, and the postconditions. The preconditions define the applicability of a particular rule. When a precondition has been satisfied, then a rule's expansions and postcondition correspond roughly to the traditional add/delete sets of problem-solving systems. The add set consists of the ordered list of actions to be taken as a result of the rule's invocation plus the postcondition states of the actions. Members of the delete set are propositions of the form (NOT OF NODE) appearing in the expansion and postcondition of a rule.
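Read literally, the add and delete sets could be extracted from a rule as in this short Python sketch (hypothetical dictionary layout, for illustration only):

    def add_delete_sets(rule):
        """Additions are the expansion actions plus the postcondition states;
        deletions are the (NOT OF node) propositions found among them."""
        candidates = list(rule["exp"]) + list(rule["post"])
        adds    = [p for p in candidates if p[0] != "NOT"]
        deletes = [p[2] for p in candidates if p[0] == "NOT"]   # the node inside (NOT OF node)
        return adds, deletes

    purchase = {
        "exp":  [("POSSESS", "A", "X", "TH", "Y"),
                 ("POSSESS", "A", "X", "TH", "Z")],
        "post": [("NOT", "OF", ("POSSESS", "A", "X", "TH", "W"))],
    }
    print(add_delete_sets(purchase))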
The question then becomes one of mapping a text into rules of this structure. To illustrate this process consider the following streamlined version of two paragraphs from the Rufolo story:

Rufolo made the usual calculations that merchants make. He purchased a ship and loaded it with a cargo of goods that he paid for out of his own pocket. He sailed to Cyprus. When he got there he discovered other ships docked carrying the same goods as he had. Therefore he had to sell his goods at bargain prices. He was thus brought to the verge of ruin.

He found a buyer for his ship. He bought a light pirate vessel. He fitted it out with the equipment best suited for his purpose. He then applied himself to the systematic looting of other people's property, especially that of the Turks. After a year he seized and raided many Turkish ships and he had doubled his fortune.
At the top level of the story we have two complex actions taking place: a trading voyage and a pirate voyage. A TRADINGVOYAGE requires a merchant, a ship, some trade goods and a destination on a seacoast (for example, an island). So we embody this in the rule
TRADINGVOYAGE RULE:
HEAD: (TRADINGVOYAGE** A X TH Y TO Z)
PRE:  (X ISA MERCHANT)
      (Z ISA ISLAND)
      (Y ISA GOODS)
      (W ISA SHIP)
Given that these conditions are true, then Rufolo plans
his strategy, obtains both ship and goods and sails to
Cyprus with them (the function of the asterisks is
notational only and is used to distinguish those predicates that have rule expansions from those that do
not):
TRADINGVOYAGE RULE:
HEAD: (TRADINGVOYAGE** A X TH Y TO Z)
EXP:  (MAKE* A X TH (CALCULATIONS NBR PL TYPE MERCHANT MOD USUAL))
      (PURCHASE** A X R1 W R2 Y)
      (LOAD* A X TH W WITH Y)
      (SAIL* A X TH W TO Z)
      (TRADE** A X TH Y FOR (PROFIT MODAL EXPEC))
POST: (NOT OF (WANT* A X TH (DOUBLE* A X TH U)))

PURCHASE RULE:
HEAD: (PURCHASE** A X R1 Y R2 Z)
PRE:  (POSSESS* A X TH W)
      (W ISA WEALTH)
EXP:  (POSSESS** A X TH Y) (POSSESS** A X TH Z)
POST: (NOT OF (POSSESS* A X TH W))

POSSESS RULE:
HEAD: (POSSESS** A X TH Y)
EXP:  (BUY* A X TH Y FOR W)

The postcondition of the PURCHASE rule is that the wealth used to buy the ship and goods is no longer in the possession of its original owner. This would have been inserted into the postcondition of TRADINGVOYAGE if the PURCHASE rule had been incorporated into it directly.

The difference between the POSSESS* and POSSESS** propositions in the PURCHASE rule is illuminating. The POSSESS* in the precondition does not have a rule attached to it, so it cannot be expanded. If a merchant does not already possess wealth when he reaches this point, he cannot go about obtaining it through rule expansion (although the generator can fudge and assert that he does have wealth at this point to move the story generation along). If the precondition of the PURCHASE rule had contained a POSSESS** instead, then the merchant would be able to apply the POSSESS rule above to acquire wealth if he did not have any. The convention, as illustrated above, is for the expandable propositions to be suffixed with two asterisks, and for terminal elements to be suffixed with a single asterisk. This feature is only a notational device.

One might consider doing without the PURCHASE rule. Its precondition could have been pushed into the precondition for the TRADINGVOYAGE rule and its expansion inserted into TRADINGVOYAGE at the same point where the PURCHASE is now. But this rule-splitting is not just a stylistic choice; there are several advantages to this strategy. First, keeping rule size small makes the rule easier to read and write. Experience with writing rules showed that an average rule size of three to five propositions proved to be most efficient in this respect. Second, smaller rule size cuts down on the number of permutations of a rule that need to be recognized by the parser, due to missing propositions or the transposition of propositions that describe more or less contemporaneous events. Finally, smaller rules make for more conciseness in summaries generated from the rules.

The TRADE rule describes a trading scenario under conditions of heavy competition:

TRADE RULE:
HEAD: (TRADE** A X TH Y FOR Z)
PRE:  (POSSESS* A X TH Y)
      (Z ISA PRICE MOD GOOD)
EXP:  (ADVERSE-TRADE** A X TH Y)
      (SELL* A X TH Y FOR Z)
      (NOT OF (POSSESS** A X TH Y))
POST: (MAKE* A X TH (PROFIT MOD GREAT))

ADVERSE-TRADE RULE:
HEAD: (ADVERSE-TRADE** A X TH Y)
PRE:  (X ISA MERCHANT)
EXP:  (DISCOVER* A X TH
        (SHIP NBR MANY MOD DOCKED
          POSSBY (MERCHANT NBR SOME MOD OTHER)
          INSTR (CARRY* A SHIP TH (GOODS SAMEAS Y))))

It should be noted that the postcondition of the TRADE rule, the MAKE*, will fail because Rufolo does not SELL* his goods for a profit. Failure in a rule is covered in the next section, and will not be further discussed here.
Because of his failure at trading, Rufolo attempts another scheme to double his wealth (which is his original goal in the story); he turns pirate.

PIRATEVOYAGE RULE:
HEAD: (PIRATEVOYAGE** A X TH Y MEANS Z)
PRE:  (Z ISA SHIP TYPE PIRATE MOD LIGHT)
      (U ISA NATIONALITY)
      (POSSESS** A X TH Z)
      (WANT* A X TH (DOUBLE* A X TH Y))
EXP:  (FITOUT* A X TH Z WITH (EQUIPMENT MOD BESTSUITED))
      (SEIZE* A X TH (SHIP NBR MANY POSSBY U) DURATION YEAR)
POST: (NOT OF (WANT* A X TH (DOUBLE* A X TH Y)))
      (POSSESS* A X TH (W VAL (TWICE* R1 W R2 Y)))
      (INTEREST* A X TH COMMERCE MODAL NOLONGER)
      (LIVE* A X MANNER SPLENDOR DURING (REMAINDER OF (DAYS OF X)))

The first thing that Rufolo must do is to come to possess a pirate ship, which he does by selling his merchant vessel and using the funds to buy a light vessel:

POSSESS RULE:
HEAD: (POSSESS** A X TH Y)
PRE:  (Y ISA WEALTH)
      (NOT OF (POSSESS* A X TH WEALTH))
      (POSSESS* A X TH Z)
EXP:  (SELL* A X TH Z)
POST: (NOT OF (POSSESS* A X TH Z))

POSSESS RULE:
HEAD: (POSSESS** A X TH Y)
PRE:  (Z ISA WEALTH)
      (POSSESS* A X TH Z)
EXP:  (BUY* A X TH Y)
POST: (NOT OF (POSSESS* A X TH Z))

Note that the two POSSESS** rules above would be combined into a single rule in the database.

Rufolo outfits his newly purchased ship as a pirate vessel and then over the course of a year uses it to seize the ships of many Turks, until at the end he has doubled his wealth. The postcondition of this activity is that he no longer wishes to double the amount of his (old) wealth and that he now possesses twice as much wealth as before.

These rules do not thoroughly map the exact meaning of the two paragraphs above, but they do give a flavor for the process of rule-writing, and linguistic precision is easy to ensure by use of additional rules to pin down precisely every nuance in the text, particularly with respect to selling and buying and change of ownership.

Rule-writing using EHCs follows standard top-down programming practice. At the top level are the rules that describe the narrative structure of the text. For the Rufolo story, we have:

STORY RULE:
HEAD: (RUFOLO-STORY** A X)
EXP:  (SETTING A X)
      (EPISODE)

EPISODE RULE:
HEAD: (EPISODE)
EXP:  (OR ((INTERLUDE) (EPISODE))
          ((COMPLICATION) (EPISODE))
          ((COMPLICATION) (RESOLUTION)))

The Rufolo story is basically a SETTING followed by an EPISODE, where EPISODE is defined recursively as either an INTERLUDE, which leads into an episode, followed by a new EPISODE, a COMPLICATION followed by a new EPISODE, or a COMPLICATION and a RESOLUTION if the COMPLICATION does not cause any new problems to arise (due to rule failure - see next section).

The SETTING rule

SETTING RULE:
HEAD: (SETTING A X)
PRE:  (W ISA WEALTH VAL CERTAINAMOUNT)
      (LIVE* A X LOC Y DURING Z)
      (POSSESS* A X TH W)
      (NOT OF (SATISFY* A X WITH W))
POST: (WANT* A X TH (DOUBLE* A X TH W))

will create a character and his environment - the LIVE* and POSSESS* - and give the character a goal - the WANT*.

Experience with simple narrative text leads to the belief that it is relatively easy to learn to program using EHC rules, and that their expressive power is limited only by the ingenuity of the rule-writer. Experiments with other text forms - encyclopedia articles, magazine articles, dialogues, etc. - need to be performed before a final judgement can be made concerning the fitness of the EHC as a general form for processing.
3.2 Success and Failure in EHC Rules

The idea of rule success or failure in the EHC rule form is tied to the domain the rules attempt to model - real life motivated behavior (actions) where things can, and do, go wrong, and actors fail to achieve their goals. In real life, Rufolo's decision to go on a TRADINGVOYAGE does not guarantee that the voyage will be successful, or even begun. For a given rule, one of three conditions may arise. First the rule's preconditions may fail to be true and the rule cannot be applied. In the TRADINGVOYAGE rule, if Rufolo does not have the money necessary for trading in Cyprus or does not possess the knowledge to successfully compete, then the TRADINGVOYAGE cannot be completed despite Rufolo's best intentions. A rule is not invoked until one of its precondition sets has been successfully processed (i.e. each precondition in the set has been successfully fuzzy-matched).

Once this condition is satisfied then the rule can be invoked and Rufolo can start his TRADINGVOYAGE. At each point of expanding the macrostructure, i.e. as Rufolo performs each step in the expansion of the rule, he is subject to success or failure. He may succeed in buying the goods, or he may fail, and the same is true for buying the ship and the loading of the ship with goods, etc. If he does manage to perform all the actions in an expansion, then the rule is said to succeed, and its postcondition propositions can be asserted as being true. In the TRADINGVOYAGE rule, an assertion that Rufolo makes a profit would be instantiated.

But if an expansion fails, a different logic applies. If Rufolo has loaded the ship with his goods, but the ship burns in the harbor, then the last expansion proposition, sailing to Cyprus, cannot be performed. The rule is said to fail, and the postcondition propositions are negated. In the TRADINGVOYAGE rule, an assertion that Rufolo does not make a profit would be instantiated because no other expansion options remain.

Rule failure is an important concept with respect to narratives. Many narratives consist of a series of episodes in pursuit of a goal; often each episode, except the last, represents an instance of a failed rule. If a rule prior to the last succeeded, then the goal would have been achieved and no further episodes would have been forthcoming (unless a new goal were chosen). The mechanism of rule failure is a natural one for analyzing this type of narrative pattern.

4. Generation and Parsing

In this section a procedure for using macrostructures, embodied in the EHC rule form, to generate and parse stories, and a program, BUILDTALE/TELLTALE, that implements it are discussed.

4.1 Generation

The paradigm used by TELLTALE for story generation is similar to that used by Meehan (1976) in his TALESPIN program, which wrote "metanovels" concerning the interrelationships, both physical and mental, of the motives and actions of anthropomorphized animals possessing simple behavior patterns. Meehan described a story as an exposition of events that occur when a motivated creature attempts to achieve some goal or fulfill some purpose, such as satisfying a need or desire, solving a problem, doing a job, etc. TALESPIN was basically a problem-solver that operated on a database consisting of living entities, inanimate objects, and a set of assertions (rules) typifying the relationships among them, and describing operations for affecting those relationships. The rules were organized into plans based on conceptual relatedness, which aided in retrieval of specific rules.

Meehan emphasized the use of plans as a mechanism for generating stories. Plan activation was a process of assigning a goal to a character and then calling up any relevant plans that were indexed by this goal.

In the EHC rule form the precondition governs the ability of a rule to be invoked. If a rule is considered to be a plan in the Meehan sense above, then instead of indexing the rule by its postcondition propositions, it can be indexed by its precondition if the precondition contains information relating to motivation. For example, the rule,

MARRY RULE:
HEAD: (ASK-MARRY** A X TH Y)
PRE:  (X ISA MAN)
      (Y ISA WOMAN)
      (WANT* A X TH (MARRY* A Y TH X))
EXP:  (GO* A X TO Y)
      (ASK* A X TH Y IF (ACCEPT* A Y TH (MARRY* A Y TH X)))
POST: (MARRY* A Y TH X)

is a rule that can be activated if some X, who must be a man, WANTs Y, who must be a woman, to marry X. Each rule could contain propositions in its preconditions that restrict the set of objects that can instantiate the variables of the rule.

The last proposition in the precondition, the WANT*, is a goal statement; it states that the MARRY rule can be invoked when one person wants to marry another. In the process of generating a story, if a point is reached where a man develops the goal of wanting to marry a woman, the MARRY rule can be
invoked to attempt to satisfy that want. Of course, if the ASK* or ACCEPT* fails, then the postcondition becomes (NOT OF (MARRY* ...)) instead.
The main advantage of inserting the WANT* into
the preconditions is that the generator need not, as a
result, contain special logic to examine the postconditions of a rule to ascertain which rules to apply at a
point in the story. The normal rule activation mechanism (a rule can only be applied if one of its precondition sets succeeds) will automatically weed out unwanted rules because of the information relating to
motivation.
TELLTALE generates stories (sequences of propositions) based on such rules contained in its database, either under user control or directed by the program itself. The database is a semantic network of nodes created by building a network of the case-predicate rules supplied by the user. The input to TELLTALE is the root of a story tree, which informs TELLTALE as to the type of story which is to be generated (fairy tale, etc.). The output from TELLTALE is an instantiated story tree whose terminals are the propositions of the text. The program SUMMARIZE computes summaries from the story tree. An annotated example rule-base can be found in Appendix A for generating a set of fairy tales. A sequence of propositions for a story generated from that rule-base is shown in Appendix B. Two summaries of the story are shown in Appendix C.
Starting with the root, TELLTALE executes until no nodes remain to be expanded, because all have been reduced to terminal propositions or some depth limit has been reached. A traversal of the terminals of the resulting story will yield the proposition text of the story; any higher level traversal will yield a summary of the story.
4.2 Parsing
This section describes a procedure, BUILDTALE, for using macrostructures to parse stories. The goal in writing the generator-parser system has been to create a program capable of understanding those texts that it can generate from a rule-base. Diagrammatically
RULE BASE ---> GENERATE ---> STORY TREE ---> SUMMARIZE ---> TEXT PROPOSITIONS
                 (A)                                              (B)

TEXT PROPOSITIONS ---> PARSE ---> STORY TREE ---> SUMMARIZE ---> TEXT PROPOSITIONS
                        (C)                                           (D)
The tree that results from the story generator should
yield the same bottom-level terminal propositions as
the tree that results from the parsing of the terminal
propositions of the story, i.e. the output from part B in
the diagram above should be the same as the output
from part D. The story trees might or might not be
identical, depending on whether the story grammar is
ambiguous.
The philosophy of the understander can be summarized by recalling a concluding statement by Rumelhart (1975):
"It is my suspicion that any automatic 'story
parser' would require ... 'top-down' global structures .... but would to a large degree discover
them, in any given story, by the procedures developed by Schank (1975)."
The procedures developed by Schank emphasize the
" b o t t o m - u p " approach to story parsing. Starting with
the raw input data, emphasis is placed on integrating
each sentence into the developing narrative structure.
The structures used are scripts (and plans), and parsing integrates propositions into scripts. The scripts
form an implicit episodic structure, but the higher order relationships among the scripts are not generally
made specific (i.e. what structures form episodes, setting information, complications, resolutions, etc).
BUILDTALE is a program that combines the top-down and bottom-up approaches. As the parse progresses, BUILDTALE attempts to integrate the terminal propositions into the context of a story tree. It manipulates the terminals bottom-up, integrating them into macrostructures, while it is building story structure nodes (STORY, SETTING, EPISODE, etc.) top-down. If BUILDTALE successfully parses the text, these two processes will converge to instantiate a complete story tree.
Starting with the root of a story tree and an unmarked string of text propositions, BUILDTALE executes, by an algorithm described in Section 4.4, until one of three conditions occurs:

1) If the procedure exhausts the proposition list before the STORY node has been built, then it fails.

2) If the procedure builds an instance of STORY but there still remain unmarked propositions, then it fails.

3) If an instance of STORY is built and the text stream is empty, then the procedure returns the instantiated root.
A terminal-level traversal of the resulting tree will
yield the original input text proposition stream; higher
level traversals will yield summaries of the text.
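A minimal Python sketch of the three termination tests, with booleans standing in for state that the real BUILDTALE keeps in its LISP control structure:

    def parse_outcome(story_built, unmarked_propositions):
        """Outcomes 1-3 above for a finished pass over the text."""
        if not story_built and not unmarked_propositions:
            return "failure: propositions exhausted before a STORY node was built"
        if story_built and unmarked_propositions:
            return "failure: STORY built but unmarked propositions remain"
        if story_built and not unmarked_propositions:
            return "success: return the instantiated root"
        return "keep parsing"

    print(parse_outcome(True, []))   # success: return the instantiated root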
4.3 The Relationship Between Generating and Parsing
There are several differences between the BUILDTALE and TELLTALE procedures. First, whereas the parser is restricted to propositions from the input
text for its terminals, the generator is free to build a
terminal any time one can be generated. Second, the
generator is free at each step to choose any rule that
will match a proposition it is trying to expand, and also to use any of a rule's preconditions or expansions in any order. The parser must be careful in its choice of rules and in how it examines the preconditions and expansions in a rule, always examining them in order of decreasing length, to ensure that it does not build a story tree and find left-over unmarked text propositions when done.
The generator builds a proposition by first instantiating its precondition, expansion, and postcondition, and then attaching them to the instantiated head. The generator knows at all times the father of the proposition it is instantiating; in fact it knows every ancestor of that proposition between it and the root, since it operates strictly top-down.
The parser operates primarily as a left-corner bottom-up procedure, with overall direction supplied by some top-down processing. When the parser builds a structure, it cannot be sure at that time that the structure is indeed the correct one to be integrated into the tree at that point, i.e. it does not yet know the correct path to the root. The parser must, therefore, save the context information of its build decisions, so that they can be undone (or at least ignored) if they are later found to be in error. In fact, the final structure of the tree can only be assigned after all the textual propositions have been analyzed. This is in agreement with the statement of van Dijk (1975, pg. 11):

"Strictly speaking, a definite hierarchical structure may be assigned to a discourse sequence of propositions only after processing of the last propositions. For long discourses this would mean that all other propositions are kept in some memory store."
Structures are built as the parse progresses, and some
might be discarded or rearranged. Their final position
in the tree cannot be determined until all the propositions have been examined.
Some previous parsers solved this problem by resorting to higher-level languages like CONNIVER and PLANNER, paying the price in higher computational costs. A conscious effort was made with this project to avoid the expense of resorting to a higher-level language by having LISP perform the necessary bookkeeping to handle the backtracking involved in undoing an incorrect choice (build). In BUILDTALE, the bookkeeping is accomplished by pushing context information onto the LISP control stack. Usually, when a build is performed, instead of returning (popping the LISP stack), a further descent is made in order to integrate the next proposition. If a build is later found to be in error, then a FAIL function automatically causes LISP to back up in its stack to the point where the build was made and undo it, since all the information that was around when the first decision was made to build is still present on the stack.
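The same undo-by-unwinding idea can be suggested in Python, where the call stack and an exception play the roles of the LISP control stack and the FAIL function (a loose analogy, not a transcription of the actual code):

    class Fail(Exception):
        """Raised when a build further along the parse turns out to be wrong."""

    def try_builds(alternatives, continue_parse):
        """Attempt each alternative build in turn; if the continuation later
        raises Fail, the build is 'undone' simply by letting the recursion
        unwind and falling through to the next choice."""
        for build in alternatives:
            try:
                return continue_parse(build)   # descend; context stays on the stack
            except Fail:
                continue                       # back up and try the next alternative
        raise Fail("no alternative survives")

    def continue_parse(build):
        if build != "right-episode":
            raise Fail()
        return build

    print(try_builds(["wrong-episode", "right-episode"], continue_parse))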
These differences should not obscure the very real similarities between the two processes. TELLTALE and BUILDTALE use the same functions to analyze the preconditions, expansions and the postcondition of a rule. In fact, the "basic" control structure of TELLTALE is a special case of the control structure of BUILDTALE. The difference between the two occurs at build time. In BUILDTALE, when a node in the tree is built, a check is made to see if this node matches the root of the derivation tree being built. This may not be the case since the node may be many levels lower in the tree than the root in question, and these levels will need to be built before the derivation tree is complete. Of course, if the node should match the root, then it is returned.

TELLTALE, on the other hand, never descends more than a single level in the tree at a time. When a build is performed, it will always derive from the derivation tree being processed. The node and the root always match, and the node is returned. At build time, when BUILDTALE decides whether to call itself recursively (to add the next higher level in the tree) or to pop the stack (returning the derivation tree root), TELLTALE will always pop the stack.
Generation and parsing use the same grammar with different terminating conditions to define the control paths through the grammar. They resemble each other in computing as output a derivation tree whose terminals are the propositions of the text. This fact was borne out during the implementation of the system. TELLTALE was coded first, and the eventual TELLTALE/BUILDTALE control structure for processing preconditions, expansions, and postconditions was debugged and tested by generating many story trees. BUILDTALE grew out of TELLTALE by adding the build-time recursion/backup mechanism to the control structure.

The symmetric relationship between generation and parsing with respect to the computation of derivation trees is one significant feature of the EHC rule system.
4.4 The Basic Control Structure
BUILDTALE and TELLTALE are essentially large separate PROG-defined LISP functions that effect different initialization conditions, and then execute the same set of underlying functions. Below is a description of the basic control structure shared by TELLTALE and BUILDTALE:
I: For each proposition being examined:
1) If it exists instantiated in the data base,
then return the instantiation,
whether it is found negated or not;
2) if there are no rules for the node,
then it can be asserted;
3) otherwise, for each rule until one succeeds:
a) for each precondition in the rule
until one succeeds,
do I for each proposition in the
precondition;
b) if no precondition succeeds,
then fail;
c) otherwise, for each expansion
until one succeeds,
do I for each proposition in the
expansion;
d) if an expansion succeeds, then for
each proposition in the
postcondition,
do I and return the instantiated
node (rule success);
e) otherwise, for each proposition
in the postcondition,
do I for the negation of the
proposition and generate
a (rule) failure.
" D o I" means "follow the procedure labelled 'I'"
which is the entire basic control structure outlined above.
Since postcondition propositions do not have rule expansions, they never perform step 3 above. Also, rule
failure is not marked by negation (another mechanism
is used) so, if a negated node is encountered, it will
never fall through to step 3. Finally, it should be noticed that, although it is possible for the algorithm to
generate erroneous propositions in step 2 if exploring
an incorrect path during parsing (BUILDTALE), these
propositions will be undone during backup when the
path has been recognized as incorrect.
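The outline above translates fairly directly into the following Python sketch; db and rules are hypothetical stand-ins for the semantic network machinery (db.find, db.assert_prop, db.negate, and rules.matching are assumed names, not the program's actual functions), and the recording of failures in the story tree is omitted:

    def process(prop, db, rules):
        """Sketch of the control structure labelled 'I' above, shared by
        TELLTALE and BUILDTALE."""
        found = db.find(prop)                        # step 1: already instantiated?
        if found is not None:
            return found                             # returned whether negated or not
        candidates = rules.matching(prop)
        if not candidates:                           # step 2: terminal proposition
            return db.assert_prop(prop)
        for rule in candidates:                      # step 3: try each rule until one succeeds
            if not any(all(process(p, db, rules) for p in pre_set)
                       for pre_set in rule.pre):     # 3a / 3b
                continue                             # no precondition set succeeds; next rule
            for expansion in rule.exp:               # 3c: try each expansion
                if all(process(p, db, rules) for p in expansion):
                    for p in rule.post:              # 3d: rule success
                        process(p, db, rules)
                    return db.assert_prop(rule.head)
            for p in rule.post:                      # 3e: rule failure
                process(db.negate(p), db, rules)
        return None                                  # no rule succeeded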
5. Extracting Summaries From Story Trees
One of the principal reasons for choosing the
Kintsch and van Dijk macrostructure theory was the
resulting property of summarizability; the ability to produce coherent summaries is one mark of intelligence in a parser. The summarizer produces various strings of propositions from the story tree which form summaries of a text. One such string is composed of the terminals and represents the complete story. Any
sequence of propositions output by the summarizer is
a well-formed input to the parser. The system is thus
able to parse all proposition sequences it can generate.
Since the summary feature is inherent in the trees as they are given to the summarizer, a simple level-traversal algorithm would have been sufficient to generate useful output. However, this would have resulted in needless redundancy of propositions (since some are checked repeatedly by the preconditions of the rules and have pointers to them at many places in the tree). Therefore, the summarizer remembers what it has already output in the summary, so as never to repeat itself.
Another area of repetition concerns attributes for
objects in the story. To avoid repeating an object's
attributes, the summarizer keeps a list of objects that
have already appeared in at least one proposition, and
whenever it encounters in a new proposition an object
not on this list, it outputs it with all of its properties
and then places it on the list of expanded objects.
Since no time-markers are put on an object's properties, they are all printed out at once, even if some of
those properties are not attached to the object until
much later in the story; this reveals a weakness in the
procedure that can be corrected by the introduction of
time-markers for objects (actions already possess timemarkers).
One attribute of story trees is that, at their higher
nodes, one can read off the syntactic structure of the
story. For example suppose a story consists of a setting followed by three episodes. As a summary, "setting plus three episodes" is usually not very interesting; therefore the summarizer has the ability to
recognize and descend below these story structure
nodes in the final summary. These nodes are treated
like all other nodes to the tree building procedures,
but the summarizer descends below the nodes to print
their descendants, no matter what level summary is
being computed.
There is also the question of summarizing the macrostructures (rules). By definition, these nodes are
expandable, i.e., they have a rule for expanding them
in the rule-base. Macrostructures are not marked with a NOT if they fail; only simple propositions - terminals - are. However, whether a script achieves its goal or not is vital information to be included in any reasonable summary produced from the tree. Therefore, when summarizing a macrostructure, the summarizer outputs both the head (the macrostructure pattern)
and its postcondition (if the script fails to achieve its
goal, the postcondition will have been negated).
Finally, the summarizer outputs only those propositions it recognizes as non-stative descriptions; it never
outputs state descriptions. The reason for this is that
a stative always describes the attribute(s) of some
object, and can therefore be output the first time that
that object appears in an active proposition.
The summarizer is an algorithm that, given a story
tree and a level indicator, scans the nodes of the tree
at that level, and applies the following rules to each
node:
1) If the node is a story structure node, then
summarize its sons.
2) If the node has already been processed,
then skip it.
3) If the node is marked as a script, return
its head followed by its postcondition
propositions.
4) If the node is a stative, then skip it.
For each case value in a proposition to be output (for example, RUFOLO1, DOUBLE*1, and WEALTH1 in (WANT*1 A RUFOLO1 TH (DOUBLE*1 A RUFOLO1 TH WEALTH1))), the following rule is applied:
5) If the case value has not appeared in a
previously accepted proposition, print it
and all its attributes; otherwise, just print
the case value itself.
For example, if RUFOLO1 has been mentioned in a proposition prior to WANT*1 in a summary, then only RUFOLO1 will be present in the WANT*1 proposition, and not the fact that RUFOLO1 is a MERCHANT1 from RAVELLO1, etc.
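The node-level rules can be pictured with a small runnable Python sketch; the Node fields, the kind labels, and the omission of the level cut and of rule 5's attribute printing are all simplifications made for the example:

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Node:
        label: str
        kind: str = "event"          # "structure" | "script" | "stative" | "event"
        children: List["Node"] = field(default_factory=list)
        post: List["Node"] = field(default_factory=list)

    def summarize(node, emitted):
        """Rules 1-4 above: descend story-structure nodes, skip repeats and
        statives, and report a script's head followed by its postcondition."""
        if node.kind == "structure":                       # rule 1
            out = []
            for child in node.children:
                out += summarize(child, emitted)
            return out
        if node.label in emitted:                          # rule 2
            return []
        emitted.add(node.label)
        if node.kind == "stative":                         # rule 4
            return []
        if node.kind == "script":                          # rule 3
            return [node.label] + [p.label for p in node.post]
        return [node.label]

    story = Node("STORY", "structure", children=[
        Node("SETTING", "structure", children=[Node("(LIVE* A RUFOLO1)")]),
        Node("(TRADINGVOYAGE A RUFOLO1)", "script",
             post=[Node("(NOT OF (MAKE* A RUFOLO1 TH PROFIT))")]),
    ])
    print(summarize(story, set()))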
An initial version of an English language generator
was written that applies a set of rules to the output of
the summarizer to produce well-formed English texts
(Hare and Correira, 1978). This generator uses rule
forms and the summarizer output to solve some of the
problems involved in: reasonable paragraphing and
sentence connectivity, elision of noun groups and
verbs, and pronominalization of nouns. The decisions
are based, in part, on the structure of the story tree
and the positions of individual propositions in that
tree.
6. Discussion and Conclusions
The task of text processing requires solutions to
several important problems. The computational theory
for macrostructures using Extended Horn Clauses was
designed with the following goals in mind. A computational model should have some degree of psychological validity, both to provide a humanly useful representation of textual content and organization and to
ensure that the task of rule-writing is as natural as
possible for the human grammar producer. It should
be conceptually simple, in both design and implementation, taking advantage of similarities between generation and parsing, and it should offer a rigorous data
structure that is uniform and avoids the growth of ad
hoe large-scale structures.
The computational macrostructures realized by the Extended Horn Clause notation satisfy these goals in many ways. They appear to resemble the structures humans build mentally when reading narrative texts. The story tree is a logical way to organize these macrostructures: the terminals of a particular story tree comprise the actual textual propositions, and the interior nodes contain the instantiated heads of rules (corresponding to macrostructures). The story tree has the summary property: if the tree is truncated at any level, a "meaningful" (coherent) summary of the original text can be read off directly. The generality of the macrostructure propositions increases as one nears the root (going from specific actions, to the rules that contain them, to the story categories that contain these rules); the root itself can be considered the title for the text at its terminals.
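A minimal sketch of the summary property, under an assumed node representation (the head and sons fields, and the sample tree, are illustrative only): truncating the tree at successive levels reads off the title, the story categories, and then the propositions below them.

class StoryNode:
    def __init__(self, head, sons=None):
        self.head = head        # instantiated macrostructure head or terminal proposition
        self.sons = sons or []  # empty for terminals (the actual textual propositions)

def read_off(node, level):
    """Return the propositions visible when the tree is truncated at 'level'."""
    if level == 0 or not node.sons:
        return [node.head]
    props = []
    for son in node.sons:
        props.extend(read_off(son, level - 1))
    return props

tale = StoryNode("(FAIRYTALE*1)", [
    StoryNode("(SETTING A GEORGE1)", [StoryNode("(LIVE*2 A GEORGE1 LOC CAMELOT1)")]),
    StoryNode("(EPISODE A GEORGE1 TH MARY1)", [
        StoryNode("(ASK-MARRY**1 A GEORGE1 TH MARY1)"),
        StoryNode("(RESCUE**1 A GEORGE1 TH MARY1 FROM IRVING1)")]),
])

print(read_off(tale, 0))   # title level: ['(FAIRYTALE*1)']
print(read_off(tale, 1))   # story categories: the SETTING and EPISODE heads
print(read_off(tale, 2))   # the scripts and terminal propositions below them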
The concept of rule failure takes the EHC outside the strictly logical system of normal (Kowalski-type) Horn clause logic, since failure in a normal logic system means something different from failure here. In narratives, failure needs to be recorded, since it is one source of "interest" in the resulting story; striving, failing, and striving again is a common occurrence in simple narratives. These failure points, and their consequences, have to be recorded in the story tree (whereas, in normal logic systems, failure points are invisible to the final result); furthermore, they restrict the choice of paths that can reasonably be expected to emanate from the point of failure. The failure mechanism is tailored for narratives involving entities exhibiting motivated behavior. Other text forms, such as technical or encyclopedia articles, would probably not require it.
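The sketch below illustrates, under assumed data structures (the dictionary rule format and the atom-level world set are inventions for the example), how such failure points might be kept rather than discarded: a failed terminal is retained wrapped in (NOT OF ...), and the failed script's postcondition is negated, leaving a record that later rule choices can consult.

def try_script(script, world):
    """Attempt a script's expansion; record failures rather than discarding them."""
    trace = [script["head"]]
    succeeded = True
    for step in script["expansion"]:
        if step in world:                        # terminal proposition holds
            trace.append(step)
        else:                                    # terminal fails: keep it, marked
            trace.append(("NOT", "OF", step))
            succeeded = False
            break
    post = script["post"]
    trace.append(post if succeeded else ("NOT", "OF", post))
    return succeeded, trace

ask_marry = {
    "head": ("ASK-MARRY**1", "A", "GEORGE1", "TH", "MARY1"),
    "expansion": [("GO*1", "A", "GEORGE1", "TO", "MARY1"),
                  ("ASK*1", "A", "GEORGE1", "TH", "MARY1"),
                  ("ACCEPT*1", "A", "MARY1", "TH", "GEORGE1")],
    "post": ("MARRY*2", "A", "MARY1", "TH", "GEORGE1"),
}
world = {("GO*1", "A", "GEORGE1", "TO", "MARY1"),
         ("ASK*1", "A", "GEORGE1", "TH", "MARY1")}   # ACCEPT*1 is absent: Mary refuses

ok, trace = try_script(ask_marry, world)
# ok is False; trace ends with (NOT OF (ACCEPT*1 ...)) and (NOT OF (MARRY*2 ...)),
# which stay in the story tree and can steer the choice of the next episode.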
The underlying approach in BUILDTALE/TELLTALE is that of a problem-solver, as was also true of Meehan's story-writer. A rule-base, organized as a hierarchy of story trees, is used to generate a particular, instantiated story tree by an inference procedure augmented with the rule failure mechanism detailed in Section 3.2. Each instantiated tree is treated as a context, consisting of the events, objects, and their relationships for a particular story. The facts and rules in the rule-base serve as a model of the possible states in the microworld formed by that story tree. These states are categorized using linguistic entities such as SETTING, EPISODE, COMPLICATION, and RESOLUTION.
The problem-solving approach, coupled with the story grammar concept, is a natural one for processing most forms of narrative. Analogous systems of rules could be used for processing other large text forms, although the categories involved would be different.
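The following schematic expander conveys this problem-solving control structure; it is not the actual BUILDTALE/TELLTALE code, the goals are flattened to atoms, and the rule fields are assumptions. A goal is matched against rule heads, PRE propositions are checked against the context, EXP goals are expanded in turn, and POST propositions are asserted into the context when a rule succeeds.

def expand(goal, rules, context):
    """Return a story-tree fragment (nested lists) for 'goal', or None on failure."""
    if goal in context:                     # already-established terminal proposition
        return [goal]
    for rule in rules.get(goal, []):        # try each rule whose head matches the goal
        if not all(p in context for p in rule["pre"]):
            continue                        # preconditions not satisfied in this context
        subtree = [goal]
        for subgoal in rule["exp"]:
            branch = expand(subgoal, rules, context)
            if branch is None:
                break
            subtree.append(branch)
        else:
            context.update(rule["post"])    # rule succeeded: assert its postconditions
            return subtree
    return None

rules = {"EPISODE": [{"pre": ["CHAR"], "exp": ["MOTIVE", "ACTION"], "post": ["MARRY"]}],
         "MOTIVE":  [{"pre": [], "exp": ["DESIRE", "WANT"], "post": []}],
         "ACTION":  [{"pre": ["WANT"], "exp": ["GO", "SLAY", "RESCUE"], "post": []}]}
context = {"CHAR", "DESIRE", "WANT", "GO", "SLAY", "RESCUE"}
print(expand("EPISODE", rules, context))
# -> ['EPISODE', ['MOTIVE', ['DESIRE'], ['WANT']], ['ACTION', ['GO'], ['SLAY'], ['RESCUE']]]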
Notes on the Appendices
The rules are all written in a case predicate notation (Fillmore, 1968). The general form for such a predicate is

(HEAD ARC[1] VALUE[1] ... ARC[n] VALUE[n])

The HEAD of a case predicate is either a verb or a noun form; because no formal lexicon was maintained for the TELLTALE/BUILDTALE program, verb forms were marked with an asterisk and objects were left unmarked. The ARCs are standard case relations, such as Agent, THeme, LOCation, INSTance, etc., although no attempt was made to be strictly correct linguistically with their every use, and a few relations were created for the sake of convenience. When a good arc name did not suggest itself, an arbitrary arc name - R1, R2, etc. - was used instead. The VALUE can be either a verb form, an object, or another case predicate.
The case predicates used in the program were written to enhance readability. For example, in the fairytale story (Appendix B), the case predicate

(WANT*1 TO POSSESS1 A GEORGE1 TH MARY1)

can be rendered into English as "George wants to possess Mary". The sequence

(GO*3 A GEORGE1 TO IRVING1)
(SLAY*1 A GEORGE1 TH IRVING1)
(RESCUE*1 A GEORGE1 TH MARY1)

can be rendered as "George goes to Irving. George slays Irving. George rescues Mary.".
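As an illustration of this notation (not the program's own reader), the sketch below parses a case predicate into its HEAD and its ARC/VALUE pairs, handling nested predicates recursively:

import re

def tokenize(text):
    return re.findall(r"\(|\)|[^\s()]+", text)

def parse(tokens):
    tok = tokens.pop(0)
    if tok != "(":
        return tok                                  # bare object or verb form
    head = tokens.pop(0)
    pairs = []
    while tokens[0] != ")":
        arc = tokens.pop(0)
        pairs.append((arc, parse(tokens)))          # VALUE may itself be a predicate
    tokens.pop(0)                                   # consume the closing ")"
    return (head, pairs)

pred = parse(tokenize("(WANT*1 TO POSSESS1 A GEORGE1 TH MARY1)"))
# -> ('WANT*1', [('TO', 'POSSESS1'), ('A', 'GEORGE1'), ('TH', 'MARY1')])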
Appendix A - Rule-base for Fairytale

FAIRYTALE RULE:
HEAD:
(FAIRYTALE*)
EXP:
(FAIRYSTORY** A X TH Y)

FAIRYSTORY RULE:
HEAD:
(FAIRYSTORY** A X TH Y)
EXP:
(SETTING A X)
(EPISODE A X TH Y)
POST:
(LIVE* A X TH Y MANNER HAPPILY_EVER_AFTER)

SETTING RULE:
HEAD:
(SETTING A X)
PRE:
(CHAR INST X)
EXP:
(LIVE* A X LOC Y DURING Z)

LIVE RULE:
HEAD:
(LIVE* A X LOC Y DURING Z)
PRE:
(Y ISA PLACE)
(Z ISA TIME)

EPISODE RULE:
HEAD:
(EPISODE A X TH Y)
PRE:
(CHAR INST X)
EXP:
(MOTIVE A X TH Y)
(ACTION A X TH Y)

MOTIVE RULE:
HEAD:
(MOTIVE A X TH Y)
PRE:
(OR (Y ISA PRINCESS SEX FEMALE PERSON T MOD BEAUTIFUL)
    (Y ISA HOLY_OBJECT MOD LOST))
EXP:
(DESIRE* A X TH Y)
(WANT* TO POSSESS A X TH Y)

ACTION RULE:
HEAD:
(ACTION A X TH Y)
PRE:
(CHAR INST X)
EXP:
(OR (ASK-MARRY** A X TH Y)
    (RESCUE** A X TH Y FROM Z)
    (QUEST** A X TH Y)
    (PRAY** PART FOR A X TH Y))

CHAR RULE:
HEAD:
(CHAR INST X)
PRE:
(OR (X ISA KNIGHT SEX MALE PERSON T MOD BRAVE)
    (X ISA PRINCE SEX MALE PERSON T MOD HANDSOME))

ASK RULE:
HEAD:
(ASK-MARRY** A X TH Y)
PRE:
(WANT* TO POSSESS A X TH Y)
(Y ISA PRINCESS)
EXP:
(GO* A X TO Y)
(ASK* A X TH Y IF (MARRY* A Y TH X))
(ACCEPT* A Y TH X)
POST:
(MARRY* A Y TH X)

RESCUE RULE:
HEAD:
(RESCUE** A X TH Y FROM Z)
PRE:
(WANT* TO POSSESS A X TH Y)
(Y ISA PRINCESS)
(THREATEN** A Z TH Y)
EXP:
(GO* A X TO Z)
(SLAY* A X TH Z)
(RESCUE* A X TH Y)
POST:
(MARRY* A Y TH X)
THREATEN RULE:
HEAD:
(THREATEN** A X TH Y)
PRE:
(X ISA DRAGON ANIMATE T)
(X MOD EVIL)
(Y ISA PRINCESS)
(WANT* TO POSSESS A X TH Y)
EXP:
(CARRY** PART OFF A X TH Y TO Z)

CARRY RULE:
HEAD:
(CARRY** PART OFF A X TH Y TO Z)
PRE:
(X ISA DRAGON)
(Y ISA PRINCESS)
(Z ISA DEN POBJ T)
EXP:
(GO* A X TO Y)
(CAPTURE* A X TH Y)
(FLY* A X TH Y TO Z)

QUEST RULE:
HEAD:
(QUEST** A X TH Y)
PRE:
(CHAR INST X)
(Y ISA HOLY_OBJECT MOD LOST)
(WANT* TO POSSESS A X TH Y)
EXP:
(GO* A X TO ORACLE)
(REVEAL* A ORACLE TH PLACE OF Y)
(GO* A X TO PLACE)
(FIND* A X TH Y)
POST:
(POSSESS* A X TH Y)

PRAY RULE:
HEAD:
(PRAY** PART FOR A X TH Y)
PRE:
(CHAR INST X)
(WANT* TO POSSESS A X TH Y)
(Z ISA GOD)
(W ISA CHURCH POBJ T)
EXP:
(OR ((GO* A X TO W)
     (KNEEL* A X PREP (IN TH (FRONT PREP (OF TH ALTER))))
     (PRAY* A X PREP (TO TH Z) PREP (FOR TH Y)))
    ((U ISA PRIEST SEX MALE PERSON T)
     (GO* A X TO U)
     (PAY* A X TH U EXPECT
       (INTERCEDE* A U PREP (WITH TH Z) PREP (FOR TH Y)))
     (PRAY* A U TO Z FOR Y)
     (GRANT* A Z TH (PRAYER POSS BY U))))
POST:
(POSSESS* A X TH Y)
(JOHN ISA PRINCE SEX MALE PERSON T)
(GEORGE ISA KNIGHT SEX MALE PERSON T)
(LANCELOT ISA KNIGHT SEX MALE PERSON T)
(PARSIFAL ISA KNIGHT SEX MALE PERSON T)
(MARY ISA PRINCESS SEX FEMALE PERSON T)
(GUENEVIERE ISA PRINCESS SEX FEMALE PERSON T)
(HOLY_GRAIL ISA HOLY_OBJECT POBJ T)
(SACRED_CROSS ISA HOLY_OBJECT POBJ T)
(CAMELOT ISA PLACE)
(MONTSALVAT ISA PLACE)
(ONCE_UPON_A_TIME ISA TIME)
(IRVING ISA DRAGON ANIMATE T)
(CARMEN ISA DRAGON ANIMATE T)
Appendix B - Text of Fairytale
(FAIRYTALE*1)
(LIVE*2 A (GEORGE1 ISA KNIGHT1 SEX MALE2 PERSON T MOD BRAVE1) LOC (CAMELOT1 ISA PLACE1) DURING (ONCE_UPON_A_TIME1 ISA TIME1))
(DESIRE*2 A GEORGE1 TH (MARY1 ISA PRINCESS1 SEX FEMALE1 PERSON T MOD BEAUTIFUL1))
(WANT*1 TO POSSESS1 A GEORGE1 TH MARY1)
(GO*1 PART TO1 A GEORGE1 TO MARY1)
(ASK*1 A GEORGE1 TH MARY1 IF (MARRY*1 A MARY1 TH GEORGE1))
(NOT OF (ACCEPT*1 A MARY1 TH GEORGE1))
(NOT OF (MARRY*2 A MARY1 TH GEORGE1))
(WANT*2 TO POSSESS2 A (IRVING1 ISA DRAGON1 ANIMATE T MOD EVIL1) TH MARY1)
(GO*2 PART TO2 A IRVING1 TO MARY1)
(CAPTURE*1 A IRVING1 TH MARY1)
(FLY*1 A IRVING1 TH MARY1 PREP (TO TH DEN1))
(GO*3 PART TO3 A GEORGE1 TO IRVING1)
(SLAY*1 A GEORGE1 TH IRVING1)
(RESCUE*1 A GEORGE1 TH MARY1)
(MARRY*4 A MARY1 TH GEORGE1)
(LIVE*3 A GEORGE1 TH MARY1 MANNER HAPPILY_EVER_AFTER1)
Appendix C - Summaries of Fairytale
(FAIRYTALE*1)
((LIVE*2 A (GEORGE1 ISA KNIGHT1 SEX MALE2 PERSON T MOD BRAVE1) LOC (CAMELOT1 ISA PLACE1) DURING (ONCE_UPON_A_TIME1 ISA TIME1))
(DESIRE*2 A GEORGE1 TH (MARY1 ISA PRINCESS1 SEX FEMALE1 PERSON T MOD BEAUTIFUL1))
(WANT*1 TO POSSESS*1 A GEORGE1 TH MARY1)
(GO*1 PART TO1 A GEORGE1 TO MARY1)
(ASK*1 A GEORGE1 TH MARY1 IF (MARRY*1 A MARY1 TH GEORGE1))
(NOT OF (ACCEPT*1 A MARY1 TH GEORGE1))
(NOT OF (MARRY*2 A MARY1 TH GEORGE1))
(WANT*2 TO POSSESS2 A (IRVING1 ISA DRAGON1 MOD EVIL1) TH MARY1)
(CARRY**2 PART OFF1 A IRVING1 TH MARY1 TO DEN1)
(GO*3 PART TO3 A GEORGE1 TO IRVING1)
(SLAY*1 A GEORGE1 TH IRVING1)
(RESCUE*1 A GEORGE1 TH MARY1)
(MARRY*4 A MARY1 TH GEORGE1)
(LIVE*3 A GEORGE1 TH MARY1 MANNER HAPPILY_EVER_AFTER1))

(FAIRYTALE*1)
((LIVE*2 A (GEORGE1 ISA KNIGHT1 SEX MALE2 PERSON T MOD BRAVE1) LOC (CAMELOT1 ISA PLACE1) DURING (ONCE_UPON_A_TIME1 ISA TIME1))
(DESIRE*2 A GEORGE1 TH (MARY1 ISA PRINCESS1 SEX FEMALE1 PERSON T MOD BEAUTIFUL1))
(WANT*1 TO POSSESS1 A GEORGE1 TH MARY1)
(ASK-MARRY**2 A GEORGE1 TH MARY1)
(NOT OF (MARRY*2 A MARY1 TH GEORGE1))
(RESCUE**2 A GEORGE1 TH MARY1 FROM (IRVING1 ISA DRAGON1 ANIMATE T MOD EVIL1))
(MARRY*4 A MARY1 TH GEORGE1)
(LIVE*3 A GEORGE1 TH MARY1 MANNER HAPPILY_EVER_AFTER1))
References
Bobrow, D. G. and Collins, A. 1975. Representation and Understanding. New York: Academic Press.
Bobrow, D. G. and Raphael, B. 1974. "New programming languages for AI research". Computing Surveys, Vol. 6, No. 3, pp.
155-174.
Bobrow, D. and Winograd, T. 1977. "An overview of KRL, a
knowledge representation language." Cognitive Science, Vol. 1,
No. 1, pp. 3-46.
Charniak, E. 1972. "Toward a model of children's story comprehension." AI-TR-266. Cambridge, Mass.: M.I.T. A.I. Lab.
Charniak, E. and Wilks, Y. 1976. Computational Semantics. New
York: North-Holland.
Colmerauer, A.
1978. "Metamorphosis grammars" in Natural
Language Communication with Computers, ed. L. Bolc, New
York: Springer Verlag.
Fillmore, C. 1968. "The case for case" in Universals in Linguistic
Theory, eds. E. Bach and R. Harms, New York: Holt, Rinehart
and Winston.
Grosz, B. 1977. "The representation and use of focus in dialogue
understanding." Technical Note No. 151. Menlo Park, California: Stanford Research Institute.
Hare, D. and Correira, A. 1978. "Generating connected natural
language from case predicate story trees." unpublished manuscript.
Hendrix, G. G. 1976. "Partitioned networks for modelling natural
language semantics." Dissertation. Austin, Texas: Department
of Computer Sciences, The University of Texas.
Kintsch, W. and van Dijk, T. 1978. "Recalling and summarizing
stories." in Current Trends in Textlinguistics, ed. Wolfgang Dressier, de Gruyter.
Kowalski, R. 1979. Logic for Problem Solving, New York: Elsevier
North-Holland.
Lehnert, W. 1977. "Question answering in a story understanding
system." Cognitive Science, Vol. 1, pp. 47-73.
Meehan, J. 1976. "The metanovel: writing stories by computer."
Dissertation. New Haven, Connecticut: Department of Computer Sciences, Yale University.
Meyer, B. 1975. The Organization of Prose and its Effects on Recall.
Amsterdam: North-Holland.
Minsky, M. 1975. "A framework for representing knowledge." In
The Psychology of Computer Vision. P. Winston ed. New York:
McGraw-Hill.
Norman, D. A. and Rumelhart, D. E. 1975. Explorations in
Cognition. San Francisco: W. H. Freeman and Company.
Pereira, F. and Warren, D. 1980. "Definite clause grammars for
language analysis - a survey of the formalism and a comparison
with augmented transition networks". Artificial Intelligence 13,
pp. 231-278.
Rumelhart, D. E. 1975. "Notes on a schema for stories." in Representation and Understanding, eds. D. Bobrow and A. Collins,
New York: Academic Press.
Schank, R. C. 1975. "The structure of episodes in memory." in
Representation and Understanding, eds. D. Bobrow and A. Collins, New York: Academic Press.
Schank, R. C. 1975b. Conceptual Information Processing. New York:
North-Holland.
Schank, R. and Abelson, R. 1977. Scripts, Plans, Goals and
Understanding. New York: Wiley.
Simmons, R. F. 1978. "Rule-based computations on English." in
Pattern-Directed Inference Systems. Hayes-Roth, R. and Waterman, D. eds. New York: Academic Press.
Simmons, R. F. and Correira, A. 1979. "Rule forms for verse,
sentences and story trees." in Associative Networks - Representation and Use of Knowledge by Computers, ed. N. Findler, New
York: Academic Press.
Simmons, R. F. and Chester, D. 1977. "Inferences in quantified
semantic networks". Proceedings of Fifth IJCAI, pp. 267-274.
van Dijk, T. A. 1975. "Recalling and summarizing complex discourse." unpublished manuscript. Amsterdam: Department of
General Literary Studies, University of Amsterdam.
Wilks, Y. 1975. "A preferential, pattern-matching semantics for
natural language understanding". Artificial Intelligence 6, pp.
53-74.
Winograd, T. 1972. Understanding Natural Language. New York:
Academic Press.
Woods, W. A. 1970. "Transition network grammars for natural
language analysis". Comm. ACM, Vol. 13, pp. 591-602.
Young, R. 1977. "Text Understanding: a survey." American Journal
of Computational Linguistics, Vol. 4, No. 4, Microfiche 70.
Alfred Correira is a Systems Analyst for the Computation Center of the University of Texas at Austin. He received the M.A. degree in computer science from the University of Texas in 1979.