A Syntactic Constituent Ranker for Speculation and Negation Scope Resolution

Erik Velldal, Jonathon Read†, Lilja Øvrelid, Stephan Oepen
University of Oslo, Department of Informatics
[email protected]

Date: 21 September 2011
Venue: Master Seminar in Language Technology, Department of Informatics, University of Oslo
Introduction

Task: Given an appropriate cue of speculation or negation, determine its scope, i.e. the subsequence of the sentence that is affected. For example:

- {The unknown amino acid may be used by these species}.
- Samples of the protein pair space were taken {⟨instead of⟩ considering the whole space} [...]

Some applications:
- Information Extraction
- Sentiment Analysis
Introduction

Our approach: generate candidate scopes from constituents in syntactic trees; use a supervised learner to rank these candidates.

Pipeline: unlabeled sentence + speculation/negation cue → syntactic constituent parsing → data-driven ranking → predicted scope
Outline
1. Data sets and evaluation measures
2. A brief review of speculation and negation at LNS
3. A syntactic constituent ranker
4. Experiments
1. Data sets and evaluation measures
Data Sets

BioScope Corpus (Vincze et al., 2008)
- 20,924 sentences annotated with speculation/negation cues and associated scopes.
- Sentences are drawn from biomedical abstracts, full papers, and clinical reports.

CoNLL-2010 Shared Task Evaluation Data (Farkas et al., 2010)
- A further set of 5,003 sentences from biomedical papers, annotated for speculation.
Data Sets

            Total       Speculation         Negation
Data        Sentences   Sentences   Cues    Sentences   Cues
Abstracts   11,871      2,101       2,659   1,597       1,719
Papers       2,670        519         668     339         376
CoNLL        5,003        790       1,033       –           –

Note on data set splits for development and evaluation:
- Speculation: all of the Abstracts and Papers are used for development; the CoNLL set is held out.
- Negation: 90% of the Abstracts and Papers are used for development; the remainder is held out.
Preprocessing
- BioScope XML converted to stand-off characterisation.
- Tokenisation performed using the GENIA tagger (Tsuruoka et al., 2005), supplemented with a finite-state tokeniser.
- Part-of-speech tags from both the GENIA tagger (for higher accuracy in the biomedical domain) and TnT (Brants, 2000).
- Dependency parsing using MaltParser (Nivre et al., 2006) stacked with the XLE platform (Crouch et al., 2008) and the English grammar developed by Butt et al. (2002).
- Constituent tree parsing with PET (Callmeier, 2002) and the English Resource Grammar (Flickinger, 2002).
Evaluation measures

Evaluation was performed using the scoring software of the CoNLL-2010 Shared Task.

Prec = tp / (tp + fp)    Rec = tp / (tp + fn)    F1 = (2 × Prec × Rec) / (Prec + Rec)

A true positive requires identification of all words in scope.
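The exact-match scoring can be sketched as follows. This is a minimal illustration in the spirit of the definitions above, not the official CoNLL-2010 scorer; the data representation (pairs of cue and token-index set) is our own.

```python
def score_scopes(gold, predicted):
    """gold, predicted: lists of (cue, frozenset_of_token_indices) pairs.
    A prediction is a true positive only if it matches a gold scope exactly."""
    gold_set = set(gold)
    pred_set = set(predicted)
    tp = len(gold_set & pred_set)
    fp = len(pred_set - gold_set)
    fn = len(gold_set - pred_set)
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return prec, rec, f1

gold = [("may", frozenset(range(0, 8))), ("not", frozenset({3, 4, 5}))]
pred = [("may", frozenset(range(0, 8))), ("not", frozenset({3, 4}))]
print(score_scopes(gold, pred))  # (0.5, 0.5, 0.5): one scope is off by a token
```

Note the strictness: a scope that misses a single word contributes both a false positive and a false negative.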
2. A brief review of speculation and negation at LNS
Supervised classification of cues

We employ a binary word-by-word linear support vector machine classifier using the SVMlight toolkit.

A simpler approach is to treat the set of cues as a near-closed class, applying filtering to disregard all known non-cues before training and applying the classifier.

Cues are described using features of:
- n-grams of lemmas up to three positions to the left; and
- n-grams of surface forms up to two positions to the right.
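A sketch of that feature extraction for a single token, under our reading of "up to three positions to the left" as lemma n-grams ending at the token and "up to two positions to the right" as surface n-grams starting at it; the function and feature-prefix names are illustrative.

```python
def cue_features(lemmas, forms, i):
    """Features for the token at position i, per the n-gram scheme above."""
    feats = []
    for n in range(1, 4):  # lemma n-grams ending at the token, n = 1..3
        if i - n + 1 >= 0:
            feats.append("lem:" + "_".join(lemmas[i - n + 1 : i + 1]))
    for n in range(1, 3):  # surface n-grams starting at the token, n = 1..2
        if i + n <= len(forms):
            feats.append("srf:" + "_".join(forms[i : i + n]))
    return feats

lemmas = ["the", "result", "may", "indicate", "binding"]
forms = ["The", "results", "may", "indicate", "binding"]
print(cue_features(lemmas, forms, 2))
# ['lem:may', 'lem:result_may', 'lem:the_result_may', 'srf:may', 'srf:may_indicate']
```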
Supervised classification of cues

Multi-word cues are handled in post-processing by matching patterns observed in the development data lemmas.

Speculation:
- cannot {be}? exclude
- either .+ or
- indicate that
- may,? or may not
- no {evidence | proof | guarantee}
- not {known | clear | evident | understood | exclude}
- raise the .* {possibility | question | issue | hypothesis}
- whether or not

Negation:
- rather than
- {can | could} not
- no longer
- instead of
- with the * exception of
- neither * nor
- {no(t?) | neither} * nor
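Patterns of this shape can be applied as ordinary regular expressions over the lemmatised sentence. The sketch below compiles a few of them; the pattern list and function name are illustrative, not the authors' implementation.

```python
import re

# A subset of the cue patterns above, rewritten as standard regexes.
MULTIWORD_CUES = [
    r"\bcannot (be )?exclude\b",
    r"\beither \S+ or\b",
    r"\bwhether or not\b",
    r"\brather than\b",
    r"\bno longer\b",
    r"\binstead of\b",
]

def find_multiword_cues(lemmatised_sentence):
    """Return (start, end, text) for every pattern match in the sentence."""
    hits = []
    for pat in MULTIWORD_CUES:
        for m in re.finditer(pat, lemmatised_sentence):
            hits.append((m.start(), m.end(), m.group()))
    return hits

print(find_multiword_cues("we cannot exclude a role , whether or not it bind"))
```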
Supervised classification of cues

Development Results: Speculation
System         Prec    Rec     F1
Baseline       90.49   81.16   85.57
Word-by-word   94.65   82.26   88.02
Filtering      94.13   84.60   89.11

Development Results: Negation
System         Prec    Rec     F1
Word-by-word   87.69   97.61   92.38
Filtering      92.49   97.72   95.03
Heuristics for scope resolution

Heuristics, activated by the cue's part of speech, operate over dependency structures:
- Coordinations scope over their conjuncts;
- Prepositions scope over their argument and its descendants;
- Attributive adjectives scope over their nominal head and its descendants;
- Predicative adjectives scope over referential subjects and clausal arguments, if present;
- Modals inherit subject scope from their lexical verb and scope over their descendants; and
- Passive or raising verbs scope over referential subjects and the verbal descendants.
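One of the rules above can be sketched concretely, e.g. "prepositions scope over their argument and its descendants". The head-list encoding of the dependency tree and both helper names are our own, not the authors' code.

```python
def descendants(heads, node):
    """All nodes dominated by `node` in a head list (heads[i] = head of token i)."""
    out = set()
    frontier = [node]
    while frontier:
        n = frontier.pop()
        for i, h in enumerate(heads):
            if h == n and i not in out:
                out.add(i)
                frontier.append(i)
    return out

def preposition_scope(heads, cue):
    """Scope of a preposition cue: the cue, its argument(s), and their subtrees."""
    args = [i for i, h in enumerate(heads) if h == cue]
    scope = {cue}
    for a in args:
        scope |= {a} | descendants(heads, a)
    return sorted(scope)

# "samples taken without considering the space":
# token 0 samples->1, 1 taken->root, 2 without->1, 3 considering->2, 4 the->5, 5 space->3
heads = [1, -1, 1, 2, 5, 3]
print(preposition_scope(heads, 2))  # [2, 3, 4, 5]
```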
Heuristics for scope resolution

In the case of negation, an additional set of rules is applied:
- Determiners scope over their head node and its descendants;
- The noun "none" scopes over the entire sentence if it is the subject; otherwise it scopes over its descendants;
- Other nouns or verbs scope over their descendants;
- Adverbs with a verbal head scope over the descendants of the lexical verb; and
- Other adverbs scope over the descendants of the head.

If none of these rules activates, a default scope is applied by labeling from the cue to the sentence-final punctuation.
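The default fallback is simple enough to sketch directly. Whether the final punctuation token itself belongs to the scope is our reading of the rule, so treat that boundary choice as an assumption.

```python
def default_scope(tokens, cue_index):
    """Fallback scope: every token from the cue to the end of the sentence,
    excluding a sentence-final punctuation token (an assumption here)."""
    end = len(tokens)
    if tokens and tokens[-1] in {".", "!", "?"}:
        end -= 1
    return list(range(cue_index, end))

tokens = ["These", "sites", "were", "not", "observed", "previously", "."]
print(default_scope(tokens, 3))  # [3, 4, 5]
```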
Heuristics for scope resolution

Development Results: Speculation Scope for Gold Cues (F1)
System             Abstracts   Papers
Default Baseline   69.84       45.21
Dependency Rules   73.67       72.31

Development Results: Negation Scope for Gold Cues (F1)
System             Abstracts   Papers
Default Baseline   51.75       31.38
Dependency Rules   70.76       67.16
3. A syntactic constituent ranker
Selecting candidate constituents

Learn a ranking function over candidate constituents within a parse.

[Figure: ERG derivation tree for "The unknown amino acid may be used by these species."]
Selecting candidate constituents

Candidates are generated by following the path from the cue to the root of the tree, for example:
- modal verb : False
  The unknown amino acid {may} be used by these species.
- head–complement : False
  The unknown amino acid {may be used by these species}.
- subject–head : True
  {The unknown amino acid may be used by these species}.
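The candidate generation above amounts to a walk from the cue's preterminal up to the root, collecting each constituent's token span. The tree encoding (node to parent and span maps) is our own sketch, not the ERG's representation.

```python
def candidates(parents, spans, cue_node):
    """Yield the token span of every constituent on the path cue -> root."""
    node = cue_node
    out = []
    while node is not None:
        out.append(spans[node])
        node = parents[node]
    return out

# Toy tree for "The unknown amino acid may be used by these species":
# node 0 = whole sentence, node 1 = "may ... species", node 2 = "may"
parents = {0: None, 1: 0, 2: 1}
spans = {0: (0, 9), 1: (4, 9), 2: (4, 5)}
print(candidates(parents, spans, 2))  # [(4, 5), (4, 9), (0, 9)]
```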
Alignment of constituents and scopes

Scope boundaries must align with the boundaries of constituents for the approach to be successful.

Alignment can be improved by applying slackening rules, including:
- eliminating constituent-final punctuation;
- eliminating constituent-final parenthesised elements;
- reducing scope to the left when the left-most terminal is an adverb and not the cue; and
- ensuring the scope starts with the cue when it is a noun.
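The first two slackening rules can be sketched as a span-trimming function; the half-open span convention and function name are our own.

```python
def slacken(tokens, start, end):
    """Trim constituent-final punctuation, then a constituent-final
    parenthesised element, from the half-open span [start, end)."""
    while end > start and tokens[end - 1] in {".", ",", ";", ":"}:
        end -= 1
    if end > start and tokens[end - 1] == ")":
        depth, i = 0, end - 1
        while i >= start:  # scan left for the matching open parenthesis
            if tokens[i] == ")":
                depth += 1
            elif tokens[i] == "(":
                depth -= 1
                if depth == 0:
                    end = i
                    break
            i -= 1
    return start, end

tokens = ["may", "bind", "DNA", "(", "Fig.", "1", ")", "."]
print(slacken(tokens, 0, 8))  # (0, 3): drops the period and "( Fig. 1 )"
```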
Alignment of constituents and scopes

[Figure: % of hedge scopes aligned with constituents (roughly 80–95%) as a function of n-best parsing results (1–50), plotted for BSA and BSP.]
Alignment of constituents and scopes

Analysis of the non-aligned items indicated that mismatches arise from:
- parse ranking errors (40%);
- non-syntactic scope (25%), e.g. "This allows us to {address a number of questions: what proportion of [...] can be attributed to a known domain-domain interaction}?";
- divergent syntactic theories (16%);
- parenthesised elements (13%); and
- annotation errors (6%).
Alignment of constituents and scopes

In the papers:
- A parse is produced for 86% of sentences.
- Alignment of constituents and speculation scopes is 81% when inspecting the first parse of sentences.
- The upper-bound performance of the ranker is around 76% when resolving the scope of speculation.
- A similar analysis indicates an upper bound of 77% for negation.
Training procedure

Given a parsed BioScope sentence:
- the candidate constituent that corresponds to the scope is labeled as correct;
- all other candidates are labeled as incorrect;
- a scoring function is learned (using SVMlight).

Candidates are described with three classes of features that record:
- paths in the tree;
- surface order; and
- rule-like linguistic phenomena.
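Training data for such a ranker can be sketched in the SVMlight/SVMrank ranking format, where candidates competing for the same cue share a query id (qid) and the correct candidate gets the higher target value. The feature ids and helper below are illustrative, not the authors' code.

```python
def to_svmlight(qid, candidates):
    """candidates: list of (is_correct, {feature_id: value}) pairs.
    Emits one SVMlight ranking line per candidate."""
    lines = []
    for is_correct, feats in candidates:
        target = 2 if is_correct else 1  # correct candidate ranks above the rest
        body = " ".join(f"{fid}:{val}" for fid, val in sorted(feats.items()))
        lines.append(f"{target} qid:{qid} {body}")
    return lines

cands = [(False, {1: 1.0, 7: 1.0}), (True, {1: 1.0, 3: 1.0})]
for line in to_svmlight(1, cands):
    print(line)
# 1 qid:1 1:1.0 7:1.0
# 2 qid:1 1:1.0 3:1.0
```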
Path features

Paths from speculation cues to candidate constituents:
- specific, e.g. "v vp mdl-p le\hd-cmp u c\sb-hd mc c"
- general, e.g. "v vp mdl-p le\\sb-hd mc c"

With lexicalised variants:
- specific, e.g. "may : v vp mdl-p le\hd-cmp u c\sb-hd mc c"
- general, e.g. "may : v vp mdl-p le\\sb-hd mc c"

Bigrams formed of nodes and their parents, e.g. "v vp mdl-p le hd-cmp u c" and "hd-cmp u c sb-hd mc c".
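The construction of these feature strings can be sketched as follows. Node labels are rendered with underscores here so the space-joined bigrams stay unambiguous; the separator conventions mimic the examples above, and the function is illustrative.

```python
def path_features(path, cue_lemma):
    """Feature strings for a cue-to-candidate node path:
    specific and general paths, lexicalised variants, and node-parent bigrams."""
    specific = "\\".join(path)             # node\node\node
    general = path[0] + "\\\\" + path[-1]  # first node \\ last node
    feats = [specific, general,
             cue_lemma + " : " + specific,
             cue_lemma + " : " + general]
    feats += [a + " " + b for a, b in zip(path, path[1:])]  # bigrams
    return feats

path = ["v_vp_mdl-p_le", "hd-cmp_u_c", "sb-hd_mc_c"]
feats = path_features(path, "may")
for f in feats:
    print(f)
```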
Surface features
- Bigrams formed of preterminal lexical types, e.g. "d - the le aj-hdn norm c", "aj-hdn norm c n - mc le", etc.
- Cue position within candidate (in tertiles), e.g. "33%-66%".
- Candidate size relative to sentence length (in quartiles), e.g. "75%-100%".
- Punctuation preceding the candidate, e.g. "false".
- Punctuation at end of the candidate, e.g. "true".
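The tertile and quartile features above can be sketched with one bucketing helper; the label format matches the examples, but the helper itself is our own.

```python
def bucket(value, total, n_buckets):
    """Map the ratio value/total to a labelled bucket such as '33%-66%'."""
    ratio = value / total if total else 0.0
    idx = min(int(ratio * n_buckets), n_buckets - 1)
    lo = 100 * idx // n_buckets
    hi = 100 * (idx + 1) // n_buckets
    return f"{lo}%-{hi}%"

# cue at token 5 of a 9-token candidate; candidate covers 9 of 12 sentence tokens
print(bucket(5, 9, 3))   # cue position tertile: "33%-66%"
print(bucket(9, 12, 4))  # relative size quartile: "75%-100%"
```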
Rule-like features

Detection of:
- Passivisation
- Subject control verbs occurring with passivised verbs
- Subject raising verbs
- Predicative adjectives
Detecting control verbs with a passivised verb
1. Cue is a subject control verb.
2. Find the first subject–head parent of (1) on the head path.
3. Find the first head–complement child of (2) on the head path.
4. The right-most daughter of (3) or one of its descendants is a passivised verb.
5. The transitive head daughter of the left-most daughter of (2) is not an expletive "it" or "there".

[Figure: schematic tree illustrating steps (1)–(5) on "the unknown amino acid <may> be used by these species."]
Using predictions from auxiliary systems

Combine with other systems by introducing features recording matches with:
- the start of the auxiliary prediction;
- the end of the auxiliary prediction; or
- the entirety of the auxiliary prediction.

In cases where an ERG parse is unavailable, we back off to the prediction of the dependency rules.
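The combination features reduce to span comparisons; the sketch below encodes them for one candidate span and one auxiliary prediction (half-open spans and feature names are our own choices).

```python
def auxiliary_features(candidate, auxiliary):
    """Match features between a candidate span and an auxiliary system's
    predicted span, both given as (start, end) token offsets."""
    (cs, ce), (as_, ae) = candidate, auxiliary
    return {
        "aux_start_match": cs == as_,
        "aux_end_match": ce == ae,
        "aux_exact_match": (cs, ce) == (as_, ae),
    }

print(auxiliary_features((4, 9), (0, 9)))
# {'aux_start_match': False, 'aux_end_match': True, 'aux_exact_match': False}
```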
4. Experiments
Ranker optimisation

Speculation and negation in aligned items from papers (F1)
System                   Speculation   Negation
Random baseline          26.76         21.73
Path                     78.10         79.28
Path+Surface             79.93         78.88
Path+Rule-like           83.72         89.47
Path+Surface+Rule-like   85.30         89.24

Training with the first aligned constituent in n-best parses and testing with m-best parses did not greatly impact performance; the optimal values are n = 1 and m = 3.
n-best optimisation for combined system

[Figure: F1 (74–77) as a function of training n-best (0–25) and testing m-best (0–25).]
Development results

Resolving scope of gold cues (F1)
System               Speculation   Negation
Default baseline     64.89         48.07
Constituent ranker   73.67         67.46
Dependency rules     73.39         70.11
Combined             78.69         73.98
Evaluation results

We evaluate end-to-end performance for speculation on the CoNLL-2010 Shared Task evaluation data.

Speculation
System                      Prec    Rec     F1
Morante et al. (2010)       59.62   55.18   57.32
Cue classifier + combined   62.00   57.02   59.41
Evaluation results

Negation (cross-validated on abstracts)
System                      Prec    Rec     F1
Morante et al. (2009)       66.31   65.27   65.79
Cue classifier + combined   68.93   73.03   70.92

Negation (on held-out papers)
System                      Prec    Rec     F1
Morante et al. (2009)       42.49   39.10   40.72
Cue classifier + combined   61.77   71.55   66.30
Conclusions
- It is possible to learn a discriminative ranking function for choosing subtrees from HPSG-based constituent structures that match speculation scopes;
- Combining the rule-based and ranker-based approaches to resolve the scope of cues predicted by our classifier achieves the best results to date on the CoNLL-2010 Shared Task evaluation data;
- The system is readily adapted to deal with negation, also achieving state-of-the-art results.