III. Metonymy - Trinity College Dublin

ESSLLI 2007
19th European Summer School in Logic, Language and Information
August 6-17, 2007
http://www.cs.tcd.ie/esslli2007
Trinity College Dublin
Ireland
Course Reader
ESSLLI is the Annual Summer School of FoLLI,
The Association for Logic, Language and Information
http://www.folli.org
Metonymy.
Reading Materials (III) for the ESSLLI-07 course
on Figurative Language Processing
Birte Lönneker-Rodman
July 1, 2007
1 What is Metonymy?

1.1 Introduction
The following definition follows Kövecses (2002, pp. 143–144):
In metonymy one entity, or thing (such as Shakespeare, Pearl Harbor,
Washington, glove in Examples 1 to 4), is used to indicate [. . . ] another
entity. One kind of entity “stands for” another kind of entity.
(1) I’m reading Shakespeare.
(2) America doesn’t want another Pearl Harbor.
(3) Washington is negotiating with Moscow.
(4) We need a better glove at third base.
Exercise/Question: What might be the entities indicated? Can we
find literal paraphrases for these sentences?
The metonymically used word refers to an entity that provides mental
access to another entity (the one we refer to in the literal paraphrase). For
this to be possible, the entities must be related to one another. Particular,
systematically exploited relations give rise to larger groups or clusters of
metonymic expressions, in which members of particular classes of entities
stand for members of particular other classes of entities. These will be
discussed in Section 1.2 under the label of “regular metonymy”.
Terminology. The entity that “stands for” another one can be called the
vehicle entity. The vehicle is thus the entity that provides mental access
to the other entity. The entity that becomes accessible (or: to which mental access is provided) is called the target entity (cf. Kövecses, 2002, p. 145).
This can be represented by the following pattern:
(5) vehicle-entity for target-entity
A target entity, being an individual or instance, should not be confused with a target domain as in metaphorical mappings, which is a class or concept, or rather a bundle of related classes and concepts (see the Metaphor chapter).
As opposed to source and target domain in metaphor, vehicle and target
entity in a given metonymic expression are closely related to each other. The
relation underlying the metonymy is based on the “closeness” or contiguity of vehicle and target entity in conceptual space. We will easily find out
why the entities in the examples mentioned above are “close” to each other:
What do they have to do with each other? Rather than belonging to different
conceptual domains, metonymically related entities are members of the same
domain. For example, both an author and written works belong to the
(artistic) production domain, containing the producer, the product,
the place where the product is made, etc. The coherence of such a domain
is thought to be brought about and ensured by the repeated co-occurrence
of such entities in the world, where we experience them as being (closely)
“together”. (Kövecses, 2002, pp. 145–146)
As with metaphorical mappings, though, some relations between some types of entities give rise to widespread, regular metonymic patterns
(whereas there are also many relations between many entities that are rarely
or never exploited for metonymy).
1.2 Regular metonymy as a form of regular polysemy
The regularity of some of the metonymic patterns has been noted by Apresjan (1973) and Nunberg (1995) (under the label of “systematic polysemy”),
among others. Examples of these regular patterns can also be found in
(Lakoff and Johnson, 1980, pp. 38-39). In what follows, small capitals will
be used to indicate classes of entities, or concepts. Therefore, the headlines
are all in small capitals.
the part for the whole

(6) We don’t hire longhairs.
(7) The Giants need a stronger arm in right field.

object used for user

(8) The sax has the flu today.
(9) The gun he hired wanted fifty grand.
(10) The buses are on strike.
(11) [flight attendant, on a plane]: Ask seat 19 whether he wants to swap. (Nissim and Markert, 2003, p. 56)

the place for the event

(12) Let’s not let Thailand become another Vietnam.
(13) Watergate changed our politics.
Other commonly mentioned regular metonymies include the tree for
the wood: The table is made of oak, cf. Nunberg (1995). There are
cross-linguistic differences as to which regular metonymies are available, cf. Nunberg (1995). Some, especially those involving place names and
other proper names, seem to be very widespread among different languages,
though.
Exercise/Question: Is the fruit for the tree a regularly exploited
metonymic pattern in your language?
As far as country names as vehicle entities are concerned, an English
corpus study (Markert and Nissim, 2003) reveals that three coarse-grained
mapping patterns account for most of the metonymies encountered (Nissim
and Markert, 2003, p. 57):
1. place for people, subsuming different subtypes;
2. place for event;
3. place for product.
See Section 2 below for further details on this and other corpus studies.
In spite of the possible grouping of metonymies into clusters, which presumably facilitates their understanding, a correct and detailed interpretation of metonymies always requires the activation of world knowledge. For example, the reader has to figure out to which target entity the entity mentioned by the metonymic label is related (e.g., long hair and arms are parts of persons) or which function the user of a used object is assumed to fulfil (e.g. player of the sax, killer (using the gun), driver (but not passenger) of the bus, passenger currently occupying a numbered seat). Probably because of the relative unpredictability of the function of the user, Nissim and Markert (2003, p. 57) call Example (11) “unconventional”.
1.3 Two senses per context?
When annotating a corpus for possibly metonymic place names, Markert
and Nissim (2003) found examples where two predicates are involved, each triggering a different reading, thus yielding a “mixed” reading. This occurs
especially often with coordinations and appositions.
(14) . . . they arrived in Nigeria, hitherto a leading critic of the South African regime . . . (Nissim and Markert, 2003, p. 57)
Example 14 invokes both a literal reading of Nigeria (triggered by “arriving in”) and a place-for-people reading (triggered by “leading critic”).
Similarly complex cases have been discussed by Nunberg (1995) (Examples 15 to 17):
(15) Yeats did not like to hear himself read in an English accent. (Contrast: I am often read in an English accent.)
(16) Roth is Jewish and widely read.
(17) The newspaper Mary works for was featured in a Madonna video.
Nunberg is of the opinion that the target expressions (metonymical vehicles) in these examples do not receive different (double) readings. Rather,
he argues, the predicates (i.e., what is “said about” them) have a transferred
reading so that they can be applied to the same (identical) entity referred to
by the vehicle. This view is not easily compatible with the cognitive linguistic viewpoint presented by Lakoff and Johnson (1980) or Kövecses (2002),
who do not discuss these cases.
1.4 How figurative are metonymies?
We refer back to the discussion of literal, non-literal and figurative linguistic
expressions in the session on Figurative Language.
By paraphrasing the metonymic expressions in our examples above with
cognitively similarly simple expressions, we have shown that the additional
naming criterion is fulfilled for metonymies. To decide whether they are
also figurative, we must investigate their image component. For instance,
do Examples (1) to (4) above evoke images?
Dobrovol’skij and Piirainen (2005) argue that metonymies which participate in regular polysemy do “not evoke any images” and “do not imply any
additional pragmatic effects”, but are merely a “powerful and near-universal
mechanism for denoting conceptually related entities in a most economical
and natural way”. The existence of an image component is only recognized
for certain idiom-like metonymies such as (18) or (19).
(18) a helping hand
(19) to keep an eye on someone/something
Whether speakers perceive an image component by consciously evoking
the literal meaning of the metonymic expression and combining this with
the context, or whether they do not think of the product as an image, is a
question that cannot be decided on the basis of linguistic facts alone. It is
not excluded that some speakers might perceive an image component, for
at least some of the metonymies discussed here.
As long as the image component question has not been resolved, it is safest to treat metonymies in general as non-literal (but non-figurative) language, although some speakers might think of some or even all of them as figurative.
2 Metonymy annotation and frequency
Country names. Annotating 1,000 occurrences of country names (with
a three-sentence context, from the British National Corpus) and labeling
them according to eleven categories, Markert and Nissim (2003) achieved
an inter-annotator agreement between two annotators of kappa 0.88. The
categories are as follows:
1. unsure: context not understandable for annotator,
2. nonapp: country name is part of a larger expression denoting a different named entity, e.g. US Open,
3. homonym: a name from a different semantic class, e.g. person name:
Professor Greenland,
4. literal, e.g. coral coast of Papua New Guinea or Britain’s current account deficit,
5. obj-for-rep: object for representation, e.g. This is Malta. (pointing
to a photograph),
6. obj-for-name: object for name, the name being used as a mere signifier, e.g. (from the organization corpus, see below) Chevrolet is feminine because of its sound (it’s a longer word than Ford, has an open vowel at the end, [. . . ]),
7. place-for-event: cf. Example (2) above,
8. place-for-people: This is often labeled place for institution in cognitive linguistic reference works, but there are different subclasses (distinguished in the annotation scheme of Markert and Nissim (2003)). Examples include: America did try to ban alcohol and a 29th-minute own goal from San Marino defender Claudio Canti. A country name can indeed also stand for (almost) the entire population of the country, which makes place for institution too narrow a category;
9. place-for-product: the place for a product manufactured there (e.g. Bordeaux),
10. othermet: “unconventional metonymies”, cf. Example (11) above,
11. mixed: cf. Subsection 1.3 above.
Of the 1,000 examples of country names annotated by Markert and Nissim (2003), 61 (6.1%) were excluded as noise (nonapp, homonym), and 14 examples could not be agreed on after discussion (unsure). This results in 925 clearly labeled examples, 737 of which are literal. Among the remaining metonymic ones, the largest subgroup is place-for-people with 161 instances, and place-for-event metonymies constitute the smallest group (three members or 0.3% of the entire data).
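
The agreement figure of kappa 0.88 reported above is Cohen's kappa for two annotators. As a minimal illustration of how such a value is computed (this is not the authors' evaluation code, and the toy annotations below are invented), kappa compares the observed agreement with the agreement expected by chance:

from collections import Counter

def cohen_kappa(labels_a, labels_b):
    """Cohen's kappa for two annotators labelling the same items."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected agreement if the two annotators labelled independently.
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Invented toy annotations over part of the scheme described above.
ann1 = ["literal", "place-for-people", "literal", "literal", "place-for-event"]
ann2 = ["literal", "place-for-people", "literal", "place-for-people", "place-for-event"]
print(cohen_kappa(ann1, ann2))  # about 0.69: good agreement beyond chance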
Organization names. Markert and Nissim (2006) report on the annotation efforts mentioned above, as well as on the annotation of organization names. They distinguish between five different subtypes of regular
metonymic mappings involving organization names:
1. organisation-for-members: The concrete referents are often underspecified (e.g. spokesperson, certain classes of employees, all members,
. . . ). Example: Last February NASA announced [. . . ].
2. organisation-for-facility, e.g. The opening of a McDonald’s is
a major event.
3. organisation-for-product, e.g. press-men hoisted their notebooks
and their Kodaks.
4. organisation-for-index, especially a stock index, e.g. Eurotunnel was the most active stock.
5. organization-for-event, especially a particularly bad or otherwise
outstanding event associated with the organization, e.g. . . . the resignation of Leon Brittan from Trade and Industry in the aftermath of
Westland.1
In addition to these five categories, the labels object-for-name (see
above), other and mixed were available for manual annotation of 984 instances of organization names. Most of the organization name instances in the corpus were literal (64.3%). Overall inter-annotator agreement for two annotators was found to be kappa 0.894. The highest reliability scores were achieved for object-for-name (6 instances) and organisation-for-facility (14 instances), but the metonymic category with the most members (organisation-for-members, 188 instances) also had a kappa score higher than that of the overall sample. Kappa was lowest for the more complex class mixed.
Markert and Nissim (2003) and Markert and Nissim (2006) point out
that kappa values for metonymy annotation are higher in “second runs”
on independent data, after a training-and-discussion phase involving
annotation on a first example set.
2.1 Availability
• The corpora annotated by Markert and Nissim have been made available at http://www.cogsci.ed.ac.uk/∼malvi/mascara/index.html
[30 June, 2007].
• A different corpus annotated for some types of metonymy (among
other things) is the ACE corpus (http://projects.ldc.upenn.edu/
ace/data/, [30 June, 2007]). The latest version of the annotation
guidelines is (LDC, 2005). The annotated data can be obtained from
the Linguistic Data Consortium (LDC), but this availability seems
to be restricted to sites participating in the ACE project (Automatic
Content Extraction).
1. Note that a human reader, even if unaware of the “economic scandal involving the helicopter company Westland in Britain in the 1980s” (Markert and Nissim, 2006), will infer that Westland stands for an event.
Figure 1: Similarity levels in regular metonymy.
3 Metonymy recognition as Word Sense Disambiguation (WSD)

3.1 Nissim and Markert (2003)
A class-based approach is supposed to take advantage of the regularities
of metonymic mappings, even if exemplified by different lexical material:
[W]hereas a classic (supervised) WSD algorithm is trained on
a set of labelled instances of one particular word and assigns
word senses to new test instances of the same word, (supervised)
metonymy recognition can be trained on a set of labelled instances of different words of one semantic class and assign literal
readings and metonymic patterns to new test instances of possibly different words of the same semantic class. (Nissim and
Markert, 2003, p. 58)
The similarities that follow from the regular mapping are illustrated by
Figure 1, taken from Nissim and Markert (2003, p. 57).
Nissim and Markert (2003) use a decision list classifier and estimate
probabilities via maximum likelihood with smoothing (p. 58). Due to data
sparseness, the trained classifier will not be able to make a decision for all
cases in the test data, so accuracy and coverage are measured as follows:
Accuracy = (# correct decisions made) / (# decisions made)    (1)

Coverage = (# decisions made) / (# test data)    (2)
When the classifier cannot make a decision, a backoff decision is to assign
the most frequent class in the training data, which is literal in this case.
Accb is then the accuracy for the entire data, after this backoff method has
been applied. After backoff, coverage is always 1.0 or 100%. Precision,
recall, and f-measure for metonymies (i.e. for non-literal readings) are also
reported. When evaluating the results, a baseline is established by always
assigning the most frequent reading (literal) to each example in the test
data.
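
To make these definitions concrete, the sketch below computes accuracy, coverage and backoff accuracy from a list of predictions in which undecided cases are marked as None; the predictions and gold labels are invented toy data, not the authors':

def evaluate(predictions, gold, backoff_label="literal"):
    """predictions contains None wherever the classifier makes no decision."""
    decided = [(p, g) for p, g in zip(predictions, gold) if p is not None]
    accuracy = sum(p == g for p, g in decided) / len(decided)  # Equation (1)
    coverage = len(decided) / len(gold)                        # Equation (2)
    # Backoff: undecided cases get the most frequent class seen in training.
    backed_off = [p if p is not None else backoff_label for p in predictions]
    acc_b = sum(p == g for p, g in zip(backed_off, gold)) / len(gold)
    return accuracy, coverage, acc_b

gold = ["literal", "literal", "place-for-people", "literal", "place-for-event"]
pred = ["literal", None, "place-for-people", None, "literal"]
print(evaluate(pred, gold))  # approximately (0.67, 0.6, 0.8)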
Test results are reported on the annotated corpus of 925 clearly labeled examples of country names (cf. Section 2 above), using 10-fold cross-validation.
3.1.1 The basic feature – Algorithm I
The feature on which the first and most important decision list classifier is
trained is called role-of-head. It is one feature, composed of
• the grammatical role (grammatical function) of the possibly metonymic
word (target word): active subject, subject of a passive sentence, direct
object, modifier in a prenominal genitive, other nominal premodifier,
dependent in a prepositional phrase – other functions are summarized
under the label “other”;
• the lemmatized lexical head of the possibly metonymic word, in a
dependency grammar framework.
Examples:
(20) England won the world cup.
Possibly metonymic word: England.
Value for role-of-head: subj-of-win
(21) Britain has been governed by [. . . ].
Possibly metonymic word: Britain.
Value for role-of-head: subjp-of-govern
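
A minimal sketch of how such a feature string can be composed is given below. It assumes that the grammatical function tag and the lemmatized head are already available (in the experiment they were annotated manually), and the tag names used here are only illustrative:

GF_LABELS = {"subj", "subjp", "dobj", "gen", "premod", "pp"}

def role_of_head(grammatical_function, head_lemma):
    """Compose the single role-of-head feature, e.g. 'subj-of-win'."""
    gf = grammatical_function if grammatical_function in GF_LABELS else "other"
    return f"{gf}-of-{head_lemma}"

# Examples (20) and (21); GF and head lemma come from a dependency analysis.
print(role_of_head("subj", "win"))      # subj-of-win
print(role_of_head("subjp", "govern"))  # subjp-of-govern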
The feature was manually annotated. Overall, target words with grammatical function (GF) subj or subjp were predominantly metonymical,
whereas all other GFs were biased towards a literal reading of the target
word.
After training on the role-of-head feature alone, precision and accuracy
were found to be relatively high (accuracy as defined in Equation 1 above:
0.902), but coverage as defined in Equation 2 above was low (0.631). This
makes it necessary to resort to backoff (always assign literal) in 36.9% of
the cases and results in low recall (0.186) and low F-measure (0.298) for
metonymies.
Some wrong assignments were detected; in these cases, the chosen feature is “too simplistic”. The main reasons for wrong assignments are:
1. semantically “empty” lexical heads, such as the verbs be and have, or
the preposition with, which can occur in both literal and non-literal
contexts (cf. border with Hungary vs. talk with Hungary);
2. ambiguity of the premod relation in noun-noun compounds (cf. US
operation: an operation in or by the US?).
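
The sketch below shows a smoothed single-feature classifier of this kind, a simplified stand-in for the decision list of Nissim and Markert (2003), trained on invented instances; unseen feature values yield no decision and are left to the backoff step:

from collections import defaultdict

class DecisionList:
    """Single-feature classifier with add-one smoothing (a simplified sketch)."""

    def __init__(self, alpha=1.0):
        self.alpha = alpha
        self.counts = defaultdict(lambda: defaultdict(float))
        self.labels = set()

    def train(self, data):  # data: [(feature_value, label), ...]
        for value, label in data:
            self.counts[value][label] += 1
            self.labels.add(label)

    def predict(self, value):
        if value not in self.counts:
            return None  # no decision; the caller applies backoff
        seen = self.counts[value]
        total = sum(seen.values()) + self.alpha * len(self.labels)
        probs = {label: (seen[label] + self.alpha) / total for label in self.labels}
        return max(probs, key=probs.get)

dl = DecisionList()
dl.train([("subj-of-win", "metonymic"), ("subj-of-win", "metonymic"),
          ("pp-of-in", "literal"), ("pp-of-in", "literal")])
print(dl.predict("subj-of-win"))  # metonymic
print(dl.predict("subj-of-say"))  # None -> backoff to the most frequent class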
3.1.2 Generalization – Algorithm II
Low coverage was identified as the main problem of the basic algorithm, and
it can be explained by data sparseness: many head words occur only once in
the data. Therefore, two methods were used to relax the identity-of-feature
criterion, in those cases where a decision could not be made based on the full
feature. This means that everything which can be classified by algorithm I
remains untouched, and generalization applies only to those examples that
would otherwise have been sent to backoff directly.
The first generalization method involves generalizing the lexical head to
similar words from Lin’s (Lin, 1998) thesaurus of content words sorted by
part-of-speech. This allows a generalization from Example (22) to Example (23), in case the feature subj-of-lose had not been seen in training:
(22) England won the World Cup.
(23) Scotland lost in the semi-final.
As soon as a decision can be made based on a similar word, the algorithm
stops, which means that less similar words are also less likely to influence
the decision. Nevertheless, up to 50 similar words were examined to
find a feature seen in the training data: the more words are examined, the
better the recall and precision for metonymies. For (up to) 50 similar words, the F-measure value is 0.542, recall 0.410 and precision 0.802. Accuracy without backoff lowers a little, to 0.877, but accuracy with backoff is better than before. The sets dealt with by the different algorithms are visualized in Figure 2.

Figure 2: Results without and with generalization over semantically similar lexical heads.
Exercise/Question: What are possible reasons for the increase in
(overall) precision, using this generalization method?
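
Continuing the DecisionList sketch from Section 3.1.1 above, the thesaurus-based relaxation can be pictured as follows; the similarity lists stand in for Lin's (1998) thesaurus and are invented here:

def predict_with_thesaurus(dl, gf, head, similar_words, max_similar=50):
    """Back off from an unseen head to similarity-ranked neighbours.

    dl            -- a trained classifier with a predict() method (see above)
    similar_words -- head lemma -> neighbours, most similar first
                     (a stand-in for Lin's (1998) thesaurus)
    """
    decision = dl.predict(f"{gf}-of-{head}")
    if decision is not None:
        return decision  # Algorithm I already decides this case
    for neighbour in similar_words.get(head, [])[:max_similar]:
        decision = dl.predict(f"{gf}-of-{neighbour}")
        if decision is not None:
            return decision  # stop at the first informative neighbour
    return None  # still undecided: backoff to 'literal'

# Example (23): 'subj-of-lose' is unseen, but 'win' is a close neighbour of 'lose'.
similar = {"lose": ["win", "defeat", "beat"]}
print(predict_with_thesaurus(dl, "subj", "lose", similar))  # metonymic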
3.1.3 Generalization – Algorithm III
The second generalization method is to drop head word information altogether and to base the decision on the preference indicated by the feature
grammatical function only, for which a separate decision list classifier has
been trained. This algorithm thus assigns all target words in subject position to the non-literal group (cf. information on grammatical function in
Subsection 3.1.1 above) and performs slightly better than the one using the
previous generalization method.
3.1.4 Combination
The best choice is to combine the two generalization methods and use GF-based decisions only if the target is a subject, and thesaurus-based generalization otherwise. This method yields a precision of 0.814, a recall of 0.510 and an F-measure of 0.627. Accuracy of the combined method is 0.894, and accuracy after backoff is 0.87.
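
Again continuing the sketches above, the combined strategy amounts to a few lines of control flow; gf_classifier stands for the separate decision list trained on the grammatical function alone:

def combined_predict(dl, gf_classifier, gf, head, similar_words):
    """Combination of Algorithms II and III (sketch, not the original code)."""
    decision = dl.predict(f"{gf}-of-{head}")
    if decision is not None:
        return decision  # Algorithm I decides whenever it can
    if gf in ("subj", "subjp"):
        # subjects: use the classifier trained on grammatical function only
        return gf_classifier.predict(gf)
    # all other grammatical functions: thesaurus-based generalization
    return predict_with_thesaurus(dl, gf, head, similar_words)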
3.1.5 Clean data
Remember that the data used for this experiment is very clean. First of
all, manual tagging removed all instances of names that were not country
names, as well as those whose metonymicity was undecidable for a human.
Second, the manual annotation of the training feature ensures high quality
of the data. The authors also measured the influence of automatic parsing on the same data set, and found that it reduces F-measures for non-literal
recognition by about 10%, as compared to training on manually annotated
features.
3.2 Peirsman (2006)
Peirsman (2006) experiments with different approaches to the problem discussed in Nissim and Markert (2003), using the same data set of mixed country names (cf. Section 2 above) and an additional (separate) data set for the country name Hungary, as well as the same evaluation methods. He finds that an unsupervised approach, based on a WSD algorithm proposed by Schütze (1998) and reminiscent of LSA (Latent Semantic Analysis, (Landauer and Dumais, 1997)), fails to reach the majority baseline for accuracy. Nevertheless, this unsupervised WSD algorithm does find two clusters that significantly correlate with the manually assigned literal/non-literal labels (this is measured with the χ2 test).
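
The correlation check can be illustrated as follows (a sketch with an invented contingency table, not Peirsman's figures): the induced clusters and the gold literal/non-literal labels are cross-tabulated and tested with the χ2 test:

from scipy.stats import chi2_contingency

# Rows: induced clusters; columns: gold literal vs. non-literal labels.
# The counts are invented for illustration.
table = [[310, 40],   # cluster 1: mostly literal instances
         [ 90, 85]]   # cluster 2: relatively many metonymic instances
chi2, p_value, dof, expected = chi2_contingency(table)
print(chi2, p_value)  # a small p-value indicates a significant association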
In a different experiment, he includes subcategorization into different
classes of metonymical mappings (for the available subclasses, see Subsection 2 on corpus annotation). With grammatical function and lexical head
as features (similar to Nissim and Markert (2003)), Peirsman (2006)
trains a memory-based learning classifier (TiMBL2 ). This achieves an accuracy of 86.6% and an F-score of 0.612 and is thus able to nearly reproduce
the best results achieved by Nissim and Markert (2003), i.e. those of their
combined generalization algorithm (87.0% accuracy and an F-measure of
0.627).
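
Memory-based learning as implemented in TiMBL is, at its core, k-nearest-neighbour classification over symbolic features. The sketch below uses scikit-learn's nearest-neighbour classifier with one-hot encoded features as a rough stand-in; it is not the TiMBL setup used by Peirsman, and the training instances are invented:

from sklearn.feature_extraction import DictVectorizer
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

# Each instance: grammatical function and lexical head of the country name.
train_X = [{"gf": "subj", "head": "win"},
           {"gf": "subj", "head": "announce"},
           {"gf": "pp", "head": "in"},
           {"gf": "premod", "head": "coast"}]
train_y = ["place-for-people", "place-for-people", "literal", "literal"]

model = make_pipeline(DictVectorizer(), KNeighborsClassifier(n_neighbors=1))
model.fit(train_X, train_y)
print(model.predict([{"gf": "subj", "head": "declare"}]))  # ['place-for-people']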
A similar experiment was to enlarge the previous feature set by WordNet
hypernym synsets of the head’s first sense. Although hypernym synsets of different semantic granularity were experimented with, and even in spite of
manual sense disambiguation, this feature did not seem to have a noticeable
influence on accuracy; it improved recall and F-score slightly. It is not clear whether this failure to improve accuracy is due to a mismatch between WordNet synsets and the metonymical classes (semantic labels), to the high frequency of function words as lexical heads, or to other unknown factors.

2. http://ilk.uvt.nl/software.html [30 June, 2007].
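
For concreteness, the hypernym synsets of a head's first sense can be pulled from WordNet with NLTK roughly as follows; this is a sketch only (Peirsman's exact feature construction is not reproduced), and it requires the NLTK WordNet data to be installed:

from nltk.corpus import wordnet as wn  # requires the NLTK WordNet data

def hypernym_features(head_lemma, pos="v", depth=3):  # "v" = verb in WordNet
    """Names of hypernym synsets (up to `depth` levels) of the first sense."""
    synsets = wn.synsets(head_lemma, pos=pos)
    if not synsets:
        return []  # e.g. function words often have no WordNet entry
    features, frontier = [], [synsets[0]]
    for _ in range(depth):
        frontier = [h for s in frontier for h in s.hypernyms()]
        features.extend(s.name() for s in frontier)
    return features

print(hypernym_features("win"))  # hypernym synset names of win's first verb sense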
3.3 An adaptation of the approach by Gedigian et al. (2006)
In the session on Metaphor, we have discussed a metaphor classification
approach for Motion and Cure verbs by Gedigian et al. (2006). This verb-centered approach has recently been applied to metonymy recognition in an
experiment by Schneider (2007). For this experiment, verbs were chosen
from the FrameNet Statement frame: acknowledge, add, address, admit,
etc. A random subset of sentences from the Penn TreeBank containing
these verbs was labeled for literal, metonymic, or other3 . Out of 1,600
investigated sentences, 832 were literal and 420 metonymic, which brings
the ratio of literal examples among the relevant subset to 0.664.
Of special interest here is the PropBank argument ARG0, which labels the speaker of these communication verbs. A (superficial) look at the annotated data reveals (at least) the following different types of metonymy:
• organization for members - proper name: Rolls-Royce Motor
Cars Inc. said it expects its U.S. sales to remain steady at about
1,200 cars in 1990 .
• organization for members or place for people, but no proper name:
– [. . . ] the SEC ’s office of disclosure policy , which proposed
the changes .
– The company said the plan , under review for some time , will
protect shareholders [. . . ].
• product for producer/written document for writer (probably in
interaction with a metaphor such as writing is verbal communication): Many of the letters maintain that investor confidence has
been so shaken [. . . ].
The annotations show biases for some of the verbs. To illustrate, remark,
observe and caution were not found to be used with a metonymic speaker
in the corpus, whereas the verbs state and address were used predominantly
with a metonymic speaker (each in 66% of their occurrences).
3. Including slightly different senses of the verb, e.g. The state agency ’s figures confirm . . . .
Schneider (2007) trained a Maximum Entropy classifier4 on this data,
testing different feature sets. Only accuracy measures are reported. The
classifier outperforms baseline accuracy (66.4%, achieved by assigning literal
throughout) even if trained on the verb as the only feature (accuracy on test
set: 78.14%). Not surprisingly, though, the best result of 91.5% accuracy
is achieved by using the feature ARG0 Type (semantic head of ARG0). A
combination of ARG0 Type with the verb as additional feature yields a
slightly lower accuracy (89.46%).
Building on the insight that there are relatively few and distinguishable groups of literal and metonymic usages, based on the semantic type of
ARG0’s head (see itemized list above), a generalization over the head words
was performed. This uses 26 general categories from an expanded version of
WordNet (Snow et al., 2006) as “type” of the argument, instead of using the
semantic head itself. In this case, hypernymic generalization helped improve
accuracy (94.3% on test set).
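
As an illustration of this setup (a sketch with invented toy instances; the actual experiment used Zhang Le's maximum entropy toolkit and the PropBank and WordNet resources described above), a maximum entropy classifier over the verb and the type of ARG0's head can be approximated with multinomial logistic regression:

from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy instances: the verb, the head of ARG0, and a coarse WordNet-style
# category for that head (all values invented for illustration).
train_X = [{"verb": "say", "arg0_head": "company", "arg0_cat": "group"},
           {"verb": "state", "arg0_head": "Rolls-Royce", "arg0_cat": "group"},
           {"verb": "say", "arg0_head": "spokesman", "arg0_cat": "person"},
           {"verb": "add", "arg0_head": "he", "arg0_cat": "person"}]
train_y = ["metonymic", "metonymic", "literal", "literal"]

clf = make_pipeline(DictVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(train_X, train_y)
# The 'group' category pushes the prediction towards the metonymic class.
print(clf.predict([{"verb": "say", "arg0_head": "agency", "arg0_cat": "group"}]))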
3.4 Markert and Nissim (2007)
Markert and Nissim (2007) report on the recent SemEval-2007 task on
Metonymy Resolution, in which five systems participated.
4. Maximum Entropy Modeling Toolkit for Python and C++ by Zhang Le, http://homepages.inf.ed.ac.uk/s0450736/maxent_toolkit.html [30 June, 2007]. The experiment was performed with 100 iterations of the GIS algorithm.
4 Answers to the Questions

4.1 From Section 1.1
What might be the entities indicated? Can we find literal paraphrases for
these sentences?
Kövecses (2002, p. 144) proposes: (1’) one of Shakespeare’s works, (2’)
defeat in war, (3’) the American government, (4’) baseball player.
4.2 From Section 1.2
Is the fruit for the tree a regularly exploited metonymic pattern in
your language?
[Examples: lime - the fruit or the tree5 ; but apple?; Examples from
German]
4.3 From Section 3.1.2
What are possible reasons for the increase in (overall) precision, using this
generalization method?
The examples in S2 consist of cases with highly predictive content word heads as (a) function words are not included in the
thesaurus and (b) unpredictive content word heads like “have”
or “be” are very frequent and normally already covered by hmr
(they are therefore members of S1). Precision on S2 is very high
(84%) and raises the overall precision on the set S. (Nissim and
Markert, 2003, p. 60)
Also, generalization deals with a much higher proportion of metonymies
than the basic algorithm. Whereas the basic algorithm classifies 583 instances out of which 80 (31.6%) are metonymies, generalization classifies
another 147 instances, among which 68 (46%) are metonymies. Cf. Figure 3.
5. a juicy round fruit which is sour like a lemon but smaller and green, or the small Asian tree on which this fruit grows (Cambridge Advanced Learner’s Dictionary, http://dictionary.cambridge.org/ [1 July, 2007])
Figure 3: Proportions of literal vs. metonymic usages classified (not necessarily correctly).
References

Apresjan, J. D. (1973). Regular polysemy. Linguistics, 142, 5–32.

Dobrovol’skij, D. and Piirainen, E. (2005). Figurative Language: Cross-cultural and Cross-linguistic Perspectives. Elsevier.

Gedigian, M., Bryant, J., Narayanan, S., and Ciric, B. (2006). Catching metaphors. In Proceedings of the 3rd Workshop on Scalable Natural Language Understanding, pages 41–48, New York City.

Kövecses, Z. (2002). Metaphor: A Practical Introduction. Oxford University Press, New York.

Lakoff, G. and Johnson, M. (1980). Metaphors We Live By. University of Chicago Press, Chicago.

Landauer, T. K. and Dumais, S. T. (1997). A solution to Plato’s problem: The latent semantic analysis theory of acquisition, induction and representation of knowledge. Psychological Review, 104, 211–240.

LDC (2005). ACE (Automatic Content Extraction) English Annotation Guidelines for Entities.

Lin, D. (1998). An information-theoretic definition of similarity. In Proceedings of the 15th International Conference on Machine Learning, pages 296–304, San Francisco, CA. Morgan Kaufmann.

Markert, K. and Nissim, M. (2003). Corpus-based metonymy analysis. Metaphor and Symbol, 18(3), 175–188.

Markert, K. and Nissim, M. (2006). Metonymic proper names: A corpus-based account. In A. Stefanowitsch and S. T. Gries, editors, Corpus-Based Approaches to Metaphor and Metonymy, pages 152–174. Mouton de Gruyter, Berlin and New York.

Markert, K. and Nissim, M. (2007). SemEval-2007 Task 08: Metonymy Resolution at SemEval-2007. In Proceedings of the 4th International Workshop on Semantic Evaluations (SemEval-2007), pages 36–41, Prague. ACL.

Nissim, M. and Markert, K. (2003). Syntactic features and word similarity for supervised metonymy resolution. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics, pages 56–63. ACL.

Nunberg, G. (1995). Transfers of meaning. Journal of Semantics, 12, 109–132.

Peirsman, Y. (2006). What’s in a name? The automatic recognition of metonymical location names. In Proceedings of the EACL-2006 Workshop on Making Sense of Sense: Bringing Psycholinguistics and Computational Linguistics Together, pages 25–32, Trento, Italy. ACL.

Schneider, N. (2007). A metonymy classifier. Manuscript.

Schütze, H. (1998). Automatic word sense discrimination. Computational Linguistics, 24(1), 97–123.

Snow, R., Jurafsky, D., and Ng, A. Y. (2006). Semantic taxonomy induction from heterogenous evidence. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, pages 801–808, Sydney, Australia. ACL.