JSAI (Japanese Society for Artificial Intelligence) SIG Technical Report, SIG-SWO-036-07

Extending DBpedia with List Structures in Wikipedia Articles

Satoshi Tsutsui, Takeshi Morita, Takahira Yamaguchi*
Keio University, Japan
* 3-14-1 Hiyoshi, Kohoku-ku, Yokohama, 223-8522, Japan
{s_tsutsui, t_morita, yamaguti}@ae.keio.ac.jp
Abstract: Ontologies are the basis of the Semantic Web. Owing to the cost of their construction and
maintenance, however, there is much interest in automating their construction. Wikipedia is considered a
promising source of knowledge because of its own characteristics. DBpedia extracts a large amount of
ontological information from Wikipedia. However, DBpedia focuses exclusively on infoboxes (i.e., tables
summarizing articles), and several works aim at extending DBpedia by using more information from
Wikipedia. This paper builds upon this line of work, and focuses on the section titles and list structure to
extend DBpedia. We develop an information extraction system using the list structure and extract more
than 20 million triples using section titles as predicates. This suggests that there is ample potential to
significantly expand the coverage of DBpedia.
1 Introduction

Ontologies are the core of the Semantic Web, but a large amount of manual work is required for their construction and maintenance. To reduce this workload, studies have been conducted on automatically constructing ontologies in various ways; this is known as "ontology learning." Because automatic construction from natural-language text remains difficult, Wikipedia is believed to be a promising source of knowledge [4]. Hence, several large-scale ontologies, such as DBpedia [1], YAGO [3], and KOG [6], are (semi-)automatically constructed from Wikipedia.

This paper focuses on DBpedia [1], which is a community effort to extract structured information from Wikipedia. This effort involves manually constructing ontologies with classes, properties, and hierarchies. Moreover, DBpedia automatically extracts instance-level information in subject-predicate-object form, i.e., as resource description framework (RDF) triples, from the tabular summary (or infobox) of an article. Because numerous synonymous properties are found in infoboxes, crowdsourcing efforts are used to map infobox properties to the DBpedia ontology. For example, Figure 1 shows a Wikipedia article for Citibank. Because the infobox indicates that the parent company is Citigroup, the triple <Citibank, Parent, Citigroup> is extracted. Citibank is easily disambiguated with a DBpedia instance, and it becomes a resource because it is the article itself. Citigroup is also disambiguated, because the infobox value contains a link to the Citigroup article. If no such link exists, the object is classified as a literal. For property disambiguation, the DBpedia Mappings Wiki (http://mappings.dbpedia.org/) is used to find the correct mapping; in this case, Parent is mapped to <http://dbpedia.org/ontology/parentCompany>. This example is described in Figure 2.

Figure 1: A Simplified Wikipedia Article for Citibank.
Figure 2: DBpedia Extraction Example

Though infoboxes are a highly structured source of information, they constitute a relatively minor part of the information contained in Wikipedia. Thus, several works aim at extending DBpedia using additional information from Wikipedia articles. Such research focuses on abstracts [18], chronology pages [19], listing pages [2], and cross-language information [20].

In this paper, we focus on the list structure in sections of articles to extend DBpedia. Although Paulheim and Ponzetto [2] exploited listing pages in Wikipedia, no study exists to our knowledge that utilizes the in-article list structure to extend DBpedia. Therefore, we propose a system that extracts RDF triples from this structure. For example, the Citibank article in Figure 1 contains the section Subsidiaries, and Citibank Canada is listed in this section. Using this, we can extract the triple <Citibank, Subsidiaries, Citibank Canada>. Citibank is easily disambiguated with a DBpedia instance, and Citibank Canada is transformed into a resource using heuristics (see section 3). The correct mapping for Subsidiaries is <http://dbpedia.org/ontology/subsidiary>. However, it is difficult to automatically find the target property in the DBpedia ontology because, to do so, semantics must be considered. Hence we extract triples using section titles as predicates and then discuss the challenges of mapping the section titles to the DBpedia ontology (see section 5). The example is briefly described in Figure 3.

Figure 3: List Structure Extraction
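For illustration, the following minimal Apache Jena sketch constructs the triple of this example. The property namespace below is a placeholder of ours, because section-title predicates are mapped to the DBpedia ontology only in the discussion of section 5; this is not the authors' actual extraction code.

import org.apache.jena.rdf.model.*;

public class ListTripleExample {
    // Placeholder namespace for section-title predicates; the paper does not
    // specify the URI scheme used for them.
    static final String PROP_NS = "http://example.org/wikilist/property/";
    static final String DBR = "http://dbpedia.org/resource/";

    public static void main(String[] args) {
        Model model = ModelFactory.createDefaultModel();

        // Subject: the article itself, already a DBpedia resource.
        Resource citibank = model.createResource(DBR + "Citibank");
        // Predicate: the section title "Subsidiaries", normalized to a property name.
        Property subsidiaries = model.createProperty(PROP_NS, "subsidiaries");
        // Object: the linked article "Citibank Canada", resolved to a DBpedia resource.
        Resource citibankCanada = model.createResource(DBR + "Citibank_Canada");

        model.add(citibank, subsidiaries, citibankCanada);
        model.write(System.out, "TURTLE");
    }
}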
We extracted 26 million triples with an accuracy of 81.5%. Of the correct triples, 93.9% constituted new information to DBpedia. Because DBpedia currently has 68 million infobox property triples (in DBpedia version 2014), our work offers the potential to significantly expand DBpedia. The triples we extracted are available as RDF files on our webpage (https://googledrive.com/host/0B046sNk0DhCDfml4OFFNS05leGtRbXQ0M0EtdTAxbFlSNXotaE5QZlZUaGUzXzBPSTBjbWM).
2 Related Work

2.1 Ontology Learning from Wikipedia
Wikipedia, the Web-based open encyclopedia, is favored
as a reliable source of information, owing to its unique
characteristics [4]. It is constantly updated and improved
by many contributors around the world. Consequently, it
can cover a wide range of topics, making it useful as a
source for a general-purpose ontology. It also provides
highly structured information, such as tables including
infoboxes, lists, and a category system, which makes it
more useful for extracting ontological information than
natural-language texts. For these reasons, previous
research in the field of ontology learning has targeted
Wikipedia. We describe four such efforts below.
DBpedia [1], which is a central interlinking hub in the Linked Open Data cloud [5], extracts triples from infoboxes and manually maps them to the DBpedia ontology through crowdsourcing efforts. YAGO [3] constructs an ontology by heuristically aligning Wikipedia categories with WordNet [7] classes and by manually mapping infobox attributes to properties in the ontology. Wu and Weld [6] develop a system called KOG that constructs an ontology by mapping infobox classes (i.e., template names) to WordNet classes with the help of machine learning techniques. These three works do not exploit the in-article list structure.
The Japanese Wikipedia Ontology (JWO) [23] extracts a variety of ontological information from Japanese Wikipedia. Even though it is available only in Japanese, it is related to our work because it uses the in-article list structure to extract triples, which inspired our work. However, its purpose is different: it aims not to extend DBpedia but to construct a rich general-purpose ontology from scratch. Hence JWO does not attempt to map the extracted triples to the DBpedia ontology. Also, it excludes less frequent section titles to maintain precision, whereas our work does not limit section titles by their frequency and attempts to extract as many triples as possible.
2.2 Information Extraction in Natural Language Processing
In the community of natural language processing,
(traditional) information extraction (IE) is an area of
research that attempts “to identify instances of a
particular prespecified class of entities, relationships and
events in natural language texts, and the extraction of the
relevant properties of the identified entities, relationships
or events” [8]. IE, and relation extraction in particular, is
related to ontology learning because many works extract
triples from natural-language texts. The extracted triples,
however, are merely three strings, e.g., <Citibank, Parent,
Citigroup>, and for the purpose of ontology learning they
should be disambiguated as described in Figure 2.
Recent IE has focused attention on minimizing
manual work, extracting without predefined relations,
and scalability to the Web. For example, a
state-of-the-art technique called Distant Supervision [9,
10] creates training data by heuristically aligning facts in
a knowledge base to the corresponding natural-language
text. This approach is promising in terms of reducing the
cost of generating training data, but the extracted
relations are limited to the source knowledge base.
Preemptive IE [11] does not specify the relations in
advance, but rather uses clustering twice, making
Web-scale extraction difficult. Reverb [13] and the
Wikipedia-based Open Extractor (WOE) [14] are not
limited to predefined relations, and as a result, they are
scalable for the Web. However, because they extract
relations in raw textual form, it is more difficult to
disambiguate them with ontologies. Finally, the
Never-Ending Language Learner (NELL) [12] is a
system that literally “never ends,” continuously learning
by bootstrapping from manually defined initial seeds.
However, NELL requires manual supervision to avoid
semantic drift.
IE in natural language processing can benefit
ontology learning from Wikipedia, and several existing
projects adopt this approach. SOFIE [17] extends Yago by extracting triples from natural-language documents and mapping them to the Yago ontology. Aprosio et al. [15] extend DBpedia using Distant Supervision. Dutta et al. [16] map NELL triples to the DBpedia ontology, using properties whose subject and object are the same in NELL and DBpedia. This work is related to our work in
terms of mapping properties outside DBpedia to DBpedia.
We cannot use their method, however, because our
extracted properties are raw textual section titles that are
not organized as comprehensively as they are in NELL.
3 Extraction Method
This section describes how we extracted triples from
Wikipedia. The method is heuristic, rule-based, and
recall-oriented. That is, it attempts to extract as many
triples as possible using heuristic rules. For example,
three triples are extracted from the Citibank article
shown in Figure 1: <Citibank, Subsidiaries, Citibank,
N.A.(National Association) - … >, <Citibank,
Subsidiaries, Citibank Canada>, and <Citibank,
Subsidiaries, Citibank Texas, N.A. - … >.
Figure 4 shows the overview of our extraction
method. The method has five extraction steps: Extract
List Structure, Select Article, Refine Section Titles,
Refine List Elements, and Transform to RDF. The first
step collects list structures (Article, Section, List) from
an article. The next three steps, which are described in detail later, refine these structures respectively and output intermediate structures. In the figure, each element of an intermediate structure is marked with * after refinement. The final step converts the refined structures to RDF triples. We implemented these steps in Java with Apache Jena (https://jena.apache.org) and the Java Wikipedia Library (JWPL, https://code.google.com/p/jwpl/).
Figure 4: Overview of the Extraction System
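The following skeleton is only an illustrative sketch of the pipeline shape; the class and method names are ours and do not correspond to the actual implementation classes, which are built on JWPL for parsing and Jena for RDF output.

import java.util.ArrayList;
import java.util.List;

// Illustrative skeleton of the five extraction steps.
public class ListExtractionPipeline {

    /** One (article, section title, list element) tuple taken from an article. */
    record ListEntry(String article, String sectionTitle, String element) {}

    static List<ListEntry> run(List<ListEntry> listStructures) {   // 1. Extract List Structure (input)
        List<ListEntry> refined = new ArrayList<>();
        for (ListEntry e : listStructures) {
            if (!selectArticle(e.article())) continue;              // 2. Select Article
            String title = refineSectionTitle(e.sectionTitle());    // 3. Refine Section Titles
            if (title == null) continue;
            refined.add(new ListEntry(e.article(), title,
                                      refineListElement(e.element()))); // 4. Refine List Elements
        }
        return refined;                                             // 5. Transform to RDF happens afterwards
    }

    // Placeholders for the heuristics described in the following paragraphs.
    static boolean selectArticle(String title)       { return true; }
    static String  refineSectionTitle(String title)  { return title; }
    static String  refineListElement(String element) { return element; }
}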
Select Articles. Some articles, such as the discussion
page and the disambiguation page, are excluded because
they are inappropriate for extracting triples in which the section title becomes the predicate and a list element becomes the object. For example, a disambiguation page helps users to locate the correct article when a single article title has multiple meanings. Although such pages use a list structure and often contain sections, the section titles are mostly unsuitable as predicates.
Moreover, DBpedia already mines the disambiguation
pages and extracts disambiguation triples.
In addition, listing pages are also undesirable for
extraction. For example, if we applied our method to the
page List of Japanese film directors (see Figure 5), the
extracted triple is <List of Japanese film directors, A,
Yutaka Abe>, which clearly does not contribute to
extending DBpedia. Thus, we exclude such pages
altogether. Rather, listing pages are useful for extracting
the type relation, and this has been studied previously in
[2]. Index pages (see Figure 5) are also excluded for this
reason.
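A possible sketch of this filtering step is shown below; the string tests only mirror the examples given above, and the actual rules of our implementation may differ.

import java.util.Locale;

// Illustrative filter for the Select Article step: skip pages whose triples
// would not extend DBpedia (discussion, disambiguation, listing, and index pages).
public class ArticleFilter {

    static boolean isExtractable(String title, boolean isDisambiguation, boolean isTalkPage) {
        String t = title.toLowerCase(Locale.ROOT);
        if (isTalkPage || isDisambiguation) return false;                        // discussion / disambiguation pages
        if (t.startsWith("list of ") || t.startsWith("lists of ")) return false; // listing pages
        if (t.startsWith("index of ")) return false;                             // index pages
        return true;
    }

    public static void main(String[] args) {
        System.out.println(isExtractable("Citibank", false, false));                        // true
        System.out.println(isExtractable("List of Japanese film directors", false, false)); // false
    }
}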
Figure 5: Listing and Index Pages

Refine Section Titles. The title of the section is examined in this phase. The section titles Reference(s) and Note(s) are excluded because the elements of those lists point to parts of the article's content and should not be extracted independently. The section External link(s) is also excluded because DBpedia already extracts it. Moreover, some section titles indicate that the section is a list and should be refined. For example, the section Track listing is often used in articles on musical works and should be converted to Track. We manually identified several such patterns and converted them to the appropriate names. The patterns were XX list(s), List(s) of XX, and XX Listing(s), and each of these is converted to XX.
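A minimal sketch of such a normalization is given below; the regular expressions are illustrative and may not cover every manually identified pattern.

import java.util.Set;
import java.util.regex.Pattern;

// Illustrative sketch of the Refine Section Titles step: drop unusable titles
// and normalize list-like titles such as "XX list", "List of XX", "XX listing".
public class SectionTitleRefiner {

    private static final Set<String> EXCLUDED =
        Set.of("reference", "references", "note", "notes", "external link", "external links");

    private static final Pattern LIST_OF = Pattern.compile("(?i)^lists? of (.+)$");
    private static final Pattern TRAILING_LIST = Pattern.compile("(?i)^(.+?) list(ing)?s?$");

    /** Returns the refined title, or null if the section should be skipped. */
    static String refine(String title) {
        String t = title.trim();
        if (EXCLUDED.contains(t.toLowerCase())) return null;
        var m1 = LIST_OF.matcher(t);
        if (m1.matches()) return m1.group(1);
        var m2 = TRAILING_LIST.matcher(t);
        if (m2.matches()) return m2.group(1);
        return t;
    }

    public static void main(String[] args) {
        System.out.println(refine("Track listing"));     // Track
        System.out.println(refine("List of episodes"));  // episodes
        System.out.println(refine("References"));        // null
    }
}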
Refine List Elements. Finally, each element in the list is processed to find the resource to which the element refers. We regard links to other articles as an indication of the target resource. Thus, if the element is a link to a resource, it immediately becomes the resource for the object of the extracted triple. We also identified several additional patterns that indicate a resource, as described below (a matching sketch follows the list). If the element does not match any of these, it is extracted as a literal (i.e., as a string).
• [Resource]
• [Resource] (Additional information)
• [Resource] - Description
• [Resource] (Additional information) Description
• Year - [Resource]
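The following sketch illustrates how an element could be matched against these patterns to isolate the resource part. The regular expression merely paraphrases the bullet list above and is not the exact rule set of our implementation; the sample inputs in main are likewise illustrative.

import java.util.Optional;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Illustrative matcher for the Refine List Elements step. "[[...]]" stands for
// a wiki link to another article; a match yields a resource, otherwise the
// element is kept as a literal.
public class ListElementRefiner {

    private static final Pattern RESOURCE = Pattern.compile(
        "^(?:\\d{3,4}\\s*-\\s*)?"                   // optional leading "Year - "
        + "\\[\\[([^\\]|]+)(?:\\|[^\\]]*)?\\]\\]"   // wiki link [[Resource]] or [[Resource|label]]
        + "(?:\\s*\\([^)]*\\))?"                    // optional "(Additional information)"
        + "(?:\\s.*)?$");                           // optional trailing description

    static Optional<String> asResource(String element) {
        Matcher m = RESOURCE.matcher(element.trim());
        return m.matches() ? Optional.of(m.group(1).trim()) : Optional.empty();
    }

    public static void main(String[] args) {
        System.out.println(asResource("[[Citibank Canada]]"));                 // Optional[Citibank Canada]
        System.out.println(asResource("[[Citibank Texas]] - retail banking")); // Optional[Citibank Texas]
        System.out.println(asResource("1992 - [[Hung Up and Dry]]"));          // Optional[Hung Up and Dry]
        System.out.println(asResource("snubbers all around"));                 // Optional.empty (literal)
    }
}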
In addition, as shown in Figure 6, ISBN numbers
are often found in lists to refer to publications. This
information is also extracted as a resource using RDF
Book Mashup [21], an application program interface that
converts ISBN numbers to resources. However, the URIs returned by the Book Mashup are not DBpedia resources, so
we performed this extraction independently from the
other triple extractions.
Figure 6: Example of a List Entry with an ISBN Number
4 Extraction Results
We extracted 26,764,662 triples with 350,456 different
properties (i.e., section titles). Of these, 7,262,043 were
triples whose objects are resources. The top five
properties are listed in Table 1. We also extracted
144,844 triples with 3,031 different properties by using
ISBN numbers. Because DBpedia has 68 million infobox
property triples (and 17 million whose objects are
resources), our work has the potential for greatly
expanding DBpedia.
We randomly sampled 200 triples from the
extracted triples, labeled each triple either correct or
incorrect, and checked whether the information is already
in DBpedia. Part of the samples is shown in Appendix
Table. The accuracy was 81.5% (using Wikipedia as the
ground truth), and 93.9% of the correct triples were new
information to DBpedia. Of the erroneous triples, 91.9%
of them were due to an incorrect property (i.e., section
title), and 8.1% were due to an object incorrectly
extracted as a resource (Incorrect O). The causes of the
incorrect properties fall into three types: a list structure in free text (35.1%), a sub-list structure (35.1%), and a latent listing page (21.6%). A list structure in free text (List in Text) occurs when the list structure is used merely as part of a natural-language description. A sub-list structure (Sub-List) occurs when the list is used only in a subsection or in a nested manner. A latent listing page (Latent List) is a page with a structure similar to that in Figure 5 but without a title such as List of XX. These four error types (Incorrect O, List in Text, Sub-List, and Latent List) are shown with real examples in the Appendix Table.
Table 1: Top Five Properties.
# | predicate (section title) | # of triples | # of triples with resource object
1 | see_also     | 2,302,172 | 2,155,054
2 | track        | 1,202,135 | 30,004
3 | cast         | 923,793   | 197,039
4 | discography  | 879,276   | 146,719
5 | personnel    | 624,830   | 189,591

5 Discussion on Mapping Section Titles to DBpedia Properties
We extracted a number of triples using section titles as
predicates, but in order to extend DBpedia more
effectively, it is better to use DBpedia properties as
predicates. In this section, we discuss the challenges of automatically aligning section titles in Wikipedia articles with properties in the DBpedia ontology.
Considering the fact that DBpedia ontology has
2,795 properties and that our extracted triples have
350,456 properties, it is clear that section titles contain
synonymous properties corresponding to a given
DBpedia property. Hence, we want to automatically
locate a set of section titles corresponding to a property
in DBpedia ontology. For example, the set corresponding
to <http://dbpedia.org/ontology/subsidiary> includes
subsidiaries, subsidiary, major subsidiaries, affiliates,
affiliate companies, spin-off companies, etc. It is difficult
to automatically find these section titles.
The first approach we can think of is to exploit the overlap of instances between two predicates, one taken from the section titles and one from the DBpedia properties; this is the approach used in the ontology matching community to align relations between two ontologies [22]. However, it is not effective in our case because section titles are not as well organized as the properties of an ontology.

For example, the top ten section titles whose subject and object pairs overlap with those of <http://dbpedia.org/ontology/director> are shown in Table 2, together with the number of overlapping subject-object pairs. The table indicates that most of the selected section titles do not correspond to the property, because the property and the section titles share few or no resources as subject and object. It also suggests that the use of "and" in section titles can be a problem, especially when it concatenates two distinct vocabularies, because mapping such a title to an ontological property requires identifying which of the two vocabularies each object corresponds to. For example, if a person's article has a section titled "daughter and son" that lists people, then, in order to map the resulting triples to daughter or son in the DBpedia ontology, we need to identify whether each person is a daughter or a son, which is difficult to do automatically.

Table 2: Top ten section titles that share subject and object with the DBpedia property director.
# | section title  | # of overlapping pairs | % of all pairs
1 | cast           | 1,163 | 75.86%
2 | see_also       | 107   | 6.98%
3 | crew           | 33    | 2.15%
4 | personnel      | 27    | 1.76%
5 | filmmakers     | 21    | 1.37%
6 | cast_and_crew  | 17    | 1.11%
7 | credits        | 15    | 0.98%
8 | cast_and_roles | 15    | 0.98%
9 | production     | 11    | 0.72%
10 | voice_cast    | 8     | 0.52%
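The overlap counts above can be computed straightforwardly once both triple sets are loaded. The following Jena sketch assumes the DBpedia triples and our extracted triples are available as two models (model loading is omitted); it is an illustration, not the code used to produce Table 2.

import org.apache.jena.rdf.model.*;
import java.util.HashMap;
import java.util.Map;

// Illustrative overlap count: for every <s, dbo:director, o> in DBpedia, count
// which section-title predicates also connect the same s and o in our model.
public class PredicateOverlap {

    static Map<String, Integer> overlapWith(Model dbpedia, Model extracted, Property dboProperty) {
        Map<String, Integer> counts = new HashMap<>();
        StmtIterator it = dbpedia.listStatements(null, dboProperty, (RDFNode) null);
        while (it.hasNext()) {
            Statement st = it.next();
            // All statements in the extracted model with the same subject and object.
            StmtIterator shared = extracted.listStatements(st.getSubject(), null, st.getObject());
            while (shared.hasNext()) {
                String sectionTitle = shared.next().getPredicate().getLocalName();
                counts.merge(sectionTitle, 1, Integer::sum);
            }
        }
        return counts;
    }
}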
Another way to find similar section titles for a given DBpedia property is to use superficial features, such as string matches, synonym matches, part-of-speech (POS) tags, and syntax. However, it is still difficult to perform the mapping fully automatically, because we have to deal with word sense disambiguation in order to find synonyms.
For example, <http://dbpedia.org/ontology/subsidiary>
has the set of corresponding section titles: subsidiaries,
subsidiary, major subsidiaries, affiliates, affiliate
companies, spin-off companies, etc. To find all of them,
we need to disambiguate the word subsidiary, which has
two synsets as a noun in WordNet ([subordinate,
subsidiary, underling, foot soldier], [subsidiary company,
subsidiary]). Automatically selecting the correct synset is
difficult. The story is even more complicated when the
DBpedia property has multiple words, or when it uses
more ambiguous words such as work.
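As an illustration, the naive superficial matcher sketched below (our own illustrative code, not part of the system) captures subsidiaries and major subsidiaries but misses affiliates and spin-off companies, which is exactly where synonym lookup and word sense disambiguation become necessary.

import java.util.Locale;

// Naive superficial matcher: compares a DBpedia property's local name with a
// section title after lower-casing and crude singular/plural normalization.
// It cannot find synonyms such as "affiliates".
public class SuperficialMatcher {

    static String normalize(String s) {
        String t = s.toLowerCase(Locale.ROOT).replace('_', ' ').trim();
        return t.endsWith("ies") ? t.substring(0, t.length() - 3) + "y"
             : t.endsWith("s")   ? t.substring(0, t.length() - 1)
             : t;
    }

    static boolean matches(String dbpediaLocalName, String sectionTitle) {
        String p = normalize(dbpediaLocalName);
        String t = normalize(sectionTitle);
        return t.equals(p) || t.endsWith(" " + p);
    }

    public static void main(String[] args) {
        System.out.println(matches("subsidiary", "Subsidiaries"));       // true
        System.out.println(matches("subsidiary", "Major subsidiaries")); // true
        System.out.println(matches("subsidiary", "Affiliates"));         // false
    }
}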
6 Conclusion and Future Work

We demonstrated the possibility of using the in-article list structure for extending DBpedia, whose triples are almost exclusively derived from infoboxes. Whereas there are several proposals for extending DBpedia by using Wikipedia information other than that contained in the infobox, this is the first study to our knowledge that utilizes the in-article list structure for extending DBpedia. Using section titles as predicates, we extracted more than 20 million triples with an accuracy of 81.5%. Future work will be devoted to the automatic mapping between section titles and DBpedia properties, as well as to improving the accuracy of the extracted triples.

References

1. Lehmann, J., Isele, R., Jakob, M., Jentzsch, A., Kontokostas, D., Mendes, P. N., Hellmann, S., Morsey, M., van Kleef, P., Auer, S., & Bizer, C. DBpedia - A Large-scale, Multilingual Knowledge Base Extracted from Wikipedia. Semantic Web Journal, Vol. 6, No. 2, pp. 167-195 (2015).
2. Paulheim, H., & Ponzetto, S. P. Extending DBpedia with Wikipedia List Pages. In NLP & DBpedia Workshop at the 12th International Semantic Web Conference (ISWC) (2013).
3. Fabian, M. S., Gjergji, K., & Gerhard, W. YAGO: A core of semantic knowledge unifying WordNet and Wikipedia. In Proceedings of the 16th International World Wide Web Conference (WWW), pp. 697-706 (2007).
4. Nakayama, K., Hara, T., & Nishio, S. Wikipedia Mining for an Association Web Thesaurus Construction. In Proceedings of the International Conference on Web Information Systems Engineering (WISE), pp. 322-334 (2007).
5. Bizer, C., Heath, T., & Berners-Lee, T. Linked Data - The Story So Far. International Journal on Semantic Web and Information Systems, Vol. 5, No. 3, pp. 1-22 (2009).
6. Wu, F., & Weld, D. S. Automatically refining the Wikipedia infobox ontology. In Proceedings of the 17th International Conference on World Wide Web (WWW), pp. 635-644 (2008).
7. Miller, G. A. WordNet: a lexical database for English. Communications of the ACM, Vol. 38, No. 11, pp. 39-41 (1995).
8. Piskorski, J., & Yangarber, R. Information extraction: Past, present and future. In Poibeau, T., Saggion, H., Piskorski, J., & Yangarber, R. (Eds.), Multi-source, Multilingual Information Extraction and Summarization, pp. 23-49 (2013).
9. Mintz, M., Bills, S., Snow, R., & Jurafsky, D. Distant supervision for relation extraction without labeled data. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, Vol. 2, pp. 1003-1011 (2009).
10. Hoffmann, R., Zhang, C., Ling, X., Zettlemoyer, L., & Weld, D. S. Knowledge-based weak supervision for information extraction of overlapping relations. In Proceedings of the 49th Annual Meeting of the ACL: Human Language Technologies, Vol. 1, pp. 541-550 (2011).
11. Shinyama, Y., & Sekine, S. Preemptive information extraction using unrestricted relation discovery. In Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics (HLT-NAACL), pp. 304-311 (2006).
12. Carlson, A., Betteridge, J., Kisiel, B., Settles, B., Hruschka Jr., E. R., & Mitchell, T. M. Toward an Architecture for Never-Ending Language Learning. In Proceedings of the Conference on Artificial Intelligence (AAAI), Vol. 5 (2010).
13. Fader, A., Soderland, S., & Etzioni, O. Identifying relations for open information extraction. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1535-1545 (2011).
14. Wu, F., & Weld, D. S. Open information extraction using Wikipedia. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics (ACL), pp. 118-127 (2010).
15. Aprosio, A. P., Giuliano, C., & Lavelli, A. Extending the Coverage of DBpedia Properties using Distant Supervision over Wikipedia. In NLP & DBpedia Workshop at the 12th International Semantic Web Conference (ISWC) (2013).
16. Dutta, A., Meilicke, C., & Stuckenschmidt, H. Semantifying Triples from Open Information Extraction Systems. In Proceedings of the 7th European Starting AI Researcher Symposium (Frontiers in Artificial Intelligence and Applications), pp. 111-120 (2014).
17. Suchanek, F. M., Sozio, M., & Weikum, G. SOFIE: a self-organizing framework for information extraction. In Proceedings of the 18th International Conference on World Wide Web (WWW), pp. 631-640 (2009).
18. Gangemi, A., Nuzzolese, A. G., Presutti, V., Draicchio, F., Musetti, A., & Ciancarini, P. Automatic typing of DBpedia entities. In The Semantic Web - ISWC 2012, pp. 65-81 (2012).
19. Hienert, D., Wegener, D., & Paulheim, H. Automatic classification and relationship extraction for multi-lingual and multi-granular events from Wikipedia. In Detection, Representation, and Exploitation of Events in the Semantic Web (DeRiVE) Workshop at the 11th International Semantic Web Conference (ISWC) (2012).
20. Aprosio, A. P., Giuliano, C., & Lavelli, A. Automatic expansion of DBpedia exploiting Wikipedia cross-language information. In The Semantic Web: Semantics and Big Data (ESWC 2013 Proceedings), pp. 397-411 (2013).
21. Bizer, C., Cyganiak, R., & Gauß, T. The RDF Book Mashup: From Web APIs to a Web of Data. In Scripting for the Semantic Web Workshop at the ESWC (2007).
22. Suchanek, F. M., Abiteboul, S., & Senellart, P. PARIS: Probabilistic Alignment of Relations, Instances, and Schema. In Proceedings of the VLDB Endowment, Vol. 5, No. 3, pp. 157-168 (2011).
23. Tamagawa, S., Sakurai, S., Tejima, T., Morita, T., Izumi, N., & Yamaguchi, T. Learning a Large Scale of Ontology from Japanese Wikipedia. In IEEE/WIC/ACM International Joint Conference on Web Intelligence and Intelligent Agent Technology (WI-IAT), Vol. 1, pp. 279-286 (2010).
Appendix Table
The table shows 60 samples from the 200 randomly sampled triples used for evaluation. It includes all the incorrect triples, all the correct triples for which DBpedia already has the same information, and 17 random samples from the correct triples that are new information to DBpedia. The table header is described as follows.
#: The number of the triple.
Subject: The subject of the triple.
Predicate: The predicate of the triple.
Object: The object of the triple.
C: The value is 1 if the triple is correct, and 0 if it is incorrect.
DBp: The value is 1 if DBpedia already has the same information as the triple, and 0 if it does not.
Reason: The reason why the triple is incorrect. The four types of reason are described in section 4. If the triple is correct, this value is left blank.
#
Subject
Predicate
Object
C DBp
Reason

1
dbpedia:May_2010_in_sports
days_of_the_month
1–5
0 0
Latent List

2
dbpedia:Julio_Iglesias_discography
compilation_albums
US: 2 Platinum
0 0
Latent List

3
dbpedia:2001_Oakland_Raiders_season
personnel
99 Josh Taves DE
0 0
Latent List

4
dbpedia:Elcar_Seven_Passenger_Sedan-8-80_specifications_(1926_data)
elcar_seven_passenger_sedan-8-80
snubbers all around
0 0
Latent List

5
dbpedia:The_List_of_Adrian_Messenger
production
There were several screenplay drafts written—one by Vertigo co-writer Alec Coppel—prior to the final draft by Anthony Veiller, who receives sole screen credit.
0 0
Latent List

6
dbpedia:Strength_athletics_in_Canada
north_america's_strongest_man
Results courtesy of David Horne's World of Grip: http://www.davidhorne-gripmaster.com/strongmanresults.html
0 0
Latent List
7
dbpedia:October_2010_in_sports
days_of_the_month
100m butterfly: Geoff Huegill 51.69 (GR) Ryan Pini 52.50 Antony James
52.50
0 0
Latent List
8
dbpedia:1930s
people
dbpedia:Frank_Sinatra
0 0
Latent List
9
dbpedia:Fire_It_Up_(Thousand_Foot_Krutch_song)
promotion
Fire It Up has been used by the MLB
0 0
List in Text
10
dbpedia:Cowra_breakout
breakout
The actions of the Australian garrison in resisting the attack averted a
greater loss of life, and firing ceased as soon as they regained control;
0 0
List in Text
11
dbpedia:Richpal_Singh_Mirdha
biography
MLA – 1993 (Indian National Congress),
0 0
List in Text
12
dbpedia:Academic_freedom
controversies
William Shockley was concerned about relatively high reproductive rates among people of African descent, because he believed that genetics doomed black people to be intellectually inferior to white people. He was strongly criticized for this stand, which raised some concerns about whether criticism of unpopular views of racial differences suppressed academic freedom.
0 0
List in Text
13
dbpedia:Word_play
examples
dbpedia:Marilyn_Manson
0 0
List in Text
14
dbpedia:DuMont_Television_Network
history
Down You Go, a popular panel show
0 0
List in Text
15
dbpedia:Electronic_Battleship:_Advanced_Mission
fleet_and_weapons_systems
Each player also has an anti-aircraft gun that is not housed on any of the ships. This weapon is used to shoot down enemy reconnaissance aircraft/attack squadrons that may be flying over the player's ships' airspace.
0 0
List in Text

16
dbpedia:Peninsula_Campaign_Confederate_Order_of_Battle
army_of_northern_virginia
8th South Carolina: Col John W. Henagan
0 0
List in Text
17
dbpedia:University_of_Zakho
faculty_of_science
Mathematics
0 0
List in Text
18
dbpedia:Health_and_Social_Care_Act_2012
background
changing the emphasis of measurement to clinical outcomes
0 0
List in Text
19
dbpedia:Alvin_Chau
programs
Fruko's Boogaloo
0 0
List in Text
20
dbpedia:Richard_Summerbell
research_in_mycology
Acremonium exuviarum, from shed skin of lizard
0 0
List in Text
21
dbpedia:East_Kolkata
it_hub-sector_v
SkyTech
0 0
List in Text
22
dbpedia:Death_and_the_Daleks
cast
dbpedia:List_of_Bernice_Summerfield_characters#S
0 0
Incorrect O
23
dbpedia:Christopher_Jaymes
filmography
dbpedia:The_Feed
0 0
Incorrect O
24
dbpedia:August_3
births
dbpedia:1953
0 0
Incorrect O
25
dbpedia:Haplogroup_M_(mtDNA)
subclades
M30c1a1
0 0
Sub-List
26
dbpedia:Live_Over_Europe_(DVD)
disc_2
"Love Gun"
0 0
Sub-List
27
dbpedia:Sicilian_Mafia
rituals_and_codes_of_conduct
Always being available for Cosa Nostra is a duty - even if your wife is about to give birth.
0 0
Sub-List
28
dbpedia:Renville_County,_North_Dakota
communities
Prosperity
0 0
Sub-List
29
dbpedia:Center_Township,_Starke_County,_Indiana
geography
Indian Hill at
0 0
Sub-List
30
dbpedia:Center_Township,_Starke_County,_Indiana
csy_models
Ballast/Disp: 0.32 (shoal), 0.36 (deep)
0 0
Sub-List
31
dbpedia:FC_Carl_Zeiss_Jena
honours
dbpedia:Gauliga_Mitte
0 0
Sub-List
32
dbpedia:Simpals
the_history
Simpals creates the online social platform Yes.md, which became some kind
of symbiosis between the online games and nowadays social networks.
0 0
Sub-List
33
dbpedia:Greg_Mortenson
recognition
Wittenberg University (OH) 2010
0 0
Sub-List
34
dbpedia:Frontbench_Team_of_Menzies_Campbell
initial_team
Transport - Lord Bradshaw
0 0
Sub-List
35
dbpedia:Giles_Lewin
discography
- Hung Up and Dry (1992)
0 0
Sub-List
36
dbpedia:Invoice
invoice
(12) in the case of the supply of a new means of transport made in accordance with the conditions specified in Article 138(1) and (2)(a), the characteristics as identified in point (b) of Article 2(2);
0 0
Sub-List

37
dbpedia:Electricity_sector_in_the_Dominican_Republic
responsibilities_in_the_electricity_sector
50% of the South Distribution Company, EdeSur; and
0 0
Sub-List
38
dbpedia:Solar_Pons
solar_pons_books
dbpedia:The_Final_Adventures_of_Solar_Pons
1 1
39
dbpedia:Aadmi_(1993_film)
cast
Mithun Chakraborty ...Vijay M. Srivastav
1 1
40
dbpedia:Freeport,_New_York
notable_residents
Jay Hieron, professional mixed martial arts fighter and IFL welterweight
champion
1 1
41
dbpedia:Lenny_Hambro
discography
Let Me Off Uptown (1996; Drive Archives) Gene Krupa and His Orchestra Featuring Roy Eldridge, Don Fagerquist, Dolores Hawkins
1 1
42
dbpedia:Geoffrey_Simpson
filmography
dbpedia:Till_There_Was_You_(1990_film)
1 1
43
dbpedia:Cornershop
members
Ben Ayres – guitars, keyboards (1991–present)
1 1
44
dbpedia:Tsitana
species
Tsitana dicksoni Evans, 1955 – Dickson's Sylph
1 1
45
dbpedia:Valleys_of_Neptune_(song)
personnel
dbpedia:Jimi_Hendrix
1 1
46
dbpedia:Universal_Syncopations_II
personnel
Vesna Vasko-Caceres &mdash; voice (track 8)
1 1
47
dbpedia:Ken_Utsui
selected_filmography
dbpedia:Saimin_(film)
1 1
48
dbpedia:Luca_Barbarossa
albums_discography
1996: Sotto lo stesso cielo
1 0
49
dbpedia:Diocese_of_Ossory
bishops_of_ossory
St. Muccine (Feast date: March 4)
1 0
50
dbpedia:Morrow_(surname)
people_with_the_surname_morrow
Byron Morrow (1911-2006), US television and film actor
1 0
51
dbpedia:Sargasso_Records
artists
dbpedia:Vinko_Globokar
1 0
52
dbpedia:Los_Angeles_Baptist_High_School
athletics
Boys Soccer 1980 (Undefeated Champions), 1983, 1984, 1985, 1986, 1990 and 2009 Alpha League Champions
1 0
53
dbpedia:Antonio_J._Vicens
awards_and_decorations
Army Reserve Component Overseas Training Ribbon
1 0
54
dbpedia:Google_Checkout
see_also
dbpedia:Online_banking
1 0
55
dbpedia:Structuralism_(philosophy_of_mathematics)
bibliography
Shapiro, Stewart (1997), Philosophy of Mathematics: Structure and Ontology, New York, Oxford University Press. ISBN 0195139305
1 0
56
dbpedia:Photograph_(Def_Leppard_song)
track
"Bringin' On the Heartbreak"
1 0
57
dbpedia:Graduate_School_of_Chinese_Academy_of_Social_Sciences
notable_alumni
dbpedia:Niu_Renliang
1 0
58
dbpedia:Sender_Films
recent_work
Reel Rock Film Tour 2011, part of an annual compilation of adventure film shorts that is co-produced by Sender Films and Big UP Productions, featured a short film called Sketchy Andy. Andy Lewis, or Sketchy Andy, is a professional slackliner, highliner, and BASE jumper. He was recently featured in the Super Bowl XLVI halftime show alongside Madonna. His appearance created media attention not only towards himself and his Reel Rock film, but also to the obscure sport of slacklining.
1 0
59
dbpedia:Carlton-Browne_of_the_F.O.
cast
Marie Lohr as Lady Carlton-Browne
1 0
60
dbpedia:Orbit_Express_Airlines
destinations
dbpedia:London_Luton_Airport
1 0
61
dbpedia:Leonardo_Corbucci
filmography
George Lucas 0514 (2006 short)
1 0
62
dbpedia:Labina_Mitevska
filmography
Prevrteno (2007) .... Woman in White
1 0
63
dbpedia:Erreà
team_sponsorships
dbpedia:Diósgyőri_VTK
1 0
64
dbpedia:AS_Béziers_Hérault
notable_former_players
dbpedia:Yoan_Audrin
1 0
65
dbpedia:The_Rhythm_of_the_Saints
personnel
dbpedia:J.J._Cale
1 0