Lecture 2, Part 1

Information Retrieval
ME-E2200 5 credits
Introduction to information retrieval, Boolean retrieval,
indexing
Antti Ukkonen
[email protected]
Based on earlier slides by Tuukka Ruotsalo. Some slides are based on materials by Ray
Larson. Many slides are based on materials by Hinrich Schütze and Christina Lioma.
Berry-Picking Model
A sketch of a searcher… “moving through many actions towards a
general goal of satisfactory completion of research related to an
information need.” (Bates 89)
[Figure: the searcher's path through a sequence of evolving queries Q0, Q1, Q2, Q3, Q4, Q5]
Supporting interactive retrieval via the "berry-picking model" can be very
difficult. Let's start with a simpler approach.
Boolean Retrieval
Unstructured data in 1650: Shakespeare
Boolean retrieval
▪The Boolean model is arguably the simplest model to base an information
retrieval system on.
▪Queries are Boolean expressions, e.g., CAESAR AND
BRUTUS
▪The search engine returns all documents that satisfy the Boolean expression.
Unstructured data
▪Which plays of Shakespeare contain the words BRUTUS AND CAESAR,
but not CALPURNIA?
▪One could grep all of Shakespeare’s plays for BRUTUS and CAESAR,
then strip out lines containing CALPURNIA
▪Why is grep not the solution?
▪Slow (for large collections)
▪grep is line-oriented, IR is document-oriented
▪"NOT CALPURNIA" is "post-filtering"
▪Other operations (e.g., find the word ROMANS near COUNTRYMAN) not feasible
Inverted index
Term-document incidence matrix
             Anthony and  Julius   The      Hamlet   Othello   Macbeth  ...
             Cleopatra    Caesar   Tempest
ANTHONY           1          1        0        0         0        1
BRUTUS            1          1        0        1         0        0
CAESAR            1          1        0        1         1        1
CALPURNIA         0          1        0        0         0        0
CLEOPATRA         1          0        0        0         0        0
MERCY             1          0        1        1         1        1
WORSER            1          0        1        1         1        0
...
Entry is 1 if term occurs. Example: CALPURNIA occurs in Julius
Caesar. Entry is 0 if term doesn't occur. Example: CALPURNIA
doesn't occur in The Tempest.
Incidence vectors
▪So we have a 0/1 vector for each term.
▪To answer the query BRUTUS AND
CAESAR AND NOT CALPURNIA:
▪Take the vectors for BRUTUS, CAESAR, and CALPURNIA
▪Complement the vector of CALPURNIA
▪Do a (bitwise) AND on the three vectors
▪110100 AND 110111 AND 101111 = 100100
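A minimal Python sketch of this computation, with the documents in the column order of the incidence matrix above:

```python
# 0/1 incidence vectors over the six plays, in the column order of the matrix:
# Anthony and Cleopatra, Julius Caesar, The Tempest, Hamlet, Othello, Macbeth.
brutus    = [1, 1, 0, 1, 0, 0]   # 110100
caesar    = [1, 1, 0, 1, 1, 1]   # 110111
calpurnia = [0, 1, 0, 0, 0, 0]   # 010000

# Complement CALPURNIA, then AND the three vectors bitwise.
answer = [b & c & (1 - p) for b, c, p in zip(brutus, caesar, calpurnia)]
print(answer)  # [1, 0, 0, 1, 0, 0] -> Anthony and Cleopatra, Hamlet
```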
0/1 vector for BRUTUS
             Anthony and  Julius   The      Hamlet   Othello   Macbeth  ...
             Cleopatra    Caesar   Tempest
ANTHONY           1          1        0        0         0        1
BRUTUS            1          1        0        1         0        0
CAESAR            1          1        0        1         1        1
CALPURNIA         0          1        0        0         0        0
CLEOPATRA         1          0        0        0         0        0
MERCY             1          0        1        1         1        1
WORSER            1          0        1        1         1        0
...
result:           1          0        0        1         0        0
Bigger collections
▪Consider N = 10^6 documents, each with about 1000 tokens
▪⇒ total of 10^9 tokens
▪On average 6 bytes per token, including spaces and punctuation
▪⇒ size of document collection is about 6 · 10^9 bytes = 6 GB
▪Assume there are M = 500,000 distinct terms in the collection
▪(Notice that we are making a term/token distinction.)
Can’t build the incidence matrix
▪The matrix would have M × N = 500,000 × 10^6 = half a trillion 0s and 1s.
▪But the matrix has no more than one billion 1s.
▪Matrix is extremely sparse.
▪What is a better representation?
▪We only record the 1s.
Inverted Index
For each term t, we store a list of all documents that contain t.
[Figure: inverted index with a dictionary of terms, each pointing to its postings list]
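As a rough sketch, such an index can be represented in Python as a dictionary mapping each term to its sorted postings list of docIDs; the lists below are the example postings used in the query-processing slides later on:

```python
# Dictionary (vocabulary) -> postings (sorted docID lists).
inverted_index = {
    "brutus":    [2, 4, 8, 16, 32, 64, 128],
    "caesar":    [1, 2, 3, 5, 8, 13, 21, 34],
    "calpurnia": [13, 16],
}

def postings(term):
    """Return the postings list for a term (empty if the term is unknown)."""
    return inverted_index.get(term, [])
```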
Inverted index construction
Documents to be indexed: Friends, Romans, countrymen.
  ↓ Tokenizer
Token stream: Friends | Romans | Countrymen
  ↓ Linguistic modules
Modified tokens: friend | roman | countryman
  ↓ Indexer
Inverted index:
  friend → 2 → 4
  roman → 1 → 2
  countryman → 13 → 16
Initial stages of text processing
• Tokenization
– Cut character sequence into word tokens
• Deal with “John’s”, a state-of-the-art solution
• Normalization
– Map text and query term to same form
• You want U.S.A. and USA to match
• Stemming
– We may wish different forms of a root to match
• authorize, authorization
• Stop words
– We may omit very common words (or not)
• the, a, to, of
Tokenizing and preprocessing
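A minimal Python sketch of the first two stages, assuming simple word-character tokenization and lowercasing only (stemming and stop-word removal are omitted here):

```python
import re

def tokenize(text):
    """Cut a character sequence into word tokens."""
    return re.findall(r"[A-Za-z']+", text)

def normalize(tokens):
    """Map tokens to a canonical form (here simply lowercase)."""
    return [tok.lower() for tok in tokens]

print(normalize(tokenize("Friends, Romans, countrymen.")))
# ['friends', 'romans', 'countrymen']
```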
Generate postings
Sort postings
Create postings lists, determine document
frequency
Split the result into dictionary and postings
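A rough Python sketch of these indexer steps (generate term-docID pairs, sort them, group them into postings lists, record the document frequency, and split into dictionary and postings), assuming documents have already been tokenized and normalized; the two toy documents are illustrative:

```python
from collections import defaultdict

def build_index(docs):
    """docs: dict mapping docID -> list of normalized tokens."""
    # 1. Generate (term, docID) postings.
    pairs = [(term, doc_id) for doc_id, tokens in docs.items() for term in tokens]
    # 2. Sort postings by term, then by docID.
    pairs.sort()
    # 3. Group into postings lists; duplicates within a document are collapsed.
    plists = defaultdict(list)
    for term, doc_id in pairs:
        if not plists[term] or plists[term][-1] != doc_id:
            plists[term].append(doc_id)
    # 4. Split into dictionary (term -> document frequency) and postings lists.
    dictionary = {term: len(plist) for term, plist in plists.items()}
    return dictionary, dict(plists)

docs = {1: ["i", "did", "enact", "julius", "caesar"],
        2: ["so", "let", "it", "be", "with", "caesar"]}
dictionary, postings = build_index(docs)
print(dictionary["caesar"], postings["caesar"])  # 2 [1, 2]
```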
Processing Boolean queries
Boolean queries
▪The Boolean retrieval model can answer any query that is a Boolean
expression.
▪Boolean queries are queries that use AND, OR and NOT
to join query terms.
▪Views each document as a set of terms.
▪Is precise: Document matches condition or not.
▪Primary commercial retrieval tool for 3 decades
▪Many professional searchers (e.g., lawyers) still like Boolean queries.
▪You know exactly what you are getting.
▪Many search systems you use are also Boolean: e.g. email search.
The index we just built
• How do we process a query?
– Later: what kinds of queries can we process?
Query processing: AND
• Consider processing the query:
Brutus AND Caesar
– Locate Brutus in the Dictionary;
• Retrieve its postings.
– Locate Caesar in the Dictionary;
• Retrieve its postings.
– “Merge” the two postings (intersect the document sets):
Brutus → 2 → 4 → 8 → 16 → 32 → 64 → 128
Caesar → 1 → 2 → 3 → 5 → 8 → 13 → 21 → 34
The merge
• Walk through the two postings simultaneously, in time
linear in the total number of postings entries
Brutus → 2 → 4 → 8 → 16 → 32 → 64 → 128
Caesar → 1 → 2 → 3 → 5 → 8 → 13 → 21 → 34
If the list lengths are x and y, the merge takes O(x+y)
operations.
Crucial: postings sorted by docID.
Intersecting two postings lists
(a “merge” algorithm)
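The merge can be sketched in Python as the standard two-pointer walk over two docID-sorted postings lists, which runs in O(x+y) time:

```python
def intersect(p1, p2):
    """Intersect two postings lists sorted by docID."""
    answer = []
    i, j = 0, 0
    while i < len(p1) and j < len(p2):
        if p1[i] == p2[j]:
            answer.append(p1[i])
            i += 1
            j += 1
        elif p1[i] < p2[j]:
            i += 1
        else:
            j += 1
    return answer

brutus = [2, 4, 8, 16, 32, 64, 128]
caesar = [1, 2, 3, 5, 8, 13, 21, 34]
print(intersect(brutus, caesar))  # [2, 8]
```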
Merging
What about an arbitrary Boolean formula?
(Brutus OR Caesar) AND NOT
(Antony OR Cleopatra)
• Can we always merge in “linear” time?
– Linear in what?
• Can we do better?
Query optimization
• What is the best order for query processing?
• Consider a query that is an AND of n terms.
• For each of the n terms, get its postings, then AND
them together.
Brutus    → 2 → 4 → 8 → 16 → 32 → 64 → 128
Caesar    → 1 → 2 → 3 → 5 → 8 → 16 → 21 → 34
Calpurnia → 13 → 16
Query: Brutus AND Calpurnia AND Caesar
Query optimization example
• Process in order of increasing freq:
– start with smallest set, then keep cutting further.
This is why we kept the document frequency in the dictionary.
Brutus    → 2 → 4 → 8 → 16 → 32 → 64 → 128
Caesar    → 1 → 2 → 3 → 5 → 8 → 16 → 21 → 34
Calpurnia → 13 → 16
Execute the query as (Calpurnia AND Brutus) AND Caesar.
More general optimization
• e.g., (madding OR crowd) AND (ignoble OR
strife)
• Get doc. freq.’s for all terms.
• Estimate the size of each OR by the sum of
its doc. freq.’s (conservative).
• Process in increasing order of OR sizes.
Commercially successful Boolean retrieval: Westlaw
▪Largest commercial legal search service in terms of the number of paying
subscribers
▪Over half a million subscribers performing millions of searches a day over
tens of terabytes of text data
▪The service was started in 1975.
▪In 2005, Boolean search (called “Terms and Connectors” by Westlaw) was
still the default, and used by a large percentage of users . . .
▪. . . although ranked retrieval has been available since 1992.
Query processing
Query optimization
▪Consider a query that is an AND of n terms, n > 2
▪For each of the terms, get its postings list, then AND them together
▪Example query: BRUTUS AND
CALPURNIA AND CAESAR
▪What is the best order for processing this query?
Query optimization
▪Example query: BRUTUS AND
CALPURNIA AND CAESAR
▪Simple and effective optimization: Process in order of increasing frequency
▪Start with the shortest postings list, then keep cutting further
▪In this example, first CAESAR, then CALPURNIA, then BRUTUS
Optimized intersection algorithm for
conjunctive queries
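A sketch in Python of this optimized conjunctive processing: fetch all postings lists, process them in order of increasing document frequency (here the list length stands in for the frequency stored in the dictionary), and intersect shortest-first, stopping early if an intermediate result becomes empty. The two-pointer merge is the same as sketched earlier.

```python
def intersect(p1, p2):
    """Two-pointer merge of docID-sorted postings lists (as sketched earlier)."""
    answer = []
    i, j = 0, 0
    while i < len(p1) and j < len(p2):
        if p1[i] == p2[j]:
            answer.append(p1[i])
            i += 1
            j += 1
        elif p1[i] < p2[j]:
            i += 1
        else:
            j += 1
    return answer

def intersect_many(postings_lists):
    """AND together several postings lists, shortest (rarest term) first."""
    postings_lists = sorted(postings_lists, key=len)  # increasing document frequency
    result = postings_lists[0]
    for plist in postings_lists[1:]:
        if not result:  # early exit: intermediate result is already empty
            break
        result = intersect(result, plist)
    return result

brutus    = [2, 4, 8, 16, 32, 64, 128]
caesar    = [1, 2, 3, 5, 8, 16, 21, 34]
calpurnia = [13, 16]
print(intersect_many([brutus, calpurnia, caesar]))  # [16]
```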
More general optimization
▪Example query: (MADDING OR CROWD) AND (IGNOBLE OR STRIFE)
▪Get frequencies for all terms
▪Estimate the size of each OR by the sum of its frequencies (conservative)
▪Process in increasing order of OR sizes
▪NOTE: Arbitrary queries require (all) intermediate results to be kept in
memory (or in a separate index), and some search- or hash-based access
structure is needed to access these results efficiently at run time
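A small Python sketch of the ordering heuristic, using hypothetical document frequencies: each OR group is assigned the sum of its terms' frequencies as a conservative size estimate, and the groups are then processed in increasing order of that estimate.

```python
# Hypothetical document frequencies, for illustration only.
df = {"madding": 10, "crowd": 25, "ignoble": 3, "strife": 8}

query = [("madding", "crowd"), ("ignoble", "strife")]  # (t1 OR t2) AND (t3 OR t4)

# Conservative size estimate of each OR group: sum of its document frequencies.
ordered = sorted(query, key=lambda group: sum(df[t] for t in group))
print(ordered)  # [('ignoble', 'strife'), ('madding', 'crowd')]
```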
Contents and take away
▪Boolean retrieval
▪Inverted indexing
▪Query processing
▪Boolean retrieval is still relevant in many fields where users want to know
exactly what they will get and eliminate any assumptions made by the system
Questions?