Database System
Implementation CSE 507
Query Processing and Query Optimization
Some slides adapted from Silberschatz, Korth and Sudarshan Database System Concepts – 6th Edition.
And Elmasri and Navathe, Fundamentals of Database Systems – 6th Edition.
Basic Steps in Query Processing
1. Parsing and translation
2. Optimization
3. Evaluation
Basic Steps in Query Processing Contd.
Parsing and translation
Translate the query into its internal form, e.g., relational algebra.
The parser checks syntax and verifies relations.
Evaluation
The query-execution engine takes a query-evaluation plan,
executes that plan, and returns the answers to the query.
Basic Steps: Regarding Optimization (1/3)
A relational algebra expression may have many equivalent
expressions
E.g., for “Select salary from Instructor where salary < 75000”:
σ_salary<75000(Π_salary(instructor)) is equivalent to
Π_salary(σ_salary<75000(instructor))
For each relational algebra operation we have candidate
algorithms
Thus, a relational-algebra expression can be evaluated in many
ways.
Basic Steps: Regarding Optimization (2/3)
An annotated expression specifying a detailed evaluation strategy
is called an evaluation plan.
E.g., we can use an index on salary to find instructors with salary < 75000,
or we can perform a complete relation scan and discard
instructors with salary ≥ 75000.
Basic Steps: Regarding Optimization (3/3)
Query Optimization: amongst all equivalent evaluation plans,
choose the one with the lowest cost.
Cost is estimated using statistical information from the
database catalog
e.g. number of tuples in each relation, size of tuples, etc.
Measures of Query Cost
Total cost = total elapsed time for answering the query
Many factors contribute to time cost
e.g., disk accesses, CPU, or even network communication
Typically disk access is the predominant cost, and is also
relatively easy to estimate. Measured by taking into account
Number of seeks × average seek cost
Number of blocks read × average block-read cost
Number of blocks written × average block-write cost
Measures of Query Cost
Some simplifying assumptions we will make:
Average-block-read-cost == Average-block-write-cost
Also, we will mostly use the total #blocks accessed as the total
cost of a query algorithm.
Measures of Query Cost
Occasionally, we use the number of block transfers from disk
and the number of seeks as the cost measures
tT – time to transfer one block
tS – time for one seek
Cost for b block transfers plus S seeks
b * tT + S * tS
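The formula above is easy to wrap in a tiny helper. A minimal sketch in Python; the timing values in the usage comment are illustrative assumptions, not measurements:

def disk_cost(b, S, tT, tS):
    """Estimated I/O time for b block transfers and S seeks."""
    return b * tT + S * tS

# e.g., with assumed tT = 0.1 ms and tS = 4 ms:
# disk_cost(1000, 10, 0.1, 4.0) -> 100.0 + 40.0 = 140.0 ms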
Query Processing Algorithms
Selection Operation
File scan
Algorithm A1 (linear search). Scan each file block and test all
records to see whether they satisfy the selection condition.
Cost estimate = br block transfers + 1 seek
br denotes #blocks containing records from relation r
If the selection is on a key attribute, the scan can stop on finding the record:
average cost = (br / 2) block transfers + 1 seek
Selection Operation
Index scan – search algorithms that use an index
selection condition must be on search-key of index.
A2 (primary index, equality on key). Retrieve a single record that
satisfies the corresponding equality condition
#blocks accessed = height + 1
Cost = (height + 1) * (tT + tS)
A3 (primary index, equality on nonkey). Retrieve multiple records.
Records will be on consecutive blocks
Let b = #blocks containing matching records
Cost = height * (tT + tS) + tS + b * tT; total #blocks accessed = height + b
Selection Operation
A4 (secondary index).
Retrieve a single record if the search-key is a candidate key
Total #blocks accessed = height + 1
Cost = (height + 1) * (tT + tS)
Retrieve multiple records if search-key is not a candidate key
Each of n matching records may be on a different block
Total #blocks accessed = height + n + 1
Cost = (height + n + 1) * (tT + tS)
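The A1–A4 cost formulas above can be collected into a small cost model. A minimal sketch in Python, using the slides' symbols (br, height, b = #blocks of matching records, n = #matching records); this is an illustration, not an optimizer component:

def selection_costs(br, height, b, n, tT, tS):
    """Estimated cost of each selection algorithm, per the slides."""
    return {
        "A1 linear scan":             br * tT + tS,
        "A1 linear scan, key attr":   (br / 2) * tT + tS,   # expected: stop at match
        "A2 primary index, key":      (height + 1) * (tT + tS),
        "A3 primary index, nonkey":   height * (tT + tS) + tS + b * tT,
        "A4 secondary index, key":    (height + 1) * (tT + tS),
        "A4 secondary index, nonkey": (height + n + 1) * (tT + tS),
    }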
External Sorting Algorithm
N-way Sort-Merge strategy:
Starts by sorting small subfiles (runs) of the main file
Then merges the sorted runs, creating larger sorted subfiles that
are merged in turn.
Each merge phase merges >= 2 runs.
External Sorting Algorithm Example 2-way
[Figure: 2-way sort-merge example. The initial relation is split into sorted runs (create runs), which are merged pairwise in merge pass–1 and merge pass–2 to produce the sorted output.]
Cost of External Sorting Algorithm
nR: number of initial runs;
b: number of file blocks;
nB: available buffer space;
dM: degree of merging;
nP: number of passes.
Sorting phase: nR = ⌈b / nB⌉
Merging phase: dM = min(nB - 1, nR); nP = ⌈log_dM(nR)⌉
Total #blocks accessed = 2*b + 2*b*⌈log_dM(nR)⌉
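These formulas are easy to check programmatically. A minimal sketch in Python (the guard for the single-run case is an added assumption):

import math

def external_sort_cost(b, nB):
    """Total block accesses (reads + writes) for external sort-merge."""
    nR = math.ceil(b / nB)            # initial runs from the sort phase
    dM = min(nB - 1, nR)              # degree of merging
    nP = 0 if nR <= 1 else math.ceil(math.log(nR, dM))  # merge passes
    return 2 * b + 2 * b * nP         # sort phase + 2*b per merge pass

# e.g., b = 1024 blocks, nB = 5 buffers:
# nR = 205, dM = 4, nP = 4, total = 2*1024 + 2*1024*4 = 10240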
Join Operation: Nested Loop Join
To compute the theta join R ⋈θ S:
for each tuple tr in R do begin
for each tuple ts in S do begin
test pair (tr,ts) to see if they satisfy the join condition
if they do, add tr • ts to the result.
end
end
R is called the outer relation and S the inner relation of the join.
Requires no indices and can be used with any kind of join
condition.
Expensive, since it examines every pair of tuples in the two relations.
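The pseudocode above translates directly into a runnable, in-memory sketch (Python; the theta predicate parameter is an assumption for illustration):

def nested_loop_join(R, S, theta):
    """Tuple-at-a-time nested-loop join; works for any join condition."""
    result = []
    for tr in R:              # outer relation
        for ts in S:          # inner relation, rescanned per outer tuple
            if theta(tr, ts):
                result.append(tr + ts)   # concatenate matching tuples
    return result

# e.g., an equi-join on the first attribute:
# nested_loop_join([(1, "a")], [(1, "x"), (2, "y")],
#                  lambda tr, ts: tr[0] == ts[0])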
Join Operation: Nested Loop Join
In the worst case, if there is enough memory to hold only one block
of each relation, the estimated cost is
nr * bs + br block transfers, plus nr + br seeks
If the smaller relation fits entirely in memory, use that as the inner
relation.
Reduces cost to br + bs block transfers and 2 seeks
Block nested-loops algorithm (next slide) is preferable.
Join Operation: Nested Loop Join with blocks
Variant of nested-loop join in which every block of inner relation
is paired with every block of outer relation.
for each block Br of r do begin
for each block Bs of s do begin
for each tuple tr in Br do begin
for each tuple ts in Bs do begin
Check if (tr,ts) satisfy the join condition
if they do, add tr • ts to the result.
end
end
end
end
Nested Loop Join using blocks
Worst case estimate: br * bs + br block transfers + 2 * br seeks
Each block in the inner relation S is read once for each block
in the outer relation
Best case: br + bs block transfers + 2 seeks.
If nB buffers are available then:
In block nested-loop join, use nB - 2 disk blocks as the blocking unit
for the outer relation; use the remaining two blocks to buffer the
inner relation and the output.
Cost = ⌈br / (nB - 2)⌉ * bs + br block transfers +
2 * ⌈br / (nB - 2)⌉ seeks
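A minimal sketch of this cost formula in Python (illustrative only):

import math

def bnlj_cost(br, bs, nB):
    """Worst-case transfers and seeks for block nested-loop join,
    using nB - 2 buffer blocks for the outer relation."""
    chunks = math.ceil(br / (nB - 2))    # outer-relation chunks
    return chunks * bs + br, 2 * chunks  # (block transfers, seeks)

# With nB = 3, chunks = br, giving the br*bs + br transfers
# and 2*br seeks quoted above.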
Join Operation: Single Loop Join
For each tuple tR in the outer relation R,
use the index to look up tuples in S that satisfy the join condition
with tuple tR.
Assume the buffer has space for only one page of R.
Cost of the join if we have a secondary index on S
bR + |R| * (height + 1 + sB) block accesses
sB is the selection cardinality for the join attribute.
Join Operation: Single Loop Join
Cost of the join if we have a clustering index on S
bR + |R| * (height + ⌈sB / bfrS⌉) block accesses (bfrS = blocking factor of S)
sB is the selection cardinality for the join attribute.
Cost of the join if we have a primary index on S
bR + |R| * (height + 1) block accesses
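A minimal sketch in Python collecting the three formulas (the ceiling on sB / bfrS is an added assumption; the slides write plain division):

import math

def indexed_join_cost(bR, nR, height, sB, bfrS, kind):
    """Block accesses for single-loop join; nR = |R|, sB = selection
    cardinality of the join attribute, bfrS = blocking factor of S."""
    if kind == "secondary":
        return bR + nR * (height + 1 + sB)
    if kind == "clustering":
        return bR + nR * (height + math.ceil(sB / bfrS))
    if kind == "primary":
        return bR + nR * (height + 1)
    raise ValueError(kind)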
Join Operation: Merge Join Algorithm
1. Sort both relations on their join attribute (if not already sorted
on the join attributes).
2. Merge the sorted relations to join them
1. Join step is similar to the merge stage of the sort-merge
algorithm.
2. Main difference is handling of duplicate values in join
attribute — every pair with same value on join attribute must
be matched and reported in the result.
Join Operation: Merge Join Algorithm
Each block needs to be read only once, under the assumption that all
tuples of one relation for any given value of the join attributes
fit in memory.
If bb is the number of buffer blocks allocated to each relation, the
cost of merge join is:
br + bs block transfers + ⌈br / bb⌉ + ⌈bs / bb⌉ seeks
+ the cost of sorting, if the relations are unsorted.
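An in-memory sketch of the merge step in Python, including the duplicate handling described above (assumes both inputs are already sorted on the join key and each group of equal S tuples fits in memory):

def merge_join(R, S, key):
    """Merge join of R and S, both sorted on key(t)."""
    result, i, j = [], 0, 0
    while i < len(R) and j < len(S):
        kr, ks = key(R[i]), key(S[j])
        if kr < ks:
            i += 1
        elif kr > ks:
            j += 1
        else:
            # find the run of S tuples sharing this join value
            j_end = j
            while j_end < len(S) and key(S[j_end]) == kr:
                j_end += 1
            # pair every matching R tuple with every tuple in the run
            while i < len(R) and key(R[i]) == kr:
                for ts in S[j:j_end]:
                    result.append(R[i] + ts)
                i += 1
            j = j_end
    return result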
Join Operation: Hash Join Algorithm
Step 1: Partition Phase:
h maps JoinAttrs values to {0, 1, ..., n}, where JoinAttrs denotes the
common attributes of r and s used in the join.
r0, r1, ..., rn denote partitions of R’s tuples
Each tuple tr ∈ R is put in partition ri, where i = h(tr[JoinAttrs]).
s0, s1, ..., sn denote partitions of S’s tuples
Each tuple ts ∈ S is put in partition si, where i = h(ts[JoinAttrs]).
Each partition (of S or R) may spread across several disk blocks.
Join Operation: Hash Join Algorithm
Intuition of Partition Phase:
R tuples in ri need only be compared with S tuples in si;
they need not be compared with S tuples in any other partition, since:
an r tuple and an s tuple that satisfy the join condition will have
the same value for the join attributes.
If that value is hashed to some value i, the r tuple has to be in ri
and the s tuple in si.
Join Operation: Hash Join Algorithm
Step 1: Partition Phase:
Step 1(a). Partition the relation S using hashing function h.
Step 1(b). Partition R similarly.
Step 2 Probe Phase:
For each i (using si as the build relation):
(a) Load si into memory and build an in-memory hash index on it using
the join attribute. This hash index uses a different hash function
than the earlier one, h.
(b) Read the tuples in ri from disk one by one. For each tuple tr,
locate each matching tuple ts in si using the in-memory hash index.
Output the join result.
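Both phases fit in a short in-memory sketch (Python; a real system would write each partition to disk, and the per-partition dict here stands in for the second hash function):

from collections import defaultdict

def hash_join(R, S, r_attr, s_attr, n=8):
    """Partition-and-probe hash join sketch over tuples."""
    h = lambda v: hash(v) % n
    # Step 1: partition both relations with the same hash function h
    r_parts, s_parts = defaultdict(list), defaultdict(list)
    for tr in R:
        r_parts[h(tr[r_attr])].append(tr)
    for ts in S:
        s_parts[h(ts[s_attr])].append(ts)
    # Step 2: for each i, build an index on s_i and probe it with r_i
    result = []
    for i in range(n):
        index = defaultdict(list)            # in-memory index on s_i
        for ts in s_parts[i]:
            index[ts[s_attr]].append(ts)
        for tr in r_parts[i]:                # read r_i one tuple at a time
            for ts in index[tr[r_attr]]:
                result.append(tr + ts)
    return result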
Illustrating Hash Join Algorithm
[Figure illustrating the partition and probe phases omitted.]
Some Details on Hash Join Algorithm
The value n (#partitions) and the hash function h are chosen such
that each si fits in memory.
If we have space for M blocks in main memory, then n must be at
least ⌈bs / M⌉.
To be more precise, we also need to account for space for:
one block for reading in ri + one output buffer + the index on si.
The partitions ri of the probe relation R need not fit in memory;
we just read one block of ri at a time.
Some Details on Hash Join Algorithm
Recursive partitioning
Required if #partitions n is greater than #buffers M available.
Instead of partitioning n ways, use M – 1 partitions.
Further partition the M – 1 partitions using a different hash function
Use same partitioning method on both R and S.
Join Operation: Hash Join Algorithm
Hash-table overflow occurs in partition si if si does not fit in
memory. Reasons could be
Many tuples in S with same value for join attributes
Bad hash function
Overflow resolution can be done in build phase
Partition si is further partitioned using different hash function.
Partition ri must be similarly partitioned.
Join Operation: Hash Join Algorithm
Overflow avoidance performs partitioning carefully to avoid
overflows during build phase
E.g. partition build relation into many partitions, then combine them
Both approaches fail with large numbers of duplicates
Fallback option: use block nested loops join on overflowed
partitions.
Join Operation: Hash Join Algorithm Cost
If recursive partitioning is not required, the cost of hash join is
3(br + bs) + 4*n block transfers
The 4*n term is small relative to 3(br + bs) and can be ignored.
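As a sketch (Python; illustrative only):

def hash_join_cost(br, bs, n):
    """Block transfers with no recursive partitioning: read + write +
    re-read of both relations, plus up to 4*n partially filled blocks."""
    return 3 * (br + bs) + 4 * n   # the 4*n term is usually negligible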
Join Operation: Hybrid Hash Join Algorithm
Useful when memory sizes are relatively large and the
build input is bigger than memory.
Main feature of hybrid hash join:
Keep the first partition of the build relation in memory.
Some Meta-Level Evaluation Strategies
Materialization
Materialized evaluation:
Evaluate one operation at a time, starting at the lowest level.
Use intermediate results materialized into temporary
relations to evaluate next-level operations.
Materialization
Consider the following example query and its expression tree.
Select name
from department natural join instructor
where department.building = “Watson”;
Materialization
Example: in the expression tree for this query, compute and store
σ_building=“Watson”(department),
then compute and store its join with instructor, and finally compute
the projection on name.
Materialization Contd..
Materialized evaluation is always applicable
Cost of writing results to disk and reading them back can be
quite high
Our cost formulas for operations so far have ignored the cost of
writing results to disk, so:
Overall cost = sum of costs of individual operations
+ cost of writing intermediate results to disk
Pipelining
Pipelined evaluation :
Evaluate several operations simultaneously, passing the
results of one operation on to the next.
E.g., in the previous expression tree, don’t store the result of
σ_building=“Watson”(department);
instead, pass tuples directly to the join.
Similarly, don’t store result of join, pass tuples directly to
projection.
Any thoughts on its applicability? Can it be used with all algorithms?
Pipelining
Much cheaper than materialization: no need to store a
temporary relation to disk.
Pipelining may not always be possible – e.g., sort, hash-join.
For pipelining to be effective, use evaluation algorithms that
generate output tuples even as tuples are received for
inputs to the operation.
Pipelines can be executed in two ways: demand driven
and producer driven
Pipelining: Demand Driven
In demand driven or lazy evaluation
System repeatedly requests next tuple from top level
operation
Each operation requests next tuple from children
operations as required, in order to output its next tuple
In between calls, operation has to maintain “state” so it
knows what to return next
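Python generators are a natural way to sketch demand-driven iterators: each next() pulls one tuple up the tree, and the generator's suspended frame is exactly the “state” kept between calls (the operator names here are illustrative):

def scan(table):                         # leaf operator
    for t in table:
        yield t

def select(pred, child):                 # pulls from child on demand
    for t in child:
        if pred(t):
            yield t

def project(cols, child):
    for t in child:
        yield tuple(t[c] for c in cols)

# Pull tuples one at a time from the top of the pipeline:
# plan = project((0,), select(lambda t: t[1] == "Watson", scan(department)))
# next(plan), next(plan), ...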
Pipelining: Producer Driven
In producer-driven or eager pipelining
Operators produce tuples eagerly and pass them up to
their parents
Buffer maintained between operators, child puts
tuples in buffer, parent removes tuples from buffer
if buffer is full, child waits till there is space in the buffer,
and then generates more tuples
System schedules operations that have space in output
buffer and can process more input tuples
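A minimal producer-driven sketch in Python, with a bounded queue as the buffer between a child and its parent (the SENTINEL end-marker is an added convention):

import queue, threading

SENTINEL = object()

def producer(source, buf):
    """Child operator: eagerly push tuples; put() blocks when the
    buffer is full, so the child waits for space."""
    for t in source:
        buf.put(t)
    buf.put(SENTINEL)

buf = queue.Queue(maxsize=64)            # bounded inter-operator buffer
threading.Thread(target=producer,
                 args=(iter([("a",), ("b",)]), buf),
                 daemon=True).start()

t = buf.get()                            # parent removes tuples
while t is not SENTINEL:
    print(t)
    t = buf.get()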