Inverted page table - Rose

Day 22
Virtual Memory
Two-level scheme to support large page tables
Consider a process as described here:
 Logical address space is 4 GiB (2^32 bytes)
 Size of a page is 4 KiB (2^12 bytes)
 There are 2^20 pages in the process (2^32 / 2^12)
 This implies we need 2^20 page table entries
 If each page table entry occupies 4 bytes, then we need a 2^22-byte (4 MiB) page table

The page table will occupy 2^22 / 2^12 = 2^10 pages.
 Root table will consist of 2^10 entries – one for each page that holds part of the page table.
 Root table will occupy 2^12 bytes (4 KiB) of space and will be kept in main memory permanently.
 A reference could require two disk accesses: one if the needed page of the page table is not in memory, and another if the target page itself is not in memory. (The address split is sketched in code below.)
[Figure: two-level page table structure. The root page table is always in main memory; the user page tables are brought into main memory when needed.]
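As a rough C sketch of the address split implied by the numbers above (10-bit root index, 10-bit page-table index, 12-bit offset); the example address and variable names are illustrative assumptions only.

#include <stdint.h>
#include <stdio.h>

/* Sketch: splitting a 32-bit logical address for the two-level scheme above.
 * 10-bit root index, 10-bit page-table index, 12-bit offset (illustrative). */
#define OFFSET_BITS   12
#define PT_INDEX_BITS 10

int main(void) {
    uint32_t vaddr = 0x12345678u;   /* arbitrary example address */

    uint32_t offset     = vaddr & ((1u << OFFSET_BITS) - 1);
    uint32_t pt_index   = (vaddr >> OFFSET_BITS) & ((1u << PT_INDEX_BITS) - 1);
    uint32_t root_index = vaddr >> (OFFSET_BITS + PT_INDEX_BITS);

    /* root_index selects one of the 2^10 root-table entries; that entry points
     * to a 4 KiB page holding 2^10 PTEs; pt_index selects the PTE; offset
     * selects the byte within the 4 KiB frame. */
    printf("root index = %u, page-table index = %u, offset = %u\n",
           (unsigned)root_index, (unsigned)pt_index, (unsigned)offset);
    return 0;
}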
Inverted page table




The page table can get very large.
An inverted page table has one entry for every frame in main memory and hence is of a fixed size.
A hash function is used to map the page number to a frame number.
An entry has a page number, process ID, valid bit, modify bit, chain pointer, and so on (a lookup sketch follows the figure below).
[Figure 8.27: Rehashing techniques for the inverted page table. Hashing function: X mod 8; panel (b) shows chained rehashing.]
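One way such a chained inverted table could be searched is sketched below in C; the 8-frame table and the X mod 8 hash follow the figure, but the structure fields, the names and the insertion side (not shown) are illustrative assumptions rather than any particular OS's implementation.

#include <stdint.h>

#define NUM_FRAMES 8                 /* one entry per frame, as in the figure */

struct ipt_entry {
    uint32_t page;       /* virtual page number            */
    int      pid;        /* owning process                 */
    int      valid;      /* entry in use?                  */
    int      modified;   /* modify (dirty) bit             */
    int      chain;      /* index of next entry, -1 = end  */
};

static struct ipt_entry ipt[NUM_FRAMES];

/* Return the frame holding (pid, page), or -1 to signal a page fault. */
int ipt_lookup(int pid, uint32_t page) {
    int frame = (int)(page % NUM_FRAMES);          /* hash: X mod 8 */
    while (frame != -1 && ipt[frame].valid) {
        if (ipt[frame].pid == pid && ipt[frame].page == page)
            return frame;                          /* hit: frame number found */
        frame = ipt[frame].chain;                  /* follow the collision chain */
    }
    return -1;                                     /* miss: page fault */
}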
Translation Look-aside Buffer (TLB)



Used in conjunction with a page table.
Aim is to reduce references to the page table and hence reduce the number of memory accesses (without a TLB, each fetch needs two memory accesses: one for the page table entry and one for the data itself).
The TLB is a cache that holds a small portion of the page table.
  It is a faster and smaller memory.
  It reduces the overall page access time.
A TLB entry contains the page number and the corresponding page table entry (PTE).

During address translation (sketched in code after this list):
1. Check the TLB. If there is a TLB hit, use the frame number with the offset to generate the physical address.
2. Simultaneously access the page table. If the TLB hits, stop the page table access; else look at the page table entry.
   1. If the entry is found, use the frame number with the offset to generate the physical address, and update the TLB.
3. If there is a page fault, block the process and issue a request to bring the page into main memory.
4. When the page is ready, update the page table.
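A minimal C sketch of these four steps, assuming a tiny array-based TLB and page table; the sizes, the trivial TLB replacement choice and the page-fault handler are illustrative assumptions, not how any real MMU or OS implements them.

#include <stdint.h>
#include <stdbool.h>

#define OFFSET_BITS 12
#define TLB_SIZE    16          /* toy sizes, for illustration only */
#define NUM_PAGES   1024

struct tlb_entry { uint32_t page, frame; bool valid; };
struct pte       { uint32_t frame; bool present; };

static struct tlb_entry tlb[TLB_SIZE];
static struct pte       page_table[NUM_PAGES];
static uint32_t         next_free_frame;

/* Stand-in for step 4: a real OS blocks the process and reads the page from
 * disk; here we simply claim the next free frame and mark the PTE present. */
static void handle_page_fault(uint32_t page) {
    page_table[page].frame   = next_free_frame++;
    page_table[page].present = true;
}

uint32_t translate(uint32_t vaddr) {            /* assumes page < NUM_PAGES */
    uint32_t page   = vaddr >> OFFSET_BITS;
    uint32_t offset = vaddr & ((1u << OFFSET_BITS) - 1);

    /* Step 1: check the TLB (associative search over all entries). */
    for (int i = 0; i < TLB_SIZE; i++)
        if (tlb[i].valid && tlb[i].page == page)
            return (tlb[i].frame << OFFSET_BITS) | offset;

    /* Steps 2-3: TLB miss, so consult the page table and update the TLB. */
    if (page_table[page].present) {
        int slot = (int)(page % TLB_SIZE);      /* trivial replacement choice */
        tlb[slot].page  = page;
        tlb[slot].frame = page_table[page].frame;
        tlb[slot].valid = true;
        return (page_table[page].frame << OFFSET_BITS) | offset;
    }

    /* Step 4: page fault - bring the page in, update the page table, retry. */
    handle_page_fault(page);
    return translate(vaddr);
}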
TLB


If we keep the right entries of the page table in the TLB, we can reduce the page table accesses and hence memory accesses.
The TLB will hold only some of the page table entries, so:
  Use associative mapping to find a page table entry.
  Search time is O(1).
Memory access time



In TLB: 10 ns (TLB) + 100 ns (data) = 110 ns
Not in TLB: 10 ns (TLB) + 100 ns (root page table) + 100 ns (page table) + 100 ns (data) = 310 ns
Average access time = (0.99 * 110 ns) + (0.01 * 310 ns) = 112 ns (worked out in the sketch below)
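The same arithmetic as a tiny C program, using the 10 ns / 100 ns latencies and the 0.99 hit ratio assumed on this slide:

#include <stdio.h>

/* Effective access time with a 99% TLB hit ratio (figures from the slide). */
int main(void) {
    double hit_rate  = 0.99;
    double hit_time  = 10.0 + 100.0;                  /* TLB + data              = 110 ns */
    double miss_time = 10.0 + 100.0 + 100.0 + 100.0;  /* TLB + root PT + PT + data = 310 ns */

    double eat = hit_rate * hit_time + (1.0 - hit_rate) * miss_time;
    printf("effective access time = %.1f ns\n", eat); /* prints 112.0 ns */
    return 0;
}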
[Figures: page table lookup by direct mapping vs. TLB lookup by associative mapping.]
Page size – hardware/software decision
Small page size
  Less internal fragmentation
  More pages in main memory
     Large page tables
     Few page faults
Large page size
  More internal fragmentation
  Fewer pages per process
     Smaller page tables
  Fewer page faults
  Fewer processes in main memory
Page faults and page size
Eg: Small pages
while(x < 30){ - Page 1
printTheValues(); - Page 5
readNewValues(); - Page 6
filterNewValues(); - Page 11
writeNewValues(); - Page 12
printTheValues(); - Page 5
x++; - Page 1
}
Since the pages are small, pages 1, 5, 6, 11 and 12 can all reside in
main memory. Hence, fewer page faults.
Eg: Medium-sized pages
while(x < 30){ - Page 1
printTheValues(); - Page 5
readNewValues(); - Page 3
filterNewValues(); - Page 4
writeNewValues(); - Page 5
printTheValues(); - Page 5
x++; - Page 1
}
 Only pages 1, 3 and 4 fit in main memory, so to bring in page 5 we must replace one of 1/3/4, which will be needed again on the next iteration. Lots of page faults.
Eg: Large pages
while(x < 30){ - Page 1
printTheValues(); - Page 1
readNewValues(); - Page 1
filterNewValues(); - Page 2
writeNewValues(); - Page 2
printTheValues(); - Page 1
x++; - Page 1
}
Both pages 1 and 2 are in main memory. Fewer page faults.
Page faults and number of frames per process


Variable page sizes are supported by many architectures.
Operating systems typically support only one page size.
  Makes the replacement policy simpler.
  Makes resident set management easier (how many pages per process, etc.).
VM with segmentation
Advantages
Growing data structures – the OS can shrink or enlarge the segment as required.
Allows parts of the process to be recompiled independently, without recompiling the entire process.
Easier to share.
Easier to protect.

Segment table entry (sketched as a C struct below)
  Present bit
  Starting address
  Length of segment
  Modify bit
  Protection bit
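One possible C layout for such an entry; the field types, widths and names are illustrative assumptions, not taken from any particular architecture.

#include <stdint.h>
#include <stdbool.h>

/* Illustrative segment table entry holding the fields listed above. */
struct segment_table_entry {
    bool     present;     /* present bit: is the segment in main memory?   */
    uint32_t base;        /* starting address of the segment               */
    uint32_t length;      /* length of the segment                         */
    bool     modified;    /* modify bit: written since it was brought in?  */
    uint8_t  protection;  /* protection bit(s), e.g. read/write/execute    */
};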
Combined paging and segmentation
Sharing and protection are handled at the segment level.
Replacement is handled at the page level.
The present bit and modified bit live in the page-table entry (translation sketched below).
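A minimal C sketch of translation in the combined scheme, assuming one page table per segment; the structures, the bounds check and the return codes are illustrative assumptions only.

#include <stdint.h>
#include <stdbool.h>

#define OFFSET_BITS 12

struct pte { bool present; bool modified; uint32_t frame; };   /* page level    */

struct segment_entry {                                         /* segment level */
    uint32_t    length_pages;   /* segment length in pages, used for the bound check */
    struct pte *page_table;     /* one page table per segment                        */
};

/* Returns 0 and fills *paddr on success, -1 on a segment violation,
 * -2 on a page fault (replacement happens at the page level). */
int translate_seg_page(const struct segment_entry *seg_table,
                       uint32_t seg, uint32_t page, uint32_t offset,
                       uint32_t *paddr) {
    const struct segment_entry *s = &seg_table[seg];
    if (page >= s->length_pages)          /* protection at the segment level */
        return -1;
    const struct pte *p = &s->page_table[page];
    if (!p->present)                      /* present bit lives in the PTE    */
        return -2;
    *paddr = (p->frame << OFFSET_BITS) | offset;
    return 0;
}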