Processes - Rose

Paging/Segmentation Architectures
 Memory references are dynamically translated into
physical addresses at run time
 A process may be swapped in and out of main memory
such that it occupies different regions
 A process may be broken up into pieces that do not
need to be located contiguously in main memory
 All pieces of a process do not need to be loaded in main
memory during execution
Combined Paging and Segmentation
 Each segment is broken into fixed-size pages
 Paging
 is transparent to the programmer
 eliminates external fragmentation
 Segmentation
 is visible to the programmer
 allows for growing data structures, modularity, and
support for sharing and protection
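As a concrete illustration of how a combined scheme divides a virtual address, the sketch below assumes a hypothetical 32-bit layout with an 8-bit segment number, a 12-bit page number, and a 12-bit offset; the field widths are invented for the example, not taken from any real architecture:

```python
# Hypothetical field widths for a combined segmentation + paging scheme.
SEG_BITS, PAGE_BITS, OFF_BITS = 8, 12, 12

def split_address(vaddr):
    """Split a 32-bit virtual address into (segment, page, offset)."""
    offset = vaddr & ((1 << OFF_BITS) - 1)
    page   = (vaddr >> OFF_BITS) & ((1 << PAGE_BITS) - 1)
    seg    = vaddr >> (OFF_BITS + PAGE_BITS)
    return seg, page, offset
```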
Virtual Memory
 With paging/segmentation, a program can execute without all of
its pages/segments being in memory!
 The resident set of a process is the set of its pages that are
currently loaded in memory
 If a process attempts to access a non-resident page, then:
 1. The memory management unit (MMU) generates an interrupt
 2. The OS moves the process to the Blocked state
 3. The OS chooses a page to replace
 4. The OS issues a disk I/O write request to save that page (if it
was modified)
 5. The disk generates an interrupt when the write operation completes
 6. The OS issues a disk I/O read request to load the referenced page
 7. The disk generates an interrupt when the read operation completes
 8. The OS moves the process to the Ready state
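The sequence of steps a process goes through on a fault can be sketched as a small simulation; every name here is illustrative rather than a real OS interface:

```python
def handle_page_fault(resident, dirty, victim, wanted):
    """Return the ordered steps the OS performs to fault in `wanted`,
    replacing `victim`; a disk write happens only if the victim is dirty."""
    steps = ["MMU raises interrupt", "process -> Blocked",
             f"choose victim page {victim}"]
    if victim in dirty:                       # modified page must be saved
        steps += [f"disk write of page {victim}", "write-complete interrupt"]
        dirty.discard(victim)
    resident.discard(victim)
    steps += [f"disk read of page {wanted}", "read-complete interrupt",
              "process -> Ready"]
    resident.add(wanted)
    return steps
```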
Advantages
 More processes may be maintained in main memory
 Increases the probability that some process will be in the
Ready state at any particular time
 A process may be larger than main memory
Virtual Memory Support
 Hardware must support paging and/or segmentation
 Operating system must be able to manage the
movement of pages and/or segments between
secondary memory and main memory
Paging
 Each page table entry contains the frame number in
main memory of the corresponding page
 A present bit indicates whether or not the page is in
main memory
 A modified bit (or “dirty” bit) indicates whether or not
the page has been altered since it was last loaded
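One way to picture such an entry is as packed bits in an integer. The layout below (frame number in the high bits, present and modified bits in the low bits) is an invented example for illustration, not a real architecture's PTE format:

```python
# Hypothetical PTE layout: frame number above bit 2,
# present bit at bit 1, modified ("dirty") bit at bit 0.
PRESENT, MODIFIED = 1 << 1, 1 << 0

def make_pte(frame, present=True, modified=False):
    """Pack the fields into a single integer entry."""
    return (frame << 2) | (PRESENT if present else 0) \
                        | (MODIFIED if modified else 0)

def pte_fields(pte):
    """Unpack a PTE into (frame, present, modified)."""
    return pte >> 2, bool(pte & PRESENT), bool(pte & MODIFIED)
```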
Page Table Entries Revisited
 Page Table Descriptor (PTD):
   | Page Table Pointer | type |
                          2 1 0
 Page Table Entry (PTE):
   | Frame Number | C M R ACC | type |
                    8 7 6 5 4   2 1 0
 Type = PTD, PTE, or Invalid
 C - Cacheable
 M - Modify
 R - Reference
 ACC - Access permissions
 [Figure: a single-level page table, itself stored in main memory and
indexed directly by page number]
Page Table Memory Requirements
 Each process has its own page table
 Page tables can be big
 (Only) the relevant portion of the page table has to be
in memory for the process to execute
 Page tables are also stored in virtual memory
 However, the page table descriptor must be in memory
(or in a processor register)
Two-Level Page Tables
 Some processors may use a two-level scheme to
organize large page tables
 Page tables are kept in virtual memory
 A one-page root page table resides permanently in
main memory and has entries mapping to the pages of
the page table
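The two-level walk can be sketched with nested dictionaries standing in for the root table and the second-level page tables (structure only; real MMUs do this walk in hardware, and the bit widths here are illustrative):

```python
def translate(root, vaddr, page_bits=12, index_bits=10):
    """Two-level walk: top bits index the root table, middle bits index a
    second-level page table, low bits are the offset. Returns a physical
    address, or None if either level misses (a page fault)."""
    offset     = vaddr & ((1 << page_bits) - 1)
    pt_index   = (vaddr >> page_bits) & ((1 << index_bits) - 1)
    root_index = vaddr >> (page_bits + index_bits)
    second = root.get(root_index)       # second-level table, if resident
    if second is None:
        return None
    frame = second.get(pt_index)
    if frame is None:
        return None
    return (frame << page_bits) | offset
```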
Two-Level Page Tables
 [Figure: the root page table is always in main memory; the second-level
page tables are brought into main memory as needed]
Two-Level Page Table Example
Consider a process as described here:
 Logical address space is 4 GB (2^32 bytes)
 Size of a page is 4 KB (2^12 bytes)
 There are 2^32 / 2^12 = 2^20 pages in the process.
 This implies we need 2^20 page table entries.
 If each page table entry occupies 4 bytes, then we need a 2^22-byte
(4 MB) page table.
 The page table will occupy 2^22 / 2^12 = 2^10 = 1024 pages.
 Root table will consist of 1024 entries – one for each page that holds
part of the page table.
 Root table will occupy 1024 × 4 = 2^12 bytes, i.e. 4 KB of space, and
will be kept in main memory permanently.
 A page access could require two disk accesses: one for the page-table
page and one for the page itself.
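The arithmetic of this example can be checked directly, assuming the 4 GB address space, 4 KB pages, and 4-byte entries stated above:

```python
PAGE     = 2 ** 12                     # 4 KB page
PTE_SIZE = 4                           # bytes per page table entry

pages       = 2 ** 32 // PAGE          # pages in a 4 GB address space
table_bytes = pages * PTE_SIZE         # size of the full page table
table_pages = table_bytes // PAGE      # pages occupied by the page table
root_bytes  = table_pages * PTE_SIZE   # root table: one entry per table page
```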
Inverted Page Tables
 An inverted page table is another solution to the
problem of page table size
 An inverted page table has one entry per frame and
hence is of a fixed size
 A hash function is used to map the page number (and
process number) to the frame number
 Each PTE has a page number, process id, valid bit,
modify bit, and chain pointer
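A minimal sketch of this lookup path, with an invented 8-frame memory and Python's built-in hash standing in for the hash function:

```python
N_FRAMES = 8                       # illustrative frame count
ipt = [None] * N_FRAMES            # ipt[frame] = (pid, page, next_frame)
buckets = {}                       # hash bucket -> first frame in chain

def ipt_insert(pid, page, frame):
    """Record that (pid, page) occupies `frame`, chaining on collisions."""
    h = hash((pid, page)) % N_FRAMES
    ipt[frame] = (pid, page, buckets.get(h))
    buckets[h] = frame

def ipt_lookup(pid, page):
    """Return the frame holding (pid, page), or None on a miss."""
    frame = buckets.get(hash((pid, page)) % N_FRAMES)
    while frame is not None:       # walk the synonym chain
        p, pg, nxt = ipt[frame]
        if (p, pg) == (pid, page):
            return frame
        frame = nxt
    return None
```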
Inverted Page Tables
 [Figure: the page number of the virtual address, together with the PID,
is hashed into a hash table; the inverted page table is searched along
the synonym chain until an entry matching the page number and PID is
found; that entry's index is the frame number, which is combined with
the offset to form the physical address]
Hashing techniques
 [Figure: hashing with hash function X mod 8; panel (b) shows chained
rehashing, where colliding keys are linked into a chain]
HW/SW Decisions: Page Size
 Smaller page size means
 Less internal fragmentation
 More pages required per process
 More pages per process means
 Larger page tables
 Larger page tables means
 Large portion of page tables in virtual memory
 But … secondary memory is designed to efficiently
transfer large blocks of data so a large page size is
better
Exploiting the Principle of Locality
 Programs tend to demonstrate temporal and spatial
locality of reference
 Program and data references within a process tend to cluster
 Only a few pages of a process are needed over a short time
 Possible to make good predictions about which pages will be
needed in the near future
 This is the reason that virtual memory can work efficiently
Page Size Revisited
 Small page size means
 Improved locality exploitation
 Low page faults
 Large page size means
 Reduced locality exploitation
 Increased page faults
Overall Effect of Page Size
 [Figure]
Effect of Working Set Size
 [Figure]
Virtual Memory Policies
 Fetch policy
 Placement Policy
 Replacement policy
 Resident Set Management
 Cleaning policy
 Load Control
Fetch Policy
 When should a page be brought into memory?
 Demand paging:
 Fetch when a reference is made to a location in the page
 Pre-paging
 Bring in more pages than immediately needed
 Load referenced page as well as “n” neighboring pages
 Exploits spatial locality and the efficiency of most
secondary devices in reading contiguous locations
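The difference between the two fetch policies can be sketched as follows; the function and its parameter n are illustrative, not a real kernel interface:

```python
def pages_to_fetch(fault_page, n, total_pages, prepage=True):
    """Demand paging fetches only the faulting page; pre-paging (the
    sketched policy) also brings in up to n neighbors on each side."""
    if not prepage:
        return [fault_page]                    # demand paging
    lo = max(0, fault_page - n)                # clamp at the first page
    hi = min(total_pages - 1, fault_page + n)  # clamp at the last page
    return list(range(lo, hi + 1))
```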
Placement Policy
 Where should a portion be placed?
 Already discussed
 With segmentation:
 Use best-fit, first-fit, next-fit, etc.
 With paging (or paging combined with segmentation):
 Not an issue – any frame will do
Replacement Policy
 Stallings: “Replacement policy”
 Which page should be replaced?
 Stallings: “Resident set management”
 How many frames should each process be allocated?
 Should the replaced page be one belonging to the process that
caused the page fault, or could it belong to any process?
 Discussed later …
Fixed-Allocation Policies
 Optimal
 Least Recently Used
 First-in, First-out
 Clock algorithm
Optimal Policy
 Replace the page that will not be referenced for the
longest time
 Provably optimal
 Impossible to implement
First-In, First-Out (FIFO)
 Replace the page that has been in memory the longest
 Simplest replacement policy to implement
 Treat frames allocated to a process as a circular buffer
 Remove pages in round-robin order
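A minimal FIFO fault counter along these lines; the reference string used in the test is just an example:

```python
from collections import deque

def fifo_faults(frames, refs):
    """Count page faults for a FIFO policy with `frames` frames."""
    mem, q, faults = set(), deque(), 0
    for p in refs:
        if p not in mem:
            faults += 1
            if len(mem) == frames:
                mem.discard(q.popleft())   # evict the oldest resident page
            mem.add(p)
            q.append(p)                    # newest page joins the back
    return faults
```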
Least Recently Used (LRU)
 Replace the page that hasn't been referenced for the
longest time in the past
 Locality suggests that this page is least likely to be referenced
in the near future
 Does almost as well as the optimal algorithm on many real
reference sequences
 Naïve implementations are inefficient
 Tag each frame with the time (or sequence number) of its last reference
 Must scan all frames every time a page must be replaced
 Counter overflow is an issue
 Maintain a queue of pages ordered by recency of reference
 Must scan and update the queue every time a page is referenced!
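An efficient LRU sketch keeps the recency queue in an ordered map, so a reference becomes a constant-time move-to-end rather than a scan (work is still done on every reference, which is why hardware rarely implements exact LRU):

```python
from collections import OrderedDict

def lru_faults(frames, refs):
    """Count page faults for LRU using an ordered map as the recency queue."""
    mem, faults = OrderedDict(), 0
    for p in refs:
        if p in mem:
            mem.move_to_end(p)             # mark as most recently used
        else:
            faults += 1
            if len(mem) == frames:
                mem.popitem(last=False)    # evict the least recently used
            mem[p] = True
    return faults
```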
Clock Policy
 Memory frames are treated as a circular buffer
 Each frame has a “use” bit
 Set to 1 every time the page is referenced (including initially)
 When a page fault occurs:
 Starting at the frame after the last one referenced scan the use bits
of each frame
 If the use bit is zero, replace the page in that frame and stop scanning
 Otherwise, clear the bit to zero and continue scanning
 If all the frames have bit set to one
 Pointer will go one complete circle and the first page will be
replaced (FIFO)
 Otherwise, it approximates LRU – replacing a page that has not
been recently accessed.
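The scan described above can be sketched as a simplified simulation that indexes frames directly:

```python
def clock_faults(frames, refs):
    """Count page faults for the clock policy with one use bit per frame."""
    slots = [None] * frames            # page held in each frame
    use   = [0] * frames               # use bit per frame
    hand, faults = 0, 0
    for p in refs:
        if p in slots:
            use[slots.index(p)] = 1    # a reference sets the use bit
            continue
        faults += 1
        while use[hand]:               # give used frames a second chance
            use[hand] = 0
            hand = (hand + 1) % frames
        slots[hand], use[hand] = p, 1  # replace, set use bit
        hand = (hand + 1) % frames     # advance past the new page
    return faults
```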
 [Figure: clock replacement example – page 727 must be loaded; the hand
skips frames whose use bit is set, the first frame with use bit 0 is
replaced, and the pointer advances past it]
Variation of Clock Policy
 The simple clock policy can be made more powerful by
increasing the number of bits considered.
 With u (use bit) and m (modify bit), frames fall into four classes:
 Not accessed recently, not modified (u=0, m=0)
 Not accessed recently, modified (u=0, m=1)
 Accessed recently, not modified (u=1, m=0)
 Accessed recently, modified (u=1, m=1)
 Prefer to replace pages that have not been used recently and,
among those, pages that have not been modified.
Variation of Clock Policy (cont.)
 Step 1:
 Start scanning frame buffer from current pointer
position and make no changes to use bit
 Replace first page with u=0, m=0
 Step 2:
 If step 1 fails, scan again, setting each u = 0 for each
frame bypassed
 Replace first page with u=0, m=1
 Step 3:
 If step 2 fails, repeat steps 1 and 2; since all use bits are now
zero, a victim will be found
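The three steps can be sketched as a victim-selection function; for brevity this version always scans from frame 0 rather than from the current pointer position:

```python
def enhanced_clock_victim(frames):
    """frames: list of dicts with 'u' (use) and 'm' (modify) bits.
    Returns the index of the frame to replace, mutating use bits
    exactly as the three steps above describe."""
    n = len(frames)
    while True:
        for i in range(n):                     # step 1: find u=0, m=0
            if frames[i]['u'] == 0 and frames[i]['m'] == 0:
                return i
        for i in range(n):                     # step 2: find u=0, m=1
            if frames[i]['u'] == 0 and frames[i]['m'] == 1:
                return i
            frames[i]['u'] = 0                 # clear u on each bypass
        # step 3: all use bits are now zero; loop back to step 1
```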
Resident Set Management
 Resident set size
 How many pages per process in main memory
 Replacement scope
 When bringing in a new page, should the page being replaced be
that of the process that caused the page fault, or could it belong
to any process?
 Thrashing: a condition in which a processor spends most of its
time swapping pages rather than executing instructions
Resident Set Management (cont.)
 Fixed allocation
 Fixed number of frames in main memory for a process
 Number decided at creation time
 Local scope: page to be replaced must be from the same process
 Variable allocation
 Number of frames allocated can change over the lifetime of the
process.
 If there are many page faults, then increase the number of frames
allocated and vice versa.
 Operating system has to observe program behavior and predict
future behavior.
 Allows global scope: page to be replaced can come from any frame
Page Buffering Policy
 A page that is to be replaced remains in memory for
some more time and will require no disk access if it is
needed in the immediate future
 Also, since a list of modified pages is maintained,
when writing back to disk, can write all the modified
pages together
 Used along with a simple page replacement scheme
such as FIFO
Page Buffering Policy (cont.)
 A few frames are not allocated to any process
 When a page is to be replaced, its page table entry is
added to one of two lists:
 Free page list, if page not modified
 Modified page list, if page modified
 When a new page is brought in,
 It is placed in the frame vacated by the page whose page
table entry is in the front of the free/modified page list,
and
 The page table entry of the page to be replaced is placed
at the back of the free/modified page list.
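The two lists can be sketched with a pair of queues; all names are illustrative:

```python
from collections import deque

free_list, modified_list = deque(), deque()   # frames not bound to a process

def replace(page, dirty):
    """A replaced page joins the tail of the free or modified list."""
    (modified_list if dirty else free_list).append(page)

def rescue(page):
    """A fault on a listed page needs no disk access: just unlink it."""
    for lst in (free_list, modified_list):
        if page in lst:
            lst.remove(page)
            return True
    return False

def next_victim():
    """A new page reuses the frame of the page at the head of a list."""
    lst = free_list if free_list else modified_list
    return lst.popleft()
```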
Load Control and
Multiprogramming
 Goal is to maximize processor utilization
 Adjust the level of multiprogramming such that the
mean time between faults equals the mean time
required to process a page fault
 This maximizes processor utilization
Cleaning Policy
 When should a modified page be written back to disk?
 Demand cleaning
 Write when a page has to be replaced
 A process that suffers a page fault must wait for two disk
accesses before it can be unblocked => processor under-utilized
 Precleaning
 Write pages in batches
 A page may be written to disk many times before it is
actually replaced
 Best approach uses page buffering
Load Control Policy
 Determines the number of processes that will be
resident in main memory
 Too few resident processes could result in processor
being idle.
 Too many resident processes could result in frequent
page faults and thrashing.
Load Control and
Multiprogramming
 50% criterion
 Keep utilization of the paging device at 50%
 Adapt the clock page replacement algorithm
 Monitor the rate at which the pointer scans the circular buffer
of frames
 Few page faults => few requests to advance the pointer
 For each request, if the number of frames scanned is small =>
many pages in the resident set are not being accessed frequently
and do not need to be in memory
 Can decrease the resident set size and increase the
multiprogramming level.
Process Suspension Policy
 Lowest priority process
 Faulting process
 This process does not have its working set in main memory so
it will be blocked anyway
 Last process activated
 This process is least likely to have its working set resident
 Process with smallest resident set
 This process requires the least future effort to reload
 Largest process
 Obtains the most free frames
 Process with the largest remaining execution window