MEMORY MANAGEMENT
Prepared By: Dr. Vipul Vekariya


Memory hierarchy
 small amount of fast, expensive memory – cache
 some medium-speed, medium price main memory
 gigabytes of slow, cheap disk storage
Memory manager handles the memory hierarchy

Ideally programmers want memory that is
 large
 fast
 non-volatile
Memory needs to be allocated to ensure a
reasonable supply of ready processes to consume
available processor time
BASIC MEMORY MANAGEMENT

Memory management systems can be divided into
two classes:
1) without swapping processes
2) with swapping processes between memory
and disk
MONOPROGRAMMING WITHOUT SWAPPING
Three simple ways of organizing memory
- an operating system with one user process
MULTIPROGRAMMING WITH FIXED
PARTITIONS

Fixed memory partitions:
 (a) separate input queues for each partition
 (b) a single input queue

MODELING MULTIPROGRAMMING
When multiprogramming is used, CPU utilization can be
improved.
If the average process computes only 20 percent of the time
it sits in memory, then with five processes in memory at once
the CPU should be busy all the time.
That model is too optimistic: suppose instead that a process
spends a fraction p of its time waiting for I/O to complete.
With n processes in memory, the probability that all n
processes are waiting for I/O at once is p^n, so
CPU utilization = 1 - p^n
A plot of CPU utilization as a function of n is
called the degree of multiprogramming.
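The formula above can be checked directly; a minimal sketch:

```python
# CPU utilization model from the slide: with n processes each waiting
# for I/O a fraction p of the time, utilization = 1 - p^n.
def cpu_utilization(p, n):
    return 1 - p ** n

# The 20%-compute process corresponds to p = 0.8; with five such
# processes in memory the CPU is busy only about 67% of the time,
# not 100% as the naive sequential argument would suggest.
```

With p = 0.8, reaching 99% utilization takes about 21 processes in memory, which shows why the naive "five processes is enough" argument fails.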
Degree of multiprogramming
CPU utilization as a function of number of processes in
memory
ANALYSIS OF MULTIPROGRAMMING
SYSTEM PERFORMANCE



Arrival and work requirements of 4 jobs
CPU utilization for 1 – 4 jobs with 80% I/O wait
Sequence of events as jobs arrive and finish
 note: numbers show the amount of CPU time jobs get in each
interval
RELOCATION
When a program is loaded into memory, the actual
(absolute) memory locations are determined
 A process may occupy different partitions which
means different absolute memory locations
during execution

Swapping
 Compaction

Multiple Programs Without Memory
Abstraction
Illustration of the relocation problem.
Tanenbaum, Modern Operating Systems 3 e, (c) 2008 Prentice-Hall, Inc. All rights reserved. 0-13-6006639
ADDRESSES
Logical
 A reference to a memory location independent of the
current assignment of data to memory.
Physical or Absolute
 The absolute address or actual location in main
memory.
REGISTERS USED DURING EXECUTION

Base register
 Starting address for the process
Bounds register
 Ending location of the process
These values are set when the process is loaded
or when the process is swapped in
The value of the base register is added to a
relative address to produce an absolute address
 The resulting address is compared with the value
in the bounds register
 If the address is not within bounds, an interrupt
is generated to the operating system

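The base/bounds check above can be sketched as follows; the register values are made-up examples, not from the slides.

```python
# Base/bounds address translation sketch. BASE and BOUNDS stand in for
# the hardware registers of one loaded process (hypothetical values).
BASE = 0x4000     # starting address for the process (example value)
BOUNDS = 0x5FFF   # ending location of the process (example value)

def to_physical(relative_addr):
    physical = BASE + relative_addr   # base register + relative address
    if physical > BOUNDS:             # compare with the bounds register
        # out of bounds: the hardware raises an interrupt to the OS
        raise MemoryError("bounds violation")
    return physical
```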
Base and Limit Registers
Base and limit registers can be used to give each process a
separate address space.
SWAPPING
Memory allocation changes as
 processes come into memory
 leave memory
Shaded regions are unused memory
DYNAMIC PARTITIONING EXAMPLE
Snapshots of memory under dynamic partitioning (refer to Figure 7.4):
the OS occupies 8M; processes P1 (20M), P2 (14M), P3 (18M), and
P4 (8M) are loaded and swapped out over time, leaving holes of
6M, 6M, and 4M (and initially a 56M free region).
External Fragmentation
 Memory external to all processes is fragmented
 Can be resolved using compaction
 The OS moves processes so that they are contiguous
 Time consuming and wastes CPU time
SWAPPING
 Allocating space for a growing data segment
 Allocating space for a growing stack and data segment
MEMORY MANAGEMENT WITH BIT MAPS


When memory is assigned dynamically, the OS must manage it.
There are two ways to keep track of memory usage:
1) bit maps
2) free lists
With a bit map, memory is divided up into allocation units,
as small as a few words or as large as several kilobytes.
Corresponding to each allocation unit is a bit in the bit map,
which is 0 if the unit is free and 1 if it is occupied.
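Allocation against such a bit map amounts to scanning for a run of zero bits; a sketch, using a list of ints as an illustrative stand-in for the real bit map:

```python
# Bit-map allocation sketch: one entry per allocation unit,
# 0 = free, 1 = occupied.
def bitmap_allocate(bitmap, units):
    """Mark the first run of `units` free units occupied and return
    its starting index, or None if no large-enough run exists."""
    run_start = run_len = 0
    for i, bit in enumerate(bitmap):
        if bit == 1:
            run_len = 0                    # run broken by an occupied unit
            continue
        if run_len == 0:
            run_start = i                  # a new run of free units begins
        run_len += 1
        if run_len == units:
            for j in range(run_start, run_start + units):
                bitmap[j] = 1              # mark the run occupied
            return run_start
    return None
```

The scan cost is the main drawback of bit maps noted in the literature: finding a run of k free units may require searching the whole map.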
Part of memory with 5 processes, 3 holes
 tick marks show allocation units
 shaded regions are free
 Corresponding bit map
 Same information as a list

MEMORY MANAGEMENT WITH LINKED
LISTS
Four neighbor combinations for the terminating
process X
DYNAMIC PARTITIONING

First-fit algorithm
Scans the list of segments from the beginning and chooses
the first available block that is large enough
 The hole is then broken up into two pieces:
 one for the process and one for the hole (unused memory)
 Fastest
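First fit over a free list can be sketched as follows; representing holes as (start, size) pairs in address order is an illustrative simplification.

```python
# First-fit allocation sketch: take the first hole large enough and
# break it into the allocated piece plus a smaller remaining hole.
def first_fit(holes, request):
    for i, (start, size) in enumerate(holes):
        if size >= request:                        # first block big enough
            if size == request:
                holes.pop(i)                       # hole consumed exactly
            else:
                holes[i] = (start + request, size - request)
            return start                           # allocated block start
    return None                                    # no hole is big enough
```

Best fit differs only in scanning the whole list for the closest-sized hole; next fit resumes the scan from the last placement.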
DYNAMIC PARTITIONING

Best-fit algorithm
Chooses the block that is closest in size to the request
 Worst performer overall
 Since the smallest adequate block is found for the process, the
smallest amount of fragmentation is left
 Memory compaction must be done more often

DYNAMIC PARTITIONING

Next-fit
Scans memory from the location of the last placement
 More often allocate a block of memory at the end of
memory where the largest block is found
 The largest block of memory is broken up into
smaller blocks
 Compaction is required to obtain a large block at the
end of memory

WORST FIT
 Always take the largest available hole
 So that the hole broken off will be big enough to be useful
 Worst fit turns out not to be a very good idea
QUICK FIT
 Maintains separate lists for some of the more
common sizes requested
 For example 4 KB, 8 KB, 12 KB
ALLOCATION
VIRTUAL MEMORY
Split the program into pieces, called overlays.
 The overlays were kept on disk and swapped in and out of
memory by the operating system, dynamically, as needed.
 The method that was eventually devised has come to be known
as virtual memory.
 The operating system keeps the parts of the
program currently in use in main memory and the rest
on the disk.
 Virtual memory can also work in a multiprogramming
system.
PAGING
Most virtual memory systems use a technique called
paging.
Program-generated addresses are called virtual addresses and
form the virtual address space.
HOW MAPPING WORKS?
The virtual address space is divided up into units
called pages.
 The corresponding units in physical memory are
called page frames.
 Pages and page frames are always the same size.
 Here we take 4 KB in the example, but real page sizes
range from 512 bytes to 64 KB.
 With 64 KB of virtual address space and 32 KB of
physical memory,
 we get 16 virtual pages and 8 page frames.
 Transfers between RAM and disk are always in units
of a page.

PAGING
MOV REG, 0
 Virtual address 0 is in page 0, which maps to page
frame 2 (physical addresses 8192 to 12287)
MOV REG, 8192
 Virtual address 8192 is in page 2, which maps to page
frame 6 (physical addresses 24576 to 28671)
MOV REG, 20500
 Virtual page 5, offset 20, maps to
12288 + 20 = 12308
MOV REG, 20500
 20500 = 0101 0000 0001 0100 in binary
 a 12-bit offset and a 4-bit page number
 The page number bits are 0101, so the page number is 5
 Page number 5 is mapped to frame 011 (3)
 So the final physical address is
 011 0000 0001 0100
 which is 12308 in decimal
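The arithmetic above can be verified with a small sketch, assuming 4 KB pages and the page-to-frame mappings implied by the three MOV examples (page 0 → frame 2, page 2 → frame 6, page 5 → frame 3).

```python
# Virtual-to-physical translation for the worked examples: split the
# address into a 4-bit page number and a 12-bit offset, then
# substitute the frame number from the page table.
PAGE_TABLE = {0: 2, 2: 6, 5: 3}   # page -> frame, from the examples

def translate(vaddr, offset_bits=12):
    page = vaddr >> offset_bits                  # high bits: page number
    offset = vaddr & ((1 << offset_bits) - 1)    # low 12 bits: offset
    frame = PAGE_TABLE[page]                     # unmapped page would fault
    return (frame << offset_bits) | offset
```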
PAGE FAULT
What happens if the program tries to use an
unmapped page?
 MOV REG, 32780
 which is within virtual page 8
 The MMU notices that the page is unmapped
 and causes the CPU to trap to the OS
 This trap is called a page fault
 A present/absent bit is used to keep track of
which pages are physically present in memory
 The page number is used as an index into the page table
POWER OF 2
2^8 = 256: the number of values represented by the 8 bits in
a byte.
2^10 = 1,024: the binary approximation of the kilo- (1,000)
multiplier, which causes a change of prefix. For example:
1,024 bytes = 1 kilobyte (or kibibyte).
2^12 = 4,096.
2^20 = 1,048,576: the binary approximation of the mega-
(1,000,000) multiplier. For example:
1,048,576 bytes = 1 megabyte (or mebibyte).
2^30 = 1,073,741,824: the binary approximation of the giga-
(1,000,000,000) multiplier. For example:
1,073,741,824 bytes = 1 gigabyte (or gibibyte).
WHY PAGE SIZE IS POWER OF 2?
Here we choose a page size of 4 KB, a power of 2 (2^12).
We have seen the example of the virtual address 8196
(0010 0000 0000 0100) being mapped by the MMU.
The incoming 16-bit virtual address is split into a 4-bit page
number and a 12-bit offset.
With a 4-bit page number, we can have 16 pages.
With a 12-bit offset, we can address all 4,096 bytes within a page.
The page number is used as an index into the page table.
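This is why a power-of-2 page size matters in practice: the split into page number and offset needs only a shift and a mask, not a division. A sketch with the 4 KB (2^12) pages of the example:

```python
# Splitting a virtual address with shift and mask.
OFFSET_BITS = 12
OFFSET_MASK = (1 << OFFSET_BITS) - 1       # 0xFFF: the low 12 bits

addr = 8196                                 # the example virtual address
page = addr >> OFFSET_BITS                  # page number
offset = addr & OFFSET_MASK                 # byte offset within the page
```

The shift/mask pair produces exactly the same (page, offset) pair as a division by the page size, but without division hardware on the critical path.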
ADDRESS TRANSLATION
PAGE TABLES
Internal operation of the MMU with 16 4-KB pages
PAGE TABLE

The operating system maintains a page table for
each process
 It contains the frame location for each page in the
process
 A memory address consists of a page number and an offset
within the page
PROCESSES AND FRAMES
Frame contents (one page per frame):
A.0, A.1, A.2, A.3, D.0, B.0, D.1, B.1,
D.2, B.2, C.0, C.1, C.2, C.3, D.3, D.4
SPEEDING UP PAGING
Mapping from virtual address to physical address
must be fast.
 If the virtual address space is large, the page
table will be large.
TRANSLATION LOOKASIDE
BUFFER

Each virtual memory reference can cause two
physical memory accesses:
 one to fetch the page table entry
 one to fetch the data
To overcome this problem, a high-speed cache is
set up for page table entries,
 called a Translation Lookaside Buffer (TLB)
 It contains the page table entries that have been most
recently used
TLBS – TRANSLATION LOOK ASIDE
BUFFERS
A TLB to speed up paging
TLB OPERATION

Given a virtual address, the processor examines the TLB
 If the page table entry is present (TLB hit),
 the frame number is retrieved and the real address is
formed
 If the page table entry is not found in the TLB (TLB miss),
 the page number is used to index the process page
table
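The hit/miss sequence above can be sketched as a lookup function; the page table contents here are made-up illustrative values, not from the slides.

```python
# TLB-then-page-table lookup sketch.
PAGE_TABLE = {0: 2, 1: 7, 2: 6, 5: 3}   # page number -> frame (hypothetical)
tlb = {}                                 # most recently used entries

def lookup(page):
    if page in tlb:                      # TLB hit:
        return tlb[page], "hit"          # frame retrieved, real address formed
    frame = PAGE_TABLE[page]             # TLB miss: index the page table
    tlb[page] = frame                    # update the TLB with the new entry
    return frame, "miss"
```

A real TLB is small and fully associative with a hardware eviction policy; the unbounded dict here sidesteps that detail to show only the lookup order.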
LOOKING INTO THE
PROCESS PAGE TABLE

First checks if the page is already in main memory
 If not in main memory, a page fault is issued
 The TLB is updated to include the new page
entry
TRANSLATION LOOKASIDE
BUFFER
TWO-LEVEL
HIERARCHICAL PAGE TABLE
ADDRESS TRANSLATION FOR
HIERARCHICAL PAGE TABLE
INVERTED PAGE TABLE
Used on the PowerPC, UltraSPARC, and IA-64
architectures
 Page number portion of a virtual address is
mapped into a hash value
 Hash value points to inverted page table
 Fixed proportion of real memory is required for
the tables regardless of the number of processes

INVERTED PAGE TABLE
STRUCTURE OF A PAGE TABLE ENTRY
FETCH POLICY
Determines when a page should be brought into
memory
 Two main types:

Demand Paging
 Prepaging

DEMAND PAGING
AND PREPAGING

Demand paging
only brings pages into main memory when a
reference is made to a location on the page
 Many page faults when process first started


Prepaging
 brings in more pages than needed
 More efficient to bring in pages that reside
contiguously on the disk
 Don't confuse with "swapping"
REPLACEMENT POLICY

When all of the frames in main memory are
occupied and it is necessary to bring in a new
page, the replacement policy determines which
page currently in memory is to be replaced.
BUT…
Which page is replaced?
 Page removed should be the page least likely to
be referenced in the near future

How is that determined?
 Principle of locality again


Most policies predict the future behavior on the
basis of past behavior
BASIC REPLACEMENT
ALGORITHMS

There are certain basic algorithms that are used
for the selection of a page to replace, they include
Optimal
 Least recently used (LRU)
 First-in-first-out (FIFO)
 Clock


Examples
EXAMPLES

An example of the implementation of these
policies will use the page address stream, formed by
executing a program:
 2 3 2 1 5 2 4 5 3 2 5 2
 This means that the first page referenced is 2,
 the second page referenced is 3,
 and so on.
OPTIMAL POLICY
Selects for replacement that page for which the
time to the next reference is the longest
 But it is impossible to have perfect knowledge of
future events.

OPTIMAL POLICY
EXAMPLE

The optimal policy produces three page faults after
the frame allocation has been filled.
LEAST RECENTLY
USED (LRU)
Replaces the page that has not been referenced
for the longest time
 By the principle of locality, this should be the
page least likely to be referenced in the near
future
 Difficult to implement

One approach is to tag each page with the time of last
reference.
 This requires a great deal of overhead.

LRU EXAMPLE

The LRU policy does nearly as well as the optimal
policy.

In this example, there are four page faults
FIRST-IN, FIRST-OUT (FIFO)
Treats page frames allocated to a process as a
circular buffer
 Pages are removed in round-robin style



Simplest replacement policy to implement
Page that has been in memory the longest is
replaced

But such a page may be needed again very soon if it
hasn't truly fallen out of use
FIFO EXAMPLE

The FIFO policy results in six page faults.

Note that LRU recognizes that pages 2 and 5 are referenced
more frequently than other pages, whereas FIFO does not.
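The fault counts quoted for the optimal policy (three), LRU (four), and FIFO (six) can be reproduced with a short simulation. This is a sketch, assuming three page frames (the frame count is not stated on these slides, but it is the allocation that yields those counts); as in the text, compulsory faults during the initial fill are not counted.

```python
# Page-replacement simulation over the stream from the slides.
STREAM = [2, 3, 2, 1, 5, 2, 4, 5, 3, 2, 5, 2]

def fifo(stream, frames):
    mem, faults = [], 0               # mem kept in arrival order
    for p in stream:
        if p in mem:
            continue
        if len(mem) == frames:
            faults += 1
            mem.pop(0)                # evict the page resident longest
        mem.append(p)
    return faults

def lru(stream, frames):
    mem, faults = [], 0               # mem kept in recency order, MRU last
    for p in stream:
        if p in mem:
            mem.remove(p)
            mem.append(p)             # refresh recency on a hit
            continue
        if len(mem) == frames:
            faults += 1
            mem.pop(0)                # evict the least recently used page
        mem.append(p)
    return faults

def opt(stream, frames):
    mem, faults = [], 0
    for i, p in enumerate(stream):
        if p in mem:
            continue
        if len(mem) == frames:
            faults += 1
            future = stream[i + 1:]
            # evict the page whose next reference is farthest away
            # (pages never referenced again sort last of all)
            victim = max(mem, key=lambda q: future.index(q)
                         if q in future else len(future))
            mem.remove(victim)
        mem.append(p)
    return faults
```

With three frames this reproduces the counts in the text: optimal 3, LRU 4, FIFO 6.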
CLOCK POLICY
Uses an additional bit called a "use bit"
 When a page is first loaded in memory or
referenced, the use bit is set to 1
 When it is time to replace a page, the OS scans
the frames, flipping each use bit of 1 to 0
 The first frame encountered with the use bit
already set to 0 is replaced.

CLOCK POLICY
CLOCK POLICY EXAMPLE

Note that the clock policy is adept at protecting frames
2 and 5 from replacement.
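The clock policy can be added to the same simulation; this sketch again assumes three frames and the page stream 2 3 2 1 5 2 4 5 3 2 5 2, counting faults only after the frames have filled. In this simulation it produces five faults, and pages 2 and 5 are indeed still resident at the end.

```python
# Clock (second-chance) replacement sketch: frames in a circular
# buffer, each with a use bit; the hand skips frames whose use bit
# is 1, clearing it as it passes.
def clock(stream, frames):
    mem, use, hand, faults = [None] * frames, [0] * frames, 0, 0
    for p in stream:
        if p in mem:
            use[mem.index(p)] = 1           # referenced: set the use bit
            continue
        if None in mem:                      # free frame during initial fill
            i = mem.index(None)
            mem[i], use[i] = p, 1
            hand = (i + 1) % frames
            continue
        while use[hand]:                     # scan, flipping 1s to 0
            use[hand] = 0
            hand = (hand + 1) % frames
        mem[hand], use[hand] = p, 1          # replace the frame with use bit 0
        hand = (hand + 1) % frames
        faults += 1
    return faults, mem
```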
COMBINED EXAMPLES
THRASHING
 A state in which the system spends most of its
time swapping pieces rather than executing
instructions.
 To avoid this, the operating system tries to guess
which pieces are least likely to be used in the
near future.
 The guess is based on recent history.
SEGMENTATION

Segmentation allows the programmer to view
memory as consisting of multiple address spaces
or segments.





May be unequal, dynamic size
Simplifies handling of growing data structures
Allows programs to be altered and recompiled
independently
Lends itself to sharing data among processes
Lends itself to protection
Segment no   Segment             Size
0            Main Program        1000
1            SQRT Sub program     700
2            Data Area            800
3            Sub Program          900
4            Stack Area           500
             Total               3900
SEGMENT TABLE ENTRY ORGANIZATION
Starting address of the corresponding segment in main
memory (base address)
 Each entry contains the length of the segment
 A bit is needed to determine if segment is already
in main memory
 Another bit is needed to determine if the segment
has been modified since it was loaded in main
memory

SEGMENT PAGE TABLE

Segment no   Size   Base address   Access rights
0            1000   1000           -
1             700   3500           -
2             800   4200           -
3             900   6100           -
4             500   2500           -
Total        3900
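Address translation against this segment table can be sketched directly: a logical address is a (segment number, offset) pair, the offset is checked against the segment's length, and the base address is added.

```python
# Segmented address translation sketch, using the table above
# as {segment no: (size, base address)}.
SEGMENT_TABLE = {
    0: (1000, 1000),
    1: (700, 3500),
    2: (800, 4200),
    3: (900, 6100),
    4: (500, 2500),
}

def seg_translate(seg, offset):
    size, base = SEGMENT_TABLE[seg]
    if offset >= size:
        # out-of-range offset: the hardware would trap to the OS
        raise MemoryError("offset beyond segment length")
    return base + offset
```

For example, offset 53 in segment 2 falls at physical address 4200 + 53 = 4253.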
SEGMENT TABLE ENTRIES
ADDRESS TRANSLATION IN
SEGMENTATION
COMBINED PAGING AND SEGMENTATION
Paging is transparent to the programmer
 Segmentation is visible to the programmer
 Each segment is broken into fixed-size pages

COMBINED PAGING AND SEGMENTATION
ADDRESS TRANSLATION