• Number of processes
• OS switches context from A to B
• main() calls fork(), and the child process calls execvp()
• Can a compiler do something bad by adding
privileged instructions?
Lecture 3
Scheduling
How to develop scheduling policy
• What are the key assumptions?
• What metrics are important?
• What basic approaches were used in the
earliest computer systems?
Workload Assumptions
1. Each job runs for the same amount of time.
2. All jobs arrive at the same time.
3. Once started, each job runs to completion.
4. All jobs only use the CPU (i.e., they perform no I/O).
5. The run-time of each job is known.
Scheduling Metrics
• Performance: turnaround time
T_turnaround = T_completion − T_arrival
• Since T_arrival is 0 under our assumptions, T_turnaround = T_completion
First In, First Out
• Works well under our assumptions
[Figure: FIFO timeline — jobs run back-to-back in arrival order]
• Relax “Each job runs for the same amount of time”
• Convoy effect
[Figure: FIFO timeline with one long job ahead of short jobs — the convoy effect]
Shortest Job First
• SJF would be optimal
[Figure: SJF timeline — the shortest jobs are scheduled first]
• Relax “All jobs arrive at the same time.”
[Figure: B and C arrive while A is already running; non-preemptive SJF still runs A to completion, then B, then C]
Shortest Time-to-Completion First
• STCF is preemptive, aka PSJF
• “Once started, each job runs to completion” relaxed
[Figure: when B and C arrive, STCF preempts A, runs B then C to completion, then resumes A]
Scheduling Metrics
• Performance: turnaround time
T_turnaround = T_completion − T_arrival
• Since T_arrival is 0 under our assumptions, T_turnaround = T_completion
• Performance: response time
T_response = T_firstrun − T_arrival
Turnaround time or response time
• FIFO, SJF, or STCF
[Figure: a run-to-completion schedule — each job finishes before the next starts]
• Round robin
[Figure: a round-robin schedule — jobs alternate in short time slices]
Conflicting criteria
• Minimizing response time
• requires more context switches when there are many processes
• incurs more scheduling overhead
• decreases system throughput
• increases turnaround time
• Scheduling algorithm depends on nature of system
• Batch vs. interactive
• Designing a generic AND efficient scheduler is difficult
Incorporating I/O
• Poor use of resources
[Figure: A alternates CPU bursts and disk I/O; the CPU sits idle during A's I/O while B waits]
• Overlap allows better use of resources
[Figure: B runs on the CPU whenever A is doing I/O, overlapping computation with I/O]
Workload Assumptions
1. Each job runs for the same amount of time.
2. All jobs arrive at the same time.
3. Once started, each job runs to completion.
4. All jobs only use the CPU (i.e., they perform no I/O).
5. The run-time of each job is known.
Multi-level feedback queue
• Goals
• Optimize turnaround time without a priori knowledge of job length
• Optimize response time for interactive users
[Figure: priority queues Q6 (highest) down to Q1, with jobs A, C, D, and B at different levels]
• Rule 1: If Priority(A) > Priority(B)
A runs (B doesn’t).
• Rule 2: If Priority(A) = Priority(B)
A & B run in RR.
How to Change Priority
• Rule 3: When a job enters the system, it is placed at
the highest priority (the topmost queue).
• Rule 4a: If a job uses up an entire time slice while
running, its priority is reduced (i.e., it moves down
one queue).
• Rule 4b: If a job gives up the CPU before the time
slice is up, it stays at the same priority level.
Example
[Figure: jobs A and B move down from Q2 to Q1 to Q0 as they use up their time slices]
Example with I/O
[Figure: I/O-bound job B repeatedly gives up the CPU before its slice expires and keeps its high priority, while CPU-bound job A runs at a lower queue]
• Problems:
• Starvation
• Program can game the scheduler
• Program may change its behavior over time
Priority Boost
[Figures: without a boost, a long-running job starves at Q0 once interactive jobs arrive; with a periodic boost, it returns to Q2 and keeps making progress]
• Rule 5: After some time period S, move all the jobs
in the system to the topmost queue.
Gaming the scheduler
[Figures: a job that yields just before each slice expires keeps its high priority and starves the other job; with the better accounting of Rule 4, it is demoted once its allotment is used]
Better Accounting
• Rule 4a: If a job uses up an entire time slice while
running, its priority is reduced (i.e., it moves down
one queue).
• Rule 4b: If a job gives up the CPU before the time
slice is up, it stays at the same priority level.
• Rule 4: Once a job uses up its time allotment at a
given level (regardless of how many times it has
given up the CPU), its priority is reduced (i.e., it
moves down one queue).
Tuning MLFQ And Other Issues
• How to parameterize?
• The system administrator configures it
• Default values are available: on Solaris, there are
• 60 queues
• time slices from 20 milliseconds (highest priority) to hundreds of milliseconds (lowest)
• priority boosts around every 1 second or so
• Users provide hints: e.g., the command-line utility nice
Workload Assumptions
1. Each job runs for the same amount of time.
2. All jobs arrive at the same time.
3. Once started, each job runs to completion.
4. All jobs only use the CPU (i.e., they perform no I/O).
5. The run-time of each job is known.
MLFQ rules
• Rule 1: If Priority(A) > Priority(B), A runs (B doesn’t).
• Rule 2: If Priority(A) = Priority(B), A & B run in RR.
• Rule 3: When a job enters the system, it is placed at the
highest priority (the topmost queue).
• Rule 4: Once a job uses up its time allotment at a given
level (regardless of how many times it has given up the
CPU), its priority is reduced (i.e., it moves down one
queue).
• Rule 5: After some time period S, move all the jobs in
the system to the topmost queue.
Scheduling Metrics
• Performance: turnaround time
T_turnaround = T_completion − T_arrival
• Since T_arrival is 0 under our assumptions, T_turnaround = T_completion
• Performance: response time
T_response = T_firstrun − T_arrival
• CPU utilization
• Throughput
• Fairness
A proportional-share or
A fair-share scheduler
• Each job obtains a certain percentage of CPU time.
• Lottery scheduling uses tickets
• to represent the share of a resource that a process should receive
• If A holds 75 tickets and B holds 25, then A gets 75% and B gets 25% (probabilistically)
Winning tickets: 63 85 70 39 76 17 29 41 36 39 10 99 68 83 63 62 43 0 49 49
Resulting schedule: A B A A B A A A A A A B A B A A A A A A
• Higher priority => more tickets
Lottery Code
int counter = 0;
int winner = getrandom(0, totaltickets); // random ticket in [0, totaltickets)
node_t *current = head;
while (current) {
    counter += current->tickets;
    if (counter > winner)
        break;
    current = current->next;
}
// current is the winner
Lottery Fairness Study
Ticket currency
• User A holds 100 (global currency)
  -> gives 500 (A's currency) to A1 -> 50 (global currency)
  -> gives 500 (A's currency) to A2 -> 50 (global currency)
• User B holds 100 (global currency)
  -> gives 10 (B's currency) to B1 -> 100 (global currency)
More on Lottery Scheduling
• Ticket transfer
• Ticket inflation
• Compensation ticket
• How to assign tickets?
• Why not deterministic?
Stride Scheduling:
a deterministic fair-share scheduler
• Deterministic, but requires global state
• What happens if a new job enters in the middle? (what pass value should it start with?)
Scheduling
• Workload assumption
• Metrics
• MLFQ
• Lottery Scheduling and stride scheduling
Next
• Work on PA0
• Reading: chapters 12-16
PA0
PA0. 0-1
0. Step 2, `cs-status | head -1 | sed 's/://g'`
Step 6, cs-console, (control-@) OR (control-spacebar)
1. .section .data
   .section .text
   .globl zfunction
   zfunction:
       pushl %ebp
       movl %esp, %ebp
       ...
       leave
       ret
Read http://en.wikibooks.org/wiki/X86_Assembly/GAS_Syntax
In C, we count from 0
PA0. 2, 3, and 5
2. Try “man end” and see what you can get
Use “kprintf” for output
3. Read “Print the address of the top of the runtime stack for whichever process you are
currently in, right before and right after you get
into the printos() function call.” carefully
You can use in-line assembly
Use ebp, esp
5. syscallsummary_start(): should clear all numbers
syscallsummary_stop(): should keep all numbers
Others
• https://vcl.ncsu.edu/help/files-data/where-save-my-files
• Know how to use VirtualBox? Feel free to share