CPS110:
Implementing threads
Landon Cox
Recap and looking ahead
[Layered diagram: Applications on top of the OS, the OS on top of the Hardware.
Where we've been: threads and synchronization primitives, the interface the OS gives applications.
Where we're going: how the OS builds them from what the hardware provides: atomic load-store, interrupt enable/disable, atomic test-and-set.]
Recall, thread interactions
1. Threads can access shared data
E.g., use locks, monitors
What we’ve done so far
2. Threads also share hardware
CPU and memory
For this class, assume uni-processor
Single CPU core: one thread runs at a time
Unrealistic in the multicore era!
Hardware, OS interfaces
[Diagram: the OS multiplexes one physical CPU and memory among applications, giving each job (Job 1, Job 2, Job 3) the illusion of its own "CPU, Mem". The thread lectures up to this point covered the application-OS interface; the remaining thread lectures cover sharing the CPU, and the memory lectures cover sharing memory.]
The play analogy
Process is like a play performance
Program is like the play’s script
One CPU is like a one-man-show
(actor switches between roles)
Threads are like the roles/actors; the address space is like the set (script, props)
Threads that aren’t running
What is a non-running thread?
thread = "stream of executing instructions"
non-running thread = "paused execution"
Blocked/waiting, or suspended but ready
Must save thread’s private state
Leave stack etc. in memory where it lies
Save registers to memory
Reload registers to resume thread
Private vs global thread state
What state is private to each thread?
PC (where actor is in his/her script)
Stack, SP (actor’s mindset)
What state is shared?
Code (like lines of a play)
Global variables, heap
(props on set)
Thread control block (TCB)
The software that manages threads and
schedules/dispatches them is the thread system or “OS”
OS must maintain data to describe each thread
Thread control block (TCB)
Container for non-running thread’s private data
Values of PC, SP, other registers (“context”)
Each thread also has a stack
Other OS data structures (scheduler queues, locks,
waiting lists) reference these TCB objects.
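As a concrete illustration, a TCB might be declared roughly as follows. This is a minimal sketch in C; the field names and the use of ucontext_t are assumptions made for this handout, not the project's actual definition.

    #include <ucontext.h>

    typedef enum { READY, RUNNING, BLOCKED } thread_state_t;

    typedef struct tcb {
        ucontext_t context;      /* saved PC, SP, and other registers ("context") */
        char *stack;             /* base of this thread's private stack           */
        thread_state_t state;    /* ready, running, or blocked                    */
        struct tcb *next;        /* link used by ready queues and waiting lists   */
    } tcb_t;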
Thread control block
[Figure: TCB1, TCB2, and TCB3, each holding a saved PC, SP, and registers, referenced by the ready queue; the address space holds each thread's code and stack. Thread 1 is currently running, so its live PC, SP, and registers are in the CPU rather than in its TCB.]
Thread states
Running
Currently using the CPU
Ready (suspended)
Ready to run when the CPU is next available
Blocked (waiting or sleeping)
Stuck in lock(), wait(), or down()
Switching threads
What needs to happen to switch threads?
1. Thread returns control to OS
For example, via the “yield” call
2. OS chooses next thread to run
3. OS saves state of current thread
To its thread control block
4. OS loads context of next thread
From its thread control block
5. Run the next thread
Project 1:
swapcontext
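A minimal sketch of what steps 1-5 might look like inside yield, built on swapcontext. The tcb_t type, current, and the ready-queue helpers are hypothetical names for this handout, not the project's actual interface.

    #include <ucontext.h>

    typedef struct tcb { ucontext_t context; struct tcb *next; } tcb_t;  /* trimmed-down TCB */

    extern tcb_t *current;                 /* TCB of the running thread       */
    extern tcb_t *ready_dequeue(void);     /* pop a TCB from the ready queue  */
    extern void   ready_enqueue(tcb_t *t); /* push a TCB onto the ready queue */

    void yield(void) {
        tcb_t *old = current;
        tcb_t *next = ready_dequeue();     /* 2. choose the next thread        */
        if (next == NULL)
            return;                        /*    nothing else ready: keep going */
        ready_enqueue(old);                /*    caller stays runnable          */
        current = next;
        /* 3-5. swapcontext saves old's registers (the saved PC will resume just
           after this call) into old->context, then loads next->context and jumps
           to its saved PC, on next's stack. */
        swapcontext(&old->context, &next->context);
        /* we only get here after some other thread later switches back to us */
    }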
1. Thread returns control to OS
How does the thread system get control?
Voluntary internal events
Thread might block inside lock or wait
Thread might call into kernel for service
(system call)
Thread might call yield
Are internal events enough?
1. Thread returns control to OS
Involuntary external events
(events not initiated by the thread)
Hardware interrupts
Transfer control directly to OS interrupt handlers
From CPS 104:
CPU checks for interrupts while executing
Jumps to OS code with interrupt mask set
OS may preempt the running thread (force yield)
when an interrupt gives the OS control of its CPU
Common interrupt: timer interrupt
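User-level thread libraries often stand in for the hardware timer with a periodic signal. A rough sketch of that idea follows; SIGALRM and the 10 ms period are arbitrary choices for illustration, and calling yield straight from a signal handler glosses over real async-signal-safety concerns.

    #include <signal.h>
    #include <string.h>
    #include <sys/time.h>

    extern void yield(void);          /* the voluntary switch sketched above */

    static void timer_handler(int sig) {
        (void)sig;
        yield();                      /* preempt: force the running thread off the CPU */
    }

    /* deliver a "timer interrupt" (SIGALRM) every 10 ms */
    static void start_preemption(void) {
        struct sigaction sa;
        memset(&sa, 0, sizeof sa);
        sa.sa_handler = timer_handler;
        sigemptyset(&sa.sa_mask);
        sigaction(SIGALRM, &sa, NULL);

        struct itimerval period;
        memset(&period, 0, sizeof period);
        period.it_interval.tv_usec = 10000;   /* repeat every 10 ms   */
        period.it_value.tv_usec    = 10000;   /* first firing in 10 ms */
        setitimer(ITIMER_REAL, &period, NULL);
    }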
2. Choosing the next thread
If no ready threads, just spin
Modern CPUs: execute a “halt” instruction
Project 1: exit if no ready threads
Loop switches to thread if one is ready
Many ways to prioritize ready threads
Will discuss a little later in the semester
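One possible implementation of the ready_dequeue/ready_enqueue helpers assumed in the yield sketch above, as a simple FIFO queue. The names and behavior are illustrative only.

    #include <stddef.h>
    #include <ucontext.h>

    typedef struct tcb { ucontext_t context; struct tcb *next; } tcb_t;   /* as before */

    static tcb_t *ready_head = NULL, *ready_tail = NULL;

    /* 2. choose the next thread: FIFO order; returns NULL if nothing is ready
       (the caller may then keep running, spin, halt, or, as in Project 1, exit) */
    tcb_t *ready_dequeue(void) {
        if (ready_head == NULL)
            return NULL;
        tcb_t *t = ready_head;
        ready_head = t->next;
        if (ready_head == NULL)
            ready_tail = NULL;
        t->next = NULL;
        return t;
    }

    void ready_enqueue(tcb_t *t) {
        t->next = NULL;
        if (ready_tail != NULL)
            ready_tail->next = t;
        else
            ready_head = t;
        ready_tail = t;
    }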
3. Saving state of current thread
What needs to be saved?
Registers, PC, SP
What makes this tricky?
Self-referential sequence of actions
Need registers to save state
But you’re trying to save all the registers
Saving the PC is particularly tricky
Saving the PC
Why won’t this work?
address 100: store PC in TCB
address 101: switch to next thread
Returning thread will execute instruction at 100
And just re-execute the switch
Really want to save address 102
4. OS loads the next thread
Where is the thread’s state/context?
Thread control block (in memory)
How to load the registers?
Use load instructions to grab from memory
How to load the stack?
Stack is already in memory, load SP
5. OS runs the next thread
How to resume thread’s execution?
Jump to the saved PC
On whose stack are these steps running?
(Equivalently: who jumps to the saved PC?)
The thread that called yield
(or was interrupted or called lock/wait)
How does this thread run again?
Some other thread must switch to it
Example thread switching
Thread 1
print “start thread 1”
yield ()
print “end thread 1”
Thread 2
print “start thread 2”
yield ()
print "end thread 2"
yield ()
print “start yield (thread %d)”
swapcontext (tcb1, tcb2)
print “end yield (thread %d)”
swapcontext (tcb1, tcb2)
  save regs to tcb1
  load regs from tcb2
  // SP points to tcb2's stack now!
  jump tcb2.pc
  // ...other threads run until one switches back to tcb1...
  // SP must point to tcb1's stack again!
  return
Thread 1 output              Thread 2 output
---------------------------------------------
start thread 1
start yield (thread 1)
                             start thread 2
                             start yield (thread 2)
end yield (thread 1)
end thread 1
                             end yield (thread 2)
                             end thread 2
Note: this assumes no pre-emptions.
If OS is preemptive, then other
interleavings are possible.
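The same interleaving can be reproduced with a small self-contained program built on swapcontext and makecontext. This is a sketch written for this handout, not the project code; error checking is omitted.

    #include <stdio.h>
    #include <stdlib.h>
    #include <ucontext.h>

    #define STACK_SIZE (64 * 1024)

    static ucontext_t ctx1, ctx2, ctx_main;

    /* print around the switch, like the yield() in the example above */
    static void yield_to(ucontext_t *me, ucontext_t *other, int id) {
        printf("start yield (thread %d)\n", id);
        swapcontext(me, other);            /* save my context, load the other's */
        printf("end yield (thread %d)\n", id);
    }

    static void thread1(void) {
        printf("start thread 1\n");
        yield_to(&ctx1, &ctx2, 1);
        printf("end thread 1\n");
        swapcontext(&ctx1, &ctx2);         /* let thread 2 finish; never resumed */
    }

    static void thread2(void) {
        printf("start thread 2\n");
        yield_to(&ctx2, &ctx1, 2);
        printf("end thread 2\n");          /* falls off the end -> uc_link (main) */
    }

    int main(void) {
        getcontext(&ctx1);
        ctx1.uc_stack.ss_sp = malloc(STACK_SIZE);
        ctx1.uc_stack.ss_size = STACK_SIZE;
        ctx1.uc_link = &ctx_main;
        makecontext(&ctx1, thread1, 0);

        getcontext(&ctx2);
        ctx2.uc_stack.ss_sp = malloc(STACK_SIZE);
        ctx2.uc_stack.ss_size = STACK_SIZE;
        ctx2.uc_link = &ctx_main;
        makecontext(&ctx2, thread2, 0);

        swapcontext(&ctx_main, &ctx1);     /* start thread 1; returns when thread 2 ends */
        return 0;
    }

Since everything here is cooperative (no pre-emptions), the program prints the eight lines in exactly the order shown in the table above.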
Thread states (transitions)
Ready -> Running: thread is scheduled
Running -> Ready: thread is pre-empted (or yields)
Running -> Blocked: thread calls lock or wait (or makes an I/O request)
Blocked -> Ready: another thread calls unlock or signal (or the I/O completes)
Creating a new thread
Also called "forking" a thread
Idea: create initial state, put it on the ready queue
1. Allocate, initialize a new TCB
2. Allocate a new stack
3. Make it look like the thread was about to call a function
PC points to the first instruction in the function
SP points to the new stack
Stack contains the arguments passed to the function
Project 1: use makecontext (see the sketch below)
4. Add the thread to the ready queue
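A sketch of thread_create along those lines, using makecontext. The function name, the fixed stack size, and the ready_enqueue helper are assumptions made for this handout; argument passing is omitted (makecontext's extra parameters are how arguments would be placed on the new stack).

    #include <stdlib.h>
    #include <ucontext.h>

    #define STACK_SIZE (64 * 1024)

    typedef struct tcb { ucontext_t context; char *stack; struct tcb *next; } tcb_t;

    extern void ready_enqueue(tcb_t *t);        /* hypothetical ready-queue helper */

    /* "fork" a new thread that will run func() when it is first scheduled */
    tcb_t *thread_create(void (*func)(void)) {
        tcb_t *t = malloc(sizeof *t);           /* 1. allocate, initialize a new TCB */
        t->stack = malloc(STACK_SIZE);          /* 2. allocate a new stack           */

        getcontext(&t->context);                /* 3. make it look like the thread   */
        t->context.uc_stack.ss_sp = t->stack;   /*    was about to call func:        */
        t->context.uc_stack.ss_size = STACK_SIZE;
        t->context.uc_link = NULL;              /*    SP -> new stack                */
        makecontext(&t->context, func, 0);      /*    PC -> func's first instruction */

        ready_enqueue(t);                       /* 4. add the thread to the ready queue */
        return t;
    }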
Creating a new thread
[Timeline diagram: thread_create behaves like a call that returns right away; the parent keeps doing its own work while the new child thread runs its work concurrently.]
Thread join
How can the parent wait for child to finish?
[Timeline diagram: the parent calls thread_create and does its work, the child does its work, and the parent then calls join, which returns only once the child has finished.]
Thread join
Will this work?
Sometimes, assuming
Uni-processor
No pre-emptions
Child runs after parent
child () {
print “child works”
}
parent () {
create child thread
print “parent works”
yield ()
print “parent continues”
}
Never, ever assume these things!
Yield is like momentarily slowing the CPU
The program must work no matter where yields occur (± any yields)
Possible output:                 Another possible output:
parent works                     child works
child works                      parent works
parent continues                 parent continues
Thread join
Will this work?
parent () {
create child thread
lock
print “parent works”
wait
print “parent continues”
unlock
}
child () {
lock
print “child works”
signal
unlock
}
No. Child can call signal first.
Would this work with semaphores?
Yes
No missed signals (increment sem value)
parent works                     child works
child works                      parent works
parent continues                 parent continues
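A sketch of why the semaphore version works: the child's up/V is remembered in the semaphore's count, so it cannot be missed even if it happens before the parent's down/P. POSIX semaphores are used here purely for illustration; Project 1's own primitives would look different, and thread creation is elided.

    #include <semaphore.h>
    #include <stdio.h>

    static sem_t child_done;        /* value 0: "child has not finished yet" */

    void child(void) {
        printf("child works\n");
        sem_post(&child_done);      /* up/V: increments the value even if no one is waiting */
    }

    void parent(void) {
        sem_init(&child_done, 0, 0);
        /* create the child thread here (e.g., thread_create from the earlier sketch) */
        printf("parent works\n");
        sem_wait(&child_done);      /* down/P: returns immediately if the child already posted */
        printf("parent continues\n");
    }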
How can we solve this?
Pair off for a couple of minutes
parent () {

}

child () {

}

Desired outputs:
parent works                     child works
child works                      parent works
parent continues                 parent continues