Critical sections

Introduction to Real-Time Systems
Lecture 5
By: Mohammad Hajibegloo
[email protected]
1
Handling shared resources
Problems caused by mutual exclusion
2
Critical sections
3
Blocking on a semaphore
4
5
Mars Mission
Pathfinder + Sojourner
6
Application structure
7
Schedule with no conflicts
The red section is a critical section.
8
Conflict on a critical section
9
Conflict on a critical section
10
Priority Inversion
Solution
Introduce a concurrency control protocol for accessing
critical sections.
11
Protocol key aspects
Access Rule: Decides whether to block and when.
Progress Rule: Decides how to execute inside a
critical section.
Release Rule: Decides how to order the pending
requests of the blocked tasks.
Other aspects
Analysis: estimates the worst-case blocking times.
Implementation: finds the simplest way to encode the protocol rules.
12
Rules for classical semaphores
• The following rules are normally used for classical semaphores:
• Access Rule (Decides whether to block and when):
• Enter a critical section if the resource is free, block if the resource is
locked.
• Progress Rule (Decides how to execute in a critical section):
• Execute the critical section with the nominal priority.
• Release Rule (Decides how to order pending requests):
• Wake up blocked tasks in FIFO order, or
• Wake up the blocked task with the highest priority.
13
Resource Access Protocols
• Classical semaphores (No protocol)
• Non Preemptive Protocol (NPP)
• Highest Locker Priority (HLP)
• Priority Inheritance Protocol (PIP)
• Priority Ceiling Protocol (PCP)
• Stack Resource Policy (SRP)
14
Assumption
• Critical sections are correctly accessed by tasks:
15
Non Preemptive Protocol
• Access Rule: A task never blocks at the entrance of a
critical section (preemption is forbidden in critical sections);
it can only block at its activation time.
• Implementation: when a task enters a CS, its priority is
raised to the maximum value.
• Progress Rule: Disable preemption when executing
inside a critical section.
• Release Rule: At exit, enable preemption so that the
resource is assigned to the pending task with the
highest priority.
16
Non Preemptive Protocol
• Preemption is forbidden in critical sections.
• Implementation: when a task enters a CS, its
priority is raised to the maximum value.
Advantages:
• simplicity
• prevents deadlocks
Problems:
• high-priority tasks that do not use the CS may also block
17
Conflict on critical section
(using classical semaphores)
18
Schedule with NPP
19
NPP: implementation notes
• Each task ti must be assigned two priorities:
• a nominal priority Pi (fixed) assigned by the application
developer;
• a dynamic priority pi (initialized to Pi) used to schedule
the task and affected by the protocol.
• Then, the protocol can be implemented by changing the
behavior of the wait and signal primitives:
20
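The modified wait/signal behavior can be sketched as follows (a minimal model, not an RTOS implementation; the names Task, npp_wait, npp_signal and PRIO_MAX are illustrative, and higher number means higher priority):

```python
# Sketch of NPP: entering a critical section raises the task's dynamic
# priority to the maximum value, so no task can preempt it inside the CS.
# Task, npp_wait, npp_signal and PRIO_MAX are illustrative names.

PRIO_MAX = 255  # assumed highest priority in the system

class Task:
    def __init__(self, nominal_priority):
        self.P = nominal_priority  # nominal priority (fixed)
        self.p = nominal_priority  # dynamic priority (used by the scheduler)

def npp_wait(task, sem):
    # No blocking test is needed: the resource is always free here,
    # because no task can be preempted while holding it.
    task.p = PRIO_MAX              # disable preemption

def npp_signal(task, sem):
    task.p = task.P                # re-enable preemption at the nominal priority

t = Task(3)
npp_wait(t, "R")
assert t.p == PRIO_MAX  # runs non-preemptively inside the CS
npp_signal(t, "R")
assert t.p == 3
```

Note how wait never blocks, which is why no semaphore queue is needed.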
Non Preemptive Protocol
ADVANTAGES: simplicity and efficiency.
• Semaphore queues are not needed, because tasks
never block on a wait(s).
• Each task can block at most on a single critical section.
• It prevents deadlocks and allows stack sharing.
• It is transparent to the programmer.
PROBLEMS:
1. Tasks may block even if they do not use resources.
2. Since tasks are blocked at activation, blocking could be unnecessary
(pessimistic assumption).
21
NPP: problem 1
22
NPP: problem 2
A task could block even if not accessing a critical section:
23
Highest Locker Priority
• Access Rule: A task never blocks at the entrance of a critical
section, but at its activation time.
• Progress Rule: Inside resource R, a task executes at the
highest priority of the tasks that use R.
• Release Rule: At exit, the dynamic priority of the task is
reset to its nominal priority Pi
A task in a CS gets the highest priority among the tasks that use it.
24
Schedule with HLP
25
HLP: implementation notes
• Each task ti is assigned a nominal priority Pi and a dynamic
priority pi
• Each semaphore S is assigned a resource ceiling C(S):
• Then, the protocol can be implemented by changing the
behavior of the wait and signal primitives:
• Note: HLP is also known as Immediate Priority Ceiling (IPC).
26
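The ceiling computation and the modified primitives can be sketched as follows (illustrative names; a numeric scale where a higher number means a higher priority):

```python
# Sketch of HLP: each semaphore S has a static ceiling C(S), the highest
# nominal priority among the tasks using S; a task entering the CS runs
# immediately at that ceiling. All names are illustrative.

def resource_ceiling(user_priorities):
    """C(S) = max nominal priority of the tasks that use S."""
    return max(user_priorities)

class Task:
    def __init__(self, P):
        self.P = P  # nominal priority
        self.p = P  # dynamic priority

def hlp_wait(task, ceiling):
    task.p = max(task.p, ceiling)  # jump to the resource ceiling

def hlp_signal(task):
    task.p = task.P                # reset to the nominal priority

C_S = resource_ceiling([2, 5, 7])  # S is shared by tasks with P = 2, 5, 7
t = Task(2)
hlp_wait(t, C_S)
assert t.p == 7  # executes at the ceiling inside the CS
hlp_signal(t)
assert t.p == 2
```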
Highest Locker Priority
• ADVANTAGES: simplicity and efficiency.
• Semaphore queues are not needed, because tasks never
block on a wait(s).
• Each task can block at most on a single critical section.
• It prevents deadlocks.
• It allows stack sharing.
PROBLEMS:
• Since tasks are blocked at activation, blocking could be
unnecessary (same pessimism as for NPP).
• It is not transparent to programmers (due to ceilings).
27
Priority Inheritance Protocol
• Access Rule: A task blocks at the entrance of a critical
section if the resource is locked.
• Progress Rule: Inside resource R, a task executes with
the highest priority of the tasks blocked on R.
• Release Rule: At exit, the dynamic priority of the task
is reset to its nominal priority Pi
• A task in a CS increases its priority only if it blocks other tasks.
• A task in a CS inherits the highest priority among those tasks
it blocks.
PCS = max { Pk | tk blocked on CS }
28
Schedule with PIP
29
PIP: types of blocking
• Direct blocking
A task blocks on a locked semaphore
• Push-through blocking
A task blocks because a lower priority task
inherited a higher priority.
Blocking:
a delay caused by a lower priority task
30
PIP: implementation notes
• Inside a resource R, the dynamic priority pi is set to
pi = max( Pi, max{ Pk | tk blocked on R } )
31
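The inheritance rule above can be sketched as a small helper (inherited_priority is an illustrative name; higher number = higher priority):

```python
# Sketch of the inheritance rule: a task holding R executes at the maximum
# between its nominal priority and the priorities of the tasks blocked on R.
# inherited_priority is an illustrative name.

def inherited_priority(P_holder, blocked_priorities):
    # p_i = max(P_i, max{ P_k : t_k blocked on R })
    return max([P_holder] + list(blocked_priorities))

# A holder with nominal priority 1 blocks tasks with priorities 4 and 9:
assert inherited_priority(1, [4, 9]) == 9
# With no blocked tasks, the priority stays nominal:
assert inherited_priority(1, []) == 1
```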
Identifying blocking resources
• Under PIP, a task ti can be blocked on a semaphore
Sk only if:
1. Sk is directly shared between ti and lower priority
tasks (direct blocking)
OR
2. Sk is shared between tasks with priority lower
than ti and tasks having priority higher than ti
(push-through blocking).
32
Identifying blocking resources
33
Identifying blocking resources
34
Bounding blocking times
• A theorem follows from the previous lemmas:
35
Example
36
Schedule with PIP
Example in which t2 is blocked on B3 by push-through
37
Remarks on PIP
ADVANTAGES
• It is transparent to the programmer.
• It bounds priority inversion.
• It removes the pessimism of NPP and HLP (a task is blocked only
when really needed).
PROBLEMS
• More complex to implement (especially to support nested critical
sections).
• It is prone to chained blocking.
• It does not avoid deadlocks.
38
Chained blocking with PIP
Note: ti can be blocked at most once by each lower
priority task
39
Priority Ceiling Protocol
• Access Rule: A task can access a resource only if it
passes the PCP access test.
• Progress Rule: Inside resource R, a task executes
with the highest priority of the tasks blocked on R.
• Release Rule: At exit, the dynamic priority of the
task is reset to its nominal priority Pi
NOTE: PCP can be viewed as PIP + access test
40
Priority Ceiling Protocol
Can be viewed as PIP + access test.
• A task can enter a CS only if the CS is free and there is
no risk of chained blocking.
To prevent chained blocking, a task may stop
at the entrance of a free CS (ceiling blocking).
41
Avoiding chained blocking
42
Resource Ceilings
• To keep track of resource usage by high-priority tasks,
each semaphore sk is assigned a resource ceiling:
C(sk) = max { Pi | ti uses sk }
• Then a task ti can enter a critical section only if its
priority is higher than the maximum ceiling of the
locked semaphores:
Pi > max { C(sk) | sk locked by tasks other than ti }
43
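The PCP access test can be sketched as follows (pcp_can_enter is an illustrative name; higher number = higher priority):

```python
# Sketch of the PCP access test: a task may lock a (free) semaphore only if
# its priority is strictly higher than the maximum ceiling among semaphores
# currently locked by other tasks.

def pcp_can_enter(p_i, locked_ceilings):
    if not locked_ceilings:      # nothing is locked: access granted
        return True
    return p_i > max(locked_ceilings)

# Other tasks hold semaphores with ceilings 5 and 3:
assert pcp_can_enter(6, [5, 3]) is True   # 6 > 5: may enter
assert pcp_can_enter(5, [5, 3]) is False  # ceiling blocking, even on a free CS
assert pcp_can_enter(2, []) is True
```

The second case shows ceiling blocking: the task stops at the entrance of a free CS because entering would risk chained blocking.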
Schedule with PCP
44
PCP: properties
Theorem 1
Under PCP, a task can block at most on a single critical
section.
Theorem 2
PCP prevents chained blocking.
Theorem 3
PCP prevents deadlocks.
45
Typical deadlock
It can only occur with nested critical sections:
46
PCP: deadlock avoidance
It can only occur with nested critical sections:
47
Remarks on PCP
ADVANTAGES
• Blocking is reduced to only one CS
• It avoids deadlocks when using nested critical sections.
PROBLEMS:
• It is complex to implement (like PIP).
• It can create unnecessary blocking (it is pessimistic like HLP).
• It is not transparent to the programmer: resource ceilings must
be specified in the source code.
48
Guarantee with resource constraints
• We select a scheduling algorithm and a resource
access protocol.
• We compute the maximum blocking times (Bi) for
each task.
• We perform the guarantee test including the
blocking terms.
49
Guarantee with RM (D = T)
50
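Assuming the classical Liu and Layland utilization bound extended with blocking terms (for every task i, the utilization of tasks 1..i plus Bi/Ti must not exceed i(2^(1/i) − 1)), the guarantee test can be sketched as:

```python
# Sketch of the utilization-based RM test with blocking terms.
# tasks: (C, T, B) tuples sorted by decreasing priority (increasing T).
# Names are illustrative.

def rm_guarantee_with_blocking(tasks):
    for i in range(len(tasks)):
        util = sum(C / T for C, T, _ in tasks[: i + 1])
        bound = (i + 1) * (2 ** (1 / (i + 1)) - 1)
        if util + tasks[i][2] / tasks[i][1] > bound:
            return False
    return True

# Two tasks: (C=1, T=4, B=1) and (C=2, T=8, B=0) pass the test:
assert rm_guarantee_with_blocking([(1, 4, 1), (2, 8, 0)]) is True
```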
Guarantee with RM (D <= T)
• Response time analysis
51
Resource Sharing under EDF
• The protocols analyzed so far were originally
developed for fixed-priority scheduling schemes.
However:
• NPP (Non Preemptive Protocol)
• can also be used under EDF
• PIP (Priority Inheritance Protocol)
• has been extended under EDF by Spuri (1997).
• PCP (Priority Ceiling Protocol)
• has been extended under EDF by Chen and Lin (1990).
• In 1990, Baker proposed a new access protocol, called
Stack Resource Policy (SRP)
• that works both under fixed priorities and EDF.
52
Stack Resource Policy [Baker 1990]
• This protocol satisfies the same properties as PCP:
• it avoids unbounded priority inversion;
• it prevents deadlocks and chained blocking;
• each task can be blocked at most once;
• In addition:
• it allows using multi-unit resources;
• it can be used under fixed and dynamic priorities;
• it allows tasks to share the same stack space.
53
Stack Resource Policy
• Each resource Rk is characterized by:
• a maximum number of units Nk
• a current number of available units nk
• For each task ti, we have to specify
• A priority pi (static or dynamic):
• RM: pi ∝ 1/Ti
• DM: pi ∝ 1/Di
• EDF: pi ∝ 1/di
• A preemption level πi ∝ 1/Di
• A set of resource requirements:
• ∀Rk, µi(Rk) specifies how many units of Rk are used by ti
54
Preemption Level: πi
• The preemption level is :
• a static parameter
• assigned to a task at its creation time and associated
with all instances of that task.
• a task τa can preempt another task τb only if πa > πb
• The preemption level is used to predict potential
blocking.
• if τa arrives after τb and τa has higher priority than
τb, then τa must have a higher preemption level
than τb.
55
Preemption Level
πi ∝ 1/Di
• Note that, under EDF, a task ti can preempt
another task tk only if πi > πk (that is, if Di < Dk).
• If D1 < D2, we define π1 = 2 and π2 = 1: t1 can preempt t2.
• No preemption can occur if πi <= πk (that is, if Di >= Dk).
• If D2 < D1, we define π1 = 1 and π2 = 2: t1 can never preempt t2.
56
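The assignment above can be sketched as a small helper (preemption_levels is an illustrative name):

```python
# Sketch of preemption-level assignment (level proportional to 1/Di):
# the task with the shortest relative deadline gets the highest level.

def preemption_levels(deadlines):
    """Return one integer level per task; shorter deadline -> higher level."""
    order = sorted(range(len(deadlines)), key=lambda i: -deadlines[i])
    levels = [0] * len(deadlines)
    for level, i in enumerate(order, start=1):
        levels[i] = level
    return levels

# D1 = 5 < D2 = 10 gives level 2 for t1 and level 1 for t2,
# so t1 can preempt t2:
assert preemption_levels([5, 10]) == [2, 1]
```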
Stack Resource Policy
• Each resource Rk:
• specifies the total number of units available: N(Rk)
• keeps track of the currently available units: n(Rk)
• is assigned a dynamic ceiling CR(nk), equal to the highest
preemption level of the tasks that could be blocked on Rk
• Note :
• CR(nk) increases only when a resource is locked
• CR(nk) decreases only when a resource is released
57
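The dynamic ceiling can be sketched as follows (srp_ceiling is an illustrative name; 0 stands for "no task can be blocked"):

```python
# Sketch of the dynamic ceiling CR(nk): the highest preemption level among
# tasks whose requirement on Rk exceeds the units currently available.

def srp_ceiling(tasks, n_available):
    """tasks: list of (preemption_level, units_required) pairs for Rk."""
    blocked = [pl for pl, req in tasks if req > n_available]
    return max(blocked, default=0)

# Rk has N = 3 units; tasks with levels 3, 2, 1 need 1, 2, 3 units:
assert srp_ceiling([(3, 1), (2, 2), (1, 3)], 3) == 0  # all units free
assert srp_ceiling([(3, 1), (2, 2), (1, 3)], 1) == 2  # tasks needing > 1 unit may block
```

Locking units makes n(Rk) decrease and the ceiling rise; releasing them does the opposite, matching the note above.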
Stack Resource Policy
• Finally, a system ceiling is defined as:
Πs = max { CR(nk) over all resources Rk }
• SRP preemption rule:
A ready task ti can preempt the executing task texe
if and only if
pi > pexe and πi > Πs
58
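The preemption rule reduces to a two-condition test (srp_can_preempt is an illustrative name):

```python
# Sketch of the SRP preemption rule: a ready task preempts the running one
# only if its priority is higher AND its preemption level exceeds the
# current system ceiling.

def srp_can_preempt(p_ready, p_exec, plevel_ready, system_ceiling):
    return p_ready > p_exec and plevel_ready > system_ceiling

assert srp_can_preempt(10, 5, 3, 2) is True   # both conditions hold
assert srp_can_preempt(10, 5, 2, 2) is False  # held back by the system ceiling
assert srp_can_preempt(4, 5, 3, 2) is False   # lower priority
```

The second case is where SRP moves all blocking to activation time: the task waits before starting, rather than blocking later inside its execution.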
Computing Resource Ceilings
59
Computing Resource Ceilings
60
Schedule with SRP
61
Schedule with PCP
62
SRP Properties
Lemma
If πi > CR(nk), then there exist enough units of Rk
1. to satisfy the requirement of ti, and
2. to satisfy the requirements of all tasks that can
preempt ti.
63
SRP Properties
• Theorem 1
Under SRP, each task can block at most once.
• Consider the following scenario where t1 blocks
twice:
This is not possible: t2 could not preempt t3, because
at time t* its preemption level does not exceed the
system ceiling (π2 ≤ Πs).
64
SRP: deadlock avoidance
65
Comparison
66