Mutual Exclusion:
A Centralized Algorithm
a) Process 1 asks the coordinator for permission to enter a critical region. Permission is granted.
b) Process 2 then asks permission to enter the same critical region. The coordinator does not reply.
c) When process 1 exits the critical region, it tells the coordinator, which then replies to 2.
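Below is a minimal sketch of the coordinator's side of this scheme, assuming a hypothetical send(process_id, message) transport primitive; the requesting process simply sends a request and blocks until a grant arrives.

```python
from collections import deque

class Coordinator:
    """Centralized mutual exclusion: grant one process at a time, queue the rest."""

    def __init__(self, send):
        self.send = send          # send(process_id, message) -- assumed transport primitive
        self.holder = None        # process currently inside the critical region
        self.queue = deque()      # processes waiting for permission

    def on_request(self, pid):
        if self.holder is None:
            self.holder = pid
            self.send(pid, "GRANT")   # region is free: grant immediately
        else:
            self.queue.append(pid)    # region is busy: no reply, just queue the request

    def on_release(self, pid):
        assert pid == self.holder
        if self.queue:
            self.holder = self.queue.popleft()
            self.send(self.holder, "GRANT")  # pass permission to the oldest waiter (fairness)
        else:
            self.holder = None
```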
A Centralized Algorithm
• Advantages:
– Guarantees mutual exclusion
– Fair
– Requires only three messages per use of a critical region
(request, grant, release)
• Shortcomings:
– Single point of failure
– If processes normally block after making a request, they cannot
distinguish a dead coordinator from “permission denied”.
– In a large system, a single coordinator becomes a performance
bottleneck.
A Distributed Algorithm
• Assuming there is a total ordering of all events in the system (a sketch of the algorithm follows the list below):
1. When a process wants to enter a critical region, it builds a message containing the name of the critical region it wants to enter, its process number, and the current time. It then sends the message to all other processes.
2. When a process receives a request:
   1. If the receiver is not in the critical region and does not want to enter it, it sends back an OK message to the sender.
   2. If the receiver is already in the critical region, it does not reply. It queues the request.
   3. If the receiver wants to enter the critical region but has not yet done so, it compares the timestamp in the incoming message with the one contained in the message it has sent to everyone. The lowest one wins. If the incoming message is lower, the receiver sends back an OK message. Otherwise, it queues the request.
3. As soon as all permissions are in, the process may enter the critical region. When it exits the critical region, it sends an OK message to all processes on its queue and deletes them all from the queue.
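Below is a minimal sketch of these rules (this is the Ricart–Agrawala scheme), assuming a hypothetical send(dest, msg) transport and a Lamport-style logical clock; requests are ordered by (timestamp, process number) pairs so ties are broken deterministically.

```python
class DistributedMutex:
    """Sketch of the distributed (Ricart-Agrawala) algorithm for one critical region."""

    def __init__(self, pid, peers, send):
        self.pid = pid            # this process's number
        self.peers = peers        # ids of all other processes
        self.send = send          # send(dest, msg) -- assumed transport primitive
        self.clock = 0            # Lamport-style logical clock
        self.state = "RELEASED"   # RELEASED, WANTED, or HELD
        self.my_stamp = None      # (timestamp, pid) of our outstanding request
        self.ok_count = 0
        self.deferred = []        # requests queued while we hold or are ahead in line for the region

    def request_entry(self):
        self.clock += 1
        self.state = "WANTED"
        self.my_stamp = (self.clock, self.pid)
        self.ok_count = 0
        for p in self.peers:
            self.send(p, ("REQUEST", self.my_stamp))

    def on_request(self, sender, stamp):
        self.clock = max(self.clock, stamp[0])   # keep the logical clock roughly in sync
        # Defer (no reply) if we hold the region, or we want it and our request is older.
        if self.state == "HELD" or (self.state == "WANTED" and self.my_stamp < stamp):
            self.deferred.append(sender)
        else:
            self.send(sender, ("OK",))

    def on_ok(self, sender):
        self.ok_count += 1
        if self.ok_count == len(self.peers):     # all permissions are in
            self.state = "HELD"                  # may now enter the critical region

    def release(self):
        self.state = "RELEASED"
        for p in self.deferred:                  # reply to everyone we made wait
            self.send(p, ("OK",))
        self.deferred.clear()
```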
A Distributed Algorithm
a) Two processes want to enter the same critical region at the same moment.
b) Process 0 has the lowest timestamp, so it wins.
c) When process 0 is done, it sends an OK also, so 2 can now enter the critical region.
A Distributed Algorithm
• Advantage
– No deadlock or starvation
– Number of message per entry is 2(n-1)
– No single point of failure
• Disadvantage
– n points of failure
• If any process crashes, the failure to respond to requests will be interpreted as
denial of permission
• Patch: When a request comes in, the receiver always sends a reply, either
granting or denying. Whenever a request or reply is lost, the sender times out
and keeps trying until a reply comes back or the sender concludes that the
receiver is dead. After a request is denied, the sender should block waiting for a
subsequent OK message.
– Group communication support is needed
– If one process is unable to handle the load, it is unlikely that forcing
everyone to do exactly the same thing in parallel is going to help much.
• Modify the algorithm so that a process can enter a critical region as soon as it has collected permission from a majority of the other processes.
A Token Ring Algorithm
(a) An unordered group of processes on a network.
(b) A logical ring constructed in software.
When a process acquires the token from its neighbor, it checks whether it
wants to enter a critical region. If so, it enters the region, does its work,
leaves the region, and then passes the token to the next process on the ring.
If not, it simply passes the token along.
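Below is a minimal sketch of the token-passing loop for one node, assuming a hypothetical pass_token(dest) transport primitive and a callback holding the critical-section work.

```python
class TokenRingNode:
    """One node on the logical ring; only the token holder may enter the critical region."""

    def __init__(self, pid, next_pid, pass_token, critical_section):
        self.pid = pid
        self.next_pid = next_pid                  # successor on the logical ring
        self.pass_token = pass_token              # pass_token(dest) -- assumed transport primitive
        self.critical_section = critical_section  # the work to do inside the region
        self.wants_region = False

    def on_token(self):
        """Called when the token arrives from our predecessor on the ring."""
        if self.wants_region:
            self.critical_section()               # safe: we hold the only token
            self.wants_region = False
        self.pass_token(self.next_pid)            # in either case, forward the token
```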
A Token Ring Algorithm
• If the token is lost, it must be regenerated. Detecting the
loss is difficult, since the amount of time between
successive appearances of the token on the network is
unbounded.
• If a process crashes, the algorithm will fail too.
– We could address the problem by requiring a process receiving
the token to acknowledge receipt. A dead process will be
detected when its neighbor tries to give it the token and fails.
The token holder can send the token to the successor of the
dead process in the ring.
Comparison
Algorithm      Messages per entry/exit    Delay before entry (in message times)    Problems
Centralized    3                          2                                        Coordinator crash
Distributed    2(n – 1)                   2(n – 1)                                 Crash of any process
Token ring     1 to ∞                     0 to n – 1                               Lost token, process crash

A comparison of three mutual exclusion algorithms.
The Transaction Model (1)
Updating a master tape is fault tolerant.
The Transaction Model (2)
Primitive           Description
BEGIN_TRANSACTION   Mark the start of a transaction
END_TRANSACTION     Terminate the transaction and try to commit
ABORT_TRANSACTION   Kill the transaction and restore the old values
READ                Read data from a file, a table, or otherwise
WRITE               Write data to a file, a table, or otherwise
Examples of primitives for transactions.
The Transaction Model (3)
(a) BEGIN_TRANSACTION
      reserve WP -> JFK;
      reserve JFK -> Nairobi;
      reserve Nairobi -> Malindi;
    END_TRANSACTION

(b) BEGIN_TRANSACTION
      reserve WP -> JFK;
      reserve JFK -> Nairobi;
      reserve Nairobi -> Malindi full =>
    ABORT_TRANSACTION

a) Transaction to reserve three flights commits
b) Transaction aborts when third flight is unavailable
The Transaction Model (4)
• Four characteristics that transactions have:
– Atomic: To the outside world, the transaction happens
indivisibly
– Consistent: The transaction does not violate system invariants
– Isolated: Concurrent transactions do not interfere with each
other
– Durable: Once a transaction commits, the changes are
permanent
Nested vs. Distributed Transactions
a) A nested transaction
b) A distributed transaction
Private Workspace
a) The file index and disk blocks for a three-block file
b) The situation after a transaction has modified block 0 and appended block 3
c) After committing
The scheme works for distributed transactions too. A process is started on
each machine containing a file that is to be accessed as part of the
transaction. Each process is given its own private workspace. If the
transaction aborts, all processes just discard their private workspaces. When
the transaction commits, the updates are propagated.
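Below is a minimal sketch of the private-workspace idea for a file modeled as a list of blocks; only the blocks the transaction writes are shadowed, reads of untouched blocks go to the original file, and commit propagates the updates. The class and block layout are illustrative assumptions, not the text's exact scheme.

```python
class PrivateWorkspace:
    """Sketch: a transaction sees its own copies of modified blocks; others see the original file."""

    def __init__(self, shared_file):
        self.shared = shared_file     # the real file: a list of blocks
        self.changes = {}             # block number -> privately written contents (shadow blocks)
        self.length = len(shared_file)

    def read(self, block_no):
        if block_no in self.changes:  # modified or appended inside this transaction
            return self.changes[block_no]
        return self.shared[block_no]  # untouched blocks come from the original file

    def write(self, block_no, data):
        self.changes[block_no] = data                # never touches the shared file before commit
        self.length = max(self.length, block_no + 1)

    def commit(self):
        self.shared.extend([None] * (self.length - len(self.shared)))  # room for appended blocks
        for block_no, data in self.changes.items():
            self.shared[block_no] = data             # propagate the updates at commit time
        self.changes.clear()

    def abort(self):
        self.changes.clear()                         # just discard the private workspace


# The situation from the figure: modify block 0 and append block 3.
file = ["block0", "block1", "block2"]
ws = PrivateWorkspace(file)
ws.write(0, "block0'")
ws.write(3, "block3")
ws.commit()        # file is now ["block0'", "block1", "block2", "block3"]
```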
Writeahead Log
Before any block is changed, a record is written to a log telling which
transaction is making the change, which file and block are being changed,
and what the old and new values are. Only after the log has been written
successfully is the change made to the file. If the transaction commits, a
commit record is written to the log. If the transaction aborts, a rollback
must be performed using the log. (A code sketch follows the example below.)
(a)  x = 0;
     y = 0;
     BEGIN_TRANSACTION;
         x = x + 1;
         y = y + 2;
         x = y * y;
     END_TRANSACTION;

(b)  Log: [x = 0/1]
(c)  Log: [x = 0/1]  [y = 0/2]
(d)  Log: [x = 0/1]  [y = 0/2]  [x = 1/4]

a) A transaction
b) – d) The log before each statement is executed
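Below is a minimal sketch of a write-ahead log for in-memory variables (an illustrative simplification of the file/block log described above): each record stores the old and new value, the record is written before the data is changed, and abort replays the log backwards.

```python
class WriteAheadLog:
    """Sketch: log the change first, then apply it; abort replays the log in reverse."""

    def __init__(self, store):
        self.store = store   # e.g. {"x": 0, "y": 0}
        self.log = []        # records of the form (name, old_value, new_value)

    def write(self, name, new_value):
        old_value = self.store[name]
        self.log.append((name, old_value, new_value))  # write the log record first
        self.store[name] = new_value                   # only then change the data

    def commit(self):
        self.log.append(("COMMIT", None, None))        # commit record: changes are now permanent

    def abort(self):
        for name, old_value, _ in reversed(self.log):  # undo in reverse order
            if name != "COMMIT":
                self.store[name] = old_value
        self.log.clear()


# Replaying the transaction from the figure:
store = {"x": 0, "y": 0}
wal = WriteAheadLog(store)
wal.write("x", store["x"] + 1)   # log: [x = 0/1]
wal.write("y", store["y"] + 2)   # log: [x = 0/1] [y = 0/2]
wal.write("x", store["y"] ** 2)  # log: [x = 0/1] [y = 0/2] [x = 1/4]
wal.commit()                     # store == {"x": 4, "y": 2}
```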
Concurrency Control (1)
General organization of managers for handling transactions.
Concurrency Control (2)
General organization of managers for handling distributed transactions.
Serializability
(a) BEGIN_TRANSACTION
        x = 0;
        x = x + 1;
    END_TRANSACTION

(b) BEGIN_TRANSACTION
        x = 0;
        x = x + 2;
    END_TRANSACTION

(c) BEGIN_TRANSACTION
        x = 0;
        x = x + 3;
    END_TRANSACTION

(d) Schedule 1:  x = 0; x = x + 1; x = 0; x = x + 2; x = 0; x = x + 3;    Legal
    Schedule 2:  x = 0; x = 0; x = x + 1; x = x + 2; x = 0; x = x + 3;    Legal
    Schedule 3:  x = 0; x = 0; x = x + 1; x = 0; x = x + 2; x = x + 3;    Illegal

a) – c) Three transactions T1, T2, and T3
d) Possible schedules
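Below is a toy check of the schedules above: for this single shared variable, a schedule is accepted as legal if its final value matches some serial execution of the three transactions. (Real serializability is defined over whole schedules, not just final values; this simplification is enough to reproduce the Legal/Illegal labels here.)

```python
from itertools import permutations

# The three transactions from the figure, each a list of statements on x.
T1 = ["x = 0", "x = x + 1"]
T2 = ["x = 0", "x = x + 2"]
T3 = ["x = 0", "x = x + 3"]

def run(schedule):
    """Execute a list of statements and return the final value of x."""
    env = {"x": 0}
    for stmt in schedule:
        exec(stmt, env)
    return env["x"]

def is_legal(schedule, transactions):
    """Legal here means: the outcome matches some serial execution order."""
    serial_outcomes = {run([s for t in order for s in t])
                       for order in permutations(transactions)}
    return run(schedule) in serial_outcomes

schedule_2 = ["x = 0", "x = 0", "x = x + 1", "x = x + 2", "x = 0", "x = x + 3"]
schedule_3 = ["x = 0", "x = 0", "x = x + 1", "x = 0", "x = x + 2", "x = x + 3"]
print(is_legal(schedule_2, [T1, T2, T3]))  # True  -- equivalent to some serial order
print(is_legal(schedule_3, [T1, T2, T3]))  # False -- no serial order yields x == 5
```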
Conflicting Operations
• Two operations conflict if they operate on the same data
item and at least one of them is a write operation (see the sketch after this list).
• Concurrency control algorithms can be classified:
– By their synchronization methods:
• Locking or timestamps
– By the expected frequency of conflicts
• Pessimistic
• Optimistic
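Below is the conflict rule from the first bullet written as a small predicate; the (kind, item) tuple encoding of an operation is an assumption made for illustration.

```python
def conflict(op1, op2):
    """Two operations conflict if they touch the same data item and at least one is a write."""
    (kind1, item1), (kind2, item2) = op1, op2   # e.g. ("read", "x"), ("write", "x")
    return item1 == item2 and "write" in (kind1, kind2)

print(conflict(("read", "x"), ("write", "x")))   # True  (read-write conflict)
print(conflict(("read", "x"), ("read", "x")))    # False (two reads never conflict)
print(conflict(("write", "x"), ("write", "y")))  # False (different data items)
```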
Two-Phase Locking (1)
Two-phase locking.
Two-Phase Locking (2)
Strict two-phase locking eliminates cascaded aborts.
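Below is a minimal sketch of a strict two-phase-locking lock manager, assuming single-threaded use for clarity (no blocking or wakeup logic): locks are acquired one by one during the growing phase and all released together at commit or abort.

```python
class StrictTwoPhaseLocking:
    """Sketch: a transaction acquires locks as it goes and releases them only at commit/abort."""

    def __init__(self):
        self.locks = {}   # data item -> transaction id holding its lock

    def lock(self, txn, item):
        """Growing phase: try to acquire a lock; return False if another transaction holds it."""
        holder = self.locks.get(item)
        if holder is None or holder == txn:
            self.locks[item] = txn
            return True
        return False          # caller must wait (or abort) -- never proceed without the lock

    def release_all(self, txn):
        """Shrinking phase, done all at once at commit or abort (this is what makes it strict)."""
        for item in [i for i, t in self.locks.items() if t == txn]:
            del self.locks[item]


mgr = StrictTwoPhaseLocking()
mgr.lock("T1", "x")          # True: T1 now holds x
mgr.lock("T2", "x")          # False: T2 must wait until T1 commits or aborts
mgr.release_all("T1")        # T1 commits; all its locks go at once
mgr.lock("T2", "x")          # True
```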
Two-Phase Locking (3)
• Deadlock problem
– Can be fixed by requiring that locks be acquired in some canonical order
– Or handled by deadlock detection
• Distributed 2PL schemes:
– Centralized 2PL
– Primary 2PL: each data item is assigned a primary copy. The
lock manager on that copy’s machine is responsible for granting
and releasing locks.
– Distributed 2PL: assumes data may be replicated. The scheduler
on each machine not only takes care that locks are granted and
released, but also forwards the operation to the (local) data
manager.
Pessimistic Timestamp Ordering
• Every operation that is part of a transaction T is timestamped with ts(T).
• Every data item x has a read timestamp tsRD(x), set to the timestamp of the
transaction that most recently read x, and a write timestamp tsWR(x), set to
the timestamp of the transaction that most recently wrote x.
• For operation read(T, x), suppose the timestamp of T is ts. If ts < tsWR(x),
then T aborts. Otherwise, let the read take place and set tsRD(x) to
max{ts, tsRD(x)}.
• For operation write(T, x), suppose the timestamp of T is ts. If ts < tsRD(x),
then T aborts. Otherwise, let the write take place and set tsWR(x) to
max{ts, tsWR(x)}. (A sketch of both rules follows below.)
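Below is a minimal sketch of the two rules above, assuming each data item carries its value together with tsRD(x) and tsWR(x); an aborted transaction is expected to restart with a new, larger timestamp.

```python
class Abort(Exception):
    """Raised when a transaction must abort and retry with a fresh timestamp."""


class TimestampOrdering:
    """Sketch of pessimistic timestamp ordering for a set of named data items."""

    def __init__(self, items):
        # Each item keeps its value plus the rules' tsRD(x) and tsWR(x).
        self.items = {name: {"value": value, "ts_rd": 0, "ts_wr": 0}
                      for name, value in items.items()}

    def read(self, ts, name):
        x = self.items[name]
        if ts < x["ts_wr"]:
            raise Abort(f"read by ts={ts} is too late: {name} was written at {x['ts_wr']}")
        x["ts_rd"] = max(ts, x["ts_rd"])     # record the most recent reader
        return x["value"]

    def write(self, ts, name, value):
        x = self.items[name]
        if ts < x["ts_rd"]:
            raise Abort(f"write by ts={ts} is too late: {name} was read at {x['ts_rd']}")
        x["value"] = value
        x["ts_wr"] = max(ts, x["ts_wr"])     # record the most recent writer


db = TimestampOrdering({"x": 0})
db.write(2, "x", 10)        # ok: tsWR(x) becomes 2
try:
    db.read(1, "x")         # ts 1 < tsWR(x) == 2, so this transaction must abort
except Abort as e:
    print("aborted:", e)
```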
Pessimistic Timestamp Ordering
Concurrency control using timestamps.
Optimistic Timestamp Ordering
• Idea: just go ahead and do whatever you want.
• Each transaction keeps track of which data items it has read and
written. At the point of committing, it checks all other transactions
to see whether any of those items have been changed since the
transaction started. If so, the transaction is aborted; otherwise, it is
committed (see the sketch below).
• It fits best with an implementation based on private workspaces.
• It is deadlock free and allows maximum parallelism.
• Disadvantage: when validation fails, the transaction has to be run
again.
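Below is a minimal sketch of the validation step, assuming each data item carries a version counter that is bumped on every committed write (the counter stands in for "has this item been changed since the transaction started").

```python
class OptimisticTransaction:
    """Sketch: work on private copies, validate versions at commit time."""

    def __init__(self, store, versions):
        self.store = store          # shared data: name -> value
        self.versions = versions    # shared version counters: name -> int
        self.read_set = {}          # name -> version observed when first read
        self.write_set = {}         # name -> new value (private workspace)

    def read(self, name):
        if name not in self.read_set:
            self.read_set[name] = self.versions[name]   # remember the version we saw
        return self.write_set.get(name, self.store[name])

    def write(self, name, value):
        self.read(name)             # writing means we depend on the current version too
        self.write_set[name] = value

    def commit(self):
        # Validation: abort if anything we read has been changed by another transaction.
        if any(self.versions[name] != seen for name, seen in self.read_set.items()):
            return False            # caller must re-run the whole transaction
        for name, value in self.write_set.items():
            self.store[name] = value
            self.versions[name] += 1
        return True


store, versions = {"x": 0}, {"x": 0}
t = OptimisticTransaction(store, versions)
t.write("x", t.read("x") + 1)
versions["x"] += 1              # simulate another transaction committing a change to x
print(t.commit())               # False: validation fails, t has to be run again
```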