What is a semaphore - bca study material

INTERPROCESS COMMUNICATION
Interprocess communication (IPC) is a capability supported by the
operating system that allows one process to communicate with another
process. The processes can be running on the same computer or on
different computers connected through a network. IPC enables one
application to control another application, and several applications
to share the same data without interfering with one another. One
well-known IPC mechanism is Dynamic Data Exchange (DDE).
Shared Memory, Race Conditions, and Mutual Exclusion
A critical problem in shared-memory systems occurs when two or more
processes read or write shared variables or shared data, and the
final result depends on precisely who runs when. Such situations are
called race conditions. To avoid race conditions, we must find some
way to prevent more than one process from reading and writing shared
variables or shared data at the same time; that is, we need mutual
exclusion, where while one process is using a shared variable, every
other process is excluded from doing the same thing.
Serialization
The key idea in process synchronization is serialization. This means
that we must go to some pains to undo part of the work we have put
into making an operating system perform several tasks in parallel.
For example, the scheduler can be disabled for a short period of time
to prevent control from being given to another process during a
critical action, such as modifying shared data. Such a protocol
ensures that processes queue up to gain access to shared data.
Mutex: Mutual Exclusion
When two or more processes must share some object,
an arbitration mechanism is needed so that they do not
try to use it at the same time. For example, two
processes attempting to update the same bank account
must take turns; if each process reads the current
balance from some database, updates it, and then writes
it back, one of the updates will be lost.
This can be solved if there is some way for each process
to exclude the other from using the shared object during
critical sections of code. Thus the general problem is
described as the mutual exclusion problem.
Mutual exclusion can be achieved by a system of locks.
A mutual exclusion lock is colloquially called a mutex.
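The bank-account scenario above can be sketched in Python using `threading.Lock` as the mutex. The account, the deposit function, and the amounts are illustrative assumptions; only the read-update-write pattern comes from the text:

```python
import threading

balance = 0                      # shared "bank account" (hypothetical)
balance_lock = threading.Lock()  # the mutex guarding it

def deposit(amount, times):
    global balance
    for _ in range(times):
        with balance_lock:       # acquire the mutex before touching the account
            current = balance    # read the current balance
            balance = current + amount   # update it and write it back

t1 = threading.Thread(target=deposit, args=(1, 100000))
t2 = threading.Thread(target=deposit, args=(1, 100000))
t1.start(); t2.start()
t1.join(); t2.join()
print(balance)  # 200000: no update is lost
```

Without the `with balance_lock:` line, the two read-update-write sequences could interleave and some deposits would be lost, exactly as the text describes.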
Lost Update
A lost update occurs when a transaction B updates the same data being
modified by another transaction A in such a way that B reads the
value of A at time 2, prior to A's write at time 3, so A's updated
value is lost.
Table 3.1: Example of the Lost Update Problem
The example in Table 3.1 shows that the write operation at time 3 is
overwritten by the write operation at time 4. Whether the
transactions come from the same site or from different sites, the
lost update problem leaves an erroneous value of A.
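The schedule just described can be replayed step by step. This sketch fixes one possible interleaving by hand; the starting value and the two updates are assumed figures, not taken from Table 3.1:

```python
# Replay the lost-update schedule with two hypothetical transactions A and B.
A_value = 100          # shared data item A (assumed starting value)

a_read = A_value       # time 1: transaction A reads A
b_read = A_value       # time 2: transaction B reads A, before A has written
A_value = a_read + 50  # time 3: A writes its update (150)
A_value = b_read - 30  # time 4: B writes, overwriting A's result (70)

print(A_value)  # 70: the +50 written at time 3 is lost
```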
PROCESS SYNCHRONIZATION
Critical Sections
A correct critical section solution must enforce all three of the
following rules:
Mutual Exclusion: No more than one process can execute in its
critical section at one time.
Progress: If no one is in the critical section and someone wants in,
then the processes not in their remainder section must be able to
decide in finite time who should go in.
Bounded Wait: Every requester must eventually be let into the
critical section.
Critical Section
The key to preventing trouble involving shared storage is to find
some way to prohibit more than one process from reading and writing
the shared data simultaneously. The part of the program where the
shared memory is accessed is called the critical section.
To solve the critical section problem, three criteria must be met:
1. Mutual Exclusion: If process P1 is executing in its critical
section, then no other process can be executing in its critical
section.
2. Progress: If no process is executing in its critical section and
some process wishes to enter its critical section, then the selection
of the process that will enter next cannot be postponed indefinitely.
3. Bounded Waiting: A bound must exist on the number of times that
other processes are allowed to enter their critical sections after a
process has made a request to enter its critical section and before
that request is granted.
PROCESS SYNCHRONIZATION
Two-Process Software Solution
FLAG TO REQUEST ENTRY:
• Each process sets a flag to request entry, then sets a turn
variable to allow the other process in first.
• The following code is executed by each process i, where j denotes
the other process.
Shared variables:
boolean flag[2];    // initially flag[0] = flag[1] = false
int turn;
// flag[i] = true means Pi is ready to enter its critical section

do {
    flag[i] = true;
    turn = j;
    while (flag[j] && turn == j)
        ;           // busy wait
    /* critical section */
    flag[i] = false;
    /* remainder section */
} while (1);
Are the three critical-section requirements met? Yes: this is
Peterson's Solution, which satisfies mutual exclusion, progress, and
bounded waiting.
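Peterson's algorithm can be exercised directly in Python. This is a minimal sketch: it relies on CPython's global interpreter lock giving sequentially consistent memory, so on real hardware with weaker memory models the algorithm would additionally need memory fences; the `process` function name and iteration count are illustrative:

```python
import threading
import time

flag = [False, False]   # flag[i]: Pi wants to enter
turn = 0                # whose turn it is to yield
counter = 0             # shared data guarded by the protocol

def process(i, iterations):
    global turn, counter
    j = 1 - i
    for _ in range(iterations):
        flag[i] = True                  # request entry
        turn = j                        # let the other process go first
        while flag[j] and turn == j:
            time.sleep(0)               # busy wait, yielding the GIL
        counter += 1                    # critical section
        flag[i] = False                 # exit protocol

N = 500
threads = [threading.Thread(target=process, args=(i, N)) for i in (0, 1)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)  # 1000: no increment was lost
```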
Bakery Algorithm
Lamport's bakery algorithm solves the critical section problem for n
processes as follows:
• Before entering its critical section, a process receives a number.
The holder of the smallest number enters the critical section.
• If processes Pi and Pj receive the same number and i < j, then Pi
is served first; otherwise Pj is served first.
• The numbering scheme always generates numbers in non-decreasing
order; e.g., 1, 2, 3, 3, 3, 3, 4, 5, ...
State Lamport's Bakery algorithm with solution
The basic idea is that of a bakery: on entering the bakery, customers
take tokens, and whoever holds the lowest token gets served next.
Here, service means entry to the critical section.
while (TRUE)
{
    choosing[ownID] = TRUE;                 /* receive a token */
    token[ownID] = max(token[0], token[1], ..., token[n-1]) + 1;
    choosing[ownID] = FALSE;

    for (othersID = 0; othersID < n; othersID++)   /* wait for turn */
    {
        while (choosing[othersID])
            ;
        while (token[othersID] != 0 &&
               (token[othersID], othersID) < (token[ownID], ownID))
            ;
    }

    CriticalSection();                      /* enter critical section */
    token[ownID] = 0;                       /* leave critical section */
}
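The pseudocode above translates almost line for line into Python, where tuple comparison gives the (token, id) tie-breaking for free. A minimal sketch, again assuming CPython's GIL for memory consistency; the process count and iteration count are illustrative:

```python
import threading
import time

N_PROC = 3
choosing = [False] * N_PROC   # choosing[i]: Pi is taking a token
token = [0] * N_PROC          # 0 means "not interested"
counter = 0                   # shared data protected by the algorithm

def lock(i):
    choosing[i] = True
    token[i] = max(token) + 1          # receive a token
    choosing[i] = False
    for j in range(N_PROC):            # wait for turn
        while choosing[j]:
            time.sleep(0)
        while token[j] != 0 and (token[j], j) < (token[i], i):
            time.sleep(0)

def unlock(i):
    token[i] = 0                       # leave the critical section

def worker(i, iterations):
    global counter
    for _ in range(iterations):
        lock(i)
        counter += 1                   # critical section
        unlock(i)

ITERS = 200
threads = [threading.Thread(target=worker, args=(i, ITERS)) for i in range(N_PROC)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)  # 600
```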
What is a semaphore
A semaphore is a synchronization variable. There are 2 types of
semaphores:
Binary semaphores
Counting semaphores
A binary semaphore has two operations associated with it (up/down,
also called lock/unlock) and can take only two values (0 or 1).
Binary semaphores are used to acquire locks: when the resource is
available, the process in charge sets the semaphore to 1, otherwise
to 0.
A counting semaphore may have a value greater than one. It is
typically used to allocate resources from a pool of identical
resources and to implement bounded-concurrency problems.
SEMAPHORES in the critical section problem
The effective synchronization tool often used to realise mutual
exclusion in more complex systems is the semaphore. A semaphore S is
an integer variable which can be accessed only through two standard
atomic operations: wait() and signal(). The definitions of the wait
and signal operations are:

wait(S)
{
    while (S <= 0)
        ;       /* busy wait */
    S--;
}

signal(S)
{
    S++;
}

All modifications to the integer value of the semaphore in the above
two operations must be executed indivisibly. A semaphore can be used
for mutual exclusion in this way:

do {
    wait(mutex);
    /* critical section */
    signal(mutex);
    /* remainder section */
} while (TRUE);
A queue is used to hold processes waiting on the semaphore.
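The wait()/signal() pair maps directly onto `acquire()`/`release()` of Python's `threading.Semaphore`, which blocks waiters on a queue rather than busy-waiting. A minimal sketch of semaphore-based mutual exclusion; the worker function and iteration counts are illustrative assumptions:

```python
import threading

mutex = threading.Semaphore(1)   # binary semaphore, initially 1
counter = 0

def worker(iterations):
    global counter
    for _ in range(iterations):
        mutex.acquire()          # wait(mutex)
        counter += 1             # critical section
        mutex.release()          # signal(mutex)

threads = [threading.Thread(target=worker, args=(100000,)) for _ in range(2)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)  # 200000
```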
Strong Semaphores
• the process that has been blocked the longest
is released from the queue first (FIFO)
Weak Semaphores
• the order in which processes are removed
from the queue is not specified
Difference between strong and weak semaphores
Strong semaphores: unblock the longest-waiting process, maintain a
FIFO queue, and guarantee freedom from starvation.
Weak semaphores: unblock processes without regard to order and
maintain no guaranteed queue order.
Define the two operations permitted on a Semaphore
The semaphore has only two operations, P() and V():
P() waits until the value > 0, then decrements it.
V() increments the value, waking up a waiting thread if necessary.
Implement the indivisibility criteria of the two operations on the
semaphore
The two operations are indivisible, or atomic. For the V() operation
to be indivisible, its execution must not be interleaved with any P()
or V() operations executed by other processes on the same semaphore.
For the P() operation there are two separate execution branches,
depending on the condition checked. The indivisibility of the P() and
V() operations is necessary to guarantee correctness for the intended
purpose of the semaphore.
Producer/Consumer Problem
This is representative of a large class of concurrency-control
problems, and such problems are used to test nearly every newly
proposed synchronization scheme. Producer-consumer processes are
common in operating systems: a producer (process) produces
information that is consumed by a consumer (process).
For example, a compiler may produce assembly code, which is consumed
by an assembler. A producer can produce one item while the consumer
is consuming another item. The producer and consumer must be
synchronized. The problem can be solved with either an unbounded
buffer or a bounded buffer.
Producer/Consumer Problem(Cont.)
With an unbounded buffer
The unbounded-buffer producer-consumer problem places no practical
limit on the size of the buffer. The consumer may have to wait for
new items, but the producer can always produce new items; there are
always empty positions in the buffer.
With a bounded buffer
The bounded-buffer producer-consumer problem assumes a fixed buffer
size. In this case the consumer must wait if the buffer is empty, and
the producer must wait if the buffer is full.
Producer/Consumer Problem(Cont.)
Shared Data
char item;            // could be any data type
char buffer[n];
semaphore full = 0;   // counting semaphore
semaphore empty = n;  // counting semaphore
semaphore mutex = 1;  // binary semaphore
char nextp, nextc;
Producer/Consumer Problem(Cont.)
Producer Process
do
{
    /* produce an item in nextp */
    wait(empty);
    wait(mutex);
    /* add nextp to buffer */
    signal(mutex);
    signal(full);
} while (true);
Producer/Consumer Problem(Cont.)
Consumer Process
do
{
    wait(full);
    wait(mutex);
    /* remove an item from buffer to nextc */
    signal(mutex);
    signal(empty);
    /* consume the item in nextc */
} while (true);
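The bounded-buffer scheme above can be sketched with Python's counting semaphores. The buffer capacity, item count, and the choice of a deque are illustrative assumptions; the empty/full/mutex structure follows the shared data declared in the text:

```python
import threading
from collections import deque

N = 8                             # buffer capacity (assumed)
buffer = deque()
empty = threading.Semaphore(N)    # counting semaphore: free slots
full = threading.Semaphore(0)     # counting semaphore: filled slots
mutex = threading.Lock()          # binary semaphore guarding the buffer
ITEMS = 500
consumed = []

def producer():
    for item in range(ITEMS):     # produce an item
        empty.acquire()           # wait(empty)
        with mutex:               # wait(mutex) ... signal(mutex)
            buffer.append(item)   # add the item to the buffer
        full.release()            # signal(full)

def consumer():
    for _ in range(ITEMS):
        full.acquire()            # wait(full)
        with mutex:
            consumed.append(buffer.popleft())  # remove an item
        empty.release()           # signal(empty)

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start()
p.join(); c.join()
print(consumed == list(range(ITEMS)))  # True: every item arrives once, in order
```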
Readers and Writers Problem
The readers/writers problem is one of the classic
synchronization problems.
This problem has two types of clients accessing the
shared data. The first type, referred to as readers, only
wants to read the shared data. The second type, referred to
as writers, may want to modify the shared data. There is
also a designated central data server or controller.
It enforces exclusive write semantics; if a writer is active
then no other writer or reader can be active. The server
can support clients that wish to both read and write. The
readers and writers problem is useful for modeling
processes which are competing for a limited shared
resource.
Readers and Writers Problem (Cont.)
Structure of a writer process
while (T) {                  /* loop forever */
    wait(wrt);
    /* writing is performed */
    signal(wrt);
}
Readers and Writers Problem (Cont.)
Structure of a reader process
Reader()
{
    while (T) {              /* loop forever */
        wait(mutex);
        readcount++;
        if (readcount == 1)
            wait(wrt);       /* first reader locks out writers */
        signal(mutex);
        /* reading is performed */
        wait(mutex);
        readcount--;
        if (readcount == 0)
            signal(wrt);     /* last reader readmits writers */
        signal(mutex);
    }
}
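The reader/writer structures above can be exercised in Python. The two-field data item, the thread counts, and the consistency check are illustrative assumptions added so the exclusive-write property is observable: writers update two related values under `wrt`, and readers verify they never see a half-finished write:

```python
import threading

mutex = threading.Semaphore(1)   # protects read_count
wrt = threading.Semaphore(1)     # writer exclusion (the 'wrt' semaphore)
read_count = 0
data = {"a": 0, "b": 0}          # invariant outside a write: a == b
ok = True                        # set False if a reader sees a torn write

def reader(iterations):
    global read_count, ok
    for _ in range(iterations):
        mutex.acquire()
        read_count += 1
        if read_count == 1:      # first reader locks out writers
            wrt.acquire()
        mutex.release()
        if data["a"] != data["b"]:   # reading is performed
            ok = False
        mutex.acquire()
        read_count -= 1
        if read_count == 0:      # last reader readmits writers
            wrt.release()
        mutex.release()

def writer(iterations):
    for _ in range(iterations):
        wrt.acquire()
        data["a"] += 1           # writing is performed:
        data["b"] += 1           # two related updates, never seen apart
        wrt.release()

threads = ([threading.Thread(target=reader, args=(300,)) for _ in range(3)] +
           [threading.Thread(target=writer, args=(300,)) for _ in range(2)])
for t in threads: t.start()
for t in threads: t.join()
print(ok, data["a"], data["b"])  # True 600 600
```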
Dining Philosophers Problem
Five philosophers sit around a circular table. In the centre of the
table a large bowl of rice is placed. A philosopher needs two
chopsticks to eat, but only 5 chopsticks are available, one placed
between each pair of philosophers. They agree that each will use only
the chopsticks to his immediate right and left. From time to time, a
philosopher gets hungry and tries to grab the two chopsticks
immediately to his left and right. When a hungry philosopher has both
his chopsticks at the same time, he eats without releasing them. When
he finishes eating, he puts down both his chopsticks and starts
thinking again.
Dining Philosophers Problem(cont.)
semaphore fork[5] = {1};   /* one semaphore per chopstick, initially 1 */
semaphore room = {4};      /* at most four philosophers may sit down */

void philosopher(int i)
{
    while (true) {
        think();
        wait(room);
        wait(fork[i]);
        wait(fork[(i + 1) mod 5]);
        eat();
        signal(fork[(i + 1) mod 5]);
        signal(fork[i]);
        signal(room);
    }
}

void main()
{
    parbegin(philosopher(0), philosopher(1), philosopher(2),
             philosopher(3), philosopher(4));
}
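The semaphore solution above runs as-is in Python. A minimal sketch with a bounded number of eating rounds instead of an infinite loop (the `meals` tally and round count are illustrative); the `room` semaphore, initialized to 4, is what prevents the deadlock where all five grab their left chopstick simultaneously:

```python
import threading

N = 5
fork = [threading.Semaphore(1) for _ in range(N)]   # one per chopstick
room = threading.Semaphore(N - 1)                   # at most 4 may try to eat
meals = [0] * N

def philosopher(i, rounds):
    for _ in range(rounds):
        # think()
        room.acquire()                 # wait(room)
        fork[i].acquire()              # wait(fork[i])
        fork[(i + 1) % N].acquire()    # wait(fork[(i+1) mod 5])
        meals[i] += 1                  # eat()
        fork[(i + 1) % N].release()    # signal(fork[(i+1) mod 5])
        fork[i].release()              # signal(fork[i])
        room.release()                 # signal(room)

ROUNDS = 100
threads = [threading.Thread(target=philosopher, args=(i, ROUNDS))
           for i in range(N)]
for t in threads: t.start()
for t in threads: t.join()
print(meals)  # [100, 100, 100, 100, 100]: everyone eats, no deadlock
```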
Race Condition
A race condition is a situation in which multiple processes read and
write a shared data item and the final result depends on the relative
timing of their execution. The "loser" of the race is the process
that updates last and so determines the final value of the variable.
LOCKS
Locks are another synchronization mechanism. A lock has two atomic
operations (similar to a semaphore) that provide mutual exclusion:
Acquire and Release. A process acquires the lock before accessing a
shared variable and releases it afterwards. A process locking a
variable runs the following code:

Lock-Acquire();
/* critical section */
Lock-Release();

The difference between a lock and a semaphore is that a lock can be
released only by the process that acquired it earlier, whereas, as
discussed above, any process can increment the value of a semaphore.
To implement locks, keep the following in mind:
• Make Acquire() and Release() atomic.
• Build a wait mechanism.
• Make sure that only the process that acquired the lock can release
the lock.
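The ownership difference can be observed in CPython. This sketch leans on implementation details of the `threading` module: a `Semaphore` may be signalled by any thread, while an `RLock` (used here because Python's plain `Lock` does not enforce ownership) refuses release by a thread that never acquired it:

```python
import threading

# A semaphore may be signalled by any thread...
sem = threading.Semaphore(1)
sem.acquire()
t = threading.Thread(target=sem.release)   # a different thread does the signal
t.start(); t.join()                        # ...and that is allowed

# ...but an owned lock (RLock) rejects release by a non-owner.
lock = threading.RLock()
lock.acquire()
errors = []

def bad_release():
    try:
        lock.release()                     # this thread never acquired it
    except RuntimeError as exc:            # CPython raises RuntimeError here
        errors.append(exc)

t = threading.Thread(target=bad_release)
t.start(); t.join()
lock.release()                             # the owning thread releases normally
print(len(errors))  # 1: the foreign release was rejected
```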
MONITORS
Monitors are the alternatives of the semaphores to address the
weakness of semaphores. A monitor is a shared object with
operations, internal state and a number of condition queues. Only one
operation of a given monitor may be active at a time i.e. no need to
remember to release things –occurs on procedure exit.
For example;
monitor synch
integer i; condition c;
procedure producer(x);
.
end;
procedure consumer(x);
.
end;
end monitor;
There is only one process that can enter a monitor, therefore every
monitor has its own waiting list with process waiting to enter the
Basic disadvantage of semaphores, which is overcome in Monitors
The basic disadvantage of the semaphore is that it requires busy
waiting, which wastes CPU cycles that some other process might be
able to use productively. Monitors achieve mutual exclusion by
construction: only one process can be active in a monitor at any
instant. Consequently the programmer does not need to code the
synchronization constraint explicitly.
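A monitor can be sketched in Python as a class whose every operation runs under one internal lock, with `threading.Condition` objects as the condition queues. The bounded-buffer monitor below is an illustrative example, not from the text; the class and method names are assumptions:

```python
import threading

class BoundedBuffer:
    """Monitor-style object: one internal lock makes at most one
    operation active at a time; Conditions hold the waiting queues."""
    def __init__(self, capacity):
        self._lock = threading.Lock()
        self._not_full = threading.Condition(self._lock)
        self._not_empty = threading.Condition(self._lock)
        self._items = []
        self._capacity = capacity

    def put(self, item):
        with self._lock:                 # enter the monitor
            while len(self._items) >= self._capacity:
                self._not_full.wait()    # block on a condition queue
            self._items.append(item)
            self._not_empty.notify()
        # the lock is released automatically on exit, like a monitor procedure

    def get(self):
        with self._lock:
            while not self._items:
                self._not_empty.wait()
            item = self._items.pop(0)
            self._not_full.notify()
            return item

buf = BoundedBuffer(4)
out = []
c = threading.Thread(target=lambda: out.extend(buf.get() for _ in range(100)))
p = threading.Thread(target=lambda: [buf.put(i) for i in range(100)])
c.start(); p.start()
c.join(); p.join()
print(out == list(range(100)))  # True
```

Note there is no busy waiting here: `wait()` releases the monitor lock and sleeps until another operation calls `notify()`, which is exactly the advantage over semaphore spin loops described above.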
Solution to the Dining Philosophers Problem using Monitors

monitor dining-philosophers
{
    enum state {thinking, hungry, eating};
    state state[5];
    condition self[5];

    void pickup(int i)
    {
        state[i] = hungry;
        test(i);
        if (state[i] != eating)
            self[i].wait;
    }

    void putdown(int i)
    {
        state[i] = thinking;
        test((i + 4) % 5);   /* may wake the left neighbour */
        test((i + 1) % 5);   /* may wake the right neighbour */
    }

    void test(int k)
    {
        if ((state[(k + 4) % 5] != eating) &&
            (state[k] == hungry) &&
            (state[(k + 1) % 5] != eating))
        {
            state[k] = eating;
            self[k].signal;
        }
    }

    init
    {
        for (int i = 0; i < 5; i++)
            state[i] = thinking;
    }
}