COT 4600 Operating Systems Spring 2011
Dan C. Marinescu
Office: HEC 304
Office hours: Tu-Th 5:00 – 6:00 PM
Lecture 13 – Thursday, February 24, 2011

Last time:
- Peer-to-peer systems
- Remote Procedure Call
- Strategies for name resolution
- Case study: DNS – Domain Name Service

Today:
- Case study: NFS – Network File System
- Virtualization
- Locks

Next time:
- Review for the midterm
The Network File System

- Developed at Sun Microsystems in the early 1980s.
- An application of the client-server paradigm.
- Objectives:
  - Design a shared file system to support collaborative work.
  - Simplify the management of a set of workstations:
    - facilitate backups;
    - uniform administrative policies.
- Main design goals:
  1. Compatibility with existing applications → NFS should provide the same semantics as a local UNIX file system.
  2. Ease of deployment → the NFS implementation should be easily ported to existing systems.
  3. Broad scope → NFS clients should be able to run under a variety of operating systems.
  4. Efficiency → users should not notice a substantial performance degradation when accessing a remote file system relative to accessing a local file system.
NFS clients and servers



Should provide transparent access to remote file systems.
It mounts a remote file system in the local name space  it perform
a function analogous to the MOUNT UNIX call.
The remote file system is specified as Host/Path
Host  the host name of the host where the remote file system is located
 Path  local path name on the remote host.


The NFS client sends to the NFS server an RPC with the file Path
information and gets back from the server a file handle


A 32 bit name that uniquely identifies the remote object.
The server encodes in the file handle:

A file system identifier
 An inode number
 A generation number
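As an aside (not from the slide), a minimal C sketch of the information such a file handle packs together; the struct name, field names, and field widths here are hypothetical, and real NFS handles are opaque to the client:

#include <stdint.h>

/* Hypothetical layout of an NFS-style file handle: an opaque token
   that the client stores and sends back verbatim with each request. */
struct nfs_fhandle {
    uint32_t fsid;        /* file system identifier */
    uint32_t inode;       /* inode number within that file system */
    uint32_t generation;  /* distinguishes reuses of the same inode */
};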
Implementation

- Vnode → a structure in volatile memory that abstracts whether a file or directory is local or remote. A file system call (OPEN, READ, WRITE, CLOSE, etc.) is done through the vnode layer. Example:
  - To open a file, a client calls PATHNAME_TO_VNODE.
  - The file name is parsed and a LOOKUP is generated.
  - If the directory is local and the file is found, the local file system creates a vnode for the file;
  - else:
    - the LOOKUP procedure implemented by the NFS client is invoked; the file handle of the directory and the path name are passed as arguments;
    - the NFS client invokes the LOOKUP remote procedure on the server via an RPC;
    - the NFS server extracts the file system id and the inode number and then calls LOOKUP in the vnode layer;
    - the vnode layer on the server side does a LOOKUP on the local file system, passing the path name as an argument;
    - if the local file system on the server locates the file, it creates a vnode for it and returns the vnode to the NFS server;
    - the NFS server sends a reply to the RPC containing the file handle of the vnode and some metadata;
    - the NFS client creates a vnode containing the file handle.
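A minimal C sketch of the vnode idea (an illustration with hypothetical names, not the actual Sun interface): each vnode carries a table of operations, so a caller can invoke LOOKUP without knowing whether the local file system or the NFS client will serve it.

struct vnode;

/* Hypothetical operations table: the local file system supplies one
   implementation, the NFS client another. */
struct vnode_ops {
    int (*lookup)(struct vnode *dir, const char *name, struct vnode **out);
    int (*read)(struct vnode *vn, void *buf, long offset, long n);
    int (*write)(struct vnode *vn, const void *buf, long offset, long n);
};

struct vnode {
    const struct vnode_ops *ops;  /* local or remote implementation */
    void *data;                   /* an inode pointer, or an NFS file handle */
};

/* The vnode layer dispatches without caring where the object lives. */
int vnode_lookup(struct vnode *dir, const char *name, struct vnode **out)
{
    return dir->ops->lookup(dir, name, out);
}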
Why file handles and not path names

Example 1 (operations shown in time order):

Program 1 on client 1:   CHDIR('dir1')
                         fd ← OPEN('f', READONLY)
Program 2 on client 2:   RENAME('dir1', 'dir2')
                         RENAME('dir3', 'dir1')
Program 1 on client 1:   READ(fd, buf, n)

To follow the UNIX specification, if both clients were on the same system, client 1 would read from dir2/f. Encoding the inode number in the file handle allows client 1 to follow the same semantics, rather than reading from the new dir1/f.

Example 2 (operations shown in time order):

Program 2 on client 2:   fd ← OPEN('f', READONLY)
Program 1 on client 1:   UNLINK('f')
                         fd ← OPEN('f', CREATE)
Program 2 on client 2:   READ(fd, buf, n)

If the NFS server reused the inode of the old file, the READ RPC from client 2 would read from the new file created by client 1. The generation number allows the NFS server to distinguish between the old file opened by client 2 and the new one created by client 1.
Read/Write coherence

- Enforcing read/write coherence is non-trivial even for local operations:
  - For performance reasons a device driver may delay a WRITE operation issued by client 1; caching could cause problems when client 2 tries to READ the file.
- Possible solutions:
  - Close-to-open consistency:
    - If client 1 executes the sequence OPEN → WRITE → CLOSE before client 2 executes the sequence OPEN → READ → CLOSE, then read/write coherence is provided.
    - If client 1 executes the sequence OPEN → WRITE before client 2 executes the sequence OPEN → READ, then read/write coherence may or may not be provided.
  - Consistency for every operation → no caching.
NFS Close-to-Open semantics

- A client stores, with each block in its cache, the timestamp of the block's vnode at the time the client got the block from the NFS server.
- When a user program opens a file, the client sends a GETATTR request to get the timestamp of the file's latest modification.
- The client fetches a new copy only if the file has been modified since the client last accessed it; otherwise it uses the local copy.
- To implement a WRITE, a client updates only its copy in the cache, without an RPC WRITE.
- At CLOSE time the client sends the cached copy to the server.
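A minimal sketch of this protocol in C, assuming hypothetical RPC helpers (getattr_rpc, read_rpc, write_rpc) that stand in for the real NFS client calls:

#include <time.h>
#include <stdbool.h>

/* Hypothetical client-side cache entry for one file. */
struct cache_entry {
    char   data[8192];  /* cached copy of the file (toy size) */
    time_t stamp;       /* server modification time when this copy was fetched */
    bool   dirty;       /* true if modified locally since the last fetch */
};

/* Assumed stand-ins for the NFS client's RPCs. */
time_t getattr_rpc(const char *path);                      /* server mtime */
void   read_rpc(const char *path, struct cache_entry *e);  /* refresh cache */
void   write_rpc(const char *path, struct cache_entry *e); /* push cache */

void nfs_open(const char *path, struct cache_entry *e)
{
    time_t server_stamp = getattr_rpc(path);  /* one GETATTR per open */
    if (server_stamp != e->stamp) {           /* modified since last access? */
        read_rpc(path, e);                    /* yes: fetch a fresh copy */
        e->stamp = server_stamp;
    }                                         /* no: keep using the local copy */
}

void nfs_close(const char *path, struct cache_entry *e)
{
    if (e->dirty) {                           /* writes were cached locally */
        write_rpc(path, e);                   /* flush them at close time */
        e->dirty = false;
    }
}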
Virtualization – relating physical with virtual objects

- Virtualization → simulating the interface to a physical object by:
  1. Multiplexing → create multiple virtual objects from one instance of a physical object.
  2. Aggregation → create one virtual object from multiple physical objects.
  3. Emulation → construct a virtual object from a different type of physical object. Emulation in software is slow.

Method                     Physical Resource                   Virtual Resource
Multiplexing               processor                           thread
                           real memory                         virtual memory
                           communication channel               virtual circuit
                           processor                           server (e.g., Web server)
Aggregation                disk                                RAID
                           core                                multi-core processor
Emulation                  disk                                RAM disk
                           system (e.g., Macintosh)            virtual machine (e.g., Virtual PC)
Multiplexing + Emulation   real memory + disk                  virtual memory with paging
                           communication channel + processor   TCP protocol
Virtualization of the three abstractions. (1) Threads

Virtualization is implemented by the operating system for the three abstractions:

1. Threads → a thread is a virtual processor; a module in execution. It multiplexes a physical processor.
2. The state of a thread: (1) the reference to the next computational step (the PC register) + (2) the environment (registers, stack, heap, current objects).
3. Sequence of operations:
   1. Load the module's text.
   2. Create a thread and launch the execution of the module in that thread.
4. A module may have several threads.
5. The thread manager implements the thread abstraction:
   1. Interrupts → processed by the interrupt handler, which interacts with the thread manager.
   2. Exceptions → interrupts caused by the running thread and processed by exception handlers.
   3. Interrupt handlers run in the context of the OS, while exception handlers run in the context of the interrupted thread.
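As an aside (not from the slide), a minimal C illustration of the thread abstraction using POSIX threads: a module's code is launched in a newly created thread, which multiplexes the processor with the main thread.

#include <pthread.h>
#include <stdio.h>

/* The "module" whose execution is launched in its own thread. */
static void *module_main(void *arg)
{
    printf("module running in its own thread\n");
    return NULL;
}

int main(void)
{
    pthread_t t;
    /* Create a thread and launch the module's execution in it. */
    pthread_create(&t, NULL, module_main, NULL);
    pthread_join(t, NULL);    /* wait for the module to finish */
    return 0;
}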
Virtualization of the three abstractions. (2) Virtual memory

2. Virtual memory → a scheme that allows each thread to access only its own virtual address space (its collection of virtual addresses).
   1. Why it is needed:
      1. To implement a memory enforcement mechanism: to prevent a thread running the code of one module from overwriting the data of another module.
      2. The physical memory may be too small to fit an application; without virtual memory, each application would need to manage its own memory.
   2. The virtual memory manager maps the virtual address space of a thread to physical memory.
   3. Threads + virtual memory allow us to create a virtual computer for each module.
   4. Each module runs in its own address space; if one module runs multiple threads, all of them share one address space.
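As an aside (not from the slide), a small C example of a thread asking the virtual memory manager to back part of its address space, using the POSIX mmap call (MAP_ANONYMOUS is a widespread extension):

#include <sys/mman.h>
#include <stdio.h>

int main(void)
{
    /* Request one page of this process's virtual address space,
       backed by physical memory on demand. */
    void *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) {
        perror("mmap");
        return 1;
    }
    /* The address is virtual: another process may have the same numeric
       address mapped to different physical memory. */
    printf("page mapped at %p\n", p);
    munmap(p, 4096);
    return 0;
}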
Virtualization of the three abstractions: (3) Bounded buffers

3. Bounded buffers → implement the communication channel abstraction.
   1. Bounded → the buffer has a finite size. We assume that all messages are of the same size and that each fits into a buffer cell. A bounded buffer accommodates at most N messages.
   2. Threads use the SEND and RECEIVE primitives.
Principle of least astonishment

- Study and understand simple phenomena or facts before moving to complex ones. For example:
  - Concurrency → an application requires multiple threads that run at the same time. Tricky; understand sequential processing first.
  - Examine a simple operating system interface to the three abstractions:

Memory                  CREATE/DELETE_ADDRESS_SPACE
                        ALLOCATE/FREE_BLOCK
                        MAP/UNMAP

Interpreter             ALLOCATE_THREAD, DESTROY_THREAD, EXIT_THREAD
                        YIELD, AWAIT, ADVANCE
                        TICKET, ACQUIRE, RELEASE

Communication channel   ALLOCATE/DEALLOCATE_BOUNDED_BUFFER
                        SEND/RECEIVE
Thread coordination with a bounded buffer

- Producer-consumer problem → coordinate the sending and receiving threads.
- Basic assumptions:
  - We have only two threads.
  - Threads proceed concurrently at independent speeds/rates.
  - Bounded buffer → only N buffer cells.
  - Messages are of fixed size and occupy only one buffer cell.
- Spin lock → a thread keeps checking a control variable/semaphore "until the light turns green".
[Figure: a circular buffer with cells 0 … N-1; in points to the next free cell (SEND writes to the buffer location pointed to by in), and out points to the next message (RECEIVE reads from the buffer location pointed to by out).]

shared structure buffer
    message instance message[N]
    integer in   initially 0
    integer out  initially 0

procedure SEND (buffer reference p, message instance msg)
    while p.in – p.out = N do nothing     /* if the buffer is full, wait
    p.message[p.in modulo N] ← msg        /* insert message into buffer cell
    p.in ← p.in + 1                       /* increment pointer to next free cell

procedure RECEIVE (buffer reference p)
    while p.in = p.out do nothing         /* if the buffer is empty, wait for a message
    msg ← p.message[p.out modulo N]       /* copy message from buffer cell
    p.out ← p.out + 1                     /* increment pointer to next message
    return msg
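A direct C translation of this spin-waiting bounded buffer (a sketch under the slide's assumptions: one sender, one receiver, read/write coherence, and no instruction reordering, which volatile only approximates):

#include <stdio.h>

#define N 8

struct buffer {
    int message[N];              /* fixed-size messages, one per cell */
    volatile unsigned long in;   /* count of messages ever sent */
    volatile unsigned long out;  /* count of messages ever received */
};

void send(struct buffer *p, int msg)
{
    while (p->in - p->out == N)        /* buffer full: spin */
        ;
    p->message[p->in % N] = msg;       /* insert message into buffer cell */
    p->in = p->in + 1;                 /* advance pointer to next free cell */
}

int receive(struct buffer *p)
{
    while (p->in == p->out)            /* buffer empty: spin */
        ;
    int msg = p->message[p->out % N];  /* copy message from buffer cell */
    p->out = p->out + 1;               /* advance pointer to next message */
    return msg;
}

int main(void)
{
    struct buffer b = { .in = 0, .out = 0 };
    send(&b, 42);
    printf("%d\n", receive(&b));       /* single-threaded smoke test */
    return 0;
}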
Implicit assumptions for the correctness of the implementation

1. One sending and one receiving thread; only one thread updates each shared variable.
2. The sender and receiver threads run on different processors, so that spin locks are acceptable.
3. in and out are implemented as integers large enough that they do not overflow (e.g., 64-bit integers).
4. The shared memory used for the buffer provides read/write coherence.
5. The memory provides before-or-after atomicity for the shared variables in and out.
6. The result of executing a statement becomes visible to all threads in program order; no compiler optimizations (e.g., reordering) are applied.
Race conditions

- Race condition → an error that occurs when a device or system attempts to perform two or more operations at the same time, but, because of the nature of the device or system, the operations must be done in the proper sequence to be done correctly.
- Race conditions depend on the exact timing of events and thus are not reproducible:
  - A slight variation of the timing could either remove a race condition or create one.
  - Such errors are very hard to debug.
Two senders execute the code concurrently

[Figure: processor 1 runs thread A and processor 2 runs thread B; both reach the shared buffer and the in and out pointers over the processor-memory bus. Initially in = out = 0 and the buffer is empty. Thread B fills entry 0 with item b at time t1; thread A fills entry 0 with item a at time t2; each thread then increments in (at times t3 and t4), so in ends at 2. Item b is overwritten and lost.]
Another manifestation of race conditions → incrementing a pointer is not atomic

The statement in ← in + 1 is executed as three instructions:

1. L   R1, in
2. ADD R1, 1
3. ST  R1, in

[Figure: threads A and B each execute the three steps, labeled A1–A3 and B1–B3. In a correct execution (A1, A2, A3, then B1, B2, B3) the counter is incremented twice: in → 1 → 2. In an incorrect execution the steps interleave (e.g., both threads load in before either stores it), so both write the same value and one increment is lost: in → 1 → 1.]
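A minimal C demonstration of this lost-update race (not from the slides), using POSIX threads; compile with gcc -pthread:

#include <pthread.h>
#include <stdio.h>

static long in = 0;                /* shared, unprotected counter */

static void *worker(void *arg)
{
    for (int i = 0; i < 1000000; i++)
        in = in + 1;               /* load, add, store: not atomic */
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, worker, NULL);
    pthread_create(&b, NULL, worker, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    /* Expected 2000000; usually prints less, because increments are lost
       when the two threads interleave their load/add/store sequences. */
    printf("in = %ld\n", in);
    return 0;
}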
One more pitfall of the previous implementation of the bounded buffer

- If in and out are long integers (64 or 128 bits), then on a 32-bit processor a load requires two registers, e.g., R1 and R2:

    in = 0x00000000FFFFFFFF
    L R1, in        /* R1 ← 00000000
    L R2, in+1      /* R2 ← FFFFFFFF

- Race conditions could affect a load or a store of the long integer: a thread may observe a half-updated value.
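A sketch of how C11 atomics remove both hazards (the torn load above and the lost update on the previous slide), assuming a C11-capable compiler:

#include <stdatomic.h>
#include <stdint.h>

/* With _Atomic, loads and stores of the 64-bit counter cannot be torn,
   and fetch_add makes the increment a single indivisible operation. */
static _Atomic uint64_t in = 0;

void increment_in(void)
{
    atomic_fetch_add(&in, 1);      /* atomic in ← in + 1 */
}

uint64_t read_in(void)
{
    return atomic_load(&in);       /* never observes a half-written value */
}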
Lock

- Lock → a mechanism to guarantee that a program works correctly when multiple threads execute concurrently:
  - a multi-step operation protected by a lock behaves like a single operation;
  - can be used to implement before-or-after atomicity;
  - a shared variable acting as a flag (traffic light) to coordinate access to a shared variable;
  - works only if all threads follow the rule → check the lock before accessing a shared variable.
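A minimal spin-lock sketch in C11 (an illustration, not the lock construct the course defines later): the atomic test-and-set makes checking and taking the flag a single before-or-after action.

#include <stdatomic.h>

/* A spin lock built on an atomic flag: "red light" while held. */
typedef struct { atomic_flag busy; } lock_t;

void acquire(lock_t *l)
{
    /* test_and_set returns the previous value: spin until this thread
       is the one that flips the flag from clear (green) to set (red). */
    while (atomic_flag_test_and_set(&l->busy))
        ;
}

void release(lock_t *l)
{
    atomic_flag_clear(&l->busy);   /* the light turns green again */
}

/* Usage: lock_t l = { ATOMIC_FLAG_INIT };
   every thread must call acquire(&l) before touching the shared variable
   and release(&l) afterwards; one thread that skips the protocol defeats
   the lock. */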
Deadlocks

- Deadlocks happen quite often in real life, and the proposed solutions are not always logical: "When two trains approach each other at a crossing, both shall come to a full stop and neither shall start up again until the other has gone." → a pearl from Kansas legislation.
- A deadlocked jury.
- A deadlocked legislative body.
[Figure: threads A and B and locks J and K; each thread acquires the two locks in the opposite order, so each ends up holding one lock and waiting for the lock the other holds.]
Deadlocks

- Deadlocks → prevent sets of concurrent threads/processes from completing their tasks.
- How does a deadlock occur → a set of blocked processes, each holding a resource and waiting to acquire a resource held by another process in the set.
- Example: semaphores A and B, initialized to 1

      P0           P1
      wait(A);     wait(B);
      wait(B);     wait(A);

- Aim → prevent or avoid deadlocks.
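The same pattern in C, with POSIX mutexes standing in for the semaphores (a sketch: with unlucky timing each thread holds one lock and blocks forever on the other):

#include <pthread.h>

static pthread_mutex_t A = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t B = PTHREAD_MUTEX_INITIALIZER;

static void *p0(void *arg)
{
    pthread_mutex_lock(&A);    /* P0 holds A ... */
    pthread_mutex_lock(&B);    /* ... and waits for B */
    pthread_mutex_unlock(&B);
    pthread_mutex_unlock(&A);
    return NULL;
}

static void *p1(void *arg)
{
    pthread_mutex_lock(&B);    /* P1 holds B ... */
    pthread_mutex_lock(&A);    /* ... and waits for A: circular wait */
    pthread_mutex_unlock(&A);
    pthread_mutex_unlock(&B);
    return NULL;
}

int main(void)
{
    pthread_t t0, t1;
    pthread_create(&t0, NULL, p0, NULL);
    pthread_create(&t1, NULL, p1, NULL);
    pthread_join(t0, NULL);    /* may never return */
    pthread_join(t1, NULL);
    return 0;
}

Acquiring the two locks in the same order in both threads removes the circular wait.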
Example of a deadlock

- Traffic only in one direction.
- Solution → one car backs up (preempt resources and roll back).
- Several cars may have to be backed up.
- Starvation is possible.
System model

- Resource types R1, R2, …, Rm (CPU cycles, memory space, I/O devices).
- Each resource type Ri has Wi instances.
- Resource access model:
  - request
  - use
  - release
Simultaneous conditions for deadlock

- Mutual exclusion: only one process at a time can use a resource.
- Hold and wait: a process holding at least one resource is waiting to acquire additional resources held by other processes.
- No preemption: a resource can be released only voluntarily by the process holding it (presumably after that process has finished).
- Circular wait: there exists a set {P0, P1, …, Pn} of waiting processes such that P0 is waiting for a resource held by P1, P1 is waiting for a resource held by P2, …, Pn–1 is waiting for a resource held by Pn, and Pn is waiting for a resource held by P0.