CMU-CS-85-168

Limitations of Synchronous Communication with Static Process Structure in Languages for Distributed Computing

Barbara Liskov
M.I.T. Laboratory for Computer Science

Maurice Herlihy
C.M.U. Computer Science Department

Lucy Gilbert
Autographix, Inc.

15 October 1985
Abstract

Modules in a distributed program are active, communicating entities. A language for distributed programs must choose a set of communication primitives and a structure for processes. This paper examines one possible choice: synchronous communication primitives (such as rendezvous or remote procedure call) in combination with modules that encompass a fixed number of processes (such as Ada tasks or UNIX processes). An analysis of the concurrency requirements of distributed programs suggests that this combination imposes complex and indirect solutions to common problems and thus is poorly suited for applications such as distributed programs in which concurrency is important. To provide adequate expressive power, a language for distributed programs should abandon either synchronous communication primitives or the static process structure.
Copyright © 1985 Barbara Liskov, Maurice Herlihy, and Lucy Gilbert

This research was sponsored by the Defense Advanced Research Projects Agency (DOD), ARPA Order No. 3597, monitored by the Air Force Avionics Laboratory under Contract F33615-81-K-1539.

The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Defense Advanced Research Projects Agency or the US Government.
1. Introduction
A distributed system consists of multiple computers (called nodes) that communicate through a network. A programming language for distributed computing must choose a set of communication primitives and a structure for processes. In this paper, we examine one possible choice: synchronous communication primitives (such as rendez-vous or remote procedure call) in combination with modules that encompass a fixed number of processes (such as Ada tasks or UNIX processes). We describe and evaluate the program structures needed to manage certain common concurrency control problems. We are concerned here not with computational power, but with expressive power: the degree to which common problems may be solved in a straightforward and efficient manner. We conclude that the combination of synchronous communication with static process structure imposes complex and indirect solutions, and therefore that it is poorly suited for applications such as distributed programs in which concurrency is important. To provide adequate expressive power, a language for distributed programming should abandon either synchronous communication primitives or the static process structure.
Our analysis is based on the client/server model, in which a distributed program is organized as a collection of modules, each of which resides at a single node in the network. Modules do not share data directly; instead they communicate through messages. Modules act as clients and as servers. A client module makes use of services provided by other modules, while a server module provides services to others by encapsulating a resource, providing synchronization, protection, and crash recovery. The client/server model is hierarchical: a particular module may be both a client and a server. Although other models for concurrent computation have been proposed, the hierarchical client/server model has come to be the standard model for structuring distributed programs. Lauer and Needham [16] argued that the client/server model is equivalent (with respect to expressive power) to a model of computation in which modules communicate through shared data. Nevertheless, the client/server model is more appropriate for distributed systems because speed and bandwidth are typically more critical in the connection between a module and its local data than between distinct modules. Although certain specialized applications may fit naturally into alternative structures such as pipelines or distributed coroutines, the hierarchical client/server model encompasses a large class of distributed programs.
This paper is organized as follows. In Section 2, we propose a taxonomy for concurrent systems, focusing on primitives for intermodule communication and the process structure of modules. In Section 3, we formulate the basic concurrency requirement for modules: if one activity within a module becomes blocked, other activities should be able to make progress. In Section 4, we describe and evaluate the program structures needed to satisfy the concurrency requirement using synchronous communication primitives with a static process structure. We conclude that the only reasonable solution to certain problems requires using the available primitives to simulate asynchronous communication primitives and/or a dynamic process structure. In Section 5, we summarize our results and make suggestions about linguistic support for distributed computing.
2. Choices for Communication and Process Structure
In this section we review the range of choices for communication and process structure and identify the choices made in various languages.
2.1. Communication
There are two main alternatives for communication primitives: synchronous and asynchronous. Synchronous mechanisms provide a single primitive for sending a request and receiving the associated response. The client's process is blocked until the server's response is received. Examples of synchronous mechanisms include procedure call, remote procedure call [21], and rendez-vous [7]. Languages that use synchronous mechanisms for communication include Mesa [20], DP [5], Ada [7], SR [1], MP [24], and Argus [18].¹
Asynchronous communication mechanisms typically take the form of distinct send and receive primitives for originating requests and acquiring responses. A client process executing a send is blocked either until the request is constructed, or until the message is delivered (as in CSP [13]). The client acquires the response by executing the receive primitive. After executing a send and before executing the receive, the client may undertake other activity, perhaps executing other sends and receives. Languages that use send/receive include CSP and PLITS [8].
¹In addition to remote call, SR provides the ability to send a request without waiting for the response.
2.2. Process Structure

There are two choices for process structure within a module. Modules having a static structure encompass a fixed number of threads of control (usually one). The programmer is responsible for multiplexing these threads among a varying number of activities. Examples of modules having a static process structure include Ada tasks, DP monitors, and CSP processes, where there is just one process per module, and SR, where there may be multiple processes per module. The multiplexing mechanisms available to the programmer include guarded commands and condition variables.
An alternative to the static structure is the dynamic structure, in which a variable number of processes may execute within a module. (Note that a module's process structure is independent of the encompassing system's process structure; the number of processes executing within a module may vary dynamically even if the overall number of processes in the system is fixed.) The system is responsible for scheduling, but the programmer must synchronize the use of shared data. Ada, MP, Argus, and Mesa are examples of languages in which the basic modular unit encompasses a dynamic process structure.
2.3. Combinations

All four combinations of communication and process structure are possible. Figure 2-1 shows the combinations provided by several languages. In Argus, the basic modular unit is the guardian, which has a dynamic process structure. In MP, a language for programming highly parallel machines, shared data are encapsulated by frames, which encompass a dynamic process structure. In CSP, the process itself is the basic modular unit. A monitor in DP contains a single thread of control that is implicitly multiplexed among multiple activities. A resource in SR can contain a fixed number of processes. In Mesa monitors, the processes executing in external procedures have a dynamic structure. Access to shared data is synchronized through entry procedures that acquire the monitor lock. In Starmod [6], a dynamic process model with synchronous communication can be obtained through the use of processes; a static process model with asynchronous primitives can be obtained through the use of ports.
In Ada, a task is a module with a static process structure. A server task exports a collection of entries, which are called directly by clients. A server task's process structure is static because a task encompasses a single thread of control. (Although a task can create subsidiary tasks dynamically, these subsidiary tasks cannot be addressed as a group.) The programmer of the server task uses select statements to multiplex the task among various clients. Note that a task family (an array of tasks) does not qualify as a module, because the entire family cannot be the target of a call; instead a call must be made to a particular member of a family.
Alternatively, a distributed Ada program might be organized as a collection of packages that communicate through interface procedures. This alternative, however, can be rejected as violating the intent of the Ada designers. The Ada Rationale [14] explicitly states that internode communication is by entry calls to tasks, not by procedure calls to packages.² This alternative can also be rejected on technical grounds; we argue in Section 4.3 that certain extensions to Ada are necessary before packages can be used effectively as modules in distributed programs.

²"Note ... that on distributed systems (where tasks do not share a common store) communication by procedure calls may be disallowed, all communication being achieved by entry calls." (Page 11-40.)
We are not aware of any languages that provide asynchronous communication with dynamic processes. Although such languages may exist, this combination appears to provide an embarrassment of riches not needed for expressive power.
                    Static                  Dynamic

    Synchronous     Ada Tasks, DP, SR       Argus, Mesa, Starmod, MP

    Asynchronous    CSP, PLITS, Starmod

    Figure 2-1: Communication and Process Structure in Some Languages
3. Concurrency Requirements
In this section we discuss the concurrency requirements of the modules that make up a distributed program. Our discussion centers on the concurrency needs of modules that act as both clients and servers; any linguistic mechanism that provides adequate concurrency for such a module will also provide adequate concurrency for a module that acts only as a client or only as a server.
The principal concurrency requirement is the following: if one activity within a module becomes blocked, other activities should be able to make progress. A system in which modules are unable to set aside blocked activities may suffer from unnecessary deadlocks and low throughput. For example, suppose a server cannot carry out one client's request because another client has locked a needed resource. If the server then becomes blocked, rendering it unable to accept a request to release the resource, then a deadlock will occur that was otherwise avoidable. Even when there is no prospect of deadlock, a module that remains idle when there is work to be done is a performance bottleneck.
We can distinguish two common situations in which an activity within a module might be blocked:

1. Local Delay: A local resource needed by the current activity is found to be unavailable. For example, a file server may discover that the request on which it is working must read a file that is currently open for writing by another client. In this situation, the file server should temporarily set aside the blocked activity, turning its attention to requests from other clients.

2. Remote Delay: The module makes a call to another module, where a delay is encountered. The delay may simply be the communication delay, which can be large in some networks. Alternatively, the delay may occur because the called module is busy with another request or must perform considerable computing in response to the request. While the calling module is waiting for a response, it should be able to work on other activities.
We do not discuss the use of I/O devices in our analysis below, but it is worth noting that remote delays are really I/O delays, and remarks about one are equally applicable to the other.
4. Program Structures
We now discuss several program structures that might be used to meet the concurrency requirement stated above. First, we review several techniques used to make progress in the presence of local delays. We conclude that the synchronous communication/static process combination provides adequate expressive power for coping with local delays (although the particular mechanisms provided by Ada are awkward for this purpose). In the second part, however, we argue that this combination provides inadequate expressive power for coping with remote delays.

In the following discussion, examples are shown as Ada tasks. We chose Ada because we assume it will be familiar to many readers. Although we call attention to some problems with Ada, this paper is not intended to be an exhaustive analysis of the suitability of Ada for distributed computing (see, however, [23, 9, 15]). Instead, the examples are intended to illustrate language-independent problems with the synchronous communication/static process combination.
4.1. Local Delay

Our evaluation of techniques for coping with local delays is based on work of Bloom [4], who argued that a synchronization mechanism provides adequate expressive power to cope with local delays only if it permits scheduling decisions to be made on the basis of the following information:

• the name of the called operation,
• the order in which requests are received,
• the arguments to the call, and
• the state of the resource.

We use these criteria to evaluate techniques for avoiding local and remote delays.
Languages combining static process structure with synchronous communication provide two distinct mechanisms for making progress in the presence of local delay:

1. Conditional Wait. The conditional wait is the method used in monitors. An activity that encounters a local delay relinquishes the monitor lock and waits on a queue. While that activity is suspended, other activities can make progress after acquiring the monitor lock. Conditional wait is quite adequate for coping with local delays because scheduling decisions can employ information from all the categories listed above.

2. Avoidance. Avoidance is used in languages such as Ada and SR. Boolean expressions called guards are used to choose the next request. A server will accept a call only after it has ascertained that the local resources needed by the new request are currently available. Avoidance is adequate for coping with local delay as long as the guard mechanism is sufficiently powerful. In SR, for example, guards may employ information from all of Bloom's categories. In Ada, however, guards for accept statements cannot depend on the values of arguments to the call, and therefore the Ada guarded command mechanism lacks the expressive power needed to provide a general solution to the problem of local delays.
The shortcomings of the Ada guarded command mechanism can be illustrated by the following simple example, which will be used throughout the paper. A Disk Scheduler module synchronizes access to a disk through REQUEST and RELEASE operations (cf. [12, 4]). Before reading or writing to the disk, a client calls REQUEST to identify the desired track. When REQUEST returns, the client accesses the disk and calls RELEASE when the access is complete. The scheduler attempts to minimize changes in the direction the disk head is moving. A request for a track is postponed if the track does not lie in the head's current direction of motion. The direction is reversed when there are no more pending requests in that direction.
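The client's side of this protocol is short; a minimal sketch (TRACK_ID and READ_OR_WRITE are hypothetical names standing for the client's track and its actual disk access):

    DISK_SCHEDULER.REQUEST(TRACK_ID);   -- returns only when access is granted
    READ_OR_WRITE(TRACK_ID);            -- hypothetical disk access
    DISK_SCHEDULER.RELEASE;             -- let the scheduler grant the next request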
Perhaps the most natural implementation of the disk scheduler server in Ada would provide two entries:

    task DISK_SCHEDULER is
        entry REQUEST(ID: in TRACK);
        entry RELEASE;
    end DISK_SCHEDULER;
Internally, the server keeps track of the current direction of motion and the current head position:

    task body DISK_SCHEDULER is
        type STATUS is (UP, DOWN, NEUTRAL);
        POSITION: TRACK;      -- current head position
        DIRECTION: STATUS;    -- current direction of motion
        ...
    end DISK_SCHEDULER;
A naive attempt at scheduling is:

    -- Warning: not legal Ada!
    select
        when (DIRECTION = UP and POSITION <= ID) =>
            -- Move up.
            accept REQUEST(ID: in TRACK) do ...
    or
        when (DIRECTION = DOWN and POSITION >= ID) =>
            -- Move down.
            accept REQUEST(ID: in TRACK) do ...
    or
        when (DIRECTION = NEUTRAL) =>
            -- Move either way.
            accept REQUEST(ID: in TRACK) do ...
    end select;
This program fragment is illegal, however, because Ada does not permit an argument value (i.e., ID) to be used in a when clause. As a consequence of this restriction, modules whose scheduling policies depend on argument values must be implemented in a more roundabout manner.

In the remainder of this section, we use the disk scheduler example to illustrate five alternative techniques for avoiding local delay. We argue that although some of these techniques will help in special cases, only one of them, the Early Reply structure, is powerful enough to provide a fully general solution to the local delay problem.
1. Entry Families. It is possible to treat an argument specially by using an indexed family of entries. If the index is viewed as an argument to the call, entry families provide a limited ability to accept requests based on arguments.

For example, REQUEST could be implemented as a family containing an entry for each track. The index provides an indirect way to incorporate the track number into the entry name, and so, for example, the task can delay accepting a request to write to a track that lies in the wrong direction from the disk head.
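For concreteness, the family might be declared as follows; this declaration is a sketch not shown in the original (it assumes TRACK is a discrete type):

    task DISK_SCHEDULER is
        entry REQUEST(TRACK);   -- entry family: one parameterless entry per track
        entry RELEASE;
    end DISK_SCHEDULER;

The scheduling loop can then poll the family in the head's current direction: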
    case DIRECTION is
        when UP =>
            for I in POSITION..TRACK'LAST loop
                accept REQUEST(I) do ...
            end loop;
        when DOWN =>
            for I in reverse TRACK'FIRST..POSITION loop
                accept REQUEST(I) do ...
            end loop;
        when NEUTRAL =>
            for I in TRACK'RANGE loop
                accept REQUEST(I) do ...
            end loop;
    end case;
A source of awkwardness here is that the accept statement can refer only to values of local variables (I in this case). The first two arms of the case statement must poll each of the entries in turn, and in the third arm, the entire family must either be polled sequentially as shown, or each track must have its own accept statement. Neither choice is attractive, particularly if the number of tracks is large. Similar problems have been noted by [27].

Perhaps a more important limitation of entry families is that they are effective only when the subsumed argument's type is a discrete range. For example, entry families could not be used if the range of disk tracks were to vary dynamically, or if requests were granted on the basis of a numeric priority, as in the "shortest job first" scheduler of [26].
2. Refusal. A server that accepts a request it is unable to service might return an exception to the client to indicate that the client should try again later.

Refusal can eliminate local delays, but it is an awkward and potentially inefficient technique. When a client receives such an exception, it must decide whether to try again or to pass the exception to a higher level. If it tries again, it must decide when to do so, and how many times to retry before giving up. Note that such decisions present a modularity problem, because they are likely to change as the underlying system changes. For example, if a server's load increases over time, it will refuse more requests, and its clients will be forced to adjust their timeout and retry strategies. The server might also have to roll back work already done on the client's behalf, work that may have to be redone later, perhaps several times.
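A client-side retry fragment under this scheme might be sketched as follows; BUSY, MAX_TRIES, RETRY_WAIT, and GRANTED are hypothetical names, and the fixed backoff is only illustrative:

    GRANTED := FALSE;
    for ATTEMPT in 1 .. MAX_TRIES loop
        begin
            DISK_SCHEDULER.REQUEST(TRACK_ID);   -- server raises BUSY if it refuses
            GRANTED := TRUE;
            exit;
        exception
            when BUSY => delay RETRY_WAIT;      -- refused: wait, then try again
        end;
    end loop;
    if not GRANTED then
        raise BUSY;   -- give up and pass the exception to a higher level
    end if;

Every such policy decision is embedded in the client, which is exactly the modularity problem noted above.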
3. Nested Accept. A server that encounters a local delay might accept another request in a nested accept statement.

The following code fragment shows an unlikely way of programming a disk scheduler. The server indiscriminately accepts calls to REQUEST. When it discovers that it has accepted a request for a track that lies in the wrong direction, it uses a nested accept statement to accept a different request. When the disk head direction changes, the server resumes processing the original request.
    accept REQUEST(ID: in TRACK) do
        -- If the direction is wrong, then accept something else.
        while DIRECTION = UP and ID < POSITION loop
            select
                accept REQUEST(ANOTHER_ID: in TRACK) do ...
            or
                terminate;
            end select;
        end loop;
        ...
    end REQUEST;
At first glance the nested accept technique might appear similar to the conditional wait technique supported by monitors, in which an activity that is unable to make progress is temporarily set aside in favor of another activity. The critical difference between the two techniques is that the use of nested accept statements requires that the new call be completed before the old call can be completed, an ordering that may not correspond to the needs of applications. Monitors do not impose the same last-in-first-out ordering on activities. A second difficulty with nested accept statements is deciding which request to accept. For example, the disk scheduler will be no better off if it accepts another request for a track that lies in the wrong direction. Note that if it were possible to decide which requests to accept after getting into trouble, then it should have been possible to accept only non-troublesome requests in the first place, thus avoiding the use of the nested accept. This observation suggests that the nested accept provides no expressive power that is not otherwise available.
4. Task Families. Instead of using a single task to implement a server, one might use a family (i.e., an array) of identical tasks.³ These tasks would together constitute the server, and would synchronize with one another in their use of the server's data. If one task encounters a local or remote delay, the server need not remain idle if another task is still able to service requests.

³DP and SR provide analogous mechanisms.
The principal difficulty with task families lies in allocating tasks among clients. As mentioned above, a client cannot make a call to a task family as a whole; instead it is necessary to allocate family members to the clients in advance of the call. If the server's clients can be identified in advance, then it is possible to have a task family member for each client, and each client can be permanently assigned its own private task. This structure can avoid local delays, but such static assignments are not feasible for many applications.
If static allocation is not possible, then two alternative methods might be used. A client could choose a task on its own, perhaps at random, and then use a timed or conditional entry call to determine if it is free. If the task is not free, the client could try again. Such a method is cheap if contention for tasks in the family is low, but it can result in arbitrary delays when contention is high.
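In Ada the first method is a conditional entry call; a minimal sketch (WORKER, PICK, NEXT_CHOICE, and REQ are hypothetical names):

    select
        WORKER(PICK).SERVE(REQ);     -- rendezvous starts only if this member is free
    else
        PICK := NEXT_CHOICE(PICK);   -- member busy: choose another and try again
    end select;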
Alternatively, the server could provide a manager task that assigns tasks to clients. Although the manager can avoid conflicts in the use of family members, extra messages are needed when a task is allocated or freed. A manager is expensive if a task is used just once for each allocation; the cost decreases as the number of uses per allocation increases.
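Such a manager might be sketched as follows (hypothetical names: MEMBER_INDEX indexes the family, FAMILY_SIZE is its size, and the linear search is only illustrative):

    task MANAGER is
        entry ALLOCATE(WHICH: out MEMBER_INDEX);
        entry FREE(WHICH: in MEMBER_INDEX);
    end MANAGER;

    task body MANAGER is
        IN_USE: array (MEMBER_INDEX) of BOOLEAN := (others => FALSE);
        AVAILABLE: NATURAL := FAMILY_SIZE;
    begin
        loop
            select
                when AVAILABLE > 0 =>
                    accept ALLOCATE(WHICH: out MEMBER_INDEX) do
                        for I in MEMBER_INDEX loop
                            if not IN_USE(I) then
                                WHICH := I;
                                IN_USE(I) := TRUE;
                                exit;
                            end if;
                        end loop;
                    end ALLOCATE;
                    AVAILABLE := AVAILABLE - 1;
            or
                accept FREE(WHICH: in MEMBER_INDEX) do
                    IN_USE(WHICH) := FALSE;
                end FREE;
                AVAILABLE := AVAILABLE + 1;
            or
                terminate;
            end select;
        end loop;
    end MANAGER;

Even with this structure, each allocation and release costs an extra rendezvous with MANAGER, which is the overhead described above.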
In either the static or dynamic case, task families require enough family members to make the probability of contention acceptably low. Since most tasks in the family are likely to be idle at any time, the expense of the technique depends on how cheaply idle tasks can be implemented.

If the number of clients is static, task families do provide a solution to local delay. Nevertheless, they do not provide a general solution because often the number of clients is dynamic (or unknown). In this case, task families can at best provide a probabilistic guarantee against delay.
5. Early Reply. A fully general solution to the concurrency problem can be constructed if synchronous primitives are used to simulate asynchronous primitives.

A server that accepts a request from a client simply records the request and returns an immediate acknowledgment. The server carries out the request at some later time, and conveys the result to the client through a distinct exchange of messages. Numerous examples employing Early Reply appear in the literature (e.g. see [10, 26, 25]).
A disk scheduler task employing Early Reply would export three entries: a REGISTER entry, a REQUEST entry family, and a RELEASE entry. The client calls REGISTER to indicate the disk track desired. The server records the request and responds with an index into the REQUEST entry family.

    entry REGISTER(ID: in TRACK; CLIENT_ID: out INDEX);
An outline of a disk scheduler server employing early reply to avoid local delays appears in Figure 4-1. Although early reply provides a general technique for avoiding local delays, it is potentially inefficient and awkward. Efficiency may suffer because an additional remote call is needed whenever a delay is possible (e.g. when requesting a disk track). Program clarity may suffer because the server must perform its own scheduling, explicitly multiplexing its thread of control among multiple clients.

When the server is ready to grant a client's request, it accepts a call to that client's member of the REQUEST entry family. For this structure to work, the client must follow a protocol in which it first registers its request, and then makes a second call to receive its response. To avoid delaying the server, the client should follow the REGISTER call with an immediate call to REQUEST:

    SERVER.REGISTER(TRACK_ID, MY_ID);
    SERVER.REQUEST(MY_ID);
Most of the techniques discussed in this paper, including Early Reply, work best if clients access server tasks indirectly through procedures exported by a package local to the client's site. The package enhances safety by enforcing protocols and by ensuring that identifiers issued to clients are not misused.
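Such a client-site package might be sketched as follows (hypothetical names; it hides the index issued by the server and enforces the register-then-request order):

    package DISK is
        procedure ACQUIRE(ID: in TRACK);
        procedure FREE;
    end DISK;

    package body DISK is
        MY_ID: INDEX;
        procedure ACQUIRE(ID: in TRACK) is
        begin
            SERVER.REGISTER(ID, MY_ID);   -- obtain this client's family index
            SERVER.REQUEST(MY_ID);        -- immediate second call, per the protocol
        end ACQUIRE;
        procedure FREE is
        begin
            SERVER.RELEASE;
        end FREE;
    end DISK;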
4.2. Remote Delay
We now go on to discuss the problem of remote delays. We do not claim that it is impossible to avoid remote delay under the synchronous communication/static process structure combination. We argue instead that this combination imposes complex and indirect solutions to common problems that arise in distributed programs. To illustrate this point, we review the techniques discussed above for coping with local delay. The only technique that provides a general solution to the remote delay problem is Early Reply. We focus on two servers: the higher-level server called by a client, and a lower-level server called by the higher-level server. Most of the techniques previously considered for coping with local delay are clearly inadequate for coping with remote delay:
• Conditional Wait. The inability of monitors to cope with remote delay is known as the nested monitor call problem [19, 11]. When a monitor makes a call to another monitor, the calling activity retains the monitor lock, and the monitor must remain idle while the call is in progress.

• Avoidance. Remote delays cannot be eliminated by avoidance. A server cannot choose to avoid a client's request just because it may lead to a remote delay. The lower-level server could avoid accepting the higher-level server's call if it would otherwise encounter a local delay, but the higher-level server is delayed in the meantime. Avoidance can thus lead to both deadlock and performance problems.
    task SERVER is
        entry REGISTER(ID: in TRACK; CLIENT_ID: out INDEX);
        entry REQUEST(INDEX);
        entry RELEASE;
    end SERVER;

    task body SERVER is
        ...    -- declarations of BUSY, POSITION, NEXT_CLIENT, NEXT_TRACK, queues
    begin
        BUSY := false;
        NEXT_CLIENT := UNKNOWN;
        loop
            select
                -- Accept and enqueue new requests.
                accept REGISTER(ID: in TRACK; CLIENT_ID: out INDEX) do
                    CLIENT_ID := ...;    -- Allocate and return index.
                    ...                  -- Enqueue request and client id.
                end REGISTER;
                NEXT_CLIENT := ...;      -- Compute next client
                NEXT_TRACK := ...;       --   and next track.
            or
                when not BUSY and NEXT_CLIENT /= UNKNOWN =>
                    accept REQUEST(NEXT_CLIENT);
                    POSITION := NEXT_TRACK;
                    BUSY := true;
            or
                when BUSY =>
                    accept RELEASE;
                    BUSY := false;
                    NEXT_CLIENT := ...;  -- Change direction
                    NEXT_TRACK := ...;   --   if necessary.
            end select;
        end loop;
    end SERVER;

    Figure 4-1: Using Early Reply to Avoid Local Delays
• Refusal. Refusal is no more effective for dealing with remote delays than it is for dealing with local delays.
• Nested Accept. One might attempt to avoid remote delays through the use of nested accept statements in the following way: a task creates a subsidiary task to carry out the remote call while the creating task executes a nested accept. However, the necessity of finishing the nested call before responding to the outer call renders this technique of dubious value.
• Task Families. The remarks made about task families when discussing local delays apply to remote delays as well. Task families work well when the number of clients is static and predetermined. Otherwise, task families provide only a probabilistic guarantee against delay. Probabilistic guarantees concerning performance might well suffice for some applications, but probabilistic guarantees concerning deadlock seem less useful.
Only Early Reply provides a general solution to the problem of remote delay, but the efficiency and program complexity problems noted above become more acute. If the lower-level server employs early reply, then the higher-level server will be subject to remote delay if it requests its response prematurely. To avoid waiting, the higher-level server might use periodic timed or conditional entry calls to check for the response. This structure is potentially inefficient, since the higher-level server is making multiple remote calls, and it may also yield complex programs because the higher-level server must explicitly multiplex these calls with other activities. An alternative structure in which the lower-level server returns its response by calling an entry provided by the higher-level server is no better. To avoid delaying the lower-level server, the higher-level server must accept the call in a timely fashion, and the need for explicit multiplexing is unchanged.
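With Ada's timed entry call, such polling might be sketched as follows (hypothetical names; the interval and SERVE_OTHER_CLIENTS stand in for the higher-level server's other work):

    loop
        select
            LOWER_SERVER.REQUEST(MY_ID);   -- response ready: pick it up
            exit;
        or
            delay 0.1;                     -- not ready within the interval
        end select;
        SERVE_OTHER_CLIENTS;               -- multiplex other activities by hand
    end loop;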
These shortcomings of Early Reply can be alleviated by the introduction of dynamic task creation in the higher-level server. In this structure, the higher-level server uses Early Reply to communicate with clients. When the higher-level server receives a request, it creates a subsidiary task (or allocates an existing task) and returns the name of that task to the client. The structure is illustrated in Figure 4-2. The client later calls the subsidiary task to receive the results of its request.

    SERVER.REQUEST(..., TASK_ID);
    TASK_ID.RESPONSE(...);
The subsidiary task carries out the client's request, making calls to lower-level servers, and using shared data to synchronize with other subsidiary tasks within the higher-level server. When work on the request is finished, the subsidiary task accepts the client's second call to pick up the response.

This structure avoids the program complexity problems associated with Early Reply. The higher-level server need not perform explicit multiplexing among clients because the tasking mechanism itself multiplexes the processor(s) among the various activities, although, as mentioned above, subsidiary tasks must synchronize through shared data. This structure also alleviates the need to synchronize with lower-level servers. The server as a whole is not affected if one subsidiary task is delayed, because other subsidiary tasks can still work for other clients.
    task SERVER is
        entry REQUEST(..., HELPER: out access HELPER_TASK);
    end SERVER;

    task type HELPER_TASK is
        entry FORWARD_REQUEST(...);    -- From server.
        entry RESPONSE(...);           -- From client.
    end HELPER_TASK;

    task body SERVER is

        task body HELPER_TASK is
        begin
            accept FORWARD_REQUEST(...) do
                ...    -- Put arguments in local variables.
            end FORWARD_REQUEST;
            ...    -- Do work for client, including lower-level calls.
            accept RESPONSE(...) do
                ...    -- Return response to client.
            end RESPONSE;
        end HELPER_TASK;

    begin
        loop
            select
                accept REQUEST(..., HELPER: out access HELPER_TASK) do
                    HELPER := new HELPER_TASK;
                    HELPER.FORWARD_REQUEST(...);
                end REQUEST;
            or
                terminate;
            end select;
        end loop;
    end SERVER;

    Figure 4-2: Early Reply and Dynamic Task Creation
One disadvantage of this structure is that creating and destroying the server tasks may be expensive, although this cost can be avoided by reusing the subsidiary tasks.⁴ This structure also requires two message exchanges for each client/server interaction. The most striking fact about this mechanism, however, is that we are using synchronous rendez-vous to simulate asynchronous send and receive primitives, and dynamic task creation to simulate a dynamic process structure. The need to use one set of primitives to simulate another suggests that the combination of the static process structure with synchronous communication does not provide adequate expressive power to satisfy the concurrency requirements of distributed programs.

⁴In the MicroVAX II implementation of Argus, it takes only 160 microseconds to create, run, and destroy a null process.
4.3. Remarks about Ada

Although this paper is not intended to be a systematic critique of Ada itself, it does raise some questions about the suitability of Ada for distributed computing. One way to circumvent the limitations of Ada tasks is to use packages instead of tasks as the basic modular unit for constructing servers. A server package would export a collection of interface procedures, which would be called directly by clients. These procedures could synchronize with one another by explicit calls to entries of tasks private to the package. (This structure is similar to a Mesa monitor [20].) A server package would solve the concurrency problem by providing a dynamic process structure; an arbitrary number of client processes can be executing within the package's interface procedures. The programmer is responsible for synchronizing access to shared data, but not for scheduling the processes.
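A minimal sketch of this structure (hypothetical names): client processes execute the interface procedure concurrently, and a private task serializes only the manipulation of shared data:

    package COUNTER_SERVER is
        procedure INCREMENT;
    end COUNTER_SERVER;

    package body COUNTER_SERVER is
        COUNT: INTEGER := 0;   -- shared data

        task MUTEX is
            entry ACQUIRE;
            entry RELEASE;
        end MUTEX;

        task body MUTEX is
        begin
            loop
                select
                    accept ACQUIRE;   -- grant the lock; hold others out
                    accept RELEASE;   -- until the holder releases it
                or
                    terminate;
                end select;
            end loop;
        end MUTEX;

        procedure INCREMENT is
        begin
            MUTEX.ACQUIRE;
            COUNT := COUNT + 1;   -- only the shared update is serialized
            MUTEX.RELEASE;
        end INCREMENT;
    end COUNTER_SERVER;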
Nevertheless, Ada packages suffer from critical limitations unrelated to the main focus of this paper. In applications such as distributed computing, it is often necessary to store module names in variables, and to pass them as parameters. For example, the Cedar remote procedure call facility [2] uses Grapevine [3] as a registry for server instances and types. Similarly, dynamic reconfiguration, replacing one module with another [15], may be necessary to enhance functionality, to accommodate growth, and to support fault-tolerance. Ada permits names of tasks (i.e. their access values) to be stored in variables and used as parameters, but not names of packages or procedures. Consequently, a distributed program in which the package is the basic modular unit could not support Grapevine-like registries of services or dynamic reconfiguration. These observations suggest that Ada would provide better support for distributed programs if it were extended to treat packages as first-class objects.
5. Discussion
We have argued that languages that combine synchronous communication primitives with a static process structure do not provide adequate expressive power for constructing distributed programs. Although our discussion has focused on distributed programs, our conclusions are valid for any concurrent program organized as clients and servers (e.g. interprocess communication in UNIX [22] or programming for highly parallel machines [24]).

Languages that combine synchronous communication with a static process structure can provide adequate expressive power for avoiding local delays, in which an activity is blocked because a local resource is unavailable, as illustrated by monitors and by languages with a fully general guarded command mechanism. (Ada's guarded accept mechanism, however, lacks expressive power because it does not allow the call's arguments to be used in guards.) The languages under consideration do not, however, provide adequate expressive power for avoiding remote delays, in which an activity is blocked pending the completion of a remote call. For monitors, this problem has come to be known as the nested monitor call problem [11, 19]. The thesis of this paper is that languages in which modules combine synchronous communication with a static process structure suffer from an analogous problem.
We examined a number of techniques to avoid such delays using Ada tasks. Although certain techniques are adequate for particular applications, e.g. entry families, the only fully general solution requires using synchronous primitives to simulate asynchronous primitives, and dynamic task creation to simulate a dynamic process structure. The need to use one set of primitives to simulate another is evidence that the original set lacks expressive power.
We believe that it is necessary to abandon either synchronous communication or static process structure, but a well-designed language need not abandon both. If the language provides asynchronous communication primitives, in which the sender need not wait for the receiver to accept the message, then the time spent waiting for the early reply in our examples would be saved. In addition, the lower-level server need not be delayed waiting for the higher-level server to pick up the response, and only two messages would be needed. The need to perform explicit multiplexing remains a disadvantage of this choice of primitives.
Alternatively, if a language provides dynamic process creation within a single module, as in Argus and Mesa, then the advantages of synchronous communication can be retained. When a call message arrives at a module, a new process is created automatically (or allocated from a pool of processes) to carry out the request. When the process has completed the call, a reply message is sent back to the caller, and the process is destroyed or returned to the process pool. Only two messages are needed to carry out the call. The new process has much the same function as the subsidiary tasks created by the server in Figure 4-2. It ensures that the module is not blocked even if the request encounters a delay. The process must synchronize with other processes in the module, but all the processes are defined within a single module, which facilitates reasoning about correctness. Finally, a dynamic process structure can mask delays that result from the use of local input/output devices, and may permit multiprocessor nodes to be used to advantage. We think synchronous communication with dynamic processes is a better choice than asynchronous communication with static processes; a detailed justification for this opinion is given in [17].
A language mechanism cannot be considered to have adequate expressive power if common problems require complex and indirect solutions. Although synchronous communication with static process structure is computationally complete, it is not sufficiently expressive. Synchronous communication is not compatible with the static process structure, and languages for distributed programs must abandon one or the other.
Acknowledgments
The authors would like to thank Beth Bottos, Larry Rudolph, and the members of the Argus design group, especially Paul R. Johnson, Robert Scheifler, and William Weihl, for their comments on earlier drafts of this paper.
References
[1] Andrews, G. R. Synchronizing Resources. ACM Transactions on Programming Languages and Systems 3(4):405-430, October 1981.

[2] Birrell, A. D., and Nelson, B. J. Implementing Remote Procedure Calls. ACM Transactions on Computer Systems 2(1):39-59, February 1984.

[3] Birrell, A. D., Levin, R., Needham, R., and Schroeder, M. Grapevine: an Exercise in Distributed Computing. Communications of the ACM 25(4):260-274, April 1982.

[4] Bloom, T. Evaluating Synchronization Mechanisms. In Proceedings of the Seventh Symposium on Operating Systems Principles, ACM SIGOPS, December 1979.

[5] Brinch Hansen, P. Distributed processes: a concurrent programming concept. Communications of the ACM 21(11):934-941, November 1978.

[6] Cook, R. P. Starmod: a language for distributed programming. IEEE Transactions on Software Engineering 6(6):563-571, November 1980.

[7] Dept. of Defense. Reference manual for the Ada programming language. ANSI/MIL-STD-1815A-1983, 1983.

[8] Feldman, J. A. High Level Programming for Distributed Computing. Communications of the ACM 22(6):353-368, June 1979.

[9] Gehani, N. Concurrency in Ada and Multicomputers. Computer Languages 7(2), 1982.

[10] Gehani, N. Ada: an advanced introduction. Prentice-Hall, Englewood Cliffs, New Jersey, 1984.

[11] Haddon, B. K. Nested monitor calls. ACM Operating Systems Review 11(4):18-23, October 1977.

[12] Hoare, C. A. R. Monitors: an operating system structuring concept. Communications of the ACM 17(10):549-557, October 1974.

[13] Hoare, C. A. R. Communicating sequential processes. Communications of the ACM 21(8):666-677, August 1978.

[14] Ichbiah, J., et al. Rationale for the design of the Ada programming language. SIGPLAN Notices 14(6), June 1979.

[15] Knight, J. C., and Urquhart, J. I. A. On the implementation and use of Ada on fault-tolerant distributed systems. Ada Letters 4(3), November 1984.

[16] Lauer, P. E., and Needham, R. M. On the duality of operating systems structures. In Proc. Second International Symposium on Operating Systems Structures, October 1978. Reprinted in Operating Systems Review 13(2):3-19, April 1979.

[17] Liskov, B., and Herlihy, M. Issues in Process and Communication Structure for Distributed Programs. In Proceedings of the Third IEEE Symposium on Reliability in Distributed Software and Database Systems, October 1983.

[18] Liskov, B., and Scheifler, R. Guardians and actions: linguistic support for robust, distributed programs. ACM Transactions on Programming Languages and Systems 5(3):381-404, July 1983.

[19] Lister, A. The problem of nested monitor calls. ACM Operating Systems Review 11(2):5-7, July 1977.

[20] Mitchell, J. G., Maybury, W., and Sweet, R. Mesa language manual, version 5.0. Technical Report CSL-79-3, Xerox Palo Alto Research Center, April 1979.

[21] Nelson, B. Remote Procedure Call. Technical Report CMU-CS-81-119, Carnegie-Mellon University, 1981. Ph.D. Thesis.

[22] Rashid, R. An inter-process communication facility for Unix. Technical Report CMU-CS-81-124, Carnegie-Mellon University, 1980.

[23] Schuman, S. A., Clarke, E. M., and Nikolaou, C. N. Programming Distributed Applications in Ada: a First Approach. In Proceedings of the 1981 Conference on Parallel Processing, August 1981.

[24] Segall, Z., and Rudolph, L. PIE: a programming and instrumentation environment for parallel processing. Technical Report CMU-CS-85-128, Carnegie-Mellon University, 1985.

[25] Wegner, P., and Smolka, S. Processes, tasks, and monitors: a comparative study of concurrent programming primitives. IEEE Transactions on Software Engineering 9(4):39-59, July 1983.

[26] Welsh, J., and Lister, A. A Comparative Study of Task Communication in Ada. Software - Practice and Experience 11:257-290, 1981.

[27] Yemini, S. On the suitability of Ada multitasking for expressing parallel algorithms. In Proceedings of the AdaTEC Conference on Ada, October 1982.