RESEARCH CONTRIBUTIONS

Programming Techniques and Data Structures
John Bruno, Editor

Dynamic Initial Allocation and Local Reallocation Procedures for Multiple Stacks

D. YUN YEH and TOSHINORI MUNAKATA
ABSTRACT: Two new procedures for manipulating
multiple stacks which share sequential memory locations are
discussed. The first is the dynamic initial allocation
procedure in which each stack is allocated as its first
element arrives rather than having every stack preallocated
at the very beginning of the entire process. The second is the
local reallocation procedure; in this scheme, when a stack
overflows, only its neighboring stacks, rather than the entire
memory area, are reorganized provided that a certain
condition is satisfied. The results of simulation appear to
suggest that these new approaches improve the operational
performance in many applications. With appropriate
modifications, these concepts may also be applied to any
other type of multiple linear lists (e.g., multiple queues)
sharing sequential memory locations.
1. INTRODUCTION
Multiple stacks have a variety of applications in computer science such as simulation languages (e.g., SIMULA), concurrent programming (e.g., Ada), and some AI languages. We consider a problem for manipulating multiple stacks which share sequential memory locations.

In our problem n stacks (n ≥ 3) share a sequence of fixed, contiguous memory locations, L(1), ..., L(m), with n ≤ m (e.g., n = 10, m = 1000). During the course of stack operations, a stack may overflow. If memory areas still remain free to other stacks, we may reallocate the memory locations among the stacks, by shifting them back and forth, to provide space for the particular stack which incurs overflow. This procedure will be executed upon each overflow until all the memory locations are occupied. Our question is: "What are the procedures that minimize the total number of stack reorganizations and data movements?"

Communications of the ACM, February 1986, Volume 29, Number 2. ©1986 ACM 0001-0782/86/0200-0134 75¢
The problem has been studied by a number of researchers (e.g., [3-6]). In Garwick's algorithm [3] (Figure 1), each stack will be preallocated m/n memory locations sequentially; stack i (1 ≤ i ≤ n) will occupy memory locations j, where (i - 1)*m/n < j ≤ i*m/n. All stacks grow in the direction of increasing memory locations; hence, the structure may be called unidirectional. Here BASE[i] and TOP[i] represent the base and top addresses of the ith stack. The first element of the ith stack is stored at location BASE[i] + 1. The reallocation policy upon a stack overflow is as follows: 1-10 percent of the free memory space is evenly redistributed to each stack, while the rest of the free space is reallocated to each stack in proportion to the most recent (i.e., since last allocation) growth of the stack.
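As an illustration of this policy (our own sketch, not Garwick's published code; the function name and parameters are assumed):

```python
def garwick_shares(free, growth, even_fraction=0.10):
    """Split `free` locations among stacks: a small fraction is divided
    evenly, and the rest in proportion to each stack's most recent growth.
    `growth[i]` is stack i's growth since the last reallocation."""
    n = len(growth)
    even = free * even_fraction / n          # even share per stack
    total_growth = sum(growth) or 1          # guard against division by zero
    rest = free * (1.0 - even_fraction)      # growth-proportional portion
    return [even + rest * g / total_growth for g in growth]
```

A stack that has grown fastest since the last reallocation thus receives the largest share of the remaining free space.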
The idea of the most recent growth is based on the assumption that stacks which have grown recently will
FIGURE 1. Garwick's Unidirectional Structure for n = 4 and m = 20
continue to grow in the near future. Of course, there is no guarantee that such an assumption is always true. In fact, if every stack's size is expected to grow equally on the average, the exact opposite may be true; that is, the stacks which have not grown recently should grow faster to catch up with the others. Without knowing the expected sizes, however, the most recent growth concept probably is a reasonable assumption. Many Markov chain models, for example, employ such concepts. The detailed procedures for Garwick's algorithm can be found in [3, 4, 6]. Throughout this article, we assume that the distribution of expected stack sizes is unknown.

Knuth [4, 6] discusses Garwick's algorithm by using 10 percent of the free memory space for even redistribution, and 90 percent for reallocation proportional to recent growth. Standish [6] has a similar study to that of Knuth, but he introduces the current stack size as an additional parameter for determining the redistribution of the 90 percent of the free memory space. He empirically compares three reallocation cases based on the different weights of these two parameters. Namely, each stack gets space proportional to its 1) recent stack growth (p = 1), 2) current stack size (p = 0), and 3) half-and-half of 1) and 2) (p = 0.5).

Recently, Korsh and Laison [5] studied a bidirectional preallocation structure of stacks in which pairs of stacks grow toward each other (Figure 2). This structure has empirically shown overall superior performance compared to previous ones. In this structure, stack 2i - 1 occupies locations j with base[2i - 1] < j ≤ top[2i - 1] and stack 2i occupies locations j with top[2i] ≤ j ≤ base[2i], where base[2i - 1] ≤ top[2i - 1] < top[2i] ≤ base[2i] + 1, and base[2i] = base[2i + 1], 1 ≤ i ≤ ⌈n/2⌉. Odd-numbered stacks grow in the direction of increasing memory locations, and even-numbered stacks in the direction of decreasing memory locations.

Other studies include one by Bobrow and Wegbreit [1, 2], in which they discuss a multiple stack environment to handle multitasking, coroutines, backtracking, etc.

In this article we propose two procedures called dynamic initial allocation and local reallocation. With appropriate modifications, they may also be applied to other similar data structures, such as multiple queues and any type of variable length sequential lists [4-6]. The rest of this article is organized as follows. In Section 2, the outlines and underlying rationale of these new procedures are discussed. In Section 3, our simulation results are shown. In Section 4, we give the conclusions. In the Appendix, a detailed description of the dynamic initial allocation procedure is shown.
FIGURE 2. Korsh-Laison's Bidirectional Structure for n = 4 and m = 20

2. DYNAMIC INITIAL ALLOCATION AND LOCAL REALLOCATION PROCEDURES
First we note that our problem involves two separate issues. The first is how to allocate memory locations to
stacks for the first time. The second is how to reallocate
existing stacks when an overflow occurs. The dynamic
initial allocation procedure deals with the first issue,
and the local reallocation procedure deals with the second. In general, different initial allocation procedures
can be used together with any reallocation procedure,
each giving a different combination.
2.1 Dynamic Initial Allocation
We observe that all of the previously suggested methods have one common characteristic, termed here as static preallocation; that is, the entire memory space is preallocated to every stack before elements arrive. In our new approach, termed as dynamic initial allocation, we do not preallocate memory space to any stack at the beginning. Instead, memory space for each stack is allocated when it is requested for the first time, that is, when its first element arrives. Furthermore, the amount of space allocated to a new stack is somewhat determined by the amount of remaining free space. This implies that stacks which have arrived at an early stage and grown since then tend to be given more space than newcomers. Figure 3 illustrates the general concept of stack allocation and growth upon the arrival of their elements.
For the first incoming stack, its growth starts from the left end; the second incoming stack grows from the right end. When the first element of the third stack arrives, we split the free area into two equal-sized, nonoverlapped regions called free space areas 1 and 2. The third stack grows to the right starting from the newly created boundary between the free spaces. The fourth incoming stack will grow to the left starting from the new boundary. For the fifth stack to be allocated, we pick out the larger of free space areas 1 and 2, and apply the same splitting and growth procedures as before. If, for example, free area 1 is larger than 2, then we split free area 1 into new free areas 1 and 3. In general, whenever splitting is necessary, we pick out the largest of the free space areas.
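The splitting rule can be sketched as follows (our own Python illustration, not the authors' Pascal code; the function name and the (lo, hi) interval representation are assumptions):

```python
def choose_and_split(free_areas):
    """Pick the largest free area and split it into two equal halves.

    free_areas: list of (lo, hi) intervals of free locations.
    Returns the new boundary; the list is updated in place, so the next
    two stacks can grow away from that boundary into the two halves.
    """
    # Pick out the largest free space area, as the procedure prescribes.
    k = max(range(len(free_areas)),
            key=lambda i: free_areas[i][1] - free_areas[i][0])
    lo, hi = free_areas[k]
    mid = (lo + hi) // 2
    # Replace the chosen area by its two equal-sized, nonoverlapped halves.
    free_areas[k] = (lo, mid)
    free_areas.append((mid, hi))
    return mid
```

For m = 20, the first split of the whole free area yields the boundary at location 10, matching the equal halves described above.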
Figures 1-3 show the essential differences among the three initial allocation schemes. For the most active stack, 2, two memory locations (L(9) and L(10)) are left before it overflows in Figure 1. Two and one-half are left in Figure 2, and three are left in Figure 3, assuming that each common free area is divided equally by the two stacks adjacent to the area. Thus, Figure 3 displays the best performance if stack growth rates are unequal and stack 2 grows the fastest.

Although there is no general guarantee that the dynamic scheme will always outperform the static ones, it is reasonable to expect superior performance in certain situations. Some such situations are now described.
a) The dynamic scheme will perform well when the sizes and first arrival times among stacks are not uniform. In such a case, certain stacks do not require as much space, or their first elements arrive at a much later time, and it is wasteful to preallocate memory space to those less active stacks. Using the dynamic approach, we defer memory allocations to those less active stacks and allocate them smaller amounts when they arrive. This implies that the active stacks will have more memory space to grow, and the first occurrence of an overflow, if any, will be delayed. The worst case for the dynamic approach would occur when the stack insertion sequence is of uniform distribution, that is, every stack grows evenly. Such performance expectations are well supported by our simulation experiment (see Section 3).
b) Let us call the duration between the beginning of the stack process and the first overflow occurrence the "first phase" and the succeeding duration the "second phase." The dynamic approach would have a more significant effect during the first phase than the second, since once an overflow occurs, the existing stacks are reorganized according to a reallocation algorithm. Hence, the dynamic approach would be more effective for the operations where the first phase is a significant portion of the entire process.
c) Another advantage of the dynamic procedure is that the eventual number of stacks does not have to be known in advance.
The dynamic initial allocation procedure, compared to the static one, requires some additional overhead. For the static one, every stack is preallocated from the beginning. Hence, no check is necessary to determine whether the stack already exists for every incoming element. For the dynamic one, however, such a check is necessary (perhaps requiring one machine cycle) before inserting each element. We can employ a dynamic algorithm with which, once all the stacks are allocated, element insertions are performed without checking. Using this scheme, the total number of checks during the entire process would range from n to m.
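The check-skipping scheme can be sketched as follows (our own illustration; the class and field names are hypothetical, and actual allocation is reduced to marking a flag):

```python
class MultiStackInserter:
    """Sketch: check for unallocated stacks only until all n exist."""

    def __init__(self, n):
        self.allocated = [False] * n          # S(i) = 0 in the paper's notation
        self.remaining = n                    # stacks not yet allocated
        self.stacks = [[] for _ in range(n)]  # stand-in for the shared array

    def push(self, i, element):
        if self.remaining > 0:                # existence check, skipped once all exist
            if not self.allocated[i]:
                self.allocated[i] = True      # dynamic initial allocation occurs here
                self.remaining -= 1
        self.stacks[i].append(element)
```

Once `remaining` reaches zero, every later insertion costs a single comparison, which is why the total number of checks falls between n and m.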
We note that each of the logical locations, L(1), L(2), ..., may actually be a record of r physical locations. Thus, the actual number of physical data movements may be equivalent to r times the number of logical movements. To determine the effects of the overheads on the entire process, the number of physical, rather than logical, movements should be considered.
2.2 Local Reallocation
Now we are concerned with how to reallocate memory space for existing stacks when a stack overflows. We observe that all of the previously suggested methods have one common characteristic, termed here as global reallocation. Under this scheme, the entire memory space and all existing stacks are subject to reorganization. In the local reallocation procedure, we attempt to solve the space problem locally.

Figure 4 illustrates the concept of the local reallocation. When a stack (stack 7 in Figure 4) overflows, we
FIGURE 3a. The First Element of Stack 3 Arrives

FIGURE 3b. The First Element of Stack 2 Arrives

FIGURE 3c. The Second Element of Stack 2 and the First Element of Stack 1 Arrive in This Order

FIGURE 3d. The Second Element of Stack 1, the Third Element of Stack 2, and Two Elements of Stack 4 Arrive

FIGURE 3. Dynamic Initial Allocation Bidirectional Structure for n = 4, m = 20, and the Same Sample Data as in Figures 1 and 2. Figure 3 (a, b, c, d) shows how the stacks are allocated and grow upon arrival of their elements.
FIGURE 4. Local Reallocation Procedure After Stack 7 Overflows. First, we attempt to reallocate Free Areas 3 and 4 to Stacks 5, 6, 7, and 8. If this does not allocate sufficient space, we attempt to reallocate Free Areas 3, 4, and 5 to Stacks 5-10, and so on.
check how much space will be reallocated if neighboring stacks and free areas are reorganized. In Figure 4, this first attempt involves stacks 5, 6, 7, and 8, and free areas 3 and 4. If this attempt gives sufficient space for those stacks involved, then actual reallocation will take place. Our criterion for "sufficient" space is: (sum of free areas involved) ≥ (sum of the most recent growth for the stacks involved). Our crude rationale for this criterion is that if each stack involved is allocated free space not smaller in size than its most recent growth (as an average), the stack should stay without an overflow until the next overflow takes place elsewhere.

If the above first attempt does not result in sufficient space, then the domain of the neighboring stacks and free areas will be extended to include the next adjacent areas. This process continues until either sufficient space is found, or the entire memory space is reached. The latter case is the same as the global reallocation.
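That expanding search might be sketched as follows (our own illustration, assuming hypothetical arrays free_sizes and recent_growth; the paper's actual interleaving of stacks and free areas is simplified here to one free area per adjacent pair of stacks):

```python
def find_local_domain(overflow_stack, free_sizes, recent_growth, n_stacks):
    """Return the smallest symmetric neighborhood around the overflowing
    stack whose free space meets the sufficiency criterion:
    sum(free areas involved) >= sum(most recent growth of stacks involved).
    No data are moved while the criterion is being checked."""
    for radius in range(1, n_stacks):
        lo = max(0, overflow_stack - radius)
        hi = min(n_stacks - 1, overflow_stack + radius)
        stacks = range(lo, hi + 1)
        # Free areas lying between the stacks in the window (an assumption
        # about layout: area k sits between stacks k and k + 1).
        areas = range(lo, hi)
        if sum(free_sizes[a] for a in areas) >= sum(recent_growth[s] for s in stacks):
            return lo, hi          # sufficient: reallocate only this window
    return 0, n_stacks - 1         # fall back to global reallocation
```

Only when the window has grown to cover all stacks does the procedure degenerate into global reallocation, as noted above.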
TABLE I. Garwick's Unidirectional Algorithm

                              Stacks chosen with               Stacks chosen with frequency
                              uniform random choice            proportional to 1/2^i
  Growth               p         a         b        c             a         b        c
  In units             1         0.0    9449.3    17.9          121.5    3727.4    11.8
                     0.5         0.0    6024.0    10.5          121.5    3066.6     9.7
                       0         0.0    4759.3    11.9          121.5    2635.9     9.2
  In spurts of 20      1      2486.1   11974.1    25.0         1365.7    6931.2    21.0
                     0.5      1874.0   14059.2    21.6          920.2    5543.9    14.7
                       0      1562.4   20019.0    32.2         1006.5    7017.7    18.7
  In spurts of 50      1      2218.3    8985.9    17.8         1739.7    5508.2    16.0
                     0.5      2251.6   11980.6    19.0         1206.5    5833.0    14.5
                       0      3669.7   28148.1    44.1         1552.1    8571.2    20.3

Column a represents the total number of moves at 70 percent saturation.
Column b represents the total number of moves at 100 percent saturation.
Column c represents the total number of reorganizations at 100 percent saturation.
TABLE II. Korsh-Laison's Bidirectional Algorithm

                              Stacks chosen with               Stacks chosen with frequency
                              uniform random choice            proportional to 1/2^i
  Growth               p         a         b        c             a         b        c
  In units             1         0.0    3622.9     7.1          131.5    2184.6     6.8
                     0.5         0.0    3097.0     6.1          131.5    2198.0     6.8
                       0         0.0    2902.7     6.1          131.5    1813.3     6.2
  In spurts of 20      1       136.0    4881.1    10.0          458.9    3056.3     8.6
                     0.5       136.0    6911.7    10.4          403.8    3248.7     8.9
                       0       136.0    9573.4    15.5          456.6    3745.7    10.1
  In spurts of 50      1       314.9    4955.7     9.8          665.2    3411.3     8.7
                     0.5       344.1    6995.9    11.3          534.6    3309.1     8.3
                       0       402.1   12626.8    19.7          619.2    3877.1     9.7

See Table I for the meanings of Columns a, b, and c.
Note that in the local reallocation procedure, actual data movements do not take place while we check the criterion to find whether there is sufficient space.

Again there is no general guarantee that the local reallocation scheme always works well. However, the idea makes sense and will work well in many applications. For example, in Figure 4, using the global reallocation, stacks 3 and 5 may be moved two memory locations to the left for the first overflow. In the second overflow, these stacks may be moved back two locations to the right. In this case, the movements are a total waste. For the second overflow, if the stacks are to be moved right by three locations instead of two, it is better not to move them for the first overflow and move them right one location for the second overflow. Similar arguments can be extended to cases of more than two consecutive overflows. To sum up, with the global reallocation approach, stacks may be moved back and forth repetitively, each time changing some locations, and yielding little net result with a lot of computing effort. This effect is significant particularly when n is large.
We also tested some variations of the local reallocation approach combined with the dynamic initial approach. The first variation is a use of both the local and global reallocation schemes. If less than x percent (e.g., x = 100, 95, 85, 75) of the entire memory space is filled with stack elements, then we use the local scheme; otherwise we use the global. Our experiment shows that x = 100 percent, that is, use of the local scheme without the global, gives the best result.

Another variation we tested is to modify the space sufficiency criterion, for example, (sum of free areas involved) ≥ y * (sum of the most recent growth for the stacks involved), where y = 0.5, 0.8, 1.0, 1.2, and 1.5. The simplest original one (i.e., y = 1.0) appeared to give the best result. It is interesting to note that in both variations the simplest schemes seem to give the best results.
An additional overhead for the local reallocation procedure is the number of checks to determine whether certain neighboring areas are sufficient. During each reorganization, the two sums for the criterion are accumulated and tested. For each reorganization, at most ⌈n/2⌉ - 1 checks are required. This overhead usually is negligible in comparison with the cost of data movements.
3. EXPERIMENTAL RESULTS
Our simulated experimental results based on the structures of Garwick, Korsh-Laison, and the dynamic initial allocation approach without and with the local reallocation procedure are shown in Tables I-IV, respectively. All programs are written in Pascal and run on the VAX 11/730 under the UNIX* operating system.

* UNIX is a registered trademark of AT&T Bell Laboratories.
TABLE III. Dynamic Initial Allocation Algorithm without Local Reallocation

                              Stacks chosen with               Stacks chosen with frequency
                              uniform random choice            proportional to 1/2^i
  Growth               p         a         b        c             a         b        c
  In units             1       484.2    3455.1     7.1          198.4    2008.1     7.3
                     0.5       484.2    3329.6     7.0          198.4    2031.9     7.6
                       0       484.2    3123.0     6.7          198.4    1495.3     6.6
  In spurts of 20      1       361.5    5940.5    12.2           75.5    1804.3     6.3
                     0.5       361.5    6779.6    11.7           75.5    2209.5     7.0
                       0       361.5    9526.2    16.3           75.5    2966.4     9.3
  In spurts of 50      1        40.0    4184.2     8.8          110.0    1433.8     5.8
                     0.5        40.0    5698.1    10.1          110.0    1747.8     6.6
                       0        40.0    9256.8    15.8          110.0    2342.9     8.3

See Table I for the meanings of Columns a, b, and c.
TABLE IV. Dynamic Initial Allocation Algorithm with Local Reallocation

                              Stacks chosen with               Stacks chosen with frequency
                              uniform random choice            proportional to 1/2^i
  Growth               p         a         b        c             a         b        c
  In units             1       484.2    3198.4     7.4          193.1    1555.2     7.1
                     0.5       484.2    2552.1     5.9          193.1    1504.9     7.0
                       0       484.2    2791.1     6.3          193.1    1487.1     6.7
  In spurts of 20      1       336.0    4576.1    13.4           75.5    1641.0     6.7
                     0.5       336.0    5723.8    13.6           75.5    1922.2     6.7
                       0       336.0    8027.4    21.7           75.5    2637.1     9.7
  In spurts of 50      1        40.0    3242.1     8.6          110.0    1172.8     5.1
                     0.5        40.0    4862.7    11.4          110.0    1600.0     6.3
                       0        40.0    9716.6    27.9          110.0    2198.1     8.2

See Table I for the meanings of Columns a, b, and c.
The characteristics of the experiment are similar to those by Standish and Korsh-Laison as shown.

1. There are 10 stacks to share 1000 locations. Each table entry is obtained by taking the average value on ten runs.
2. Two probability distributions for incoming stack frequencies (i.e., the number of arriving elements) are tested. One is uniform distribution, that is, all the stacks will have the same size on the average. In the other distribution, the probability of the ith stack size is assumed to be proportional to 1/2^i, thus creating unequal sizes of stacks.
3. Three different growth spurt sizes (items arriving one at a time, in spurts of 20, and in spurts of 50) are used.
4. The reallocation policy used is that: (i) 10 percent of the available space is divided uniformly among stacks, (ii) (90 * p) percent is allocated in proportion to recent stack growth, and (iii) (90 * (1 - p)) percent is allocated in proportion to stack size. The values used for p are 0, 0.5, and 1.
In these tables, the total number of data movements (Columns a and b) are probably more important factors than the total number of reorganizations (Column c). The former contributes the major portion of the time in the entire process whereas the latter affects the reorganization overhead.

4. CONCLUSIONS
The two new procedures, dynamic initial allocation and local reallocation, are proposed to minimize the number of stack reorganizations and data movements for handling multiple stacks. The dynamic initial allocation performs about the same as bidirectional preallocation procedures for the uniform distribution. For nonuniform distributions, however, the dynamic scheme is likely to delay the first occurrence of an overflow and reduce the total number of data movements. The local reallocation procedure is likely to further reduce the total number of data movements. Both procedures are easy to implement and their combination performs well in many problems dealing with the multiple stacks and other sequential lists.

APPENDIX
Dynamic Initial Allocation Algorithm
In the following, the sequence of incoming elements is represented as i1, i2, ..., iq, where 1 ≤ ip ≤ n, 1 ≤ p ≤ q, and q ≤ m. We ignore stack deletions in this sequence to simplify the discussion. When the memory locations L(1), ..., L(m) have been split into k (k ≤ n/2) nonoverlapped regions, the number of stacks which have been allocated memory space is either 2k - 1 or 2k. Now suppose that a new stack has to be allocated. If the value is 2k - 1, then the new incoming stack already has the preallocated space. If the value is 2k, we search for the largest free space area among the k regions and split this area into two halves.

Algorithm: Dynamic Initial Allocation

1: [Initialize]
   1.1. For each stack i (1 ≤ i ≤ n) do
        (a) S(i) = 0;  /* S(i) = k (1 ≤ k ≤ n/2) means that stack i is
               allocated at the free space area k. */
        (b) D(i) = 0;  /* D(i) = 'd' means that stack i grows towards
               decreasing memory locations, and D(i) = 'i' towards
               increasing memory locations. */
   1.2. For each free space area i (1 ≤ i ≤ n/2) do
        (a) AL(i) = 0, AR(i) = 0;  /* AL(i) = k (1 ≤ k ≤ n) means that
               stack k grows starting from the left end of the free space
               area i, and AR(i) = k means that stack k grows starting
               from the right end. */
   1.3. Set NFS = 1;  /* the total number of free space areas initially
           equals 1. */
   1.4. AVIL = 0;  /* AVIL = k means that the right end of the free space
           area k is ready for allocating a stack. */
   1.5. AVLOC = 0;  /* AVLOC = p means that p is the breaking address to
           split one free space into two. */

2: [Allocate first two stacks]
   /* Allocate the first stack */
   2.1. Read in the first incoming stack i1;
        AL(1) = i1, BASE(i1) = 0, TOP(i1) = 1, D(i1) = 'i', S(i1) = 1.
   2.2. Read in the next incoming stack i2.
   2.3. While i2 = i1 do
          If TOP(i1) ≥ m, then Call TABLE_FULL, else TOP(i1) = TOP(i1) + 1.
          Read in the next incoming stack i2.
        Endwhile;
   /* Allocate the second stack */
   2.4. AR(1) = i2, BASE(i2) = m, TOP(i2) = m, D(i2) = 'd', S(i2) = 1.

3: [Input] Read in the next incoming stack i3.
   While i3 ≠ null do

4: [Allocate] Examine vector S. If S(i3) = 0, then allocate space for
   stack i3 as follows.
   4.1. If AVIL ≠ 0, then
          /* the right end of the AVIL-th free space area is available */
          AR(AVIL) = i3, BASE(i3) = AVLOC, TOP(i3) = BASE(i3) + 1,
          D(i3) = 'd', S(i3) = AVIL, AVIL = 0,
        else perform steps 4.2 and 4.3:
          /* Find the largest free space area */
          T = 0;
          for i = 1 to NFS do
            If (TOP(AR(i)) - TOP(AL(i))) > T then
              T = TOP(AR(i)) - TOP(AL(i)); INX = i
            endif
          endfor
   4.2.   NFS = NFS + 1;  /* increase the number of free space areas by 1 */
   4.3.   /* stack i3 inserts from the left end of the free space area NFS */
          AR(NFS) = AR(INX), AL(NFS) = i3,
          BASE(i3) = TOP(i3) = TOP(AL(INX)) + T/2,
          S(i3) = NFS, D(i3) = 'i', AVIL = INX, AVLOC = BASE(i3)
        endif

5: [Insert]
   5.1. Check overflow: set T = S(i3). If TOP(AL(T)) ≥ TOP(AR(T)), then
        overflow occurs and the reallocation algorithms are applied.
   5.2. Perform stack insertion. If D(i3) = 'i', then TOP(i3) = TOP(i3) + 1,
        else TOP(i3) = TOP(i3) - 1.
   Read in the next incoming stack i3.
   Endwhile;
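For experimentation, the dynamic initial allocation algorithm can be rendered compactly in Python as below (our own translation and naming; memory is modeled per free area rather than as the array L(1..m), and the overflow reallocation step is stubbed out):

```python
def dynamic_allocate(sequence, m):
    """Sketch of the Appendix algorithm: a stack is allocated on first
    arrival; when no free end is available, the largest free area is
    split in half. Locations are 1..m. Returns {stack: [locations]}."""
    areas = []            # each free area: [next_free_left, next_free_right]
    side = {}             # stack id -> (area index, 'L' or 'R')
    pending = None        # area whose right end awaits a new stack (AVIL)
    contents = {}

    for s in sequence:
        if s not in side:                       # first element: allocate
            if not areas:                       # first stack: left end of memory
                areas.append([1, m]); side[s] = (0, 'L')
            elif len(side) == 1:                # second stack: right end of memory
                side[s] = (0, 'R')
            elif pending is not None:           # an end from the last split is free
                side[s] = (pending, 'R'); pending = None
            else:                               # split the largest free area
                k = max(range(len(areas)),
                        key=lambda a: areas[a][1] - areas[a][0])
                l, r = areas[k]
                mid = (l + r) // 2
                areas.append([mid + 1, r])      # right half: new stack grows up
                new = len(areas) - 1
                areas[k][1] = mid               # left half keeps the old left stack
                old_right = [t for t, (a, e) in side.items() if (a, e) == (k, 'R')]
                if old_right:                   # old right-end stack follows the right half
                    side[old_right[0]] = (new, 'R')
                side[s] = (new, 'L')
                pending = k                     # right end of left half now available
            contents[s] = []
        a, e = side[s]
        l, r = areas[a]
        if l > r:                               # tops meet: overflow
            raise NotImplementedError("overflow: reallocation procedures apply")
        loc = l if e == 'L' else r
        areas[a] = [l + 1, r] if e == 'L' else [l, r - 1]
        contents[s].append(loc)
    return contents
```

Replaying the sample arrivals of Figure 3 with m = 20 reproduces the layout sketched there: stack 3 at the left end, stack 2 growing down from location 20, stack 1 growing up from the split boundary, and stack 4 growing down toward it.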
Acknowledgments
We would like to thank our graduate student Kuo-Shu
Kuo for her programming efforts. The authors are also
grateful to the referees for their helpful comments.
CR Categories and Subject Descriptors: E.1 [Data]: Data Structures - lists; D.4.2 [Operating Systems]: Storage Management - allocation/deallocation strategies
General Terms: Algorithms, Performance
Additional Key Words and Phrases: stacks, stack management, multiple stacks, multiple sequential lists
REFERENCES
1. Bobrow, D.G., and Wegbreit, B.A. A model and stack implementation of multiple environments. Commun. ACM 16, 10 (Oct. 1973), 591-603.
2. Davies, D.J.M. Technical correspondence on "A multiple-stack manipulation procedure." Commun. ACM 27, 11 (Nov. 1984), 1158.
3. Garwick, J.V. Data storage in compilers. BIT 4 (1964), 137-140.
4. Knuth, D.E. The Art of Computer Programming, Vol. 1: Fundamental Algorithms. Addison-Wesley, Reading, Mass., 1973, pp. 240-248.
5. Korsh, J.K., and Laison, G. A multiple-stack manipulation procedure. Commun. ACM 26, 11 (Nov. 1983), 921-923.
6. Standish, T.A. Data Structure Techniques. Addison-Wesley, Reading, Mass., 1980, pp. 28-39.
Received 1/85; revised 7/85; accepted 8/85

Authors' Present Addresses: D. Yun Yeh, Department of Computer Science, Arizona State University, Tempe, AZ 85287. Toshinori Munakata, Department of Computer and Information Science, Cleveland State University, Cleveland, OH 44115.
Permission to copy without fee all or part of this material is granted provided that the copies are not made or distributed for direct commercial advantage, the ACM copyright notice and the title of the publication and its date appear, and notice is given that copying is by permission of the Association for Computing Machinery. To copy otherwise, or to republish, requires a fee and/or specific permission.