UNIVERSITY OF TORONTO
FACULTY OF APPLIED SCIENCE AND ENGINEERING
FINAL EXAMINATION, April 2013
Third Year – Materials
ECE344H1 Operating Systems
Calculator Type: 2  Exam Type: A
Examiner – D. Yuan

Please Read All Questions Carefully!
Please keep your answers as concise as possible. You do not need to fill the whole space provided for answers. There are 20 total numbered pages, 11 Questions. You have 2.5 hours. Budget your time carefully!
Please put your FULL NAME, UTORid, Student ID on THIS page only.

Name: _______________________________________________________
UTORid: _____________________________________________________
Student ID: ___________________________________________________

Grading Page

Question      Total Marks   Marks Received
Question 1    10
Question 2    7
Question 3    7
Question 4    8
Question 5    10
Question 6    8
Question 7    8
Question 8    6
Question 9    15
Question 10   12
Question 11   9
Total         100

Question 1 (10 marks, 2 marks each): True or False (No explanations needed)

(a) The main reason to have a multilevel page table is to speed up address translation. True / False
False. The main reason is to save space in memory.

(b) The main reason to have a hardware TLB is to speed up address translation. True / False
True. That is the point of the TLB: to cache address translations and hence speed up the entire process.

(c) Using a multilevel page table increases the TLB hit rate. True / False
False. The TLB hit rate is not affected by the page table structure.

(d) For better performance, the OS should place inodes close to their indexed files on hard disk. True / False
True. Because an inode and the file it indexes are often accessed together.

(e) On Unix file systems, the inodes for regular files have different formats than the inodes for directories. True / False
False.

Question 2 (7 marks): Hard disk
From a hard disk's perspective, what are the three components that make up the data accessing time? Briefly describe the meaning of each component.

Seek: move the head to the appropriate track.
Rotational delay: wait for the block to rotate under the head.
Data transfer: read the data off the platter as it passes under the head.

Question 3 (7 marks): Virtual memory

What
are the benefits of having virtual memory, instead of asking programmers to directly use physical memory? Please list two.

1. Programs can use more memory than the size of physical memory.
2. Programs can use addresses starting from 0, and can be run on different machines without change.
3. Programmers do not need to worry about memory sharing with other processes.

Question 4 (8 marks): Scheduling I
In this question, you will answer some questions about scheduling algorithms. Assume that sorting is an expensive computation, and requires some amount of time per element sorted; you can estimate the time to sort NumJobs elements with a function Sort(NumJobs). Further assume that switching between jobs takes a fixed amount of time (say, on a context switch, when one job ends and another begins, or to switch in a new job for the first time); we will call this value Switch.

First, assume three jobs arrive at the same time, of lengths 30, 20, and 10 (they arrive in this order). Assume that Sort(NumJobs) = NumJobs × 10, and that Switch = 0 (no switching overhead). Assume we use the non-preemptive SJF (Shortest Job First) policy.
(a)(2 marks) How long will the system take to compute the schedule?
Answer: Three jobs means 3 × 10 = 30.

(b)(2 marks) How long will it take to run the jobs once the schedule has been computed?
Answer: Total run time: 10 + 20 + 30 = 60.

Now assume that switching is expensive, too; in fact, Switch = 10.
(c)(2 marks) Including scheduling and switching overhead, what is the average turnaround time for each job?
Answer:
Turnaround time for job 1: 30 (sorting) + 10 (switch) + 10 (execution of job 1) = 50.
Turnaround time for job 2: 30 (sorting) + 10 (switch) + 10 (job 1) + 10 (switch) + 20 (execution of job 2) = 80.
Turnaround time for job 3: 30 (sorting) + 10 (switch) + 10 (job 1) + 10 (switch) + 20 (job 2) + 10 (switch) + 30 (execution of job 3) = 120.
Average turnaround time: (50 + 80 + 120) / 3 ≈ 83.
(Note: it is OK to assume there is no context switch between 'sort' and 'job 1'.)
Now, assume we are tired of SJF, and would like to use the preemptive Round Robin policy, with timeslice = 20.

(d)(2 marks) Assuming the same scheduling and switching overhead (Sort(NumJobs) = NumJobs × 10, and Switch = 0), what is the total execution time from the arrival of the three jobs to the completion of all three?
Answer: There is no sort time in Round Robin. Here is the execution sequence:
0-20: job 1 (10 remains)
20-40: job 2 (job 2 finishes)
40-50: job 3 (job 3 finishes)
50-60: job 1 (job 1 finishes)
Total execution time: 60. (Note: it is OK to assume Switch = 10.)

Question 5 (10 marks): Unix File System
Consider the Unix File System as we discussed in the lecture. It uses inodes to index directories and files.

(a)(5 marks) Assume only the address of the data block for the root directory (not the inode of the root directory) is cached. All other contents stored in the File System are not cached. Assume the content of directories can be stored in only one data block. How many disk operations are needed to read the first byte of a file: /1/2/mydoc.txt? Explain your answer.
Answer:
1st: read the data block of '/', to get the inode location of '/1';
2nd: read the inode of '/1', to get the address of the data block of '/1';
3rd: read the data block of '/1', to get the inode location of '/1/2';
4th: read the inode of '/1/2', to get the address of the data block of '/1/2';
5th: read the data block of '/1/2', to get the inode location of '/1/2/mydoc.txt';
6th: read the inode of '/1/2/mydoc.txt', to get the address of its data block;
7th: read the data block of '/1/2/mydoc.txt' and read the first byte.
In total, 7 disk accesses.

(b)(5 marks) Assume within an inode there are 12 direct pointers, a single-indirect pointer, and a double-indirect pointer. Assume a 4KB block size, and disk addresses that are 32 bits. What is the maximum file size (measured in number of blocks) on this Unix system?
Answer:
12 direct pointers: 12 blocks.
The single-indirect pointer: 1 block (4KB) full of addresses: 4KB / 4 bytes = 2^10 = 1024 addresses, so 1024 blocks.
The double-indirect pointer: 1 block full of 1024 single-indirect addresses, each pointing to a block of 1024 data-block addresses: 1024 × 1024 blocks.
Maximum file size: 12 + 1024 + 1024 × 1024 blocks.
Question 6 (8 marks): Synchronization I

Consider the following code:
Thread 1:
    lock(L1);
    lock(L2);
    // critical section requiring L1 and L2 locked
    unlock(L2);
    unlock(L1);

Thread 2:
    lock(L3);
    lock(L1);
    // critical section requiring L3 and L1 locked
    unlock(L1);
    unlock(L3);

Thread 3:
    lock(L2);
    lock(L3);
    // critical section requiring L2 and L3 locked
    unlock(L3);
    unlock(L2);

In the beginning, all three locks are initialized as "not locked". Also assume the threads can execute in any arbitrary interleavings.
(a)(4 marks) Will there be any problem with this code? If yes, show an interleaving. If no, why?

Yes, there is a deadlock. Consider the following interleaving:
Thread 1: lock(L1);
Thread 2: lock(L3);
Thread 3: lock(L2);
Now there is a circular wait: thread 1 waits for L2 (held by thread 3), thread 2 waits for L1 (held by thread 1), and thread 3 waits for L3 (held by thread 2).

(b)(4 marks) If there is a problem, how do you fix it? (Note: each critical section requires two different locks; you cannot change this assumption.)
Acquire the locks in a fixed global order: L1 before L2 before L3. With this ordering, thread 2 takes L1 before L3 and thread 3 takes L2 before L3, so a circular wait can no longer form.
Question 7 (8 marks): Page Replacement

When the physical memory is full, a page fault will cause the OS to evict a page. Assume your OS uses the NRU (LRU approximation, not LRU clock) algorithm, and there are only 6 physical pages. The values of the reference bits and NRU counters at the moment of the page fault are listed below.
Page #   Reference bit   NRU Counter
1        0               0
2        1               6
3        1               3
4        0               1
5        0               2
6        1               0

(a)(3 marks) After you run the NRU algorithm (before swapping), what are the values of the reference bits and NRU counters? Please fill in the table below.
Page #   Reference bit   NRU Counter
1        0               1
2        0               0
3        0               0
4        0               2
5        0               3
6        0               0

(b)(1 mark) Which page will the OS evict?
Evict page 5.

(c)(4 marks) Now you decide to use the LRU clock algorithm, and the clock hand at the time of the page fault is at Page 2. What would be the states after you run LRU clock (before swapping)? Fill in the blanks below (you don't need the NRU counter in LRU clock anymore).
Which page will you evict? Evict page 4.

Page #   Reference bit
1        0
2        0 (note: '1' in this entry is also fine)
3        0
4        0
5        0
6        1
Question 8 (6 marks): Scheduling II

Recall that in the Multi-level feedback queues (MLFQ) scheduling algorithm, the OS uses multiple queues to represent different job types. Different queues have different priorities. In fact, different queues also have different timeslice values. For example, if a queue has a timeslice value of 8ms, then when a job in that queue gets scheduled it has a timeslice of 8ms. Now, you are to design an MLFQ algorithm for Linux.
(a)(2 marks) Do you assign a larger timeslice to jobs in a higher-priority queue or a lower-priority queue? Why?
Answer: Linux is an interactive OS whose principle is to reward interactive jobs. Therefore we give high-priority jobs a larger timeslice, as they are likely to give up the CPU shortly after they are scheduled. For low-priority jobs, which are likely CPU hogs, we assign smaller timeslices to favor the high-priority jobs.
(b)(2 marks) At some point, you will need to move a job from a higher-priority queue to a lower-priority queue based on its execution history. How do you make such a decision?
Answer: Again, we want to reward interactive jobs, i.e., jobs that are frequently interrupted. Therefore, if a job always uses up its entire timeslice without being interrupted, we should lower its priority, since it is not likely interactive.

(c)(2 marks) Periodically you will also want to move all the jobs in the lowest-priority queue to the highest-priority queue, regardless of their execution history. Why?
Answer: To avoid starvation of the jobs in the lowest-priority queue.

Question 9 (15 marks): TLB and Page faults
This question asks you to consider everything that happens in a system on a memory reference. Assume the following: 32-bit virtual addresses, 4KB page size, a 32-entry TLB, a 1-level page table, and LRU replacement policies whenever such a policy might be needed by either hardware or software. Assume further that there are only 1024 pages of physical memory available. In this question, you will be running some test code and telling what happens when that code is run.

Before the test code is run, the following initialization code is run once (before testing begins). This code simply allocates (NUM_PAGES * PAGE_SIZE) bytes and then sets the first integer on each page to 0, where PAGE_SIZE is 4KB and NUM_PAGES is a constant (defined below). Assume malloc() returns page-aligned data in this example.
void *orig = malloc(NUM_PAGES * PAGE_SIZE);
char *ptr = orig;
for (i = 0; i < NUM_PAGES; i++) {
    *(int *)ptr = 0; // init first value on each page
    ptr += PAGE_SIZE;
}
The test code we are now interested in running is the following:
ptr = orig;
for (i = 0; i < NUM_PAGES; i++) {
    int x = *(int *)ptr; // load the value pointed to by ptr
    ptr += PAGE_SIZE;
}
For these questions, assume we are only interested in memory references to the malloc'd region through ptr (that is, ignore stores to x and instruction fetches).

(a)(3 marks) How many TLB hits, TLB misses, and page faults occur during the test code when (please fill in the blanks below)...

                        TLB hits   TLB misses   Page faults
(a) NUM_PAGES is 16     16         0            0
(b) NUM_PAGES is 32     32         0            0
(c) NUM_PAGES is 2048   0          2048         2048

The workload just loops over some pages, accessing each one once. Thus, the second time through the loop with a small number of pages (less than the TLB size, for example) just yields TLB hits. With a large number of pages and LRU replacement, nothing we want is ever in the cache or TLB, hence all misses/page faults.
(b)(3 marks) Assume a memory reference takes roughly time M and that a disk access takes time D. Ignore the TLB lookup time.
How long does the test code take to run (approximately), in terms of M and D, when (fill in the blanks with the approximate runtime below)...

                        Total execution time
(a) NUM_PAGES is 16     16M
(b) NUM_PAGES is 32     32M
(c) NUM_PAGES is 2048   2048 × 2M + 2048D

A hit means a load from memory, which we assume costs M; hence the TLB hits just cost M. On a TLB miss we need to consult memory twice, once for the page table (minimally) and once for the actual memory load, hence 2M per TLB miss. Finally, each page fault costs us a disk access.

(c)(6 marks) Now assume we change the various replacement policies in the system to MRU (evict the Most Recently Used). Given this change, how many hits/misses/faults occur, and how long does the test code take to run (approximately), in terms of M and D, when (fill in the table)...

                        TLB hits   TLB misses   Page faults
(a) NUM_PAGES is 16     16         0            0
(b) NUM_PAGES is 32     32         0            0
(c) NUM_PAGES is 2048   32         2016         1024

                        Total execution time
(a) NUM_PAGES is 16     16M
(b) NUM_PAGES is 32     32M
(c) NUM_PAGES is 2048   32M + 2016 × 2M + 1024D

With MRU, the first 31 accesses get lodged in the TLB, and the first 1023 pages get lodged in memory; subsequent accesses simply push the most-recently-used TLB entry/page out. The next time through the loop, the first 31 accesses are thus hits, and the first 1023 page accesses are in memory; everything else is a miss until the last page, which is also a hit (do a small MRU example if this doesn't make sense). Thus, 32 TLB hits (the rest misses); 1024 page faults (the rest in memory).
(d)(3 marks) Finally, assume you are to run this code on a new machine that you know very little about. In fact, you wish to use the test code to learn how big the TLB is and how much memory is on the given system. How could you use the test code above to learn these facts about the physical hardware?
Answer: Just looking for something simple here. You could basically time how long an iteration of the loop lasts while increasing NUM_PAGES. If it lasts something like NUM_PAGES times the memory access time, those are TLB hits; if it lasts twice that, they are TLB misses, and thus you can learn how big the TLB is. One more big jump occurs when NUM_PAGES is finally bigger than memory and causes lots of (very slow) disk accesses, which tells you the memory size.

Question 10 (12 marks): OS161

(a)(4 marks) In OS161, there is a function named "vm_fault". When does this get called?
Answer: It is called when an address cannot be translated by the TLB: either there is no such entry in the TLB, or the translation is not valid.

(b)(4 marks) Paging and segmentation are two techniques to implement virtual memory, yet many OSes support both at the same time. Describe how you implemented segmentation using paging in lab 3.
Answer: Segmentation is implemented on top of paging. In the address_space data structure, simply record the size of each segment; each virtual address is, in the end, translated via paging. (Anything that makes sense is acceptable here.)

(c)(4 marks) Please describe a bug that you wrote in your OS161 labs which was time-consuming to debug. How did you debug it?

Question 11 (9 marks): Synchronization II

Consider a concurrent program with three threads only. Thread 1 repeatedly calls ProcedureA, to produce items of type A and place them in an unbounded array BuffA. The code is shown in the next 2 pages. Thread 2 repeatedly calls ProcedureB, to produce items of type B and place them in an unbounded array BuffB. Thread 3 repeatedly calls ProcedureC, which consumes one item of type A or B, depending on what items are available. If an item of type A is available, ProcedureC consumes it. If no item of type A is available but an item of type B is available, ProcedureC consumes the B item. If no A or B items are available, ProcedureC will block until some item is available.

Code for ProcedureA, ProcedureB, and ProcedureC is given on the next page, but without synchronization. Add synchronization primitives to this code as needed to ensure that: (1) BuffA and BuffB are accessed by only one thread at a time, and (2) ProcedureC blocks if no items are available and wakes up when items are available.

/* Global variables (including lock and semaphores) are declared here: */
array *BuffA, *BuffB;
semaphore sem;   /* counting semaphore, initialized to 0 */
lock LA, LB;

ProcedureA() {
    item *A;
    produce(A);
    lock(LA);
    array_add(BuffA, A);
    unlock(LA);
    V(sem);
}

ProcedureB() {
    item *B;
    produce(B);
    lock(LB);
    array_add(BuffB, B);
    unlock(LB);
    V(sem);
}

ProcedureC() {
    item *C = NULL;
    // Add code here to check whether to (i) remove an item from BuffA by
    // calling array_remove(BuffA, C), (ii) remove an item from BuffB by
    // calling array_remove(BuffB, C), or (iii) block.
    // Use consume(C) to consume the item.
    P(sem);
    lock(LA);
    if (length(BuffA) > 0) {
        array_remove(BuffA, C);
    }
    unlock(LA);
    if (C == NULL) {
        lock(LB);
        if (length(BuffB) > 0) {
            array_remove(BuffB, C);
        }
        unlock(LB);
    }
    if (C == NULL) {
        // should never happen!
        panic("Procedure C is woken up, but found no available item to consume");
    }
    consume(C);
}
This page is blank. You can use it as an overflow page for your answers.