Response Time Analysis in a Data Broadcast System with User Cache
Jong-Deok Kim and Chong-Kwon Kim
School of Computer Science & Engineering
Seoul National University, Seoul, Korea
{kimjd, ckim}@popeye.snu.ac.kr
(Tel: +82-2-884-3936)
Abstract
Data broadcast has been considered a promising way of disseminating information to a massive number of users in a
wireless communication environment. Reducing user waiting time is a major problem in developing a data broadcast
system. There are two approaches to this problem: one is to design a broadcast schedule at the server side that
reduces the mean response time, and the other is to utilize a local cache at the user side that may respond to a user
request instantly. Though these two approaches have been addressed separately in the literature, they may be taken
jointly for better performance. The performance of a system with the joint approach depends on several factors, such
as the broadcast schedule, the cache size, and the cache management strategy. In this paper we analyze response time
in a data broadcast system with the joint approach in which information items are structurally related to each other,
as in the WWW. Based on worst-case assumptions, we derive a lower bound on the system performance for a given
broadcast schedule, cache size, and cache management strategy. This result will be of help in designing and
developing a data broadcast system. We support our analysis by carrying out an extensive simulation of some
interesting proposed broadcast schedules and cache management strategies.
Keywords:
Push-based Data Broadcast, Broadcast Schedule, Cache, Linked Data Model
I. INTRODUCTION
Data broadcast has been considered a promising way of disseminating information to a massive number of users in a
wireless communication environment because of its scalability. In such an environment, since it is often not possible
or not desirable for clients to send explicit requests to the server, one-way push-based broadcast delivery is
becoming the method of main interest. In one-way push-based broadcast, the server broadcasts information items
periodically and continuously to all users without any feedback. To get a certain item, a user has to wait until
the server broadcasts it. This latency is a major concern in a one-way push-based data broadcast system.
Basically there are two approaches to alleviate this problem. One is to design a broadcast schedule at the server side
in a way to reduce the mean response time. The other is to utilize a local memory at the user side as a cache. The
cache selectively prefetches items from the broadcast and stores them prior to being requested, so in case a user
requests a prefetched item, it responds to the request instantaneously.
There has been much preceding research on this problem, especially on the issue of designing a broadcast
schedule (Su, Tassiulas and Tsotras (1999), Vaidya and Hammed (1996), Jiang and Vaidya (1998), Ammar (1987a,
1987b), Ammar and Wong (1985)). Vaidya et al. determined the relationship between a broadcast schedule and the
mean and variance of response time (Vaidya and Hammed (1996), Jiang and Vaidya (1998)). From the analytical
results, they developed conditions under which the mean and the variance of response time are minimized, respectively.
Problems of cache management were considered in Su and Tassiulas (1997, 1999), Acharya, Franklin, and Zdonik
(1996), and Aksoy and Franklin (1998). Su and Tassiulas proposed a method for the joint design of a server broadcast
schedule and a user caching strategy that minimizes the mean response time (Su and Tassiulas (1997)).
[Figure omitted: a Flat Data Model database of unrelated information items vs. a Linked Data Model database of inter-linked items.]
Figure 1. Flat Data Model and Linked Data Model
However, these studies have been carried out on a data model that we name the "Flat Data Model (FDM)." In FDM,
there is no explicit structural relation between information items. But in many real information systems, the
information database is hierarchically structured; each information item has explicit relations with others and even
contains explicit meta-information about them. For example, the well-known hyper-link information is contained
explicitly in HTML documents. We name this kind of data model the "Linked Data Model (LDM)." The concept of the
hyper-link is generally accepted as one of the key factors in the success of the WWW. Therefore, it is not unusual to
adopt a similar scheme in the context of the data broadcast system under consideration. Ammar used a similar data
model for the analysis of response time in a teletext system from the perspective of an individual user (Ammar
(1987a)). The obvious advantage of adopting LDM is that the locality of user requests can be found easily. As the
anticipation of user requests can be made effective, the performance improvement from a user cache would be
significant.
In this paper, we analyze response time in a data broadcast system with LDM and a user cache, and derive two
formulas related to the mean and the variance of response time. One is an upper bound on the mean response time, and
the other is an approximation of the variance of response time. To demonstrate the validity of our analysis, we
simulated a broadcast system using the broadcast schedules and cache management strategies proposed in the previous
studies described above. We confirmed the validity by comparing the response time from the simulation with that of
our analysis.
The rest of the paper is organized as follows. In Section II, we introduce a one-way push based data broadcast system
with LDM and user cache. Section III shows our analysis results regarding the relationship between response time
and broadcast schedule, cache strategy and user model. Section IV shows our simulation configuration and discusses
the results. We summarize this paper in Section V.
II. MODEL DESCRIPTION
The one-way push-based data broadcast system under consideration is depicted in Figure 2. This system consists of
three parts: a data model, a user model, and a broadcast schedule. In this section, we introduce each part of this system.
[Figure omitted: a broadcast server cycling through a schedule of M items over a broadcast channel to multiple user terminals, each with a local cache; the data model, user model, and broadcast schedule are the three components.]
Figure 2. Data Broadcast System with Linked Data Model and User Cache
1. Data/Request Model
In FDM, a user can request any available item independent of the previously requested item. However, in LDM,
every item has control information containing a list of linked items, like the hyper-link information in HTML
documents, and a user is restricted to requesting only a linked item of the previously requested item.
For a given data model, we can describe and predict the characteristics of user requests by determining request
probabilities for items. There may be two request probability models, depending on the perspective. One is from a
user's perspective and represents the characteristics of an individual user's requests; the other is from the server's
perspective and represents the characteristics of the aggregated user requests. While an individual user request
model may be used by the cache system at the user side, an aggregated user request model may be used by the
broadcast schedule system at the server side.
1.1 Individual / Aggregated User Request Model
In FDM, the probability that an item i is requested by a user u is independent of the user's previously requested
item; it is denoted by q_i(u). The request characteristics of a user in an FDM can be described by a probability
vector q(u) = (q_i(u)). In LDM, the probability that an item i is requested by a user u depends on the previously
requested item j. It may be expressed by the transition probability from j to i, denoted by p_ji(u). The request
characteristics of a user u in an LDM can be described by a transition probability matrix P(u) = (p_ij(u)). Assume
that there exists a path from i to j for every i, j pair. Then the model reduces to an irreducible discrete Markov
chain, so we can calculate the steady-state request probability π_i(u) for an item i from the equation
π(u) = π(u)P(u).
Whether the data model a server is serving is FDM or LDM, the aggregated user request model would be similar to
an FDM-based individual user request model. That is, the aggregated (global) user request probability g_i for item i
may be used to describe the aggregated user request characteristics. Assume that the number of users is N and every
user has the same average request generation rate. Then g_i can be easily derived from the individual user request
models of all the users that the server is serving: g_i = (1/N) Σ_u q_i(u) in FDM and g_i = (1/N) Σ_u π_i(u) in LDM.
If all users have the same individual user request model, g_i = q_i in FDM and g_i = π_i in LDM.
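The steady-state computation π(u) = π(u)P(u) can be sketched as follows. The 3-item transition matrix below is hypothetical, chosen only to illustrate the calculation; power iteration is one simple way to solve the fixed point for an irreducible, aperiodic chain.

```python
# Hypothetical 3-item LDM: P[i][j] is p_ij(u), the probability that the user
# requests item j next, given that the previous request was for item i.
P = [
    [0.1, 0.6, 0.3],
    [0.5, 0.2, 0.3],
    [0.4, 0.4, 0.2],
]

def steady_state(P, iters=2000):
    """Power iteration for pi = pi * P on an irreducible, aperiodic chain."""
    n = len(P)
    pi = [1.0 / n] * n  # start from the uniform distribution
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

pi = steady_state(P)
# Stationarity: applying P once more leaves the distribution unchanged.
assert all(abs(sum(pi[i] * P[i][j] for i in range(3)) - pi[j]) < 1e-9
           for j in range(3))
assert abs(sum(pi) - 1.0) < 1e-9
```

For this matrix the iteration converges to roughly π ≈ (0.338, 0.390, 0.273); a direct linear solve of (P^T − I)π = 0 with the normalization constraint would give the same result.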
1.2 Discussions on User Request Model
It is natural that questions arise about the user request models addressed in the previous section. The following
are two of them.
1) How can the actual probability values, such as q_i(u), p_ji(u) and g_i, be obtained or estimated by the system?
2) Is it necessary to convey those probability values from the clients to the server, or from the server to the
clients? If so, how can such information be conveyed?
For question 1), it would actually be very difficult to obtain a user request model that closely describes actual
user request behavior. A few previous studies (Jain and Werth (1995)) have addressed this issue and suggested some
possible approaches. We do not present a new approach for obtaining a user request model; we think the approaches
suggested in the previous studies could be borrowed for our system. However, we examine some other problems
related to this issue. That is, through the experiments in Section IV, we evaluate the system performance and its
characteristics when the predicted/applied individual user request model is inaccurate and when the individual user
request model differs from the aggregated user request model.
For question 2), we do not think it is necessary to convey such information between end systems. Even if it is
necessary, the amount of information is bounded. Values for the individual user request model would be estimated
and determined at each user system, where they are primarily used by the local cache. Of course, the individual
user request models obtained may be conveyed from the clients to the server to determine the aggregated user
request model. However, it would be more realistic for the server to determine the aggregated user request model
from the individual user request models of some selected users, or from its own prediction, without any on-line
communication with the clients. In the opposite direction, the aggregated user request probability information is
vital for the server in constructing a broadcast schedule, but it is not essential for the operation of a user
system. However, conveying the broadcast schedule information should be considered separately, and some previous
studies have addressed this issue (Vaidya and Hammed (1996)).
2. Broadcast Schedule
The most important result we have adopted from the previous broadcast schedule studies for the theoretical
development is the "equal spacing" property. Let t_k be the beginning time of the k-th transmission of an item i.
For a schedule with the equal spacing property, t_{k+1} − t_k is a constant independent of k. Let M be the total
number of available items at the server. We assume that these items are of the same size and numbered from 1 to M.
In a schedule with the equal spacing property, all transmissions of item i are equally spaced by a constant s_i, and
a schedule can be specified by a vector called the schedule vector, ⟨s_1, s_2, …, s_M⟩. It has been observed that
the broadcast schedule with minimum overall mean response time is obtained when the instances of each item are
equally spaced (Su, Tassiulas and Tsotras (1999), Ammar (1987b), Ammar and Wong (1985)). For example, given the
aggregated user request probability vector g = ⟨g_1, g_2, …, g_M⟩, it is found that an equally spaced schedule
vector s with s_i = (1/√g_i) Σ_{j=1}^{M} √g_j achieves the minimum mean response time (Vaidya and Hammed (1996),
Jiang and Vaidya (1998)). Many known broadcast schedule algorithms have been built on the equal spacing property
(Su, Tassiulas and Tsotras (1999), Vaidya and Hammed (1996), Jiang and Vaidya (1998), Ammar (1987b)). We consider
only schedules with this property in the following analysis.
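The mean-optimal spacing s_i = (1/√g_i) Σ_j √g_j can be sketched numerically. The request probabilities below are hypothetical; the sketch assumes unit-size items and unit channel bandwidth, so the broadcast frequencies 1/s_i must sum to 1 (one item per slot), and a cache-less request for item i waits s_i/2 on average.

```python
import math

# Hypothetical aggregated request probabilities for M = 4 items.
g = [0.4, 0.3, 0.2, 0.1]

total = sum(math.sqrt(gi) for gi in g)
s = [total / math.sqrt(gi) for gi in g]  # mean-optimal, equally spaced schedule

# Feasibility: with unit-size items on a unit-bandwidth channel, the
# per-item broadcast frequencies 1/s_i must sum to exactly 1.
assert abs(sum(1.0 / si for si in s) - 1.0) < 1e-9

# Mean response time without a cache: sum_i g_i * s_i / 2, which for this
# spacing collapses to the closed form (sum_i sqrt(g_i))^2 / 2.
mean_rt = sum(gi * si / 2 for gi, si in zip(g, s))
assert abs(mean_rt - total ** 2 / 2) < 1e-9
```

The closed form makes the trade-off visible: a more skewed g lowers Σ√g_i and hence the achievable mean response time.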
3. User Model
3.1 User Behavior Model
We assume that there are many users with the same individual user behavior model, depicted in Figure 3. With a
large user population, we may assume that the aggregate request generation process is a Poisson process of constant
rate (Vaidya and Hammed (1996)). In the individual user behavior model, a user is in either the "Waiting" or the
"Thinking" state. Having requested an item i, a user waits in the "Waiting" state until the request is answered.
After receiving a response, he/she shifts to the "Thinking" state. We assume that he/she thinks for a time that is
exponentially distributed with parameter λ_i before making the next request. Ammar addressed the validity of this
individual user behavior model (Ammar (1987a)).
[Figure omitted: two-state diagram. Start: request a new item → "Waiting"; requested item answered → "Thinking"; thinking done / request a new item → "Waiting".]
Figure 3. Individual User Behavior Model
3.2 User Cache Model
A user has a cache that can hold K items locally. At the end of each item transmission, the user may replace one of
the items in the cache with the newly transmitted one. The set of K items residing in the cache at time t is
denoted C_t. A request for item i generated at time t is satisfied immediately if i ∈ C_t. If i ∉ C_t, it is not
satisfied until the first broadcast of item i after t. Assume that a user requests items h, i, j in sequence at
times t_h, t_i, t_j respectively (see Figure 4). In LDM, i must be a linked item of h, and j must be a linked item
of i. To analyze the response time of the request for j, it is necessary to know whether j is in the cache at time t_j.
[Figure omitted: timeline showing the user requests for h, i, j at t_h, t_i, t_j against the server schedule; each inter-request gap is the waiting time plus the thinking time of the preceding item, and τ_j marks the last transmission time of j before t_j.]
Figure 4. Cache Example
Note that there is a time constraint in the cache system under consideration, because we adopt a cache system based
on passive prefetching: an item can be cached only when the server broadcasts it. Let τ_j be the last transmission
time of item j before t_j. For j ∈ C_{t_j}, j must be prefetched at τ_j.
In cache strategies based on LDM, we apply a restriction named "the last transmission time constraint" on the
allowable interval of τ_j for simplification. In an LDM cache strategy, the items linked to the current item are
likely to be cached; that is, for an item to have a high caching priority, the current item should be its parent.
From Figure 4, if τ_j is in the interval (t_i, t_j], j is a linked item of the current item i and would have a high
caching priority, but in the interval (t_h, t_i], j is not likely to be a linked item of the current item h and thus
would have a low caching priority. From these observations, we restrict the allowable interval of τ_j: for j to be
in the cache at t_j, τ_j must lie in the interval (t_i, t_j]. This interval is composed of the "waiting time" and
the "thinking time" of item i. Since the "waiting time" may be 0 if i is in the cache at t_i, we require that j be
transmitted during the "thinking time" of its parent item i.
In addition to satisfying the last transmission time constraint, j should not be replaced by items transmitted
during the interval (τ_j, t_j]. In a theoretical analysis, it is very difficult to know exactly which items are
transmitted during (τ_j, t_j]; in the worst case, they may be all items broadcast by the server. We therefore apply
another restriction, named "compete with all," requiring that j have at least K-th priority among all items
broadcast by the server.
Note that the last transmission time constraint is due to the limitation of a cache system based on passive
prefetching and is inevitable in any cache management strategy based on LDM, whereas the effect of the "compete
with all" condition depends on the cache size and cache management policy and is essentially affected by the
characteristics of the cache system. We stress that the above two conditions are worst-case assumptions made to
simplify the analysis, so in a real system j may be in C_{t_j} while meeting neither of them. Consequently, the
system performance analyzed under these assumptions is a lower bound on the real system performance.
III. ANALYSIS
Let μ_ij be the mean response time for an item-j request given that the previous request was for item i. The
probability that the requested item is j depends on the previous item i and is given by the transition probability
p_ij defined in the previous section. The probability that the previous item is i equals the steady-state request
probability π_i. The overall mean response time μ can then be derived as follows:

μ = Σ_{i=1}^{M} Σ_{j=1}^{M} π_i p_ij μ_ij   (1)
We now derive the conditional mean response time μ_ij. Assume that a request for j is issued at time T, and that
the last broadcast of j before T and the first broadcast of j after T are scheduled at times t_k and t_{k+1}
respectively (see Figure 5). According to the equal spacing property, t_{k+1} − t_k is always the constant s_j.

[Figure omitted: timeline with the two broadcasts of item j at t_k and t_{k+1}, the request time T, L = t_{k+1} − T, and Y = t_k − (T − s_j).]
Figure 5. Relations between T, Y, L, t_k

As the aggregated user request is governed by a Poisson process, T is uniformly distributed over (t_k, t_{k+1}].
Let L be the difference between t_{k+1} and T. Then L is also uniformly distributed over (0, t_{k+1} − t_k = s_j].
From the perspective of T, the last transmission time t_k of j lies in the range [T − s_j, T). Let Y be the
difference of t_k from T − s_j; then L = Y holds.

p.d.f. of L and Y:  f(y) = 1/s_j,  0 < y ≤ s_j   (2)
We define an indicator function γ(i, j, t) for a cache hit. Note that L and Y are independent of the cache hit.

γ(i, j, t) = 0 if j ∈ C_t;  1 if j ∉ C_t   (3)

Let R_ij be a random variable representing the response time of a request for item j. Then the following holds:

R_ij = γ(i, j, T)·L = γ(i, j, T)·Y,   μ_ij = E(R_ij)   (4)
Now consider the effect of the user cache. As mentioned in the user cache model, we carry out the analysis under
the worst-case assumptions for simplification; that is, we assume that an item is in the cache only when it meets
both "the last transmission time constraint" and "compete with all."
Given the user thinking time X_i = x_i for the previous item i, if the schedule interval s_j for j is no larger
than x_i, item j always satisfies the last transmission time constraint; it may not satisfy the constraint if s_j
is larger than x_i. As the last transmission time constraint is closely related to the user thinking time for the
previous item i, we define an indicator function for it:

δ(x_i, s_j) = 0 if x_i ≥ s_j;  1 if x_i < s_j   (5)

Even if it satisfies the last transmission time constraint, j may be replaced by other items transmitted during the
interval (t_k, T], so j must also meet the "compete with all" condition. We define another indicator function
φ_m(i, j, K) for it. Given a cache management policy m, a cache size K, and the last requested item i, φ_m(i, j, K)
indicates whether j would remain in the cache when all items are assumed to compete for the cache simultaneously:
0 stands for remaining in the cache, 1 for not remaining.
We now derive μ_ij for the two cases x_i ≥ s_j and x_i < s_j under a given cache size K and a given cache
management policy m, given X_i = x_i.
[Figure omitted: timeline for the case δ(x_i, s_j) = 0, where the thinking time x_i covers a full schedule interval of item j.]
Figure 6. In case δ(x_i, s_j) = 0

Case 1: δ(x_i, s_j) = 0
For every possible Y = y (0 < y ≤ s_j), the last transmission time constraint is satisfied, so a cache hit for
item j depends only on φ_m(i, j, K): γ(i, j, T) = φ_m(i, j, K). From (3) and (4), it follows that

E(R_ij | x_i, δ(x_i, s_j) = 0) = φ_m(i, j, K) · s_j / 2   (6)
Case 2: δ(x_i, s_j) = 1
(A) For 0 < y ≤ s_j − x_i, the last transmission time constraint is not satisfied: γ(i, j, T) = 1.
(B) For s_j − x_i < y ≤ s_j, the last transmission time constraint is satisfied, and a cache hit for item j depends
on φ_m(i, j, K): γ(i, j, T) = φ_m(i, j, K).

E(R_ij | x_i, δ(x_i, s_j) = 1) = (s_j − x_i)² / (2s_j) + φ_m(i, j, K) · x_i(2s_j − x_i) / (2s_j)   (7)

E(R_ij | x_i) is either (6) or (7) depending on δ(x_i, s_j). Then

μ_ij = E(E(R_ij | x_i)) = φ_m(i, j, K) · s_j/2 + (1 − φ_m(i, j, K)) ∫_0^{s_j} [(s_j − x_i)² / (2s_j)] λ_i e^{−λ_i x_i} dx_i   (8)
[Figure omitted: two timelines for the case δ(x_i, s_j) = 1.]
Figure 7. In case δ(x_i, s_j) = 1; (A): j is not broadcast during x_i; (B): j is broadcast during x_i
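Formula (8) can be cross-checked with a small Monte Carlo simulation of the worst-case cache model. The sketch below (ours, not from the paper) draws the thinking time x ~ Exp(λ_i) and the request phase y uniformly over (0, s_j]; a hit requires φ = 0 and y > s_j − x (the last transmission falling inside the thinking time), and a miss waits y. The φ = 0 branch is compared against the integral in (8) evaluated in closed form; the φ = 1 branch must reduce to s_j/2.

```python
import math
import random

def worst_case_mean_rt(s_j, lam, phi, trials=200_000, seed=1):
    """Monte Carlo estimate of E(R_ij) under the two worst-case conditions.

    phi is the 'compete with all' indicator: 0 means j keeps a cache slot,
    1 means it is evicted.  A hit additionally needs the last transmission
    of j to fall inside the thinking time x of the parent item (y > s_j - x).
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        x = rng.expovariate(lam)      # thinking time of the parent item
        y = rng.uniform(0.0, s_j)     # phase of the request within j's cycle
        hit = (phi == 0) and (y > s_j - x)
        if not hit:
            total += y                # wait until the next broadcast of j
    return total / trials

s_j, lam = 10.0, 0.1
# Closed form of (8) for phi = 0: the integral evaluates to
# s_j/2 - 1/lam + (1 - exp(-lam*s_j)) / (lam^2 * s_j).
analytic = s_j / 2 - 1 / lam + (1 - math.exp(-lam * s_j)) / (lam ** 2 * s_j)
mc0 = worst_case_mean_rt(s_j, lam, 0)
mc1 = worst_case_mean_rt(s_j, lam, 1)
assert abs(mc0 - analytic) < 0.05
# For phi = 1 the item never stays cached, so the mean reduces to s_j / 2.
assert abs(mc1 - s_j / 2) < 0.05
```

The two limits behave as expected: as λ grows (short thinking times) the φ = 0 mean approaches s_j/2, and as λ → 0 it vanishes like λ s_j²/6.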
From (1) and (8), we can derive the mean response time μ. The worst-case assumption on cache hits makes μ an upper
bound on the mean response time, and hence a lower bound on the system performance. Under an ideal cache strategy,
φ_m(i, j, K) = 0 may be achieved for every i, j. However, μ does not reduce to 0 even in that case; this is due to
the time constraint inherent in an LDM cache system based on passive prefetching.
Let σ² be the variance of the response time. We can derive E(R_ij²) in a similar way as we derived E(R_ij):

σ² = Σ_{i=1}^{M} Σ_{j=1}^{M} π_i p_ij E(E(R_ij² | x_i)) − μ²   (9)

E(E(R_ij² | x_i)) = φ_m(i, j, K) · s_j²/3 + (1 − φ_m(i, j, K)) ∫_0^{s_j} [(s_j − x_i)³ / (3s_j)] λ_i e^{−λ_i x_i} dx_i   (10)

From (1), (8), (9), and (10), we can derive the variance of the response time σ². Though the worst-case assumption
on cache hits does not guarantee that σ² is an upper bound on the real variance of the response time, it may still
serve as a helpful measure of the variance of response time in designing a data broadcast system with a user cache.
IV. SIMULATION AND DISCUSSION
Based on a simple simulator with event-scheduling capability, we developed a data broadcast system simulator with a
data/request model, a user model, and a broadcast schedule. Using it, we simulated a broadcast system with the
broadcast schedules and cache management strategies proposed in the previous studies. In this section, we present
the results of the analysis and the simulation and compare them; the comparison shows the validity of our analysis.
In addition, we examine the performance characteristics of the applied broadcast schedules and cache management
strategies under various configurations. For example, we evaluate the performance of a cache system when the
predicted/applied individual user request model is not consistent with the actual user request characteristics.
We also examine the case in which the aggregated user request model used by the server to determine the broadcast
schedule is far from the individual user request model of a user.
1. Configuration
[Figure omitted: a 30-item tree annotated with downward/upward transition probabilities on each link and the steady-state request probability at each node.]
Figure 8. User Request Model Example 1

[Figure omitted: the same 30-item tree with a different set of transition and steady-state probabilities.]
Figure 9. User Request Model Example 2
1.1 Data/Request Model
Our simulation is carried out on a linked data model composed of 30 information items (M = 30) of the same size
(1 unit size), structured as a hierarchical tree. Each item has no more than 3 child items, and the depth of the
tree is 4. It is assumed that each individual user's first request is always for the root (item 1). All subsequent
requests are restricted to the root, one of the "children," or the "parent" of the immediately preceding request,
so no item has more than 5 links. Figure 8 and Figure 9 show two user request model examples based on the same data
model. Two transition probability values are labeled on each link: the upper value is the downward transition
probability and the lower value is the upward transition probability. For example, in user request model example 1,
if the previously requested item is item 7, the user requests item 2, item 18, or item 19 with transition
probability 0.092, 0.646, and 0.184 respectively. The transition probabilities to the root are not shown explicitly
in the request model figures, but they can easily be determined by subtracting the other probabilities from 1; in
the case of item 7, the transition probability to the root is 0.078 = 1 − (0.092 + 0.646 + 0.184). The steady-state
request probability π_i is labeled at each information item node; these values are obtained by solving the equation
π = πP. Note that the two user request model examples are very different from each other.
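The implicit root transition probability can be recovered mechanically from any node's labeled probabilities, as the item-7 numbers above illustrate:

```python
# Item 7's labeled outgoing transition probabilities in example 1:
# to the parent (item 2) and to the children (items 18, 19).  The remaining
# probability mass goes to the root (item 1), which the figures leave implicit.
p_parent, p_child18, p_child19 = 0.092, 0.646, 0.184
p_root = 1.0 - (p_parent + p_child18 + p_child19)
assert abs(p_root - 0.078) < 1e-9
```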
To examine the effect on performance when the predicted model is inaccurate, or when the model applied at the
server differs from that of an individual user, we evaluate the response time under the following four scenarios.
In the following, the "Aggregated Model" is the aggregated user request model used by the server to determine the
broadcast schedule. There are two individual user request models for a user: the "Actual Individual Model," which
governs the actual requests of the user, and the "Predicted Individual Model," which is used by the cache system
at the user side to determine caching priorities. Each of these models conforms to one of the two example user
request models above.
(A) Aggregated Model (EX1) = Actual Individual Model (EX1) = Predicted Individual Model (EX1)
 g_i = π_i (example 1); p_ij(actual) = p_ij(predicted) = p_ij(example 1)
(B) Aggregated Model (EX1) ≠ Actual Individual Model (EX2) = Predicted Individual Model (EX2)
 g_i = π_i (example 1); p_ij(actual) = p_ij(predicted) = p_ij(example 2)
(C) Aggregated Model (EX1) = Actual Individual Model (EX1) ≠ Predicted Individual Model (EX2)
 g_i = π_i (example 1); p_ij(actual) = p_ij(example 1); p_ij(predicted) = p_ij(example 2)
(D) Aggregated Model (EX1) ≠ Actual Individual Model (EX2) ≠ Predicted Individual Model (EX1)
 g_i = π_i (example 1); p_ij(actual) = p_ij(example 2); p_ij(predicted) = p_ij(example 1)
1.2 Broadcast Schedule
The bandwidth of the broadcast channel of the system is 1 (unit size / unit time), so it takes 30 (unit time) to
transmit all items without duplication. We examine two broadcast schedule algorithms. One is the cyclic algorithm,
by which items are scheduled in order from 1 to M. It is trivial that the cyclic algorithm satisfies the equal
spacing property, with all items having the same spacing, i.e., 30 (unit time). The other is the α-algorithm
proposed by Vaidya et al. This algorithm attempts to achieve equality (11); if this equality is achieved, the
produced schedule has the equal spacing property, and the spacing s_i for item i is given by (12). Vaidya argued
that this algorithm trades off the mean and the variance of response time through the parameter α. When α = 2, the
α-algorithm reduces to "the Mean Optimal Algorithm" and the mean response time is minimized; when α is picked close
to 3, it is expected to produce a schedule with a small variance of response time at little penalty in mean
response time. We examine the α-algorithm for two values of α, 2 and 3. As we applied the same aggregated user
request model based on example 1, the schedule interval for each item i is the same in all scenarios and is shown
in Table 1.

s_i g_i^{1/α} = constant,  1 ≤ i ≤ M   (11)

s_i = (1 / g_i^{1/α}) Σ_{j=1}^{M} g_j^{1/α}   (12)
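The spacings in (12) can be sketched for both values of α; the request probabilities below are hypothetical, and the sketch again assumes unit-size items on a unit-bandwidth channel. The two checks confirm that (11) holds item by item and that the resulting per-item broadcast frequencies exactly fill the channel.

```python
# Hypothetical aggregated request probabilities (unit-size items, bandwidth 1).
g = [0.5, 0.3, 0.2]

def alpha_spacing(g, alpha):
    """Schedule intervals from (12): s_i = g_i^(-1/alpha) * sum_j g_j^(1/alpha)."""
    norm = sum(gj ** (1.0 / alpha) for gj in g)
    return [norm / (gi ** (1.0 / alpha)) for gi in g]

for alpha in (2, 3):
    s = alpha_spacing(g, alpha)
    # (11): s_i * g_i^(1/alpha) is the same constant for every item.
    consts = [si * gi ** (1.0 / alpha) for si, gi in zip(s, g)]
    assert all(abs(c - consts[0]) < 1e-9 for c in consts)
    # The broadcast frequencies 1/s_i sum to 1, so the schedule fills the channel.
    assert abs(sum(1.0 / si for si in s) - 1.0) < 1e-9
```

Note that α = 3 flattens the spacings relative to α = 2 (hot items are broadcast slightly less often, cold items slightly more), which is the mechanism behind the reduced variance.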
Table 1. Schedule intervals of α-algorithm
[Table content omitted: the schedule intervals s_i for items 1–30 under α = 2 and α = 3.]
1.3 User/Cache Model
There are 10 users with the same individual user behavior model, shown in Figure 3. For each item i, λ_i = λ; i.e.,
the user thinking time distribution is the same for all items. Due to the last transmission time constraint, the
user thinking time plays a significant role in cache performance, so we examine the response time over a range of
average thinking times 1/λ spanning from 5 to 90 (unit time). Over a sufficiently long simulation, each user keeps
traversing the actual individual request model selected for each scenario. Measuring the response time of each
request, we obtain the mean and variance of the response time of each user.
We examine the following four cache management strategies. The values of π_i and p_ij below are obtained from the
predicted individual user request model.
(1) GH: if π_j > π_h, then caching priority(j) > caching priority(h)
(2) GXI: if π_j s_j > π_h s_h, then caching priority(j) > caching priority(h)
(3) LH: given current item i, if p_ij > p_ih, then caching priority(j) > caching priority(h)
(4) LXI: given current item i, if p_ij s_j > p_ih s_h, then caching priority(j) > caching priority(h)
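The four priority rules above can be sketched as a single scoring function; the probabilities and intervals below are hypothetical placeholders, and a real cache would keep the K highest-scoring items among the candidates.

```python
# Sketch of the four caching-priority rules, with hypothetical model values:
# pi: predicted steady-state probabilities, p: predicted transition
# probabilities, s: schedule intervals.

def priority(strategy, j, pi, p, s, i=None):
    """Return a score; a larger score means a higher caching priority for item j."""
    if strategy == "GH":    # global: steady-state probability
        return pi[j]
    if strategy == "GXI":   # global: probability x schedule interval
        return pi[j] * s[j]
    if strategy == "LH":    # local: transition probability from current item i
        return p[i][j]
    if strategy == "LXI":   # local: transition probability x schedule interval
        return p[i][j] * s[j]
    raise ValueError(strategy)

pi = {1: 0.5, 2: 0.3, 3: 0.2}
s = {1: 10.0, 2: 25.0, 3: 30.0}
p = {1: {2: 0.7, 3: 0.3}}   # transitions out of the current item 1

# GH prefers the globally hotter item 2; the XI variants additionally weight
# each candidate by how long a miss on it would have to wait.
assert priority("GH", 2, pi, p, s) > priority("GH", 3, pi, p, s)
assert priority("LXI", 2, pi, p, s, i=1) > priority("LXI", 3, pi, p, s, i=1)
```

The weighting by s_j captures the intuition that a miss on a sparsely scheduled item is more expensive, so such items deserve cache slots even at somewhat lower request probability.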
GH and GXI are cache strategies based on FDM, while LH and LXI are based on LDM. To apply the results of the
analysis, it is necessary to determine the indicator function φ_m(i, j, K) for each cache strategy; it can be
determined easily from the user request model and the given parameters i, j, and K. Note that GH and GXI are not
based on LDM and are free from the last transmission time constraint. We therefore use (13) and (14) instead of
(8) and (10) to analyze the mean and variance of response time under GH and GXI; thus, GH and GXI show performance
independent of the user thinking time. Being free of the time constraint, the analysis results for GH and GXI are
not an upper bound on the performance.

μ_ij = E(E(R_ij | x_i)) = φ_m(i, j, K) · s_j/2   (13),   E(E(R_ij² | x_i)) = φ_m(i, j, K) · s_j²/3   (14)
Because all schedule interval values are the same under the cyclic broadcast schedule, GH = GXI and LH = LXI there.
If the aggregated user request model is identical to the predicted individual user request model
(π_i (predicted) = g_i; scenarios (A) and (D)), then GH = GXI under the α-algorithm as well, from (15):

g_j > g_h ⇔ g_j^{1−1/α} > g_h^{1−1/α} ⇔ g_j^{1−1/α} Σ_{i=1}^{M} g_i^{1/α} > g_h^{1−1/α} Σ_{i=1}^{M} g_i^{1/α} ⇔ g_j s_j > g_h s_h   (15)
We evaluate the performance while varying the cache size K from 1 to 5. The last transmission time constraint makes
a cache size K of less than 6 sufficient here, because the number of items that can be requested from any item is
restricted to 5. In a real situation, however, a cache holding more than 5 items would still be helpful.
2. Results
[Figure omitted: four panels — Mean Response Time - GH, Mean Response Time - LH, Standard Deviation of Response Time - GH, Standard Deviation of Response Time - LH.]
Figure 10. Cyclic Schedule; X-axis: mean user thinking time 1/λ ∈ {5, 10, 20, 30, 40, 50, 90}; solid line: simulation result, dotted line: analysis result; line markers indicate the cache size K (0 to 5)
[Figure 11: four panels, from the left — Mean Response Time (GH), Mean Response Time (LH), Standard Deviation of Response Time (GH), Standard Deviation of Response Time (LH); axis tick values omitted.]
Figure 11. α-2 Schedule; X-axis: mean user thinking time 1/λ ∈ {5, 10, 20, 30, 40, 50, 90}; solid line: simulation result, dotted line: analysis result; lines are distinguished by marker according to cache size K (0-5).
[Figure 12: four panels, from the left — Mean Response Time (GH), Mean Response Time (LH), Standard Deviation of Response Time (GH), Standard Deviation of Response Time (LH); axis tick values omitted.]
Figure 12. α-3 Schedule; X-axis: mean user thinking time 1/λ ∈ {5, 10, 20, 30, 40, 50, 90}; solid line: simulation result, dotted line: analysis result; lines are distinguished by marker according to cache size K (0-5).
First, in order to compare the results of analysis and simulation, we present figures that plot both. We arrange the figures based on the adopted broadcast schedule. Figures 10, 11, and 12 show the response time results for the cyclic, α-2, and α-3 schedules respectively under scenario (A). Each figure consists of four sub-figures; from the left: the mean response time of cache strategy GH, the mean response time of LH, and the standard deviations of the response time for GH and LH. In each sub-figure, the y-axis shows the response time (mean or standard deviation), and the x-axis shows the thinking time parameter 1/λ, which is varied from 5 to 90. Lines are distinguished by marker according to cache size K (0-5). Dotted lines represent the results of analysis, and solid lines the results of simulation. We argue that the mean response time from our analysis is an upper bound of the real mean response time. The variance of response time from our analysis is not guaranteed to be an upper bound of the real variance, but it is a helpful measure of it.
Figures 10, 11, and 12 demonstrate the validity of these arguments.
The difference between the two results is relatively small for long thinking times but relatively large for short ones. This shows that the two assumptions behind a cache hit are hard to satisfy with short thinking time. For example, the compete-with-all condition assumes that all items may be transmitted during the thinking time. However, since the number of items transmitted during a short thinking time is small, this assumption does not hold in most such cases. Although the upper bound obtained with short thinking time is somewhat loose and it would be worthwhile to devise tighter assumptions for that regime, we think the result is still useful in many cases.
We obtained similar plots supporting the validity of our analysis in the other scenarios and cache management strategies; we omit them due to space limitations. Table 2, Table 4, Table 5, and Table 6 list the mean and standard deviation of response time obtained in all scenarios.
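As a sanity check on the no-cache cyclic figures, a request arriving uniformly within a flat cycle of M unit-length items waits Uniform(0, M), with mean M/2 and standard deviation M/√12. M = 30 below is an assumption (the item count is not stated in this excerpt), chosen to match the cyclic entries in Table 2.

```python
import math

# No-cache response time for a flat cyclic schedule of M unit-length items:
# the wait is Uniform(0, M), so mu = M/2 and sigma = M/sqrt(12).
# M = 30 is an assumption, not a value stated explicitly in this excerpt.
M = 30
mu = M / 2.0
sigma = M / math.sqrt(12.0)
print(round(mu, 3), round(sigma, 3))   # 15.0 8.66
```

Any schedule that spaces some items more tightly than others can only beat this baseline when demand is skewed, which is why the α-schedules help in scenarios (A) and (C) but not in (B) and (D).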
Table 2. Mean (μ) and standard deviation (σ) of response time without cache.
[Table 2 data: rows = broadcast schedules (cyclic, α-2, α-3); column groups = scenarios (A, B and C, D), each reporting μ and σ. Cyclic schedule: μ = 15.000, σ = 8.660 in every scenario; the remaining entries are not recoverable from the extraction.]
3. Discussion
We now discuss some interesting results from these observations. Table 3 summarizes and compares the overall performance of the broadcast schedules and cache management strategies for each scenario.
Table 3. Overall performance comparison result; = : equal, ≈ : similar, X < Y : Y is better than X, << : much better
Scenario | Schedule             | Under Cyclic         | Under α-2            | Under α-3
A        | Cyclic << α-2 ≈ α-3  | GH = GXI << LH = LXI | GH = GXI << LH < LXI | GH = GXI << LH < LXI
B        | α-2 < α-3 < Cyclic   | GH = GXI << LH = LXI | GH < GXI << LH ≈ LXI | GH < GXI << LH ≈ LXI
C        | Cyclic << α-2 ≈ α-3  | GH = GXI << LH = LXI | GXI < GH << LH ≈ LXI | GXI < GH << LH ≈ LXI
D        | α-2 < α-3 < Cyclic   | GH = GXI << LH = LXI | GH = GXI << LH < LXI | GH = GXI << LH < LXI
For all broadcast schedules and scenarios, the overall performance of the LDM based cache strategies is better than that of the FDM based ones. While the FDM based strategies are independent of user thinking time, as mentioned earlier, the LDM based strategies improve with longer user thinking time. It is notable that, in the LDM based strategies, a long user thinking time with a small cache performs better than a short user thinking time with a large cache. With enough user thinking time, the LDM based strategies are also less vulnerable both to inaccuracy in the predicted individual user request model (scenarios (C) and (D)) and to the difference between the actual individual request model and the aggregated user request model.
For example, LH/LXI with an inaccurate individual user request model (scenario (C)) perform better than GH/GXI with an individual user request model identical to the actual one (scenario (A)). Likewise, LH/LXI applied when the actual individual user request model is far from the aggregated user request model (scenario (B)) perform better than GH/GXI applied when the two are consistent (scenario (A)). The locality benefit exploited in LDM with long user thinking time appears high enough to countervail both the inaccuracy of the prediction and the difference between the individual and aggregated user request models.
LXI, which takes the schedule interval into consideration as well as the transition probability, performs better than LH, which considers the transition probability only. GH and GXI also show a small performance gap when the α-algorithm is applied. However, they differ from LH and LXI in their inconsistency: GH is better than GXI under scenario (C), but GXI is better than GH under scenario (D).
The α-algorithm is better than the cyclic algorithm in terms of both the mean and the variance of response time when the actual individual user request model is identical to the aggregated user request model (scenarios (A) and (C)). However, the α-algorithm performs very poorly when the actual individual user request model is very different from the aggregated one (scenarios (B) and (D)); indeed, the α-2 schedule shows the worst performance under these scenarios. We think that the variance-minimizing characteristic of the α-3 schedule makes it relatively less vulnerable to this problem.
V. CONCLUSION
In this paper, we introduce a new data model named the "Linked Data Model" (LDM) for the information database of a data broadcast system. Taking advantage of the LDM, user requests can be anticipated effectively, so a significant performance improvement from a user cache can be achieved. Quantifying this improvement is necessary for choosing the user cache strategy and size. We analyze the mean and the variance of response time of a data broadcast system with LDM and a user cache. Under a set of assumptions, we derive two expressions for the response time: one is an upper bound of the mean response time, and the other is an approximation (with no guarantee of being a bound) of the variance of response time. To demonstrate the result of our analysis, we simulate a broadcast system with the broadcast schedules and cache management strategies proposed in previous studies and present the results.
Future research will consider the response time under more general assumptions as well as various configurations, such as different item sizes, multiple broadcast channels at the server, an impatient user model, and so on. In addition, future research will consider how the response time with a user cache can be optimized.
REFERENCES
C. J. Su, L. Tassiulas and V. J. Tsotras. (1999). "Broadcast Scheduling for Information Distribution". ACM/Baltzer Journal of Wireless Networks 5(2): 137-147.
Nitin H. Vaidya and Sohail Hameed. (1996). "Scheduling Data Broadcast in Asymmetric Communication Environments". Technical Report 96-022, Computer Science Department, Texas A&M University, College Station.
Shu Jiang and Nitin Vaidya. (1998). "Response Time in Data Broadcast Systems: Mean, Variance and Trade-Off". Proceedings of the Workshop on Satellite Based Information Services (WOSBIS), Dallas, TX.
M. Ammar. (1987). "Response Time in a Teletext System: An Individual User's Perspective". IEEE Transactions on Communications, Vol. 35, No. 11, November 1987.
M. Ammar and J. W. Wong. (1987). "On the Optimality of Cyclic Transmission in Teletext Systems". IEEE Transactions on Communications, COM-35(1): 68-73.
M. Ammar and J. W. Wong. (1985). "The Design of Teletext Broadcast Cycles". Performance Evaluation, 5(4): 235-242.
C. J. Su and L. Tassiulas. (1999). "Joint Broadcast Scheduling and User's Cache Management for Efficient Information Delivery". ACM Journal on Wireless Networks.
L. Tassiulas and C. J. Su. (1997). "Optimal Memory Management Strategies For Mobile Computing". IEEE Journal on Selected Areas in Communications, Vol. 15, No. 7.
S. Acharya, M. Franklin, and S. Zdonik. (1996). "Prefetching from a Broadcast Disk". In Proceedings of the International Conference on Data Engineering.
Demet Aksoy and Michael Franklin. (1998). "Scheduling for Large-Scale On-Demand Data Broadcasting". In Proceedings of IEEE INFOCOM.
R. Jain and J. Werth. (1995). "Airdisks and airraid: modeling and scheduling periodic wireless data broadcast". Computer Architecture News, 23(4): 23-28.
Table 4. Mean (μ) and standard deviation (σ) of response time for the cyclic schedule; scen-K: Scenario-Cache Size; G-μ, G-σ: G* cache results; μ-#, σ-#: L* cache results, # = mean user thinking time = 1/λ.
[Table 4 data: for each scenario-cache size pair (A-D, K = 1-5), μ and σ for GH = GXI and for LH = LXI at mean thinking times 1/λ = 5, 10, 20, 30, 40, 50, 90; the numeric entries are not recoverable from the extraction.]
Table 5. Mean (μ) and standard deviation (σ) of response time for the α-2 schedule; scen-K: Scenario-Cache Size; G-μ, G-σ: G* cache results; μ-#, σ-#: L* cache results, # = mean user thinking time = 1/λ.
[Table 5 data: for each scenario-cache size pair (A-D, K = 1-5), μ and σ for GH, GXI, LH, and LXI at mean thinking times 1/λ = 5, 10, 20, 30, 40, 50, 90; the numeric entries are not recoverable from the extraction.]
Table 6. Mean (μ) and standard deviation (σ) of response time for the α-3 schedule; scen-K: Scenario-Cache Size; G-μ, G-σ: G* cache results; μ-#, σ-#: L* cache results, # = mean user thinking time = 1/λ.
[Table 6 data: for each scenario-cache size pair (A-D, K = 1-5), μ and σ for GH, GXI, LH, and LXI at mean thinking times 1/λ = 5, 10, 20, 30, 40, 50, 90; the numeric entries are not recoverable from the extraction.]