Optimizations of Stored VBR Video Transmission
on CBR Channel
Ray-I Chang, Meng Chang Chen, Jan-Ming Ho and Ming-Tat Ko
Institute of Information Science, Academia Sinica
Nankang, Taipei 11529, Taiwan
ABSTRACT
This paper presents a linear-time method to optimize stored VBR video transmission on a CBR channel. Given the transmission bandwidth, we have previously presented a linear-time method to minimize both the client buffer size and the playback work-ahead. In this paper, the service connection time is minimized to maximize the network utilization for the given transmission bandwidth and client buffer. The required work-ahead is also minimized for the minimum response time. The proposed scheme extends easily to transmit VBR video with minimum rate variability. Experiments on many well-known benchmark videos indicate that the proposed method obtains better (or at least comparable) resource utilization and requires a smaller memory buffer than conventional approaches. For the transmission of the MPEG-1 video Advertisements with the same work-ahead time, our results show better network utilization than that of D-BIND. When transmitting the long MPEG-1 movie Star War, our approach uses a smaller memory buffer than the combination of MVBA and D-BIND to achieve the same network utilization.
Keywords: VBR stream, CBR channel, optimization, minimum buffer, minimum work-ahead, maximum utilization
1. INTRODUCTION
In a distributed multimedia system, media data (such as video and audio) are transmitted over high-speed networks. They must be played back continuously at remote client sites with end-to-end quality of service (QoS) guarantees. Generally, a multimedia file is much larger than a conventional file, and it usually carries timing constraints imposed by real-world applications. These special needs imply that the conventional system (client, network, and server) should be tailored. As multimedia streams consume large network bandwidth and demand rigorous response times for work-ahead, the design of a network transmission system with high utilization and low work-ahead is key to meeting these needs. Given the allocated network bandwidth, we can minimize the client buffer size and the work-ahead time. In this paper, the network utilization is maximized for the applied client buffer size and network bandwidth. The required work-ahead is also minimized for the shortest response time. Here, network utilization is measured as the bandwidth consumed by the transmission streams divided by the peak transmission bandwidth allocated on the communication network; generally, higher network utilization means more streams can be admitted.
As system resources are usually allocated exclusively in fixed-size chunks to each service stream, it is relatively simple for networks to support CBR (constant-bit-rate) service [1]. In 1996, Grossglauser and Keshav [1] investigated the performance of CBR traffic in large-scale networks with many connections and switches, and developed a framework for simulation. They concluded that CBR network queuing delay is less than one cell time per switch even under heavy loading. This is in contrast to VBR (variable-bit-rate) network services, in which network queuing delay is usually one of the most significant factors in end-to-end delay, so the QoS for media playback is not guaranteed. Although CBR service is good in terms of network management complexity and overhead, media streams are VBR in nature due to coding and compression technologies [1,2]. For example, in MPEG-1 video, the average rate of a VBR stream is usually less than 25% of its peak rate. It would be a waste of network bandwidth to allocate a peak-rate channel to transmit such a VBR stream. One solution to this resource under-utilization problem is to use a statistical service model to multiplex VBR streams so that they share the entire network bandwidth. However, as QoS is only statistically guaranteed, this is not suitable for applications that require deterministically guaranteed QoS.
(E-mail of the authors: {william,mcc,hoho,[email protected].)
In this paper, we focus on the transmission of stored VBR video streams on a CBR channel with deterministically guaranteed QoS. Different from live media, the characteristics of stored media can be analyzed off-line to allow better control of on-line transmission. A typical example of a deterministic guaranteed service model is the deterministic bounding interval dependent (D-BIND) traffic model [3,4]. D-BIND can model VBR traffic more accurately than models based on peak rate, average rate, and rate variance [3]. However, the required buffer size is large and the obtained network utilization is low. In 1996, Salehi et al. [5] studied the traffic smoothing problem: given a fixed work-ahead and a client buffer, minimize the rate variation in network transmission. Based on this minimum variability bandwidth allocation (MVBA) method [6], a combination of MVBA with the D-BIND service model (MVBA+D-BIND) was presented to transport VBR streams. However, as this approach is a VBR transmission method, admission control and network transmission scheduling at intermediate nodes are complicated in terms of analyzing and implementing the VBR service model. Recently, a deterministic guaranteed CBR service called constant-rate transmission and transport (CRTT) was presented in [7]. CRTT transmits video data to a client at a constant rate. The client plays back the first frame of video data after receiving a certain amount of data called the build-up. By choosing an appropriate transmission rate and build-up, the authors showed that the client buffer can be minimized using dynamic programming. As CRTT transmits data at a constant rate for the entire connection time, it has the drawback of requiring a large memory buffer. This motivates us to design a new deterministic CBR transmission scheme.
In this paper, a linear-time transmission scheme is proposed to optimize VBR stream transmission on a CBR channel. The goal, given a transmission rate (the allocated network bandwidth), is to minimize the client buffer requirement and the playback work-ahead while maximizing the network utilization. Given the allocated network bandwidth, we have previously presented a linear-time method to decide the minimum client buffer size and the minimum playback work-ahead. In this paper, our design decides a transmission schedule for a video stream such that its network utilization is maximized for the given client buffer size and network bandwidth. Based on the minimum client buffer, work-ahead, and network connection time, a modified MVBA method is introduced to optimally smooth the traffic with minimum rate variability under maximum network utilization. In some cases, the obtained rate variability is even smaller than that of MVBA. The remainder of this paper is organized as follows. In Section 2, problem definitions and related mathematical formulations are given. The proposed algorithms are presented in Section 3. In Section 4, experimental results are shown. Concluding remarks and future directions are offered in Section 5.
2. PROBLEM DEFINITION
This paper addresses the problem of transmitting VBR video via CBR network services to support VOD (video-on-demand) and other similar applications. The primary goal is to design a network transmission schedule that optimizes system parameters such as client buffer requirement, network utilization, and playback work-ahead. Consider the end-to-end transmission of stored video streams from a video server to a client across the network. For each video stream, data are first retrieved from the storage subsystem and stored in the server buffer as dictated by a retrieval scheduler (such as GSS [8] or SCAN-EDF [9]). In this paper, it is assumed that the retrieval scheduler always retrieves data into the server buffer before it is needed by the transmission scheduler. The transmission scheduler then reads data from the server buffer and sends it to the client at the proper time. To concentrate on the formalization of the problem, assume no network transmission errors and zero network delay (i.e., a perfect network). At the client side, incoming data streams are temporarily stored in the bounded-capacity client buffer. Video data stored in the client buffer is then retrieved and played back frame by frame periodically by the playback scheduler. If a frame arrives late or is incomplete at its scheduled playback time, unpleasant jitter effects are perceived by the audience. To avoid jittery playback, the transmission schedule must always be ahead of the playback schedule so that the client buffer does not underflow.
Given a video stream V = {f_0, f_1, ..., f_{n-1}; T_f}, its cumulative frame size is F_i = F_{i-1} + f_i. The cumulative playback function CPF(t) is defined as

    CPF(t) = 0,        if t < 0;
             F_i,      if i*T_f <= t < (i+1)*T_f, where 0 <= i < n-1;
             F_{n-1},  if (n-1)*T_f <= t.

Figure 1. (a) A VBR video stream and (b) its cumulative playback function CPF(t).

Assume that a video stream is played back starting at t = 0. Figure 1 presents an example of a video stream V and its cumulative playback function CPF(t).
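The staircase function CPF(t) can be evaluated directly from the frame sizes. The following Python fragment is an illustrative sketch of the definition above (the function name is ours, not from the paper):

```python
def cumulative_playback(frame_sizes, t, Tf=1.0):
    """CPF(t): total amount of data consumed by the playback schedule
    up to time t, for frames f_0..f_{n-1} consumed one per period Tf,
    starting at t = 0."""
    if t < 0:
        return 0
    n = len(frame_sizes)
    i = min(int(t // Tf), n - 1)     # index of the last frame consumed
    return sum(frame_sizes[:i + 1])  # F_i
```

For example, with frame sizes 2, 3, and 5 and T_f = 1, CPF jumps from 2 to 5 at t = 1 and reaches 10 at t = 2.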
In this paper, as in CRTT, a network transmission schedule that transmits data at a constant rate r is studied. That is, the network transmitter either sends data at its constant transmission rate or sends no data at all. This transmission model is generally referred to as on-off transmission [10]. An on-off transmission schedule is defined as sigma = {r; t_0, t_1, ..., t_{2m+1}} with

    sigma(t) = r,  if t in (t_{2i}, t_{2i+1}), 0 <= i <= m;
               0,  otherwise.

The cumulative transmission function Γ(t) is defined as the integral of sigma(t), and Γ(kT_f) is denoted Γ_k. Γ(t) is the amount of data sent by the transmitter up to time t. Following the zero-network-delay assumption, Γ(t) also denotes the amount of data received by the receiver up to time t; this can easily be generalized to networks with a bounded delay [3]. For jitter-free playback from time t = 0, the transmission schedule must be ahead of the playback of the video stream by a sufficient amount of work-ahead w = -t_0. Consider a transmission schedule sigma that is feasible for transporting a video stream V. Since Γ(t) - CPF(t) is the transmitted data temporarily stored in the client buffer, the required client buffer size is b_{V,sigma} = max{Γ(t) - CPF(t), for all t}. Instead of computing over the entire continuous time domain, it suffices to compute the required buffer size at m+1 sample points; the correctness of this simplification is easily proved. Obviously, the buffer size b_{V,sigma} is not smaller than the maximum frame size max{f_i}, and not larger than the size of the video stream V. Figure 2 illustrates the cumulative playback function of a video stream, a feasible cumulative transmission function, the work-ahead, and the buffer size at the client. If a client has limited buffer space b, then b >= b_{V,sigma} must obviously hold for a feasible transmission schedule to exist.
Given a transmission rate, we have previously presented a linear-time algorithm to minimize the client buffer size and the playback work-ahead. In this paper, network utilization is considered as another important system parameter to be optimized. The network utilization u_{V,sigma} is defined as the ratio of the video stream size |V| to the bandwidth allocated to the video stream over the entire network connection time:

    u_{V,sigma} = |V| / (r * T_c) = T_ON / T_c,

where T_ON = sum_{i=0}^{m} (t_{2i+1} - t_{2i}) and T_c = t_{2m+1} - t_0 denote the total on time and the total connection time, respectively. Based on the optimized client buffer size, playback work-ahead, and network connection time, we modify the MVBA scheme [5] to minimize rate variability under maximum network utilization. Notice that, in the above definitions, a discrete model in which the client plays back one frame per unit time is used to formulate the problem. A finer-granularity time discretization could also be applied without loss of generality.
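As a concrete illustration of these definitions, the buffer requirement and utilization of a candidate schedule can be computed as below. This is a sketch under the paper's discrete model; the function names are ours, and Γ is assumed to be sampled at the frame boundaries:

```python
def required_buffer(gamma, F):
    """b_{V,sigma} = max_t (Gamma(t) - CPF(t)).  With gamma[k] = Gamma(k*Tf)
    and F[k] = F_k, the maximum for an on-off schedule occurs just before
    a frame is consumed, i.e. Gamma(k*Tf) - F_{k-1}."""
    best = gamma[0]                      # before frame 0 is consumed
    for k in range(1, len(gamma)):
        best = max(best, gamma[k] - F[k - 1])
    return best

def utilization(stream_size, r, t_start, t_end):
    """u_{V,sigma} = |V| / (r * Tc), with Tc = t_{2m+1} - t_0."""
    return stream_size / (r * (t_end - t_start))
```

For instance, a schedule that has delivered 4, 8, and 10 units by the first three frame boundaries, against cumulative frame sizes 2, 5, and 10, needs a buffer of 6 units.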
Figure 2. An illustrative example of the relations among the cumulative transmission function Γ(t), the cumulative playback function CPF(t), the work-ahead and the client buffer size.
Figure 3. A simple processing example of the lazy scheme: given the transmission rate r, minimize the buffer size and the work-ahead.
3. PROPOSED APPROACHES
Given a video stream V = {f_0, f_1, ..., f_{n-1}; T_f}, let F_k, for k >= 0, denote the cumulative frame size of the video stream V. The lazy transmission sequence L_k, k >= 0, is defined as

    L_i = |V|,  for all i >= n-1, and
    L_k = max{F_k, L_{k+1} - rT_f},  for 0 <= k < n-1.

The related transmission schedule sigma_L = {r; t_0, t_1, ..., t_{2m+1}} can easily be computed by a linear-time algorithm [11]. Note that, as data is transmitted to the client as late as possible, the minimum required client buffer and playback work-ahead are obtained. An example illustrating the computation of the lazy transmission schedule sigma_L is shown in Figure 3.
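The backward recurrence above can be coded directly. The following sketch (our own illustrative code, not the authors' implementation) computes the lazy sequence in O(n) time:

```python
def lazy_sequence(frames, r, Tf=1.0):
    """Lazy transmission sequence: L_{n-1} = |V| and, going backward,
    L_k = max(F_k, L_{k+1} - r*Tf).  L_k is the cumulative amount that
    must have been sent by time k*Tf when data is delivered as late as
    possible at constant rate r."""
    F, total = [], 0
    for f in frames:                 # cumulative frame sizes F_k
        total += f
        F.append(total)
    n = len(frames)
    L = [0.0] * n
    L[n - 1] = total                 # L_{n-1} = |V|
    for k in range(n - 2, -1, -1):
        L[k] = max(F[k], L[k + 1] - r * Tf)
    return L
```

For frames of sizes 2, 3, and 5 with r = 4 and T_f = 1, the sequence is L = [2, 6, 10]: each prefix is sent just in time for its playback deadline.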
Note that, in a lazy transmission schedule, some horizontal segments with off transmission states are introduced to keep the client buffer size as small as possible. However, the client buffer is not necessarily at its maximum on all these segments; the client buffer and the network bandwidth are not fully utilized during these time periods. In this paper, based on the minimum client buffer b̂_V(r) and the minimum work-ahead ŵ_V(r) obtained, a transmission schedule of V is designed to fully utilize the allocated client buffer and network bandwidth.

Figure 4. An example of the aggressive scheme sigma_A: given the transmission rate r, the buffer size, and the initial delay, maximize the network utilization with a smoothed transmission.

This aggressive transmission sequence A_k, k >= 0, is defined as

    A_0 = L_0, and
    A_k = min{|V|, F_k + b̂_V(r), A_{k-1} + rT_f},  for all k > 0.

A simple example of the processing of the aggressive scheme is shown in Fig. 4. As the aggressive scheme transmits data to the client whenever the client buffer is not full, the buffer stays fuller and the schedule is therefore more resilient to transmission delay. Moreover, the obtained network connection time is the shortest possible, which maximizes the network utilization. The related aggressive transmission schedule sigma_A = {r; t_0, t_1, ..., t_{2m+1}} can be computed by the following algorithm.
Algorithm: Aggressive Scheme
input: a constant rate r, a video stream V, the client buffer b̂_V(r) and the work-ahead ŵ_V(r)
output: a transmission schedule sigma_A

Compute A_0 = L_0, and A_k = min{|V|, F_k + b̂_V(r), A_{k-1} + rT_f}, for all k > 0;
t_0 = -A_0 / r;  j = 1;
for k = 0 to n-1 do
begin
    if A_{k+1} - A_k < rT_f then
    begin
        t_j = kT_f + (A_{k+1} - A_k) / r;
        t_{j+1} = (k+1) T_f;
        j = j + 2;
    end
end
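A direct transcription of the aggressive scheme into Python might look as follows. This is an illustrative sketch under the paper's discrete model; `aggressive_sequence` and `on_intervals` are our names, and A_0 = L_0 is supplied from the lazy scheme:

```python
def aggressive_sequence(frames, r, b_hat, A0, Tf=1.0):
    """A_0 = L_0; A_k = min(|V|, F_k + b_hat, A_{k-1} + r*Tf) for k > 0.
    A_k is the cumulative amount sent by time k*Tf when data is pushed
    as early as the buffer bound b_hat allows."""
    F, total = [], 0
    for f in frames:
        total += f
        F.append(total)
    A = [A0]
    for k in range(1, len(frames)):
        A.append(min(total, F[k] + b_hat, A[-1] + r * Tf))
    return A, total

def on_intervals(A, r, total, Tf=1.0):
    """Extract the on periods of the on-off schedule {r; t_0, t_1, ...}:
    the transmitter idles inside slot k whenever A_{k+1} - A_k < r*Tf,
    mirroring the pseudocode's t_j computation (t_0 = -A_0 / r)."""
    intervals, start = [], -A[0] / r
    for k in range(len(A) - 1):
        sent = A[k + 1] - A[k]
        if A[k + 1] >= total:              # transmission ends in slot k
            intervals.append((start, k * Tf + sent / r))
            break
        if sent < r * Tf:                  # idle, resume at next slot
            intervals.append((start, k * Tf + sent / r))
            start = (k + 1) * Tf
    return intervals
```

For frames 2, 3, and 5 with r = 4, b̂ = 6 and A_0 = L_0 = 2, the sequence is A = [2, 6, 10] and the schedule is a single on period from t_0 = -0.5 to t = 2.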
Note that, given a transmission rate r, the aggressive scheme can compute a maximum-utilization transmission schedule for any feasible client buffer size b' > b̂_V(r) and any feasible work-ahead w' > ŵ_V(r). The proof is as follows.
Figure 5. An example of the proposed maximum network utilization MVBA scheme: given a feasible buffer size, minimize the rate variability (the original MVBA vs. our modified MVBA).
Theorem 3.1. Given the feasible client buffer and work-ahead, the aggressive scheme computes a transmission schedule with the maximum network utilization.

Proof: To show that the maximum network utilization is obtained, the connection time of the transmission schedule sigma_A must be proved to be minimum. Assume there exists a transmission schedule sigma' whose connection time is shorter than that of sigma_A. Since both schedules start transmission with the same work-ahead time, sigma' must complete earlier than sigma_A, so there is some index i with Γ'_i > A_i. Consider the following three cases:

(1) There is some i' <= i such that A_{i'} - F_{i'} = b̂_V(r). Without loss of generality, assume i' is the largest integer satisfying this condition. Then Γ'_{i'} - F_{i'} >= Γ'_i - (i - i') rT_f - F_{i'} > A_i - (i - i') rT_f - F_{i'} = A_{i'} - F_{i'} = b̂_V(r), which contradicts the client buffer size constraint.

(2) For each i' <= i, A_{i'} - F_{i'} < b̂_V(r), and there is some j', i < j' < n', such that A_{j'} - F_{j'} = b̂_V(r), where n'T_f denotes the completion time of sigma_A. Then Γ'_i - F_i > A_{j'} - F_i >= A_{j'} - F_{j'} = b̂_V(r). Again, this contradicts the client buffer size constraint.

(3) A_{n'} - F_{n'} = b̂_V(r), while for each i' < n', A_{i'} - F_{i'} < b̂_V(r). In this case, the connection time of sigma' is obviously no less than that of sigma_A, a contradiction.
Q.E.D.
The algorithm combining the lazy scheme and the aggressive scheme is called LA (Lazy-Aggressive). As both the lazy scheme and the aggressive scheme run in O(n) time, a transmission schedule with the optimal client buffer size, playback work-ahead, and network utilization can obviously be obtained in O(n) time. Note that the utilization of network bandwidth is not maximized in the conventional MVBA scheme. We can easily extend the proposed aggressive transmission scheme by combining it with the MVBA scheme [12] to minimize the rate variability under the maximum network utilization. It can easily be proved that the obtained peak transmission rate is r. This modified maximum network utilization MVBA scheme is illustrated by a simple example in Fig. 5.
By fixing the transmission rate r and the work-ahead ŵ_V(r), the completion time of the transmission schedule is moved forward by increasing the client buffer size; that is, network utilization increases with a larger client buffer. Notably, the required buffer size, work-ahead, and obtained network utilization differ when the same video stream is transmitted at different transmission rates. To better manage system resources and provide QoS guarantees, a suitable transmission rate should be decided to satisfy the problem constraints and optimize the problem objectives. In this admission control process, the relations among these parameters are extremely important and thus worth further exploration. We have presented an O(n log n)-time algorithm for the admission control process that decides the suitable client buffer size and playback work-ahead to maximize network utilization [13].
4. EXPERIMENTAL RESULTS
The proposed scheme is examined on several popular VBR-encoded MPEG traces [2,4,12,14,15]. Table 1 lists the examined VBR traces with their related statistics, including the number of frames, the maximum frame size (maximum rate), and the average frame size (average rate). As the time complexity is only O(n), the proposed scheme takes only 3 seconds (on a Sun workstation) to compute the transmission schedule of the over-2-hour video Star War with 174136 frames. The first trace examined in our analysis is the Star War MPEG-1 movie.
Table 1. Statistics of the Examined VBR Traces.

Stream Name       No. of Frames   Max Frame Size (KB)   Avg Size (KB/f)
Star War              174136            22.62                 1.90
Princess Bride        167766            29.73                 4.89
CNN News              164748            30.11                 4.89
Wizard of OZ           41760            41.89                 5.09
Advertisements         16316            10.08                 1.86
Lecture                16316             6.14                 1.37
Its average frame size is 1.9 KB, with a maximum frame size of 22.62 KB and a minimum frame size of 0.28 KB. Two different views of its cumulative playback function are shown in Figure 6(a) and (b), respectively; the frame-size variability is clearly large. A transmission schedule obtained by CRTT requires a 23 MB memory buffer and a 37-second work-ahead. Our experiments show that, by applying the LA algorithm, the Star War video stream can be transmitted using only a 6 MB memory buffer and 26 ms of work-ahead. Figure 6(c) presents the minimum buffer size and the maximum network utilization obtained by the LA approach for different peak transmission rates. Given a transmission rate of 0.47 Mbps, the obtained network utilization is nearly 80%.

When evaluated with other long video streams such as Princess Bride and CNN News, the proposed scheme also obtains good results. Figure 7(a) and (b) show the cumulative playback function of the 90-minute Princess Bride movie with 167766 frames. The minimum buffer size and the maximum network utilization obtained by the proposed approach for different transmission rates are shown in Figure 7(c). The cumulative playback function of
Figure 6. Star War: (a)(b) The cumulative playback function. (c) Buffer and network utilization obtained by LA for different transmission rates.
the CNN News video trace is shown in Figure 8(a) and (b) at a 30 fps (frames-per-second) frame rate. Figure 8(c) shows the minimum buffer size and the maximum network utilization obtained by the proposed scheme for different transmission rates. Algorithm LA guarantees the services for these two video traces with only a 38 KB memory buffer at a 1.16 Mbps transmission rate. The obtained network utilization is nearly 100%. On an OC-3 link providing 127.155
Figure 7. Princess Bride: (a)(b) The cumulative playback function. (c) Buffer and network utilization obtained by LA for different transmission rates.
Figure 8. CNN News: (a)(b) The cumulative playback function. (c) Buffer and network utilization obtained by LA for different transmission rates.
Mbps for video data transport, the transmission schedule can support over 109 video streams. For the 23-minute Wizard of Oz video trace, shown in Figure 9(a)(b), the transmission schedule for all 41760 frames can be guaranteed using a 6 MB memory buffer and a 1.5 Mbps transmission rate. The network utilization exceeds 80%, and 84 video streams can be supported on an OC-3 link. Figure 9(c) shows the minimum buffer size and the maximum network utilization obtained by the proposed LA approach for different peak transmission rates.
The Advertisements and Lecture video traces are both 16316 frames in length; each is 10 minutes long at a 30 fps frame rate with a 160x120 frame size. By applying the proposed algorithm, the Advertisements video stream can be transmitted using a 420 KB memory buffer with 513 ms of work-ahead. The obtained network utilization is 75% at a 0.59 Mbps transmission rate. The cumulative playback function of the Advertisements video is shown in Figure 10(a) and (b). Figure 10(c) shows the minimum buffer size and the maximum network utilization obtained by the proposed approach for different transmission rates. The Lecture video trace, a lecture showing the speaker and his slides along with zooming and panning, can be transmitted with a 570 KB memory buffer and 77 ms of work-ahead. The cumulative playback function of the Lecture video trace is shown in Figure 11(a) and (b). Figure 11(c) shows the minimum buffer size and the maximum network utilization obtained by the proposed approach for different transmission rates. The obtained network utilization is 89% at a 0.36 Mbps transmission rate. The obtained network utilization is over 90% with only a 1 MB memory buffer to guarantee the services for these two video traces. Assuming a transmission work-ahead of 100 ms, the proposed algorithm achieves nearly 70% network utilization for the transmission schedule of Advertisements. This is better than the D-BIND [4] approach, which achieves only 26%
Figure 9. Wizard of OZ: (a)(b) The cumulative playback function. (c) Buffer and network utilization obtained by LA for different transmission rates.
Figure 10. Advertisements: (a)(b) The cumulative playback function. (c) Buffer and network utilization obtained by LA for different transmission rates.
network utilization. Although Salehi et al.'s approach [5] can achieve similar network utilization, it requires more than 1 MB (1159 KB) of memory buffer, while the proposed scheme requires only a 316 KB memory buffer. More comparisons of the relation between work-ahead and network utilization are shown in Figure 12.
5. CONCLUSION AND FUTURE WORK
In this paper, a linear-time algorithm is presented to design an optimal CBR transmission schedule with minimum buffer size, minimum work-ahead, and maximum network utilization for jitter-free VBR stream playback. We have explored the proposed scheme for transmitting several benchmark video streams. For a given transmission rate, the proposed scheme achieves better resource allocation and utilization than previous approaches: it requires a much smaller memory buffer than CRTT and D-BIND. Compared with MVBA, the proposed approach transports VBR streams with the minimum buffer size and the maximum network utilization; these parameters are not optimized in MVBA. Considering traffic smoothing, a modification of MVBA that minimizes rate variability under the maximum network utilization is also proposed in this paper. The same idea can easily be extended to design an optimal disk retrieval schedule with the minimum server buffer and the maximum bandwidth utilization. In our future work, a frame-dropping scheme will be considered to trade a minimal loss of video quality for maximal improvement in resource allocation and utilization. Algorithms to calculate optimal parameters for supporting VCR features are also under study.
Figure 11. Lecture: (Left)(Middle) The cumulative playback function. (Right) Buffer and network utilization obtained by LA for different transmission rates.
Figure 12. The relation between the work-ahead (100-500 ms) and the obtained network utilization of Advertisements, comparing D-BIND, Salehi et al. (135K), Salehi et al. (1M), and LA (300K).
ACKNOWLEDGEMENTS
The research described in this paper was partially supported by the NSC under grants NSC86-2223-E-001-018 and NSC86-2223-E-001-019.
REFERENCES
1. M. Grossglauser and S. Keshav, "On CBR service," in Proc. IEEE INFOCOM, March 1996.
2. M. Garrett and W. Willinger, "Analysis, modeling and generation of self-similar VBR video traffic," in Proc. ACM SIGCOMM, pp. 269-280, August 1994.
3. H. Zhang and E. W. Knightly, "A new approach to support delay-sensitive VBR video in packet-switched networks," in Proc. 5th Workshop on Network and Operating Systems Support for Digital Audio and Video, 1995.
4. E. W. Knightly, D. E. Wrege, J. Liebeherr, and H. Zhang, "Fundamental limits and tradeoffs of providing deterministic guarantees to VBR video traffic," in Proc. ACM SIGMETRICS, pp. 47-55, 1995. (D-BIND model.)
5. J. D. Salehi, Z. L. Zhang, J. F. Kurose, and D. Towsley, "Supporting stored video: reducing rate variability and end-to-end resource requirements through optimal smoothing," in Proc. ACM SIGMETRICS, pp. 222-231, 1996.
6. W. Feng and S. Sechrest, "Smoothing and buffering for delivery of prerecorded compressed video," in IS&T/SPIE Multimedia Computing and Networking, pp. 234-242, February 1995.
7. J. M. McManus and K. W. Ross, "Video on demand over ATM: Constant-rate transmission and transport," in Proc. IEEE INFOCOM, March 1996.
8. M. Chen, D. D. Kandlur, and P. S. Yu, "Optimization of the grouped sweeping scheduling (GSS) with heterogeneous multimedia streams," in ACM Multimedia Conference, pp. 235-242, 1993.
9. E. Chang and A. Zakhor, "Scalable video data placement on parallel disk arrays," in IS&T/SPIE Symposium on Electronic Imaging Science and Technology, February 1994.
10. K. Sohraby, "On the theory of general ON-OFF source with applications in high-speed networks," in Proc. IEEE INFOCOM, pp. 401-410, 1993.
11. R.-I. Chang, M. C. Chen, M.-T. Ko, and J.-M. Ho, "Fixed rate CBR transmission for jitter-free VBR video playback," 1996.
12. A. R. Reibman and A. W. Berger, "Traffic descriptors for VBR video teleconferencing over ATM networks," IEEE/ACM Transactions on Networking, June 1995.
13. M.-T. Ko, J.-M. Ho, M. C. Chen, and R.-I. Chang, "An O(n log n) algorithm to compute the rate-buffer curve for the admission control of VBR video transmission on a CBR channel," 1996.
14. M. Krunz and H. Hughes, "A traffic model for MPEG-coded VBR streams," in Proc. ACM SIGMETRICS, pp. 47-55, May 1995.
15. MPEG video traces, data available via anonymous archive, URL http://www.eeb.ele.tue.nl/mpeg/.