Selecting among Replicated Batching Video-on-Demand Servers
Meng Guo, Mostafa H. Ammar, Ellen W. Zegura
NOSSDAV'02, May 12-14, 2002
Copyright 2002 ACM

Outline
• Introduction
• Batching VOD Servers
• Server Selection Algorithms
• Simulation Setup
• Performance Evaluation
• Implementation Issues
• Conclusion

Introduction
• VoD data delivery schemes
  – Periodic broadcast
  – Scheduled multicast
• VoD server replication allows a VoD service to handle a large number of clients
• Our work focuses on the design of server selection techniques for such a replicated service, and on the effect of the server selection approach on the performance of a replicated batching-server VoD system

Batching VOD Servers
(figure)

Batching VOD Servers (cont.)
• A batching VoD server operates in two phases
  – Batch scheduling
  – Channel allocation

Batching VOD Servers (cont.)
• Batch scheduling
  – First Come First Serve (FCFS)
  – Maximum Queue Length first (MQL)
  – Maximum Factored Queue length (MFQ)
    » Select the batch with the maximum factored queue length
    » q_i is the queue length of video i
    » f_i is the relative frequency of arrivals for video i
• Channel allocation
  – The persistent approach
  – The video patching approach
  – Hierarchical multicast stream merging (HMSM)

HMSM
(figure)

Server Selection Algorithms
• The Closest-server-first algorithm
• The Optimized closest-server-first algorithm
• The Register-all algorithm
• The Maximum-MFQ-rank-first algorithm
• The Minimum Expected Cost (MEC) algorithm
• The Merging-Aware Minimum Expected Cost (MAMEC) algorithm

The Closest-server-first algorithm
• Selects the server that is closest to the client, using network hop count as the distance measure

The Optimized closest-server-first algorithm
• Selects the closest server among those with free channels
• If no server has free channels, the closest server is selected

The Register-all algorithm
• Puts the client request into the corresponding queue at every video server
• When the request is satisfied at any one server, it is withdrawn from the other servers' queues

The Maximum-MFQ-rank-first algorithm
• Computes the destination queue's rank at each server and sends the client request to the server where that queue has the best MFQ rank

The Minimum Expected Cost (MEC) algorithm
• The client selects the server with the smallest expected cost
• The expected cost at server i is computed from:
  – i, the server number, and j, the video ID
  – m_{i,j}, the MFQ value of video j at server i
  – c_i, the number of free channels at server i
  – a, an adjustment parameter that controls the load-balancing threshold
  – d_i, the hop count from the client to server i
  – W_k, the weights associated with the various cost terms, with W1 >> W2 > W3

The Merging-Aware Minimum Expected Cost (MAMEC) algorithm
• The client selects the server with the smallest merging-aware expected cost
  – r_{i,j} is the time when the client requests video j at server i
  – s_{i,l} is the starting time of the latest regular channel l that is broadcasting video j
  – avg_j is the average latency when requesting video j
  – B is the client-side buffer size in terms of video playback time
  – N is some large number
  – W1 >> W4 >> W2 > W3

Dual server reception
• A client is allowed to receive video streams from channels of different VoD servers
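A minimal Python sketch, written from the slide descriptions above, of the factored queue length used by MFQ and of three selection rules (closest-server-first, optimized closest-server-first, and MEC). The Server fields, the default values of a and W1/W2/W3, and the exact shape of the MEC cost (a weighted sum of a free-channel penalty, the MFQ value m_{i,j}, and the hop count d_i) are illustrative assumptions; the slides only list the cost's ingredients and the ordering W1 >> W2 > W3, and the paper defines the precise expression.

```python
import math
from dataclasses import dataclass, field
from typing import Dict, List


def factored_queue_length(queue_len: int, rel_freq: float) -> float:
    # Standard MFQ weight: queue length discounted by the square root of the
    # video's relative request frequency, so rarely requested videos are not starved.
    return queue_len / math.sqrt(rel_freq)


@dataclass
class Server:
    """Hypothetical per-server state visible to the client-side selector."""
    server_id: int
    hop_count: int                   # d_i: network distance from this client
    free_channels: int               # c_i: channels not currently streaming
    mfq_value: Dict[int, float] = field(default_factory=dict)  # m_{i,j} per video j


def closest_server_first(servers: List[Server]) -> Server:
    # Pick the server with the smallest hop count, ignoring load entirely.
    return min(servers, key=lambda s: s.hop_count)


def optimized_closest_server_first(servers: List[Server]) -> Server:
    # Prefer the closest server that still has a free channel;
    # fall back to the plain closest server if every server is busy.
    with_free = [s for s in servers if s.free_channels > 0]
    return closest_server_first(with_free or servers)


def mec_server(servers: List[Server], video_id: int,
               a: int = 5, w1: float = 1000.0, w2: float = 10.0,
               w3: float = 1.0) -> Server:
    """Minimum Expected Cost selection with an assumed weighted-sum cost."""
    def expected_cost(s: Server) -> float:
        channel_penalty = max(0, a - s.free_channels)   # punish nearly full servers
        queue_bonus = s.mfq_value.get(video_id, 0.0)    # a high m_{i,j} batch is served soon
        return w1 * channel_penalty - w2 * queue_bonus + w3 * s.hop_count
    return min(servers, key=expected_cost)
```

With W1 much larger than W2 and W3, channel availability dominates the choice and distance mostly breaks ties among lightly loaded servers, which matches the weight ordering given on the slide.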
Simulation Setup
• Simulation environment
  – Use the GT-ITM transit-stub model to generate a network topology of about 1400 routers
    » Average node degree of the graph is 2.5
    » Core routers have a much higher degree than edge routers
  – All VoD servers have the same configuration
    » M = 100 video programs
    » Each video program has a play time of 100 minutes
    » Capacity to transmit C = 1000 video streams simultaneously
    » MFQ is used at all servers; the factored queue length of video j is computed from its access frequency and Δt_j, the time interval since video j was last scheduled

Simulation Setup (cont.)
• Each client request is represented by a three-tuple (requestTime, clientID, videoID)
• Poisson arrivals (20-110 requests per minute)
• The video selection probability follows a Zipf distribution
• The total simulation time is one day
(A sketch of this workload appears after the Conclusion.)

Performance Metrics
• User-perceived latency L is the total latency experienced by the clients divided by the total number of client requests: L = (Σ_{r∈R} L_r) / |R|
  – R is the set of all client requests
  – L_r is the access latency of request r

Performance Metrics (cont.)
• Network bandwidth consumption B is the total amount of bandwidth used by all the multicast channels throughout the simulation time
  – |t| is the number of edges in multicast tree t

Performance Metrics (cont.)
• Channel merge rate is the number of clients that are merged divided by the total number of clients
  – P is the set of patching channels
  – A is the set of all scheduled channels

Performance Evaluation
(results figures)

Implementation Issues

Conclusion
• With the exception of the very naive Closest-server-first selection technique, server replication can indeed be used to increase the capacity of the service, leading to improved performance
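A minimal sketch of the request workload described in the Simulation Setup slides: Poisson arrivals with Zipf-distributed video popularity over M = 100 titles, producing (requestTime, clientID, videoID) tuples. The Zipf skew value, the default arrival rate, and the seed handling are assumptions; the slides give only the arrival-rate range (20-110 requests per minute), the distribution family, and the one-day simulation length.

```python
import random
from bisect import bisect


def zipf_weights(num_videos: int, skew: float = 1.0):
    # Zipf popularity: video k (1-indexed) is chosen with probability ~ 1 / k**skew.
    # The skew value is an assumption; the slides only say "Zipf distribution".
    raw = [1.0 / (k ** skew) for k in range(1, num_videos + 1)]
    total = sum(raw)
    return [w / total for w in raw]


def generate_requests(sim_minutes: float = 24 * 60, rate_per_min: float = 60.0,
                      num_videos: int = 100, skew: float = 1.0, seed: int = 1):
    """Yield (request_time, client_id, video_id) tuples, as on the slide."""
    rng = random.Random(seed)
    # Prefix sums of the popularity weights let us sample a video with one uniform draw.
    cdf = []
    acc = 0.0
    for p in zipf_weights(num_videos, skew):
        acc += p
        cdf.append(acc)
    t, client_id = 0.0, 0
    while True:
        t += rng.expovariate(rate_per_min)   # Poisson process: exponential inter-arrival gaps
        if t > sim_minutes:
            return
        client_id += 1
        video_id = bisect(cdf, rng.random()) + 1
        yield (t, client_id, video_id)
```

At the highest arrival rate on the slide (110 requests per minute over one simulated day), this generator yields roughly 158,000 requests for the selection algorithms to distribute across the replicas.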