A Simple Model for Analyzing P2P Streaming Protocols
Yipeng Zhou, Dah-Ming Chiu, John C.S. Lui
The Chinese University of Hong Kong

Outline
- Introduction
- Model & Chunk Selection Strategies
- Simulation
- Conclusion

Introduction: Unicast
- In client-server delivery, the server is the bottleneck
- The bottleneck router carries redundant copies of the stream, wasting bandwidth

Introduction: Application Layer Multicast (or CDN)
- Relies on a single distribution tree rooted at the server
- Leaf peers are an untapped bandwidth resource
- Interior nodes are a weak point of the tree

Introduction: P2P Streaming System
- P2P resolves this scalability problem by using the resources of all clients
- It is like using multiple distribution trees simultaneously to deliver content
- Peers are (logically) fully connected, and each peer maintains:
  * a buffer
  * a neighbor list

Introduction: P2P Applications
- File distribution and P2P streaming
- Existing P2P streaming systems: PPLive, PPStream, CoolStreaming, BiToS
- Much work on system study, architecture design, and measurement, but little theoretical work

Our Contributions
- Analytical models of P2P streaming systems, to better understand them
- A study of chunk selection strategies, and a newly proposed strategy
- The trade-off between continuity and scalability

Outline: Model & Chunk Selection Strategies

Model & Chunk Selection Strategies: How the Buffer Works
[Figure: one peer's buffer contents at t = 1, 2, 3, from server side to playback]
- The server sends out chunks sequentially
- A peer downloads one chunk per time slot
- The buffer shifts ahead one position per time slot

Model & Chunk Selection Strategies: The Model
- M peers with the same playback requirement, each with a playback buffer
- In each time slot, the server randomly selects one peer and uploads one chunk to it
- The users' metric is continuity, defined via p(n): the probability that chunk n (the playback position) is available
- To compute p(n), recursively compute p(i)
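Before turning to the recursion, the sliding-window mechanics above can be sketched in a few lines. This is a minimal illustration; the values of M and n and all function names are our own assumptions, not from the slides:

```python
import random

M = 1000  # number of peers (assumed value)
n = 40    # buffer positions per peer (assumed value)

def one_time_slot(buf, neighbor_fills):
    """Advance one peer's buffer by one time slot.

    buf[i] is True if buffer position i+1 holds its chunk.  Position 1
    receives the newest chunk; position n is consumed by playback.  Each
    slot the window shifts ahead one position, the server fills position 1
    of this peer with probability 1/M, and any chunks fetched from
    neighbors fill their positions.
    """
    buf = [False] + buf[:-1]           # window slides: position n is played back
    if random.random() < 1.0 / M:      # server selects this peer w.p. 1/M
        buf[0] = True
    for pos in neighbor_fills:         # positions filled via P2P downloads
        buf[pos - 1] = True
    return buf

# Run a peer for a few slots with no neighbor help: only the server
# can fill its buffer, so at most one chunk arrives per slot.
buf = [False] * n
for _ in range(10):
    buf = one_time_slot(buf, neighbor_fills=[])
print(sum(buf), "chunks buffered after 10 slots")
```

Without neighbors the buffer fills at rate 1/M per slot, which is why the P2P download term q(i) is what makes the system work.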
[Figure: server and M peers, each with buffer positions 1 … n; the server fills position 1 of one peer per slot]
- p(i) is defined as: p(i) = Prob(buffer position i is filled)
- p(1) = 1/M, since the server selects this peer with probability 1/M

Model & Chunk Selection Strategies: Buffer Dynamics
- Each peer's buffer is a sliding window
- In each time slot, each peer downloads a chunk from the server or from a neighbor
- q(i) = the probability that Buf[i] gets filled in this time slot, for i > 1
- As the window slides from slot t to t+1, p(i+1) = p(i) + q(i); q(i) is the contribution of the P2P technology

Model & Chunk Selection Strategies: Decomposing q(i)
- q(i) = w(i) * h(i) * s(i)
- w(i) = probability the peer wants to fill Buf[i] (it is still empty): w(i) = 1 - p(i)
- h(i) = probability the selected neighbor has the content for Buf[i]: h(i) = p(i)
- s(i) = probability that Buf[i] is selected, determined by the chunk selection strategy
- Starting from p(1) = 1/M, the recursion yields p(n)

Model & Chunk Selection Strategies: Chunk Selection Strategies
- Greedy: try to fill the empty buffer position closest to playback
- Rarest First: try to fill the empty position for the newest chunk; since p(i) is an increasing function of i, this amounts to "rarest first"
[Figure: example buffer map with empty positions marked X; RF selects the empty position nearest position 1, Greedy the empty position nearest playback]

Model & Chunk Selection Strategies: Difference Equations
- Greedy: p(i+1) = p(i) + (1 - p(i)) * p(i) * (1 - p(1) - p(n) + p(i+1))
  (the three factors are w(i), h(i), and s(i))
- Rarest First: p(i+1) = p(i) + (1 - p(i)) * p(i) * (1 - p(i))
- We also study continuous forms of these difference equations for sensitivity analysis
- Simulation is used to validate the models

Model & Chunk Selection Strategies: Conclusions from the Model
- Rarest First is more scalable than Greedy as the peer population increases
- Greedy can achieve better continuity than Rarest First for a small number of peers

A New Chunk Selection Strategy
- Partition the buffer into [1, m] and [m+1, n]
- Use RF in [1, m] first
- If no chunk is available for download by RF, use Greedy in [m+1, n]
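The Greedy and Rarest First difference equations above can be iterated numerically. The sketch below is our own, not from the slides; note that the Greedy equation is implicit in p(i+1) and also involves the unknown p(n), so we solve the linear step for p(i+1) and find p(n) by bisection on its fixed point:

```python
def rf_profile(M, n):
    """Rarest First: p(i+1) = p(i) + p(i) * (1 - p(i))**2, with p(1) = 1/M."""
    p = [1.0 / M]
    for _ in range(n - 1):
        pi = p[-1]
        p.append(pi + pi * (1.0 - pi) ** 2)
    return p

def greedy_profile(M, n):
    """Greedy: p(i+1) = p(i) + p(i)(1-p(i)) * (1 - p(1) - p(n) + p(i+1)).

    Solving the linear equation for p(i+1) gives, with a = p(i)(1 - p(i)),
        p(i+1) = (p(i) + a * (1 - p(1) - p(n))) / (1 - a).
    p(n) itself is unknown, so we bisect: the resulting p(n) decreases as
    the assumed p(n) increases, so the fixed point is unique.
    """
    p1 = 1.0 / M

    def sweep(pn):
        p = [p1]
        for _ in range(n - 1):
            pi = p[-1]
            a = pi * (1.0 - pi)
            p.append(min(1.0, (pi + a * (1.0 - p1 - pn)) / (1.0 - a)))
        return p

    lo, hi = 0.0, 1.0
    for _ in range(60):                 # bisection on the fixed point p(n)
        mid = 0.5 * (lo + hi)
        if sweep(mid)[-1] > mid:
            lo = mid
        else:
            hi = mid
    return sweep(0.5 * (lo + hi))

rf = rf_profile(1000, 40)
gr = greedy_profile(1000, 40)
print(f"RF continuity p(n) = {rf[-1]:.3f}, Greedy continuity p(n) = {gr[-1]:.3f}")
```

Comparing rf[-1] and gr[-1] while varying M reproduces the scalability trend the model predicts.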
[Figure: buffer map partitioned at m: first do RF on [1, m], then do Greedy on [m+1, n]]
The difference equations become:
- p(1) = 1/M
- p(i+1) = p(i) + p(i)(1 - p(i))^2, for i = 1, …, m-1
- p(i+1) = p(i) + p(i)(1 - p(i))(1 - p(m) - p(n) + p(i+1)), for i = m, …, n-1

Outline: Simulation

Comparing Different Chunk Selection Strategies
What do we mean by "better"?
- Playback continuity: p(n), as large as possible
- Start-up latency: E[chunks buffered] / download rate = the sum of p(i) for i = 1, …, n (with download rate 1 chunk per slot)
Given the buffer size n and a relatively large peer population M:
1) Rarest First is better in continuity
2) Greedy is the best in start-up latency
3) Mixed is the best of the three overall

Simulation Setup
- M = 1000 peers, buffer size N = 40
- In the simulation, each peer has 60 neighbors
- Each peer uploads at most 2 chunks per time slot
- Goal: validate our model

Simulation Results
[Figure: continuity of Rarest First, Mixed, and Greedy; 1000 peers, buffer size 40]
- Compare the three strategies, especially the curve for Mixed
[Figure: Mixed vs. RF vs. Greedy; 1000 peers, buffer length varying from 20 to 40]
- For all buffer sizes, Mixed achieves better continuity than both RF and Greedy
- Mixed has better start-up latency than RF
[Figure: (a) RF vs. Greedy with 40 peers; (b) RF vs. Greedy with continuity requirement fixed at 0.93]
- In (a), with 40 peers, Greedy is better; in (b), with the continuity requirement fixed at 0.93, RF is better
- Simulating 1000 peers for 2000 time slots, with continuity averaged over all peers: continuity for Mixed is the most consistent, as well as the highest

Simulation: How to Adapt m for the Mixed Strategy
- Adjust m so that p(m) achieves a target probability (e.g.
0.3)
- In the simulation study, 100 new peers arrive every 100 slots
- m adapts to a larger value as the population increases

Outline: Conclusion

Conclusion
Related work:
- CoolStreaming, BiToS
Summary of work on P2P streaming:
- There are many deployed P2P streaming systems, such as PPLive and PPStream
- Many measurement papers on these systems
- Little work on model analysis
- Little study of chunk selection strategies
Our contributions:
- Analytical models of P2P streaming systems, to better understand them
- A study of chunk selection strategies
- A Mixed strategy is proposed, which is better than either RF or Greedy alone
- The trade-off between continuity and scalability
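The mixed-strategy recursion, the start-up latency metric, and the adaptation of m can all be sketched numerically. This is our own sketch: the target value 0.3 comes from the slides, but every name and the bisection approach are our assumptions:

```python
def mixed_profile(M, n, m):
    """Mixed strategy difference equations:
         p(i+1) = p(i) + p(i)(1-p(i))^2                          for i = 1..m-1 (RF)
         p(i+1) = p(i) + p(i)(1-p(i))(1 - p(m) - p(n) + p(i+1))  for i = m..n-1 (Greedy)
    The Greedy part is implicit in p(i+1) and involves the unknown p(n),
    so we solve the linear step and bisect on the fixed point p(n)."""
    def sweep(pn):
        p = [1.0 / M]
        for _ in range(m - 1):                  # RF part on [1, m]
            pi = p[-1]
            p.append(pi + pi * (1.0 - pi) ** 2)
        pm = p[-1]
        for _ in range(n - m):                  # Greedy part on [m+1, n]
            pi = p[-1]
            a = pi * (1.0 - pi)
            p.append(min(1.0, (pi + a * (1.0 - pm - pn)) / (1.0 - a)))
        return p

    lo, hi = 0.0, 1.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if sweep(mid)[-1] > mid:
            lo = mid
        else:
            hi = mid
    return sweep(0.5 * (lo + hi))

def startup_latency(p):
    """E[chunks buffered] / download rate = sum of p(i), with DR = 1 chunk/slot."""
    return sum(p)

def adapt_m(M, n, target=0.3):
    """Pick the split point m as the first buffer position where the RF
    recursion already reaches the target fill probability (e.g. 0.3)."""
    p = 1.0 / M
    for m in range(1, n):
        if p >= target:
            return m
        p = p + p * (1.0 - p) ** 2
    return n

m = adapt_m(1000, 40)
p = mixed_profile(1000, 40, m)
print(f"m = {m}, continuity p(n) = {p[-1]:.3f}, start-up latency = {startup_latency(p):.1f}")
```

Rerunning adapt_m with a larger M illustrates the adaptation behavior from the slides: as the population grows, RF needs more positions to reach the target, so m moves to a larger value.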