
Improving Fairness, Efficiency, and Stability in HTTP-based Adaptive Video Streaming with FESTIVE

Kevin Tong

May 13, 2013

Transcript

  1. Improving Fairness, Efficiency, and Stability in HTTP-based Adaptive Video Streaming with FESTIVE. JUNCHEN JIANG, VYAS SEKAR, HUI ZHANG. PRESENTED BY TANGKAI
  2. Table of Contents • Introduction • BACKGROUND AND MOTIVATION • DESIGN • ANALYSIS OF FESTIVE • EVALUATION
  3. Table of Contents • Introduction • BACKGROUND AND MOTIVATION • DESIGN • ANALYSIS OF FESTIVE • EVALUATION
  4. Introduction • Video traffic has become the dominant share of Internet traffic • Reason: driven by technology trends • Customized connection-oriented video transport protocols (RTMP) -> HTTP-based adaptive streaming protocols • The design of robust adaptive HTTP streaming matters not only for video applications but also for the Internet • Cf. TCP congestion control -> designed to prevent “congestion collapse”
  5. Introduction • Design of a robust adaptive video algorithm • Single player -> multiple players • Three (potentially conflicting) goals • Fairness • Efficiency • Stability
  6. Introduction • State-of-the-art HTTP adaptive streaming protocols: Smooth-Streaming [12], Netflix [8], Adobe OSMF [2], and Akamai HD [3] • Contributions • Chunk scheduling • Bitrate selection • Bandwidth estimation
  7. Introduction • DASH • Process • Pros: • HTTP • Web server/cache • Stateless • Parallel • Load balancing, fault tolerance
  8. Table of Contents • Introduction • BACKGROUND AND MOTIVATION • DESIGN • ANALYSIS OF FESTIVE • EVALUATION
  9. Background & Motivation • Desired properties, each “the smaller the better”: • Inefficiency • Unfairness • Instability (defined by the formulas on the slide; a reconstruction follows below)
  10. Background & Motivation • Video adaptation vs. traditional TCP -> key differences • Protocol stack • application layer within a sandbox vs. transport layer • Driven by • receiver-driven vs. sender-driven • Granularity • chunk level (a few hundred KB, seconds) vs. packet level (~1.5 KB, milliseconds) • Rate adaptation • request a lower bitrate vs. delay transmission
  11. Background & Motivation • Design space • Protocol stack level • transport layer; joint design • app-layer sandbox • Where in the network • server-side; in-network • client-side • Coordinated vs. decentralized? • Decentralized
  12. Table of Contents • Introduction • BACKGROUND AND MOTIVATION • DESIGN • ANALYSIS OF FESTIVE • OSMF-BASED IMPLEMENTATION • EVALUATION
  13. Design • 1. Schedule when the next chunk will be downloaded. • 2. Select a suitable bitrate for the next chunk. • 3. Estimate the network bandwidth.
  14. Design • Chunk scheduling • Immediate download: • wastes bandwidth if the user leaves prematurely • precludes the option of switching up • not applicable to live streaming • suitable for the initial ramp-up phase • Periodic download: • keeps a constant buffer
  15. Design • Chunk scheduling • Randomized scheduling • randbuf is drawn uniformly at random from a range around the buffer target • this ensures that each player is not biased by its start time (see the sketch below)
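
A minimal sketch of such a randomized scheduler, in the spirit of the deck's Java emulation framework. The names (targetBuf, delta) and the symmetric +/- delta range are illustrative assumptions, not the paper's exact randomization interval:

    import java.util.Random;

    // Sketch of FESTIVE-style randomized chunk scheduling. The symmetric
    // [targetBuf - delta, targetBuf + delta] range is an assumption made
    // for illustration; consult the paper for the exact interval.
    public class RandomizedScheduler {
        private final Random rng = new Random();
        private final double targetBuf; // desired buffer level, in seconds
        private final double delta;     // half-width of the randomized range
        private double randBuf;         // current randomized buffer target

        public RandomizedScheduler(double targetBuf, double delta) {
            this.targetBuf = targetBuf;
            this.delta = delta;
            redraw();
        }

        // Redraw randbuf uniformly at random after each chunk, so players
        // are not kept synchronized (or biased) by their start times.
        public void redraw() {
            randBuf = targetBuf - delta + 2 * delta * rng.nextDouble();
        }

        // Start the next download once the buffer drops below the target.
        public boolean shouldDownload(double bufferedSeconds) {
            return bufferedSeconds < randBuf;
        }
    }
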
  16. Design • Bitrate selection • Bias with stateless selection • a stateless approach considers only the estimated bandwidth, ignoring the current bitrate and whether the player is ramping its bitrate up or down • Unfairness <- discrete bitrates
  17. Design • Bitrate selection • Proposed approach • compensate for the above bias so that players converge to a fair allocation irrespective of their current bitrates • our current design chooses option (2) and simply keeps the rate of decrease a constant function • gradual switching strategy (for QoE); see the sketch below
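
A sketch of stateful, gradual bitrate selection. It encodes the compensation idea as "a player at level k spends about k chunks there before switching up one level, while down-switches happen immediately"; that delay rule and all names here are assumptions to check against the paper:

    // Sketch of stateful, gradual bitrate selection: move at most one level
    // per decision, and delay up-switches in proportion to the current level
    // so higher-bitrate players ramp up more slowly and players converge to
    // a fair allocation. The exact delay rule is an assumption.
    public class StatefulSelector {
        private final double[] bitrates; // available levels, ascending (bps)
        private int level = 0;           // index of the current bitrate
        private int chunksAtLevel = 0;   // chunks since the last switch

        public StatefulSelector(double[] bitrates) {
            this.bitrates = bitrates;
        }

        // Returns the reference bitrate b_ref for the next chunk, given the
        // smoothed bandwidth estimate w (bps).
        public double selectReference(double w) {
            if (w < bitrates[level] && level > 0) {
                level--;                 // ramp down immediately, one level
                chunksAtLevel = 0;
            } else if (level + 1 < bitrates.length
                    && w > bitrates[level + 1]
                    && chunksAtLevel >= level + 1) {
                level++;                 // ramp up only after ~k chunks at level k
                chunksAtLevel = 0;
            } else {
                chunksAtLevel++;
            }
            return bitrates[level];
        }
    }
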
  18. Design • Delayed update • Two potentially conflicting goals: efficiency and fairness vs. stability • Efficiency cost (the lower the better), where w is the estimated bandwidth and b_ref is the reference bitrate from the previous section (a reconstruction follows below)
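
The efficiency-cost formula itself appears on the slide; as a reconstruction of the paper's definition (an assumption to verify against the paper), it penalizes a bitrate b for deviating from the rate the player can actually sustain:

    % Efficiency cost of bitrate b, given bandwidth estimate w and reference
    % bitrate b_ref; a score of 0 means b exactly matches the sustainable rate.
    \mathrm{score}_{\mathrm{efficiency}}(b) =
      \left| \frac{b}{\min(w,\, b_{\mathrm{ref}})} - 1 \right|
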
  19. Design • Delayed update • Stability cost • n denotes the number of bitrate switches in the last k = 20 seconds • the point of using an exponential function of n is that score_stability(b_ref) - score_stability(b_cur) is monotonically increasing in n (see the sketch below)
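
A sketch of the delayed update combining the two costs. The 2^n form (with one extra switch counted when evaluating a candidate that differs from the current bitrate) is an assumption consistent with the slide's "exponential function of n" remark, and the weight alpha is the knob varied from 5 to 30 later in the evaluation:

    // Sketch of FESTIVE's delayed bitrate update: switch from bCur to bRef
    // only when the combined stability + alpha * efficiency cost favors it.
    // The 2^n stability form is an assumption matching the slide's
    // "exponential function of n" description.
    public class DelayedUpdate {
        private final double alpha;      // efficiency weight (5..30 on slide 31)

        public DelayedUpdate(double alpha) {
            this.alpha = alpha;
        }

        // n = bitrate switches in the last k = 20 seconds; evaluating a
        // candidate that differs from bCur counts one additional switch.
        static double stabilityScore(int n, boolean isSwitch) {
            return Math.pow(2, isSwitch ? n + 1 : n);
        }

        // |b / min(w, bRef) - 1|: deviation from the sustainable rate.
        static double efficiencyScore(double b, double w, double bRef) {
            return Math.abs(b / Math.min(w, bRef) - 1);
        }

        // Note: 2^(n+1) - 2^n = 2^n, so the stability penalty for switching
        // grows monotonically with n, exactly as the slide states.
        public double choose(double bCur, double bRef, double w, int n) {
            double scoreCur = stabilityScore(n, false)
                    + alpha * efficiencyScore(bCur, w, bRef);
            double scoreRef = stabilityScore(n, true)
                    + alpha * efficiencyScore(bRef, w, bRef);
            return scoreRef < scoreCur ? bRef : bCur; // otherwise: delay
        }
    }
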
  20. Design • Bandwidth estimation • a smoothed value computed over the last several chunks rather than the instantaneous throughput • harmonic mean rather than arithmetic mean • robust to outliers (see the sketch below)
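
A sketch of the harmonic-mean estimator over the last k = 20 chunk throughputs (the class name is illustrative). Because the harmonic mean is dominated by the smallest samples, a single spurious throughput spike barely moves the estimate:

    import java.util.ArrayDeque;
    import java.util.Deque;

    // Sketch of the bandwidth estimator: harmonic mean of the per-chunk
    // throughputs of the last k = 20 chunks, robust to outlier spikes.
    public class HarmonicEstimator {
        private final int k;
        private final Deque<Double> samples = new ArrayDeque<>();

        public HarmonicEstimator(int k) {
            this.k = k;
        }

        // Record the observed throughput (bps) of a completed chunk.
        public void addSample(double throughput) {
            samples.addLast(throughput);
            if (samples.size() > k) {
                samples.removeFirst();  // keep only the most recent k
            }
        }

        // Harmonic mean: n / sum(1 / x_i) over the retained samples.
        public double estimate() {
            if (samples.isEmpty()) {
                return 0;
            }
            double sumInverse = 0;
            for (double x : samples) {
                sumInverse += 1.0 / x;
            }
            return samples.size() / sumInverse;
        }
    }

For instance, samples of 1, 1, and 10 Mbps have an arithmetic mean of 4 Mbps but a harmonic mean of about 1.4 Mbps, much closer to what the player typically observed.
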
  21. Design • The FESTIVE algorithm • Fair, Efficient, Stable, adaptive • harmonic bandwidth estimator with k = 20 • stateful and delayed bitrate update • the randomized scheduler
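
Putting the sketched components together, a per-chunk loop might look like the following; the parameter values, the bitrate ladder, and the downloadChunk placeholder are illustrative assumptions, and switch counting is left to the caller:

    // Illustrative per-chunk FESTIVE loop wiring together the sketches
    // above. All constants (buffer target, alpha, bitrate ladder) are
    // assumed values for illustration only.
    public class FestivePlayer {
        private final RandomizedScheduler scheduler = new RandomizedScheduler(30, 5);
        private final StatefulSelector selector =
                new StatefulSelector(new double[]{350e3, 600e3, 1000e3, 2750e3});
        private final HarmonicEstimator estimator = new HarmonicEstimator(20);
        private final DelayedUpdate update = new DelayedUpdate(12);
        private double bCur = 350e3;

        // Called periodically; downloads one chunk if the buffer is due.
        public void step(double bufferedSeconds, int recentSwitches) {
            if (!scheduler.shouldDownload(bufferedSeconds)) {
                return;                       // keep the buffer near its target
            }
            double w = estimator.estimate();  // harmonic-mean bandwidth
            double bRef = selector.selectReference(w);
            bCur = update.choose(bCur, bRef, w, recentSwitches);
            double throughput = downloadChunk(bCur);
            estimator.addSample(throughput);
            scheduler.redraw();               // re-randomize the buffer target
        }

        // Placeholder for the HTTP chunk fetch; returns observed throughput.
        private double downloadChunk(double bitrate) { /* ... */ return bitrate; }
    }
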
  22. Table of Contents • Introduction • BACKGROUND AND MOTIVATION • DESIGN • ANALYSIS OF FESTIVE • EVALUATION
  23. Table of Contents • Introduction • BACKGROUND AND MOTIVATION • DESIGN • ANALYSIS OF FESTIVE • EVALUATION
  24. Evaluation • Compare the performance of FESTIVE against (emulated) commercial players • Validate each component • Evaluate how critical each component is to the overall performance • Evaluate the robustness of FESTIVE as a function of bandwidth variability, number of players, and the set of available bitrates
  25. Evaluation • Evaluation setup: • difficult to run controlled experiments with real commercial players and to automate experiments • trace-driven • a custom emulation framework to closely mimic each commercial player • a conservative approximation: lower bounds on the unfairness, inefficiency, and instability • emulated players employ a stateless bitrate selection algorithm • bitrate is a linear function of the previously observed throughput
  26. Evaluation • Evaluation setup: • a flexible framework to evaluate different algorithms for chunk scheduling, bitrate selection, and bandwidth estimation • Java modules (~1,000 lines) • Client – Bottleneck (dummynet) – Server • the client decides the bitrate when issuing a request • the server generates a file on the fly when it receives a request • 350 Kbps to 2750 Kbps / 2-second chunks
  27. Comparison with Commercial Players • 3 players sharing a 3 Mbps bottleneck • slightly worse than OSMF in efficiency • our FESTIVE parameters are customized for particular chunk sizes and bitrate levels • outperforms the best commercial player (SS) • the advantage grows with more players
  28. Component-wise Validation • Bandwidth estimator: • arithmetic mean, median, EWMA [9], and harmonic mean • computed over the observed throughput of the last k = 20 chunks • compared via the CDF of the prediction error
  29. Component-wise Validation • Chunk scheduling: • stateless bitrate selection, instant update, harmonic bandwidth estimation • periodic chunk scheduling vs. randomized scheduling
  30. Component-wise Validation • Stateful bitrate selection: • fixed scheduler with stateless selection (baseline) • randomized scheduler with stateless selection • randomized scheduler with stateful selection
  31. Component-wise Validation • Delayed update: • larger α provides higher efficiency at the cost of stability • α increases from 5 to 30
  32. Robustness • Available bitrates: • we create a set of 10 available bitrate levels, where g controls the gap between the bitrates (one possible construction follows below)
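
The construction formula itself is on the slide; one plausible reading (an assumption, not confirmed by this transcript) is a geometric ladder starting from the lowest level b_1, in which a larger g widens the gap between adjacent levels:

    % Assumed geometric construction of the 10-level bitrate ladder.
    b_i = b_1 \,(1+g)^{\,i-1}, \qquad i = 1, \dots, 10
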
  33. Summary of main results • FESTIVE outperforms existing solutions in terms of fairness by 40%, stability by 50%, and efficiency by 10% • each component of FESTIVE works as predicted by our analysis and is necessary, as the components complement each other • FESTIVE is robust across the number of players, bandwidth variability, and the set of available bitrates
  34. Discussion • Heterogeneous algorithms: • 2 players of each algorithm sharing 8 Mbps • more stable, more efficient • TODO • Case • Friendliness
  35. Discussion • Interaction with non-video traffic: • Wide-area effects: • more background traffic, less synchronization; but many more players, multiple bottlenecks, and interaction with router buffer sizing