Gearbox: Cache-friendly Congestion Control for DASH

Yunfeng He, Varun Singh, Jörg Ott
Aalto University, Paris, ABR Workshop, 19.05.2015

  1. Gearbox
    Yunfeng He, Varun Singh, Jörg Ott
    Aalto University

  2. Rate Switching
    Representations over the timeline: 500 kbps, 800 kbps, 1000 kbps, 1200 kbps.
    Clients switch between different representations of the same
    media content based on bandwidth estimation.
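The switching decision described above can be sketched as picking the highest representation the estimated bandwidth can sustain. This is a minimal illustration with hypothetical names and a hypothetical safety factor, not the dash.js implementation:

```python
# Minimal sketch of bandwidth-based rate switching (hypothetical helper,
# not the dash.js code). The bitrate ladder matches the slide.
REPRESENTATIONS_KBPS = [500, 800, 1000, 1200]

def select_representation(estimated_bandwidth_kbps, safety_factor=0.9):
    """Pick the highest representation the estimated bandwidth can sustain."""
    budget = estimated_bandwidth_kbps * safety_factor
    feasible = [r for r in REPRESENTATIONS_KBPS if r <= budget]
    # Fall back to the lowest representation if none fits.
    return max(feasible) if feasible else REPRESENTATIONS_KBPS[0]
```

With a 1000 kbps estimate and a 0.9 safety factor, the budget is 900 kbps, so the client would pick the 800 kbps representation.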

  3. CDN Operation
    [Plot: Bitrate [kbps] (0-5000) vs. download time (min:sec, 00:00-08:00);
     series: RepresentationBW, ReceiveRate, CacheHit (impulse = hit, blank = miss)]

  4. Default algorithm: dash.js
    [Plot: Bitrate [kbps] (0-5000) vs. download time (min:sec, 48:00-04:00);
     series: RepresentationBW, ReceiveRate, CacheHit (impulse = hit, blank = miss)]
    • Caches segments.
    • May have incomplete representations.
    • Innumerable switches.
    This is the 101st client.
    12-minute content, 1-2 s chunks.

  5. Gearbox
    • Goals: adaptation agility and adaptation accuracy.
    • Handles network fluctuations (3G/LTE) and incomplete cached representations.
    • Tracks the rate of change of the buffer-fill level.
    • Relies on 2 main metrics:
      • buffer-fill level
      • end-to-end latency measurement
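The rate of change of the buffer-fill level can be estimated from consecutive buffer samples. A minimal sketch, assuming (time, buffer-percentage) samples; the function name and sampling scheme are hypothetical:

```python
# Sketch: estimating the rate of change of the buffer-fill level,
# one of Gearbox's two main metrics (illustrative names only).
def buffer_level_change(samples):
    """Given (time_s, buffer_pct) samples, return the change in %/s
    between the last two samples; 0.0 if fewer than two samples exist."""
    if len(samples) < 2:
        return 0.0
    (t0, b0), (t1, b1) = samples[-2], samples[-1]
    return (b1 - b0) / (t1 - t0)
```

A positive value means the buffer is filling; a negative value means it is draining.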

  6. Gearbox
    • Intuition: if the buffer is filling up
      • slowly, switch to a lower representation and a smaller chunk size
      • quickly, switch to a higher representation and a larger chunk size.
    • At startup:
      • Adapt upwards slowly; we want to get the content quickly.
    • The representation level is only re-evaluated when a
      drastic buffer-level change happens.
    • Number of gears:
      • 3 is too few
      • 6 is too many

    [Flowchart: Gearbox Algorithm]
    Input: β, ∆η, Bc, Ri(t), {Ri | Ri ∈ R}
    Output: Ri(t)+1 (representation for the next segment)
    • Upon receiving the last segment, call Gearbox with the gear RSP and the input data.
    • Check the current gear position and the Gear-shift Flag (boolean):
      • If True: check the change of buffer level ∆η and compare it with the
        threshold ∆g. If ∆g is breached, evaluate the representation
        (decide Ri(t)+1) and set the Gear-shift Flag to False; otherwise
        maintain the previous representation level (Ri(t)+1 := Ri(t)).
      • If False: increment COUNTER. When COUNTER = CYCLE, check the
        buffer level in percentage (β); if a gear boundary is reached,
        shift gear, set the Gear-shift Flag to True, reset COUNTER to
        zero, and evaluate the representation (decide Ri(t)+1).
    • Signal the HTTP client to download the next segment.

    Legend:
    β: real-time buffer level
    ∆η: buffer-level change
    Bc: current bandwidth
    Ri(t): last chosen representation
    Ri(t)+1: next chosen representation
    {Ri | Ri ∈ R}: available representations
    RSP: representation switching policy
    ∆g: buffer-level-change threshold
    (We will not dive into the algorithm today.)
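One possible reading of the Gearbox flowchart on this slide, written as a Python sketch. The class structure, thresholds, gear boundaries, and the evaluation rule are all hypothetical; the talk does not disclose the actual policy details:

```python
# Hypothetical sketch of the Gearbox control loop; the real thresholds,
# gear boundaries, and evaluation rules are not shown in the talk.
class Gearbox:
    def __init__(self, reps, cycle=5, delta_g=10.0):
        self.reps = sorted(reps)      # {Ri | Ri in R}: available representations
        self.cycle = cycle            # CYCLE: re-evaluation period (in segments)
        self.delta_g = delta_g        # Δg: buffer-level-change threshold (%/s)
        self.counter = 0              # COUNTER
        self.gear_shift_flag = False  # Gear-shift Flag
        self.current = self.reps[0]   # Ri(t): start at the lowest representation

    def on_segment_received(self, beta, delta_eta, bandwidth_kbps):
        """Decide Ri(t)+1 from buffer level β, buffer change Δη, bandwidth Bc."""
        if self.gear_shift_flag:
            # A gear shift just happened: re-evaluate only on a drastic change.
            if abs(delta_eta) > self.delta_g:
                self.current = self._evaluate(bandwidth_kbps)
            self.gear_shift_flag = False
        else:
            self.counter += 1
            if self.counter == self.cycle:
                # Periodically check whether β crossed a gear boundary.
                if self._gear_boundary_reached(beta):
                    self.gear_shift_flag = True
                    self.current = self._evaluate(bandwidth_kbps)
                self.counter = 0
        return self.current  # Ri(t)+1: representation for the next segment

    def _evaluate(self, bandwidth_kbps):
        # Hypothetical evaluation: highest representation below the bandwidth.
        feasible = [r for r in self.reps if r <= bandwidth_kbps]
        return max(feasible) if feasible else self.reps[0]

    def _gear_boundary_reached(self, beta, boundaries=(25, 50, 75)):
        # Hypothetical gear boundaries in buffer-level percent.
        return any(abs(beta - b) < 5 for b in boundaries)
```

The sketch preserves the flowchart's shape: a flag-gated fast path that reacts to drastic buffer changes, and a slower counter-driven path that re-evaluates the gear every CYCLE segments.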

  7. Testbed
    • Content server:
      • Nginx/Apache configured to listen on several ports
      • Content in several representations and several chunk sizes
    • Cache proxy:
      • LRU and heap LFUDA replacement policies
      • Single cache and cascaded caches
    • Clients:
      • dash.js
      • Cache tests with multiple clients requesting the same content
      • Clients have different end-to-end latencies
      • Poisson arrivals
    • Metrics:
      • Number of buffer underruns
      • Number of switches
      • Average buffer-fill level
      • Average bit rate
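The Poisson client arrivals in the testbed can be generated from exponential inter-arrival times. A minimal sketch; the arrival rate actually used in the experiments is not stated in the talk:

```python
import random

# Sketch: generating Poisson client arrival times for a testbed
# (illustrative; the experiment's actual arrival rate is not given).
def poisson_arrival_times(rate_per_s, n_clients, seed=None):
    """Return n_clients arrival timestamps with exponentially distributed
    inter-arrival times, i.e. a Poisson arrival process at the given rate."""
    rng = random.Random(seed)
    t, times = 0.0, []
    for _ in range(n_clients):
        t += rng.expovariate(rate_per_s)  # exponential gap between arrivals
        times.append(t)
    return times
```

Each client would then start its dash.js session at its assigned timestamp.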

  8. Basic Test (vary latency 1/2)
    • Tries to keep the buffer as full as possible.
    [Plot: BufferLevel (in %, 0-100) and GearPosition (4 in total, 0-5) vs.
     time (min:sec, 00:00-08:00); series: Bufferlevel_Gearbox,
     GearPosition_Gearbox, Bufferlevel_Baseline]

  9. Basic Test (vary latency 2/2)

  10. Varying Packet Loss
    [Plot: CDF (0-1) of buffer level (in %, 0-100) for packet-loss rates
     plr = 0, 0.01, 0.02, 0.05]
    • Remains at the lowest representation.

  11. Cascaded Cache (LFUDA)
    [Two plots, Baseline vs. Gearbox: Bitrate [kbps] (0-5000) vs. download
     time (min:sec, 00:00-08:00); series: RepresentationBW, ReceiveRate,
     CacheHit (impulse = hit, blank = miss)]

  12. Buffer Fill Level
    [Left plot: BufferLevel (in %, 0-100) and GearPosition (4 in total, 0-5)
     vs. time (min:sec, 00:00-08:00); series: Bufferlevel_Gearbox,
     GearPosition_Gearbox, Bufferlevel_Baseline.
     Right plot: CDF (0-1) of BitRate (0-6000) for Baseline vs. Gearbox]

  13. LFUDA
    [Plots comparing Baseline and Gearbox]

  14. Conclusions
    • Work in progress; we still need to compare against algorithms better
      than the (early) baseline.
    • We find that LFUDA generates more cache hits for DASH, since LFUDA
      tends to keep big files in the cache.
    • Different cache replacement policies can affect the cache hit rate in
      unique ways.
