
Assessing the Feasibility of Web-Request Prediction Models on Mobile Platforms


Presentation slides of our paper "Assessing the Feasibility of Web-Request Prediction Models on Mobile Platforms" at MOBILESoft 2021.
Presentation: https://youtu.be/91b3juLFbeU

Yixue Zhao

May 05, 2021


Transcript

  1. Assessing the Feasibility of Web-Request Prediction Models on Mobile Platforms

     Yixue Zhao¹, Siwei Yin², Adriana Sejfia³, Marcelo Schmitt Laser³, Haoyu Wang², Nenad Medvidović³. MOBILESoft 2021, Virtual Event.
  2. PhD Thesis: How to speed up mobile apps using prefetching?

     "Reducing User-Perceived Latency in Mobile Apps via Prefetching and Caching," Yixue Zhao. tinyurl.com/yixuedissertation
  3. History-based Prefetching

     ▪ Input: user's historical requests
     ▪ Method: prediction model ← the biggest challenge!
     ▪ Output: user's future request(s)
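The input → model → output pipeline above can be sketched in a few lines. This is a hypothetical illustration (a first-order successor-frequency predictor), not any of the specific algorithms the deck evaluates later (DG, PPM, MP, Naïve); the request URLs are made up:

```python
from collections import Counter, defaultdict

class SuccessorPredictor:
    """Minimal history-based predictor sketch: for each request seen in the
    user's history, count which request most often follows it, and predict
    that successor as the user's next request."""

    def __init__(self):
        self.successors = defaultdict(Counter)

    def train(self, history):
        # history: one user's requests in chronological order
        for prev, nxt in zip(history, history[1:]):
            self.successors[prev][nxt] += 1

    def predict(self, current):
        # Most frequent successor of the current request, or None if unseen
        counts = self.successors.get(current)
        if not counts:
            return None
        return counts.most_common(1)[0][0]

history = ["/home", "/news", "/home", "/news", "/home", "/profile"]
model = SuccessorPredictor()
model.train(history)
print(model.predict("/home"))  # "/news" (follows "/home" twice vs. once)
```

A prefetcher would then fetch the predicted request into a cache before the user issues it, which is where the latency savings come from.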
  4. Public Dataset: LiveLab

     ▪ Subject: 25 iPhone users
     ▪ Size: an entire year
     ▪ Time: 2011 (a decade ago)
     Small models!

     Ref: Shepard et al. "LiveLab: Measuring Wireless Networks and Smartphone Users in the Field." SIGMETRICS Performance Evaluation Review, 2011.
  7. We got data! (after tons of paperwork, back and forth, ethical considerations, etc.)
  8. LiveLab vs. Our Dataset

     LiveLab:
     ▪ An entire year
     ▪ 25 iPhone-using undergraduates at Rice University
     ▪ Hired participants

     Our Dataset:
     ▪ A random day (24 hrs) → 400× shorter time
     ▪ 10K+ diverse mobile users at BUPT → 400× more users
     ▪ No contact with users
  11. 3 Research Questions

     ▪ Possibility? Do small prediction models work? → repetitive requests
     ▪ Existing solution? Can we reuse existing algorithms? → accuracy of DG, PPM, MP, Naïve (baseline)
     ▪ Even smaller? Can we reduce training size even more? → good & enough training data
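The deck names a "Naïve (baseline)" without defining it. One common naïve choice for request prediction, shown here purely as a hypothetical sketch (not necessarily the paper's baseline), is to predict that the user's most recent request simply recurs; it succeeds exactly when requests are repetitive, which is what RQ1 probes:

```python
def naive_predict(history):
    """Hypothetical naïve baseline sketch: predict that the next request
    repeats the most recent one. No model is trained at all."""
    return history[-1] if history else None

print(naive_predict(["/home", "/news", "/news"]))  # "/news"
```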
  14. Results (RQ2)

     Existing solution? Can we reuse existing algorithms? → accuracy of DG, PPM, MP, Naïve (baseline)
     ▪ MP Static Precision: 0.16 [Wang et al., WWW '12]
     ▪ Small models are promising!
  18. Results (RQ2, cont.)

     ▪ Static Precision: 0.478 [Zhao et al., ICSE '18]
     ▪ Existing algorithms are promising!
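The precision numbers above can be made concrete with a small sketch. This is one plausible reading of a precision-style metric (the paper's exact Static Precision definition may differ): among the points where the model makes a prediction, the fraction where the prediction matches the request the user actually issues next. The request names are invented:

```python
def precision(predictions, actuals):
    """Hypothetical precision sketch: fraction of non-empty predictions
    that match the actually observed next request. Points where the model
    abstains (None) are excluded from the denominator."""
    made = [(p, a) for p, a in zip(predictions, actuals) if p is not None]
    if not made:
        return 0.0
    hits = sum(1 for p, a in made if p == a)
    return hits / len(made)

print(precision(["/news", None, "/home", "/cart"],
                ["/news", "/x", "/home", "/profile"]))  # 2/3 ≈ 0.667
```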
  20. Results (RQ3)

     Even smaller? Can we reduce training size even more? → good & enough training data
     [Figure: Static Precision trend w.r.t. #requests, x-axis 200–1,000]
     ▪ Cut-off point?
  24. Results (RQ3, cont.)

     ▪ Sliding-window approach to explore cut-off points
     ▪ 11 window sizes (50, 100, 200, 300, 400, 500, 600, 700, 800, 900, 1,000)
     ▪ ANOVA post-hoc test (pair-wise comparison)
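The sliding-window idea above can be sketched as follows. This is a hypothetical harness, not the paper's implementation: for each candidate window size, only the most recent `w` requests before each prediction point are used, and the average score per window size shows where extra history stops helping. The `train_and_score` callback and the toy scorer are assumptions for illustration:

```python
def sliding_window_eval(history, window_sizes, train_and_score):
    """Sketch: evaluate each candidate window size w by training only on
    the last w requests before every prediction point, then averaging the
    per-point scores. train_and_score(window, actual) -> float in [0, 1]."""
    results = {}
    for w in window_sizes:
        scores = []
        for i in range(w, len(history) - 1):
            window = history[i - w:i]  # last w requests only
            scores.append(train_and_score(window, history[i]))
        results[w] = sum(scores) / len(scores) if scores else 0.0
    return results

# Toy demo: score 1.0 when the request just before the prediction point
# recurs (a stand-in for a real train-and-evaluate step).
repeat_scorer = lambda window, actual: 1.0 if window and window[-1] == actual else 0.0
demo = sliding_window_eval(["a", "a", "b"] * 30, [10, 30], repeat_scorer)
print(demo)
```

Comparing the per-window-size averages (e.g. with an ANOVA post-hoc test, as on the slide) then reveals whether larger windows give a statistically significant improvement, i.e. where the cut-off point lies.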
  27. Takeaways

     ▪ Small models work!
     ▪ We can reuse existing solutions
     ▪ Less is more (reduced size AND improved accuracy)
     ▪ Challenged a prior conclusion
     ▪ Re-opened this area