
Assessing the Feasibility of Web-Request Prediction Models on Mobile Platforms

Presentation slides of our paper "Assessing the Feasibility of Web-Request Prediction Models on Mobile Platforms" at MOBILESoft 2021.
Presentation: https://youtu.be/91b3juLFbeU

Yixue Zhao

May 05, 2021

Transcript

  1. Assessing the Feasibility of Web-Request Prediction Models on Mobile Platforms

    Yixue Zhao, Siwei Yin, Adriana Sejfia, Marcelo Schmitt Laser, Haoyu Wang, Nenad Medvidović. MOBILESoft 2021, Virtual Event
  2. PhD Thesis: How to speed up mobile apps using prefetching?

    "Reducing User-Perceived Latency in Mobile Apps via Prefetching and Caching", Yixue Zhao, tinyurl.com/yixuedissertation
  3. History-based Prefetching ▪ Input: user’s historical requests ▪ Method: prediction

    model ▪ Output: user’s future request(s)
  4. History-based Prefetching ▪ Input: user’s historical requests ▪ Method: prediction

    model ▪ Output: user’s future request(s). Biggest Challenge!
  5. Why no dataset? Privacy!

  6. Public Dataset: LiveLab ▪ Subject: 25 iPhone users ▪ Size: an entire year

    ▪ Time: 2011 (a decade ago) Ref: Shepard et al. LiveLab: measuring wireless networks and smartphone users in the field. SIGMETRICS Performance Evaluation Review. 2011
  7. Public Dataset: LiveLab ▪ Subject: 25 iPhone users ▪ Size: an entire year

    ▪ Time: 2011 (a decade ago)
  8. Public Dataset ▪ Subject: 25 iPhone users ▪ Size: an entire year

    ▪ Time: 2011 (a decade ago) Small models!
  9. Easier said than done… 

  10. ICSE 2018, Gothenburg, Sweden

    Me after my ICSE talk, with co-author-to-be Haoyu and PhD advisor Neno
  11. We got data! After tons of paperwork, back and

    forth, ethical considerations, etc.
  12. LiveLab vs. Our Dataset

    ▪ LiveLab: an entire year ▪ Our Dataset: a random day (24 hrs), i.e., 400X shorter time
  13. LiveLab vs. Our Dataset

    ▪ LiveLab: an entire year; 25 iPhone-using undergraduates at Rice University ▪ Our Dataset: a random day (24 hrs); 10K+ diverse mobile users at BUPT (Beijing University of Posts and Telecommunications), i.e., 400X more users
  14. LiveLab vs. Our Dataset

    ▪ LiveLab: an entire year; 25 iPhone-using undergraduates at Rice University; hired participants ▪ Our Dataset: a random day (24 hrs); 10K+ diverse mobile users at BUPT; no contact with users
  15. 3 Research Questions

    ▪ Possibility? Do small prediction models work? → repetitive requests ▪ Existing solution? Can we reuse existing algorithms? → accuracy of DG, PPM, MP, Naïve (baseline) ▪ Even smaller? Can we reduce training size even more? → good & enough training data
  18. HiPHarness framework

  19. HiPHarness framework (per user): 15 million requests →

    7 million models, evaluated on prediction accuracy
  20. Results of 7+ million models for individual users

  21. Results (RQ2) Existing solution? Can we reuse existing algorithms?

    → accuracy of DG, PPM, MP, Naïve (baseline)
  23. Results (RQ2) Existing solution? Can we reuse existing algorithms?

    → accuracy of DG, PPM, MP, Naïve (baseline) MP Static Precision: 0.16 [Wang et al. WWW’12]
  24. Results (RQ2) Existing solution? Can we reuse existing algorithms?

    → accuracy of DG, PPM, MP, Naïve (baseline) MP Static Precision: 0.16 [Wang et al. WWW’12] Small models are promising!
  25. Results (RQ2) Existing solution? Can we reuse existing algorithms?

    → accuracy of DG, PPM, MP, Naïve (baseline) Static Precision: 0.478 [Zhao et al. ICSE’18]
  26. Results (RQ2) Existing solution? Can we reuse existing algorithms?

    → accuracy of DG, PPM, MP, Naïve (baseline) Static Precision: 0.478 [Zhao et al. ICSE’18] Existing algorithms are promising!
  27. Results (RQ3) Even smaller? Can we reduce training size

    even more? → good & enough training data
  29. Results (RQ3) Even smaller? Can we reduce training size

    even more? → good & enough training data [Figure: Static Precision trend w.r.t. #requests in training]
  30. Results (RQ3) Even smaller? Can we reduce training size

    even more? → good & enough training data [Figure: Static Precision trend w.r.t. #requests in training] Cut-off point?
  31. Results (RQ3) ▪ Sliding-Window approach to explore cut-off points

    ▪ 11 window sizes (50, 100, 200, 300, 400, 500, 600, 700, 800, 900, 1,000) ▪ ANOVA post-hoc test (pair-wise comparison)
  34. Takeaways ▪ Small models work! ▪ We can reuse

    existing solutions ▪ Less is more (reduce size AND improve accuracy) ▪ Challenged prior conclusion ▪ Re-open this area
  35. Acknowledgement Co-authors: Siwei Yin, Adriana Sejfia, Marcelo Schmitt Laser, Haoyu

    Wang, Nenad Medvidović
  36. Thanks! Any questions? yixuezhao@cs.umass.edu @yixue_zhao https://people.cs.umass.edu/~yixuezhao/