Assessing the Feasibility of Web-Request Prediction Models on Mobile Platforms
Presentation slides of our paper "Assessing the Feasibility of Web-Request Prediction Models on Mobile Platforms" at MOBILESoft 2021.
Presentation: https://youtu.be/91b3juLFbeU
PhD Thesis: How to speed up mobile apps using prefetching? "Reducing User-Perceived Latency in Mobile Apps via Prefetching and Caching", Yixue Zhao. tinyurl.com/yixuedissertation
Public Dataset: LiveLab
▪ Subject: 25 iPhone users
▪ Size: an entire year
▪ Time: 2011 (a decade ago)
Ref: Shepard et al. "LiveLab: measuring wireless networks and smartphone users in the field." SIGMETRICS Performance Evaluation Review, 2011.
LiveLab vs. Our Dataset
LiveLab:
▪ An entire year
▪ 25 iPhone-using undergraduates at Rice University
▪ Hired participants
Our Dataset:
▪ A random day (24 hrs)
▪ 10K+ diverse mobile users at BUPT University (400X more users)
▪ No contact with users
3 Research Questions
▪ Possibility? Do small prediction models work? → repetitive requests
▪ Existing solution? Can we reuse existing algorithms? → accuracy of DG (Dependency Graph), PPM (Prediction by Partial Match), MP (Most Popular), and a Naïve baseline
▪ Even smaller? Can we reduce training size even more? → good & enough training data
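Of the algorithms compared in RQ2, the Most-Popular (MP) baseline is the simplest to illustrate: predict the URLs requested most often in the training history. A minimal sketch, assuming a hypothetical function name and signature (not the paper's implementation):

```python
from collections import Counter

def mp_predict(history, k=3):
    """Most-Popular (MP) baseline sketch: predict the k URLs
    requested most frequently in the training history."""
    return [url for url, _ in Counter(history).most_common(k)]

# toy request history (hypothetical URLs)
history = ["a.com", "b.com", "a.com", "c.com", "a.com", "b.com"]
print(mp_predict(history, k=2))  # ['a.com', 'b.com']
```

DG and PPM additionally exploit the order of requests (dependency edges and variable-length context matching, respectively), so they trade this simplicity for sequence awareness.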
Results (RQ2): Existing solution? Can we reuse existing algorithms? → accuracy of DG, PPM, MP, Naïve (baseline)
▪ MP Static Precision: 0.16 [Wang et al. WWW'12]: small models are promising!
▪ Static Precision: 0.478 [Zhao et al. ICSE'18]: existing algorithms are promising!
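The RQ2 numbers are reported as static precision. One plausible reading of that metric, used here purely for illustration (the exact definition is an assumption, not taken from the slides), is the fraction of predicted requests that appear anywhere in the held-out test trace:

```python
def static_precision(predicted, test_trace):
    """Sketch of a static-precision metric (assumed definition):
    the fraction of predicted requests that occur anywhere in
    the held-out test trace."""
    if not predicted:
        return 0.0
    observed = set(test_trace)
    return sum(p in observed for p in predicted) / len(predicted)

# 2 of 4 predictions ("a" and "c") appear in the test trace
print(static_precision(["a", "b", "c", "d"], ["a", "x", "c", "y"]))  # 0.5
```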
Results (RQ3): Even smaller? Can we reduce training size even more? → good & enough training data
[Figure: Static Precision trend w.r.t. #requests, training sizes 200–1000. Is there a cut-off point?]
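The RQ3 experiment can be pictured as a sweep over training sizes: train on the first n requests, evaluate on the rest, and look for the point where precision stops improving. A sketch under stated assumptions (`evaluate` is a hypothetical caller-supplied `(train, test) -> precision` callable; the sizes mirror the figure's axis):

```python
def precision_vs_training_size(trace, evaluate, sizes=(200, 400, 600, 800, 1000)):
    """RQ3-style sweep sketch: train on the first n requests,
    test on the remainder, and inspect where precision plateaus
    (the 'cut-off point')."""
    return {n: evaluate(trace[:n], trace[n:]) for n in sizes if n < len(trace)}

# toy example: a fake evaluator whose precision plateaus after 600 requests
curve = precision_vs_training_size(
    list(range(1200)),
    lambda train, test: min(len(train), 600) / 1000,
)
print(curve)  # {200: 0.2, 400: 0.4, 600: 0.6, 800: 0.6, 1000: 0.6}
```

In this toy curve the cut-off point would be 600 requests: adding more training data past it buys no additional precision.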
Takeaways
▪ Small models work!
▪ We can reuse existing solutions
▪ Less is more (reduce size AND improve accuracy)
▪ Challenged prior conclusion
▪ Re-opened this area