Slide 1

Assessing the Feasibility of Web-Request Prediction Models on Mobile Platforms
Yixue Zhao, Siwei Yin, Adriana Sejfia, Marcelo Schmitt Laser, Haoyu Wang, Nenad Medvidović
MOBILESoft 2021, Virtual Event

Slide 2

PhD Thesis: How to speed up mobile apps using prefetching?
"Reducing User-Perceived Latency in Mobile Apps via Prefetching and Caching", Yixue Zhao
tinyurl.com/yixuedissertation

Slide 3

History-based Prefetching
▪ Input: user's historical requests
▪ Method: prediction model
▪ Output: user's future request(s)
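As a concrete illustration of this input → model → output loop, here is a minimal sketch in Python: a toy order-1 "what usually follows what" predictor trained on a user's past requests. The class name, method names, and example URLs are all hypothetical; this is only meant to make the pipeline tangible, not to reproduce the paper's algorithms.

```python
# A minimal, illustrative sketch (NOT the paper's implementation): an order-1
# "what usually follows what" predictor over a user's request history.
from collections import Counter, defaultdict

class NextRequestPredictor:
    """Counts which request tends to follow which, and predicts likely successors."""

    def __init__(self):
        self.transitions = defaultdict(Counter)

    def train(self, history):
        """history: the user's past requests (URLs), in the order they were issued."""
        for prev, nxt in zip(history, history[1:]):
            self.transitions[prev][nxt] += 1

    def predict(self, current_request, k=1):
        """Return up to k most likely next requests after current_request."""
        followers = self.transitions.get(current_request)
        if not followers:
            return []
        return [url for url, _ in followers.most_common(k)]

# Toy usage: train on historical requests, predict the next one(s), prefetch them.
history = ["/home", "/news", "/article/1", "/home", "/news", "/article/2"]
model = NextRequestPredictor()
model.train(history)
print(model.predict("/news", k=2))  # e.g. ['/article/1', '/article/2']
```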

Slide 4

History-based Prefetching
▪ Input: user's historical requests ← Biggest Challenge!
▪ Method: prediction model
▪ Output: user's future request(s)

Slide 5

Why no dataset? Privacy!

Slide 6

Public Dataset: LiveLab
▪ Subject: 25 iPhone users
▪ Size: an entire year
▪ Time: 2011 (a decade ago)
Ref: Shepard et al. LiveLab: Measuring Wireless Networks and Smartphone Users in the Field. SIGMETRICS Performance Evaluation Review, 2011.

Slide 7

Public Dataset: LiveLab
▪ Subject: 25 iPhone users
▪ Size: an entire year
▪ Time: 2011 (a decade ago)

Slide 8

Public Dataset
▪ Subject: 25 iPhone users
▪ Size: an entire year
▪ Time: 2011 (a decade ago)
Small models!

Slide 9

Easier said than done…

Slide 10

ICSE 2018, Gothenburg, Sweden
▪ Co-author-to-be: Haoyu
▪ PhD advisor: Neno
▪ Me, after my ICSE talk

Slide 11

We got data!
…after tons of paperwork, back and forth, ethical considerations, etc.

Slide 12

LiveLab vs. Our Dataset
LiveLab:
▪ An entire year
Our Dataset:
▪ A random day (24 hrs) → 400X shorter time

Slide 13

LiveLab vs. Our Dataset
LiveLab:
▪ An entire year
▪ 25 iPhone-using undergraduates at Rice University
Our Dataset:
▪ A random day (24 hrs)
▪ 10K+ diverse mobile users at BUPT University → 400X more users

Slide 14

LiveLab vs. Our Dataset
LiveLab:
▪ An entire year
▪ 25 iPhone-using undergraduates at Rice University
▪ Hired participants
Our Dataset:
▪ A random day (24 hrs)
▪ 10K+ diverse mobile users at BUPT University
▪ No contact with users

Slide 15

3 Research Questions
▪ Possibility? Do small prediction models work? → repetitive requests
▪ Existing solution? Can we reuse existing algorithms? → accuracy of DG, PPM, MP, Naïve (baseline)
▪ Even smaller? Can we reduce training size even more? → good & enough training data
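RQ1 turns on how repetitive a user's requests actually are. Below is a tiny sketch of one way such repetitiveness could be quantified, as the fraction of requests already seen earlier in the same log; this measure is an assumption for illustration, not necessarily the paper's exact metric.

```python
# Illustrative only: one way to quantify how repetitive a request log is
# (fraction of requests already seen earlier in the same log).
def repetition_ratio(requests):
    seen, repeated = set(), 0
    for url in requests:
        if url in seen:
            repeated += 1
        seen.add(url)
    return repeated / len(requests) if requests else 0.0

print(repetition_ratio(["/home", "/news", "/home", "/news", "/profile"]))  # 0.4
```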

Slide 16

3 Research Questions
▪ Possibility? Do small prediction models work? → repetitive requests
▪ Existing solution? Can we reuse existing algorithms? → accuracy of DG, PPM, MP, Naïve (baseline)
▪ Even smaller? Can we reduce training size even more? → good & enough training data

Slide 17

3 Research Questions
▪ Possibility? Do small prediction models work? → repetitive requests
▪ Existing solution? Can we reuse existing algorithms? → accuracy of DG, PPM, MP, Naïve (baseline)
▪ Even smaller? Can we reduce training size even more? → good & enough training data
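Of the algorithms named in RQ2, MP ("Most Popular") is the simplest to illustrate: predict the requests that occur most often in the user's history. The sketch below is a plausible reading of MP for illustration only; the function name and sample data are hypothetical, and DG and PPM are more involved than this.

```python
# A plausible reading of MP ("Most Popular"), for illustration only:
# predict the k requests that occur most often in the user's history.
from collections import Counter

def most_popular_predictions(history, k=3):
    counts = Counter(history)
    return [url for url, _ in counts.most_common(k)]

history = ["/home", "/news", "/home", "/profile", "/news", "/home"]
print(most_popular_predictions(history, k=2))  # ['/home', '/news']
```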

Slide 18

HiPHarness framework

Slide 19

HiPHarness framework
Per user: 15 million requests → 7 million models → prediction accuracy
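A rough sketch of what such a per-user harness implies: split each user's request log into a training part and a testing part, fit a small model on the former, and score its predictions on the latter. Every name below (evaluate_user, model_factory, train_ratio) is hypothetical; this is not the actual HiPHarness code.

```python
# Hypothetical per-user pipeline (not the actual HiPHarness code): split a user's
# request log, train a small model on the first part, score it on the rest.
def evaluate_user(requests, model_factory, train_ratio=0.8):
    split = int(len(requests) * train_ratio)
    train, test = requests[:split], requests[split:]

    model = model_factory()
    model.train(train)

    hits = total = 0
    for prev, nxt in zip(test, test[1:]):
        predicted = model.predict(prev, k=1)
        if predicted:           # only count cases where the model made a prediction
            total += 1
            hits += int(nxt in predicted)
    return hits / total if total else 0.0

# e.g. score = evaluate_user(user_requests, NextRequestPredictor)  # reusing the earlier sketch
# Repeating this per user over 15 million requests yields millions of small models,
# each with its own accuracy score.
```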

Slide 20

Results of 7+ million models for individual users

Slide 21

Results (RQ2)
Existing solution? Can we reuse existing algorithms? → accuracy of DG, PPM, MP, Naïve (baseline)

Slide 22

Results (RQ2)
Existing solution? Can we reuse existing algorithms? → accuracy of DG, PPM, MP, Naïve (baseline)

Slide 23

Results (RQ2)
Existing solution? Can we reuse existing algorithms? → accuracy of DG, PPM, MP, Naïve (baseline)
MP Static Precision: 0.16 [Wang et al. WWW'12]

Slide 24

Results (RQ2)
Existing solution? Can we reuse existing algorithms? → accuracy of DG, PPM, MP, Naïve (baseline)
MP Static Precision: 0.16 [Wang et al. WWW'12]
Small models are promising!

Slide 25

Results (RQ2)
Existing solution? Can we reuse existing algorithms? → accuracy of DG, PPM, MP, Naïve (baseline)
Static Precision: 0.478 [Zhao et al. ICSE'18]

Slide 26

Results (RQ2)
Existing solution? Can we reuse existing algorithms? → accuracy of DG, PPM, MP, Naïve (baseline)
Static Precision: 0.478 [Zhao et al. ICSE'18]
Existing algorithms are promising!
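For readers unfamiliar with the metric, static precision is commonly defined as the fraction of predicted requests that the user actually issues. Below is a minimal sketch under that assumption; the function name and example values are illustrative, not taken from the paper.

```python
# Assumed definition for illustration: static precision = correctly predicted
# requests / all predicted requests. Not copied from the paper.
def static_precision(predicted, actually_requested):
    if not predicted:
        return 0.0
    issued = set(actually_requested)
    hits = sum(1 for url in predicted if url in issued)
    return hits / len(predicted)

print(static_precision(["/news", "/profile"], ["/news", "/settings"]))  # 0.5
```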

Slide 27

Results (RQ3)
Even smaller? Can we reduce training size even more? → good & enough training data

Slide 28

Results (RQ3)
Even smaller? Can we reduce training size even more? → good & enough training data

Slide 29

Results (RQ3)
Even smaller? Can we reduce training size even more? → good & enough training data
[Figure: Static Precision trend w.r.t. #requests]

Slide 30

Results (RQ3)
Even smaller? Can we reduce training size even more? → good & enough training data
[Figure: Static Precision trend w.r.t. #requests]
Cut-off point?

Slide 31

Results (RQ3)
▪ Sliding-Window approach to explore cut-off points
▪ 11 window sizes (50, 100, 200, 300, 400, 500, 600, 700, 800, 900, 1,000)
▪ ANOVA post-hoc test (pair-wise comparison)
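A sketch of how such a sliding-window exploration could look: for each candidate window size, train and score models on consecutive windows of that many requests, then compare the score distributions across sizes. scipy's one-way ANOVA stands in for the ANOVA + post-hoc procedure; train_and_score and the surrounding names are hypothetical, and the pair-wise post-hoc step is only indicated by a comment.

```python
# Sketch of the sliding-window exploration; scipy's one-way ANOVA stands in for the
# ANOVA + post-hoc procedure. train_and_score is a hypothetical callable that trains
# a model on one window of requests and returns its precision.
from scipy import stats

WINDOW_SIZES = [50, 100, 200, 300, 400, 500, 600, 700, 800, 900, 1000]

def precision_per_window_size(requests, window_size, train_and_score):
    """Slide a fixed-size window over the request log; score one model per window."""
    scores = []
    for start in range(0, len(requests) - window_size, window_size):
        window = requests[start:start + window_size]
        scores.append(train_and_score(window))
    return scores

def compare_window_sizes(requests, train_and_score):
    groups = [precision_per_window_size(requests, w, train_and_score) for w in WINDOW_SIZES]
    groups = [g for g in groups if g]          # drop sizes that yielded no complete window
    f_stat, p_value = stats.f_oneway(*groups)  # pair-wise post-hoc comparisons would follow
    return f_stat, p_value
```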

Slide 32

Results (RQ3)
▪ Sliding-Window approach to explore cut-off points
▪ 11 window sizes (50, 100, 200, 300, 400, 500, 600, 700, 800, 900, 1,000)
▪ ANOVA post-hoc test (pair-wise comparison)

Slide 33

Results (RQ3)
▪ Sliding-Window approach to explore cut-off points
▪ 11 window sizes (50, 100, 200, 300, 400, 500, 600, 700, 800, 900, 1,000)
▪ ANOVA post-hoc test (pair-wise comparison)

Slide 34

Takeaways
▪ Small models work!
▪ We can reuse existing solutions
▪ Less is more (reduce size AND improve accuracy)
▪ Challenged prior conclusion
▪ Re-open this area

Slide 35

Acknowledgements
Co-authors: Siwei Yin, Adriana Sejfia, Marcelo Schmitt Laser, Haoyu Wang, Nenad Medvidović

Slide 36

Thanks! Any questions?
yixuezhao@cs.umass.edu
@yixue_zhao
https://people.cs.umass.edu/~yixuezhao/