Slide 1

Slide 1 text

What is your machine learning test score? Tania Allard, PhD. Developer Advocate @ Microsoft. Google Developer Expert - ML / TensorFlow.

Slide 2

Slide 2 text

Let’s avoid disappointment

Slide 3

Slide 3 text

Scoring is also called prediction, and is the process of generating values based on a trained machine learning model, given some new input data.

Slide 4

Slide 4 text

Scores may refer to a quantification of a model’s or algorithm’s performance on various metrics.

Slide 5

Slide 5 text

So what are we talking about?

Slide 6

Slide 6 text

This is what we are covering:
● Machine learning systems validation / quality assurance
● How to establish clear testing responsibilities
● How to establish a rubric to measure how good we are at testing
● We are not covering generic software engineering best practices
● Or specific techniques like unit testing, smoke testing, or pen testing
● This is not a technical deep dive into ML testing strategies

Slide 7

Slide 7 text

Why do we need testing or quality assurance anyway?

Slide 8

Slide 8 text

The “subtle” differences between production systems and offline or R&D examples

Slide 9

Slide 9 text

ML systems are continuously evolving: from collecting and aggregating more data, to retraining models and improving their accuracy.

Slide 10

Slide 10 text

Pet projects can be a bit more forgiving

Slide 11

Slide 11 text

We can also get some good laughs... https://www.reddit.com/r/funny/comments/7r9ptc/i_took_a_few_shots_at_lake_louise_today_and/dsvv1nw/

Slide 12

Slide 12 text

A high number of false negatives or type-II errors can lead to havoc (e.g. in the healthcare and financial sectors)

Slide 13

Slide 13 text

Automation bias: “The tendency to disregard or not search for contradictory information in light of a computer-generated solution that is accepted as correct” (Parasuraman & Riley, 1997)

Slide 14

Slide 14 text

Slide 15

Slide 15 text

Quality control and assurance should be performed before our systems are consumed by users, to increase their reliability and reduce bias.

Slide 16

Slide 16 text

Where do unit tests fit in software?

Slide 17

Slide 17 text

Slide 18

Slide 18 text

If only ML looked like this

Slide 19

Slide 19 text

But they look a bit more like this

Slide 20

Slide 20 text

So what do we test?

Slide 21

Slide 21 text

What should we keep an eye on?

Slide 22

Slide 22 text

Who is responsible?

Slide 23

Slide 23 text

Keeping a score:
● Manual testing: 1 point
● Automated testing: 1 point

Slide 24

Slide 24 text

Features and data

Slide 25

Slide 25 text

Test your features and distributions. Do they match your expectations? From the iris data set: is the sepal length consistent? Is the width what you’d expect?
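As an illustration, a minimal sketch of that kind of distribution check, assuming scikit-learn's bundled iris data; the bounds are illustrative values from a quick look at the data, not canonical ones:

```python
# Minimal feature-distribution checks on the iris data (illustrative bounds).
from sklearn.datasets import load_iris


def test_iris_feature_distributions():
    df = load_iris(as_frame=True).frame
    sepal_length = df["sepal length (cm)"]
    sepal_width = df["sepal width (cm)"]

    # Values should stay within plausible physical ranges.
    assert sepal_length.between(4.0, 8.0).all()
    assert sepal_width.between(2.0, 4.5).all()

    # Summary statistics should match what exploration showed.
    assert 5.0 < sepal_length.mean() < 6.5
    assert sepal_length.std() < 1.5
```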

Slide 26

Slide 26 text

The cost of each feature

Slide 27

Slide 27 text

Test the correlation between features and target

Slide 28

Slide 28 text

https://www.tylervigen.com/spurious-correlations

Slide 29

Slide 29 text

Test the correlation between features and target
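A minimal sketch of such a correlation check, again on the iris data; the thresholds are illustrative (a near-perfect correlation can hint at target leakage, a near-zero one at a feature that may not be worth its cost):

```python
# Check how each feature correlates with the target (illustrative thresholds).
from sklearn.datasets import load_iris

df = load_iris(as_frame=True).frame
correlations = df.corr()["target"].drop("target")

for feature, corr in correlations.items():
    # A suspiciously perfect correlation often means target leakage.
    assert abs(corr) < 0.99, f"{feature} may be leaking the target (r={corr:.2f})"
    # A near-zero correlation may mean the feature is not worth its cost.
    if abs(corr) < 0.05:
        print(f"warning: {feature} barely correlates with the target (r={corr:.2f})")
```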

Slide 30

Slide 30 text

Test your privacy controls across the pipeline.
Towards the Science of Security and Privacy in Machine Learning. N. Papernot, P. McDaniel et al. https://pdfs.semanticscholar.org/ebab/687cd1be7d25392c11f89fce6a63bef7219d.pdf

Slide 31

Slide 31 text

Great Expectations (Python package): test all code that creates input features
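A minimal sketch of what those checks can look like with Great Expectations; the method names follow the classic pandas-style expect_* API, and the file name and bounds are placeholders, so details may differ in newer releases:

```python
# Declarative expectations on the data that feeds the feature pipeline.
import great_expectations as ge

df = ge.read_csv("iris.csv")  # placeholder path; returns a dataset with expect_* methods

df.expect_column_to_exist("sepal_length")
df.expect_column_values_to_not_be_null("sepal_length")
result = df.expect_column_values_to_be_between("sepal_length", min_value=4.0, max_value=8.0)

assert result["success"], result
```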

Slide 32

Slide 32 text

Model development

Slide 33

Slide 33 text

Best practices

Slide 34

Slide 34 text

Every piece of code is peer reviewed

Slide 35

Slide 35 text

Test the impact of each tunable hyperparameter
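One way to do this is a validation curve over each hyperparameter; a minimal sketch, with the iris data, a logistic regression, and the range of C chosen purely for illustration:

```python
# Measure how one tunable hyperparameter (C) affects validation accuracy.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import validation_curve

X, y = load_iris(return_X_y=True)
param_range = np.logspace(-3, 3, 7)

train_scores, val_scores = validation_curve(
    LogisticRegression(max_iter=1000), X, y,
    param_name="C", param_range=param_range, cv=5,
)

for c, score in zip(param_range, val_scores.mean(axis=1)):
    print(f"C={c:g}: mean validation accuracy {score:.3f}")
```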

Slide 36

Slide 36 text

Test for model staleness
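A minimal sketch of a staleness check: compare the currently deployed model against one retrained on recent data and fail if the gap grows too large. The load_deployed_model, load_recent_data and retrain helpers are hypothetical stand-ins for your own pipeline:

```python
# Fail if the deployed model has drifted too far behind a freshly trained one.
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

MAX_STALENESS_DROP = 0.02  # illustrative tolerance


def test_model_staleness(load_deployed_model, load_recent_data, retrain):
    # Hold out part of the recent data so both models are scored fairly.
    X, y = load_recent_data()
    X_train, X_eval, y_train, y_eval = train_test_split(X, y, random_state=0)

    deployed = load_deployed_model()
    fresh = retrain(X_train, y_train)

    deployed_acc = accuracy_score(y_eval, deployed.predict(X_eval))
    fresh_acc = accuracy_score(y_eval, fresh.predict(X_eval))

    assert fresh_acc - deployed_acc <= MAX_STALENESS_DROP, (
        f"deployed model looks stale: {deployed_acc:.3f} vs {fresh_acc:.3f} when retrained"
    )
```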

Slide 37

Slide 37 text

Test against a simpler model
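A minimal sketch of a baseline comparison, with the iris data and a random forest standing in for the candidate model:

```python
# The candidate model should beat a trivial baseline and justify its
# complexity against a simple linear model.
from sklearn.datasets import load_iris
from sklearn.dummy import DummyClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)


def mean_cv_accuracy(model):
    return cross_val_score(model, X, y, cv=5).mean()


candidate = mean_cv_accuracy(RandomForestClassifier(random_state=0))
dummy = mean_cv_accuracy(DummyClassifier(strategy="most_frequent"))
simple = mean_cv_accuracy(LogisticRegression(max_iter=1000))

assert candidate > dummy, "candidate does not beat a trivial baseline"
if candidate < simple:
    print(f"a simpler model does at least as well ({simple:.3f} vs {candidate:.3f}); reconsider the added complexity")
```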

Slide 38

Slide 38 text

Test for implicit bias

Slide 39

Slide 39 text

Infrastructure

Slide 40

Slide 40 text

Integration of the full pipeline: from ingestion through training and serving

Slide 41

Slide 41 text

Test model quality before serving: test against known output data
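A minimal sketch of such a quality gate; the load_candidate_model and load_golden_set helpers and the threshold are hypothetical placeholders for your own pipeline:

```python
# Block deployment if the candidate model underperforms on data with known outputs.
from sklearn.metrics import accuracy_score

MIN_ACCURACY = 0.90  # illustrative release threshold


def test_model_quality_before_serving(load_candidate_model, load_golden_set):
    model = load_candidate_model()
    X_golden, y_golden = load_golden_set()  # inputs with known expected outputs

    accuracy = accuracy_score(y_golden, model.predict(X_golden))

    assert accuracy >= MIN_ACCURACY, (
        f"candidate model scored {accuracy:.3f}, below the release threshold"
    )
```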

Slide 42

Slide 42 text

Test how quickly and safely you can roll back

Slide 43

Slide 43 text

Test the reproducibility of training. Train at least two models on the same data and compare differences in aggregated metrics, sliced metrics, or example-by-example predictions.
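A minimal sketch of such a reproducibility check, using the iris data and a seeded random forest for illustration:

```python
# Two training runs on the same data and seed should agree on metrics and
# on example-by-example predictions.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)


def train():
    return RandomForestClassifier(random_state=42).fit(X, y)


model_a, model_b = train(), train()

assert model_a.score(X, y) == model_b.score(X, y)              # aggregated metric
assert np.array_equal(model_a.predict(X), model_b.predict(X))  # per-example predictions
```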

Slide 44

Slide 44 text

Adding up

Slide 45

Slide 45 text

Getting your score:
1. Add points for features and data
2. Add points for development
3. Add points for infrastructure
Which is your lowest score?
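As the closing question suggests, the overall score is limited by your weakest section; a minimal sketch with made-up section totals:

```python
# Example tallies per section; the overall score is the lowest of the three.
section_scores = {"features and data": 4, "model development": 3, "infrastructure": 2}

final_score = min(section_scores.values())
weakest = min(section_scores, key=section_scores.get)
print(f"Your ML test score is {final_score}, limited by {weakest}")
```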

Slide 46

Slide 46 text

0 points: not production ready
1-2 points: might have reliability holes
3-4 points: reasonably tested
5-6 points: good level of testing
7+ points: very strong levels of automated testing

Slide 47

Slide 47 text

Thank you @ixek

Slide 48

Slide 48 text

Rate today’s session:
● Session page on conference website
● O’Reilly Events App