Slide 1

Slide 1 text

Trustworthy Machine Learning David Evans University of Virginia jeffersonswheel.org Bertinoro, Italy 26 August 2019 19th International School on Foundations of Security Analysis and Design 3: Privacy

Slide 2

Slide 2 text

Course Overview Monday Introduction / Attacks Tuesday Defenses Today Privacy 1

Slide 3

Slide 3 text

Machine Learning Pipeline 2 Data Subjects Data Collection Data Owner Data Collection Model Training Trained Model Deployed Model Hyperparameters User Machine Learning Service

Slide 4

Slide 4 text

Potential Privacy Goals 3 Data Subjects Data Collection Data Owner Data Collection Model Training Trained Model Deployed Model Hyperparameters User Machine Learning Service Data Subject Privacy API User

Slide 5

Slide 5 text

Potential Privacy Goals 4 Data Subjects Data Collection Data Owner Data Collection Model Training Trained Model Deployed Model Hyperparameters User Machine Learning Service Data Subject Privacy Distributed (Federated) Learning API User

Slide 6

Slide 6 text

5 Data Subjects Data Collection Data Owner Data Collection Model Training Trained Model Deployed Model Hyperparameters User Machine Learning Service Data Subject Privacy Distributed (Federated) Learning Inference Attack API User

Slide 7

Slide 7 text

6 Data Subjects Data Collection Data Owner Data Collection Model Training Trained Model Deployed Model Hyperparameters User Machine Learning Service Data Subject Privacy Distributed (Federated) Learning Inference Attack API User

Slide 8

Slide 8 text

7 Data Subjects Data Collection Data Owner Data Collection Model Training Trained Model Deployed Model Hyperparameters User Machine Learning Service Data Subject Privacy Distributed (Federated) Learning Inference Attack API User Model Stealing Attack

Slide 9

Slide 9 text

8 Data Subjects Data Collection Data Owner Data Collection Model Training Trained Model Deployed Model Hyperparameters User Machine Learning Service Data Subject Privacy Distributed (Federated) Learning Inference Attack API User Model Stealing Attack Hyperparameter Stealing Attack

Slide 10

Slide 10 text

9 Data Subjects Data Collection Data Owner Data Collection Model Training Trained Model Deployed Model Hyperparameters User Machine Learning Service Data Subject Privacy Distributed (Federated) Learning Inference Attack API User Model Stealing Attack Hyperparameter Stealing Attack Note: only considering confidentiality; lots of integrity attacks also (poisoning, evasion, …)

Slide 11

Slide 11 text

Privacy Mechanisms: Encryption 10 Data Subjects Data Collection Data Owner Data Collection Model Training Trained Model Deployed Model Hyperparameters User API User Randomized Response, Local Differential Privacy Output Perturbation Objective Perturbation Gradient Perturbation Distributed Learning (Federated Learning)

Slide 12

Slide 12 text

Privacy Mechanisms: Encryption 11 Data Subjects Data Collection Data Owner Data Collection Model Training Trained Model Deployed Model Hyperparameters User API User Randomized Response, Local Differential Privacy Output Perturbation Objective Perturbation Gradient Perturbation Distributed Learning (Federated Learning) Oblivious Model Execution

Slide 13

Slide 13 text

Privacy Mechanisms: Noise 12 Data Subjects Data Collection Data Owner Data Collection Model Training Trained Model Deployed Model Hyperparameters User Machine Learning Service API User Randomized Response, Local Differential Privacy Output Perturbation Objective Perturbation Gradient Perturbation

Slide 14

Slide 14 text

Mechanisms Overview. Noise: Local Differential Privacy / Randomized Response (prevent subject data exposure); Differential Privacy during/after model learning (prevent training data inference). Encryption: Secure Multi-Party Computation, Homomorphic Encryption, Hybrid Protocols (prevent training data exposure; prevent model/input exposure). 13

Slide 15

Slide 15 text

Secure Two-Party Computation Can Alice and Bob compute a function on private data, without exposing anything about their data besides the result? y = f(a, b) Alice’s Secret Input: a Bob’s Secret Input: b 14

Slide 16

Slide 16 text

Secure Two-Party Computation Can Alice and Bob compute a function on private data, without exposing anything about their data besides the result? y = f(a, b) Alice’s Secret Input: a Bob’s Secret Input: b “private” and “correct” 15

Slide 17

Slide 17 text

Secure Computation Protocol Alice (circuit generator) and Bob (circuit evaluator): Alice has secret input a, Bob has secret input b. They agree on a function f and both learn y = f(a, b). Alice learns nothing else about b; Bob learns nothing else about a. 16

Slide 18

Slide 18 text

FOCS 1982 FOCS 1986 Note: neither paper actually describes “Yao’s protocol” Andrew Yao (Turing Award 2000) 17

Slide 19

Slide 19 text

Regular Logic AND gate. Inputs a, b; output x. Truth table: (0, 0) → 0, (0, 1) → 0, (1, 0) → 0, (1, 1) → 1. 18

Slide 20

Slide 20 text

“Obfuscated” Logic AND gate. Inputs a, b; output x. Table: (a0, b0) → x0, (a0, b1) → x0, (a1, b0) → x0, (a1, b1) → x1. The ai, bi, xi are random values, chosen by the generator but meaningless to the evaluator. 19

Slide 21

Slide 21 text

Garbled Logic AND gate. Inputs a, b; output x. Table: (a0, b0) → Enc_{a0,b0}(x0), (a0, b1) → Enc_{a0,b1}(x0), (a1, b0) → Enc_{a1,b0}(x0), (a1, b1) → Enc_{a1,b1}(x1). The ai, bi, xi are random wire labels, chosen by the generator. 20

Slide 22

Slide 22 text

Garbled Logic AND gate. The same four ciphertexts Enc_{ai,bj}(x_{i AND j}), listed in a randomly permuted order so that row position reveals nothing: this is the Garbled Table (Garbled Gate). 21
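To make the garbled table concrete, here is a minimal Python sketch (illustrative only, not taken from the lecture): wire labels are random byte strings, the "encryption" is a SHA-256-derived one-time pad, and a few zero bytes of redundancy stand in for the point-and-permute trick that real schemes use to identify the correct row.

```python
import os, random, hashlib

LABEL_BYTES = 16   # length of each random wire label

def _pad(label_a: bytes, label_b: bytes, n: int) -> bytes:
    # Toy "encryption": derive a one-time pad from the pair of input wire labels.
    return hashlib.sha256(label_a + label_b).digest()[:n]

def garble_and_gate():
    # Generator: pick random labels for both values of each wire, then encrypt the
    # correct output label under each pair of input labels (AND semantics).
    a = {v: os.urandom(LABEL_BYTES) for v in (0, 1)}
    b = {v: os.urandom(LABEL_BYTES) for v in (0, 1)}
    x = {v: os.urandom(LABEL_BYTES) for v in (0, 1)}
    table = []
    for va in (0, 1):
        for vb in (0, 1):
            plaintext = x[va & vb] + b"\x00" * 4   # redundancy so the right row is recognizable
            pad = _pad(a[va], b[vb], len(plaintext))
            table.append(bytes(p ^ q for p, q in zip(pad, plaintext)))
    random.shuffle(table)   # permute rows so position reveals nothing about the inputs
    return a, b, x, table

def evaluate_and_gate(table, label_a: bytes, label_b: bytes) -> bytes:
    # Evaluator: holding exactly one label per input wire, only one row decrypts validly.
    for row in table:
        candidate = bytes(p ^ q for p, q in zip(_pad(label_a, label_b, len(row)), row))
        if candidate.endswith(b"\x00" * 4):
            return candidate[:-4]   # the output wire label, still meaningless on its own
    raise ValueError("no row decrypted")

a, b, x, table = garble_and_gate()
assert evaluate_and_gate(table, a[1], b[1]) == x[1]   # 1 AND 1 -> label for 1
assert evaluate_and_gate(table, a[0], b[1]) == x[0]   # 0 AND 1 -> label for 0
```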

Slide 23

Slide 23 text

Yao’s GC Protocol. Alice (generator): picks random wire labels a_{0,1}, b_{0,1}, x_{0,1}; generates the garbled tables Enc_{a0,b0}(x0), Enc_{a0,b1}(x0), Enc_{a1,b0}(x0), Enc_{a1,b1}(x1); sends the tables and her input labels (a_i). Bob (evaluator): evaluates the circuit, decrypting one row of each garbled gate to obtain an output label x; decodes the output. 22

Slide 24

Slide 24 text

Yao’s GC Protocol. Alice (generator): picks random wire labels a_{0,1}, b_{0,1}, x_{0,1}; generates the garbled tables; sends the tables and her input labels (a_i). Bob (evaluator): evaluates the circuit, decrypting one row of each garbled gate to obtain an output label x; decodes the output. 23 How does Bob learn his own input wire labels?

Slide 25

Slide 25 text

Primitive: Oblivious Transfer (OT). Alice (sender) holds two messages x0, x1; Bob (receiver) holds a selector bit b. After the Oblivious Transfer Protocol, Bob learns x_b (and nothing about the other message), while Alice learns nothing about b. Rabin, 1981; Even, Goldreich, and Lempel, 1985; … 24
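The slides do not give a concrete OT construction, but the following toy sketch in the spirit of Even-Goldreich-Lempel's RSA-based 1-out-of-2 OT shows the message flow; the parameters are tiny and the scheme is illustrative only, not secure as written.

```python
import random

# Toy RSA parameters: real OT needs a proper key size, this is for illustration only.
P, Q = 104729, 104723           # small primes
N, PHI, E = P * Q, (P - 1) * (Q - 1), 65537
D = pow(E, -1, PHI)             # private exponent (Python 3.8+)

def sender_setup():
    # Sender publishes two random values x0, x1 together with the public key (N, E).
    return random.randrange(N), random.randrange(N)

def receiver_choose(x0, x1, choice_bit):
    # Receiver blinds the chosen x with a secret random k and sends v.
    k = random.randrange(N)
    v = ((x1 if choice_bit else x0) + pow(k, E, N)) % N
    return v, k

def sender_respond(x0, x1, v, m0, m1):
    # Sender cannot tell which x was blinded; it unblinds both ways and masks both messages.
    k0 = pow((v - x0) % N, D, N)
    k1 = pow((v - x1) % N, D, N)
    return (m0 + k0) % N, (m1 + k1) % N

def receiver_recover(c0, c1, k, choice_bit):
    # Only the chosen message unmasks correctly; the other stays hidden behind RSA.
    return ((c1 if choice_bit else c0) - k) % N

# The sender holds two wire labels (encoded as integers); the receiver learns exactly one.
m0, m1 = 4242, 9999
x0, x1 = sender_setup()
v, k = receiver_choose(x0, x1, choice_bit=1)
c0, c1 = sender_respond(x0, x1, v, m0, m1)
assert receiver_recover(c0, c1, k, choice_bit=1) == m1
```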

Slide 26

Slide 26 text

Chain gates to securely compute any discrete function! [Figure: garbled gates G0, G1, …, G2 chained together; each wire carries one of two random labels (a0 or a1, b0 or b1, x0 or x1), and each gate’s garbled table encrypts its output wire labels under its input wire labels, so the label produced by one gate is exactly the key needed for the next gate’s table.]

Slide 27

Slide 27 text

From Theory to Practice

Slide 28

Slide 28 text

Building Computing Systems. Digital Electronic Circuits vs. Garbled Circuits: operate on known data vs. operate on encrypted wire labels; a 32-bit logical operation requires moving some electrons a few nm vs. a one-bit AND requires four encryptions (Enc_{a0,b0}(x0), Enc_{a0,b1}(x0), …); reuse is great! vs. reuse is not allowed! 27

Slide 29

Slide 29 text

28 [Chart: estimated cost of 4T gates 2PC, compute only (bandwidth free), on a log scale from $1 to $100,000,000 over 2003-2019, with a reference line for Moore’s Law rate of improvement. Starting point: FairPlay (Malkhi, Nisan, Pinkas and Sella [USENIX Sec 2004]). Caveat: very rough data and cost estimates.]

Slide 30

Slide 30 text

29 [Chart: estimated cost of 4T gates 2PC, compute only (bandwidth free), for passive security (semi-honest), with drops marked at Free-XOR and at Pipelining + Half Gates; Moore’s Law rate of improvement shown for reference. Caveat: very rough data and cost estimates.]

Slide 31

Slide 31 text

30 [Chart: estimated cost of 4T gates 2PC, compute only (bandwidth free), adding an active security (malicious-secure) curve above the passive security (semi-honest) curve. Caveat: very rough data and cost estimates, mostly guessing for active security.]

Slide 32

Slide 32 text

MPC State-of-the-Art. Mature research area: hundreds of protocols, thousands of papers; well-established security models and proofs; many implementations and libraries; industry use. Practicality: general-purpose protocols make computation nearly free but bandwidth expensive (scales with circuit size); custom protocols overcome the bandwidth scaling cost by combining homomorphic encryption and secret sharing. 31 A Pragmatic Introduction to Secure Multi-Party Computation, Evans, Kolesnikov, Rosulek (Dec 2018), https://securecomputation.org/

Slide 33

Slide 33 text

Multi-Party Private Learning using MPC 32 Alessandro holds Dataset A and Beatrice holds Dataset B; they run an MPC Protocol whose circuit describes the training algorithm, and each receives the trained model θ.

Slide 34

Slide 34 text

Federated Learning 33

Slide 35

Slide 35 text

Federated Learning 34 Central Aggregator and Controller. 1. Server sends candidate models to local devices.

Slide 36

Slide 36 text

Federated Learning 35 Central Aggregator and Controller. 1. Server sends candidate models to local devices. 2. Local devices train models on their local data. 3. Devices send back gradient updates (for some parameters). 4. Server aggregates the updates and produces a new model.
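A minimal numpy sketch of one round of this loop (the linear model and gradient computation are illustrative stand-ins, not the lecture's setup): the server only ever sees model updates, never the clients' raw data.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_gradient(theta, X, y):
    # Client-side step: gradient of mean squared error for a linear model on local data.
    return 2 * X.T @ (X @ theta - y) / len(y)

def federated_round(theta, client_data, lr=0.1):
    # 1. Server sends the current model theta to each client.
    # 2-3. Each client computes an update on its own data and sends it back.
    updates = [local_gradient(theta, X, y) for X, y in client_data]
    # 4. Server aggregates the updates and produces the new model.
    return theta - lr * np.mean(updates, axis=0)

# Two clients with private synthetic datasets drawn from the same linear ground truth.
true_w = np.array([1.0, -2.0])
client_data = []
for _ in range(2):
    X = rng.normal(size=(100, 2))
    client_data.append((X, X @ true_w + 0.01 * rng.normal(size=100)))

theta = np.zeros(2)
for _ in range(200):
    theta = federated_round(theta, client_data)
print(theta)   # approaches [1.0, -2.0] without the server ever seeing raw data
```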

Slide 37

Slide 37 text

36 Privacy against Inference

Slide 38

Slide 38 text

Distributed Learning 37 Data Subjects Data Collection Data Owner Data Collection Model Training Trained Model Output Model Hyperparameters Output Perturbation Objective Perturbation Gradient Perturbation Distributed/Federated Learning Inference Attack

Slide 39

Slide 39 text

No Inference Protection 38 Data Subjects Data Collection Data Owner Data Collection Model Training Trained Model Deployed Model Hyperparameters User API User Distributed Learning (Federated Learning) Inference Attack

Slide 40

Slide 40 text

Inference Attack 39 Training Data Data Collection Data Collection Model Training Trained Model Deployed Model Inference Attack

Slide 41

Slide 41 text

40 https://transformer.huggingface.co/ Predictions for next text from OpenAI’s GPT-2 language model.

Slide 42

Slide 42 text

41

Slide 43

Slide 43 text

42

Slide 44

Slide 44 text

43 USENIX Security 2019

Slide 45

Slide 45 text

Limiting Inference 44 Data Collection Data Collection Model Training Trained Model Deployed Model Hyperparameters Output Perturbation Objective Perturbation Gradient Perturbation Inference Attack Local DP

Slide 46

Slide 46 text

Limiting Inference 45 Data Collection Data Collection Model Training Trained Model Deployed Model Hyperparameters Output Perturbation Objective Perturbation Gradient Perturbation Inference Attack Local DP Trust Boundary

Slide 47

Slide 47 text

Limiting Inference 46 Data Collection Data Collection Model Training Trained Model Deployed Model Hyperparameters Output Perturbation Objective Perturbation Gradient Perturbation Inference Attack Trust Boundary Preventing inference requires adding noise to the deployed model: how much noise and where to add it?

Slide 48

Slide 48 text

Differential Privacy TCC 2006

Slide 49

Slide 49 text

Differential Privacy Definition 48 A randomized mechanism M satisfies (ε)-Differential Privacy if for any two neighboring datasets D and D’: Pr[M(D) ∈ S] / Pr[M(D’) ∈ S] ≤ e^ε. “Neighboring” datasets differ in at most one entry.
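As a concrete instance of this definition, the classic Laplace mechanism for a counting query satisfies ε-DP; the sketch below is illustrative, with hypothetical names.

```python
import numpy as np

def laplace_count(data, predicate, epsilon, rng=np.random.default_rng()):
    # A counting query has sensitivity 1 (changing one record moves the count by at
    # most 1), so adding Laplace noise with scale 1/epsilon satisfies epsilon-DP.
    true_count = sum(predicate(x) for x in data)
    return true_count + rng.laplace(scale=1.0 / epsilon)

ages = [23, 35, 41, 29, 62, 58, 33]
print(laplace_count(ages, lambda a: a >= 40, epsilon=0.5))   # noisy answer near 3
```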

Slide 50

Slide 50 text

Differential Privacy Definition 49 Pr[M(D) ∈ S] / Pr[M(D’) ∈ S] ≤ e^ε [Plot: e^ε as a function of the privacy budget ε, for ε from 0 to 1.5.]

Slide 51

Slide 51 text

Definition 50 A randomized mechanism M satisfies (ε)-Differential Privacy if for any two neighboring datasets D and D’: Pr[M(D) ∈ S] / Pr[M(D’) ∈ S] ≤ e^ε. “Neighboring” datasets differ in at most one entry; the definition is symmetric, since also Pr[M(D’) ∈ S] / Pr[M(D) ∈ S] ≤ e^ε, so e^(−ε) ≤ Pr[M(D) ∈ S] / Pr[M(D’) ∈ S] ≤ e^ε.

Slide 52

Slide 52 text

51 Image taken from “Differential Privacy and Pan-Private Algorithms” slides by Cynthia Dwork. [Figure: the output distributions Pr[M(D) ∈ S] and Pr[M(D’) ∈ S] nearly overlap: Pr[M(D) ∈ S] / Pr[M(D’) ∈ S] ≤ e^ε.]

Slide 53

Slide 53 text

Definition 52 A randomized mechanism M satisfies (ε, δ)-Differential Privacy if for any two neighboring datasets D and D’: Pr[M(D) ∈ S] ≤ e^ε · Pr[M(D’) ∈ S] + δ.
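A standard mechanism satisfying this relaxed definition is the Gaussian mechanism; the sketch below uses the textbook calibration σ = sqrt(2 ln(1.25/δ)) · Δ2 / ε (valid for ε < 1), with illustrative names and example numbers.

```python
import numpy as np

def gaussian_mechanism(value, l2_sensitivity, epsilon, delta, rng=np.random.default_rng()):
    # (epsilon, delta)-DP release of a numeric query with known L2 sensitivity,
    # using the textbook calibration (valid for epsilon < 1).
    sigma = np.sqrt(2 * np.log(1.25 / delta)) * l2_sensitivity / epsilon
    return value + rng.normal(scale=sigma)

# Mean age over 1000 people with ages clipped to [0, 100]: sensitivity 100 / 1000 = 0.1.
print(gaussian_mechanism(41.7, l2_sensitivity=0.1, epsilon=0.5, delta=1e-5))
```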

Slide 54

Slide 54 text

53 Differential privacy describes a promise, made by a data holder, or curator, to a data subject: “You will not be affected, adversely or otherwise, by allowing your data to be used in any study or analysis, no matter what other studies, data sets, or information sources, are available.”

Slide 55

Slide 55 text

Limiting Inference 54 Data Collection Data Collection Model Training Trained Model Deployed Model Hyperparameters Output Perturbation Objective Perturbation Gradient Perturbation Inference Attack Trust Boundary

Slide 56

Slide 56 text

Where can we add noise? 55

Slide 57

Slide 57 text

Differential Privacy for Machine Learning. Objective Perturbation: Chaudhuri et al. (2011). Output Perturbation: Chaudhuri et al. (2011). Gradient Perturbation: Abadi et al. (2016).
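As a rough sketch of what output perturbation looks like in code (in the spirit of Chaudhuri et al.'s L2-regularized setting, assuming feature norms are clipped to at most 1; the training loop, constants, and names are illustrative, not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def train_logreg(X, y, lam, iters=2000, lr=0.5):
    # Plain L2-regularized logistic regression (labels in {-1, +1}), batch gradient descent.
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(iters):
        margins = y * (X @ w)
        grad = -(X * (y / (1 + np.exp(margins)))[:, None]).mean(axis=0) + lam * w
        w -= lr * grad
    return w

def output_perturbation(w, n, d, lam, epsilon, rng):
    # With ||x|| <= 1, the regularized minimizer has sensitivity at most 2 / (n * lam),
    # so add noise whose L2 norm is Gamma(d, 2 / (n * lam * epsilon)) in a random direction.
    direction = rng.normal(size=d)
    direction /= np.linalg.norm(direction)
    norm = rng.gamma(shape=d, scale=2.0 / (n * lam * epsilon))
    return w + norm * direction

n, d, lam, eps = 5000, 5, 1e-2, 1.0
X = rng.normal(size=(n, d))
X /= np.maximum(1.0, np.linalg.norm(X, axis=1, keepdims=True))   # clip feature norms to 1
y = np.sign(X @ np.ones(d) + 1e-9)
w_private = output_perturbation(train_logreg(X, y, lam), n, d, lam, eps, rng)
```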

Slide 58

Slide 58 text

2009 2011 2013 2015 2017 2019 [D06] [DMNS06] [CM09] [CMS11] [PRR10] [ZZXYW12] [JT13] [JT14] [WFWJN15] [HCB16] ε = 0.2 ε = 0.2 ε = 0.2 ε = 0.8 ε = 0.5 ε = 0.1 ε = 1 ε = 0.2 [WLKCJN17] ε = 0.05 Empirical Risk Minimization algorithms using ε ≤ 1 All using objective or output perturbation Simple tasks: convex learning, binary classifiers Differential Privacy introduced

Slide 59

Slide 59 text

Multi-Party Setting: Output Perturbation (Pathak et al., 2010). n data owners each run Model Training on their own partition D^(i) to get a local model θ^(i); MPC Aggregation computes θ = (1/n) Σ_i θ^(i) + η, with noise η = η^(1) ∝ 1/|D^(1)|, i.e., the noise of the smallest partition.

Slide 60

Slide 60 text

Multi-Party Output Perturbation (Bargav Jayaraman, Lingxiao Wang, David Evans and Quanquan Gu, NeurIPS 2018). n data owners each train a local model θ^(i); the noise is added within the MPC: θ = (1/n) Σ_i θ^(i) + η, with η ∝ 1/(n |D^(i)|) ~ 1/|D|, so the noise scales with the total dataset size rather than the smallest partition.

Slide 61

Slide 61 text

[Chart: KDDCup99 Dataset, classification task, with n = 1000 and n = 50000 records; * [Rajkumar and Agarwal] violates the privacy budget.]

Slide 62

Slide 62 text

Differential Privacy for Complex Learning. Pr[M(D) ∈ S] ≤ e^ε · Pr[M(D’) ∈ S] + δ. To achieve DP, we need to know the sensitivity, Δ = max over neighboring D, D’ of ‖M(D) − M(D’)‖: how much a difference in the input could impact the output.

Slide 63

Slide 63 text

Differential Privacy for Complex Learning. Pr[M(D) ∈ S] ≤ e^ε · Pr[M(D’) ∈ S] + δ. To achieve DP, we need to know the sensitivity, Δ = max over neighboring D, D’ of ‖M(D) − M(D’)‖: how much a difference in the input could impact the output.
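As a tiny worked example of sensitivity (illustrative, not from the lecture): for a mean query over a dataset of fixed size n with values clipped to a known range, replacing one record can change the output by at most (upper − lower)/n.

```python
def clipped_mean_sensitivity(n, lower, upper):
    # Replacing one record in a size-n dataset changes a clipped mean by at most
    # (upper - lower) / n: the global sensitivity of the query.
    return (upper - lower) / n

# Example: mean income over 10,000 records, each clipped to [0, 200000].
delta_f = clipped_mean_sensitivity(10_000, 0, 200_000)
print(delta_f)   # 20.0 -> e.g., Laplace noise with scale delta_f / epsilon gives epsilon-DP
```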

Slide 64

Slide 64 text

Iterative Multi-Party Gradient Perturbation (Bargav Jayaraman, Lingxiao Wang, David Evans and Quanquan Gu, NeurIPS 2018). n data owners; in each iteration, each owner i computes a local gradient ∇ℓ^(i)(θ) on its data, and the model is updated as θ = θ − α((1/n) Σ_i ∇ℓ^(i)(θ) + b), iterated for T epochs, with noise b ∝ 1/(n |D^(i)|) ~ 1/|D|. Each iteration consumes privacy budget.
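A single-machine sketch of per-iteration gradient perturbation in the DP-SGD style (clip each example's gradient, then add Gaussian noise before averaging); this omits the MPC and the paper's exact noise calibration, and all names and constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def dp_sgd_step(w, X, y, lr, clip_norm, noise_multiplier, rng):
    # Per-example gradients of squared error for a linear model.
    per_example = 2 * (X @ w - y)[:, None] * X
    # Clip each example's gradient to bound its influence (its sensitivity) ...
    norms = np.linalg.norm(per_example, axis=1, keepdims=True)
    clipped = per_example * np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    # ... then add Gaussian noise calibrated to that bound before averaging.
    noise = rng.normal(scale=noise_multiplier * clip_norm, size=w.shape)
    return w - lr * (clipped.sum(axis=0) + noise) / len(y)

X = rng.normal(size=(1000, 3))
y = X @ np.array([0.5, -1.0, 2.0])
w = np.zeros(3)
for _ in range(500):   # each iteration consumes privacy budget (see composition below)
    w = dp_sgd_step(w, X, y, lr=0.1, clip_norm=1.0, noise_multiplier=1.1, rng=rng)
print(w)   # roughly recovers [0.5, -1.0, 2.0], up to the injected noise
```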

Slide 65

Slide 65 text

Multiple Iterations 64 Composition Theorem: k executions of an ε-DP mechanism on the same data satisfies kε-DP. If Pr[M1(D) ∈ S] / Pr[M1(D’) ∈ S] ≤ e^ε and Pr[M2(D) ∈ S] / Pr[M2(D’) ∈ S] ≤ e^ε, then (Pr[M1(D) ∈ S] / Pr[M1(D’) ∈ S]) · (Pr[M2(D) ∈ S] / Pr[M2(D’) ∈ S]) ≤ e^ε · e^ε = e^(2ε).
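A back-of-the-envelope illustration of why iterative training is expensive under naive composition (numbers are made up for illustration):

```python
# Naive (sequential) composition: T executions of an epsilon-DP mechanism on the
# same data give (T * epsilon)-DP overall, so iterative gradient perturbation must
# split its total budget across iterations.
epsilon_per_iteration = 0.01
T = 100
print("total budget:", T * epsilon_per_iteration)    # 1.0

# Equivalently, to end at a target total budget, divide it across the iterations:
target_epsilon = 1.0
print("per-iteration budget:", target_epsilon / T)   # 0.01
```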

Slide 66

Slide 66 text

2009 2011 2013 2015 2017 2019 [D06] [DMNS06] [CM09] [CMS11] [PRR10] [ZZXYW12] [JT13] [JT14] [SCS13] [WFWJN15] [HCB16] ε = 0.2 ε = 0.2 ε = 0.2 ε = 0.8 ε = 0.5 ε = 0.1 ε = 1 ε = 1 ε = 0.2 [WLKCJN17] ε = 0.05 ERM Algorithms using ε ≤ 1 Complex tasks: high ε [SS15] [ZZWCWZ18] ε = 100 ε = 369,200 first Deep Learning with Differential Privacy

Slide 67

Slide 67 text

Tighter Composition Bounds 66

Slide 68

Slide 68 text

2009 2011 2013 2015 2017 2019 [D06] [DMNS06] [CM09] [CMS11] [PRR10] [ZZXYW12] [JT13] [JT14] [SCS13] [WFWJN15] [HCB16] ε = 0.2 ε = 0.2 ε = 0.2 ε = 0.8 ε = 0.5 ε = 0.1 ε = 1 ε = 1 ε = 0.2 [WLKCJN17] ε = 0.05 ERM Algorithms using ε ≤ 1 Complex tasks: high ε [SS15] [ZZWCWZ18] [JKT12] [INSTTW19] ε = 10 ε = 10 ε = 100 ε = 369,200 [BDFKR18] [HCS18] [YLPGT19] [GKN17] [ACGMMTZ16] [PAEGT16] . = / . = 0 ε = 8 ε = 8 ε = 21.5 ε = 8 Complex tasks: using relaxed DP definitions

Slide 69

Slide 69 text

2009 2011 2013 2015 2017 2019 [D06] [DMNS06] [CM09] [CMS11] [PRR10] [ZZXYW12] [JT13] [JT14] [SCS13] [WFWJN15] [HCB16] ε = 0.2 ε = 0.2 ε = 0.2 ε = 0.8 ε = 0.5 ε = 0.1 ε = 1 ε = 1 ε = 0.2 [WLKCJN17] ε = 0.05 ERM Algorithms using ε ≤ 1 Complex tasks: high ε [SS15] [ZZWCWZ18] [JKT12] [INSTTW19] ε = 10 ε = 10 ε = 100 ε = 369,200 [BDFKR18] [HCS18] [YLPGT19] [GKN17] [ACGMMTZ16] [PAEGT16] ε = 8 ε = 8 ε = 21.5 ε = 8 Complex tasks: using relaxed DP definitions [Plot: bound on distinguishing as a function of the privacy budget ε, for ε from 0 to 50.]

Slide 70

Slide 70 text

69 How much actual leakage is there with relaxed definitions?

Slide 71

Slide 71 text

70

Slide 72

Slide 72 text

Measuring Accuracy Loss 71 Accuracy Loss := 1 − (Accuracy of Private Model / Accuracy of Non-Private Model)

Slide 73

Slide 73 text

72 [Plot: Accuracy Loss vs. Privacy Budget ε, Logistic Regression on CIFAR-100.] Rényi DP has 0.1 accuracy loss at ε ≈ 10; Naïve Composition has 0.1 accuracy loss at ε ≈ 500.

Slide 74

Slide 74 text

Experimentally Measuring Leakage 73 Data Subjects Data Collection Data Owner Data Collection Model Training Trained Model Deployed Model User Inference Attack Gradient Perturbation

Slide 75

Slide 75 text

Membership Inference Attack 74 The adversary runs a Membership Inference Test on a record x against a model trained on dataset D, outputting True or False for “x ∈ D”. Privacy Leakage Measure: True Positive Rate − False Positive Rate, evaluated on a balanced set (members / non-members).
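The leakage measure is straightforward to compute from attack decisions; a small sketch with made-up numbers:

```python
import numpy as np

def privacy_leakage(attack_guesses, is_member):
    # Leakage measure from the slide: true positive rate minus false positive rate,
    # computed over a balanced evaluation set of members and non-members.
    attack_guesses = np.asarray(attack_guesses, dtype=bool)
    is_member = np.asarray(is_member, dtype=bool)
    tpr = attack_guesses[is_member].mean()
    fpr = attack_guesses[~is_member].mean()
    return tpr - fpr

# Made-up example: the attacker flags 70% of members and 40% of non-members.
guesses = np.array([True] * 70 + [False] * 30 + [True] * 40 + [False] * 60)
members = np.array([True] * 100 + [False] * 100)
print(privacy_leakage(guesses, members))   # 0.7 - 0.4 = 0.3
```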

Slide 76

Slide 76 text

How can the adversary guess membership? 75 Hint from the first lecture: [Plot: training error vs. test error (accuracy on CIFAR-10).]

Slide 77

Slide 77 text

How can the adversary guess membership? 76 [Plot: training error vs. test error (accuracy on CIFAR-10), with the Generalization Gap marked.] Overfitting: the model is “more confident” in predictions for training examples.

Slide 78

Slide 78 text

Membership Inference Attack: Shokri+ 77 Reza Shokri, Marco Stronati, Congzheng Song, Vitaly Shmatikov [S&P 2017]. Assumption: the adversary has access to similar training data. 1. Train several local models f1, f2, …, fk. Intuition: the model’s confidence score is high for members, due to overfitting on the training set.

Slide 79

Slide 79 text

Membership Inference Attack: Shokri+ 78 Reza Shokri, Marco Stronati, Congzheng Song, Vitaly Shmatikov [S&P 2017]. Assumption: the adversary has access to similar training data. 1. Train several local models f1, f2, …, fk. 2. Train a binary classifier A on the local models’ outputs to distinguish members from non-members. Intuition: the model’s confidence score is high for members, due to overfitting on the training set.
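A compressed sketch of the shadow-model idea, assuming scikit-learn is available; the synthetic data generator, model choices, and sorted-confidence features are simplifications for illustration (the paper trains one attack model per output class and uses neural networks).

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n, d=8):
    # Toy stand-in for "data similar to the target's training distribution".
    X = rng.normal(size=(n, d))
    y = (X[:, 0] + X[:, 1] > 0).astype(int) + (X[:, 2] > 0.5).astype(int)
    return X, y

# 1. Train several local "shadow" models on data the adversary controls, so it knows
#    exactly which records were members of each shadow model's training set.
attack_features, attack_labels = [], []
for _ in range(5):
    X, y = make_data(400)
    in_X, in_y, out_X = X[:200], y[:200], X[200:]
    shadow = RandomForestClassifier(n_estimators=50, random_state=0).fit(in_X, in_y)
    # Sorted confidence vectors: members tend to have one probability near 1 (overfitting).
    attack_features.append(np.sort(shadow.predict_proba(in_X), axis=1))
    attack_labels.append(np.ones(200))
    attack_features.append(np.sort(shadow.predict_proba(out_X), axis=1))
    attack_labels.append(np.zeros(200))

# 2. Train a binary attack classifier to distinguish member vs. non-member outputs;
#    at attack time it is applied to the target model's confidence vectors.
attack_model = LogisticRegression(max_iter=1000).fit(
    np.vstack(attack_features), np.concatenate(attack_labels))
print(attack_model.score(np.vstack(attack_features), np.concatenate(attack_labels)))
```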

Slide 80

Slide 80 text

Membership Inference Attack: Yeom+ 79 Samuel Yeom, Irene Giacomelli, Matt Fredrikson, Somesh Jha [CSF 2018]. Assumption: the adversary knows the expected training loss of the target model, τ = (1/n) Σ_{i=1..n} ℓ_θ(x_i). Attack: at inference, given a record x, the attacker classifies it as a member if ℓ(x) ≤ τ. Intuition: the sample loss of a training instance is lower than that of a non-member, due to the generalization gap.
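The Yeom et al. test reduces to a threshold on per-example loss; a toy sketch with made-up loss values:

```python
import numpy as np

def yeom_membership_test(losses, expected_training_loss):
    # Classify a record as a member if its loss is at most the expected training loss.
    return losses <= expected_training_loss

# Made-up per-example losses: members tend to have lower loss (generalization gap).
member_losses = np.array([0.02, 0.02, 0.01, 0.03])
nonmember_losses = np.array([0.30, 0.12, 0.45, 0.015])
tau = member_losses.mean()                      # threshold assumed known to the attacker
print(yeom_membership_test(member_losses, tau))     # [ True  True  True False]
print(yeom_membership_test(nonmember_losses, tau))  # [False False False  True]
```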

Slide 81

Slide 81 text

Attribute Inference Attack 80 The adversary is given a partial record [x1, x2, ?, …, xn] of some x ∈ D (the training set) and must predict the value of the unknown (private) attribute; this can be done by running a membership inference test on each candidate completion.

Slide 82

Slide 82 text

81 [Plot: Privacy Leakage vs. Privacy Budget ε, Logistic Regression on CIFAR-100; curves for the Theoretical Guarantee, RDP, NC, and zCDP.] Membership Inference Attack (Yeom): RDP has ~0.06 leakage at ε = 10; NC has ~0.06 leakage at ε = 500.

Slide 83

Slide 83 text

82 [Plot: Privacy Leakage vs. Privacy Budget ε, Logistic Regression on CIFAR-100; Theoretical Guarantee shown; PPV = 0.55.] Positive Predictive Value = (number of true positives) / (number of positive predictions). The non-private model has 0.12 leakage with 0.56 PPV.
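PPV is computed directly from the attack's positive predictions; a minimal helper (illustrative):

```python
def positive_predictive_value(attack_guesses, is_member):
    # PPV = number of true positives / number of positive predictions.
    true_positives = sum(g and m for g, m in zip(attack_guesses, is_member))
    positive_predictions = sum(attack_guesses)
    return true_positives / positive_predictions

# On a balanced set, a PPV near 0.5 means a positive prediction is barely more
# informative than a coin flip, even when TPR - FPR is nonzero.
print(positive_predictive_value([True, True, True, False], [True, True, False, False]))  # 2/3
```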

Slide 84

Slide 84 text

Neural Networks 83 NN has 103,936 trainable parameters

Slide 85

Slide 85 text

84 [Plot: Accuracy Loss vs. Privacy Budget ε, NN on CIFAR-100.] Rényi DP has ~0.5 accuracy loss at ε ≈ 10; Naïve Composition has ~0.5 accuracy loss at ε = 500.

Slide 86

Slide 86 text

85 [Plot: NN on CIFAR-100; curves for the Theoretical Guarantee, RDP, NC, and zCDP; PPV = 0.74, PPV = 0.71.] The non-private model has 0.72 leakage with 0.94 PPV.

Slide 87

Slide 87 text

86 Who is actually exposed?

Slide 88

Slide 88 text

87

Slide 89

Slide 89 text

88

Slide 90

Slide 90 text

89 NN on CIFAR-100 Huge gap between theoretical guarantees and measured attacks Sacrifice accuracy for privacy

Slide 91

Slide 91 text

Open Problems Close gap between theory and meaningful privacy: - Tighter theoretical bounds - Better attacks - Theory for non-worst-case What properties put a record at risk of exposure? Understanding tradeoffs between model capacity and privacy 90

Slide 92

Slide 92 text

University of Virginia Charlottesville, Virginia USA 91 Image: cc Eric T Gunther

Slide 93

Slide 93 text

92 Thomas Jefferson

Slide 94

Slide 94 text

93 Thomas Jefferson

Slide 95

Slide 95 text

94

Slide 96

Slide 96 text

Other Security Faculty at the University of Virginia 95 Yonghwi Kwon Systems security Cyberforensics Yuan Tian IoT Security ML Security and Privacy Yixin Sun [Joining Jan 2020] Network Security & Privacy Mohammad Mahmoody Theoretical Cryptography David Wu Applied Cryptography Collaborators in Machine Learning, Computer Vision, Natural Language Processing, Software Engineering

Slide 97

Slide 97 text

Visit Opportunities PhD Student Post-Doc Year/Semester/Summer Undergraduate, Graduate, Faculty 96 Please contact me if you are interested, even if in another area

Slide 98

Slide 98 text

97

Slide 99

Slide 99 text

David Evans University of Virginia [email protected] EvadeML.org Thank you!