Luca Corbucci
June 02, 2024

# DevFest Pisa 2024 - Is Your Model Private?

## Transcript

2. ### Why should we care about privacy when training ML models?

i.e. What could possibly go wrong?

4. ### What’s the color of the cat?

In a Membership Inference Attack, an attacker wants to know whether a sample was used to train the model.
5. ### What’s the color of the cat?

[Bar charts of the model’s class confidences for the queried images]
6. ### What’s the color of the cat?

[Bar charts of the model’s class confidences] The model will be more confident when we query it with the image that was in the training dataset.
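The intuition on these slides can be sketched as a toy confidence-threshold attack. This is a simplification (the original Membership Inference work trains shadow models), and every confidence value below is made up for illustration:

```python
# Toy confidence-threshold membership inference attack: members of the
# training set tend to receive unusually high confidence from an overfit model.

def attack(confidence, threshold=0.9):
    """Guess 'training member' when the model's top-class confidence
    is at least the threshold."""
    return confidence >= threshold

# Hypothetical confidences an overfit model might assign.
member_scores     = [0.99, 0.97, 0.95, 0.92]   # images seen during training
non_member_scores = [0.61, 0.55, 0.70, 0.48]   # unseen images

hits = sum(attack(c) for c in member_scores) \
     + sum(not attack(c) for c in non_member_scores)
accuracy = hits / (len(member_scores) + len(non_member_scores))
print(accuracy)  # 1.0 on this toy data
```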

17. ### Differential Privacy (An intuition using databases)

Suppose you have two databases that differ in a single instance.
18. ### Differential Privacy (An intuition using databases)

You query both of them and you get two different results.
19. ### Differential Privacy (An intuition using databases)

You query both of them and you get two different results. From the difference, you can infer something about the missing instance.
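The differencing attack sketched on these slides takes only a few lines of Python. The database contents and names here are made up for illustration:

```python
# Two databases that differ in exactly one record (carol).
db_with    = [("alice", "Diabetes"), ("bob", "Flu"), ("carol", "Diabetes")]
db_without = [("alice", "Diabetes"), ("bob", "Flu")]

def count_diabetes(db):
    """Exact counting query: how many records have Diabetes?"""
    return sum(1 for _, disease in db if disease == "Diabetes")

# The exact answers differ by one, so comparing them reveals
# the diagnosis of the missing person.
leak = count_diabetes(db_with) - count_diabetes(db_without)
print(leak)  # 1 -> the missing person has Diabetes
```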
20. ### Differential Privacy (An intuition using databases)

Differential Privacy allows you to query the databases while adding some randomisation to the answer. You will get (more or less) the same output regardless of the presence of one sample.
21. ### Differential Privacy (A slightly more advanced definition)

Given two databases D and D′ which differ in only one instance: P[A(D) = O] ≤ e^ε · P[A(D′) = O]
22. ### P[A(D) = O] ≤ e^ε · P[A(D′) = O]

How to interpret the ε: given two databases D and D′ which differ in only one instance, e^ε tells us how similar these two probabilities are. ε is called the “privacy budget” and represents an upper bound on how much information we can leak.
23. ### Differential Privacy (A more relaxed definition)

Given two databases D and D′ which differ in only one instance: P[A(D) = O] ≤ e^ε · P[A(D′) = O] + δ. The parameter δ quantifies the probability that something goes wrong: the algorithm will be differentially private with probability 1 − δ.
24. ### Essentially, instead of returning the real output of the query, we return a noisy output.
25. ### Example

We want to query our database to know how many patients have Diabetes:

    >>> df[df["Disease"] == "Diabetes"].shape[0]
    98
26. ### Example

We want to query our database to know how many patients have Diabetes:

    >>> df[df["Disease"] == "Diabetes"].shape[0] + rnd.laplace(loc=0, scale=sensitivity/eps_1)
28. ### Example

We want to query our database to know how many patients have Diabetes:

    >>> df[df["Disease"] == "Diabetes"].shape[0] + rnd.laplace(loc=0, scale=sensitivity/eps_1)
    97.19888273257044
29. ### Example

We want to query our database to know how many patients have Diabetes:

    >>> df[df["Disease"] == "Diabetes"].shape[0] + rnd.laplace(loc=0, scale=sensitivity/eps_1)
    97.19888273257044
    >>> df[df["Disease"] == "Diabetes"].shape[0] + rnd.laplace(loc=0, scale=sensitivity/eps_2)
    94.0943263602294
30. ### Example

We want to query our database to know how many patients have Diabetes:

    >>> df[df["Disease"] == "Diabetes"].shape[0] + rnd.laplace(loc=0, scale=sensitivity/eps_1)
    97.19888273257044
    >>> df[df["Disease"] == "Diabetes"].shape[0] + rnd.laplace(loc=0, scale=sensitivity/eps_2)
    94.0943263602294

What’s the privacy cost here?
31. ### Example

We want to query our database to know how many patients have Diabetes:

    >>> df[df["Disease"] == "Diabetes"].shape[0] + rnd.laplace(loc=0, scale=sensitivity/eps_1)
    97.19888273257044
    >>> df[df["Disease"] == "Diabetes"].shape[0] + rnd.laplace(loc=0, scale=sensitivity/eps_2)
    94.0943263602294
    >>> int(df[df["Disease"] == "Diabetes"].shape[0] + rnd.laplace(loc=0, scale=sensitivity/eps_1))
    95

Am I removing DP when I round the result?
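Both questions on these slides have standard answers: by sequential composition, the total cost of answering the two queries is eps_1 + eps_2, and rounding is post-processing, which never weakens a DP guarantee. A runnable sketch with numpy, using hypothetical budget values since the slides don’t give eps_1 and eps_2:

```python
import numpy as np

rnd = np.random.default_rng(0)

true_count = 98          # exact number of Diabetes patients, as on the slide
sensitivity = 1          # adding/removing one patient changes a count by at most 1
eps_1, eps_2 = 0.5, 0.5  # hypothetical budgets for the two queries

# Laplace mechanism: add noise with scale sensitivity / epsilon.
noisy_1 = true_count + rnd.laplace(loc=0, scale=sensitivity / eps_1)
noisy_2 = true_count + rnd.laplace(loc=0, scale=sensitivity / eps_2)

# Sequential composition: releasing both answers spends eps_1 + eps_2.
total_budget = eps_1 + eps_2

# int(...) is post-processing on the noisy output, so the guarantee survives.
print(int(noisy_1), int(noisy_2), total_budget)
```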

36. ### Differential Privacy

P[A(D) = O] ≤ e^ε · P[A(D′) = O]: the outputs of the two neural networks will be similar regardless of the presence of a given sample in the dataset.
37. ### SGD

    def sgd():
        for each batch L_t:
            for each sample x_i in the batch:
                g_t(x_i) = compute_gradient(M, x_i)
            g_t = average of gradients
            M = M - lr * g_t
        return M
38. ### SGD vs DP-SGD

    def sgd():
        for each batch L_t:
            for each sample x_i in the batch:
                g_t(x_i) = compute_gradient(M, x_i)
            g_t = average of gradients
            M = M - lr * g_t
        return M
39. ### SGD vs DP-SGD

    def dp_sgd():
        for each batch L_t:
            for each sample x_i in the batch:
                g_t(x_i) = compute_gradient(M, x_i)
                g_t(x_i) = clip_gradient()
            g_t = average of clipped gradients + Noise
            M = M - lr * g_t
        return M
40. ### SGD vs DP-SGD

    def dp_sgd():
        for each batch L_t:
            for each sample x_i in the batch:
                g_t(x_i) = compute_gradient(M, x_i)
                g_t(x_i) = clip_gradient(C)
            g_t = average of clipped gradients + Noise
            M = M - lr * g_t
        return M

The Noise can be Gaussian: 𝒩(0, σ²C²I).
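The DP-SGD pseudocode above can be sketched concretely with numpy. This is a minimal illustration, not Abadi et al.’s full algorithm (no subsampling or privacy accounting), and `grad_fn` is a user-supplied per-sample gradient function:

```python
import numpy as np

def dp_sgd_step(M, batch, grad_fn, lr=0.1, C=1.0, sigma=1.0, rng=None):
    """One DP-SGD step: clip each per-sample gradient to L2 norm C,
    average, add Gaussian noise N(0, sigma^2 C^2 I) to the sum,
    then take a gradient step."""
    rng = rng or np.random.default_rng()
    clipped = []
    for x in batch:
        g = grad_fn(M, x)                          # per-sample gradient
        g = g / max(1.0, np.linalg.norm(g) / C)    # clip to norm at most C
        clipped.append(g)
    g_avg = np.mean(clipped, axis=0)
    noise = rng.normal(0.0, sigma * C, size=g_avg.shape) / len(batch)
    return M - lr * (g_avg + noise)
```

With sigma = 0 and a generous clipping norm C this reduces to plain SGD on the batch, which is a quick sanity check; libraries like Opacus do the per-sample clipping and accounting for you.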

43. ### Differentially Private NNs are just a wrapper away*

*if you carefully choose your privacy parameters

    model, optimizer, train_loader = privacy_engine.make_private_with_epsilon(
        module=model,                 # the model you want to train with DP
        optimizer=optimizer,
        data_loader=train_loader,
        epochs=EPOCHS,
        target_epsilon=EPSILON,       # privacy budget
        target_delta=DELTA,
        max_grad_norm=MAX_GRAD_NORM,  # clipping value
    )
44. ### A few notes on the privacy parameters

Choosing the ε is a tradeoff between the utility of the model and the privacy we want to guarantee.
45. ### A few notes on the privacy parameters

Choosing the ε is a tradeoff between the utility of the model and the privacy we want to guarantee. If we set a low ε, we will need to introduce a lot of noise during the training.
46. ### A few notes on the privacy parameters

Choosing the ε is a tradeoff between the utility of the model and the privacy we want to guarantee. If we set a low ε, we will need to introduce a lot of noise during the training. This will degrade the model’s performance!
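The tradeoff on these slides is easy to see numerically for the Laplace mechanism from the earlier example, where the noise scale is sensitivity / ε:

```python
import math

sensitivity = 1.0
for eps in (0.1, 1.0, 10.0):
    b = sensitivity / eps       # Laplace noise scale
    std = b * math.sqrt(2)      # standard deviation of Laplace(0, b)
    print(f"eps={eps:5.1f}  noise std={std:7.2f}")
```

A tenfold smaller ε means a tenfold larger noise standard deviation, which is exactly why aggressive privacy budgets hurt utility.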

48. ### References

1) Evaluating and Testing Unintended Memorization in Neural Networks https://bair.berkeley.edu/blog/2019/08/13/memorization/
2) Scalable Extraction of Training Data from (Production) Language Models https://arxiv.org/pdf/2311.17035
3) Membership Inference Attacks against Machine Learning Models https://arxiv.org/abs/1610.05820
4) A friendly, non-technical introduction to differential privacy https://desfontain.es/blog/friendly-intro-to-differential-privacy.html
5) Deep Learning with Differential Privacy https://arxiv.org/abs/1607.00133
6) Opacus https://opacus.ai/
7) Tensorflow Privacy https://github.com/tensorflow/privacy
49. ### References

8) A list of real-world uses of differential privacy https://desfontain.es/blog/real-world-differential-privacy.html
9) Improving Gboard language models via private federated analytics https://research.google/blog/improving-gboard-language-models-via-private-federated-analytics/
10) Learning with Privacy at Scale https://docs-assets.developer.apple.com/ml-research/papers/learning-with-privacy-at-scale.pdf