Blackbox / Whitebox Access
Can the adversary gain any information about the training data?
- Membership of a record?
- Value of a sensitive attribute?
- Recurring patterns in the data set?
- Latent statistics of the data set?
A mechanism M satisfies (ϵ, δ)-differential privacy if, for any two neighboring datasets D and D′ and any set of outputs S:
Pr[M(D) ∈ S] ≤ e^ϵ · Pr[M(D′) ∈ S] + δ
*Image taken from “Differential Privacy and Pan-Private Algorithms” slides by Cynthia Dwork
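As a standard illustration (not from these slides), the Laplace mechanism satisfies pure ϵ-differential privacy by adding noise scaled to the query's sensitivity; a minimal sketch, with illustrative function and parameter names:

```python
import numpy as np

def laplace_mechanism(true_answer, sensitivity, epsilon, rng=None):
    """Release a numeric query answer with epsilon-DP by adding Laplace noise.

    The noise scale is sensitivity / epsilon, where sensitivity bounds how much
    the true answer can change between neighboring datasets.
    """
    rng = rng or np.random.default_rng()
    return true_answer + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

# Example: privately release a count query (sensitivity 1) with epsilon = 0.5.
private_count = laplace_mechanism(true_answer=42, sensitivity=1.0, epsilon=0.5)
```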
Membership Inference Attack [Yeom et al. (2018)]
Attacker has: the model M and the average training loss L = (1/|D|) ∑_{i=1}^{|D|} ℓ(d_i).
At inference, given a record d, the attacker classifies it as a member if ℓ(d) ≤ L.
Key Intuition: the sample loss of a training instance is lower than that of a non-member, due to the generalization gap.
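A minimal sketch of this loss-threshold attack, assuming the attacker can already compute per-example losses under M; the helper name and toy loss values below are illustrative:

```python
def loss_threshold_membership(example_loss: float, avg_train_loss: float) -> bool:
    """Loss-threshold attack: predict 'member' iff the example's loss
    is at most the average training loss L."""
    return example_loss <= avg_train_loss

# Toy usage with made-up losses (the attack only needs losses, not model internals).
avg_train_loss = 0.35                                   # L, averaged over the training set
print(loss_threshold_membership(0.21, avg_train_loss))  # True  -> classified as member
print(loss_threshold_membership(1.40, avg_train_loss))  # False -> classified as non-member
```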
Evaluation: CIFAR-100 and Purchase-100 data sets; measure model utility and privacy leakage.
Utility: accuracy loss w.r.t. the non-private model.
Privacy leakage: attack advantage = TPR − FPR.
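For concreteness, the attack advantage can be computed from the attack's decisions on known members and non-members; a minimal sketch with illustrative names:

```python
import numpy as np

def attack_advantage(member_decisions, nonmember_decisions):
    """Attack advantage = TPR - FPR.

    member_decisions:    boolean attack outputs on true members (True = 'member')
    nonmember_decisions: boolean attack outputs on true non-members
    """
    tpr = np.mean(member_decisions)      # fraction of members correctly flagged
    fpr = np.mean(nonmember_decisions)   # fraction of non-members wrongly flagged
    return tpr - fpr

# Example: TPR = 4/5 and FPR = 2/5 give an advantage of 0.4.
print(attack_advantage([True, True, True, True, False],
                       [False, False, False, True, True]))
```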
Relaxed variants of differential privacy may leak more.
For complex learning tasks, leakage increases as utility increases.
For simple tasks, the existing attacks do not seem to be effective.
Future Directions:
- Protection against property inference attacks
- Exploring stronger adversaries with more background knowledge
Attribute Inference Attack [Yeom et al. (2018)]
Attacker has: the model M and the average training loss L = (1/|D|) ∑_{i=1}^{|D|} ℓ(d_i).
At inference, given a record d with an unknown sensitive attribute, the attacker plugs in different values of the sensitive attribute and outputs the value for which Pr(ℓ(d), L), the probability of the observed loss under the training-loss distribution, is maximum.
Key Intuition: the sample loss of a training instance with the correct value of the sensitive attribute has the maximum probability estimate.
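A minimal sketch of this attack. The slide does not specify how Pr(ℓ(d), L) is estimated, so the exponential loss model below (with mean L) is purely an illustrative assumption, as are candidate_values, loss_fn, and the record layout:

```python
import math

def attribute_inference(record, candidate_values, loss_fn, avg_train_loss):
    """Plug in each candidate value of the sensitive attribute and return the one
    whose loss is most probable under the (assumed) training-loss distribution."""
    def loss_density(loss):
        # Illustrative assumption: member losses ~ Exponential(mean = avg_train_loss).
        return math.exp(-loss / avg_train_loss) / avg_train_loss

    best_value, best_density = None, -1.0
    for value in candidate_values:
        guess = dict(record, sensitive=value)          # candidate completion of d
        density = loss_density(loss_fn(guess))
        if density > best_density:
            best_value, best_density = value, density
    return best_value

# Toy usage: a dummy loss function whose loss is lowest when sensitive == 1.
toy_loss = lambda r: 0.2 if r["sensitive"] == 1 else 1.5
print(attribute_inference({"x": 0.7}, [0, 1], toy_loss, avg_train_loss=0.35))  # -> 1
```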
Relaxed definitions of differential privacy:
- Concentrated DP [Dwork et al. (2016)]: “Privacy Loss RV is sub-Gaussian”; D_subG(M(D) || M(D′)) ≤ (μ, τ)
- Zero Concentrated DP [Bun & Steinke (2016)]: “Privacy Loss RV is strictly distributed around zero mean”; D_α(M(D) || M(D′)) ≤ ζ + ρα, ∀α ∈ (1, ∞)
- Renyi DP [Mironov (2017)]: “Renyi divergence of Privacy Loss RV is bounded”; D_α(M(D) || M(D′)) ≤ ϵ
- Moments Accountant [Abadi et al. (2016)]: “Higher-order moments of Privacy Loss RV are bounded”; λ D_{λ+1}(M(D) || M(D′)) ≤ α_M(λ)
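For reference (a standard definition, not from the slides), the order-α Rényi divergence appearing in the bounds above is:

```latex
% Rényi divergence of order \alpha > 1 between distributions P and Q;
% as \alpha \to \infty it recovers the max divergence used in pure \epsilon-DP.
D_{\alpha}(P \,\|\, Q)
  = \frac{1}{\alpha - 1}
    \log \mathbb{E}_{x \sim Q}\!\left[\left(\frac{P(x)}{Q(x)}\right)^{\alpha}\right]
```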