FOSAD Trustworthy Machine Learning, Class 3: Privacy

David Evans
August 28, 2019

19th International School on Foundations of Security Analysis and Design
Mini-course on "Trustworthy Machine Learning"
https://jeffersonswheel.org/fosad2019

Transcript

  1. Trustworthy
    Machine
    Learning
    David Evans
    University of Virginia
    jeffersonswheel.org
    Bertinoro, Italy
    26 August 2019
    19th International School on Foundations of Security Analysis and Design
    3: Privacy

  2. Course Overview
    Monday
    Introduction / Attacks
    Tuesday
    Defenses
    Today
    Privacy
    1

  3. Machine Learning Pipeline
    2
    [Pipeline diagram: Data Subjects → Data Collection → Data Owner → Model Training (with Hyperparameters) → Trained Model → Deployed Model → User, hosted by a Machine Learning Service]

  4. Potential Privacy Goals
    3
    [Pipeline diagram, annotated with the privacy goal: Data Subject Privacy. Users access the Deployed Model through an API.]

  5. Potential Privacy Goals
    4
    [Pipeline diagram, annotated with: Data Subject Privacy; Distributed (Federated) Learning]

  6. 5
    [Pipeline diagram, annotated with: Data Subject Privacy; Distributed (Federated) Learning; Inference Attack]

  7. 6
    [Same pipeline diagram as the previous slide]

  8. 7
    [Pipeline diagram, annotated with: Data Subject Privacy; Distributed (Federated) Learning; Inference Attack; Model Stealing Attack]

  9. 8
    [Pipeline diagram, annotated with: Data Subject Privacy; Distributed (Federated) Learning; Inference Attack; Model Stealing Attack; Hyperparameter Stealing Attack]

  10. 9
    [Pipeline diagram, annotated with: Data Subject Privacy; Distributed (Federated) Learning; Inference Attack; Model Stealing Attack; Hyperparameter Stealing Attack]
    Note: only considering confidentiality; lots of integrity attacks also (poisoning, evasion, …)

  11. Privacy Mechanisms: Encryption
    10
    [Pipeline diagram, annotated with the mechanisms: Randomized Response / Local Differential Privacy (at data collection); Output Perturbation, Objective Perturbation, Gradient Perturbation (at model training); Distributed Learning (Federated Learning)]

  12. Privacy Mechanisms: Encryption
    11
    [Pipeline diagram, annotated with the mechanisms: Randomized Response / Local Differential Privacy; Output, Objective, and Gradient Perturbation; Distributed Learning (Federated Learning); Oblivious Model Execution (at the deployed model)]

  13. Privacy Mechanisms: Noise
    12
    [Pipeline diagram, annotated with the noise-based mechanisms: Randomized Response / Local Differential Privacy (at data collection); Output Perturbation, Objective Perturbation, Gradient Perturbation (at model training)]

  14. Mechanisms Overview
    Noise:
      Local Differential Privacy, Randomized Response: prevent subject data exposure
      Differential Privacy (during/after model learning): prevent training data inference
    Encryption:
      Secure Multi-Party Computation, Homomorphic Encryption, Hybrid Protocols:
      prevent training data exposure; prevent model/input exposure
    13
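    A minimal sketch of randomized response for one sensitive bit, to make the first noise mechanism concrete (the function names and the flip probability are illustrative, not from the deck): each subject flips their true answer with probability p, which gives ε-local differential privacy with ε = ln((1 − p)/p), and the aggregator debiases the noisy counts.

    import random

    def randomized_response(true_bit, p_flip):
        # Report the true bit with probability 1 - p_flip, the flipped bit otherwise.
        # Satisfies epsilon-local DP with epsilon = ln((1 - p_flip) / p_flip).
        return true_bit if random.random() > p_flip else 1 - true_bit

    def debias(reports, p_flip):
        # Unbiased estimate of the true fraction of 1s from the noisy reports.
        observed = sum(reports) / len(reports)
        return (observed - p_flip) / (1 - 2 * p_flip)

    # Example: 30% of subjects hold the sensitive attribute.
    true_bits = [1] * 300 + [0] * 700
    reports = [randomized_response(b, 0.25) for b in true_bits]
    print(debias(reports, 0.25))   # close to 0.30 on average, epsilon = ln(3)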

  15. Secure Two-Party Computation
    Can Alice and Bob compute a function on private data, without exposing
    anything about their data besides the result?
    y = f(a, b)
    Alice’s Secret Input: a    Bob’s Secret Input: b
    14

  16. Secure Two-Party Computation
    Can Alice and Bob compute a function on private data, without exposing
    anything about their data besides the result?
    y = f(a, b)
    Alice’s Secret Input: a    Bob’s Secret Input: b
    “private”
    and
    “correct”
    15

  17. Secure Computation Protocol
    Alice (circuit generator) Bob (circuit evaluator)
    Secure Computation
    Protocol
    secret input a    secret input b
    Agree on function f
    y = f(a, b)    y = f(a, b)
    Learns nothing else about b    Learns nothing else about a
    16
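    To make the goal concrete, here is a deliberately insecure "ideal functionality" sketch in Python: a trusted third party computes y = f(a, b) and reveals only y. A secure computation protocol must give each party exactly this view, but without the trusted party (the function and the values are illustrative, not from the deck).

    def ideal_functionality(f, alice_input, bob_input):
        # A trusted party receives both secret inputs and reveals only the output.
        y = f(alice_input, bob_input)
        return y, y   # (what Alice learns, what Bob learns)

    # Yao's millionaires' problem: who is richer, without revealing the amounts?
    richer = lambda a, b: a > b
    alice_view, bob_view = ideal_functionality(richer, 7_000_000, 9_500_000)
    print(alice_view, bob_view)   # both learn only the single bit: False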

  18. FOCS 1982
    FOCS 1986
    Note: neither paper actually
    describes “Yao’s protocol”
    Andrew Yao
    (Turing Award 2000)
    17

  19. Regular Logic
    AND gate: inputs a, b; output z
    a  b  z
    0  0  0
    0  1  0
    1  0  0
    1  1  1
    18

  20. “Obfuscated” Logic
    AND gate: inputs a, b; output z
    a    b    z
    a0   b0   c0
    a0   b1   c0
    a1   b0   c0
    a1   b1   c1
    a_i, b_i, c_i are random values, chosen by generator but meaningless to evaluator.
    19

  21. Garbled Logic
    AND gate: inputs a, b; output z
    a    b    z
    a0   b0   Enc_{a0,b0}(c0)
    a0   b1   Enc_{a0,b1}(c0)
    a1   b0   Enc_{a1,b0}(c0)
    a1   b1   Enc_{a1,b1}(c1)
    a_i, b_i, c_i are random wire labels, chosen by generator
    20

  22. Garbled Logic
    AND gate: inputs a, b; output z
    Garbled Table (Garbled Gate): the four ciphertexts
    Enc_{a0,b0}(c0), Enc_{a0,b1}(c0), Enc_{a1,b0}(c0), Enc_{a1,b1}(c1),
    stored in a randomly permuted order so the evaluator cannot tell which row is which.
    21
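    A toy sketch of garbling and evaluating one AND gate, assuming SHA-256 as a stand-in for a proper encryption scheme and a plain row shuffle instead of point-and-permute; the validity check also cheats by letting the evaluator see both output labels, so this only illustrates the wire-label idea, not a secure construction.

    import hashlib, secrets, random

    LABEL_BYTES = 16

    def new_wire():
        # One random label for each possible wire value (0 and 1).
        return [secrets.token_bytes(LABEL_BYTES) for _ in range(2)]

    def enc(key_a, key_b, block):
        # Toy cipher: XOR the block with a hash of the two input labels.
        pad = hashlib.sha256(key_a + key_b).digest()[:LABEL_BYTES]
        return bytes(x ^ y for x, y in zip(pad, block))

    def garble_and(wire_a, wire_b, wire_z):
        rows = [enc(wire_a[va], wire_b[vb], wire_z[va & vb])
                for va in (0, 1) for vb in (0, 1)]
        random.shuffle(rows)   # hide which row corresponds to which input pair
        return rows

    def evaluate(rows, label_a, label_b, wire_z):
        # The evaluator can decrypt exactly one row into a valid output label.
        for row in rows:
            candidate = enc(label_a, label_b, row)   # XOR cipher: enc == dec
            if candidate in wire_z:
                return candidate
        raise ValueError("no row decrypted")

    a, b, z = new_wire(), new_wire(), new_wire()
    table = garble_and(a, b, z)
    print(evaluate(table, a[1], b[0], z) == z[0])   # True: 1 AND 0 = 0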

  23. Yao’s GC Protocol
    Alice (generator)
    Picks random values for a_{0,1}, b_{0,1}, c_{0,1}
    Generates garbled tables: Enc_{a0,b0}(c0), Enc_{a0,b1}(c0), Enc_{a1,b0}(c0), Enc_{a1,b1}(c1)
    Sends tables, her input labels (a_x)
    Bob (evaluator)
    Evaluates circuit, decrypting one row of each garbled gate
    Decodes output
    22

  24. Yao’s GC Protocol
    Alice (generator)
    Picks random values for a_{0,1}, b_{0,1}, c_{0,1}
    Generates garbled tables: Enc_{a0,b0}(c0), Enc_{a0,b1}(c0), Enc_{a1,b0}(c0), Enc_{a1,b1}(c1)
    Sends tables, her input labels (a_x)
    Bob (evaluator)
    Evaluates circuit, decrypting one row of each garbled gate
    Decodes output
    23
    How does Bob learn his own input wire labels?

  25. Primitive: Oblivious Transfer (OT)
    Alice (sender)    Bob (receiver)
    Oblivious Transfer Protocol
    Alice’s inputs: x_0, x_1    Bob’s input: selector b
    Bob obtains x_b; Alice learns nothing about b
    Rabin, 1981; Even, Goldreich, and Lempel, 1985; …
    24

  26. G0
    G1
    G2
    Chain gates to securely compute any discrete function!
    [Diagram: the output wire labels of gates G0 and G1 (each either the 0-label or the 1-label) become the input labels of gate G2; every gate’s garbled table encrypts its output wire labels under the corresponding pair of input wire labels.]

  27. From Theory
    to Practice

  28. Building Computing Systems
    Digital Electronic Circuits  |  Garbled Circuits
    Operate on known data  |  Operate on encrypted wire labels
    32-bit logical operation requires moving some electrons a few nm  |  One-bit AND requires four encryptions
    Reuse is great!  |  Reuse is not allowed!

    27

  29. 28
    [Chart, 2003–2019: estimated cost of 4T gates 2PC, compute only (bandwidth free), on a log scale from $1 to $100,000,000, shown against the Moore’s Law rate of improvement. Caveat: very rough data and cost estimates.]
    FairPlay (Malkhi, Nisan, Pinkas and Sella [USENIX Sec 2004])

  30. 29
    [Chart, 2003–2019: estimated cost of 4T gates 2PC with passive security (semi-honest), compute only (bandwidth free), falling with protocol improvements such as Free-XOR, pipelining, and half gates, shown against the Moore’s Law rate of improvement. Caveat: very rough data and cost estimates.]

  31. [Chart, 2003–2019: estimated cost of 4T gates 2PC, compute only (bandwidth free), for both passive security (semi-honest) and active security (malicious-secure), including the Free-XOR, pipelining, and half-gates improvements. Caveat: very rough data and cost estimates, mostly guessing for active security.]
    30

  32. MPC State-of-the-Art
    Mature research area
    hundreds of protocols, thousands of papers
    well-established security models, proofs
    many implementations, libraries; industry use
    Practicality
    General-purpose protocols
    computation nearly free
    bandwidth expensive: scales with circuit size
    Custom protocols
    overcome bandwidth scaling cost
    combine homomorphic encryption, secret sharing
    31
    https://securecomputation.org/
    Pragmatic Introduction to Secure MPC
    Evans, Kolesnikov, Rosulek (Dec 2018)

  33. Multi-Party Private Learning using MPC
    32
    [Diagram: Alessandro holds Dataset A, Beatrice holds Dataset B; they run an MPC Protocol whose circuit describes the training algorithm, and each learns only the trained model]

  34. Federated Learning
    33

  35. Federated Learning
    34
    Central Aggregator and Controller
    [Diagram: the server holds the candidate model; local devices hold their own data]
    1. Server sends candidate models to local devices

  36. Federated Learning
    35
    Central Aggregator and Controller
    [Diagram: the server and local devices exchange model parameters and gradient updates]
    1. Server sends candidate models to local devices
    2. Local devices train models on their local data
    3. Devices send back gradient updates (for some parameters)
    4. Server aggregates updates, produces new model
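    A minimal sketch of steps 1–4 as federated averaging for linear regression with numpy; the model, learning rate, and plain averaging rule are illustrative assumptions, not the exact algorithm pictured on the slide.

    import numpy as np

    def local_update(global_w, local_data, lr=0.1):
        # Steps 2-3: each device computes a gradient on its own data and
        # returns only the parameter update, never the raw data.
        X, y = local_data
        grad = X.T @ (X @ global_w - y) / len(y)
        return -lr * grad

    def server_round(global_w, devices):
        # Steps 1 and 4: send the model out, then aggregate the updates.
        updates = [local_update(global_w, d) for d in devices]
        return global_w + np.mean(updates, axis=0)

    rng = np.random.default_rng(0)
    true_w = np.array([2.0, -1.0])
    devices = []
    for _ in range(5):
        X = rng.normal(size=(100, 2))
        devices.append((X, X @ true_w + 0.01 * rng.normal(size=100)))
    w = np.zeros(2)
    for _ in range(200):
        w = server_round(w, devices)
    print(w)   # approaches [2.0, -1.0] without any device sharing raw data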
    "#
    "$

  37. 36
    Privacy against Inference

  38. Distributed Learning
    37
    [Pipeline diagram, annotated with: Output Perturbation, Objective Perturbation, Gradient Perturbation; Distributed/Federated Learning; Inference Attack]

  39. No Inference Protection
    38
    [Pipeline diagram: Distributed Learning (Federated Learning) alone gives no protection against an Inference Attack on the deployed model’s API]

  40. Inference Attack
    39
    [Diagram: Training Data → Data Collection → Model Training → Trained Model → Deployed Model; the Inference Attack targets the deployed model to learn about the training data]

  41. 40
    https://transformer.huggingface.co/
    Predictions for next text
    from OpenAI’s GPT-2
    language model.

  42. 41

  43. 42

  44. 43
    USENIX Security 2019

  45. Limiting Inference
    44
    [Pipeline diagram, annotated with: Local DP (at data collection); Output Perturbation, Objective Perturbation, Gradient Perturbation (at model training); Inference Attack]

  46. Limiting Inference
    45
    [Same diagram, now with a Trust Boundary added]

  47. Limiting Inference
    46
    [Same diagram, with the Trust Boundary]
    Preventing inference requires adding noise to the deployed model: how much noise and where to add it?

  48. Differential Privacy
    TCC 2006

  49. Differential Privacy Definition
    48
    A randomized mechanism M satisfies (ε)-Differential Privacy if for any two neighboring datasets D and D′:
    Pr[M(D) ∈ S] / Pr[M(D′) ∈ S] ≤ e^ε
    “Neighboring” datasets differ in at most one entry.
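    A minimal sketch of a standard ε-DP building block, the Laplace mechanism, applied to a counting query (textbook material, not from the slide; the names are illustrative). A count has sensitivity 1, so Laplace noise with scale 1/ε satisfies ε-differential privacy.

    import numpy as np

    def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
        # Adding Laplace(sensitivity / epsilon) noise gives epsilon-DP for a query
        # whose output changes by at most `sensitivity` between neighboring datasets.
        rng = rng or np.random.default_rng()
        return true_value + rng.laplace(scale=sensitivity / epsilon)

    dataset = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]   # one sensitive bit per person
    true_count = sum(dataset)                  # removing one person changes it by at most 1
    print(laplace_mechanism(true_count, sensitivity=1, epsilon=0.5))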

  50. Differential Privacy Definition
    49
    [Plot: the bound e^ε as a function of the Privacy Budget ε; for ε from 0 to 1.5, e^ε rises from 1.0 to about 4.5]
    Pr[M(D) ∈ S] / Pr[M(D′) ∈ S] ≤ e^ε

  51. Definition
    50
    A randomized mechanism M satisfies (ε)-Differential Privacy if for any two neighboring datasets D and D′:
    Pr[M(D) ∈ S] / Pr[M(D′) ∈ S] ≤ e^ε
    Pr[M(D′) ∈ S] / Pr[M(D) ∈ S] ≤ e^ε
    e^{−ε} ≤ Pr[M(D) ∈ S] / Pr[M(D′) ∈ S] ≤ e^ε
    “Neighboring” datasets differ in at most one entry: the definition is symmetrical.

  52. 51
    [Image taken from “Differential Privacy and Pan-Private Algorithms” slides by Cynthia Dwork: the output distributions Pr[M(D) ∈ S] and Pr[M(D′) ∈ S] nearly coincide]
    Pr[M(D) ∈ S] / Pr[M(D′) ∈ S] ≤ e^ε

  53. Definition
    52
    A randomized mechanism M satisfies (ε, δ)-Differential Privacy if for any two neighboring datasets D and D′:
    Pr[M(D) ∈ S] ≤ e^ε · Pr[M(D′) ∈ S] + δ
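    For (ε, δ)-DP the standard building block is the Gaussian mechanism; a minimal sketch using the classical calibration (valid for 0 < ε < 1): σ ≥ sqrt(2 ln(1.25/δ)) · sensitivity / ε. The example vector and parameters are illustrative, not from the deck.

    import numpy as np

    def gaussian_mechanism(value, l2_sensitivity, epsilon, delta, rng=None):
        # Classical analysis: this sigma gives (epsilon, delta)-DP for 0 < epsilon < 1.
        rng = rng or np.random.default_rng()
        sigma = np.sqrt(2 * np.log(1.25 / delta)) * l2_sensitivity / epsilon
        return value + rng.normal(scale=sigma, size=np.shape(value))

    avg_vector = np.array([0.3, 0.7, 0.1])   # e.g. an average over many records
    print(gaussian_mechanism(avg_vector, l2_sensitivity=0.05, epsilon=0.5, delta=1e-5))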

  54. 53
    Differential privacy describes a
    promise, made by a data
    holder, or curator, to a data
    subject: “You will not be
    affected, adversely or
    otherwise, by allowing your
    data to be used in any study or
    analysis, no matter what other
    studies, data sets, or
    information sources, are
    available.”

  55. Limiting Inference
    54
    [Pipeline diagram, annotated with: Output Perturbation, Objective Perturbation, Gradient Perturbation; Inference Attack; Trust Boundary]

  56. Where can we add noise?
    55

  57. Differential Privacy for Machine Learning
    Output Perturbation: Chaudhuri et al. (2011)
    Objective Perturbation: Chaudhuri et al. (2011)
    Gradient Perturbation: Abadi et al. (2016)

  58. [Timeline, 2009–2019: differentially private empirical risk minimization results and their privacy budgets]
    Differential Privacy introduced: [D06], [DMNS06]
    Empirical Risk Minimization algorithms using ε ≤ 1: [CM09], [CMS11], [PRR10], [ZZXYW12], [JT13], [JT14], [WFWJN15], [HCB16], [WLKCJN17], with reported ε between 0.05 and 1
    All using objective or output perturbation
    Simple tasks: convex learning, binary classifiers

  59. Multi-Party Setting: Output Perturbation
    Pathak et al. (2010)
    m data owners, each holding a local dataset D^(j)
    Each owner trains a local model θ^(j); MPC Aggregation computes
    θ = (1/m) Σ_j θ^(j) + η
    with noise η ~ 1/(ε n^(1)), scaled to the smallest partition n^(1).

  60. Multi-Party Output Perturbation
    m data owners, each holding a local dataset D^(j)
    Each owner trains a local model θ^(j); the noise is added within MPC:
    θ = (1/m) Σ_j θ^(j) + η
    with η ~ 1/(ε n): the noise scales with the total data size n rather than with the smallest partition.
    Bargav Jayaraman, Lingxiao Wang, David Evans and Quanquan Gu. NeurIPS 2018.
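    A single-party sketch of output perturbation (train, then perturb the released parameters). The noise calibration follows the Chaudhuri et al. style analysis for L2-regularized logistic regression with inputs of norm at most 1, so treat the constants and helper names as illustrative, not as the multi-party construction above.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def output_perturbation(X, y, epsilon, lam=0.1, rng=None):
        rng = rng or np.random.default_rng()
        # 1. Train a non-private L2-regularized model.
        theta = LogisticRegression(C=1.0 / (lam * len(y))).fit(X, y).coef_.ravel()
        # 2. Release theta plus noise with L2 norm ~ Gamma(d, 2 / (n * lam * epsilon))
        #    and a uniformly random direction (the minimizer's sensitivity is 2/(n*lam)).
        scale = 2.0 / (len(y) * lam * epsilon)
        direction = rng.normal(size=theta.shape)
        direction /= np.linalg.norm(direction)
        return theta + rng.gamma(shape=theta.size, scale=scale) * direction

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 5))
    X /= np.maximum(1.0, np.linalg.norm(X, axis=1, keepdims=True))   # enforce ||x|| <= 1
    y = (X[:, 0] > 0).astype(int)
    print(output_perturbation(X, y, epsilon=1.0))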

  61. KDDCup99 Dataset - Classification Task
    [Plots comparing the methods for two settings, 1000 and 50000]
    * [Rajkumar and Agarwal] violates the privacy budget

  62. Differential Privacy for Complex Learning
    To achieve DP, need to know the sensitivity:
    Pr[M(D) ∈ S] ≤ e^ε · Pr[M(D′) ∈ S] + δ
    Δ = max_{D, D′ : D ~ D′} ‖M(D) − M(D′)‖
    how much a difference in the input could impact the output.

  63. Differential Privacy for Complex Learning
    To achieve DP, need to know the sensitivity:
    Pr[M(D) ∈ S] ≤ e^ε · Pr[M(D′) ∈ S] + δ
    Δ = max_{D, D′ : D ~ D′} ‖M(D) − M(D′)‖
    how much a difference in the input could impact the output.

  64. Iterative Multi-Party
    Gradient Perturbation
    m data owners, each holding a local dataset D^(j)
    In each iteration, the owners compute local gradients g^(j) and the model is updated within MPC:
    θ_t = θ_{t−1} − α ( (1/m) Σ_j g_t^(j) + ζ )
    Iterate for T epochs
    ζ ~ 1/(ε n) ~ 1/n
    Each iteration consumes privacy budget
    Bargav Jayaraman, Lingxiao Wang, David Evans and Quanquan Gu. NeurIPS 2018.
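    A single-party sketch of gradient perturbation in the DP-SGD style (clip each per-example gradient, add Gaussian noise, update); the clipping norm, noise multiplier, and toy logistic-regression model are illustrative, and a real implementation would also account the cumulative privacy budget across the T iterations.

    import numpy as np

    def dp_gradient_step(w, X, y, lr=0.1, clip=1.0, noise_multiplier=1.0, rng=None):
        rng = rng or np.random.default_rng()
        # Per-example logistic-regression gradients.
        per_example = (1.0 / (1.0 + np.exp(-(X @ w))) - y)[:, None] * X
        # Clip each gradient so one record's influence is bounded by `clip`.
        norms = np.linalg.norm(per_example, axis=1, keepdims=True)
        clipped = per_example / np.maximum(1.0, norms / clip)
        # Average and add Gaussian noise calibrated to the clipping norm.
        noisy = clipped.mean(axis=0) + rng.normal(
            scale=noise_multiplier * clip / len(y), size=w.shape)
        return w - lr * noisy

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 3))
    y = (X[:, 0] > 0).astype(float)
    w = np.zeros(3)
    for _ in range(100):   # every step consumes privacy budget
        w = dp_gradient_step(w, X, y, rng=rng)
    print(w)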

  65. Multiple Iterations
    64
    Composition Theorem: k executions of an (ε)-DP mechanism on the same data satisfy (kε)-DP.
    Pr[M_1(D) ∈ S] / Pr[M_1(D′) ∈ S] ≤ e^ε
    Pr[M_2(D) ∈ S] / Pr[M_2(D′) ∈ S] ≤ e^ε
    (Pr[M_1(D) ∈ S] / Pr[M_1(D′) ∈ S]) · (Pr[M_2(D) ∈ S] / Pr[M_2(D′) ∈ S]) ≤ e^ε · e^ε
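    A small sketch comparing naive composition with the advanced (strong) composition bound of Dwork, Rothblum, and Vadhan, one route to the tighter budgets discussed on the next slides; the parameter values are illustrative.

    import math

    def naive_composition(epsilon, k):
        # k runs of an epsilon-DP mechanism: the budget grows linearly.
        return k * epsilon

    def advanced_composition(epsilon, k, delta_prime=1e-5):
        # Strong composition: the k-fold composition is (eps_total, k*delta + delta_prime)-DP.
        return (math.sqrt(2 * k * math.log(1 / delta_prime)) * epsilon
                + k * epsilon * (math.exp(epsilon) - 1))

    eps, k = 0.01, 10_000   # e.g. many noisy gradient steps
    print(naive_composition(eps, k))      # 100.0
    print(advanced_composition(eps, k))   # about 5.8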

  66. [Timeline, 2009–2019: differentially private ML results and their privacy budgets]
    [D06], [DMNS06]
    ERM algorithms using ε ≤ 1: [CM09], [CMS11], [PRR10], [ZZXYW12], [JT13], [JT14], [SCS13], [WFWJN15], [HCB16], [WLKCJN17]
    Complex tasks need high ε: [SS15] (first Deep Learning with Differential Privacy) and [ZZWCWZ18], with ε values of 100 and 369,200

  67. Tighter Composition Bounds
    66

  68. [Timeline, 2009–2019: differentially private ML results and their privacy budgets]
    [D06], [DMNS06]
    ERM algorithms using ε ≤ 1: [CM09], [CMS11], [PRR10], [ZZXYW12], [JT13], [JT14], [SCS13], [WFWJN15], [HCB16], [WLKCJN17]
    Complex tasks: high ε ([SS15], [ZZWCWZ18], [JKT12], [INSTTW19], with ε ranging from 10 up to 369,200)
    Complex tasks using relaxed DP definitions: [PAEGT16], [ACGMMTZ16], [GKN17], [BDFKR18], [HCS18], [YLPGT19], with reported ε values including 8 and 21.5

  69. [Timeline, 2009–2019, as on the previous slide: ERM algorithms using ε ≤ 1; complex tasks with high ε; complex tasks using relaxed DP definitions]
    [Inset plot: the bound on distinguishing neighboring datasets as a function of the Privacy Budget ε]

  70. 69
    How much actual
    leakage is there
    with relaxed
    definitions?

  71. 70

  72. Measuring Accuracy Loss
    71
    Accuracy Loss := 1 − (Accuracy of Private Model / Accuracy of Non-Private Model)

  73. 72
    [Plot: accuracy loss vs. privacy budget ε for logistic regression on CIFAR-100]
    Rényi DP has 0.1 accuracy loss at ε ≈ 10
    Naïve composition has 0.1 accuracy loss at ε ≈ 500

  74. Experimentally Measuring Leakage
    73
    [Pipeline diagram: Gradient Perturbation is applied during model training; the Inference Attack is run against the deployed model to measure the leakage]

  75. Membership Inference Attack
    74
    Adversary runs a Membership Inference Test: given a record x and a model trained on set S, output True or False for “x ∈ S”.
    Privacy Leakage Measure: True Positive Rate − False Positive Rate
    Evaluated on a balanced set (member/non-member)
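    A minimal sketch of this leakage measure; the attack function is a placeholder supplied by whatever membership test is being evaluated.

    def privacy_leakage(attack, members, non_members):
        # Leakage = True Positive Rate - False Positive Rate of the membership
        # test, measured on a balanced set of members and non-members.
        tpr = sum(bool(attack(x)) for x in members) / len(members)
        fpr = sum(bool(attack(x)) for x in non_members) / len(non_members)
        return tpr - fpr

    # Example: privacy_leakage(lambda x: loss(x) <= tau, train_records, test_records)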

  76. How can adversary guess membership?
    75
    Hint from first lecture:
    [Plot: training error vs. test error (accuracy on CIFAR-10)]

  77. How can adversary guess membership?
    76
    [Plot: training error vs. test error (accuracy on CIFAR-10), with the Generalization Gap between them marked]
    Overfitting: Model is “more confident” in predictions for training examples

  78. Membership Inference Attack: Shokri+
    77
    Reza Shokri, Marco Stronati, Congzheng Song, Vitaly Shmatikov [S&P 2017]
    Assumption: adversary has access to similar training data
    1. Train several local models f_1, f_2, …, f_k
    Intuition: Confidence score of model is high for members, due to overfitting on training set.

  79. Membership Inference Attack: Shokri+
    78
    Reza Shokri, Marco Stronati, Congzheng Song, Vitaly Shmatikov [S&P 2017]
    Assumption: adversary has access to similar training data
    1. Train several local models f_1, f_2, …, f_k
    2. Train a binary classifier A on the local models’ outputs to distinguish member/non-member
    Intuition: Confidence score of model is high for members, due to overfitting on training set.
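    A compact sketch of the shadow-model idea with scikit-learn; the shadow models, the single confidence feature, and the attack classifier are simplified stand-ins for the construction in the paper.

    import numpy as np
    from sklearn.neural_network import MLPClassifier
    from sklearn.linear_model import LogisticRegression

    def confidence(model, X):
        # Attack feature: the probability the model assigns to its predicted class.
        return model.predict_proba(X).max(axis=1, keepdims=True)

    def train_attack(X, y, n_shadows=5, seed=0):
        rng = np.random.default_rng(seed)
        feats, labels = [], []
        for _ in range(n_shadows):
            idx = rng.permutation(len(y))
            inside, outside = idx[: len(y) // 2], idx[len(y) // 2:]
            shadow = MLPClassifier(hidden_layer_sizes=(32,), max_iter=300).fit(X[inside], y[inside])
            feats += [confidence(shadow, X[inside]), confidence(shadow, X[outside])]
            labels += [np.ones(len(inside)), np.zeros(len(outside))]
        return LogisticRegression().fit(np.vstack(feats), np.concatenate(labels))

    # Usage: attack = train_attack(X_similar, y_similar)
    #        attack.predict(confidence(target_model, record.reshape(1, -1)))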

  80. Membership Inference Attack: Yeom+
    79
    Samuel Yeom, Irene Giacomelli, Matt Fredrikson, Somesh Jha [CSF 2018]
    Assumption: adversary knows the expected training loss of the target model,
    τ = (1/n) Σ_{i=1}^{n} ℓ(x_i)
    Attack: at inference, given record x, the attacker classifies it as a member if ℓ(x) ≤ τ
    Intuition: sample loss of a training instance is lower than that of a non-member, due to the generalization gap.
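    A minimal sketch of the loss-threshold test; the cross-entropy loss and the way τ is estimated are illustrative.

    import numpy as np

    def cross_entropy(probs, label):
        # Per-example loss: negative log probability assigned to the true label.
        return -np.log(max(probs[label], 1e-12))

    def yeom_membership_test(probs, label, expected_train_loss):
        # Classify as member when the loss is at most the average training loss.
        return cross_entropy(probs, label) <= expected_train_loss

    # Usage: tau = np.mean([cross_entropy(p, y) for p, y in train_outputs])
    #        yeom_membership_test(target_probs, true_label, tau)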

  81. Attribute Inference Attack
    80
    Adversary is given a partial record [x_1, x_2, ?, …, x_d] for some x ∈ S, plus access to the model trained on S.
    Goal: predict the value of the unknown (private) attribute.

  82. 81
    [Plot: privacy leakage vs. privacy budget ε for the Membership Inference Attack (Yeom), logistic regression on CIFAR-100, comparing the Theoretical Guarantee with NC, zCDP, and RDP]
    RDP has ~0.06 leakage at ε = 10
    NC has ~0.06 leakage at ε = 500

  83. 82
    [Plot: privacy leakage vs. privacy budget ε, logistic regression on CIFAR-100, with the Theoretical Guarantee shown; PPV = 0.55]
    Positive Predictive Value = (number of true positives) / (number of positive predictions)
    Non-private model has 0.12 leakage with 0.56 PPV

  84. Neural Networks
    83
    NN has 103,936 trainable parameters

  85. 84
    [Plot: accuracy loss vs. privacy budget ε for a neural network on CIFAR-100]
    Rényi DP has ~0.5 accuracy loss at ε ≈ 10
    Naïve composition has ~0.5 accuracy loss at ε = 500

  86. 85
    [Plot: privacy leakage vs. privacy budget for a neural network on CIFAR-100, comparing the Theoretical Guarantee with NC, zCDP, and RDP; PPV = 0.74 and PPV = 0.71 marked]
    Non-private model has 0.72 leakage with 0.94 PPV

  87. 86
    Who is
    actually
    exposed?

  88. 87

  89. 88

  90. 89
    NN on CIFAR-100
    Huge gap between
    theoretical guarantees
    and measured attacks
    Sacrifice accuracy
    for privacy

  91. Open Problems
    Close gap between theory and meaningful privacy:
    - Tighter theoretical bounds
    - Better attacks
    - Theory for non-worst-case
    What properties put a record at risk of exposure?
    Understanding tradeoffs between model capacity and privacy
    90

  92. University of Virginia
    Charlottesville, Virginia USA
    91
    Image: cc Eric T Gunther

  93. 92
    Thomas Jefferson

  94. 93
    Thomas Jefferson

  95. 94

  96. Other Security Faculty
    at the University of Virginia
    95
    Yonghwi Kwon
    Systems security
    Cyberforensics
    Yuan Tian
    IoT Security
    ML Security and
    Privacy
    Yixin Sun
    [Joining Jan 2020]
    Network Security
    & Privacy
    Mohammad
    Mahmoody
    Theoretical
    Cryptography
    David Wu
    Applied
    Cryptography
    Collaborators in Machine Learning, Computer Vision, Natural Language Processing, Software Engineering

  97. Visit Opportunities
    PhD Student
    Post-Doc
    Year/Semester/Summer
    Undergraduate,
    Graduate,
    Faculty
    96
    Please contact me if you are
    interested even if in another area

  98. 97

  99. David Evans
    University of Virginia
    [email protected]
    EvadeML.org
    Thank you!
