Slide 1

Formal Verification of Neural Networks in Safety-Critical Environments
Tobias Ladner, Technical University of Munich
November 30th, 2023

Slide 2

About Me
Tobias Ladner, PhD @ TUM
Research: Set-based Formal Verification of Neural Networks
Cyber-Physical Systems Group (Prof. Althoff)

Slide 3

Motivation: Cyber-Physical Systems Group (Prof. Althoff)
Figure: YouTube: TUM Cyber-Physical Systems (https://www.youtube.com/watch?v=IUAeZGau28E)

Slide 4

Motivation
[Figure; source: Ian Goodfellow, Jonathon Shlens, and Christian Szegedy. "Explaining and harnessing adversarial examples". In: International Conference on Learning Representations, 2015.]

Slide 5

Motivation
Adversarial attacks limit the applicability of neural networks in cyber-physical systems! (Ian Goodfellow, Jonathon Shlens, and Christian Szegedy. "Explaining and harnessing adversarial examples". In: International Conference on Learning Representations, 2015.)

Slide 6

Motivation
Let us demonstrate the formal verification of neural networks by an example:
[Figure: prediction score (x-axis: prediction, about −15 to 5) for each label 0–9 (y-axis: pred. label); legend: Sample.] Verified?

Slide 7

Motivation
Let us demonstrate the formal verification of neural networks by an example:
[Figure: as before, now with multiple samples; legend: Samples.] Verified?

Slide 8

Motivation
Let us demonstrate the formal verification of neural networks by an example:
[Figure: verification view; legend: Samples, Output Y.]

Slide 9

Motivation
Let us demonstrate the formal verification of neural networks by an example:
[Figure: legend: Samples, Output Y, Unsafe S.]
→ The conservative output set Y should not intersect with the unsafe set S.
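Once the output set Y is represented as a zonotope (introduced on the following slides), this non-intersection check is cheap for half-space unsafe sets. Below is a minimal NumPy sketch, assuming S = {y | aᵀy ≥ b}; the function name and interface are illustrative, not CORA's API:

```python
import numpy as np

def avoids_halfspace(c, G, a, b):
    """Check Y ∩ S = ∅ for Y = <c, G>_Z and S = {y | a^T y >= b}.

    The maximum of a^T y over the zonotope is its support function
    a^T c + sum_i |a^T G_(:,i)|, so the intersection is empty
    iff this maximum stays strictly below b.
    """
    return a @ c + np.abs(a @ G).sum() < b
```

For classification, one such check per wrong label j (with a = e_j − e_correct and b = 0) verifies that the correct class keeps the highest score on the whole set.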

Slide 10

Set-based Layer Propagation: Neural Network
Feed-forward neural network with $\kappa$ layers, input $x$ and output $y$:
$h_0 = x, \quad h_k = L_k(h_{k-1}), \ k = 1 \dots \kappa, \quad y = h_\kappa$, (1)
with
$L_k(h_{k-1}) = \begin{cases} W_k h_{k-1} + b_k & \text{if layer } k \text{ is linear}, \\ \sigma_k(h_{k-1}) & \text{otherwise}. \end{cases}$ (2)
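As a point of reference before moving to sets, here is a minimal NumPy sketch of the point-wise evaluation in Eqs. (1)-(2); the layer encoding (tuples for linear layers, callables for activations) is an assumption for illustration:

```python
import numpy as np

def forward(x, layers):
    """Point-wise forward pass, Eqs. (1)-(2)."""
    h = x                                   # h_0 = x
    for layer in layers:
        if isinstance(layer, tuple):        # linear layer: (W_k, b_k)
            W, b = layer
            h = W @ h + b
        else:                               # nonlinear layer: sigma_k
            h = layer(h)
    return h                                # y = h_kappa

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
layers = [(np.array([[1.0, 1.0], [1.0, -1.0]]), np.zeros(2)), sigmoid,
          (np.array([[1.0, -2.0]]), np.array([0.5])), sigmoid]
y = forward(np.array([0.3, -0.1]), layers)  # tiny 2-2-1 network
```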

Slide 11

Set-based Layer Propagation: Neural Network
Feed-forward neural network with $\kappa$ layers, input set $\mathcal{X}$ and output set $\mathcal{Y}$:
$\mathcal{H}_0 = \mathcal{X}, \quad \mathcal{H}_k \supseteq L_k(\mathcal{H}_{k-1}), \ k = 1 \dots \kappa, \quad \mathcal{Y} = \mathcal{H}_\kappa$, (3)
with
$L_k(h_{k-1}) = \begin{cases} W_k h_{k-1} + b_k & \text{if layer } k \text{ is linear}, \\ \sigma_k(h_{k-1}) & \text{otherwise}, \end{cases}$ (4)

Slide 12

Set-based Layer Propagation: Neural Network
Feed-forward neural network with $\kappa$ layers, input set $\mathcal{X}$ and output set $\mathcal{Y}$:
$\mathcal{H}_0 = \mathcal{X}, \quad \mathcal{H}_k \supseteq L_k(\mathcal{H}_{k-1}), \ k = 1 \dots \kappa, \quad \mathcal{Y} = \mathcal{H}_\kappa$, (3)
with
$L_k(h_{k-1}) = \begin{cases} W_k h_{k-1} + b_k & \text{if layer } k \text{ is linear}, \\ \sigma_k(h_{k-1}) & \text{otherwise}, \end{cases}$ (4)
and
$L_k(\mathcal{H}_{k-1}) = \{ L_k(h_{k-1}) \mid h_{k-1} \in \mathcal{H}_{k-1} \}$. (5)
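The whole pipeline then reduces to one enclosure primitive per layer. A skeleton of Eq. (3), with the per-layer `enclose` operation passed in as a parameter (concrete zonotope enclosures follow below):

```python
def forward_set(X, layers, enclose):
    """Set-based propagation, Eq. (3): H_k ⊇ L_k(H_{k-1}).

    `enclose(layer, H)` must return a superset of the exact image
    from Eq. (5); any sound per-layer enclosure keeps the final
    output set Y conservative.
    """
    H = X                        # H_0 = X
    for layer in layers:
        H = enclose(layer, H)    # H_k ⊇ L_k(H_{k-1})
    return H                     # Y = H_kappa
```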

Slide 13

Set-based Layer Propagation: Sets
Discrete set:
$\mathcal{X} = \{x, x_1, x_2, \dots\}$. (6)

Slide 14

Set-based Layer Propagation: Sets
Discrete set:
$\mathcal{X} = \{x, x_1, x_2, \dots\}$. (6)
We might miss outliers using discrete sets → continuous sets:

Slide 15

Set-based Layer Propagation: Sets
Discrete set:
$\mathcal{X} = \{x, x_1, x_2, \dots\}$. (6)
We might miss outliers using discrete sets → continuous sets:
Interval: $\mathcal{X} = [x - \epsilon, x + \epsilon]$. (7)

Slide 16

Set-based Layer Propagation: Sets
Discrete set:
$\mathcal{X} = \{x, x_1, x_2, \dots\}$. (6)
We might miss outliers using discrete sets → continuous sets:
Interval: $\mathcal{X} = [x - \epsilon, x + \epsilon]$. (7)
We usually use (polynomial) zonotopes for neural network verification: $\mathcal{X} = \langle x, \epsilon I_n \rangle_Z$. (8)
(Niklas Kochdumper et al. "Open- and closed-loop neural network verification using polynomial zonotopes". In: NASA Formal Methods Symposium. Springer, 2023, pp. 16–36.)

Slide 17

Set-based Layer Propagation
[Figure: a set propagated neuron-wise through the network: input, 1st linear layer, 1st nonlinear layer (neurons 1 and 2), 2nd linear layer, 2nd nonlinear layer (neuron 1), output; legend: Interval, Zonotope, Samples.]

Slide 20

Set-based Layer Propagation: Zonotope

Slide 21

Set-based Layer Propagation: Zonotope
$\mathcal{Z} = \langle c, G \rangle_Z = \left\{ c + \sum_{i=1}^{p} \beta_i G_{(:,i)} \,\middle|\, \beta_i \in [-1, 1] \right\}$ (9)

Slide 22

Set-based Layer Propagation: Zonotope
$\mathcal{Z} = \langle c, G \rangle_Z = \left\{ c + \sum_{i=1}^{p} \beta_i G_{(:,i)} \,\middle|\, \beta_i \in [-1, 1] \right\}$ (9)
Example:
$\mathcal{Z} = \left\langle \begin{bmatrix} 1 \\ 1 \end{bmatrix}, \begin{bmatrix} 1 & 1 & 1 \\ 1 & -1 & 0 \end{bmatrix} \right\rangle_Z$ (10)

Slide 23

Set-based Layer Propagation: Zonotope
$\mathcal{Z} = \langle c, G \rangle_Z = \left\{ c + \sum_{i=1}^{p} \beta_i G_{(:,i)} \,\middle|\, \beta_i \in [-1, 1] \right\}$ (9)
Example:
$\mathcal{Z} = \left\langle \begin{bmatrix} 1 \\ 1 \end{bmatrix}, \begin{bmatrix} 1 & 1 & 1 \\ 1 & -1 & 0 \end{bmatrix} \right\rangle_Z$ (10)
[Figure: the example zonotope built up generator by generator; panels $G_{(:,1)}$, $G_{(:,1:2)}$, $G_{(:,1:3)}$ over axes $x_{(1)}$, $x_{(2)}$.]
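A quick way to build intuition for Eq. (9) is to sample the example zonotope from Eq. (10); a short NumPy sketch (variable names are illustrative):

```python
import numpy as np

c = np.array([1.0, 1.0])              # center
G = np.array([[1.0, 1.0, 1.0],        # one generator per column
              [1.0, -1.0, 0.0]])

# Eq. (9): every point is c + G @ beta with beta in [-1, 1]^p
rng = np.random.default_rng(0)
beta = rng.uniform(-1.0, 1.0, size=(G.shape[1], 1000))
points = c[:, None] + G @ beta        # 1000 points inside Z

# the perturbed input set X = <x, eps * I_n>_Z from Eq. (8) is the
# special case where G is eps times the identity matrix
```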

Slide 24

Set-based Layer Propagation: Set-based Computation
Layers of our neural network:
$L_k(h_{k-1}) = \begin{cases} W_k h_{k-1} + b_k & \text{if layer } k \text{ is linear}, \\ \sigma_k(h_{k-1}) & \text{otherwise}. \end{cases}$ (11)

Slide 25

Set-based Layer Propagation: Set-based Computation
Layers of our neural network:
$L_k(h_{k-1}) = \begin{cases} W_k h_{k-1} + b_k & \text{if layer } k \text{ is linear}, \\ \sigma_k(h_{k-1}) & \text{otherwise}. \end{cases}$ (11)
Linear layers:
$W\mathcal{Z} + b = W\langle c, G \rangle_Z + b = \langle Wc + b, \, WG \rangle_Z$. (12)
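Eq. (12) says the affine image of a zonotope is again a zonotope, computed exactly by two matrix products; a minimal sketch:

```python
import numpy as np

def linear_layer_zono(W, b, c, G):
    """Exact affine image of a zonotope, Eq. (12):
    W <c, G>_Z + b = <W c + b, W G>_Z (no over-approximation)."""
    return W @ c + b, W @ G
```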

Slide 26

Set-based Layer Propagation: Set-based Computation
Layers of our neural network:
$L_k(h_{k-1}) = \begin{cases} W_k h_{k-1} + b_k & \text{if layer } k \text{ is linear}, \\ \sigma_k(h_{k-1}) & \text{otherwise}. \end{cases}$ (11)
Linear layers:
$W\mathcal{Z} + b = W\langle c, G \rangle_Z + b = \langle Wc + b, \, WG \rangle_Z$. (12)
Unfortunately, the image through nonlinear layers is harder to compute → image enclosure.

Slide 27

Set-based Layer Propagation: Image Enclosure
[Figure: overview of the image enclosure in six steps (Steps 1-6), from the input set to the enclosed output.]

Slide 28

Step 1: Element-wise Evaluation
Many nonlinear layers are applied element-wise, e.g. ReLU, sigmoid, ...
[Figure: Steps 1 and 4.]

Slide 29

Step 1: Element-wise Evaluation
Many nonlinear layers are applied element-wise, e.g. ReLU, sigmoid, ...
$\mathcal{H}_{k-1(i)} = \mathrm{project}(\mathcal{H}_{k-1}, i), \quad i = 1 \dots v_k$, (13)
[Figure: Steps 1 and 4.]

Slide 30

Step 2: Domain Bounds
As our set $\mathcal{H}_{k-1(i)}$ is bounded, we do not need to enclose the nonlinear function over the entire domain.
[Figure: Steps 1-2 and 4-5.]

Slide 31

Step 2: Domain Bounds
As our set $\mathcal{H}_{k-1(i)}$ is bounded, we do not need to enclose the nonlinear function over the entire domain.
Only enclose the nonlinear function within the bounds of the input set:
$[l_{(i)}, u_{(i)}] = \mathrm{interval}(\mathcal{H}_{k-1(i)})$. (14)
[Figure: Steps 1-2 and 4-5.]
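For zonotopes these bounds are cheap: along each dimension the extreme points of Eq. (9) are attained at $\beta_i = \pm 1$. A sketch of Eq. (14):

```python
import numpy as np

def zono_interval(c, G):
    """Interval hull of a zonotope, Eq. (14): the extreme value in
    each dimension j is c_j ± sum_i |G_(j,i)| (take beta_i = ±1)."""
    r = np.abs(G).sum(axis=1)   # per-dimension radius
    return c - r, c + r         # lower bounds l, upper bounds u
```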

Slide 32

Step 3: Polynomial Approximation
Next, we compute a polynomial $p_i(x)$ that approximates our nonlinear function $f(x)$ within the domain:
$p_i(x) = \mathrm{polyApprox}([l_{(i)}, u_{(i)}], \text{order}) \approx f(x), \quad x \in [l_{(i)}, u_{(i)}]$. (15)
[Figure: Steps 1-3 and 4-6.]
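Tools implement polyApprox in various ways (e.g. regression or closed-form expressions per activation); as a hedged stand-in, a least-squares fit on sampled points already illustrates Eq. (15):

```python
import numpy as np

def poly_approx(f, l, u, order, n=200):
    """Illustrative stand-in for polyApprox in Eq. (15): fit a
    polynomial to f on [l, u] by least squares over n samples.
    Returns coefficients c_0, ..., c_order (lowest order first)."""
    xs = np.linspace(l, u, n)
    return np.polyfit(xs, f(xs), order)[::-1]

coeffs = poly_approx(np.tanh, -1.0, 2.0, order=1)  # linear approx.
```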

Slide 33

Step 4: Approximation Error
After finding an approximation polynomial $p_i(x)$, we need to find the approximation error:
$d_{(i)} = \max_{x \in [l_{(i)}, u_{(i)}]} |f(x) - p_i(x)|$. (16)
[Figure: Step 4.]
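A sound verifier must bound this maximum rigorously (for piecewise-linear functions like ReLU it is available in closed form); purely for intuition, a dense grid gives an estimate of Eq. (16):

```python
import numpy as np

def approx_error_estimate(f, coeffs, l, u, n=10_000):
    """Grid-based *estimate* of Eq. (16); a verification tool must
    compute a provable upper bound instead (e.g. analytically)."""
    xs = np.linspace(l, u, n)
    p = sum(ck * xs**k for k, ck in enumerate(coeffs))
    return np.max(np.abs(f(xs) - p))
```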

Slide 34

Step 5: Set-based Evaluation
Finally, we evaluate our polynomial $p_i(x)$ over our set $\mathcal{H}_{k-1(i)}$:
$\mathcal{H}_{k(i)} = p_i(\mathcal{H}_{k-1(i)})$. (17)
[Figure: Steps 4-5.]

Slide 35

Step 5: Set-based Evaluation
Finally, we evaluate our polynomial $p_i(x)$ over our set $\mathcal{H}_{k-1(i)}$:
$\mathcal{H}_{k(i)} = p_i(\mathcal{H}_{k-1(i)})$. (17)
For a linear polynomial $p_i(x) = c_0 + c_1 x$, this is computed by
$\mathcal{H}_{k(i)} = c_0 + c_1 \cdot \mathcal{H}_{k-1(i)}$. (18)
[Figure: Steps 4-5.]
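Eq. (18) is again an exact affine map on the one-dimensional projection; for a zonotope slice it scales the generators and shifts the center:

```python
def eval_linear_poly_zono(c0, c1, c, G):
    """Eq. (18) on a (1-dimensional) zonotope <c, G>_Z:
    c_0 + c_1 * <c, G>_Z = <c_0 + c_1 c, c_1 G>_Z."""
    return c0 + c1 * c, c1 * G
```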

Slide 36

Step 5: Set-based Evaluation
Finally, we evaluate our polynomial $p_i(x)$ over our set $\mathcal{H}_{k-1(i)}$:
$\mathcal{H}_{k(i)} = p_i(\mathcal{H}_{k-1(i)})$. (17)
For a linear polynomial $p_i(x) = c_0 + c_1 x$, this is computed by
$\mathcal{H}_{k(i)} = c_0 + c_1 \cdot \mathcal{H}_{k-1(i)}$. (18)
Higher-order polynomials require non-convex set representations for a tight enclosure.
[Figure: Steps 4-5.]

Slide 37

Step 6: Image Enclosure
Finally, we stack the computed sets back together using the Cartesian product:
$\mathcal{H}_k = \mathcal{H}_{k(1)} \times \dots \times \mathcal{H}_{k(v_k)}$, (19)
[Figure: Steps 4-6.]

Slide 38

Step 6: Image Enclosure
Finally, we stack the computed sets back together using the Cartesian product:
$\mathcal{H}_k = \mathcal{H}_{k(1)} \times \dots \times \mathcal{H}_{k(v_k)}$, (19)
and add the approximation error:
$\mathcal{H}_k = \mathcal{H}_k \oplus [-d, d]$. (20)
[Figure: Steps 4-6.]
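Putting Steps 1-6 together for a zonotope with linear per-neuron approximations, reusing zono_interval, poly_approx, and approx_error_estimate sketched above; the Minkowski sum in Eq. (20) simply appends one axis-aligned generator per neuron. This remains a hedged sketch: the error here is only estimated, whereas a verifier bounds it soundly.

```python
import numpy as np

def enclose_activation_zono(f, c, G):
    """Image enclosure (Steps 1-6) of an element-wise layer f on a
    zonotope <c, G>_Z, using a linear approximation per neuron."""
    n = c.shape[0]
    c_new, G_new, d = c.copy(), G.copy(), np.zeros(n)
    for i in range(n):                                         # Step 1
        l, u = zono_interval(c[i:i+1], G[i:i+1, :])            # Step 2
        c0, c1 = poly_approx(f, l[0], u[0], order=1)           # Step 3
        d[i] = approx_error_estimate(f, (c0, c1), l[0], u[0])  # Step 4
        c_new[i] = c0 + c1 * c[i]                              # Step 5, Eq. (18)
        G_new[i, :] = c1 * G[i, :]
    # Step 6, Eqs. (19)-(20): stack and add [-d, d] as new generators
    return c_new, np.hstack([G_new, np.diag(d)])
```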

Slide 39

Higher-Order Image Enclosure
Note that we can also use higher-order polynomials to enclose the nonlinear layers:
[Figure: the six image-enclosure steps with a higher-order polynomial.]

Slide 40

Higher-Order Image Enclosure: Automatic Abstraction Refinement
[Figure: output set over $y_{(1)}$, $y_{(2)}$ with samples; neuron-wise approximation orders per layer: [1 1 1 1 1] [1 1 1 1 1] [1 1].]
(Tobias Ladner and Matthias Althoff. "Automatic abstraction refinement in neural network verification using sensitivity analysis". In: Proceedings of the 26th ACM International Conference on Hybrid Systems: Computation and Control, 2023, pp. 1–13.)

Slide 41

Higher-Order Image Enclosure: Automatic Abstraction Refinement (Ladner and Althoff, HSCC 2023)
[Figure: the output set tightens as neuron-wise orders are refined:
[1 1 1 1 1] [1 1 1 1 1] [1 1]
[2 2 2 1 2] [1 1 1 1 1] [1 1].]

Slide 42

Higher-Order Image Enclosure: Automatic Abstraction Refinement (Ladner and Althoff, HSCC 2023)
[Figure: the output set tightens as neuron-wise orders are refined:
[1 1 1 1 1] [1 1 1 1 1] [1 1]
[2 2 2 1 2] [1 1 1 1 1] [1 1]
[2 3 3 1 3] [1 1 1 1 1] [1 1]
[2 4 4 1 4] [1 1 1 1 1] [1 1]
[2 4 5 1 5] [1 1 1 1 1] [1 1].]

Slide 43

Higher-Order Image Enclosure: Automatic Abstraction Refinement (Ladner and Althoff, HSCC 2023)
[Figure: the output set tightens as neuron-wise orders are refined:
[1 1 1 1 1] [1 1 1 1 1] [1 1]
[2 2 2 1 2] [1 1 1 1 1] [1 1]
[2 3 3 1 3] [1 1 1 1 1] [1 1]
[2 4 4 1 4] [1 1 1 1 1] [1 1]
[2 4 5 1 5] [1 1 1 1 1] [1 1]
[2 4 5 1 5] [2 2 2 2 2] [1 1]
[2 4 5 1 5] [2 2 2 2 2] [2 1].]
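The order vectors above suggest the overall loop: verify, and if inconclusive, raise the order of the neuron that matters most. A hedged sketch of that loop; `enclose_fn`, `verify_fn`, and `pick_neuron` are illustrative parameters, where `pick_neuron` stands in for the sensitivity-based selection from the HSCC'23 paper (not shown here):

```python
def refine(orders, enclose_fn, verify_fn, pick_neuron, max_rounds=10):
    """Abstraction-refinement loop over neuron-wise polynomial orders.

    orders: list per layer of per-neuron approximation orders.
    enclose_fn(orders) computes the output enclosure; verify_fn
    checks the specification on it.
    """
    for _ in range(max_rounds):
        if verify_fn(enclose_fn(orders)):
            return True, orders            # specification verified
        k, i = pick_neuron(orders)         # most promising neuron
        orders[k][i] += 1                  # refine its abstraction
    return False, orders                   # inconclusive
```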

Slide 44

Open-Loop System
Going back to our image example: what happens if we add more noise to our images?
[Figure: predictions per label; legend: Unsafe S, Output Y, Samples.]

Slide 45

Open-Loop System
Going back to our image example: what happens if we add more noise to our images?
[Figure: linear abstraction; legend: Unsafe S, Output Y, Samples.]

Slide 46

Open-Loop System
Going back to our image example: what happens if we add more noise to our images?
[Figure: refined abstraction; legend: Unsafe S, Refined Y, Samples.]
→ We can verify a larger noise radius using higher-order polynomials!

Slide 47

Closed-Loop System
We can now compute the output set of a neural network.

Slide 48

Closed-Loop System
We can now compute the output set of a neural network.
Can we use this knowledge to verify a neural network as part of a dynamic system?
[Diagram: open-loop system $\dot{x} = f(x, u)$ in feedback with a sampler ($t = \Delta t$) and a neural network controller $u = \Phi(x)$; the signals $x$ and $u$ close the loop.]

Slide 49

Closed-Loop System: Neural Network Controller
Controllers are usually sampled every $\Delta t$. The output of the neural network controller is then used as input to the dynamic system.
[Figure: quadrotor altitude over time; legend: Quadrotor, Goal set, Simulations.]
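For intuition, a point-wise simulation of such a sampled-data loop looks as follows (plain Euler integration, illustrative names; the verification itself propagates whole sets through both the controller and the dynamics):

```python
import numpy as np

def simulate(f, controller, x0, dt, steps, substeps=10):
    """Sampled-data closed loop: the input u = Phi(x) is sampled
    every dt and held constant (zero-order hold) while the plant
    x' = f(x, u) evolves; Euler substeps for illustration."""
    x = np.asarray(x0, dtype=float)
    traj, h = [x.copy()], dt / substeps
    for _ in range(steps):
        u = controller(x)            # sample controller at t = k*dt
        for _ in range(substeps):
            x = x + h * f(x, u)      # integrate the dynamics
        traj.append(x.copy())
    return np.array(traj)

# e.g. a pendulum with a linear state-feedback stand-in controller
f = lambda x, u: np.array([x[1], -np.sin(x[0]) + u])
Phi = lambda x: -1.5 * x[0] - 0.5 * x[1]
traj = simulate(f, Phi, x0=[0.5, 0.0], dt=0.1, steps=50)
```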

Slide 50

Closed-Loop System: Neural Network Controller
Controllers are usually sampled every $\Delta t$. The output of the neural network controller is then used as input to the dynamic system.
[Figure: quadrotor altitude over time; legend: Quadrotor, Goal set, Simulations, Sampling time.]

Slide 51

Closed-Loop System: Neural Network Controller
We can verify a given system by evaluating everything set-based:
[Figure: quadrotor altitude over time; legend: Goal set, Simulations.]

Slide 52

Closed-Loop System: Neural Network Controller
We can verify a given system by evaluating everything set-based:
[Figure: Quadrotor (linear); legend: Goal set, Reachable set, Initial set, Simulations.]

Slide 53

Closed-Loop System: Neural Network Controller
We can verify a given system by evaluating everything set-based:
[Figure: Quadrotor (refined); legend: Goal set, Reachable set, Initial set, Simulations.]
→ Refinement can also help in closed-loop systems!

Slide 54

Closed-Loop System: Examples
[Figure: Simplified Car Model in the $x_{(1)}$-$x_{(2)}$ plane; legend: Goal set, Reachable set, Initial set, Simulations.]

Slide 55

Closed-Loop System: Examples
[Figure: Simplified Car Model in the $x_{(1)}$-$x_{(2)}$ plane; legend: Goal set, Reachable set, Initial set, Simulations.]
[Figure: Lane Following, distance over time; legend: Distance, Safe distance, Simulations.]

Slide 56

CORA
If you want to verify some networks yourself: https://cora.in.tum.de
Don't hesitate to contact us!

Slide 57

Cyber-Physical Systems Group (Prof. Althoff)
Figure: YouTube: TUM Cyber-Physical Systems (https://www.youtube.com/watch?v=IUAeZGau28E)

Slide 58

References
Goodfellow, Ian, Jonathon Shlens, and Christian Szegedy. "Explaining and harnessing adversarial examples". In: International Conference on Learning Representations, 2015.
Kochdumper, Niklas, et al. "Open- and closed-loop neural network verification using polynomial zonotopes". In: NASA Formal Methods Symposium. Springer, 2023, pp. 16–36.
Ladner, Tobias, and Matthias Althoff. "Automatic abstraction refinement in neural network verification using sensitivity analysis". In: Proceedings of the 26th ACM International Conference on Hybrid Systems: Computation and Control, 2023, pp. 1–13.

Slide 59

Appendix: Formal Neural Network Reduction
Larger networks are usually harder to verify than smaller networks. Is it possible to construct a smaller network $\hat{\Phi}$ such that verifying the smaller network implies the verification of the original network $\Phi$:
$\hat{\Phi}(\mathcal{X}) \cap \mathcal{S} = \emptyset \implies \Phi(\mathcal{X}) \cap \mathcal{S} = \emptyset$, (21)
where $\mathcal{X}$ is the input set and $\mathcal{S}$ is the unsafe set?

Slide 60

Appendix: Formal Neural Network Reduction
Larger networks are usually harder to verify than smaller networks. Is it possible to construct a smaller network $\hat{\Phi}$ such that verifying the smaller network implies the verification of the original network $\Phi$:
$\hat{\Phi}(\mathcal{X}) \cap \mathcal{S} = \emptyset \implies \Phi(\mathcal{X}) \cap \mathcal{S} = \emptyset$, (21)
where $\mathcal{X}$ is the input set and $\mathcal{S}$ is the unsafe set?
[Figure: original network with a highlighted set of neurons $\mathcal{B}_{k,y,\delta}$.]

Slide 61

Appendix: Formal Neural Network Reduction
Larger networks are usually harder to verify than smaller networks. Is it possible to construct a smaller network $\hat{\Phi}$ such that verifying the smaller network implies the verification of the original network $\Phi$:
$\hat{\Phi}(\mathcal{X}) \cap \mathcal{S} = \emptyset \implies \Phi(\mathcal{X}) \cap \mathcal{S} = \emptyset$, (21)
where $\mathcal{X}$ is the input set and $\mathcal{S}$ is the unsafe set?
[Figure: original network with $\mathcal{B}_{k,y,\delta}$ and $y$ → reduced network.]

Slide 62

Appendix: Set-based Training
Standard training only uses a single point to update the weights.
[Diagram: point-wise forward pass $x \to h_0 \to \dots \to h_4 = \hat{y}$ through linear layers $L_1$ ($W_1$, $b_1$) and $L_3$ ($W_3$, $b_3$) and activation layers $L_2$ ($\mu_2$) and $L_4$ ($\mu_4$), with backpropagated gradients $g_4, \dots, g_1$ of $\partial E(y, \hat{y}) / \partial \hat{y}$.]

Slide 63

Appendix: Set-based Training
Standard training only uses a single point to update the weights.
[Diagram: set-based forward pass $\mathcal{X} \to \mathcal{H}_0 \to \dots \to \mathcal{H}_4 = \mathcal{Y}$ with enclose steps at the activation layers, and set-based gradients $G_4, \dots, G_1$ of $\partial E(y, \mathcal{Y}) / \partial \mathcal{Y}$.]
→ Including uncertainty in the training process makes the resulting networks more robust.
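The slide trains with zonotopes and set-based gradients; as a simplified, hedged stand-in for the general idea (train on a whole input set rather than a single point), here is an interval-based worst-case margin in the style of interval bound propagation, which a training loss could penalize. This is a swapped-in simplification, not the zonotope-based method from the slide:

```python
import numpy as np

def interval_affine(W, b, l, u):
    """Interval image of an affine layer, in center/radius form."""
    c, r = (l + u) / 2.0, (u - l) / 2.0
    c2, r2 = W @ c + b, np.abs(W) @ r
    return c2 - r2, c2 + r2

def worst_case_margin(W1, b1, W2, b2, x, eps, label):
    """Lower bound on (correct logit - best wrong logit) over the
    input box [x - eps, x + eps] of a 2-layer ReLU network; a loss
    keeping this positive pushes the network toward robustness."""
    l, u = interval_affine(W1, b1, x - eps, x + eps)
    l, u = np.maximum(l, 0.0), np.maximum(u, 0.0)  # ReLU bounds
    l, u = interval_affine(W2, b2, l, u)
    return l[label] - np.delete(u, label).max()
```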