Towards a Runtime Verification Approach for Internet of Things Systems

Luca Franceschini

June 05, 2018

  1. Towards a Runtime Verification Approach for Internet of Things Systems — Maurizio Leotta, Davide Ancona, Luca Franceschini, Dario Olianas, Marina Ribaudo, Filippo Ricca. EnWoT’18, 05 June 2018, Cáceres, Spain. University of Genoa, Italy
  3. Project — Current project 2016–2018: “Full Stack Quality for the Internet of Things”. Research projects working from different perspectives: testing, model checking, runtime verification… Last year’s future work: “[…] combine testing technique with runtime verification in order to increase the reliability of IoT systems.” (Leotta et al. 2018)
  4. The Big Picture — How to ensure software correctness?
     • Static verification/formal methods
     • Testing
     • Runtime verification!
  5. Definition — “Runtime verification is the discipline of computer science that deals with the study, development, and application of those verification techniques that allow checking whether a run of a system under scrutiny satisfies or violates a given correctness property.” (Leucker and Schallhart 2009)
  7. Different Problems — Static verification: check whether all possible runs of a system satisfy a property. Runtime verification: check whether a single run of a system satisfies a property. The latter is easier!
  13. Reasons — Why should we consider runtime verification?
     • Properties can be hard (or even undecidable) to check statically
     • The model of the system is unavailable, or we want to verify the actual implementation
     • Dynamicity of the underlying system
     • Verification after deployment
     • Safety-critical systems
     • React to failures
  15. Implementation — A monitor observes all relevant events from the execution of the system and collects them in a trace. The trace is then checked against a given formal specification, either online or offline.
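The monitor loop just described can be sketched in a few lines. This is an illustrative encoding of the monitor/trace idea only, not the authors’ Prolog-based implementation; the `Monitor` class and the example property are our own names.

```javascript
// Minimal online-monitoring sketch (illustrative, not the authors' tool):
// every observed event is appended to the trace, and the trace is
// re-checked against the specification immediately (online checking).
class Monitor {
  constructor(spec) {
    this.spec = spec;   // spec(trace) -> true while the trace is still valid
    this.trace = [];    // collected events, also reusable for offline checking
  }
  observe(event) {
    this.trace.push(event);
    if (!this.spec(this.trace)) {
      throw new Error(`specification violated at event: ${event}`);
    }
  }
}

// Example property: a "close" event may only occur after some "open".
const spec = trace =>
  trace.every((e, i) => e !== "close" || trace.slice(0, i).includes("open"));

const m = new Monitor(spec);
m.observe("open");
m.observe("read");
m.observe("close"); // valid run: no exception is thrown
```

Keeping the collected trace around is what makes the same monitor usable offline: the stored events can be replayed against a different specification after the run.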
  21. Trace Expressions — We propose the use of trace expressions as a specification formalism. Trace expressions are expressly devised for runtime verification purposes:
     • the basic building blocks are the observed events
     • a rich set of operators is provided: prefixing, union, intersection, concatenation, interleaving
     • more high-level operators can be introduced for ease of use (e.g., if-then-else conditionals)
     • parametric runtime verification is supported
     • recursion is allowed: (possibly) non-terminating systems can be verified
  22. Trace Expression Example — Events = {open, read, close}
     τ  = ϵ ∨ (open : τ′)
     τ′ = (read : τ′) ∨ (close : ϵ)
  23. Parametric Trace Expression Example — Events = {open(fd), read(fd), close(fd)}
     τ  = ϵ ∨ {let fd; open(fd) : (τ′ | τ)}
     τ′ = (read(fd) : τ′) ∨ (close(fd) : ϵ)
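The parametric version can be approximated by tracking one τ′ instance per observed file descriptor, mirroring how the interleaving (τ′ | τ) lets independently opened descriptors progress in any order. Again, this is our own sketch of the intuition, not the formal semantics.

```javascript
// Sketch of the parametric expression: each open(fd) spawns a τ′ phase
// for that descriptor; reads/closes are matched per descriptor, and the
// interleaving operator allows them in any order across descriptors.
function acceptsParametric(trace) {
  const openFds = new Set();                    // descriptors currently in τ′
  for (const { op, fd } of trace) {
    if (op === "open" && !openFds.has(fd)) openFds.add(fd);
    else if (op === "read" && openFds.has(fd)) continue;   // read(fd) : τ′
    else if (op === "close" && openFds.has(fd)) openFds.delete(fd); // close(fd) : ϵ
    else return false;       // read/close on unknown fd, or a double open
  }
  return openFds.size === 0; // every opened descriptor was closed
}

console.log(acceptsParametric([
  { op: "open", fd: 1 }, { op: "open", fd: 2 },
  { op: "read", fd: 2 }, { op: "close", fd: 1 },
  { op: "close", fd: 2 },
])); // true: operations on fd 1 and fd 2 interleave freely
console.log(acceptsParametric([{ op: "read", fd: 3 }])); // false
```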
  25. Formal Specification
     Main = Normal⟨[]⟩
     Normal⟨L⟩ = {let L′; if read(L, L′, 4) then inject : Discard₅ · More⟨[]⟩
                  else read(L, L′, 0) : Normal⟨L′⟩}
     More⟨L⟩ = {let L′; if read20(L, L′, 16) then alarm : inject : Discard₅ · Problem⟨[]⟩
                else if read20(L, L′, 4) then inject : Discard₅ · More⟨[]⟩
                else if read20(L, L′, 0) then Normal⟨L′⟩
                else read(L, L′, 0) : More⟨L′⟩}
     Problem⟨L⟩ = {let L′; if read20(L, L′, 16) then alarm : inject : Discard₅ · Problem⟨[]⟩
                  else if read20(L, L′, 4) then inject : Discard₅ · More⟨[]⟩
                  else if read20(L, L′, 0) then Normal⟨L′⟩
                  else read(L, L′, 0) : Problem⟨L′⟩}
     Discardᵢ = ignore : Discardᵢ₋₁ (for i > 0)
     Discard₀ = ϵ
  26. Mutation Analysis — Traditionally, a technique to evaluate the quality of test scripts. Introduce small changes in the code, similar to errors a developer could make (mutants), and see if the tests can detect (kill) them. Effectiveness: percentage of killed mutants. It can be used with runtime verification too!
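A concrete picture of what a mutant looks like (the functions below are our own illustration, not code from the case study): a mutation tool such as Stryker replaces a single operator, and a check kills the mutant only if some input makes the original and the mutant disagree.

```javascript
// Illustrative mutant: one comparison operator is flipped, the kind of
// small change a mutation tool introduces automatically.
const original = value => value >= 160; // raise an alarm at 160 or above
const mutant   = value => value >  160; // mutated: boundary excluded

// Only a boundary input distinguishes the two, so it "kills" the mutant:
console.log(original(160)); // true
console.log(mutant(160));   // false
// Non-boundary inputs cannot kill it: both agree on 200.
console.log(original(200) === mutant(200)); // true
```

This is also why boundary-poor input scenarios let mutants survive, a point the results below come back to.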
  28. Procedure
     1. Mutate JavaScript code (29 mutants generated using Stryker)
     2. Mutate Node-RED switch nodes (27 mutants generated manually)
     3. Run all mutants on all test cases (56 mutants overall)
  29. Results
     Input Scenario                                               Transitions   Mutants Killed
     from_starting_the_app_(S)_to_Normal                               1               0
     from_S_to_MoreInsulin                                             2              19
     from_S_to_Problematic                                             3              41
     from_S_to_MoreInsulin_and_back_to_Normal                          3              39
     from_S_to_Problematic_and_directly_to_Normal                      4              43
     from_S_to_Problematic_and_back_to_MoreInsulin                     4              43
     from_S_to_Problematic_and_back_to_Normal_(via_MoreInsulin)        5              44
     from_S_to_self-loop_to_Normal                                     2               4
     from_S_to_self-loop_to_MoreInsulin                                3              38
     from_S_to_self-loop_to_Problematic                                4              42
     Total mutants killed (a):                            44
     Total number of mutants:                             56
     Total number of mutants (excluding equivalent) (b):  48
     Mutant detection rate (a/b):                         92%
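The reported detection rate follows from the table’s own totals: equivalent mutants are excluded from the denominator before dividing. A one-line sanity check:

```javascript
// Detection rate = killed / (total − equivalent), per the table above.
const killed = 44, total = 56, nonEquivalent = 48;
const equivalent = total - nonEquivalent;          // 8 equivalent mutants
const rate = killed / nonEquivalent;               // 44 / 48 ≈ 0.917
console.log(`${Math.round(rate * 100)}%`);         // "92%"
```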
  31. Survived Mutants — Reasons:
     • the observable behavior does not change:
       if (i == 20) i = 0;   →   if (i >= 20) i = 0;
       no black-box approach can detect this!
     • weak input scenarios (w.r.t. boundaries):
       if (value >= 160) ...   →   if (value > 160) ...
  34. Conclusion, Weaknesses and Future Work
     • The approach seems to be worth studying
     • It would be nice to automatically derive the specification from the requirements… if any
     • The trace expression semantics implementation (Prolog server) is always the same, but the monitor depends on the system
     • We are working on combining runtime verification and testing
  35. Bibliography
     Maurizio Leotta et al. “Towards an Acceptance Testing Approach for Internet of Things Systems”. In: Current Trends in Web Engineering, 2018.
     Martin Leucker and Christian Schallhart. “A Brief Account of Runtime Verification”. In: The Journal of Logic and Algebraic Programming, 2009.