Relation between Test Coverage and Timed Automata Model Structure

Exactpro
November 08, 2019

Lukáš Krejčí, Jan Sobotka and Jiří Novák

International Conference on Software Testing, Machine Learning and Complex Process Analysis (TMPA-2019)
7-9 November 2019, Tbilisi

Video: https://youtu.be/9b3m3QFaXwY

TMPA Conference website https://tmpaconf.org/
TMPA Conference on Facebook https://www.facebook.com/groups/tmpaconf/

Transcript

  1. Relation between Test Coverage and Timed Automata Model Structure
    Lukáš Krejčí, Jan Sobotka and Jiří Novák
    [email protected]
    TMPA 2019

  2. Outline
    Background
      Automotive integration testing
      Model-Based Testing with Timed Automata models
    Problem description
    Case study
    Experiment
    Results
    Conclusion

  3. Background
    Integration testing and Model-Based Testing

  4. Integration Testing
     Evaluation of interactions in a cluster of ECUs
       Distributed functions
       Bus communication
     Done independently by the car manufacturer
       ECUs usually come from different suppliers
     With real hardware, using the HiL testing method
       Complete car electronics or a relevant part of the electronic system
     Test cases (sequences) are implemented manually by test engineers
     Our team is trying to deploy test generation using Model-Based Testing principles
     Developed test cases are maintained throughout the car life cycle

  5. MBT Concept Overview

  6. HiL Testing Platform

  7. HiL Testing Platform

  8. Case Study
    Automatic trunk doors control and keyless locking systems

  9. Modeling Language
     The system and its environment are modeled as a network of Timed Automata (the standard definition of the formalism is recalled below)
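
    For reference, the standard formal definition of a timed automaton is recalled here; it is not part of the original slides.

      A timed automaton is a tuple $\mathcal{A} = (L, \ell_0, C, \Sigma, E, I)$ where
      $L$ is a finite set of locations with initial location $\ell_0 \in L$,
      $C$ is a finite set of real-valued clocks,
      $\Sigma$ is a finite alphabet of actions,
      $E \subseteq L \times \Sigma \times \mathcal{B}(C) \times 2^{C} \times L$ is a set of edges, each carrying an action, a guard from the set of clock constraints $\mathcal{B}(C)$ and a set of clocks to reset, and
      $I : L \to \mathcal{B}(C)$ assigns an invariant to each location.
      A model in this style is a network of such automata composed in parallel, synchronizing on shared actions.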

  10. System Under Test
     Opening and closing of the automatic trunk doors using the buttons
     Locking and unlocking the car using the remote control in the keys, the key position detection and the door handle

  11. Problem Outline
    Different approaches to modeling

  12. Problem Outline
     The observer model is created according to the system specification
     There are multiple approaches to modeling both the SUT and the environment
     Fully permissive
       Equivalent of random stimuli
       Useful for discovering corner cases
     Fully restrictive
       Reduces the possible traces
       Allows more accurate models
     The question is how the different resulting model structures influence the coverage of the observer model

  13. Simple Environment Model
     Easy to create
     Each input button is modeled as a separate automaton (a minimal sketch follows below)

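    As an illustration only (the button names and the Python encoding below are assumptions, not taken from the case-study models), one input button of the simple environment model can be viewed as a two-location automaton whose stimulus is always enabled:

      import random

      class ButtonAutomaton:
          """Fully permissive model of a single input button.

          Two locations, 'released' and 'pressed'; an action is enabled in every
          location, which is why a set of such independent automata is equivalent
          to random stimuli.
          """

          def __init__(self, name):
              self.name = name
              self.location = "released"

          def enabled_actions(self):
              return ["press"] if self.location == "released" else ["release"]

          def fire(self, action):
              assert action in self.enabled_actions()
              self.location = "pressed" if action == "press" else "released"
              return f"{self.name}.{action}"

      # One independent automaton per input button (names are invented).
      environment = [ButtonAutomaton(n) for n in ("trunk_open", "trunk_close", "key_lock")]

      # Random stimulation: repeatedly fire any enabled action of any button.
      for _ in range(5):
          button = random.choice(environment)
          print(button.fire(random.choice(button.enabled_actions())))
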
  14. Complex Environment Model
     Based on the behavior of a real driver
     Generates more realistic test cases (a restrictive driver-style sketch follows below)

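    For contrast with the simple model above, a restrictive, driver-like environment offers stimuli only in an order a real driver could produce them; the locations and actions below are invented for illustration and do not reproduce the case-study model:

      class DriverAutomaton:
          """Restrictive environment model: far fewer traces are possible than
          with independent, always-enabled button automata."""

          # location -> {action: next location}; an illustrative scenario only
          TRANSITIONS = {
              "locked": {"unlock": "unlocked"},
              "unlocked": {"open_trunk": "trunk_open", "lock": "locked"},
              "trunk_open": {"close_trunk": "unlocked"},
          }

          def __init__(self):
              self.location = "locked"

          def enabled_actions(self):
              return list(self.TRANSITIONS[self.location])

          def fire(self, action):
              self.location = self.TRANSITIONS[self.location][action]
              return action

      driver = DriverAutomaton()
      print(driver.enabled_actions())                     # only 'unlock' at first
      print(driver.fire("unlock"), driver.enabled_actions())
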
  15. Simple Observer Model
     Each subsystem is modeled as an individual automaton
     A more accurate and permissive description of the SUT

  16. Simple Observer Model
     Each subsystem is modeled as an individual automaton
     A more accurate and permissive description of the SUT

  17. Complex Observer Model
     Models the full system
     More restrictive description of the entire SUT

  18. Experiment
    Comparison of modeling approaches using structural criteria

  19. Experiment Overview
     Compare all model variants by structural coverage criteria
       Coverage of nodes
       Coverage of edges
       Coverage of edge pairs
     Test runs were driven by different strategies
       Random
       Systematic
       Heuristic
     Find the most suitable modeling approach (a sketch of the coverage computation follows below)

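    The structural criteria above can be evaluated on a recorded run through the observer model; the sketch below shows one straightforward way to compute them (the model and trace representations are assumptions, not the tool's actual data structures):

      def coverage(model_nodes, model_edges, trace_edges):
          """Node, edge and edge-pair coverage of an observer model.

          model_nodes: set of location names in the model
          model_edges: set of (source, target) pairs present in the model
          trace_edges: list of (source, target) pairs taken during the test runs
          """
          visited_nodes = {n for edge in trace_edges for n in edge}
          visited_edges = set(trace_edges)
          # An edge pair is two model edges where the first ends where the second starts.
          model_pairs = {(a, b) for a in model_edges for b in model_edges if a[1] == b[0]}
          visited_pairs = {(a, b) for a, b in zip(trace_edges, trace_edges[1:]) if a[1] == b[0]}
          return {
              "nodes": len(visited_nodes & model_nodes) / len(model_nodes),
              "edges": len(visited_edges & model_edges) / len(model_edges),
              "edge_pairs": len(visited_pairs & model_pairs) / len(model_pairs) if model_pairs else 1.0,
          }

      # Tiny example with a three-location observer model.
      nodes = {"Idle", "Opening", "Open"}
      edges = {("Idle", "Opening"), ("Opening", "Open"), ("Open", "Idle")}
      print(coverage(nodes, edges, [("Idle", "Opening"), ("Opening", "Open")]))
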
  20. Taster Tool
     Tool developed by our team for online Model-Based Testing with Timed Automata models (a generic online-testing loop is sketched below)
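
    Online testing with Timed Automata models interleaves stimulus generation and observation. The loop below is a generic sketch of that idea only; the object names and methods are invented and do not describe Taster's actual interface:

      import random

      class RandomStrategy:
          """One possible test-run strategy: pick an enabled stimulus uniformly."""
          def choose(self, actions):
              return random.choice(actions) if actions else None

      def online_test(environment, observer, hil, strategy, steps=100):
          """Generic online MBT loop over assumed interfaces: pick an enabled
          environment action, apply it to the HiL rig, and check every observed
          SUT output against the observer model."""
          for _ in range(steps):
              action = strategy.choose(environment.enabled_actions())
              if action is None:            # the environment model offers no stimulus
                  break
              environment.fire(action)      # advance the environment model
              hil.apply(action)             # drive the real inputs / bus messages
              for output in hil.observe():
                  if not observer.accepts(output):
                      return f"FAIL: unexpected output {output!r}"
                  observer.advance(output)  # step the observer model accordingly
          return "PASS"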

  21. Results
    Experiment results and comparison of modeling approaches

  22. Nodes Coverage

  23. Edges Coverage

  24. Edge Pairs Coverage

  25. Conclusions and Future Work
    Results of the comparison and future research

  26. Conclusions
     The combination of the simple environment model and the simple observer model provided the most consistent results
     The worse performance of the complex environment model was expected
     The complex observer model provides good results as well, except for the edge-based criteria and the heuristic strategy
     The results suggest that the coverage of the observer model depends on the structure of the environment model
     It is more beneficial to create simpler, divided models, which are significantly easier to create and maintain

  27. Future Work
     Extension of the case study with additional subsystems
       Propulsion
       Intrusion detection
     Evaluation of test case generation strategies
       Extension of the existing strategies in the Taster tool
       Utilization of machine learning

  28. If you have any questions, feel free to ask
    Thank you for your attention
