Slide 1


Relation between Test Coverage and Timed Automata Model Structure
Lukáš Krejčí, Jan Sobotka, and Jiří Novák
[email protected]
TMPA 2019

Slide 2


Outline
- Background
  - Automotive integration testing
  - Model-Based Testing with Timed Automata models
- Problem description
- Case study
- Experiment
- Results
- Conclusion

Slide 3


Background
Integration testing and Model-Based Testing

Slide 4


Integration Testing
- Evaluation of interactions in a cluster of ECUs
  - Distributed functions
  - Bus communication
- Done independently by the car manufacturer
  - ECUs usually come from different suppliers
- Performed with real hardware using the HiL testing method
  - Complete car electronics or the relevant part of the electronic system
- Test cases (sequences) are implemented manually by test engineers
- Our team is trying to deploy test generation using Model-Based Testing principles
- Developed test cases are maintained during the car life cycle

Slide 5


MBT Concept Overview

Slide 6


HiL Testing Platform

Slide 7


HiL Testing Platform

Slide 8


Case Study
Automatic trunk doors control and keyless locking systems

Slide 9


Modeling Language
- The system and its environment are modeled as a network of Timed Automata
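A timed automaton combines discrete locations with real-valued clocks and clock-guarded transitions. As a minimal sketch of the formalism only (Python, with hypothetical names such as `TimedAutomaton` and the trunk-door locations; this is not the authors' actual tooling):

```python
# Minimal illustrative timed automaton: locations, a single clock, and
# transitions guarded by clock constraints. All names are hypothetical.
class TimedAutomaton:
    def __init__(self, initial):
        self.location = initial
        self.clock = 0.0
        # (source, action) -> (guard, reset_clock, target)
        self.transitions = {}

    def add_transition(self, source, action, guard, reset, target):
        self.transitions[(source, action)] = (guard, reset, target)

    def delay(self, amount):
        # Letting time pass increases the clock value.
        self.clock += amount

    def fire(self, action):
        guard, reset, target = self.transitions[(self.location, action)]
        if not guard(self.clock):
            raise ValueError(f"guard violated for action {action!r}")
        if reset:
            self.clock = 0.0
        self.location = target

# Example: the trunk doors must finish opening within 5 time units.
ta = TimedAutomaton("closed")
ta.add_transition("closed", "press_open", lambda c: True, True, "opening")
ta.add_transition("opening", "opened", lambda c: c <= 5.0, False, "open")
ta.fire("press_open")
ta.delay(3.0)
ta.fire("opened")
print(ta.location)  # -> open
```

The `delay`/`fire` split mirrors the usual timed-automaton semantics, where time elapses between discrete transitions and guards are checked against the current clock valuation.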

Slide 10


System Under Test
- Opening and closing of the automatic trunk doors using the buttons
- Locking and unlocking the car using the remote control in the keys, key position detection, and the door handle

Slide 11


Problem Outline
Different approaches to modeling

Slide 12


Problem Outline
- The observer model is created according to the system specification
- There are multiple approaches to modeling both the SUT and the environment
  - Fully permissive
    - Equivalent of random stimuli
    - Useful for discovering corner cases
  - Fully restrictive
    - Reduces the possible traces
    - Allows more accurate models
- The question is how the resulting model structure influences coverage of the observer model

Slide 13


Simple Environment Model
- Easy to create
- Each input button is modeled as a separate automaton
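The "each button is a separate automaton" idea can be sketched as a set of independent automata that may toggle at any step, which is exactly what makes this environment fully permissive. A hedged Python sketch (all class and signal names, e.g. `ButtonAutomaton` and `trunk_open`, are illustrative, not the actual model):

```python
import random

# Each input button is an independent automaton that may press or release
# at any step -- the fully permissive, random-stimuli style of environment.
# All names (ButtonAutomaton, trunk_open, ...) are illustrative only.
class ButtonAutomaton:
    def __init__(self, name):
        self.name = name
        self.pressed = False

    def step(self, rng):
        # Fully permissive: toggle the button state or do nothing, at random.
        if rng.random() < 0.5:
            self.pressed = not self.pressed
            return (self.name, "press" if self.pressed else "release")
        return None

rng = random.Random(42)  # fixed seed so the stimulus trace is reproducible
buttons = [ButtonAutomaton("trunk_open"), ButtonAutomaton("trunk_close")]
trace = []
for _ in range(10):  # ten environment steps
    for button in buttons:
        event = button.step(rng)
        if event is not None:
            trace.append(event)
print(trace)
```

Because the automata do not constrain each other, any interleaving of presses and releases is a legal trace; the complex environment model on the next slide would restrict these interleavings to driver-plausible sequences.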

Slide 14


Complex Environment Model
- Based on the behavior of a real driver
- Generates more realistic test cases

Slide 15


Simple Observer Model
- Each subsystem is modeled as an individual automaton
- More accurate and permissive description of the SUT

Slide 16


Simple Observer Model
- Each subsystem is modeled as an individual automaton
- More accurate and permissive description of the SUT

Slide 17


Complex Observer Model
- Models the full system
- More restrictive description of the entire SUT

Slide 18


Experiment
Comparison of modeling approaches using structural criteria

Slide 19


Experiment Overview
- Compare all model variants by structural coverage criteria
  - Coverage of nodes
  - Coverage of edges
  - Coverage of edge pairs
- Test runs were driven by different strategies
  - Random
  - Systematic
  - Heuristic
- Find the most suitable modeling approach
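The three criteria above can be computed directly from the model graph and an observed trace: a node is covered when visited, an edge when taken, and an edge pair when two composable edges are taken consecutively. A minimal sketch under those definitions (the graph, trace, and function names are illustrative, not the Taster tool's API):

```python
# Structural coverage of a model graph from an observed trace.
# Edges are (source, target) pairs; an edge pair is (e1, e2) where e1's
# target equals e2's source. Illustrative code, not the authors' tool.
def coverage(model_edges, trace_edges):
    nodes = {n for edge in model_edges for n in edge}
    visited_nodes = {n for edge in trace_edges for n in edge}
    visited_edges = set(trace_edges)
    # All edge pairs that are composable in the model.
    model_pairs = {(e1, e2) for e1 in model_edges for e2 in model_edges
                   if e1[1] == e2[0]}
    # Edge pairs actually taken consecutively in the trace.
    trace_pairs = set(zip(trace_edges, trace_edges[1:]))
    return (len(visited_nodes & nodes) / len(nodes),
            len(visited_edges & model_edges) / len(model_edges),
            (len(trace_pairs & model_pairs) / len(model_pairs)
             if model_pairs else 1.0))

# Tiny example model: A -> B, B -> C, B -> A; the trace takes A->B, B->C.
edges = {("A", "B"), ("B", "C"), ("B", "A")}
trace = [("A", "B"), ("B", "C")]
node_cov, edge_cov, pair_cov = coverage(edges, trace)
print(node_cov, edge_cov, pair_cov)  # full node coverage, 2/3 of edges, 1/3 of pairs
```

Note how the short example already shows the ordering of the criteria by strictness: the same trace achieves full node coverage but only partial edge and edge-pair coverage.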

Slide 20


Taster Tool
- A tool developed by our team for online Model-Based Testing with Timed Automata models

Slide 21


Results
Experiment results and comparison of the modeling approaches

Slide 22


Node Coverage

Slide 23


Edge Coverage

Slide 24


Edge-Pair Coverage

Slide 25


Conclusions and Future Work
Results of the comparison and future research

Slide 26


Conclusions
- The combination of the simple environment model and the simple observer model provided the most consistent results
- The worse performance of the complex environment model was expected
- The complex observer model provides good results as well, except for edge-based criteria and the heuristic strategy
- The results suggest that coverage of the observer model depends on the structure of the environment model
- It is more beneficial to create simpler, divided models, which are significantly easier to create and maintain

Slide 27


Future Work
- Extension of the case study with additional subsystems
  - Propulsion
  - Intrusion detection
- Evaluation of test-case generation strategies
  - Extension of the existing strategies in the Taster tool
  - Utilization of machine learning

Slide 28


Thank you for your attention.
If you have any questions, feel free to ask.