Passive testing of production systems based on model inference (MEMOCODE 2015)

William Durand
September 22, 2015


This paper tackles the problem of testing production systems, i.e. systems that run in industrial environments and are distributed over several devices and sensors. Such systems usually lack models, or are described by models that are not up to date. Without any model, the testing process is often done by hand and tends to be a heavy, tedious task. This paper addresses this issue by proposing a framework called Autofunk, which combines several fields such as model inference, expert systems, and machine learning. This framework, designed in collaboration with our industrial partner Michelin, infers formal models that can be used as specifications to perform offline passive testing. Given a large set of production messages, it infers exact models that capture only the functional behaviours of a system under analysis. Thereafter, the inferred models are used as input by a passive tester, which checks whether a system under test conforms to them. Since the inferred models do not express all the possible behaviours that should happen, we define conformance with two implementation relations. We evaluate our framework on real production systems and show that it can be used in practice.




  1. Passive testing of production systems based on model inference. William Durand, Sébastien Salva. September 22, 2015 / MEMOCODE'15
  3. Quick Tour @ Michelin

  4. A factory is divided into several workshops, one for each step of the manufacturing process.
  5. A production system is composed of devices, production machines, and one or more software applications to control them. In our case, we target a single workshop only.
  6. Software exchanges information with points and machines by sending and receiving production messages. A simple example of 3 messages in a human-readable format:
     17-Sep-2015 23:29:59.50|17011|MSG_IN [pid: 1] [nsec: 8] [point: 1] ...
     17-Sep-2015 23:29:59.61|17021|MSG_OUT [pid: 1] [nsec: 8] [point: 3] ...
     17-Sep-2015 23:29:59.70|17011|MSG_IN [pid: 2] [nsec: 8] [point: 2] ...
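A log line like the ones above can be parsed mechanically. Here is a minimal sketch in Python, assuming the `timestamp|code|direction [key: value] ...` layout shown on the slide; the real messages are binary and use custom protocols, so this only illustrates the structure of the human-readable rendering:

```python
import re

# Hypothetical parser for the human-readable rendering shown above;
# the real production messages are binary, so this is illustrative only.
LINE = re.compile(
    r"(?P<ts>\S+ \S+)\|(?P<code>\d+)\|(?P<dir>MSG_IN|MSG_OUT)"
    r"(?P<attrs>(?: \[\w+: \d+\])+)"
)

def parse_message(line: str) -> dict:
    """Return the timestamp, direction, and [key: value] attributes of a line."""
    m = LINE.match(line)
    attrs = re.findall(r"\[(\w+): (\d+)\]", m.group("attrs"))
    return {"ts": m.group("ts"), "dir": m.group("dir"),
            **{k: int(v) for k, v in attrs}}
```

For the first line above, this yields the product identifier `pid=1`, the sequence number `nsec=8`, and the point `point=1`.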
  7. Production messages are exchanged in a binary format (custom protocols), through centralized exchanging systems.
  8. Each production message is tied to a product (e.g. a tire), identified by a product identifier (pid). Gathering all the production messages related to a product makes it possible to retrieve what happened to it (its behaviours).
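That grouping step can be sketched in a few lines; `messages` is assumed to be a list of dicts with a `pid` key, already in exchange order:

```python
from collections import defaultdict

def traces_by_product(messages):
    """One trace per product: the ordered sequence of messages sharing a pid."""
    traces = defaultdict(list)
    for msg in messages:  # assumed already ordered by the exchanging system
        traces[msg["pid"]].append(msg)
    return dict(traces)
```

Each resulting sequence describes what happened to one product as it moved through the workshop.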
  9. Background

  10. Development Teams POV: 100+ applications running in production. Not (fully) covered by tests. Documentation most likely outdated. MUST be maintained for ~20 years!
  11. Customers (Factories) POV: Stability over anything else. Maintenance periods are planned, but rather long (> 1 week). 1 hour of (unexpected) downtime = $50k.
  12. Testing such production systems is complex and takes a lot of time, as it involves the physical devices, and there are numerous behaviours.
  13. These behaviours could be formally described in a model, but writing such models by hand would be complicated and error-prone. Not suitable for Michelin applications.
  14. Our Approach (1/3): By leveraging the information carried by the messages, we build formal and exact models (STS) that describe the functional behaviours of a production System Under Analysis (SUA).
  15. Our Approach (2/3): Michelin's exchanging systems guarantee the order in which the production messages occurred. We capture the messages directly in these systems to avoid loss, reordering, and/or duplication of the production messages.
  16. Our Approach (3/3): We take production messages from another System Under Test (SUT), and we check whether the SUT conforms to the SUA (using two implementation relations to define the notion of conformance).
  17. The Big Picture

  18. Model Inference: (1) we collect production system traces (monitoring); (2) we segment these traces to create different complete trace sets (outlier detection approach); (3) we build (rather large) STS models from these sets; (4) we reduce the models to obtain "usable" models. Durand, W., & Salva, S. (2015). Autofunk: An Inference-Based Formal Model Generation Framework for Production Systems. In FM 2015: Formal Methods (pp. 577-580). Springer International Publishing.
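Step 3 can be pictured with a much-simplified stand-in: folding traces (here, plain sequences of message labels) into a prefix tree, where every distinct prefix becomes a state and every label a branch. The actual framework builds STS models with guards and parameters, so this sketch only conveys why the raw models get large before reduction:

```python
def build_prefix_tree(traces):
    """Simplified stand-in for the raw model of step 3: fold traces into
    a prefix tree (each distinct prefix = a state, each label = a branch)."""
    root = {}
    for trace in traces:
        node = root
        for label in trace:
            node = node.setdefault(label, {})
    return root

def count_branches(node):
    """Total number of transitions in the prefix tree."""
    return sum(1 + count_branches(child) for child in node.values())
```

Every trace that diverges from the others adds new branches, which is why the initial models reach tens of thousands of branches and need the reduction step.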
  19. Model Reduction

  20. Model Inference Experimentation: 10 million production messages (20 days), 161,035 traces. First model: S = 77,058 branches, reduced to R(S) = 1,587 branches. Second model: S = 43,536 branches, reduced to R(S) = 1,585 branches. It took 6 minutes to build the two models.
  21. In Depth Testing

  22. Offline Passive Testing: Two implementation relations: the trace preorder relation and our own, weaker implementation relation. Our testing algorithm relies on both to give verdicts. Partial models = no Fail verdict.
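The trace-preorder side of the check can be sketched as plain trace inclusion. This is a deliberate simplification of the paper's two relations: traces are label sequences, and because the inferred model is partial, a trace outside the model only yields a "possibly fail" verdict, never a definitive Fail:

```python
def passive_test(sut_traces, sua_traces):
    """Hedged sketch of the offline check: a SUT trace found in the SUA
    trace set gets a Pass verdict; any other trace is only 'possibly fail',
    since the inferred (partial) model cannot justify a definitive Fail."""
    known = {tuple(t) for t in sua_traces}
    verdicts = {"pass": [], "possibly_fail": []}
    for trace in sut_traces:
        key = "pass" if tuple(trace) in known else "possibly_fail"
        verdicts[key].append(trace)
    return verdicts
```

The weaker relation of the paper then refines the "possibly fail" set instead of rejecting it outright.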
  23. The Need for a Weaker Impl. Relation: "Since I know that my model is not complete, I am willing to accept non-standard behaviours up to a certain point."
  24. Experimentation: SUA: 53,996 traces; SUT: 25,047 traces. 98% are Pass traces. The remaining 2% are new behaviours that never occurred before. It took 10 minutes to check conformance.
  25. Now, What? 2% still represents many traces, and can contain many false positives. For Michelin engineers, it is still "better than nothing". Larger sets of traces to build the models should reduce the number of false positives, but we should find a way to refine this possibly-fail trace set.
  26. Conclusion: Fast passive testing framework for a specific context. Model inference: the more production messages, the better! Testing: still too many possibly-fail traces.
  27. Future Work: Online passive testing (just-in-time fault detection?). Active testing by leveraging the inferred models again. Developing a way to focus on specific parts of the system.
  28. Thank You. Questions?