
[PhD Thesis Defense] Automated Test Generation for production systems with a Model-based Testing approach

This thesis tackles the problem of testing (legacy) production systems such as those of our industrial partner Michelin, one of the three largest tire manufacturers in the world, by means of Model-based Testing.

William Durand

May 04, 2016

Transcript

  1. Automated Test Generation for production systems with a Model-based Testing approach. William Durand • PhD Thesis Defense • May 4th, 2016
  2. "Automated..." What? Automated: "to use machines instead of people". Test: "the means by which the quality of anything is determined". Generation: "the act or process of generating". (for) production systems: "a set of production machines controlled by a software (or application)". (with a) Model-based Testing approach.
  3. Very First Meeting @ Michelin. "We face several issues with our Level 2 applications." "Some of them are not covered by tests. We have many legacy applications and we would like to avoid regressions." "We have outdated documentation we cannot rely on." "These applications run in our factories for years, but we can state that they behave correctly in production."
  4. Development Teams. 50+ applications running in production. Different programming languages and versions. MUST be maintained for ~20 years!
  5. Factories. Stability over anything else. Maintenance periods are planned, but rather long (> 1 week). 1 hour of (unexpected) downtime = $50k.
  6. This Thesis. The goal of this thesis is to propose technical solutions to Michelin engineers in order to prevent unexpected downtimes with (regression) testing.
  7. Hypotheses. 1. The applications deployed in production behave correctly. 2. We do not consider any (existing) documentation.
  8. Insight of the Approach. 1. The inference of models of production systems based on the data exchanged in a production environment. 2. The design of a conformance testing technique based on these inferred models, targeting production systems.
  9. Publications.
    Durand, W., & Salva, S. (2014). Inférence de modèles dirigée par la logique métier. In Actes de la 13ème édition d'AFADL, atelier francophone sur les Approches Formelles dans l'Assistance au Développement de Logiciels.
    Durand, W., & Salva, S. (2014). Inferring models with rule-based expert systems. In Proceedings of the Fifth Symposium on Information and Communication Technology (pp. 92-101). ACM.
    Salva, S., & Durand, W. (2015). Autofunk, a fast and scalable framework for building formal models from production systems. In Proceedings of the 9th ACM International Conference on Distributed Event-Based Systems (pp. 193-204). ACM.
    Durand, W., & Salva, S. (2015). Autofunk: An Inference-Based Formal Model Generation Framework for Production Systems. In FM 2015: Formal Methods (pp. 577-580). Springer International Publishing.
    Durand, W., & Salva, S. (2015). Passive testing of production systems based on model inference. In Formal Methods and Models for Codesign (MEMOCODE), 2015 ACM/IEEE International Conference on (pp. 138-147). IEEE.
    2 papers under submission (ACM CSUR, JSS).
  10. Model Inference. A research field that aims at automatically deriving models that express the behaviors of existing software.
  11. Active vs. Passive. Active inference: methods that interact with the system. Passive inference: use a fixed set of data (no interaction). † We should not disturb the production systems.
  12. Passive Inference. Documentation, white-box, state-based abstraction, event sequence abstraction (e.g., kTail, kBehavior). † Over-approximated models are not suitable for testing.
  13. Production Events & Michelin Systems. Software exchanges information with physical devices and machines by sending and receiving production events. Michelin's exchanging systems guarantee the order in which the production events occurred. Events can be captured directly in these systems to avoid loss, reordering, and/or duplication of the events.
  14. Example.
    17-Jun-2014 23:29:59.00|INFO|New File
    17-Jun-2014 23:29:59.50|17011|MSG_IN [nsys: 1] [nsec: 8] [point: 1] [pid: 1]
    17-Jun-2014 23:29:59.61|17021|MSG_OUT [nsys: 1] [nsec: 8] [point: 3] [tpoint: 8] [pid: 1]
    17-Jun-2014 23:29:59.70|17011|MSG_IN [nsys: 1] [nsec: 8] [point: 2] [pid: 2]
    17-Jun-2014 23:29:59.92|17021|MSG_OUT [nsys: 1] [nsec: 8] [point: 4] [tpoint: 9] [pid: 2]
    A set of production events in a human-readable format (a parsing sketch follows below).
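    As an illustration, a minimal Java sketch of how such a log line could be parsed into an event object. The class and field names (ProductionEvent, code, label, parameters) are hypothetical and only assume the pipe-separated format shown above, not Autofunk's actual data model.

    import java.util.LinkedHashMap;
    import java.util.Map;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    // Hypothetical representation of one production event: a timestamp, a code
    // (e.g. 17011), a label (e.g. MSG_IN) and its parameter assignments.
    final class ProductionEvent {
        final String timestamp;
        final String code;
        final String label;
        final Map<String, String> parameters = new LinkedHashMap<>();

        ProductionEvent(String timestamp, String code, String label) {
            this.timestamp = timestamp;
            this.code = code;
            this.label = label;
        }

        // Parses a line such as:
        // 17-Jun-2014 23:29:59.50|17011|MSG_IN [nsys: 1] [nsec: 8] [point: 1] [pid: 1]
        static ProductionEvent parse(String line) {
            String[] parts = line.split("\\|", 3);
            String[] labelAndParams = parts[2].split(" ", 2);
            ProductionEvent event = new ProductionEvent(parts[0], parts[1], labelAndParams[0]);
            if (labelAndParams.length > 1) {
                Matcher m = Pattern.compile("\\[(\\w+): ([^\\]]+)\\]").matcher(labelAndParams[1]);
                while (m.find()) {
                    event.parameters.put(m.group(1), m.group(2));
                }
            }
            return event;
        }
    }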
  15. Interesting Facts. Each production event is tied to a product (e.g., a tire), identified by a product identifier (pid). Gathering all the production events related to a product allows one to retrieve what happened to it (its behavior). That is what Michelin experts used to do (a grouping sketch follows below).
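    To make this concrete, a minimal sketch (building on the hypothetical ProductionEvent class above) that groups captured events by their pid parameter so that each product's ordered sequence of events, i.e. its behavior, can be retrieved:

    import java.util.ArrayList;
    import java.util.LinkedHashMap;
    import java.util.List;
    import java.util.Map;

    final class TraceBuilder {
        // Groups events by product identifier (pid), preserving the order
        // guaranteed by Michelin's exchanging systems. Events without a pid
        // (e.g. INFO lines) are skipped here.
        static Map<String, List<ProductionEvent>> groupByProduct(List<ProductionEvent> events) {
            Map<String, List<ProductionEvent>> behaviors = new LinkedHashMap<>();
            for (ProductionEvent event : events) {
                String pid = event.parameters.get("pid");
                if (pid != null) {
                    behaviors.computeIfAbsent(pid, k -> new ArrayList<>()).add(event);
                }
            }
            return behaviors;
        }
    }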
  16. Expert System. A computer system that emulates the decision-making ability of a human expert. Components: an inference engine, facts, and inference rules ("When LHS, then RHS").
  17. Autofunk. A framework and a tool to infer models. v1: proof of concept for web applications. v2 and v3: target production systems.
  18. Example (1/2).
    17-Jun-2014 23:29:59.00|INFO|New File

    rule "Remove INFO events"
    when
        $valued_event : ValuedEvent(Assign.type == TYPE_INFO)
    then
        retract($valued_event)
    end

    A rule written with Drools. The INFO event above will be filtered out (a sketch of how such a rule could be fired follows below).
  19. Example (2/2).
    Traces(Sua) = {
      (17011({nsys, nsec, point, pid}), {nsys:=1, nsec:=8, point:=1, pid:=1})
      (17021({nsys, nsec, point, tpoint, pid}), {nsys:=1, nsec:=8, point:=3, tpoint:=8, pid:=1}),
      (17011({nsys, nsec, point, pid}), {nsys:=1, nsec:=8, point:=2, pid:=2})
      (17021({nsys, nsec, point, tpoint, pid}), {nsys:=1, nsec:=8, point:=4, tpoint:=9, pid:=2})
    }
    Each valued event pairs an event label with its parameter set and the parameter assignment observed in the logs.
  20. Segmentation & Filtering. Autofunk v2: statistical analysis. Autofunk v3: k-means clustering algorithm (an illustrative sketch follows below). The result is a complete trace set.
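    The slide does not spell out the clustering features, so the following is purely illustrative: a one-dimensional k-means (k = 2) over trace lengths that could separate short, likely incomplete traces from complete ones. The choice of trace length as the feature is an assumption made only for this sketch.

    import java.util.ArrayList;
    import java.util.List;

    final class TraceFiltering {
        // Illustrative 1-D k-means (k = 2) over trace lengths: traces assigned to
        // the cluster with the larger centroid are kept as "complete" traces.
        // The feature (trace length) is an assumption made for this sketch.
        static List<List<ProductionEvent>> keepLikelyComplete(List<List<ProductionEvent>> traces) {
            double c1 = Double.MAX_VALUE, c2 = 0;
            for (List<ProductionEvent> t : traces) {
                c1 = Math.min(c1, t.size());
                c2 = Math.max(c2, t.size());
            }
            for (int iteration = 0; iteration < 20; iteration++) {
                double sum1 = 0, sum2 = 0;
                int n1 = 0, n2 = 0;
                for (List<ProductionEvent> t : traces) {
                    if (Math.abs(t.size() - c1) <= Math.abs(t.size() - c2)) { sum1 += t.size(); n1++; }
                    else { sum2 += t.size(); n2++; }
                }
                if (n1 > 0) c1 = sum1 / n1;
                if (n2 > 0) c2 = sum2 / n2;
            }
            double complete = Math.max(c1, c2), incomplete = Math.min(c1, c2);
            List<List<ProductionEvent>> kept = new ArrayList<>();
            for (List<ProductionEvent> t : traces) {
                if (Math.abs(t.size() - complete) <= Math.abs(t.size() - incomplete)) {
                    kept.add(t);
                }
            }
            return kept;
        }
    }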
  21. Model Generation. Based on the STS/LTS model definitions. A run set is constructed from the filtered trace set; each run is then transformed into a unique STS path (sketched below).
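    As an illustration only (a deliberately simplified STS, with hypothetical class names building on ProductionEvent above): every valued event of a run yields one transition from location i to location i + 1, labeled with the event and guarded by the observed parameter values.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Map;

    // Simplified STS transition: source and target locations, an event label,
    // and a guard given here as the equality constraints observed in the run.
    final class Transition {
        final String source;
        final String target;
        final String label;
        final Map<String, String> guard;

        Transition(String source, String target, String label, Map<String, String> guard) {
            this.source = source;
            this.target = target;
            this.label = label;
            this.guard = guard;
        }
    }

    final class PathBuilder {
        // Turns one run (the ordered events of a single product) into a unique
        // STS path: l0 --e1--> l1 --e2--> l2 ... with fresh locations per path.
        static List<Transition> toStsPath(int pathIndex, List<ProductionEvent> run) {
            List<Transition> path = new ArrayList<>();
            for (int i = 0; i < run.size(); i++) {
                ProductionEvent e = run.get(i);
                path.add(new Transition(
                    "l" + pathIndex + "_" + i,
                    "l" + pathIndex + "_" + (i + 1),
                    e.label,
                    e.parameters));
            }
            return path;
        }
    }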
  22. Inferred Models. One (sub-)model per entry point. A common location per model. Large yet partial STS models.
  23. Model Reduction. Paths with the same sequence of events are merged, and their guards are stored in matrices. Fast computation with hash functions (sketched below). Trace equivalence holds between the original and reduced models.
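    A minimal sketch of the merging step, assuming (for illustration) that paths are keyed by their sequence of event labels; hashing that sequence, here via a HashMap key, gives the fast lookup the slide refers to. Class names reuse the hypothetical Transition type sketched above.

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;
    import java.util.stream.Collectors;

    final class ModelReduction {
        // Groups STS paths by their event-label sequence: paths sharing the same
        // sequence are merged into one entry, and their guards (the observed
        // parameter assignments) can then be collected, e.g. into a matrix.
        static Map<String, List<List<Transition>>> mergeBySequence(List<List<Transition>> paths) {
            Map<String, List<List<Transition>>> merged = new HashMap<>();
            for (List<Transition> path : paths) {
                String key = path.stream().map(t -> t.label).collect(Collectors.joining("|"));
                merged.computeIfAbsent(key, k -> new ArrayList<>()).add(path);
            }
            return merged;
        }
    }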
  24. Experimentation Results.
    Exp. | # events  | ...
    D1   | 3,851,264 | 73,364  | 35,541 | 924
    D2   |           | 17,402  |        | 837
    E1   | 7,635,494 | 134,908 | 61,795 | 1,441
    E2   |           | 35,799  |        | 1,401
    F1   | 9,231,160 | 161,035 | 77,058 | 1,587
    F2   |           | 43,536  |        | 1,585
    It took 5 minutes to build the two models of experiment F.
  25. Model-based Testing. The application of Model-based design for designing, and optionally also executing, artifacts to perform testing.
  26. Active vs. Passive. † We should not disturb the production systems (again).
  27. Offline Passive Testing. Model inference on a system under analysis (Sua). Conformance testing on a system under test (Sut). Reuse the reduced models. Collect traces on Sut, then perform testing.
  28. Model Normalization. Remove runtime-dependent information. Label the verdict locations. The normalized model describes "some possible complete behaviors that should happen" (an illustrative sketch follows below).
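    As an illustration of the first step (the concrete set of runtime-dependent parameters is an assumption here, e.g. the product identifier pid), normalization could simply drop such parameters from every event so that traces of different products become comparable:

    import java.util.Collections;
    import java.util.List;
    import java.util.Set;

    final class Normalization {
        // Parameters assumed (for this sketch only) to be runtime-dependent and
        // therefore removed before comparing traces against the model.
        private static final Set<String> RUNTIME_DEPENDENT = Collections.singleton("pid");

        static void normalize(List<List<ProductionEvent>> traces) {
            for (List<ProductionEvent> trace : traces) {
                for (ProductionEvent event : trace) {
                    event.parameters.keySet().removeAll(RUNTIME_DEPENDENT);
                }
            }
        }
    }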
  29. Implementation Relation (2/2). "Since I know that my model is not complete, I am willing to accept non-standard behaviors up to a certain point."
  30. Passive Testing Algorithm. One single algorithm. Two verdicts: ≤ct and ≤mct. Provides "possibly fail" trace sets. The algorithm is sound (a simplified sketch of the underlying trace check follows below).
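    A highly simplified sketch of the underlying idea, not the thesis's actual algorithm: a Sut trace whose event-label sequence matches a path of the normalized model is considered conforming, and every other trace is collected into a "possibly fail" set. It reuses the "|"-joined label keys of the reduction sketch above; guards and the distinction between the two relations are omitted.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Set;
    import java.util.stream.Collectors;

    final class PassiveTesting {
        // Simplified conformance check: a Sut trace "passes" if its sequence of
        // event labels matches the label sequence of some path of the normalized
        // model; everything else is reported as "possibly fail".
        static List<List<ProductionEvent>> possiblyFail(List<List<ProductionEvent>> sutTraces,
                                                        Set<String> modelLabelSequences) {
            List<List<ProductionEvent>> suspicious = new ArrayList<>();
            for (List<ProductionEvent> trace : sutTraces) {
                String key = trace.stream().map(e -> e.label).collect(Collectors.joining("|"));
                if (!modelLabelSequences.contains(key)) {
                    suspicious.add(trace);
                }
            }
            return suspicious;
        }
    }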
  31. Use Case (1/2). Traces(Sua): 53,996 traces. Traces(Sut): 25,047 traces. 98% of the Sut traces conform to the inferred model; the remaining 2% are new behaviors that never occurred before. It took 10 minutes to check conformance.
  32. Use Case (2/2). The 2% represent about 500 traces and may contain false positives. "Still way better than before (25,000)." Larger trace sets should help. How can this "possibly fail" trace set be refined?
  33. Recap'. Two approaches combining model inference, machine learning, and expert systems to infer models of web applications and production systems (Autofunk). Offline passive testing for production systems on top of Autofunk, along with two implementation relations. An implementation of Autofunk for Michelin.
  34. A Note on Autofunk. 2,831 LOC, Java 8, tested (90% code coverage). 10 inference rules for Michelin. Not a production-ready tool.
  35. Online Passive Testing. Just-in-time fault detection. Traces constructed on the fly. Work in progress, with a few remaining issues.
  36. "These applications run in our factories for years, but we can state that they behave correctly in production."
  37. Thoughts on Model Inference. How to avoid over- or under-approximation? More techniques should take scalability into account. Combining research fields = WIN!