Model Checking Contest @ Petri Nets 2016
6th edition, Toruń, Poland, June 21, 2016
Last Updated: June 30, 2016
June 30, 2016: Results are available online
June 21, 2016: Results are presented in Toruń alongside the Petri Nets conference
June 1, 2016: «Surprise» models are published
March 25, 2016: Material to help developers extract results from traces (and check their tools)
March 24, 2016: Communication from the MCC staff about the surprise models for 2016
March 20, 2016: Update of the disk image (in the submission kit)
March 8, 2016: Information about the precision used when comparing results
March 3, 2016: Update of the disk image (in the submission kit)
March 2, 2016: Useful oracle for tool developers
February 15, 2016: Message for tool developers
February 4, 2016: The submission kit is out; rules have been updated with a section on evaluation and scoring
November 17, 2015: The call for tools is out
November 3, 2015: The call for models is out

Results for the MCC @ Petri Nets 2016

Thanks to K. Wolf, the results for 2015 and 2016 have been organized in a MySQL database that you may download here (73.3 MB). Since bugs in some models were detected after the 2015 and 2016 editions of the contest, some results may not be reproducible with the updated versions of the PNML files available from the model page.
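
For those who want to explore the database programmatically, here is a minimal sketch of a query, assuming the dump has been imported into a local MySQL server. The connection parameters and the table and column names (results, tool, examination) are illustrative assumptions, not the actual schema of the dump; inspect it with SHOW TABLES after importing.

import mysql.connector  # pip install mysql-connector-python

# Hypothetical schema: adjust the table and column names to the real dump.
conn = mysql.connector.connect(
    host="localhost", user="mcc", password="secret", database="mcc"
)
cur = conn.cursor()
# Count, for one examination, how many results each tool produced.
cur.execute(
    "SELECT tool, COUNT(*) FROM results WHERE examination = %s GROUP BY tool",
    ("StateSpace",),
)
for tool, count in cur.fetchall():
    print(tool, count)
cur.close()
conn.close()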

If you want to cite the 2016 MCC report, please proceed as follows (BibTeX entry):

@misc{mcc:2016,
   Author = {F. Kordon and H. Garavel and L. M. Hillah and F. Hulin-Hubard and
   G. Ciardo and A. Hamez and L. Jezequel and A. Miner and J. Meijer and E. Paviot-Adet
   and D. Racordon and C. Rodriguez and C. Rohr and J. Srba and Y. Thierry-Mieg and
   G. Tr{\d i}nh and K. Wolf},
   Howpublished = {{http://mcc.lip6.fr/2016/results.php}},
   Lastchecked = {2016},
   Urldate = {2016},
   Month = {June},
   Title = {{Complete Results for the 2016 Edition of the Model Checking Contest}},
   Year = {2016}}

The rendered citation should look like this (useful for those who prefer Word-like word processors ;-):

F. Kordon, H. Garavel, L. M. Hillah, F. Hulin-Hubard, G. Ciardo, A. Hamez, L. Jezequel, A. Miner, J. Meijer, E. Paviot-Adet, D. Racordon, C. Rodriguez, C. Rohr, J. Srba, Y. Thierry-Mieg, G. Trịnh, and K. Wolf. Complete Results for the 2016 Edition of the Model Checking Contest. http://mcc.lip6.fr/2016/results.php, June 2016.

Objectives

The Model Checking Contest is a yearly scientific event dedicated to the assessment of formal verification tools for concurrent systems.

The Model Checking Contest has two different parts: the Call for Models, which gathers Petri net models proposed by the scientific community, and the Call for Tools, which benchmarks verification tools developed within the scientific community.

The objective of the Model Checking Contest is to compare the efficiency of verification techniques according to the characteristics of models. To do so, the Model Checking Contest compares tools on several classes of models with scaling capabilities (e.g., parameter values that determine the «size» of the associated state space). Through feedback on tool efficiency over the selected benchmarks, we aim to identify the techniques that can tackle a given type of problem (e.g. state space generation, deadlock detection, reachability analysis, causal analysis).

After five editions in 2011, 2012, 2013, 2014, and 2015, this sixth edition takes place at PETRI NETS 2016 in Toruń.

Results of the Previous Editions

Below are quick links to the results of past editions of the Model Checking Contest:

Important Dates

Scientific Papers Mentioning the Model Checking Contest

  1. Component-wise Incremental LTL Model Checking (2016)
  2. Nested-Unit Petri Nets: A Structural Means to Increase Efficiency and Scalability of Verification on Elementary Nets (2015)
  3. Building a Symbolic Model Checker from Formal Language Description (2015)
  4. Saturation-Based Incremental LTL Model Checking with Inductive Proofs (2015)
  5. New Search Strategies for the Petri Net CEGAR Approach (2015)
  6. Bounded Model Checking High Level Petri Nets in PIPE+Verifier (2014)
  7. PeCAn: Compositional Verification of Petri Nets Made Easy (2014)
  8. Formal Modeling and Analysis Techniques for High Level Petri Nets (2014)
  9. Teaching formal methods: Experience at UPMC and UP13 with CosyVerif (2014)
  10. BenchKit, a Tool for Massive Concurrent Benchmarking (2014)
  11. Petri Nets Research at BTU in Cottbus, Germany (2014)
  12. Read, Write and Copy Dependencies for Symbolic Model Checking (2014)
  13. Compilation de réseaux de Petri : modèles haut niveau et symétries de processus [Compilation of Petri nets: high-level models and process symmetries] (2014)
  14. Formal verification problems in a big data world: towards a mighty synergy (2014)
  15. Advanced Saturation-based Model Checking (2014)
  16. Compilation de réseaux de Petri [Compilation of Petri nets] (2013)
  17. Verifiable Design of a Satellite-based Train Control System with Petri Nets (2014)
  18. Unifying the syntax and semantics of modular extensions of Petri nets (2013)
  19. Modeling and Analyzing Wireless Sensor Networks with VeriSensor: An Integrated Workflow (2013)
  20. Building Petri nets tools around Neco compiler (2013)
  21. LTL Model Checking with Neco (2013)
  22. MaRDiGraS: Simplified Building of Reachability Graphs on Large Clusters (2013)
  23. CosyVerif: An Open Source Extensible Verification Environment (2013)
  24. A Modular Approach for Reusing Formalisms in Verification Tools of Concurrent Systems (2013)
  25. Half a century after Carl Adam Petri’s Ph.D. thesis: A perspective on the field (2013)
  26. Verification based on unfoldings of Petri nets with read arcs (2013)
  27. High-level Petri net model checking: the symbolic way (2012)
  28. Stubborn Sets for Simple Linear Time Properties (2012)
  29. High-Level Petri Net Model Checking with AlPiNA (2011)
  30. Crocodile: A Symbolic/Symbolic Tool for the Analysis of Symmetric Nets with Bag (2011)