Model Checking Contest 2022
12th edition, Bergen, Norway, June 21, 2022
Last Updated: June 22, 2022
June 22, 2022: minor corrections to the MCC'2022 results
June 21, 2022: deployment of the results of the MCC'2022
June 21, 2022: publication of surprise models
January 20, 2022: calls for models and for tools are out
January 13, 2022: web site for 2022 online
November 6, 2021: deployed web site for 2022 (still in draft mode)

Results for the MCC 2022

As the MCC is again a remote event this year, the results will be released in two steps:

If you want to cite the 2022 MCC report, please use the following BibTeX entry:

@misc{mcc:2022,
	Author = {F. Kordon and P. Bouvier and H. Garavel and F. Hulin-Hubard and
		N. Amat and E. Amparore and B. Berthomieu and D. Donatelli and
		S. {Dal Zilio} and {P. G.} Jensen and L. Jezequel and C. He and S. Li and
		E. Paviot-Adet and J. Srba and Y. Thierry-Mieg},
	Howpublished = {{http://mcc.lip6.fr/2022/results.php}},
	Lastchecked = 2022,
	Month = {June},
	Title = {{Complete Results for the 2022 Edition of the Model Checking Contest}},
	Urldate = {2022},
	Year = {2022}}

Objectives

The Model Checking Contest is a yearly scientific event dedicated to the assessment of formal verification tools for concurrent systems.

The Model Checking Contest has two different parts: the Call for Models, which gathers Petri net models proposed by the scientific community, and the Call for Tools, which benchmarks verification tools developed within the scientific community.

The objective of the Model Checking Contest is to compare the efficiency of verification techniques according to the characteristics of models. To do so, the Model Checking Contest compares tools on several classes of models with scaling capabilities (e.g., parameters that determine the «size» of the associated state space). Through feedback on tool efficiency over the selected benchmarks, we aim to identify the techniques that can tackle a given type of problem (e.g., state space generation, deadlock detection, reachability analysis, causal analysis).

The Model Checking Contest is composed of two calls: a call for models and a call for tools.

After eleven editions in 2011, 2012, 2013, 2014, 2015, 2016, 2017, 2018, 2019, 2020, and 2021, this twelfth edition takes place in Bergen, alongside the Petri Nets conference.

Results of the Previous Editions

Below are quick links to the results of the past editions of the Model Checking Contest:

Important dates