Model Checking Contest 2021
11th edition, Paris, France, June 23, 2021
Home Page
Last Updated: June 28, 2021
June 28, 2021: Final deployment of the results of the MCC'2021
June 23, 2021: Temporary deployment of the MCC'2021 results (polished version coming soon)
June 12, 2021: Surprise models are out
February 5, 2021: Updated version of the Submission kit (minor typo corrected since the January version)
January 17, 2021: New web site for 2021 ready (calls for models and for tools are out)
January 5, 2021: Deployed web site for 2021 (still in draft mode)

Results for the MCC 2021

Since the MCC is once again a remote event this year, the results are released in two steps: a temporary deployment on June 23, 2021, followed by the final, polished version on June 28, 2021.

If you want to cite the 2021 MCC report, please use the following BibTeX entry:

@misc{mcc:2021,
	Author = {F. Kordon and P. Bouvier and H. Garavel and L. M. Hillah and F. Hulin-Hubard and
		N. Amat and E. Amparore and B. Berthomieu and S. Biswal and D. Donatelli and
		F. Galla and S. {Dal Zilio} and {P. G.} Jensen and C. He and
		D. {Le Botlan} and S. Li and J. Srba and Y. Thierry-Mieg and 
		A. Walner and K. Wolf},
	Howpublished = {{http://mcc.lip6.fr/2021/results.php}},
	Lastchecked = 2021,
	Month = {June},
	Title = {{Complete Results for the 2021 Edition of the Model Checking Contest}},
	Urldate = {2021},
	Year = {2021}}

Objectives

The Model Checking Contest is a yearly scientific event dedicated to the assessment of formal verification tools for concurrent systems.

The Model Checking Contest has two different parts: the Call for Models, which gathers Petri net models proposed by the scientific community, and the Call for Tools, which benchmarks verification tools developed within the scientific community.

The objective of the Model Checking Contest is to compare the efficiency of verification techniques according to the characteristics of the models. To do so, the Model Checking Contest compares tools on several classes of models with scaling capabilities (e.g. a parameter that determines the «size» of the associated state space). Through feedback on tool efficiency over the selected benchmarks, we aim to identify the techniques that can tackle a given type of problem (e.g. state space generation, deadlock detection, reachability analysis, causal analysis).
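
To make these problem types concrete, below is a minimal, illustrative sketch in Python of explicit state-space exploration with deadlock detection on a toy Petri net. It is not any contest tool: the model, its scaling parameter N, and all names are hypothetical, and real competing tools rely on far more sophisticated techniques than this naive enumeration.

from collections import deque

# A minimal sketch (not an MCC tool) of explicit state-space
# exploration for a Petri net: the scaling parameter N below
# plays the role of the «size» parameter mentioned above.

def successors(marking, transitions):
    """Yield markings reachable in one firing.
    transitions: list of (pre, post) dicts mapping place -> token count."""
    for pre, post in transitions:
        if all(marking.get(p, 0) >= n for p, n in pre.items()):  # enabled?
            m = dict(marking)
            for p, n in pre.items():
                m[p] -= n               # consume input tokens
            for p, n in post.items():
                m[p] = m.get(p, 0) + n  # produce output tokens
            yield frozenset(m.items())

def explore(initial, transitions):
    """Breadth-first search over reachable markings;
    returns (number of states, number of deadlocked states)."""
    seen = {frozenset(initial.items())}
    queue = deque(seen)
    deadlocks = 0
    while queue:
        marking = dict(queue.popleft())
        succs = list(successors(marking, transitions))
        if not succs:
            deadlocks += 1  # no transition enabled: a deadlock
        for s in succs:
            if s not in seen:
                seen.add(s)
                queue.append(s)
    return len(seen), deadlocks

# Hypothetical scalable model: N independent two-state components.
N = 3
initial, transitions = {}, []
for i in range(N):
    initial[f"idle{i}"] = 1
    transitions.append(({f"idle{i}": 1}, {f"busy{i}": 1}))
    transitions.append(({f"busy{i}": 1}, {f"idle{i}": 1}))

states, deadlocks = explore(initial, transitions)
print(f"N={N}: {states} reachable markings, {deadlocks} deadlocks")

Here the state space grows as 2^N, which illustrates why scaling parameters let the contest stress tools on increasingly large state spaces.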

After ten yearly editions, from 2011 to 2020, this eleventh edition takes place in Paris, alongside the Petri Nets conference.

Results of the Previous Editions

Below are quick links to the results of the past editions of the Model Checking Contest:

Important dates