Model Checking Contest 2020
10th edition, Paris, France, June 23, 2020
Home Page
Last Updated
Jun 28, 2020
June 28, 2020: publication of the results and of the surprise models for the 2020 edition
March 26, 2020: publication of a set of new LTL formulas (in the 2020 standard) for tool developers
February 18, 2020: minor updates of the submission manual, the formula manual and the rules (consistency of examination names)
February 14, 2020: Submission kit and finalized rules are out
January 14, 2020: Call for tool + draft rules are out
December 8, 2019: Call for model is out
October 13, 2019: new web site for 2020 (still in draft mode)

Results for the MCC 2020

If you want to cite the 2020 MCC report, please use the following BibTeX entry:

@misc{mcc:2020,
	Author = {F. Kordon and H. Garavel and L. M. Hillah and F. Hulin-Hubard and
		E. Amparore and B. Berthomieu and S. Biswal and D. Donatelli and 
		F. Galla and G. Ciardo and S. {Dal Zilio} and {P. G.} Jensen and  C. He and 
	D. {Le Botlan} and S. Li and A. Miner and J. Srba and Y. Thierry-Mieg},
	Howpublished = {{http://mcc.lip6.fr/2020/results.php}},
	Lastchecked = 2020,
	Month = {June},
	Title = {{Complete Results for the 2020 Edition of the Model Checking Contest}},
	Urldate = {2020},
	Year = {2020}}

Objectives

The Model Checking Contest is a yearly scientific event dedicated to the assessment of formal verification tools for concurrent systems.

The Model Checking Contest has two different parts: the Call for Models, which gathers Petri net models proposed by the scientific community, and the Call for Tools, which benchmarks verification tools developed within the scientific community.

The objective of the Model Checking Contest is to compare the efficiency of verification techniques according to the characteristics of the models. To do so, the Model Checking Contest compares tools on several classes of models with scaling capabilities (e.g., parameter values that determine the "size" of the associated state space). Through this feedback on tool efficiency over the selected benchmarks, we aim to identify the techniques that can tackle a given type of problem (e.g., state space generation, deadlock detection, reachability analysis, causal analysis).


After nine editions in 2011, 2012, 2013, 2014, 2015, 2016, 2017, 2018, and 2019, this tenth edition takes place in Paris, alongside the Petri Nets conference.

Results of the Previous Editions

Below are quick links to the results of the past editions of the Model Checking Contest:

Important dates