Model Checking Contest 2024
14th edition, Geneva, Switzerland, June 25, 2024
Home Page
Last Updated
July 7, 2024
July 7, 2024: publication of corrected results for 2024
June 25, 2024: publication of results for 2024
June 16, 2024: publication of surprise models for 2024
April 27, 2024: updated archive of Submission Kit (trouble with some models)
February 24, 2024: publication of the Submission Kit
February 23, 2024: publication of the rules
January 22, 2024: publication of the Call for Tools
January 15, 2024: publication of the Call for Models
January 14, 2024: web site deployed for 2024

Results for the MCC 2024

Important Note: we recently discovered a problem with the colored instances of BlocksWorld that affected the results for 2024. We therefore updated the contest data by excluding these instances for 2024 (the necessary corrections will be handled for 2025).

As the MCC is again a remote event this year, the results will be published in two steps:

If you want to cite the 2024 MCC report, please use the following BibTeX entry:

@misc{mcc:2024,
	Author = {F. Kordon and P. Bouvier and H. Garavel and F. Hulin-Hubard and
		L. Jezequel and E. Paviot-Adet and Q. Nivon and N. Amat and B. Berthomieu and
		S. {Dal Zilio} and {P. G.} Jensen and D. Morard and B. Smith and J. Srba and 
		Y. Thierry-Mieg and K. Wolf},
	Howpublished = {{https://mcc.lip6.fr/2024/results.php}},
	Lastchecked = 2024,
	Month = {June},
	Title = {{Complete Results for the 2024 Edition of the Model Checking Contest}},
	Urldate = {2024},
	Year = {2024}}

Objectives

The Model Checking Contest is a yearly scientific event dedicated to the assessment of formal verification tools for concurrent systems.

The Model Checking Contest has two different parts: the Call for Models, which gathers Petri net models proposed by the scientific community, and the Call for Tools, which benchmarks verification tools developed within the scientific community.

The objective of the Model Checking Contest is to compare the efficiency of verification techniques according to the characteristics of models. To do so, the Model Checking Contest compares tools on several classes of models with scaling capabilities (e.g. parameters that determine the «size» of the associated state space). Through feedback on tool efficiency on the selected benchmarks, we aim to identify the techniques that can tackle a given type of problem (e.g. state space generation, deadlock detection, reachability analysis, causal analysis).

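To make two of these analysis problems concrete, here is a minimal sketch, in Python, of explicit state space generation and deadlock detection on a toy place/transition net. It is only an illustration of the kind of question MCC tools answer, not an actual MCC tool or benchmark; the example net, the encoding, and all names are assumptions made for this sketch.

from collections import deque

def enabled(marking, inputs):
    # A transition is enabled if every input place holds enough tokens.
    return all(marking[p] >= w for p, w in inputs.items())

def fire(marking, inputs, outputs):
    # Firing consumes tokens from input places and produces tokens in output places.
    m = list(marking)
    for p, w in inputs.items():
        m[p] -= w
    for p, w in outputs.items():
        m[p] += w
    return tuple(m)

def explore(initial, transitions):
    # Breadth-first generation of all reachable markings; a marking with no
    # enabled transition is a deadlock.
    seen, deadlocks = {initial}, set()
    queue = deque([initial])
    while queue:
        m = queue.popleft()
        successors = [fire(m, i, o) for i, o in transitions if enabled(m, i)]
        if not successors:
            deadlocks.add(m)
        for s in successors:
            if s not in seen:
                seen.add(s)
                queue.append(s)
    return seen, deadlocks

# Toy net: two places; one transition moves a token from place 0 to place 1.
transitions = [({0: 1}, {1: 1})]
states, dead = explore((2, 0), transitions)
print(len(states), "reachable markings,", len(dead), "deadlock(s)")

Real contest models scale far beyond such explicit enumeration, which is precisely why the MCC compares techniques (symbolic, structural, etc.) rather than settling the question with a naive search like this one.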

There have already been thirteen editions: in 2011, 2012, 2013, 2014, 2015, 2016, 2017, 2018, 2019, 2020, 2021, 2022, and 2023. This fourteenth edition takes place during Petri Nets 2024 (Geneva, Switzerland).

Results of the Previous Editions

Below are quick links to the results of the past editions of the Model Checking Contest:

Important dates