Results for the MCC 2022
As the contest is again a remote event this year, the MCC results will be released in two steps:
- the Complete Results for the 2022 Edition of the Model Checking Contest,
- the slides presented @ Petri Nets 2022 in Bergen.
If you want to cite the 2022 MCC report, please use the following BibTeX entry:

@misc{mcc:2022,
  Author = {F. Kordon and P. Bouvier and H. Garavel and F. Hulin-Hubard and N. Amat and E. Amparore and B. Berthomieu and D. Donatelli and S. {Dal Zilio} and {P. G.} Jensen and L. Jezequel and C. He and S. Li and E. Paviot-Adet and J. Srba and Y. Thierry-Mieg},
  Title = {{Complete Results for the 2022 Edition of the Model Checking Contest}},
  Howpublished = {{http://mcc.lip6.fr/2022/results.php}},
  Month = {June},
  Year = {2022},
  Urldate = {2022},
  Lastchecked = 2022
}
Objectives
The Model Checking Contest is a yearly scientific event dedicated to the assessment of formal verification tools for concurrent systems.
The Model Checking Contest has two different parts: the Call for Models, which gathers Petri net models proposed by the scientific community, and the Call for Tools, which benchmarks verification tools developed within the scientific community.
The objective of the Model Checking Contest is to compare the efficiency of verification techniques according to the characteristics of models. To do so, the Model Checking Contest compares tools on several classes of models with scaling capabilities (e.g., parameter values that determine the «size» of the associated state space). Through feedback on tool efficiency over the selected benchmarks, we aim to identify the techniques that can tackle a given type of problem (e.g., state space generation, deadlock detection, reachability analysis, causal analysis).
After eleven editions, held yearly from 2011 to 2021, this twelfth edition takes place in Bergen, alongside the Petri Nets conference.
Results of the Previous Editions
Below is a quick access to the results of the past editions of the Model Checking Contest:
- MCC'2021 @ Paris: video - HTML report
- MCC'2020 @ Paris: video - HTML report
- MCC'2019 @ Prague: slides - HTML report
- MCC'2018 @ Bratislava: slides - HTML report
- MCC'2017 @ Zaragoza: slides - HTML report
- MCC'2016 @ Toruń: slides - HTML report
- MCC'2015 @ Brussels: slides - HTML report
- MCC'2014 @ Tunis: slides - HTML report
- MCC'2013 @ Milano: slides - HTML report - PDF report (CoRR)
- MCC'2012 @ Hamburg: slides - PDF report (CoRR)
- MCC'2011 @ Newcastle: slides - report (ToPnoC link)
Important dates
- January 15, 2022: publication of the Call for Models
- January 20, 2022: publication of the Call for Tools
- January 25, 2022: publication of the updated 2022 contest rules at http://mcc.lip6.fr/rules.php
- February 14, 2022: publication of the Tool Submission Kit, which will be made available from http://mcc.lip6.fr/archives/SubmissionKit-2022.tar.gz
- March 1, 2022: deadline for model submission
- March 1, 2022: deadline for tool pre-registration. If you plan to submit a tool to the contest, please fill in the pre-registration form (you may withdraw later if you finally decide not to participate)
- April 1, 2022: individual notification of model acceptance/rejection
- April 25, 2022: deadline for tool submission
- May 10, 2022: early feedback to tool submitters, following the preliminary qualification runs, which are performed using a few small instances of the "known" models
- June 1, 2022: on-line publication of the selected MCC'2022 models
- June 21, 2022: official announcement of MCC'2022 results during the Petri Nets conference (Bergen, Norway).