Results for the MCC 2021
As the contest is again a remote event this year, the MCC results will be released in two parts:
- the Complete Results for the 2021 Edition of the Model Checking Contest,
- the video presented at the conference, since Petri Nets 2021 was a "virtual event".
If you want to cite the 2021 MCC report, please use the following BibTeX entry:
@misc{mcc:2021,
  author       = {F. Kordon and P. Bouvier and H. Garavel and L. M. Hillah and F. Hulin-Hubard and N. Amat and E. Amparore and B. Berthomieu and S. Biswal and D. Donatelli and F. Galla and S. {Dal Zilio} and {P. G.} Jensen and C. He and D. {Le Botlan} and S. Li and J. Srba and Y. Thierry-Mieg and A. Walner and K. Wolf},
  howpublished = {{http://mcc.lip6.fr/2021/results.php}},
  lastchecked  = 2021,
  month        = {June},
  title        = {{Complete Results for the 2021 Edition of the Model Checking Contest}},
  urldate      = {2021},
  year         = {2021}}
Objectives
The Model Checking Contest is a yearly scientific event dedicated to the assessment of formal verification tools for concurrent systems.
The Model Checking Contest has two different parts: the Call for Models, which gathers Petri net models proposed by the scientific community, and the Call for Tools, which benchmarks verification tools developed within the scientific community.
The objective of the Model Checking Contest is to compare the efficiency of verification techniques according to the characteristics of models. To do so, the Model Checking Contest compares tools on several classes of models with scaling capabilities (i.e., parameters that determine the «size» of the associated state space). Through feedback on tool efficiency over the selected benchmarks, we aim at identifying the techniques that can tackle a given type of problem (e.g. state space generation, deadlock detection, reachability analysis, causal analysis).
The Model Checking Contest is composed of two calls: a Call for Models and a Call for Tools. After ten editions (2011 to 2020), this eleventh edition will take place in Paris, alongside the Petri Nets conference.
Results of the Previous Editions
Below is a quick access to the results of the past editions of the Model Checking Contest:
- MCC'2020 @ Paris: video - HTML report
- MCC'2019 @ Prague: slides - HTML report
- MCC'2018 @ Bratislava: slides - HTML report
- MCC'2017 @ Zaragoza: slides - HTML report
- MCC'2016 @ Toruń: slides - HTML report
- MCC'2015 @ Brussels: slides - HTML report
- MCC'2014 @ Tunis: slides - HTML report
- MCC'2013 @ Milano: slides - HTML report - PDF report (CoRR)
- MCC'2012 @ Hamburg: slides - PDF report (CoRR)
- MCC'2011 @ Newcastle: slides - report (ToPnoC link)
Important dates
- January 15, 2021: publication of the Call for Models
- January 20, 2021: publication of the Call for Tools
- January 25, 2021: publication of the updated 2021 contest rules at http://mcc.lip6.fr/rules.php
- February 14, 2021: publication of the Tool Submission Kit, which will be made available from http://mcc.lip6.fr/archives/SubmissionKit-2021.tar.gz
- March 1, 2021: deadline for model submission
- March 1, 2021: deadline for tool pre-registration. If you plan to submit a tool to the contest, please fill in the pre-registration form (you may withdraw later if you finally decide not to participate)
- April 1, 2021: individual notification of model acceptance/rejection
- April 25, 2021: deadline for tool submission
- May 10, 2021: early feedback to tool submitters, following the preliminary qualification runs, which are performed using a few small instances of the "known" models
- June 1, 2021: on-line publication of the selected MCC'2021 models
- June 23, 2021: official announcement of MCC'2021 results during the Petri Net conference (Paris, France).