Model Checking Contest 2020
10th edition, Paris, France, June 23, 2020
Call for Tools
Last Updated: Jun 28, 2020

Goals

The Model Checking Contest (MCC) is a yearly event that assesses existing verification tools for concurrent systems on a set of models (i.e., benchmarks) proposed by the scientific community. All tools are compared on the same benchmarks and using the same computing platform, so that a fair comparison can be made, contrary to most scientific publications, in which different benchmarks are executed on different platforms.

Another goal of the Model Checking Contest is to draw conclusions about the respective efficiency of verification techniques for Petri nets (decision diagrams, partial orders, symmetries, etc.) depending on the particular characteristics of the models under analysis. Through this feedback on tool efficiency, we aim to identify which techniques best tackle a given class of models.

Finally, the Model Checking Contest seeks to be a friendly place where developers meet, exchange, and collaborate to enhance their verification tools.

The Model Checking Contest is organized in three steps:

Call for Tools

For the 2020 edition, we kindly ask the developers of verification tools for concurrent systems to participate in the MCC competition. Each tool will be assessed on both the accumulated collection of MCC models (these are the "known" models, see http://mcc.lip6.fr/models.php) and on the new models selected during the 2020 edition (these are the "surprise" models, see http://mcc.lip6.fr/cfm.php).

The benchmarks on which tools will be assessed are colored Petri nets and/or P/T nets. Some P/T nets are provided with additional information giving a hierarchical decomposition into sequential machines (these models are called Nested-Units Petri nets; see http://mcc.lip6.fr/nupn.php for more information): tools may wish to exploit this information to increase performance and scalability.
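For illustration only, the sketch below (in Python, using a simplified in-memory representation rather than the actual NUPN/PNML format) shows one simple way such a decomposition can be exploited: in a unit-safe decomposition, each unit holds at most one token among its local places, so the product of per-unit local state counts gives a coarse upper bound on the number of reachable markings. This is just an example of the kind of structural information a tool could take advantage of.

    # Illustrative sketch (not the official NUPN format): given a simplified
    # in-memory description of a nested-unit decomposition, compute a coarse
    # upper bound on the number of reachable markings. In a unit-safe net,
    # each unit holds at most one token among its *local* places, so every
    # unit contributes at most (number of local places + 1) local states
    # (the "+ 1" covers the case where the unit is currently unmarked).

    from dataclasses import dataclass, field
    from typing import List


    @dataclass
    class Unit:
        """One unit of the decomposition: its local places and nested sub-units."""
        name: str
        local_places: List[str]
        subunits: List["Unit"] = field(default_factory=list)


    def state_bound(unit: Unit) -> int:
        """Coarse upper bound on markings: product of per-unit local state counts."""
        bound = len(unit.local_places) + 1  # one of the local places, or no local token
        for sub in unit.subunits:
            bound *= state_bound(sub)
        return bound


    if __name__ == "__main__":
        # Hypothetical toy decomposition: a root unit with two sequential sub-units.
        root = Unit(
            name="root",
            local_places=["idle"],
            subunits=[
                Unit(name="producer", local_places=["ready", "busy"]),
                Unit(name="consumer", local_places=["waiting", "working", "done"]),
            ],
        )
        print(state_bound(root))  # 2 * 3 * 4 = 24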

Each tool may compete in one or more categories of verification tasks, such as reachability analysis, evaluation of CTL formulas, evaluation of LTL formulas, etc.
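To give a flavor of what the simplest of these categories asks of a tool, here is a purely illustrative toy sketch of reachability analysis: a naive breadth-first exploration of the marking graph of a P/T net. Competing tools of course rely on far more sophisticated techniques (decision diagrams, partial orders, symmetries, etc.); this is only meant to make the task concrete.

    # Toy, purely illustrative reachability check on a small P/T net.
    from collections import deque
    from typing import List, Tuple

    Marking = Tuple[int, ...]
    # A transition is (pre, post): tokens consumed from / produced into each place.
    Transition = Tuple[Tuple[int, ...], Tuple[int, ...]]


    def reachable(initial: Marking, transitions: List[Transition], target: Marking) -> bool:
        """Breadth-first search over the marking graph, looking for `target`."""
        seen = {initial}
        queue = deque([initial])
        while queue:
            m = queue.popleft()
            if m == target:
                return True
            for pre, post in transitions:
                if all(m[i] >= pre[i] for i in range(len(m))):  # transition enabled?
                    succ = tuple(m[i] - pre[i] + post[i] for i in range(len(m)))
                    if succ not in seen:
                        seen.add(succ)
                        queue.append(succ)
        return False


    if __name__ == "__main__":
        # Toy net with two places: transition t moves the token from place 0 to place 1.
        t = ((1, 0), (0, 1))
        print(reachable((1, 0), [t], (0, 1)))  # True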

Tools have to be submitted in binary-code form. Each submitted tool will be run by the MCC organizers in a virtual machine (typically configured with up to 4 cores, 16 Gbytes RAM, and a time confinement of 60 minutes per run, i.e., per instance of a model). Last year, more than 1500 days of CPU time were invested in the MCC competition. The MCC relies on BenchKit (https://github.com/fkordon/BenchKit), a dedicated execution environment for monitoring the execution of processes and gathering data.
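For illustration, a minimal wrapper a tool might use to dispatch a run inside the virtual machine could look as follows. The environment variable names (BK_EXAMINATION, BK_INPUT), the model file name, and the output keyword are assumptions based on the BenchKit submission kit conventions; the submission manual remains the authoritative reference for the exact interface.

    # Minimal illustrative wrapper (not the official submission script): dispatch
    # a verification run based on the examination requested by the execution
    # environment. Variable names and the "CANNOT_COMPUTE" keyword are assumed
    # conventions; check the submission manual for the exact interface.

    import os
    import subprocess
    import sys

    TIME_BUDGET_SECONDS = 3600  # 60 minutes of confinement per run (per model instance)


    def main() -> int:
        examination = os.environ.get("BK_EXAMINATION", "StateSpace")  # assumed variable name
        instance = os.environ.get("BK_INPUT", "unknown-instance")     # assumed variable name
        model_file = os.path.join(os.getcwd(), "model.pnml")          # assumed input file name

        # "./mytool" is a placeholder for the actual solver binary shipped in the VM.
        cmd = ["./mytool", "--examination", examination, "--model", model_file]
        try:
            completed = subprocess.run(cmd, timeout=TIME_BUDGET_SECONDS)
            return completed.returncode
        except subprocess.TimeoutExpired:
            print(f"CANNOT_COMPUTE: {instance} timed out", flush=True)
            return 1


    if __name__ == "__main__":
        sys.exit(main())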

By submitting a tool, you explicitly allow the organizers of the Model Checking Contest to publish on the MCC web site the binary executable of this tool, so that experiments can be reproduced by others after the contest. Detailed information is available from http://mcc.lip6.fr/rules.php.

Note: to submit a tool, it is not required to have submitted any model to the MCC Call for Models. However, it is strongly recommended to pre-register your tool using the dedicated form before February 1, 2020: http://mcc.lip6.fr/registration.php. You will then be kept informed of the progress of the contest. The sooner, the better.

IMPORTANT: based on discussions between the organizers and the tool developers, the 2020 edition introduces some changes to increase the accuracy of the contest. Please have a close look at the submission manual, which describes these changes. The ones that may have an impact on you are listed below:

The grammar for LTL formulas has changed slightly, so the 2020 formulas will be published together with the surprise models.

You may test your tool against the new LTL grammar using the set of sample formulas available here.

Important Dates

Committees

General Chairs

Execution Monitoring Board