Model Checking Contest @ Petri Nets 2016
6th edition, Toruń, Poland, June 21, 2016
The developers’ corner (Q&A)
Last Updated: June 30, 2016

Introduction

This page is intended for tool developers who wish to participate in the Model Checking Contest. It gathers information and news to help you prepare your tool for the contest.

The information is structured as a «blog roll». You can also discuss this information on the forum set up for this purpose (you must subscribe to this forum first).

News from the Organizers

Messages below are sorted in reverse chronological order.

June 2, 2016
Publication of Surprise Models

Since 2011, the Model Checking Contest has been building up a collection of Petri-net models derived from real-life examples, from both research and industry. This collection has already been used in more than 30 scientific papers.

For the 2016 edition of the Model Checking Contest, 11 new models have been added to the collection. Seven of these models are parameterized, leading to a total of 139 (non-parameterized) instances. Five of these models preserve higher-level information: one is provided as a colored Petri net and four are provided as Nested-Unit Petri nets.

These models originate from various application domains: avionics, bioinformatics, communication protocols, cloud computing, cryptography, distributed computing, medical devices, etc.

The authors of the 2016 models are:

The models are available online: http://mcc.lip6.fr/models.php

Thank you to those who submitted models this year and thus enriched the benchmark. We remind you that you are free (and even encouraged) to reuse these models for your own benchmarks. The final archive with the formulas for the Model Checking Contest will be made available around the Petri Nets conference in late June.

March 25, 2016
Some stuff to help developers extract results from traces (and check your tool)

Dear tool developers,

We are glad to provide you with some information to help you test your tool; this should facilitate the qualification phase. You may find:

If this parser raises any problems, please let us know.

March 24, 2016
About Surprise Models

Dear tool developers,

The Model Board of the Model Checking Contest has finished processing the answers received in response to the Call for Models 2016.

After a careful examination, the Model Board has selected 11 new models (the "surprise" models for 2016), which expand to 9 instances of colored nets and 130 instances of P/T nets (among which 62 are Nested-Unit Petri Nets).

Exchanges with the authors of the selected models enabled the Model Board to correct and complete the model forms initially submitted by the authors. Some defects have also been fixed in the known colored models published during the previous editions of the Model Checking Contest.

Finally, the forms of all models that are conservative or sub-conservative have been enhanced by inferring from the initial marking the maximal number of tokens in any reachable marking.
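
As a concrete illustration, such a bound can be obtained by summing the initial markings of all places, since a conservative (or sub-conservative) net never increases its total number of tokens. The following Python sketch only illustrates that reasoning (it is not the Model Board's tooling, it handles plain P/T markings only, and "model.pnml" is a placeholder file name):

import xml.etree.ElementTree as ET

def initial_token_bound(pnml_file):
    # For a conservative or sub-conservative net, the total token count never
    # increases, so the sum of the initial markings bounds the number of
    # tokens in any reachable marking.
    total = 0
    for elem in ET.parse(pnml_file).iter():
        if elem.tag.rsplit("}", 1)[-1] == "initialMarking":   # ignore the XML namespace
            for child in elem.iter():
                if child.tag.rsplit("}", 1)[-1] == "text" and child.text:
                    total += int(child.text.strip())
    return total

print(initial_token_bound("model.pnml"))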

We heartily thank all our colleagues who submitted models for the MCC 2016.

Hubert Garavel - Lom Messan Hillah - Fabrice Kordon

March 20, 2016
The Virtual Machine in the 2016 Submission Kit has been Updated

Dear tool developers,

Since two tool developers discovered minor inconsistencies in the colored version of QuasiCertifProtocol, we have patched the virtual machine.

If your tool does not handle any colored model, you may ignore this upgrade. However, if your tool does handle such models, please follow the instructions below and use an updated virtual machine.

The submission kit including the disk image is now updated (as of March 20, 2016).

If you need to update a virtual machine on which you already performed some work, please type the following shell commands in the HOME directory of the virtual machine.

wget http://mcc.lip6.fr/archives/MCC-INPUTS_MODIFIED-2.tgz
tar xzf MCC-INPUTS_MODIFIED-2.tgz
rm -f MCC-INPUTS_MODIFIED-2.tgz

The archive containing only the updated models is available here. If you have not applied the previous patch (March 3, 2016), please also apply it: this second patch does not include the first one.

March 8, 2016
About Results Comparison

Dear tool developers,

This is just to state that, in the MCC, we will compare results with a precision of 12 digits in the mantissa. Results might be presented with lower precision, unless we find some appropriate presentation in the result tables.
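
For instance, a quick and unofficial way to check whether two numerical answers agree on the first 12 significant digits is to compare their scientific-notation renderings, as in the Python sketch below (an illustration of the rule only, not the comparator actually used for the contest):

def same_to_12_digits(a, b):
    # Render both values with 12 significant digits (one before the decimal
    # point, eleven after) and compare the resulting strings; values that
    # round differently exactly at the 12th digit may still be reported as
    # different.
    return f"{a:.11e}" == f"{b:.11e}"

print(same_to_12_digits(0.1234567890123, 0.1234567890124))  # True
print(same_to_12_digits(1.0, 1.00000000002))                # False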

March 3, 2016
The Virtual Machine in the 2016 Submission Kit has been Updated

Dear tool developers,

Since two tool developers discovered minor inconsistencies (potentially leading to semantic issues) in some colored models, we have patched the virtual machine. This update only concerns some colored models:

If your tool does not handle any colored model, you may ignore this upgrade. However, if your tool does handle such models, please follow the instructions below and use an updated virtual machine.

The submission kit including the disk image is now updated (as of March 3, 2016).

If you need to update a virtual machine on which you already performed some work, please type the following shell commands in the HOME directory of the virtual machine.

wget http://mcc.lip6.fr/archives/MCC-INPUTS_MODIFIED.tgz
tar xzf MCC-INPUTS_MODIFIED.tgz
rm -f MCC-INPUTS_MODIFIED.tgz

The archive containing only the updated models is available here.

March 2, 2016
2015 Oracle Files and Test Bench (from Y. Thierry-Mieg)

This message is forwarded by the organizing committee. We really like this type of side effect, which contributes to increasing the reliability of tools by providing easy ways to compare executions between several tools. We think it is a good complement to the new execution procedure that we embedded in the model submission kit this year.

I regrettably submitted a version of my tool that was totally broken last year due to some pretty stupid regressions I should have caught.

So, to up my reliability score, I've set up a test platform using the 2015 consensus as the control value and the actual 2015 files as the test suite.

Some of the oracles had to be patched, as the consensus was not always clear, but I only kept data for which we have high coverage of the result, so I trust the values now.

The source is publicly available here (anonymous/anonymous). I haven't put a license on it yet, but it's FOSS like all the rest of my stuff (think GPL: update it, improve it, share it back).

My current test framework is here:

svn co https://projets-systeme.lip6.fr/svn/research/thierry/PSTL/GAL/pnmcc/fr.lip6.move.gal.benchmark

Use anonymous/anonymous login.

Look at, e.g., the run_all_marcie.sh script for the full thing; it downloads all the files fresh off the internet before performing the tests.

With the stuff deployed,
./run_test.pl oracle/Philosopher-PT-000005-SS.out -its
will test StateSpace for the Philo 5 model with ITS-Tools, while
./run_test.pl oracle/Philosopher-PT-000005-SS.out -marcie
runs Marcie, etc.

Each tool needs a wrapper like runmarcietest.sh or runatest.sh.

What it does is wrap the invocation of a tool and compare the results against the values in the oracle files (see oracle/* and ctl/*). The oracle files themselves are pretty basic/universal, which I think is nice: they just contain the expected answers in MCC format.
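
To give an idea of how easy such oracle files are to consume, here is a minimal Python sketch; it is an illustration only, not Yann's run_test.pl, and it assumes the usual MCC answer lines of the form "FORMULA <name> <TRUE|FALSE|integer> TECHNIQUES ...":

def read_answers(path):
    # Collect the verdict ("TRUE", "FALSE" or a number, kept as a string)
    # reported for each formula name in an MCC-style output file.
    answers = {}
    with open(path) as f:
        for line in f:
            parts = line.split()
            if len(parts) >= 3 and parts[0] == "FORMULA":
                answers[parts[1]] = parts[2]
    return answers

def disagreements(oracle_file, tool_output_file):
    # Formulas the tool did not answer are simply skipped here.
    oracle = read_answers(oracle_file)
    tool = read_answers(tool_output_file)
    return {name: (expected, tool[name])
            for name, expected in oracle.items()
            if name in tool and tool[name] != expected}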

I have written wrappers for ITS-Tools obviously, and also for Marcie with some help from Christian Rohr. I'm adding a wrapper for GreatSPN in collaboration with Marco Beccuti.

These tests are meant to run on my CI server (custom tracing), which can display the results, but it's not hard to alter the traces a bit if you need to.

Some examples (login as guest): here, here, ...

(Yes, there are failures; what did you expect a test suite to reveal? More seriously, both Christian and I have tracked down subtle (and also stupid) bugs and patched them thanks to these tests.)

My CI agent is a 64-bit Linux machine, so that's the architecture of the binaries we can test. I'd rather not have to support compilation during the test run, but the machine is a full development environment and should have the compilers and dev libraries needed (or I can install them). Ideally, we have one "install_xxx" script that downloads the latest version of the tool (and builds it?), and one "runXXtest.sh" that plays the role of the BenchKitHead thing in the MCC (arguments: model, examination, plus all useful folders and paths). A priori we run the tests in "Stripped" mode rather than "Known", since the input is the official archive from 2015 decompressed on the fly.

I think this test suite might be useful to the other competitors. It's still being updated: I don't yet have oracles for bounds, for instance, and the verdicts on CTL are shaky given the low coverage of 2015 (only Marcie answered). It does not require any heavy VM machinery, provided you have a tool that runs on the Linux x64 architecture, which is mostly everybody I guess.

I'm adding the edge count, using the "distinct edge" metric; see the other forum post about that.

CTL tests are not fully updated but will eventually reflect the "EG p is true in a dead state satisfying p" semantics, unless some contradictory argument comes up on that forum post.

Please contact me if you are interested in running through this test course, I'll gladly integrate your tool.

I can open my source depot to outside committers if you need it, and I can run each tool maybe once or twice on the CI server for 6-hour test runs (I put a timeout of 10 minutes per run to get some coverage), but obviously I don't have infinite processing power, so you'll have to run it on your own machines eventually. Having the runtime plots in the CI server as you update the tool is really nice for detecting performance regressions and isolating performance bottlenecks.

Of course I'm not 100% up to date with 2016 inputs (some categories have changed a bit), but we don't have oracles for those, so for testing correctness it's not ideal. If we have at least two or three tools with 100% reliability on 2015, I'll build an oracle for 2016 with them.

February 15, 2016
Message from the Model Board of the Model Checking Contest

Dear tool developers,

The Model Board of the Model Checking Contest is actively working to check, select, and integrate the benchmarks received after the Call for Models.

So far, for all models in the MCC collection, the number of tokens present in the initial marking has always been smaller than 2^31, meaning that it could be represented as a signed 32-bit integer.

In 2016, for the first time, we received a model in which some places have more than 2^31 tokens in the initial marking, and the initial marking itself, as a whole, contains more than 2^33 tokens.

This is not forbidden by the PNML standard, which sets no size limit on numbers. However, this model is sufficiently different from all previous MCC models published so far that it is worth informing you in advance, so that your tools can properly handle this new situation, either by giving up on models whose numbers do not fit in 32 bits, or by doing computations with 64-bit numbers (to fix ideas, we guarantee that each initial marking will contain fewer than 2^63 tokens).
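
To check quickly whether your tool is affected, a test along the following lines can be run on the parsed initial marking, here given as a dictionary from place identifiers to token counts (the literal example marking is hypothetical; in practice you would feed in the markings produced by your own PNML reader). Python integers are unbounded, but in C, C++ or Java such values must be stored in 64-bit types:

INT32_MAX = 2**31 - 1   # largest signed 32-bit value

def check_marking_sizes(markings):
    # markings: dict mapping each place id to its initial token count
    for place, tokens in markings.items():
        if tokens > INT32_MAX:
            print(f"{place}: {tokens} tokens no longer fit in a signed 32-bit integer")
    # The organizers guarantee that the whole initial marking stays below 2^63,
    # so a signed 64-bit counter is enough for the total token count.
    assert sum(markings.values()) < 2**63

check_marking_sizes({"p1": 3, "p2": 2**33})   # hypothetical example marking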

Paying attention to this issue is all the more important as negative points will be given for incorrect answers.

Best Regards.

Hubert Garavel - Fabrice Kordon - Lom Messan Hillah