Model Checking Contest @ Petri Nets 2016
6th edition, Toruń, Poland, June 21, 2016
Marcie compared to other tools («Known» models, UpperBounds)
Last updated: June 30, 2016

# Introduction

This page shows how efficiently Marcie copes with the UpperBounds examination compared to the other participating tools. Here, we consider «Known» models.

The next sections show charts comparing performance in terms of both memory and execution time. The x-axis corresponds to the challenging tool, while the y-axis represents Marcie's performance. Thus, points below the diagonal of a chart denote comparisons favorable to Marcie, while the others correspond to situations where the challenging tool performs better.

You may also find points plotted out of range; these denote cases where at least one tool could not answer appropriately (error, time-out, could not compute, or did not compete).
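The chart layout described above can be sketched as a simple classification of paired measurements. This is an illustrative reconstruction, not the contest's actual plotting code; the function name and the `None`-means-no-answer convention are assumptions.

```python
# Hypothetical sketch: classify paired results for a scatter comparison chart.
# Each pair is (challenger, marcie), a measurement such as execution time in
# seconds; None means the tool gave no answer (error, time-out, could not
# compute, or did not compete), so the point is plotted out of range.

def classify(challenger, marcie):
    """Return where a point falls on the comparison chart."""
    if challenger is None or marcie is None:
        return "out-of-range"       # at least one tool did not answer
    if marcie < challenger:
        return "below-diagonal"     # favorable to Marcie
    if marcie > challenger:
        return "above-diagonal"     # favorable to the challenging tool
    return "on-diagonal"            # identical measurements

# Tally a few illustrative pairs the way a chart legend would.
pairs = [(120.0, 45.0), (30.0, 30.0), (None, 12.0), (5.0, 900.0)]
counts = {}
for c, m in pairs:
    kind = classify(c, m)
    counts[kind] = counts.get(kind, 0) + 1
```

With the sample pairs above, `counts` holds one point in each of the four regions, mirroring how the charts separate favorable, unfavorable, tied, and out-of-range runs.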

# Marcie versus ITS-Tools

Some statistics are displayed below, based on 1050 runs (525 for Marcie and 525 for ITS-Tools, so there are 525 plots on each of the two charts). Each execution was allowed 1 hour and 16 GByte of memory. Then performance charts comparing Marcie to ITS-Tools are shown (you may click on one graph to enlarge it).

| Statistics on the execution | Marcie | ITS-Tools | Both tools |
|---|---|---|---|
| Computed OK | 133 | 15 | 125 |
| Do not compete | 0 | 0 | 0 |
| Error detected | 0 | 1 | 0 |
| Cannot Compute + Time-out | 16 | 133 | 251 |

| Times tool wins | Marcie | ITS-Tools |
|---|---|---|
| Smallest Memory Footprint | 136 | 137 |
| Shortest Execution Time | 162 | 111 |

On the charts below, distinct marks denote the cases where both tools computed a result without error, the cases where at least one tool did not compete, the cases where at least one tool computed a bad value, and the cases where at least one tool stated it could not compute a result or timed out.
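The "Times tool wins" figures in the tables can be understood as a pairwise count over runs: for each run, the tool with the smaller measurement (memory footprint or execution time) scores one win. The sketch below is a plausible reconstruction under stated assumptions (a tool also wins when it answered and the other did not; ties score no win for either tool); the contest's actual scoring rules may differ.

```python
# Hypothetical sketch of how "Times tool wins" could be derived from paired
# measurements. None means the tool gave no answer for that run.

def count_wins(results_a, results_b):
    """Count wins per tool over paired measurements (lower is better)."""
    wins_a = wins_b = 0
    for a, b in zip(results_a, results_b):
        if a is None and b is None:
            continue                    # neither tool answered: no winner
        if b is None or (a is not None and a < b):
            wins_a += 1                 # a answered and was strictly better
        elif a is None or b < a:
            wins_b += 1                 # b answered and was strictly better
        # equal measurements: assumed to score no win for either tool
    return wins_a, wins_b
```

For example, `count_wins([10, None, 5], [20, 8, 5])` yields one win apiece: tool A wins the first run, tool B the second, and the third is a tie.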

# Marcie versus LoLa

Some statistics are displayed below, based on 1050 runs (525 for Marcie and 525 for LoLa, so there are 525 plots on each of the two charts). Each execution was allowed 1 hour and 16 GByte of memory. Then performance charts comparing Marcie to LoLa are shown (you may click on one graph to enlarge it).

| Statistics on the execution | Marcie | LoLa | Both tools |
|---|---|---|---|
| Computed OK | 52 | 155 | 206 |
| Do not compete | 0 | 164 | 0 |
| Error detected | 0 | 0 | 0 |
| Cannot Compute + Time-out | 267 | 0 | 0 |

| Times tool wins | Marcie | LoLa |
|---|---|---|
| Smallest Memory Footprint | 87 | 326 |
| Shortest Execution Time | 170 | 243 |

On the charts below, distinct marks denote the cases where both tools computed a result without error, the cases where at least one tool did not compete, the cases where at least one tool computed a bad value, and the cases where at least one tool stated it could not compute a result or timed out.

# Marcie versus Tapaal(PAR)

Some statistics are displayed below, based on 1050 runs (525 for Marcie and 525 for Tapaal(PAR), so there are 525 plots on each of the two charts). Each execution was allowed 1 hour and 16 GByte of memory. Then performance charts comparing Marcie to Tapaal(PAR) are shown (you may click on one graph to enlarge it).

| Statistics on the execution | Marcie | Tapaal(PAR) | Both tools |
|---|---|---|---|
| Computed OK | 180 | 1 | 78 |
| Do not compete | 0 | 164 | 0 |
| Error detected | 0 | 0 | 0 |
| Cannot Compute + Time-out | 113 | 128 | 154 |

| Times tool wins | Marcie | Tapaal(PAR) |
|---|---|---|
| Smallest Memory Footprint | 180 | 79 |
| Shortest Execution Time | 201 | 58 |

On the charts below, distinct marks denote the cases where both tools computed a result without error, the cases where at least one tool did not compete, the cases where at least one tool computed a bad value, and the cases where at least one tool stated it could not compute a result or timed out.

# Marcie versus Tapaal(EXP)

Some statistics are displayed below, based on 1050 runs (525 for Marcie and 525 for Tapaal(EXP), so there are 525 plots on each of the two charts). Each execution was allowed 1 hour and 16 GByte of memory. Then performance charts comparing Marcie to Tapaal(EXP) are shown (you may click on one graph to enlarge it).

| Statistics on the execution | Marcie | Tapaal(EXP) | Both tools |
|---|---|---|---|
| Computed OK | 143 | 19 | 115 |
| Do not compete | 0 | 164 | 0 |
| Error detected | 0 | 0 | 0 |
| Cannot Compute + Time-out | 131 | 91 | 136 |

| Times tool wins | Marcie | Tapaal(EXP) |
|---|---|---|
| Smallest Memory Footprint | 150 | 127 |
| Shortest Execution Time | 191 | 86 |

On the charts below, distinct marks denote the cases where both tools computed a result without error, the cases where at least one tool did not compete, the cases where at least one tool computed a bad value, and the cases where at least one tool stated it could not compute a result or timed out.

# Marcie versus Tapaal(SEQ)

Some statistics are displayed below, based on 1050 runs (525 for Marcie and 525 for Tapaal(SEQ), so there are 525 plots on each of the two charts). Each execution was allowed 1 hour and 16 GByte of memory. Then performance charts comparing Marcie to Tapaal(SEQ) are shown (you may click on one graph to enlarge it).

| Statistics on the execution | Marcie | Tapaal(SEQ) | Both tools |
|---|---|---|---|
| Computed OK | 150 | 14 | 108 |
| Do not compete | 0 | 164 | 0 |
| Error detected | 0 | 0 | 0 |
| Cannot Compute + Time-out | 126 | 98 | 141 |

| Times tool wins | Marcie | Tapaal(SEQ) |
|---|---|---|
| Smallest Memory Footprint | 156 | 116 |
| Shortest Execution Time | 204 | 68 |

On the charts below, distinct marks denote the cases where both tools computed a result without error, the cases where at least one tool did not compete, the cases where at least one tool computed a bad value, and the cases where at least one tool stated it could not compute a result or timed out.

# Marcie versus ydd-pt

Some statistics are displayed below, based on 1050 runs (525 for Marcie and 525 for ydd-pt, so there are 525 plots on each of the two charts). Each execution was allowed 1 hour and 16 GByte of memory. Then performance charts comparing Marcie to ydd-pt are shown (you may click on one graph to enlarge it).

| Statistics on the execution | Marcie | ydd-pt | Both tools |
|---|---|---|---|
| Computed OK | 258 | 0 | 0 |
| Do not compete | 0 | 0 | 0 |
| Error detected | 0 | 0 | 0 |
| Cannot Compute + Time-out | 0 | 258 | 267 |

| Times tool wins | Marcie | ydd-pt |
|---|---|---|
| Smallest Memory Footprint | 258 | 0 |
| Shortest Execution Time | 258 | 0 |

On the charts below, distinct marks denote the cases where both tools computed a result without error, the cases where at least one tool did not compete, the cases where at least one tool computed a bad value, and the cases where at least one tool stated it could not compute a result or timed out.