Model Checking Contest @ Petri Nets 2016
6th edition, Toruń, Poland, June 21, 2016
ITS-Tools compared to other tools («Surprise» models, CTLFireability)
Last Updated
June 30, 2016

Introduction

This page presents how ITS-Tools copes with the CTLFireability examination compared with the other participating tools. On this page, we consider «Surprise» models.

The next sections show charts comparing performances in terms of both memory and execution time. The x-axis corresponds to the challenging tool, while the y-axis represents ITS-Tools' performance. Thus, points below the diagonal of a chart denote comparisons favorable to ITS-Tools, while the others correspond to situations where the challenging tool performs better.

You may also find points outside the plotting range; they denote cases where at least one tool could not answer appropriately (error, time-out, could not compute, or did not compete).
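To make this reading rule concrete, the sketch below reproduces the layout of such a comparison chart. It is a minimal illustration only, not the contest's own plotting code: the per-instance values are made up, and the use of matplotlib is an assumption. Each point is one model instance, with the challenging tool on the x-axis, ITS-Tools on the y-axis, both on log scales, and the diagonal separating the region where ITS-Tools wins from the region where the challenger wins.

```python
# Hypothetical sketch of a tool-vs-tool comparison scatter chart.
# Values below are invented; real data come from the contest's execution logs.
import matplotlib.pyplot as plt

challenger = [12.0, 450.0, 3.5, 3600.0, 0.8]    # challenging tool (x-axis), e.g. seconds
its_tools  = [30.0, 120.0, 3.0, 95.0, 3600.0]   # ITS-Tools (y-axis), e.g. seconds

fig, ax = plt.subplots()
ax.scatter(challenger, its_tools)

# Diagonal y = x: points below it are runs where ITS-Tools needed less
# time (or memory) than the challenging tool.
lims = [0.1, 5000.0]
ax.plot(lims, lims, linestyle="--", color="gray")

ax.set_xscale("log")
ax.set_yscale("log")
ax.set_xlim(lims)
ax.set_ylim(lims)
ax.set_xlabel("challenging tool")
ax.set_ylabel("ITS-Tools")
plt.show()
```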

ITS-Tools versus LoLa

Some statistics are displayed below, based on 278 runs (139 for ITS-Tools and 139 for LoLa, so there are 139 points on each of the two charts). Each execution was allowed 1 hour and 16 GB of memory. Performance charts comparing ITS-Tools to LoLa are then shown.

Statistics on the execution

                             ITS-Tools   LoLa   Both tools
  Computed OK                     8        86       43
  Do not compete                  0         9        0
  Error detected                  3         0        0
  Cannot Compute + Time-out      84         0        1

  Times tool wins              ITS-Tools   LoLa
  Smallest Memory Footprint        31       106
  Shortest Execution Time          33       104


On the charts below, distinct marks denote cases where the two tools computed a result without error, cases where at least one tool did not compete, cases where at least one tool computed a bad value, and cases where at least one tool stated it could not compute a result or timed out.

[memory chart]   [time chart]

ITS-Tools versus LTSMin

Some statistics are displayed below, based on 278 runs (139 for ITS-Tools and 139 for LTSMin, so there are 139 points on each of the two charts). Each execution was allowed 1 hour and 16 GB of memory. Performance charts comparing ITS-Tools to LTSMin are then shown.

Statistics on the execution

                             ITS-Tools   LTSMin   Both tools
  Computed OK                     8         87        43
  Do not compete                  0          9         0
  Error detected                  3          0         0
  Cannot Compute + Time-out      85          0         0

  Times tool wins              ITS-Tools   LTSMin
  Smallest Memory Footprint        45         93
  Shortest Execution Time          26        112


On the charts below, distinct marks denote cases where the two tools computed a result without error, cases where at least one tool did not compete, cases where at least one tool computed a bad value, and cases where at least one tool stated it could not compute a result or timed out.

[memory chart]   [time chart]

ITS-Tools versus Tapaal(PAR)

Some statistics are displayed below, based on 278 runs (139 for ITS-Tools and 139 for Tapaal(PAR), so there are 139 points on each of the two charts). Each execution was allowed 1 hour and 16 GB of memory. Performance charts comparing ITS-Tools to Tapaal(PAR) are then shown.

Statistics on the execution

                             ITS-Tools   Tapaal(PAR)   Both tools
  Computed OK                    35           24           16
  Do not compete                  0            9            0
  Error detected                  3            0            0
  Cannot Compute + Time-out      25           30           60

  Times tool wins              ITS-Tools   Tapaal(PAR)
  Smallest Memory Footprint        44           31
  Shortest Execution Time          46           29


On the charts below, distinct marks denote cases where the two tools computed a result without error, cases where at least one tool did not compete, cases where at least one tool computed a bad value, and cases where at least one tool stated it could not compute a result or timed out.

[memory chart]   [time chart]

ITS-Tools versus Marcie

Some statistics are displayed below, based on 278 runs (139 for ITS-Tools and 139 for Marcie, so there are 139 points on each of the two charts). Each execution was allowed 1 hour and 16 GB of memory. Performance charts comparing ITS-Tools to Marcie are then shown.

Statistics on the execution

                             ITS-Tools   Marcie   Both tools
  Computed OK                     8         27        43
  Do not compete                  0          0         0
  Error detected                  3          0         0
  Cannot Compute + Time-out      26         10        59

  Times tool wins              ITS-Tools   Marcie
  Smallest Memory Footprint        46         32
  Shortest Execution Time          34         44


On the charts below, distinct marks denote cases where the two tools computed a result without error, cases where at least one tool did not compete, cases where at least one tool computed a bad value, and cases where at least one tool stated it could not compute a result or timed out.

[memory chart]   [time chart]

ITS-Tools versus Tapaal(SEQ)

Some statistics are displayed below, based on 278 runs (139 for ITS-Tools and 139 for Tapaal(SEQ), so there are 139 points on each of the two charts). Each execution was allowed 1 hour and 16 GB of memory. Performance charts comparing ITS-Tools to Tapaal(SEQ) are then shown.

Statistics on the execution

                             ITS-Tools   Tapaal(SEQ)   Both tools
  Computed OK                     8           80           43
  Do not compete                  0            9            0
  Error detected                  1            1            2
  Cannot Compute + Time-out      81            0            4

  Times tool wins              ITS-Tools   Tapaal(SEQ)
  Smallest Memory Footprint        36           95
  Shortest Execution Time          42           89


On the charts below, distinct marks denote cases where the two tools computed a result without error, cases where at least one tool did not compete, cases where at least one tool computed a bad value, and cases where at least one tool stated it could not compute a result or timed out.

[memory chart]   [time chart]