Model Checking Contest @ Petri Nets 2016
6th edition, Toruń, Poland, June 21, 2016
Smart compared to other tools («Surprise» models, StateSpace)
Last Updated: June 30, 2016

Introduction

This page presents how efficiently Smart copes with the StateSpace examination compared to the other participating tools. On this page, we consider the «Surprise» models.

The next sections show charts comparing performance in terms of both memory and execution time. The x-axis corresponds to the challenging tool while the y-axis represents Smart's performance. Thus, points below the diagonal of a chart denote comparisons favorable to Smart, while the others correspond to situations where the challenging tool performs better.

You might also find points out of range; these denote cases where at least one tool could not answer appropriately (error, time-out, could not compute, or did not compete).
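As a minimal illustration of this chart convention (not the contest's actual plotting code), the sketch below draws such a comparison scatter plot with matplotlib; the timing values and labels are hypothetical placeholders.

    # Minimal sketch of the chart convention described above (hypothetical data):
    # challenger on the x-axis, Smart on the y-axis, diagonal as the break-even line.
    import matplotlib.pyplot as plt

    challenger_time = [12.0, 340.0, 5.5, 3600.0]   # placeholder seconds per model instance
    smart_time      = [15.0,  80.0, 5.0, 3600.0]   # placeholder seconds per model instance

    fig, ax = plt.subplots()
    ax.scatter(challenger_time, smart_time)

    # Points below this diagonal mean Smart needed less time than the challenger.
    limit = max(challenger_time + smart_time)
    ax.plot([0, limit], [0, limit], linestyle="--", color="gray")

    ax.set_xlabel("challenging tool (seconds)")
    ax.set_ylabel("Smart (seconds)")
    ax.set_title("Execution time comparison (StateSpace)")
    plt.show()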

Smart versus ITS-Tools

Some statistics are displayed below, based on 278 runs (139 for Smart and 139 for ITS-Tools, so there are 139 plots on each of the two charts). Each execution was allowed 1 hour and 16 GByte of memory. Then performance charts comparing Smart to ITS-Tools are shown (you may click on one graph to enlarge it).

Statistics on the execution
                               Smart   ITS-Tools   Both tools
  Computed OK                      9          35           14
  Do not compete                   9           0            0
  Error detected                   0          16            0
  Cannot Compute + Time-out       42           9           65

Times tool wins
                               Smart   ITS-Tools
  Smallest Memory Footprint       20          38
  Shortest Execution Time         22          36
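
As a rough illustration only, the «Times tool wins» counters in the table above could be tallied per model instance as sketched below; the data structure, field names, and the tie/failure handling are assumptions, not the contest's actual scoring scripts.

    # Hypothetical tally of per-metric wins from paired runs on the same instances
    # (an assumed rule, not the MCC's scoring code).
    from typing import NamedTuple, Optional

    class Run(NamedTuple):
        memory_kb: Optional[float]   # None when the tool failed on this instance
        time_s: Optional[float]

    def count_wins(smart_runs, other_runs):
        wins = {"memory": [0, 0], "time": [0, 0]}   # [Smart, challenger]
        for s, o in zip(smart_runs, other_runs):
            if s.memory_kb is not None and (o.memory_kb is None or s.memory_kb < o.memory_kb):
                wins["memory"][0] += 1
            elif o.memory_kb is not None:
                wins["memory"][1] += 1
            if s.time_s is not None and (o.time_s is None or s.time_s < o.time_s):
                wins["time"][0] += 1
            elif o.time_s is not None:
                wins["time"][1] += 1
        return wins

    # Two hypothetical instances:
    smart = [Run(1200.0, 30.0), Run(None, None)]
    other = [Run(900.0, 45.0), Run(2500.0, 10.0)]
    print(count_wins(smart, other))   # {'memory': [0, 2], 'time': [1, 1]}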


On the charts below, distinct markers denote the cases where both tools computed a result without error, the cases where at least one tool did not compete, the cases where at least one tool computed an incorrect value, and the cases where at least one tool stated it could not compute a result or timed out.

[Memory chart]   [Time chart]

Smart versus LTSMin

Some statistics are displayed below, based on 278 runs (139 for Smart and 139 for LTSMin, so there are 139 plots on each of the two charts). Each execution was allowed 1 hour and 16 GByte of memory. Then performance charts comparing Smart to LTSMin are shown (you may click on one graph to enlarge it).

Statistics on the execution
                               Smart      LTSMin   Both tools
  Computed OK                      2          45           21
  Do not compete                   0           0            9
  Error detected                   0           0            0
  Cannot Compute + Time-out       45           2           62

Times tool wins
                               Smart      LTSMin
  Smallest Memory Footprint       23          45
  Shortest Execution Time         21          47


On the charts below, distinct markers denote the cases where both tools computed a result without error, the cases where at least one tool did not compete, the cases where at least one tool computed an incorrect value, and the cases where at least one tool stated it could not compute a result or timed out.

[Memory chart]   [Time chart]

Smart versus Tapaal(PAR)

Some statistics are displayed below, based on 278 runs (139 for Smart and 139 for Tapaal(PAR), so there are 139 plots on each of the two charts). Each execution was allowed 1 hour and 16 GByte of memory. Then performance charts comparing Smart to Tapaal(PAR) are shown (you may click on one graph to enlarge it).

Statistics on the execution
                               Smart   Tapaal(PAR)   Both tools
  Computed OK                     17            11            6
  Do not compete                   0             0            9
  Error detected                   0             1            0
  Cannot Compute + Time-out       12            17           95

Times tool wins
                               Smart   Tapaal(PAR)
  Smallest Memory Footprint       22            12
  Shortest Execution Time         20            14


On the charts below, distinct markers denote the cases where both tools computed a result without error, the cases where at least one tool did not compete, the cases where at least one tool computed an incorrect value, and the cases where at least one tool stated it could not compute a result or timed out.

[Memory chart]   [Time chart]

Smart versus Marcie

Some statistics are displayed below, based on 278 runs (139 for Smart and 139 for Marcie, so there are 139 plots on each of the two charts). Each execution was allowed 1 hour and 16 GByte of memory. Then performance charts comparing Smart to Marcie are shown (you may click on one graph to enlarge it).

Statistics on the execution
                               Smart      Marcie   Both tools
  Computed OK                      6          59           17
  Do not compete                   9           0            0
  Error detected                   0           1            0
  Cannot Compute + Time-out       54           9           53

Times tool wins
                               Smart      Marcie
  Smallest Memory Footprint       23          59
  Shortest Execution Time         21          61


On the charts below, distinct markers denote the cases where both tools computed a result without error, the cases where at least one tool did not compete, the cases where at least one tool computed an incorrect value, and the cases where at least one tool stated it could not compute a result or timed out.

[Memory chart]   [Time chart]

Smart versus pnmc

Some statistics are displayed below, based on 278 runs (139 for Smart and 139 for pnmc, so there are 139 plots on each of the two charts). Each execution was allowed 1 hour and 16 GByte of memory. Then performance charts comparing Smart to pnmc are shown (you may click on one graph to enlarge it).

Statistics on the execution
                               Smart        pnmc   Both tools
  Computed OK                      1          59           22
  Do not compete                   0           0            9
  Error detected                   0           0            0
  Cannot Compute + Time-out       59           1           48

Times tool wins
                               Smart        pnmc
  Smallest Memory Footprint       23          59
  Shortest Execution Time         14          68


On the charts below, distinct markers denote the cases where both tools computed a result without error, the cases where at least one tool did not compete, the cases where at least one tool computed an incorrect value, and the cases where at least one tool stated it could not compute a result or timed out.

[Memory chart]   [Time chart]

Smart versus PNXDD

Some statistics are displayed below, based on 278 runs (139 for Smart and 139 for PNXDD, so there are 139 plots on each of the two charts). Each execution was allowed 1 hour and 16 GByte of memory. Then performance charts comparing Smart to PNXDD are shown (you may click on one graph to enlarge it).

Statistics on the execution
                               Smart       PNXDD   Both tools
  Computed OK                     14          19            9
  Do not compete                   0           0            9
  Error detected                   0           0            0
  Cannot Compute + Time-out       19          14           88

Times tool wins
                               Smart       PNXDD
  Smallest Memory Footprint       23          19
  Shortest Execution Time         23          19


On the charts below, distinct markers denote the cases where both tools computed a result without error, the cases where at least one tool did not compete, the cases where at least one tool computed an incorrect value, and the cases where at least one tool stated it could not compute a result or timed out.

[Memory chart]   [Time chart]

Smart versus Tapaal(EXP)

Some statistics are displayed below, based on 278 runs (139 for Smart and 139 for Tapaal(EXP), so there are 139 plots on each of the two charts). Each execution was allowed 1 hour and 16 GByte of memory. Then performance charts comparing Smart to Tapaal(EXP) are shown (you may click on one graph to enlarge it).

Statistics on the execution
                               Smart   Tapaal(EXP)   Both tools
  Computed OK                     14            19            9
  Do not compete                   0             0            9
  Error detected                   0             0            0
  Cannot Compute + Time-out       19            14           88

Times tool wins
                               Smart   Tapaal(EXP)
  Smallest Memory Footprint       19            23
  Shortest Execution Time         20            22


On the charts below, distinct markers denote the cases where both tools computed a result without error, the cases where at least one tool did not compete, the cases where at least one tool computed an incorrect value, and the cases where at least one tool stated it could not compute a result or timed out.

[Memory chart]   [Time chart]

Smart versus Tapaal(SEQ)

Some statistics are displayed below, based on 278 runs (139 for Smart and 139 for Tapaal(SEQ), so there are 139 plots on each of the two charts). Each execution was allowed 1 hour and 16 GByte of memory. Then performance charts comparing Smart to Tapaal(SEQ) are shown (you may click on one graph to enlarge it).

Statistics on the execution
                               Smart   Tapaal(SEQ)   Both tools
  Computed OK                     16            16            7
  Do not compete                   0             0            9
  Error detected                   0             1            0
  Cannot Compute + Time-out       17            16           90

Times tool wins
                               Smart   Tapaal(SEQ)
  Smallest Memory Footprint       22            17
  Shortest Execution Time         19            20


On the charts below, distinct markers denote the cases where both tools computed a result without error, the cases where at least one tool did not compete, the cases where at least one tool computed an incorrect value, and the cases where at least one tool stated it could not compute a result or timed out.

[Memory chart]   [Time chart]

Smart versus ydd-pt

Some statistics are displayed below, based on 278 runs (139 for Smart and 139 for ydd-pt, so there are 139 plots on each of the two charts). Each execution was allowed 1 hour and 16 GByte of memory. Then performance charts comparing Smart to ydd-pt are shown (you may click on one graph to enlarge it).

Statistics on the execution
                               Smart      ydd-pt   Both tools
  Computed OK                     21           5            2
  Do not compete                   0           0            9
  Error detected                   0           0            0
  Cannot Compute + Time-out        5          21          102

Times tool wins
                               Smart      ydd-pt
  Smallest Memory Footprint       23           5
  Shortest Execution Time         23           5


On the charts below, distinct markers denote the cases where both tools computed a result without error, the cases where at least one tool did not compete, the cases where at least one tool computed an incorrect value, and the cases where at least one tool stated it could not compute a result or timed out.

[Memory chart]   [Time chart]