Model Checking Contest @ Petri Nets 2016
6th edition, Toruń, Poland, June 21, 2016
LTSMin compared to other tools («Surprise» models, StateSpace)
Last Updated: June 30, 2016

Introduction

This page presents how LTSMin copes with the StateSpace examination compared with the other participating tools. On this page, we consider the «Surprise» models.

The next sections show charts comparing performances in terms of both memory and execution time. The x-axis corresponds to the challenging tool while the y-axis represents LTSMin's performance. Thus, points below the diagonal of a chart denote comparisons favorable to LTSMin, while the others correspond to situations where the challenging tool performs better.

You might also find points out of range; these denote cases where at least one tool could not answer appropriately (error, time-out, could not compute, or did not compete).
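
To make the reading of these charts concrete, here is a minimal plotting sketch. It is not the contest's own tooling: the function name, labels, and sample values are hypothetical, and it only assumes that the per-model measurements of LTSMin and of a challenging tool are available as paired lists.

    # Minimal sketch: scatter the challenging tool (x-axis) against LTSMin (y-axis)
    # and draw the diagonal; points below the diagonal are favorable to LTSMin.
    import matplotlib.pyplot as plt

    def comparison_chart(challenger_values, ltsmin_values,
                         challenger="ITS-Tools", metric="memory (MB)"):
        fig, ax = plt.subplots()
        ax.scatter(challenger_values, ltsmin_values, s=12)
        limit = max(max(challenger_values), max(ltsmin_values))
        ax.plot([0, limit], [0, limit], linestyle="--", linewidth=1)  # the diagonal
        ax.set_xlabel(f"{challenger} {metric}")
        ax.set_ylabel(f"LTSMin {metric}")
        ax.set_title(f"LTSMin versus {challenger} ({metric})")
        return fig

    # Hypothetical values, for illustration only:
    # comparison_chart([120, 950, 40], [80, 400, 60]).savefig("memory_chart.png")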

LTSMin versus ITS-Tools

Some statistics are displayed below, based on 278 runs (139 for LTSMin and 139 for ITS-Tools, so there are 139 points on each of the two charts). Each execution was allowed 1 hour and 16 GB of memory. Performance charts comparing LTSMin to ITS-Tools are then shown (you may click on a chart to enlarge it).

Statistics on the execution
                             | LTSMin | ITS-Tools | Both tools
  Computed OK                |     33 |        16 |         33
  Do not compete             |      9 |         0 |          0
  Error detected             |      0 |        16 |          0
  Cannot Compute + Time-out  |      9 |        19 |         55

  Times tool wins            | LTSMin | ITS-Tools
  Smallest Memory Footprint  |     37 |        45
  Shortest Execution Time    |     43 |        39
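
As an illustration of how such counts could be derived, here is a minimal aggregation sketch. It is not the contest's own scoring script: the data layout, field names, and status names are hypothetical. Consistent with the totals above, a "Times tool wins" point appears to be awarded for every model where at least one tool answered, a tool that answered alone winning by default.

    # Minimal sketch: derive the outcome counts and win counts from per-model results.
    # Each run is assumed (hypothetically) to be a dict with the keys used below.
    from collections import Counter

    STATUSES = ("ok", "do_not_compete", "error", "cannot_compute_or_timeout")

    def summarize(runs):
        counts = Counter()   # (status, "ltsmin" | "other" | "both") -> number of models
        wins = Counter()     # "mem_ltsmin", "mem_other", "time_ltsmin", "time_other"
        for r in runs:
            a, b = r["status_ltsmin"], r["status_other"]
            for s in STATUSES:
                if a == s and b == s:
                    counts[(s, "both")] += 1
                elif a == s:
                    counts[(s, "ltsmin")] += 1
                elif b == s:
                    counts[(s, "other")] += 1
            if a == "ok" or b == "ok":
                # A tool that answered alone wins by default; ties favour LTSMin here,
                # which is an arbitrary choice made for this sketch.
                mem_win = (b != "ok") or (a == "ok" and r["mem_ltsmin"] <= r["mem_other"])
                time_win = (b != "ok") or (a == "ok" and r["time_ltsmin"] <= r["time_other"])
                wins["mem_ltsmin" if mem_win else "mem_other"] += 1
                wins["time_ltsmin" if time_win else "time_other"] += 1
        return counts, wins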


On the charts below, different marks distinguish the cases where both tools computed a result without error, the cases where at least one tool did not compete, the cases where at least one tool computed a bad value, and the cases where at least one tool stated it could not compute a result or timed out.

[memory chart]  [time chart]

LTSMin versus Tapaal(PAR)

Some statistics are displayed below, based on 278 runs (139 for LTSMin and 139 for Tapaal(PAR), so there are 139 points on each of the two charts). Each execution was allowed 1 hour and 16 GB of memory. Performance charts comparing LTSMin to Tapaal(PAR) are then shown (you may click on a chart to enlarge it).

Statistics on the execution
                             | LTSMin | Tapaal(PAR) | Both tools
  Computed OK                |     49 |           0 |         17
  Do not compete             |      0 |           0 |          9
  Error detected             |      0 |           1 |          0
  Cannot Compute + Time-out  |      1 |          49 |         63

  Times tool wins            | LTSMin | Tapaal(PAR)
  Smallest Memory Footprint  |     50 |          16
  Shortest Execution Time    |     51 |          15


On the charts below, different marks distinguish the cases where both tools computed a result without error, the cases where at least one tool did not compete, the cases where at least one tool computed a bad value, and the cases where at least one tool stated it could not compute a result or timed out.

[memory chart]  [time chart]

LTSMin versus Marcie

Some statistics are displayed below, based on 278 runs (139 for LTSMin and 139 for Marcie, so there are 139 points on each of the two charts). Each execution was allowed 1 hour and 16 GB of memory. Performance charts comparing LTSMin to Marcie are then shown (you may click on a chart to enlarge it).

Statistics on the execution
                             | LTSMin | Marcie | Both tools
  Computed OK                |     11 |     21 |         55
  Do not compete             |      9 |      0 |          0
  Error detected             |      0 |      1 |          0
  Cannot Compute + Time-out  |     16 |     14 |         48

  Times tool wins            | LTSMin | Marcie
  Smallest Memory Footprint  |     25 |     62
  Shortest Execution Time    |     38 |     49


On the charts below, different marks distinguish the cases where both tools computed a result without error, the cases where at least one tool did not compete, the cases where at least one tool computed a bad value, and the cases where at least one tool stated it could not compute a result or timed out.

[memory chart]  [time chart]

LTSMin versus pnmc

Some statistics are displayed below, based on 278 runs (139 for LTSMin and 139 for pnmc, so there are 139 points on each of the two charts). Each execution was allowed 1 hour and 16 GB of memory. Performance charts comparing LTSMin to pnmc are then shown (you may click on a chart to enlarge it).

Statistics on the execution
                             | LTSMin | pnmc | Both tools
  Computed OK                |      1 |   16 |         65
  Do not compete             |      0 |    0 |          9
  Error detected             |      0 |    0 |          0
  Cannot Compute + Time-out  |     16 |    1 |         48

  Times tool wins            | LTSMin | pnmc
  Smallest Memory Footprint  |     10 |   72
  Shortest Execution Time    |     14 |   68


On the charts below, different marks distinguish the cases where both tools computed a result without error, the cases where at least one tool did not compete, the cases where at least one tool computed a bad value, and the cases where at least one tool stated it could not compute a result or timed out.

[memory chart]  [time chart]

LTSMin versus PNXDD

Some statistics are displayed below, based on 278 runs (139 for LTSMin and 139 for PNXDD, so there are 139 points on each of the two charts). Each execution was allowed 1 hour and 16 GB of memory. Performance charts comparing LTSMin to PNXDD are then shown (you may click on a chart to enlarge it).

Statistics on the execution
                             | LTSMin | PNXDD | Both tools
  Computed OK                |     38 |     0 |         28
  Do not compete             |      0 |     0 |          9
  Error detected             |      0 |     0 |          0
  Cannot Compute + Time-out  |      0 |    38 |         64

  Times tool wins            | LTSMin | PNXDD
  Smallest Memory Footprint  |     44 |    22
  Shortest Execution Time    |     64 |     2


On the charts below, different marks distinguish the cases where both tools computed a result without error, the cases where at least one tool did not compete, the cases where at least one tool computed a bad value, and the cases where at least one tool stated it could not compute a result or timed out.

[memory chart]  [time chart]

LTSMin versus Smart

Some statistics are displayed below, based on 278 runs (139 for LTSMin and 139 for Smart, so there are 139 points on each of the two charts). Each execution was allowed 1 hour and 16 GB of memory. Performance charts comparing LTSMin to Smart are then shown (you may click on a chart to enlarge it).

Statistics on the execution
                             | LTSMin | Smart | Both tools
  Computed OK                |     45 |     2 |         21
  Do not compete             |      0 |     0 |          9
  Error detected             |      0 |     0 |          0
  Cannot Compute + Time-out  |      2 |    45 |         62

  Times tool wins            | LTSMin | Smart
  Smallest Memory Footprint  |     45 |    23
  Shortest Execution Time    |     47 |    21


On the charts below, different marks distinguish the cases where both tools computed a result without error, the cases where at least one tool did not compete, the cases where at least one tool computed a bad value, and the cases where at least one tool stated it could not compute a result or timed out.

[memory chart]  [time chart]

LTSMin versus Tapaal(EXP)

Some statistics are displayed below, based on 278 runs (139 for LTSMin and 139 for Tapaal(EXP), so there are 139 points on each of the two charts). Each execution was allowed 1 hour and 16 GB of memory. Performance charts comparing LTSMin to Tapaal(EXP) are then shown (you may click on a chart to enlarge it).

Statistics on the execution
                             | LTSMin | Tapaal(EXP) | Both tools
  Computed OK                |     38 |           0 |         28
  Do not compete             |      0 |           0 |          9
  Error detected             |      0 |           0 |          0
  Cannot Compute + Time-out  |      0 |          38 |         64

  Times tool wins            | LTSMin | Tapaal(EXP)
  Smallest Memory Footprint  |     38 |          28
  Shortest Execution Time    |     52 |          14


On the charts below, different marks distinguish the cases where both tools computed a result without error, the cases where at least one tool did not compete, the cases where at least one tool computed a bad value, and the cases where at least one tool stated it could not compute a result or timed out.

[memory chart]  [time chart]

LTSMin versus Tapaal(SEQ)

Some statistics are displayed below, based on 278 runs (139 for LTSMin and 139 for Tapaal(SEQ), so there are 139 points on each of the two charts). Each execution was allowed 1 hour and 16 GB of memory. Performance charts comparing LTSMin to Tapaal(SEQ) are then shown (you may click on a chart to enlarge it).

Statistics on the execution
                             | LTSMin | Tapaal(SEQ) | Both tools
  Computed OK                |     43 |           0 |         23
  Do not compete             |      0 |           0 |          9
  Error detected             |      0 |           1 |          0
  Cannot Compute + Time-out  |      1 |          43 |         63

  Times tool wins            | LTSMin | Tapaal(SEQ)
  Smallest Memory Footprint  |     43 |          23
  Shortest Execution Time    |     54 |          12


On the charts below, different marks distinguish the cases where both tools computed a result without error, the cases where at least one tool did not compete, the cases where at least one tool computed a bad value, and the cases where at least one tool stated it could not compute a result or timed out.

[memory chart]  [time chart]

LTSMin versus ydd-pt

Some statistics are displayed below, based on 278 runs (139 for LTSMin and 139 for ydd-pt, so there are 139 points on each of the two charts). Each execution was allowed 1 hour and 16 GB of memory. Performance charts comparing LTSMin to ydd-pt are then shown (you may click on a chart to enlarge it).

Statistics on the execution
                             | LTSMin | ydd-pt | Both tools
  Computed OK                |     59 |      0 |          7
  Do not compete             |      0 |      0 |          9
  Error detected             |      0 |      0 |          0
  Cannot Compute + Time-out  |      0 |     59 |         64

  Times tool wins            | LTSMin | ydd-pt
  Smallest Memory Footprint  |     60 |      6
  Shortest Execution Time    |     66 |      0


On the charts below, different marks distinguish the cases where both tools computed a result without error, the cases where at least one tool did not compete, the cases where at least one tool computed a bad value, and the cases where at least one tool stated it could not compute a result or timed out.

[memory chart]  [time chart]