Introduction
This page shows how well smart copes with the StateSpace examination compared to the other participating tools. Here, we consider «Known» models.
The next sections show charts comparing performance in terms of both memory and execution time. The x-axis corresponds to the challenging tool, while the y-axis represents smart's performance. Thus, points below the diagonal of a chart denote comparisons favorable to smart, while the others correspond to situations where the challenging tool performs better.
You may also find points out of range; these denote cases where at least one tool could not answer appropriately (error, time-out, could not compute, or did not compete).
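To make the reading of these charts concrete, below is a minimal sketch of how such a scatter plot can be produced. The data points, instance names, and units are hypothetical and do not come from the contest results.

```python
# Minimal sketch of one comparison chart: each point pairs the challenging
# tool's measurement (x) with smart's (y) for one model instance.
# The data below is hypothetical and only illustrates the layout.
import matplotlib.pyplot as plt

runs = [
    # (instance, challenging tool, smart) -- e.g. peak memory in MB
    ("ModelA-PT-0010", 512.0, 310.0),
    ("ModelB-PT-0020", 128.0, 640.0),
    ("ModelC-PT-0050", 2048.0, 1900.0),
]

x = [other for _, other, _ in runs]
y = [smart for _, _, smart in runs]

fig, ax = plt.subplots()
ax.scatter(x, y)
ax.set_xscale("log")
ax.set_yscale("log")
lo, hi = min(x + y), max(x + y)
ax.plot([lo, hi], [lo, hi], linestyle="--")  # diagonal: equal performance
ax.set_xlabel("challenging tool (MB)")
ax.set_ylabel("smart (MB)")
plt.show()
```

Since lower memory and time are better, a point below the dashed diagonal (y < x) is a run where smart outperformed the challenging tool.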
smart versus LTSMin
Some statistics are displayed below, based on 866 runs (433 for smart and 433 for LTSMin), so there are 433 points on each of the two charts. Each run was allowed 1 hour and 16 GB of memory. Performance charts comparing smart to LTSMin are then shown (you may click on a chart to enlarge it).
Statistics on the execution

|                           | smart | LTSMin | Both tools |
|---------------------------|-------|--------|------------|
| Computed OK               | 28    | 44     | 71         |
| Do not compete            | 0     | 0      | 120        |
| Error detected            | 23    | 0      | 0          |
| Cannot Compute + Time-out | 22    | 29     | 169        |

| Times tool wins           | smart | LTSMin |
|---------------------------|-------|--------|
| Smallest Memory Footprint | 99    | 44     |
| Shortest Execution Time   | 92    | 51     |
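The "Times tool wins" rows count, for each resource, how often one tool's measurement was strictly smaller than the other's. Here is a minimal sketch of that tally, assuming paired measurements per instance; the function, field layout, and tie/no-result handling are assumptions, not the contest's actual scripts.

```python
# Hedged sketch of the "Times tool wins" tally: for each instance where a
# comparison is possible, the tool with the strictly smaller measurement
# scores a win. The handling of ties and missing results is an assumption.
from typing import Optional, Sequence, Tuple

def count_wins(
    pairs: Sequence[Tuple[Optional[float], Optional[float]]]
) -> Tuple[int, int]:
    smart_wins = other_wins = 0
    for smart_val, other_val in pairs:
        if smart_val is None or other_val is None:
            continue  # one tool produced no result: no comparison possible
        if smart_val < other_val:
            smart_wins += 1
        elif other_val < smart_val:
            other_wins += 1
    return smart_wins, other_wins

# Example: memory footprints in MB for three instances (hypothetical data).
print(count_wins([(310.0, 512.0), (640.0, 128.0), (None, 80.0)]))  # -> (1, 1)
```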
On the charts below, distinct marks denote cases where the two tools computed a result without error, cases where at least one tool did not compete, cases where at least one tool computed a bad value, and cases where at least one tool stated it could not compute a result or timed out.
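For completeness, here is one possible way to map a pair of per-tool outcomes onto the four legend categories. The outcome labels and the precedence between categories are assumptions for illustration, not the contest's actual classification rules.

```python
# One possible classification of a run pair into the four legend
# categories. Labels and precedence among categories are assumptions.
from enum import Enum, auto

class Outcome(Enum):
    OK = auto()              # computed a result without error
    DO_NOT_COMPETE = auto()  # tool did not compete on this instance
    BAD_VALUE = auto()       # tool computed a wrong result
    CANNOT_COMPUTE = auto()  # could not compute, or timed out

def legend_category(smart: Outcome, other: Outcome) -> str:
    if smart is Outcome.OK and other is Outcome.OK:
        return "both tools computed a result without error"
    if Outcome.DO_NOT_COMPETE in (smart, other):
        return "at least one tool did not compete"
    if Outcome.BAD_VALUE in (smart, other):
        return "at least one tool computed a bad value"
    return "at least one tool could not compute a result or timed out"
```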
smart versus LoLA
Some statistics are displayed below, based on 866 runs (433 for smart and 433 for LoLA), so there are 433 points on each of the two charts. Each run was allowed 1 hour and 16 GB of memory. Performance charts comparing smart to LoLA are then shown (you may click on a chart to enlarge it).
Statistics on the execution

|                           | smart | LoLA | Both tools |
|---------------------------|-------|------|------------|
| Computed OK               | 99    | 0    | 0          |
| Do not compete            | 9     | 313  | 111        |
| Error detected            | 23    | 0    | 0          |
| Cannot Compute + Time-out | 191   | 9    | 0          |

| Times tool wins           | smart | LoLA |
|---------------------------|-------|------|
| Smallest Memory Footprint | 99    | 0    |
| Shortest Execution Time   | 99    | 0    |
On the charts below, distinct marks denote cases where the two tools computed a result without error, cases where at least one tool did not compete, cases where at least one tool computed a bad value, and cases where at least one tool stated it could not compute a result or timed out.
smart versus Tapaal
Some statistics are displayed below, based on 866 runs (433 for smart and 433 for Tapaal), so there are 433 points on each of the two charts. Each run was allowed 1 hour and 16 GB of memory. Performance charts comparing smart to Tapaal are then shown (you may click on a chart to enlarge it).
Statistics on the execution

|                           | smart | Tapaal | Both tools |
|---------------------------|-------|--------|------------|
| Computed OK               | 46    | 18     | 53         |
| Do not compete            | 0     | 0      | 120        |
| Error detected            | 23    | 0      | 0          |
| Cannot Compute + Time-out | 4     | 55     | 187        |

| Times tool wins           | smart | Tapaal |
|---------------------------|-------|--------|
| Smallest Memory Footprint | 91    | 26     |
| Shortest Execution Time   | 64    | 53     |
On the charts below, distinct marks denote cases where the two tools computed a result without error, cases where at least one tool did not compete, cases where at least one tool computed a bad value, and cases where at least one tool stated it could not compute a result or timed out.
smart versus ITS-Tools
Some statistics are displayed below, based on 866 runs (433 for smart and 433 for ITS-Tools), so there are 433 points on each of the two charts. Each run was allowed 1 hour and 16 GB of memory. Performance charts comparing smart to ITS-Tools are then shown (you may click on a chart to enlarge it).
Statistics on the execution

|                           | smart | ITS-Tools | Both tools |
|---------------------------|-------|-----------|------------|
| Computed OK               | 25    | 99        | 74         |
| Do not compete            | 120   | 0         | 0          |
| Error detected            | 23    | 2         | 0          |
| Cannot Compute + Time-out | 38    | 105       | 153        |

| Times tool wins           | smart | ITS-Tools |
|---------------------------|-------|-----------|
| Smallest Memory Footprint | 97    | 101       |
| Shortest Execution Time   | 82    | 116       |
On the charts below, distinct marks denote cases where the two tools computed a result without error, cases where at least one tool did not compete, cases where at least one tool computed a bad value, and cases where at least one tool stated it could not compute a result or timed out.
smart versus MARCIE
Some statistics are displayed below, based on 866 runs (433 for smart and 433 for MARCIE), so there are 433 points on each of the two charts. Each run was allowed 1 hour and 16 GB of memory. Performance charts comparing smart to MARCIE are then shown (you may click on a chart to enlarge it).
Statistics on the execution

|                           | smart | MARCIE | Both tools |
|---------------------------|-------|--------|------------|
| Computed OK               | 15    | 89     | 84         |
| Do not compete            | 120   | 0      | 0          |
| Error detected            | 23    | 2      | 0          |
| Cannot Compute + Time-out | 36    | 103    | 155        |

| Times tool wins           | smart | MARCIE |
|---------------------------|-------|--------|
| Smallest Memory Footprint | 98    | 90     |
| Shortest Execution Time   | 81    | 107    |
On the charts below, distinct marks denote cases where the two tools computed a result without error, cases where at least one tool did not compete, cases where at least one tool computed a bad value, and cases where at least one tool stated it could not compute a result or timed out.
smart versus GreatSPN
Some statistics are displayed below, based on 866 runs (433 for smart and 433 for GreatSPN), so there are 433 points on each of the two charts. Each run was allowed 1 hour and 16 GB of memory. Performance charts comparing smart to GreatSPN are then shown (you may click on a chart to enlarge it).
Statistics on the execution

|                           | smart | GreatSPN | Both tools |
|---------------------------|-------|----------|------------|
| Computed OK               | 3     | 111      | 96         |
| Do not compete            | 120   | 0        | 0          |
| Error detected            | 23    | 0        | 0          |
| Cannot Compute + Time-out | 47    | 82       | 144        |

| Times tool wins           | smart | GreatSPN |
|---------------------------|-------|----------|
| Smallest Memory Footprint | 51    | 159      |
| Shortest Execution Time   | 32    | 178      |
On the charts below, distinct marks denote cases where the two tools computed a result without error, cases where at least one tool did not compete, cases where at least one tool computed a bad value, and cases where at least one tool stated it could not compute a result or timed out.
smart versus TINA.tedd
Some statistics are displayed below, based on 866 runs (433 for smart and 433 for TINA.tedd), so there are 433 points on each of the two charts. Each run was allowed 1 hour and 16 GB of memory. Performance charts comparing smart to TINA.tedd are then shown (you may click on a chart to enlarge it).
Statistics on the execution

|                           | smart | TINA.tedd | Both tools |
|---------------------------|-------|-----------|------------|
| Computed OK               | 7     | 102       | 92         |
| Do not compete            | 111   | 0         | 9          |
| Error detected            | 23    | 0         | 0          |
| Cannot Compute + Time-out | 47    | 86        | 144        |

| Times tool wins           | smart | TINA.tedd |
|---------------------------|-------|-----------|
| Smallest Memory Footprint | 97    | 104       |
| Shortest Execution Time   | 75    | 126       |
On the charts below, distinct marks denote cases where the two tools computed a result without error, cases where at least one tool did not compete, cases where at least one tool computed a bad value, and cases where at least one tool stated it could not compute a result or timed out.
smart versus TINA.sift
Some statistics are displayed below, based on 866 runs (433 for smart and 433 for TINA.sift), so there are 433 points on each of the two charts. Each run was allowed 1 hour and 16 GB of memory. Performance charts comparing smart to TINA.sift are then shown (you may click on a chart to enlarge it).
Statistics on the execution

|                           | smart | TINA.sift | Both tools |
|---------------------------|-------|-----------|------------|
| Computed OK               | 49    | 39        | 50         |
| Do not compete            | 111   | 0         | 9          |
| Error detected            | 23    | 2         | 0          |
| Cannot Compute + Time-out | 6     | 148       | 185        |

| Times tool wins           | smart | TINA.sift |
|---------------------------|-------|-----------|
| Smallest Memory Footprint | 73    | 65        |
| Shortest Execution Time   | 88    | 50        |
On the charts below, distinct marks denote cases where the two tools computed a result without error, cases where at least one tool did not compete, cases where at least one tool computed a bad value, and cases where at least one tool stated it could not compute a result or timed out.