Introduction
This page presents the computed results for the CTLFireability examination. The table is organized as follows:
- A global list of scores (per tool),
- Three main sections for «Surprise», «Stripped» and «Known» models,
- Within each main section, a subsection per model that summarizes scores and bonuses, and displays results for each instance.
The information for a model instance is summarized in a «line» where:
- You may get the expected results (and the confidence associated with them) by moving your cursor over the name of the instance. Values surrounded by parentheses are considered «unsafe» and are discarded when computing scores (we expect a confidence level above 0.9 for 2015),
- Computed results and their interpretation are all located in the same table cell.
The structure of a table cell is always the same. In black, you have the output provided by the tool, followed by an interpretation mask stating whether the tool computed something corresponding to the expected values (T means it is the expected result, ? means that no tool computed the corresponding value, and X means that the value is not the expected one). Then, three consistency flags are displayed: V means that at least one value was wrong (i.e., at least one X in the result mask), C means that the tool was not consistent between the colored Petri net and its P/T equivalent, and S means that the tool was not consistent between the result for the «Stripped» model and the corresponding «Known» one. This part is displayed in green when points were awarded, in orange when that was not possible (for example, when the expected values are «unsafe»), and in red when the result is discarded due to an error.
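To make the encoding above concrete, here is a minimal sketch of how the mask symbols and the V flag could be derived; the function names are illustrative only and do not come from the contest's actual tooling:

```python
# Illustrative interpretation of a result mask such as "TT?X",
# using the symbols described above (T = expected, ? = no tool
# computed it, X = not the expected value).

def consistency_flag_v(mask: str) -> bool:
    """Return True when the V flag applies: at least one value
    in the mask is wrong (i.e., at least one 'X')."""
    assert set(mask) <= {"T", "?", "X"}, "unknown mask symbol"
    return "X" in mask

print(consistency_flag_v("TT?T"))  # no wrong value
print(consistency_flag_v("TXTT"))  # one wrong value
```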
Then, a second group of flags outlines the fastest tool (P) and the tool with the smallest memory footprint (M). These tools get a +2 bonus that is always counted for green and orange results. These two flags are followed by the score (without bonus) and a link to the execution report containing: a summary of results, a chart showing memory and CPU evolution over the execution, a full execution log, and the commands applied to the tool for this run.
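One possible reading of the scoring rule above is sketched below; this is an assumption about how the displayed (bonus-free) score and the +2 P/M bonuses combine per color, not the contest's actual code:

```python
# Hypothetical combination of the displayed score and the P/M
# bonuses: green results keep their base points, orange results
# keep only the bonuses, and red results are discarded entirely.

def total_score(base: int, status: str,
                fastest: bool, smallest_mem: bool) -> int:
    """Combine the displayed (bonus-free) score with the +2
    bonuses for the P (fastest) and M (smallest memory) flags."""
    if status == "red":
        return 0  # result discarded due to an error
    bonus = 2 * int(fastest) + 2 * int(smallest_mem)
    if status == "orange":
        return bonus  # no base points, but bonuses still apply
    return base + bonus  # green: base points plus bonuses

print(total_score(16, "green", fastest=True, smallest_mem=False))
```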
You will note that DNC means «Do Not Compete» and CC means «Cannot Compute». Results are labelled CC only when no output was produced for this execution and the tool reported a problem. DNF («Did Not Finish») is displayed when no output was produced within the one-hour time limit.
The Results
They are displayed in the table below, together with scores (total, sum per category of model, and sum per model).