Model Checking Contest 2018
8th edition, Bratislava, Slovakia, June 26, 2018
Execution of r225-ebro-152732379600075
Last Updated
June 26, 2018

About the Execution of ITS-Tools.L for BridgeAndVehicles-PT-V10P10N10

Execution Summary
Max Memory Used (MB): 15752.640
Time wait (ms):       95336.00
CPU Usage (ms):       207242.00
I/O Wait (ms):        210.70
Computed Result:      TFFTFFTFTTFTFTFF
Execution Status:     normal
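
The 16-character Computed Result vector pairs position for position with the FORMULA_NAME list printed in the stdout trace below (formula indices 00 to 15), and it agrees with the individual FORMULA verdict lines. A minimal sketch of that pairing, assuming only that the i-th character is the verdict of formula i:

RESULT="TFFTFFTFTTFTFTFF"
for i in $(seq 0 15); do
    printf 'BridgeAndVehicles-PT-V10P10N10-LTLCardinality-%02d %s\n' "$i" "${RESULT:$i:1}"
done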

Execution Chart

The execution chart for this examination is displayed below (boot time has been removed).

Trace from the execution

Waiting for the VM to be ready (probing ssh)
......................
/home/mcc/execution
total 668K
-rw-r--r-- 1 mcc users 5.2K May 15 18:54 CTLCardinality.txt
-rw-r--r-- 1 mcc users 23K May 15 18:54 CTLCardinality.xml
-rw-r--r-- 1 mcc users 21K May 15 18:54 CTLFireability.txt
-rw-r--r-- 1 mcc users 69K May 15 18:54 CTLFireability.xml
-rw-r--r-- 1 mcc users 4.0K May 15 18:49 GenericPropertiesDefinition.xml
-rw-r--r-- 1 mcc users 6.1K May 15 18:49 GenericPropertiesVerdict.xml
-rw-r--r-- 1 mcc users 3.4K May 26 09:26 LTLCardinality.txt
-rw-r--r-- 1 mcc users 13K May 26 09:26 LTLCardinality.xml
-rw-r--r-- 1 mcc users 8.4K May 26 09:26 LTLFireability.txt
-rw-r--r-- 1 mcc users 27K May 26 09:26 LTLFireability.xml
-rw-r--r-- 1 mcc users 5.0K May 15 18:54 ReachabilityCardinality.txt
-rw-r--r-- 1 mcc users 21K May 15 18:54 ReachabilityCardinality.xml
-rw-r--r-- 1 mcc users 121 May 15 18:54 ReachabilityDeadlock.txt
-rw-r--r-- 1 mcc users 359 May 15 18:54 ReachabilityDeadlock.xml
-rw-r--r-- 1 mcc users 21K May 15 18:54 ReachabilityFireability.txt
-rw-r--r-- 1 mcc users 67K May 15 18:54 ReachabilityFireability.xml
-rw-r--r-- 1 mcc users 2.1K May 15 18:54 UpperBounds.txt
-rw-r--r-- 1 mcc users 4.4K May 15 18:54 UpperBounds.xml
-rw-r--r-- 1 mcc users 5 May 15 18:49 equiv_col
-rw-r--r-- 1 mcc users 10 May 15 18:49 instance
-rw-r--r-- 1 mcc users 6 May 15 18:49 iscolored
-rw-r--r-- 1 mcc users 311K May 15 18:49 model.pnml
=====================================================================
Generated by BenchKit 2-3637
Executing tool itstoolsl
Input is BridgeAndVehicles-PT-V10P10N10, examination is LTLCardinality
Time confinement is 3600 seconds
Memory confinement is 16384 MBytes
Number of cores is 4
Run identifier is r225-ebro-152732379600075
=====================================================================
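
The confinement figures above are enforced by the BenchKit harness; the launch script reproduced at the end of this page applies the time limit with timeout -s 9. A rough sketch of how one might approximate both limits by hand (an assumption for illustration; the memory-confinement mechanism actually used by BenchKit is not visible in this trace):

BK_TIME_CONFINEMENT=3600          # seconds
BK_MEMORY_CONFINEMENT=16384       # MBytes
ulimit -v $(( BK_MEMORY_CONFINEMENT * 1024 ))     # cap virtual memory (value in KiB)
timeout -s 9 "$BK_TIME_CONFINEMENT" /home/mcc/BenchKit/BenchKit_head.sh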


--------------------
content from stdout:

=== Data for post analysis generated by BenchKit (invocation template)

The expected result is a vector of booleans
BOOL_VECTOR

here is the order used to build the result vector(from text file)
FORMULA_NAME BridgeAndVehicles-PT-V10P10N10-LTLCardinality-00
FORMULA_NAME BridgeAndVehicles-PT-V10P10N10-LTLCardinality-01
FORMULA_NAME BridgeAndVehicles-PT-V10P10N10-LTLCardinality-02
FORMULA_NAME BridgeAndVehicles-PT-V10P10N10-LTLCardinality-03
FORMULA_NAME BridgeAndVehicles-PT-V10P10N10-LTLCardinality-04
FORMULA_NAME BridgeAndVehicles-PT-V10P10N10-LTLCardinality-05
FORMULA_NAME BridgeAndVehicles-PT-V10P10N10-LTLCardinality-06
FORMULA_NAME BridgeAndVehicles-PT-V10P10N10-LTLCardinality-07
FORMULA_NAME BridgeAndVehicles-PT-V10P10N10-LTLCardinality-08
FORMULA_NAME BridgeAndVehicles-PT-V10P10N10-LTLCardinality-09
FORMULA_NAME BridgeAndVehicles-PT-V10P10N10-LTLCardinality-10
FORMULA_NAME BridgeAndVehicles-PT-V10P10N10-LTLCardinality-11
FORMULA_NAME BridgeAndVehicles-PT-V10P10N10-LTLCardinality-12
FORMULA_NAME BridgeAndVehicles-PT-V10P10N10-LTLCardinality-13
FORMULA_NAME BridgeAndVehicles-PT-V10P10N10-LTLCardinality-14
FORMULA_NAME BridgeAndVehicles-PT-V10P10N10-LTLCardinality-15

=== Now, execution of the tool begins

BK_START 1527611193892

Using solver Z3 to compute partial order matrices.
Built C files in :
/home/mcc/execution
Invoking ITS tools like this :CommandLine [args=[/home/mcc/BenchKit/itstools/plugins/fr.lip6.move.gal.itstools.binaries_1.0.0.201805241334/bin/its-ltl-linux64, --gc-threshold, 2000000, -i, /home/mcc/execution/LTLCardinality.pnml.gal, -t, CGAL, -LTL, /home/mcc/execution/LTLCardinality.ltl, -c, -stutter-deadlock], workingDir=/home/mcc/execution]

its-ltl command run as :

/home/mcc/BenchKit/itstools/plugins/fr.lip6.move.gal.itstools.binaries_1.0.0.201805241334/bin/its-ltl-linux64 --gc-threshold 2000000 -i /home/mcc/execution/LTLCardinality.pnml.gal -t CGAL -LTL /home/mcc/execution/LTLCardinality.ltl -c -stutter-deadlock
Read 16 LTL properties
Checking formula 0 : !(("((VIDANGE_1+VIDANGE_2)<=(CONTROLEUR_1+CONTROLEUR_2))"))
Formula 0 simplified : !"((VIDANGE_1+VIDANGE_2)<=(CONTROLEUR_1+CONTROLEUR_2))"
Presburger conditions satisfied. Using coverability to approximate state space in K-Induction.
Normalized transition count is 90
// Phase 1: matrix 90 rows 48 cols
invariant :ROUTE_A + ATTENTE_A + SORTI_A + -1'CAPACITE + ATTENTE_B + SORTI_B + ROUTE_B = 10
invariant :COMPTEUR_0 + COMPTEUR_1 + COMPTEUR_2 + COMPTEUR_3 + COMPTEUR_4 + COMPTEUR_5 + COMPTEUR_6 + COMPTEUR_7 + COMPTEUR_8 + COMPTEUR_9 + COMPTEUR_10 = 1
invariant :NB_ATTENTE_B_0 + NB_ATTENTE_B_1 + NB_ATTENTE_B_2 + NB_ATTENTE_B_3 + NB_ATTENTE_B_4 + NB_ATTENTE_B_5 + NB_ATTENTE_B_6 + NB_ATTENTE_B_7 + NB_ATTENTE_B_8 + NB_ATTENTE_B_9 + NB_ATTENTE_B_10 = 1
invariant :CONTROLEUR_1 + CONTROLEUR_2 + CHOIX_1 + CHOIX_2 + VIDANGE_1 + VIDANGE_2 = 1
invariant :SUR_PONT_A + CAPACITE + -1'ATTENTE_B + -1'SORTI_B + -1'ROUTE_B = 0
invariant :NB_ATTENTE_A_0 + NB_ATTENTE_A_1 + NB_ATTENTE_A_2 + NB_ATTENTE_A_3 + NB_ATTENTE_A_4 + NB_ATTENTE_A_5 + NB_ATTENTE_A_6 + NB_ATTENTE_A_7 + NB_ATTENTE_A_8 + NB_ATTENTE_A_9 + NB_ATTENTE_A_10 = 1
invariant :ATTENTE_B + SUR_PONT_B + SORTI_B + ROUTE_B = 10
Running compilation step : CommandLine [args=[gcc, -c, -I/home/mcc/BenchKit//lts_install_dir//include, -I., -std=c99, -fPIC, -O3, model.c], workingDir=/home/mcc/execution]
Compilation finished in 6590 ms.
Running link step : CommandLine [args=[gcc, -shared, -o, gal.so, model.o], workingDir=/home/mcc/execution]
Link finished in 69 ms.
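
The two CommandLine entries above are an ordinary compile-and-link of the generated PINS model; written out as shell commands with the same flags and working directory, they amount to:

cd /home/mcc/execution
gcc -c -I/home/mcc/BenchKit//lts_install_dir//include -I. -std=c99 -fPIC -O3 model.c
gcc -shared -o gal.so model.o
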
Running LTSmin : CommandLine [args=[/home/mcc/BenchKit//lts_install_dir//bin/pins2lts-mc, ./gal.so, --threads=1, -p, --pins-guards, --when, --ltl, (LTLAP0==true), --buchi-type=spotba], workingDir=/home/mcc/execution]
LTSmin run took 109 ms.
FORMULA BridgeAndVehicles-PT-V10P10N10-LTLCardinality-00 TRUE TECHNIQUES PARTIAL_ORDER EXPLICIT LTSMIN SAT_SMT
Running LTSmin : CommandLine [args=[/home/mcc/BenchKit//lts_install_dir//bin/pins2lts-mc, ./gal.so, --threads=1, --when, --ltl, []([]([](X((LTLAP1==true))))), --buchi-type=spotba], workingDir=/home/mcc/execution]
LTSmin run took 45 ms.
FORMULA BridgeAndVehicles-PT-V10P10N10-LTLCardinality-01 FALSE TECHNIQUES PARTIAL_ORDER EXPLICIT LTSMIN SAT_SMT
Running LTSmin : CommandLine [args=[/home/mcc/BenchKit//lts_install_dir//bin/pins2lts-mc, ./gal.so, --threads=1, --when, --ltl, X([](((LTLAP2==true))U((LTLAP3==true)))), --buchi-type=spotba], workingDir=/home/mcc/execution]
LTSmin run took 10640 ms.
FORMULA BridgeAndVehicles-PT-V10P10N10-LTLCardinality-02 FALSE TECHNIQUES PARTIAL_ORDER EXPLICIT LTSMIN SAT_SMT
Running LTSmin : CommandLine [args=[/home/mcc/BenchKit//lts_install_dir//bin/pins2lts-mc, ./gal.so, --threads=1, --when, --ltl, X((LTLAP4==true)), --buchi-type=spotba], workingDir=/home/mcc/execution]
LTSmin run took 45 ms.
FORMULA BridgeAndVehicles-PT-V10P10N10-LTLCardinality-03 TRUE TECHNIQUES PARTIAL_ORDER EXPLICIT LTSMIN SAT_SMT
Running LTSmin : CommandLine [args=[/home/mcc/BenchKit//lts_install_dir//bin/pins2lts-mc, ./gal.so, --threads=1, --when, --ltl, [](X(X(<>((LTLAP4==true))))), --buchi-type=spotba], workingDir=/home/mcc/execution]
LTSmin run took 75 ms.
FORMULA BridgeAndVehicles-PT-V10P10N10-LTLCardinality-04 FALSE TECHNIQUES PARTIAL_ORDER EXPLICIT LTSMIN SAT_SMT
Running LTSmin : CommandLine [args=[/home/mcc/BenchKit//lts_install_dir//bin/pins2lts-mc, ./gal.so, --threads=1, -p, --pins-guards, --when, --ltl, []([]((LTLAP5==true))), --buchi-type=spotba], workingDir=/home/mcc/execution]
LTSmin run took 138 ms.
FORMULA BridgeAndVehicles-PT-V10P10N10-LTLCardinality-05 FALSE TECHNIQUES PARTIAL_ORDER EXPLICIT LTSMIN SAT_SMT
Running LTSmin : CommandLine [args=[/home/mcc/BenchKit//lts_install_dir//bin/pins2lts-mc, ./gal.so, --threads=1, --when, --ltl, (<>(X((LTLAP6==true))))U(((LTLAP7==true))U((LTLAP8==true))), --buchi-type=spotba], workingDir=/home/mcc/execution]
LTSmin run took 48 ms.
FORMULA BridgeAndVehicles-PT-V10P10N10-LTLCardinality-06 TRUE TECHNIQUES PARTIAL_ORDER EXPLICIT LTSMIN SAT_SMT
Running LTSmin : CommandLine [args=[/home/mcc/BenchKit//lts_install_dir//bin/pins2lts-mc, ./gal.so, --threads=1, --when, --ltl, X((LTLAP9==true)), --buchi-type=spotba], workingDir=/home/mcc/execution]
LTSmin run took 40 ms.
FORMULA BridgeAndVehicles-PT-V10P10N10-LTLCardinality-07 FALSE TECHNIQUES PARTIAL_ORDER EXPLICIT LTSMIN SAT_SMT
Running LTSmin : CommandLine [args=[/home/mcc/BenchKit//lts_install_dir//bin/pins2lts-mc, ./gal.so, --threads=1, --when, --ltl, [](X(<>(X((LTLAP10==true))))), --buchi-type=spotba], workingDir=/home/mcc/execution]
LTSmin run took 11741 ms.
FORMULA BridgeAndVehicles-PT-V10P10N10-LTLCardinality-08 TRUE TECHNIQUES PARTIAL_ORDER EXPLICIT LTSMIN SAT_SMT
Running LTSmin : CommandLine [args=[/home/mcc/BenchKit//lts_install_dir//bin/pins2lts-mc, ./gal.so, --threads=1, -p, --pins-guards, --when, --ltl, (LTLAP11==true), --buchi-type=spotba], workingDir=/home/mcc/execution]
LTSmin run took 96 ms.
FORMULA BridgeAndVehicles-PT-V10P10N10-LTLCardinality-09 TRUE TECHNIQUES PARTIAL_ORDER EXPLICIT LTSMIN SAT_SMT
Running LTSmin : CommandLine [args=[/home/mcc/BenchKit//lts_install_dir//bin/pins2lts-mc, ./gal.so, --threads=1, -p, --pins-guards, --when, --ltl, ((LTLAP12==true))U(<>([]((LTLAP5==true)))), --buchi-type=spotba], workingDir=/home/mcc/execution]
LTSmin run took 155 ms.
FORMULA BridgeAndVehicles-PT-V10P10N10-LTLCardinality-10 FALSE TECHNIQUES PARTIAL_ORDER EXPLICIT LTSMIN SAT_SMT
Running LTSmin : CommandLine [args=[/home/mcc/BenchKit//lts_install_dir//bin/pins2lts-mc, ./gal.so, --threads=1, -p, --pins-guards, --when, --ltl, (LTLAP13==true), --buchi-type=spotba], workingDir=/home/mcc/execution]
LTSmin run took 125 ms.
FORMULA BridgeAndVehicles-PT-V10P10N10-LTLCardinality-11 TRUE TECHNIQUES PARTIAL_ORDER EXPLICIT LTSMIN SAT_SMT
Running LTSmin : CommandLine [args=[/home/mcc/BenchKit//lts_install_dir//bin/pins2lts-mc, ./gal.so, --threads=1, --when, --ltl, X(([]((LTLAP14==true)))U([]((LTLAP15==true)))), --buchi-type=spotba], workingDir=/home/mcc/execution]
LTSmin run took 1916 ms.
FORMULA BridgeAndVehicles-PT-V10P10N10-LTLCardinality-12 FALSE TECHNIQUES PARTIAL_ORDER EXPLICIT LTSMIN SAT_SMT
Running LTSmin : CommandLine [args=[/home/mcc/BenchKit//lts_install_dir//bin/pins2lts-mc, ./gal.so, --threads=1, -p, --pins-guards, --when, --ltl, <>(<>(((LTLAP16==true))U((LTLAP17==true)))), --buchi-type=spotba], workingDir=/home/mcc/execution]
LTSmin run took 15988 ms.
FORMULA BridgeAndVehicles-PT-V10P10N10-LTLCardinality-13 TRUE TECHNIQUES PARTIAL_ORDER EXPLICIT LTSMIN SAT_SMT
Running LTSmin : CommandLine [args=[/home/mcc/BenchKit//lts_install_dir//bin/pins2lts-mc, ./gal.so, --threads=1, -p, --pins-guards, --when, --ltl, (LTLAP3==true), --buchi-type=spotba], workingDir=/home/mcc/execution]
LTSmin run took 113 ms.
FORMULA BridgeAndVehicles-PT-V10P10N10-LTLCardinality-14 FALSE TECHNIQUES PARTIAL_ORDER EXPLICIT LTSMIN SAT_SMT
Running LTSmin : CommandLine [args=[/home/mcc/BenchKit//lts_install_dir//bin/pins2lts-mc, ./gal.so, --threads=1, -p, --pins-guards, --when, --ltl, []((LTLAP18==true)), --buchi-type=spotba], workingDir=/home/mcc/execution]
LTSmin run took 118 ms.
FORMULA BridgeAndVehicles-PT-V10P10N10-LTLCardinality-15 FALSE TECHNIQUES PARTIAL_ORDER EXPLICIT LTSMIN SAT_SMT
ITS tools runner thread asked to quit. Dying gracefully.

BK_STOP 1527611289228
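
Each of the sixteen verdicts above follows the same pattern: one translated formula over atomic propositions LTLAPn is handed to LTSmin's pins2lts-mc on the compiled gal.so, sometimes with the extra partial-order flags -p --pins-guards. A condensed sketch of that loop; the two formulas listed here are just a sample copied from the --ltl arguments above:

LTSMIN=/home/mcc/BenchKit//lts_install_dir//bin/pins2lts-mc
FORMULAS=('(LTLAP0==true)' 'X((LTLAP4==true))')   # sample only; the full set is in the runs above
for f in "${FORMULAS[@]}"; do
    "$LTSMIN" ./gal.so --threads=1 --when --ltl "$f" --buchi-type=spotba
done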

--------------------
content from stderr:

+ export BINDIR=/home/mcc/BenchKit/
+ BINDIR=/home/mcc/BenchKit/
++ pwd
+ export MODEL=/home/mcc/execution
+ MODEL=/home/mcc/execution
+ /home/mcc/BenchKit//runeclipse.sh /home/mcc/execution LTLCardinality -its -ltsminpath /home/mcc/BenchKit//lts_install_dir/ -louvain -smt
+ ulimit -s 65536
+ [[ -z '' ]]
+ export LTSMIN_MEM_SIZE=8589934592
+ LTSMIN_MEM_SIZE=8589934592
+ /home/mcc/BenchKit//itstools/its-tools -consoleLog -data /home/mcc/execution/workspace -pnfolder /home/mcc/execution -examination LTLCardinality -z3path /home/mcc/BenchKit//z3/bin/z3 -yices2path /home/mcc/BenchKit//yices/bin/yices -its -ltsminpath /home/mcc/BenchKit//lts_install_dir/ -louvain -smt -vmargs -Dosgi.locking=none -Declipse.stateSaveDelayInterval=-1 -Dosgi.configuration.area=/tmp/.eclipse -Xss8m -Xms40m -Xmx8192m -Dfile.encoding=UTF-8 -Dosgi.requiredJavaVersion=1.6
May 29, 2018 4:26:36 PM fr.lip6.move.gal.application.Application start
INFO: Running its-tools with arguments : [-pnfolder, /home/mcc/execution, -examination, LTLCardinality, -z3path, /home/mcc/BenchKit//z3/bin/z3, -yices2path, /home/mcc/BenchKit//yices/bin/yices, -its, -ltsminpath, /home/mcc/BenchKit//lts_install_dir/, -louvain, -smt]
May 29, 2018 4:26:37 PM fr.lip6.move.gal.application.MccTranslator transformPNML
INFO: Parsing pnml file : /home/mcc/execution/model.pnml
May 29, 2018 4:26:37 PM fr.lip6.move.gal.nupn.PTNetReader loadFromXML
INFO: Load time of PNML (sax parser for PT used): 195 ms
May 29, 2018 4:26:37 PM fr.lip6.move.gal.pnml.togal.PTGALTransformer handlePage
INFO: Transformed 48 places.
May 29, 2018 4:26:37 PM fr.lip6.move.gal.pnml.togal.PTGALTransformer handlePage
INFO: Transformed 288 transitions.
May 29, 2018 4:26:37 PM fr.lip6.move.serialization.SerializationUtil systemToFile
INFO: Time to serialize gal into /home/mcc/execution/model.pnml.img.gal : 55 ms
May 29, 2018 4:26:38 PM fr.lip6.move.gal.instantiate.GALRewriter flatten
INFO: Flatten gal took : 330 ms
May 29, 2018 4:26:38 PM fr.lip6.move.serialization.SerializationUtil systemToFile
INFO: Time to serialize gal into /home/mcc/execution/LTLCardinality.pnml.gal : 9 ms
May 29, 2018 4:26:38 PM fr.lip6.move.serialization.SerializationUtil serializePropertiesForITSLTLTools
INFO: Time to serialize properties into /home/mcc/execution/LTLCardinality.ltl : 9 ms
May 29, 2018 4:26:38 PM fr.lip6.move.gal.semantics.DeterministicNextBuilder getDeterministicNext
INFO: Input system was already deterministic with 288 transitions.
May 29, 2018 4:26:39 PM fr.lip6.move.gal.gal2smt.bmc.KInductionSolver computeAndDeclareInvariants
INFO: Computed 7 place invariants in 43 ms
May 29, 2018 4:26:39 PM fr.lip6.move.gal.gal2smt.bmc.KInductionSolver init
INFO: Proved 48 variables to be positive in 514 ms
May 29, 2018 4:26:39 PM fr.lip6.move.gal.gal2smt.bmc.NecessaryEnablingsolver computeAblingMatrix
INFO: Computing symmetric may disable matrix : 288 transitions.
May 29, 2018 4:26:39 PM fr.lip6.move.gal.gal2smt.bmc.NecessaryEnablingsolver printStats
INFO: Computation of disable matrix completed :0/288 took 1 ms. Total solver calls (SAT/UNSAT): 0(0/0)
May 29, 2018 4:26:39 PM fr.lip6.move.gal.gal2smt.bmc.NecessaryEnablingsolver printStats
INFO: Computation of Complete disable matrix. took 86 ms. Total solver calls (SAT/UNSAT): 0(0/0)
May 29, 2018 4:26:39 PM fr.lip6.move.gal.gal2smt.bmc.NecessaryEnablingsolver computeAblingMatrix
INFO: Computing symmetric may enable matrix : 288 transitions.
May 29, 2018 4:26:40 PM fr.lip6.move.gal.gal2smt.bmc.NecessaryEnablingsolver printStats
INFO: Computation of Complete enable matrix. took 30 ms. Total solver calls (SAT/UNSAT): 0(0/0)
May 29, 2018 4:26:46 PM fr.lip6.move.gal.gal2smt.bmc.NecessaryEnablingsolver computeCoEnablingMatrix
INFO: Computing symmetric co enabling matrix : 288 transitions.
May 29, 2018 4:26:47 PM fr.lip6.move.gal.gal2smt.bmc.NecessaryEnablingsolver printStats
INFO: Computation of co-enabling matrix(0/288) took 449 ms. Total solver calls (SAT/UNSAT): 263(133/130)
May 29, 2018 4:26:50 PM fr.lip6.move.gal.gal2smt.bmc.NecessaryEnablingsolver printStats
INFO: Computation of co-enabling matrix(7/288) took 3683 ms. Total solver calls (SAT/UNSAT): 2076(378/1698)
May 29, 2018 4:26:53 PM fr.lip6.move.gal.gal2smt.bmc.NecessaryEnablingsolver printStats
INFO: Computation of co-enabling matrix(25/288) took 6927 ms. Total solver calls (SAT/UNSAT): 6354(689/5665)
May 29, 2018 4:26:56 PM fr.lip6.move.gal.gal2smt.bmc.NecessaryEnablingsolver printStats
INFO: Computation of co-enabling matrix(43/288) took 9962 ms. Total solver calls (SAT/UNSAT): 10679(899/9780)
May 29, 2018 4:26:59 PM fr.lip6.move.gal.gal2smt.bmc.NecessaryEnablingsolver printStats
INFO: Computation of co-enabling matrix(55/288) took 13056 ms. Total solver calls (SAT/UNSAT): 13529(899/12630)
May 29, 2018 4:27:02 PM fr.lip6.move.gal.gal2smt.bmc.NecessaryEnablingsolver printStats
INFO: Computation of co-enabling matrix(67/288) took 16243 ms. Total solver calls (SAT/UNSAT): 16235(899/15336)
May 29, 2018 4:27:05 PM fr.lip6.move.gal.gal2smt.bmc.NecessaryEnablingsolver printStats
INFO: Computation of co-enabling matrix(88/288) took 19270 ms. Total solver calls (SAT/UNSAT): 20624(899/19725)
May 29, 2018 4:27:08 PM fr.lip6.move.gal.gal2smt.bmc.NecessaryEnablingsolver printStats
INFO: Computation of co-enabling matrix(115/288) took 22294 ms. Total solver calls (SAT/UNSAT): 25619(899/24720)
May 29, 2018 4:27:11 PM fr.lip6.move.gal.gal2smt.bmc.NecessaryEnablingsolver printStats
INFO: Computation of co-enabling matrix(145/288) took 25329 ms. Total solver calls (SAT/UNSAT): 30314(899/29415)
May 29, 2018 4:27:14 PM fr.lip6.move.gal.gal2smt.bmc.NecessaryEnablingsolver printStats
INFO: Computation of co-enabling matrix(182/288) took 28350 ms. Total solver calls (SAT/UNSAT): 34865(899/33966)
May 29, 2018 4:27:17 PM fr.lip6.move.gal.gal2smt.bmc.NecessaryEnablingsolver printStats
INFO: Computation of co-enabling matrix(236/288) took 31389 ms. Total solver calls (SAT/UNSAT): 39050(899/38151)
May 29, 2018 4:27:20 PM fr.lip6.move.gal.gal2smt.bmc.NecessaryEnablingsolver printStats
INFO: Computation of Finished co-enabling matrix. took 33530 ms. Total solver calls (SAT/UNSAT): 40325(899/39426)
May 29, 2018 4:27:20 PM fr.lip6.move.gal.gal2smt.bmc.NecessaryEnablingsolver computeDoNotAccord
INFO: Computing Do-Not-Accords matrix : 288 transitions.
May 29, 2018 4:27:20 PM fr.lip6.move.gal.gal2smt.bmc.NecessaryEnablingsolver printStats
INFO: Computation of Completed DNA matrix. took 572 ms. Total solver calls (SAT/UNSAT): 45(0/45)
May 29, 2018 4:27:20 PM fr.lip6.move.gal.gal2pins.Gal2PinsTransformerNext transform
INFO: Built C files in 42275ms conformant to PINS in folder :/home/mcc/execution
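
Taken together, the stderr and stdout traces describe one pipeline from the PNML model to the sixteen verdicts. A sketch of the artifact flow, reconstructed from the file names and commands that appear above (not a literal harness script):

# model.pnml --(PNML-to-GAL translation + flattening)--> LTLCardinality.pnml.gal and LTLCardinality.ltl
# GAL model --(SMT-assisted invariant/matrix computation, PINS export)--> model.c
# model.c --(gcc -c, then gcc -shared)--> gal.so
# gal.so + one --ltl formula per property --(pins2lts-mc)--> FORMULA ... TRUE/FALSE verdicts
ls -lh /home/mcc/execution/{model.pnml,LTLCardinality.pnml.gal,LTLCardinality.ltl,model.c,gal.so}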

Sequence of Actions to be Executed by the VM

This is useful if one wants to re-execute the tool in the VM from the submitted disk image.

set -x
# this is for BenchKit: configuration of major elements for the test
export BK_INPUT="BridgeAndVehicles-PT-V10P10N10"
export BK_EXAMINATION="LTLCardinality"
export BK_TOOL="itstoolsl"
export BK_RESULT_DIR="/tmp/BK_RESULTS/OUTPUTS"
export BK_TIME_CONFINEMENT="3600"
export BK_MEMORY_CONFINEMENT="16384"

# this is specific to your benchmark or test

export BIN_DIR="$HOME/BenchKit/bin"

# remove the execution directory if it exists (to avoid growth of the .vmdk images)
if [ -d execution ] ; then
    rm -rf execution
fi

tar xzf /home/mcc/BenchKit/INPUTS/BridgeAndVehicles-PT-V10P10N10.tgz
mv BridgeAndVehicles-PT-V10P10N10 execution
cd execution
pwd
ls -lh

# this is for BenchKit: explicit launching of the test
echo "====================================================================="
echo " Generated by BenchKit 2-3637"
echo " Executing tool itstoolsl"
echo " Input is BridgeAndVehicles-PT-V10P10N10, examination is LTLCardinality"
echo " Time confinement is $BK_TIME_CONFINEMENT seconds"
echo " Memory confinement is 16384 MBytes"
echo " Number of cores is 4"
echo " Run identifier is r225-ebro-152732379600075"
echo "====================================================================="
echo
echo "--------------------"
echo "content from stdout:"
echo
echo "=== Data for post analysis generated by BenchKit (invocation template)"
echo
if [ "LTLCardinality" = "UpperBounds" ] ; then
echo "The expected result is a vector of positive values"
echo NUM_VECTOR
elif [ "LTLCardinality" != "StateSpace" ] ; then
echo "The expected result is a vector of booleans"
echo BOOL_VECTOR
else
echo "no data necessary for post analysis"
fi
echo
if [ -f "LTLCardinality.txt" ] ; then
echo "here is the order used to build the result vector(from text file)"
for x in $(grep Property LTLCardinality.txt | cut -d ' ' -f 2 | sort -u) ; do
echo "FORMULA_NAME $x"
done
elif [ -f "LTLCardinality.xml" ] ; then # for cunf (txt files deleted;-)
echo echo "here is the order used to build the result vector(from xml file)"
for x in $(grep '' LTLCardinality.xml | cut -d '>' -f 2 | cut -d '<' -f 1 | sort -u) ; do
echo "FORMULA_NAME $x"
done
fi
echo
echo "=== Now, execution of the tool begins"
echo
echo -n "BK_START "
date -u +%s%3N
echo
timeout -s 9 $BK_TIME_CONFINEMENT bash -c "/home/mcc/BenchKit/BenchKit_head.sh 2> STDERR ; echo ; echo -n \"BK_STOP \" ; date -u +%s%3N"
if [ $? -eq 137 ] ; then
    echo
    echo "BK_TIME_CONFINEMENT_REACHED"
fi
echo
echo "--------------------"
echo "content from stderr:"
echo
cat STDERR ;