Model Checking Contest 2018
8th edition, Bratislava, Slovakia, June 26, 2018
Execution of r260-csrt-152732585500251
Last Updated
June 26, 2018

About the Execution of ITS-Tools for PhilosophersDyn-PT-10

Execution Summary

Max Memory Used (MB): 15756.270
Time wait (ms):       184443.00
CPU Usage (ms):       384264.00
I/O Wait (ms):        1085.40
Computed Result:      FFFFTTTFTTTFFTTF
Execution Status:     normal
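The computed result is an ordered verdict vector: the i-th character gives the truth value of formula PhilosophersDyn-PT-10-LTLCardinality-i, following the FORMULA_NAME order listed in the trace below. A minimal shell sketch of that pairing (not part of the contest tooling; trace.txt stands for a hypothetical saved copy of the trace on this page):

# pair each character of the verdict vector with the formula names from the trace
RESULT="FFFFTTTFTTTFFTTF"
i=0
grep '^FORMULA_NAME' trace.txt | while read -r _ name ; do
    echo "$name -> ${RESULT:$i:1}"
    i=$((i+1))
done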

Execution Chart

We display below the execution chart for this examination (boot time has been removed).

Trace from the execution

Waiting for the VM to be ready (probing ssh)
....................................................
/home/mcc/execution
total 4.8M
-rw-r--r-- 1 mcc users 19K May 15 18:54 CTLCardinality.txt
-rw-r--r-- 1 mcc users 63K May 15 18:54 CTLCardinality.xml
-rw-r--r-- 1 mcc users 151K May 15 18:54 CTLFireability.txt
-rw-r--r-- 1 mcc users 585K May 15 18:54 CTLFireability.xml
-rw-r--r-- 1 mcc users 4.0K May 15 18:50 GenericPropertiesDefinition.xml
-rw-r--r-- 1 mcc users 5.9K May 15 18:50 GenericPropertiesVerdict.xml
-rw-r--r-- 1 mcc users 8.9K May 26 09:27 LTLCardinality.txt
-rw-r--r-- 1 mcc users 29K May 26 09:27 LTLCardinality.xml
-rw-r--r-- 1 mcc users 58K May 26 09:27 LTLFireability.txt
-rw-r--r-- 1 mcc users 212K May 26 09:27 LTLFireability.xml
-rw-r--r-- 1 mcc users 21K May 15 18:54 ReachabilityCardinality.txt
-rw-r--r-- 1 mcc users 62K May 15 18:54 ReachabilityCardinality.xml
-rw-r--r-- 1 mcc users 112 May 15 18:54 ReachabilityDeadlock.txt
-rw-r--r-- 1 mcc users 350 May 15 18:54 ReachabilityDeadlock.xml
-rw-r--r-- 1 mcc users 232K May 15 18:54 ReachabilityFireability.txt
-rw-r--r-- 1 mcc users 873K May 15 18:54 ReachabilityFireability.xml
-rw-r--r-- 1 mcc users 4.7K May 15 18:54 UpperBounds.txt
-rw-r--r-- 1 mcc users 9.9K May 15 18:54 UpperBounds.xml
-rw-r--r-- 1 mcc users 5 May 15 18:50 equiv_col
-rw-r--r-- 1 mcc users 3 May 15 18:50 instance
-rw-r--r-- 1 mcc users 6 May 15 18:50 iscolored
-rw-r--r-- 1 mcc users 2.5M May 15 18:50 model.pnml
=====================================================================
Generated by BenchKit 2-3637
Executing tool itstools
Input is PhilosophersDyn-PT-10, examination is LTLCardinality
Time confinement is 3600 seconds
Memory confinement is 16384 MBytes
Number of cores is 4
Run identifier is r260-csrt-152732585500251
=====================================================================


--------------------
content from stdout:

=== Data for post analysis generated by BenchKit (invocation template)

The expected result is a vector of booleans
BOOL_VECTOR

here is the order used to build the result vector(from text file)
FORMULA_NAME PhilosophersDyn-PT-10-LTLCardinality-00
FORMULA_NAME PhilosophersDyn-PT-10-LTLCardinality-01
FORMULA_NAME PhilosophersDyn-PT-10-LTLCardinality-02
FORMULA_NAME PhilosophersDyn-PT-10-LTLCardinality-03
FORMULA_NAME PhilosophersDyn-PT-10-LTLCardinality-04
FORMULA_NAME PhilosophersDyn-PT-10-LTLCardinality-05
FORMULA_NAME PhilosophersDyn-PT-10-LTLCardinality-06
FORMULA_NAME PhilosophersDyn-PT-10-LTLCardinality-07
FORMULA_NAME PhilosophersDyn-PT-10-LTLCardinality-08
FORMULA_NAME PhilosophersDyn-PT-10-LTLCardinality-09
FORMULA_NAME PhilosophersDyn-PT-10-LTLCardinality-10
FORMULA_NAME PhilosophersDyn-PT-10-LTLCardinality-11
FORMULA_NAME PhilosophersDyn-PT-10-LTLCardinality-12
FORMULA_NAME PhilosophersDyn-PT-10-LTLCardinality-13
FORMULA_NAME PhilosophersDyn-PT-10-LTLCardinality-14
FORMULA_NAME PhilosophersDyn-PT-10-LTLCardinality-15

=== Now, execution of the tool begins

BK_START 1527494086064

Using solver Z3 to compute partial order matrices.
Built C files in :
/home/mcc/execution
Invoking ITS tools like this :CommandLine [args=[/home/mcc/BenchKit/itstools/plugins/fr.lip6.move.gal.itstools.binaries_1.0.0.201805151631/bin/its-ltl-linux64, --gc-threshold, 2000000, -i, /home/mcc/execution/LTLCardinality.pnml.gal, -t, CGAL, -LTL, /home/mcc/execution/LTLCardinality.ltl, -c, -stutter-deadlock], workingDir=/home/mcc/execution]

its-ltl command run as :

/home/mcc/BenchKit/itstools/plugins/fr.lip6.move.gal.itstools.binaries_1.0.0.201805151631/bin/its-ltl-linux64 --gc-threshold 2000000 -i /home/mcc/execution/LTLCardinality.pnml.gal -t CGAL -LTL /home/mcc/execution/LTLCardinality.ltl -c -stutter-deadlock
Read 16 LTL properties
Checking formula 0 : !(((("((((((((((Think_9+Think_8)+Think_1)+Think_2)+Think_3)+Think_7)+Think_6)+Think_4)+Think_5)+Think_10)>=1)")U("((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((Neighbourhood_7_3+Neighbourhood_2_10)+Neighbourhood_9_1)+Neighbourhood_3_8)+Neighbourhood_8_2)+Neighbourhood_5_5)+Neighbourhood_8_9)+Neighbourhood_3_4)+Neighbourhood_7_7)+Neighbourhood_3_6)+Neighbourhood_4_8)+Neighbourhood_5_10)+Neighbourhood_5_1)+Neighbourhood_6_5)+Neighbourhood_1_9)+Neighbourhood_1_7)+Neighbourhood_6_3)+Neighbourhood_9_2)+Neighbourhood_2_2)+Neighbourhood_10_4)+Neighbourhood_7_5)+Neighbourhood_1_2)+Neighbourhood_5_8)+Neighbourhood_6_1)+Neighbourhood_9_9)+Neighbourhood_2_7)+Neighbourhood_6_7)+Neighbourhood_10_2)+Neighbourhood_4_4)+Neighbourhood_2_6)+Neighbourhood_5_3)+Neighbourhood_4_10)+Neighbourhood_3_1)+Neighbourhood_9_4)+Neighbourhood_9_7)+Neighbourhood_8_5)+Neighbourhood_4_3)+Neighbourhood_6_8)+Neighbourhood_9_6)+Neighbourhood_10_8)+Neighbourhood_1_6)+Neighbourhood_9_8)+Neighbourhood_1_4)+Neighbourhood_6_9)+Neighbourhood_7_10)+Neighbourhood_8_7)+Neighbourhood_7_8)+Neighbourhood_4_1)+Neighbourhood_6_10)+Neighbourhood_4_2)+Neighbourhood_3_3)+Neighbourhood_1_5)+Neighbourhood_2_4)+Neighbourhood_8_8)+Neighbourhood_7_9)+Neighbourhood_10_6)+Neighbourhood_10_5)+Neighbourhood_3_2)+Neighbourhood_9_3)+Neighbourhood_4_7)+Neighbourhood_6_4)+Neighbourhood_2_3)+Neighbourhood_5_2)+Neighbourhood_6_6)+Neighbourhood_8_1)+Neighbourhood_10_3)+Neighbourhood_2_5)+Neighbourhood_1_1)+Neighbourhood_10_7)+Neighbourhood_1_3)+Neighbourhood_9_5)+Neighbourhood_7_1)+Neighbourhood_3_7)+Neighbourhood_8_6)+Neighbourhood_6_2)+Neighbourhood_5_4)+Neighbourhood_8_10)+Neighbourhood_5_9)+Neighbourhood_9_10)+Neighbourhood_4_9)+Neighbourhood_7_6)+Neighbourhood_2_1)+Neighbourhood_1_8)+Neighbourhood_3_5)+Neighbourhood_10_9)+Neighbourhood_3_9)+Neighbourhood_8_4)+Neighbourhood_5_6)+Neighbourhood_10_1)+Neighbourhood_7_2)+Neighbourhood_4_5)+Neighbourhood_2_8)+Neighbourhood_10_10)+Neighbourhood_4_6)+Neighbourhood_7_4)+Neighbourhood_8_3)+Neighbourhood_1_10)+Neighbourhood_2_9)+Neighbourhood_5_7)+Neighbourhood_3_10)<=(((((((((HasLeft_6+HasLeft_1)+HasLeft_8)+HasLeft_9)+HasLeft_3)+HasLeft_10)+HasLeft_2)+HasLeft_5)+HasLeft_4)+HasLeft_7))"))U(F(G("((((((((((Forks_10+Forks_8)+Forks_1)+Forks_7)+Forks_6)+Forks_5)+Forks_4)+Forks_3)+Forks_9)+Forks_2)>=2)")))))
Formula 0 simplified : !(("((((((((((Think_9+Think_8)+Think_1)+Think_2)+Think_3)+Think_7)+Think_6)+Think_4)+Think_5)+Think_10)>=1)" U "((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((Neighbourhood_7_3+Neighbourhood_2_10)+Neighbourhood_9_1)+Neighbourhood_3_8)+Neighbourhood_8_2)+Neighbourhood_5_5)+Neighbourhood_8_9)+Neighbourhood_3_4)+Neighbourhood_7_7)+Neighbourhood_3_6)+Neighbourhood_4_8)+Neighbourhood_5_10)+Neighbourhood_5_1)+Neighbourhood_6_5)+Neighbourhood_1_9)+Neighbourhood_1_7)+Neighbourhood_6_3)+Neighbourhood_9_2)+Neighbourhood_2_2)+Neighbourhood_10_4)+Neighbourhood_7_5)+Neighbourhood_1_2)+Neighbourhood_5_8)+Neighbourhood_6_1)+Neighbourhood_9_9)+Neighbourhood_2_7)+Neighbourhood_6_7)+Neighbourhood_10_2)+Neighbourhood_4_4)+Neighbourhood_2_6)+Neighbourhood_5_3)+Neighbourhood_4_10)+Neighbourhood_3_1)+Neighbourhood_9_4)+Neighbourhood_9_7)+Neighbourhood_8_5)+Neighbourhood_4_3)+Neighbourhood_6_8)+Neighbourhood_9_6)+Neighbourhood_10_8)+Neighbourhood_1_6)+Neighbourhood_9_8)+Neighbourhood_1_4)+Neighbourhood_6_9)+Neighbourhood_7_10)+Neighbourhood_8_7)+Neighbourhood_7_8)+Neighbourhood_4_1)+Neighbourhood_6_10)+Neighbourhood_4_2)+Neighbourhood_3_3)+Neighbourhood_1_5)+Neighbourhood_2_4)+Neighbourhood_8_8)+Neighbourhood_7_9)+Neighbourhood_10_6)+Neighbourhood_10_5)+Neighbourhood_3_2)+Neighbourhood_9_3)+Neighbourhood_4_7)+Neighbourhood_6_4)+Neighbourhood_2_3)+Neighbourhood_5_2)+Neighbourhood_6_6)+Neighbourhood_8_1)+Neighbourhood_10_3)+Neighbourhood_2_5)+Neighbourhood_1_1)+Neighbourhood_10_7)+Neighbourhood_1_3)+Neighbourhood_9_5)+Neighbourhood_7_1)+Neighbourhood_3_7)+Neighbourhood_8_6)+Neighbourhood_6_2)+Neighbourhood_5_4)+Neighbourhood_8_10)+Neighbourhood_5_9)+Neighbourhood_9_10)+Neighbourhood_4_9)+Neighbourhood_7_6)+Neighbourhood_2_1)+Neighbourhood_1_8)+Neighbourhood_3_5)+Neighbourhood_10_9)+Neighbourhood_3_9)+Neighbourhood_8_4)+Neighbourhood_5_6)+Neighbourhood_10_1)+Neighbourhood_7_2)+Neighbourhood_4_5)+Neighbourhood_2_8)+Neighbourhood_10_10)+Neighbourhood_4_6)+Neighbourhood_7_4)+Neighbourhood_8_3)+Neighbourhood_1_10)+Neighbourhood_2_9)+Neighbourhood_5_7)+Neighbourhood_3_10)<=(((((((((HasLeft_6+HasLeft_1)+HasLeft_8)+HasLeft_9)+HasLeft_3)+HasLeft_10)+HasLeft_2)+HasLeft_5)+HasLeft_4)+HasLeft_7))") U FG"((((((((((Forks_10+Forks_8)+Forks_1)+Forks_7)+Forks_6)+Forks_5)+Forks_4)+Forks_3)+Forks_9)+Forks_2)>=2)")
Running compilation step : CommandLine [args=[gcc, -c, -I/home/mcc/BenchKit//lts_install_dir//include, -I., -std=c99, -fPIC, -O3, model.c], workingDir=/home/mcc/execution]
Compilation finished in 30955 ms.
Running link step : CommandLine [args=[gcc, -shared, -o, gal.so, model.o], workingDir=/home/mcc/execution]
Link finished in 97 ms.
Running LTSmin : CommandLine [args=[/home/mcc/BenchKit//lts_install_dir//bin/pins2lts-mc, ./gal.so, --threads=1, -p, --pins-guards, --when, --ltl, (((LTLAP0==true))U((LTLAP1==true)))U(<>([]((LTLAP2==true)))), --buchi-type=spotba], workingDir=/home/mcc/execution]
LTSmin run took 14243 ms.
FORMULA PhilosophersDyn-PT-10-LTLCardinality-00 FALSE TECHNIQUES PARTIAL_ORDER EXPLICIT LTSMIN SAT_SMT
Running LTSmin : CommandLine [args=[/home/mcc/BenchKit//lts_install_dir//bin/pins2lts-mc, ./gal.so, --threads=1, -p, --pins-guards, --when, --ltl, (LTLAP3==true), --buchi-type=spotba], workingDir=/home/mcc/execution]
LTSmin run took 14173 ms.
FORMULA PhilosophersDyn-PT-10-LTLCardinality-01 FALSE TECHNIQUES PARTIAL_ORDER EXPLICIT LTSMIN SAT_SMT
Running LTSmin : CommandLine [args=[/home/mcc/BenchKit//lts_install_dir//bin/pins2lts-mc, ./gal.so, --threads=1, --when, --ltl, X([](<>([]((LTLAP4==true))))), --buchi-type=spotba], workingDir=/home/mcc/execution]
LTSmin run took 231 ms.
FORMULA PhilosophersDyn-PT-10-LTLCardinality-02 FALSE TECHNIQUES PARTIAL_ORDER EXPLICIT LTSMIN SAT_SMT
Running LTSmin : CommandLine [args=[/home/mcc/BenchKit//lts_install_dir//bin/pins2lts-mc, ./gal.so, --threads=1, -p, --pins-guards, --when, --ltl, (((LTLAP5==true))U((LTLAP6==true)))U((LTLAP7==true)), --buchi-type=spotba], workingDir=/home/mcc/execution]
LTSmin run took 14327 ms.
FORMULA PhilosophersDyn-PT-10-LTLCardinality-03 FALSE TECHNIQUES PARTIAL_ORDER EXPLICIT LTSMIN SAT_SMT
Running LTSmin : CommandLine [args=[/home/mcc/BenchKit//lts_install_dir//bin/pins2lts-mc, ./gal.so, --threads=1, -p, --pins-guards, --when, --ltl, <>((LTLAP8==true)), --buchi-type=spotba], workingDir=/home/mcc/execution]
LTSmin run took 14169 ms.
FORMULA PhilosophersDyn-PT-10-LTLCardinality-04 TRUE TECHNIQUES PARTIAL_ORDER EXPLICIT LTSMIN SAT_SMT
Running LTSmin : CommandLine [args=[/home/mcc/BenchKit//lts_install_dir//bin/pins2lts-mc, ./gal.so, --threads=1, -p, --pins-guards, --when, --ltl, ([]((LTLAP9==true)))U((LTLAP10==true)), --buchi-type=spotba], workingDir=/home/mcc/execution]
LTSmin run took 14216 ms.
FORMULA PhilosophersDyn-PT-10-LTLCardinality-05 TRUE TECHNIQUES PARTIAL_ORDER EXPLICIT LTSMIN SAT_SMT
Running LTSmin : CommandLine [args=[/home/mcc/BenchKit//lts_install_dir//bin/pins2lts-mc, ./gal.so, --threads=1, --when, --ltl, X((LTLAP11==true)), --buchi-type=spotba], workingDir=/home/mcc/execution]
LTSmin run took 61 ms.
FORMULA PhilosophersDyn-PT-10-LTLCardinality-06 TRUE TECHNIQUES PARTIAL_ORDER EXPLICIT LTSMIN SAT_SMT
Running LTSmin : CommandLine [args=[/home/mcc/BenchKit//lts_install_dir//bin/pins2lts-mc, ./gal.so, --threads=1, -p, --pins-guards, --when, --ltl, <>(<>([]([]((LTLAP4==true))))), --buchi-type=spotba], workingDir=/home/mcc/execution]
LTSmin run took 15048 ms.
FORMULA PhilosophersDyn-PT-10-LTLCardinality-07 FALSE TECHNIQUES PARTIAL_ORDER EXPLICIT LTSMIN SAT_SMT
Running LTSmin : CommandLine [args=[/home/mcc/BenchKit//lts_install_dir//bin/pins2lts-mc, ./gal.so, --threads=1, --when, --ltl, (X(X((LTLAP12==true))))U(<>((LTLAP13==true))), --buchi-type=spotba], workingDir=/home/mcc/execution]
LTSmin run took 79 ms.
FORMULA PhilosophersDyn-PT-10-LTLCardinality-08 TRUE TECHNIQUES PARTIAL_ORDER EXPLICIT LTSMIN SAT_SMT
Running LTSmin : CommandLine [args=[/home/mcc/BenchKit//lts_install_dir//bin/pins2lts-mc, ./gal.so, --threads=1, -p, --pins-guards, --when, --ltl, ([](<>((LTLAP14==true))))U(((LTLAP15==true))U((LTLAP16==true))), --buchi-type=spotba], workingDir=/home/mcc/execution]
LTSmin run took 14040 ms.
FORMULA PhilosophersDyn-PT-10-LTLCardinality-09 TRUE TECHNIQUES PARTIAL_ORDER EXPLICIT LTSMIN SAT_SMT
Running LTSmin : CommandLine [args=[/home/mcc/BenchKit//lts_install_dir//bin/pins2lts-mc, ./gal.so, --threads=1, --when, --ltl, <>(X(<>(<>((LTLAP17==true))))), --buchi-type=spotba], workingDir=/home/mcc/execution]
LTSmin run took 15689 ms.
FORMULA PhilosophersDyn-PT-10-LTLCardinality-10 TRUE TECHNIQUES PARTIAL_ORDER EXPLICIT LTSMIN SAT_SMT
Running LTSmin : CommandLine [args=[/home/mcc/BenchKit//lts_install_dir//bin/pins2lts-mc, ./gal.so, --threads=1, --when, --ltl, X(X((LTLAP18==true))), --buchi-type=spotba], workingDir=/home/mcc/execution]
LTSmin run took 131 ms.
FORMULA PhilosophersDyn-PT-10-LTLCardinality-11 FALSE TECHNIQUES PARTIAL_ORDER EXPLICIT LTSMIN SAT_SMT
Running LTSmin : CommandLine [args=[/home/mcc/BenchKit//lts_install_dir//bin/pins2lts-mc, ./gal.so, --threads=1, --when, --ltl, [](<>(X([]((LTLAP19==true))))), --buchi-type=spotba], workingDir=/home/mcc/execution]
LTSmin run took 391 ms.
FORMULA PhilosophersDyn-PT-10-LTLCardinality-12 FALSE TECHNIQUES PARTIAL_ORDER EXPLICIT LTSMIN SAT_SMT
Running LTSmin : CommandLine [args=[/home/mcc/BenchKit//lts_install_dir//bin/pins2lts-mc, ./gal.so, --threads=1, -p, --pins-guards, --when, --ltl, (LTLAP20==true), --buchi-type=spotba], workingDir=/home/mcc/execution]
LTSmin run took 14223 ms.
FORMULA PhilosophersDyn-PT-10-LTLCardinality-13 TRUE TECHNIQUES PARTIAL_ORDER EXPLICIT LTSMIN SAT_SMT
Running LTSmin : CommandLine [args=[/home/mcc/BenchKit//lts_install_dir//bin/pins2lts-mc, ./gal.so, --threads=1, -p, --pins-guards, --when, --ltl, <>((LTLAP21==true)), --buchi-type=spotba], workingDir=/home/mcc/execution]
LTSmin run took 14394 ms.
FORMULA PhilosophersDyn-PT-10-LTLCardinality-14 TRUE TECHNIQUES PARTIAL_ORDER EXPLICIT LTSMIN SAT_SMT
Running LTSmin : CommandLine [args=[/home/mcc/BenchKit//lts_install_dir//bin/pins2lts-mc, ./gal.so, --threads=1, --when, --ltl, [](X(X((LTLAP22==true)))), --buchi-type=spotba], workingDir=/home/mcc/execution]
LTSmin run took 660 ms.
FORMULA PhilosophersDyn-PT-10-LTLCardinality-15 FALSE TECHNIQUES PARTIAL_ORDER EXPLICIT LTSMIN SAT_SMT
ITS tools runner thread asked to quit. Dying gracefully.

BK_STOP 1527494270507
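BK_START and BK_STOP are Unix timestamps in milliseconds (produced by the "date -u +%s%3N" calls in the script at the end of this page); their difference is the wall-clock duration reported as Time wait in the summary above:

echo $((1527494270507 - 1527494086064))   # 184443 ms of wall-clock time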

--------------------
content from stderr:

+ export BINDIR=/home/mcc/BenchKit/
+ BINDIR=/home/mcc/BenchKit/
++ pwd
+ export MODEL=/home/mcc/execution
+ MODEL=/home/mcc/execution
+ /home/mcc/BenchKit//runeclipse.sh /home/mcc/execution LTLCardinality -its -ltsminpath /home/mcc/BenchKit//lts_install_dir/ -smt
+ ulimit -s 65536
+ [[ -z '' ]]
+ export LTSMIN_MEM_SIZE=8589934592
+ LTSMIN_MEM_SIZE=8589934592
+ /home/mcc/BenchKit//itstools/its-tools -consoleLog -data /home/mcc/execution/workspace -pnfolder /home/mcc/execution -examination LTLCardinality -z3path /home/mcc/BenchKit//z3/bin/z3 -yices2path /home/mcc/BenchKit//yices/bin/yices -its -ltsminpath /home/mcc/BenchKit//lts_install_dir/ -smt -vmargs -Dosgi.locking=none -Declipse.stateSaveDelayInterval=-1 -Dosgi.configuration.area=/tmp/.eclipse -Xss8m -Xms40m -Xmx8192m -Dfile.encoding=UTF-8 -Dosgi.requiredJavaVersion=1.6
May 28, 2018 7:54:48 AM fr.lip6.move.gal.application.Application start
INFO: Running its-tools with arguments : [-pnfolder, /home/mcc/execution, -examination, LTLCardinality, -z3path, /home/mcc/BenchKit//z3/bin/z3, -yices2path, /home/mcc/BenchKit//yices/bin/yices, -its, -ltsminpath, /home/mcc/BenchKit//lts_install_dir/, -smt]
May 28, 2018 7:54:48 AM fr.lip6.move.gal.application.MccTranslator transformPNML
INFO: Parsing pnml file : /home/mcc/execution/model.pnml
May 28, 2018 7:54:48 AM fr.lip6.move.gal.nupn.PTNetReader loadFromXML
INFO: Load time of PNML (sax parser for PT used): 230 ms
May 28, 2018 7:54:48 AM fr.lip6.move.gal.pnml.togal.PTGALTransformer handlePage
INFO: Transformed 170 places.
May 28, 2018 7:54:49 AM fr.lip6.move.gal.pnml.togal.PTGALTransformer handlePage
INFO: Transformed 2310 transitions.
May 28, 2018 7:54:50 AM fr.lip6.move.gal.instantiate.GALRewriter flatten
INFO: Flatten gal took : 976 ms
May 28, 2018 7:54:50 AM fr.lip6.move.serialization.SerializationUtil systemToFile
INFO: Time to serialize gal into /home/mcc/execution/LTLCardinality.pnml.gal : 62 ms
May 28, 2018 7:54:50 AM fr.lip6.move.serialization.SerializationUtil serializePropertiesForITSLTLTools
INFO: Time to serialize properties into /home/mcc/execution/LTLCardinality.ltl : 1 ms
May 28, 2018 7:54:51 AM fr.lip6.move.gal.semantics.DeterministicNextBuilder getDeterministicNext
INFO: Input system was already deterministic with 2310 transitions.
May 28, 2018 7:54:51 AM fr.lip6.move.gal.gal2pins.Gal2PinsTransformerNext transform
INFO: Too many transitions (2310) to apply POR reductions. Disabling POR matrices.
May 28, 2018 7:54:52 AM fr.lip6.move.gal.gal2pins.Gal2PinsTransformerNext transform
INFO: Built C files in 1862ms conformant to PINS in folder :/home/mcc/execution
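The stderr log above covers the translation pipeline (PNML parsing, GAL flattening, PINS-conformant C code generation), while the stdout log shows the subsequent compile, link and model-checking cycle. A minimal sketch of that cycle, using the commands and paths reported in the trace and assuming the generated model.c is present in the working directory:

# compile and link the generated PINS model (as reported in the trace)
gcc -c -I/home/mcc/BenchKit/lts_install_dir/include -I. -std=c99 -fPIC -O3 model.c
gcc -shared -o gal.so model.o
# check one LTL formula with LTSmin (formula 00 from the trace, Buchi automaton from Spot)
/home/mcc/BenchKit/lts_install_dir/bin/pins2lts-mc ./gal.so --threads=1 -p --pins-guards \
    --when --ltl '(((LTLAP0==true))U((LTLAP1==true)))U(<>([]((LTLAP2==true))))' \
    --buchi-type=spotba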

Sequence of Actions to be Executed by the VM

This is useful if one wants to re-execute the tool in the VM from the submitted disk image (a minimal replay command is sketched after the script).

set -x
# this is for BenchKit: configuration of major elements for the test
export BK_INPUT="PhilosophersDyn-PT-10"
export BK_EXAMINATION="LTLCardinality"
export BK_TOOL="itstools"
export BK_RESULT_DIR="/tmp/BK_RESULTS/OUTPUTS"
export BK_TIME_CONFINEMENT="3600"
export BK_MEMORY_CONFINEMENT="16384"

# this is specific to your benchmark or test

export BIN_DIR="$HOME/BenchKit/bin"

# remove the execution directory if it exists (to avoid increase of .vmdk images)
if [ -d execution ] ; then
    rm -rf execution
fi

tar xzf /home/mcc/BenchKit/INPUTS/PhilosophersDyn-PT-10.tgz
mv PhilosophersDyn-PT-10 execution
cd execution
pwd
ls -lh

# this is for BenchKit: explicit launching of the test
echo "====================================================================="
echo " Generated by BenchKit 2-3637"
echo " Executing tool itstools"
echo " Input is PhilosophersDyn-PT-10, examination is LTLCardinality"
echo " Time confinement is $BK_TIME_CONFINEMENT seconds"
echo " Memory confinement is 16384 MBytes"
echo " Number of cores is 4"
echo " Run identifier is r260-csrt-152732585500251"
echo "====================================================================="
echo
echo "--------------------"
echo "content from stdout:"
echo
echo "=== Data for post analysis generated by BenchKit (invocation template)"
echo
if [ "LTLCardinality" = "UpperBounds" ] ; then
echo "The expected result is a vector of positive values"
echo NUM_VECTOR
elif [ "LTLCardinality" != "StateSpace" ] ; then
echo "The expected result is a vector of booleans"
echo BOOL_VECTOR
else
echo "no data necessary for post analysis"
fi
echo
if [ -f "LTLCardinality.txt" ] ; then
echo "here is the order used to build the result vector(from text file)"
for x in $(grep Property LTLCardinality.txt | cut -d ' ' -f 2 | sort -u) ; do
echo "FORMULA_NAME $x"
done
elif [ -f "LTLCardinality.xml" ] ; then # for cunf (txt files deleted;-)
echo echo "here is the order used to build the result vector(from xml file)"
for x in $(grep '' LTLCardinality.xml | cut -d '>' -f 2 | cut -d '<' -f 1 | sort -u) ; do
echo "FORMULA_NAME $x"
done
fi
echo
echo "=== Now, execution of the tool begins"
echo
echo -n "BK_START "
date -u +%s%3N
echo
timeout -s 9 $BK_TIME_CONFINEMENT bash -c "/home/mcc/BenchKit/BenchKit_head.sh 2> STDERR ; echo ; echo -n \"BK_STOP \" ; date -u +%s%3N"
if [ $? -eq 137 ] ; then
    echo
    echo "BK_TIME_CONFINEMENT_REACHED"
fi
echo
echo "--------------------"
echo "content from stderr:"
echo
cat STDERR ;
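
To replay this run, the sequence above can be saved as a shell script inside the submitted VM and executed under bash; the file name below is only illustrative:

bash replay-r260-csrt-152732585500251.sh > replay.out 2>&1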