CTEST_FULL_OUTPUT
--------------------------------------------------------------------------
The library attempted to open the following supporting CUDA libraries,
but each of them failed. CUDA-aware support is disabled.
libcuda.so.1: cannot open shared object file: No such file or directory
/usr/lib64/libcuda.so.1: cannot open shared object file: No such file or directory
If you are not interested in CUDA-aware support, then run with
--mca mpi_cuda_support 0 to suppress this message. If you are interested
in CUDA-aware support, then try setting LD_LIBRARY_PATH to the location
of libcuda.so.1 to get past this issue.
--------------------------------------------------------------------------
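The help message above offers two remedies. As a sketch (the program name `./unres_test` and the library directory are placeholders, assuming Open MPI's `mpirun`):

```shell
# Option 1: suppress the warning when CUDA-aware support is not needed
mpirun --mca mpi_cuda_support 0 -np 2 ./unres_test

# Option 2: point the dynamic loader at libcuda.so.1 before launching
# (replace the directory with wherever libcuda.so.1 actually lives)
export LD_LIBRARY_PATH=/path/to/cuda/lib64:$LD_LIBRARY_PATH
mpirun -np 2 ./unres_test
```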
nodes 2 rank 0
nodes 2 rank 1
Processor 0 out of 2 rank in CG_COMM 0
size of CG_COMM 1 size of FG_COMM 2 rank in FG_COMM1
0 size of FG_COMM1 2
Processor 1 out of 2 rank in FG_COMM 1
size of FG_COMM 2 rank in FG_COMM1 1 size of FG_COMM1
2
Inside initialize
Inside initialize
thetname_pdb
/tmp/cdash/homology/source/PARAM/thetaml.5parm
thetname_pdb
/tmp/cdash/homology/source/PARAM/thetaml.5parm
51 opened
51 opened
ns= 0
MPI: node= 0 iseed= -3059742
Processor 1 1 1 ivec_start 24
ivec_end 47
indpdb= 25 pdbref= T
Call Read_Bridge.
ns= 0
Processor 0 0 0 ivec_start 1
ivec_end 23
SUMSL return code is 4 eval 886
# eval/s 757.4390127772814
refstr= T
Processor 1 CG group 0 absolute rank 1
leaves ERGASTULUM.
Total wall clock time 1.226197957992554 sec
Processor 0 BROADCAST time 5.3343772888183594E-003 REDUCE time
6.8871974945068359E-003 GATHER time 0.000000000000000 SCATTER time
0.000000000000000 SENDRECV 0.000000000000000 BARRIER ene
1.7316341400146484E-003 BARRIER grad 1.1088848114013672E-003
CG processor 0 is finishing work.
Processor 1 wait times for respective orders order[ 0 ]
2.7682781219482422E-002 order[ 1 ] 3.8537025451660156E-002
order[ 2 ] 0.000000000000000 order[ 3 ] 0.000000000000000
order[ 4 ] 0.000000000000000 order[ 5 ] 0.000000000000000
order[ 6 ] 0.000000000000000 order[ 7 ] 0.3247745037078857
order[ 8 ] 0.000000000000000 order[ 9 ] 0.000000000000000
order[ 10 ] 0.000000000000000 order[ 11 ] 0.000000000000000
order[ 12 ] 0.000000000000000
Warning: ieee_invalid is signaling
Warning: ieee_divide_by_zero is signaling
Warning: ieee_inexact is signaling
Bye Bye...
Warning: ieee_invalid is signaling
Warning: ieee_divide_by_zero is signaling
Warning: ieee_inexact is signaling
Bye Bye...
SUMSL return code 4
SUMSL return code 4
WARNING: energy is somewhat different .156100, ene= -158.524100
-------------------------------------------------------
Primary job terminated normally, but 1 process returned
a non-zero exit code. Per user-direction, the job has been aborted.
-------------------------------------------------------
WARNING: energy is somewhat different .156100, ene= -158.524100
--------------------------------------------------------------------------
mpirun detected that one or more processes exited with non-zero status, thus causing
the job to be terminated. The first process to do so was:
Process name: [[11132,1],1]
Exit code: 1
--------------------------------------------------------------------------
[dell15:166861] 1 more process has sent help message help-mpi-common-cuda.txt / dlopen failed
[dell15:166861] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages