CTEST_FULL_OUTPUT
--------------------------------------------------------------------------
The library attempted to open the following supporting CUDA libraries,
but each of them failed. CUDA-aware support is disabled.
libcuda.so.1: cannot open shared object file: No such file or directory
/usr/lib64/libcuda.so.1: cannot open shared object file: No such file or directory
If you are not interested in CUDA-aware support, then run with
--mca mpi_cuda_support 0 to suppress this message. If you are interested
in CUDA-aware support, then try setting LD_LIBRARY_PATH to the location
of libcuda.so.1 to get past this issue.
--------------------------------------------------------------------------
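The help message above suggests two remedies. A minimal shell sketch of both follows; the launch command and binary name are assumptions, since the log does not show how the test was invoked:

```shell
# Option 1: disable the CUDA-aware support check entirely
# (same effect as passing --mca mpi_cuda_support 0 to mpirun).
export OMPI_MCA_mpi_cuda_support=0

# Option 2: point the dynamic loader at libcuda.so.1 if a CUDA
# driver is installed (the path is an assumption; adjust to where
# libcuda.so.1 actually lives on the system).
export LD_LIBRARY_PATH=/usr/lib64:$LD_LIBRARY_PATH

# Hypothetical launch; substitute the real test executable:
# mpirun --mca mpi_cuda_support 0 -np 2 ./test_binary
```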
nodes 2 rank 0
nodes 2 rank 1
Processor 0 out of 2 rank in CG_COMM 0 size of CG_COMM 1 size of FG_COMM 2 rank in FG_COMM1 0 size of FG_COMM1 2
Processor 1 out of 2 rank in FG_COMM 1 size of FG_COMM 2 rank in FG_COMM1 1 size of FG_COMM1 2
Inside initialize
Inside initialize
thetname_pdb
/tmp/cdash/homology/source/PARAM/thetaml.5parm
thetname_pdb
/tmp/cdash/homology/source/PARAM/thetaml.5parm
51 opened
51 opened
MPI: node= 0 iseed= -3059742
indpdb= 25 pdbref= T
ns= 0
Call Read_Bridge.
ns= 0
Processor 1 1 1 ivec_start 24
ivec_end 47
Processor 0 0 0 ivec_start 1
ivec_end 23
SUMSL return code is 4 eval 886
# eval/s 1246.044940934341
refstr= T
Processor 1 CG group 0 absolute rank 1
Leaves ERGASTULUM.
Total wall clock time 0.7605061531066895 sec
Processor 0 BROADCAST time 3.6013126373291016E-003 REDUCE time
4.3258666992187500E-003 GATHER time 0.000000000000000 SCATTER time
0.000000000000000 SENDRECV 0.000000000000000 BARRIER ene
1.1198520660400391E-003 BARRIER grad 9.2124938964843750E-004
CG processor 0 is finishing work.
Processor 1 wait times for respective orders order[ 0 ]
1.2965202331542969E-002 order[ 1 ] 2.3664951324462891E-002
order[ 2 ] 0.000000000000000 order[ 3 ] 0.000000000000000
order[ 4 ] 0.000000000000000 order[ 5 ] 0.000000000000000
order[ 6 ] 0.000000000000000 order[ 7 ] 0.1871254444122314
order[ 8 ] 0.000000000000000 order[ 9 ] 0.000000000000000
order[ 10 ] 0.000000000000000 order[ 11 ] 0.000000000000000
order[ 12 ] 0.000000000000000
Warning: ieee_invalid is signaling
Warning: ieee_divide_by_zero is signaling
Warning: ieee_inexact is signaling
Bye Bye...
Warning: ieee_invalid is signaling
Warning: ieee_divide_by_zero is signaling
Warning: ieee_inexact is signaling
Bye Bye...
SUMSL return code 4
SUMSL return code 4
WARNING: energy is somewhat different .156100, ene= -158.524100
-------------------------------------------------------
Primary job terminated normally, but 1 process returned
a non-zero exit code. Per user-direction, the job has been aborted.
-------------------------------------------------------
WARNING: energy is somewhat different .156100, ene= -158.524100
--------------------------------------------------------------------------
mpirun detected that one or more processes exited with non-zero status, thus causing
the job to be terminated. The first process to do so was:
Process name: [[37440,1],1]
Exit code: 1
--------------------------------------------------------------------------
[dell15:78578] 1 more process has sent help message help-mpi-common-cuda.txt / dlopen failed
[dell15:78578] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages