CTEST_FULL_OUTPUT
--------------------------------------------------------------------------
The library attempted to open the following supporting CUDA libraries,
but each of them failed. CUDA-aware support is disabled.
libcuda.so.1: cannot open shared object file: No such file or directory
/usr/lib64/libcuda.so.1: cannot open shared object file: No such file or directory
If you are not interested in CUDA-aware support, then run with
--mca mpi_cuda_support 0 to suppress this message. If you are interested
in CUDA-aware support, then try setting LD_LIBRARY_PATH to the location
of libcuda.so.1 to get past this issue.
--------------------------------------------------------------------------
nodes 2 rank 0
nodes 2 rank 1
Processor 0 out of 2 rank in CG_COMM 0 size of CG_COMM 1 size of FG_COMM 2 rank in FG_COMM1 0 size of FG_COMM1 2
Processor 1 out of 2 rank in FG_COMM 1 size of FG_COMM 2 rank in FG_COMM1 1 size of FG_COMM1 2
Inside initialize
Inside initialize
thetname_pdb
/tmp/cdash/homology/source/PARAM/thetaml.5parm
51 opened
thetname_pdb
/tmp/cdash/homology/source/PARAM/thetaml.5parm
51 opened
MPI: node= 0 iseed= -3059742
indpdb= 25 pdbref= T
Call Read_Bridge.
ns= 0
Processor 0 0 0 ivec_start 1
ivec_end 23
ns= 0
Processor 1 1 1 ivec_start 24
ivec_end 47
SUMSL return code is 4 eval 886
# eval/s 755.2334428606965
refstr= T
Processor 1 CG group 0 absolute rank 1
leaves ERGASTULUM.
Processor 1 wait times for respective orders order[ 0 ]
1.1190652847290039E-002 order[ 1 ] 3.9843082427978516E-002
order[ 2 ] 0.000000000000000 order[ 3 ]
0.000000000000000 order[ 4 ] 0.000000000000000
order[ 5 ] 0.000000000000000 order[ 6 ]
0.000000000000000 order[ 7 ] 0.3236794471740723
order[ 8 ] 0.000000000000000 order[ 9 ]
0.000000000000000 order[ 10 ] 0.000000000000000
order[ 11 ] 0.000000000000000 order[ 12 ]
0.000000000000000
Total wall clock time 1.288190841674805 sec
Processor 0 BROADCAST time 4.8575401306152344E-003 REDUCE time
7.4434280395507813E-003 GATHER time 0.000000000000000 SCATTER time
0.000000000000000 SENDRECV 0.000000000000000 BARRIER ene
1.7716884613037109E-003 BARRIER grad 2.0022392272949219E-003
CG processor 0 is finishing work.
Warning: ieee_invalid is signaling
Warning: ieee_divide_by_zero is signaling
Warning: ieee_inexact is signaling
Bye Bye...
Warning: ieee_invalid is signaling
Warning: ieee_divide_by_zero is signaling
Warning: ieee_inexact is signaling
Bye Bye...
SUMSL return code 4
SUMSL return code 4
WARNING: energy is somewhat different .156100, ene= -158.524100
-------------------------------------------------------
Primary job terminated normally, but 1 process returned
a non-zero exit code. Per user-direction, the job has been aborted.
-------------------------------------------------------
WARNING: energy is somewhat different .156100, ene= -158.524100
--------------------------------------------------------------------------
mpirun detected that one or more processes exited with non-zero status, thus causing
the job to be terminated. The first process to do so was:
Process name: [[29946,1],0]
Exit code: 1
--------------------------------------------------------------------------
[dell15:185419] 1 more process has sent help message help-mpi-common-cuda.txt / dlopen failed
[dell15:185419] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages