Execution Time: 1.02s

Test: UNRES_MIN_prota (Passed)
Build: homology Linux-pgi-MPI E0LL2Y (dell15) on 2020-03-05 04:26:52
Repository revision: 86a7a0fff3fb6d1c18791c4d2723ee3173d2cf65



Test output
CTEST_FULL_OUTPUT
--------------------------------------------------------------------------
The library attempted to open the following supporting CUDA libraries, 
but each of them failed.  CUDA-aware support is disabled.
libcuda.so.1: cannot open shared object file: No such file or directory
/usr/lib64/libcuda.so.1: cannot open shared object file: No such file or directory
If you are not interested in CUDA-aware support, then run with 
--mca mpi_cuda_support 0 to suppress this message.  If you are interested
in CUDA-aware support, then try setting LD_LIBRARY_PATH to the location
of libcuda.so.1 to get passed this issue.
--------------------------------------------------------------------------
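The Open MPI warning above suggests two remedies. A minimal sketch of both as shell commands follows; the binary name `./unres` and the library path `/usr/lib64/nvidia` are illustrative assumptions, not taken from this log:

```shell
# Option 1: CUDA-aware support is not needed -- suppress the warning
# (./unres is a hypothetical binary name for this test)
mpirun --mca mpi_cuda_support 0 -np 2 ./unres

# Option 2: CUDA-aware support is wanted -- point the loader at libcuda.so.1
# (/usr/lib64/nvidia is an assumed location; use the actual CUDA driver path)
export LD_LIBRARY_PATH=/usr/lib64/nvidia:${LD_LIBRARY_PATH}
mpirun -np 2 ./unres
```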
 nodes            2  rank            0
 nodes            2  rank            1
 Processor            0  out of            2  rank in CG_COMM            0  size of CG_COMM            1  size of FG_COMM            2  rank in FG_COMM1            0  size of FG_COMM1            2
 Processor            1  out of            2  rank in FG_COMM            1  size of FG_COMM            2  rank in FG_COMM1            1  size of FG_COMM1            2
 Inside initialize
 Inside initialize
 thetname_pdb /tmp/cdash/homology/source/PARAM/thetaml.5parm
           51  opened
 thetname_pdb /tmp/cdash/homology/source/PARAM/thetaml.5parm
           51  opened
 MPI: node=             0  iseed=                  -3059742
 indpdb=           25  pdbref=  T
 ns=            0
 Call Read_Bridge.
 ns=            0
 Processor            1            1            1  ivec_start           24  ivec_end           47
 Processor            0            0            0  ivec_start            1  ivec_end           23
 SUMSL return code is            4  eval           886
 # eval/s    1123.519913991992     
 refstr=  T
 Processor            1  CG group            0  absolute rank            1  leaves ERGASTULUM.
 Total wall clock time   0.8287148475646973       sec
 Processor            0  BROADCAST time   3.3113956451416016E-003  REDUCE time
    9.3141794204711914E-002  GATHER time    0.000000000000000       SCATTER time
      0.000000000000000       SENDRECV    0.000000000000000       BARRIER ene
    1.1217594146728516E-003  BARRIER grad   9.2053413391113281E-004
 Processor            1  wait times for respective orders order[            0 ]
     1.0637283325195313E-002  order[            1 ]   2.6120185852050781E-002
   order[            2 ]    0.000000000000000       order[            3 ]
     0.000000000000000       order[            4 ]    0.000000000000000
   order[            5 ]    0.000000000000000       order[            6 ]
     0.000000000000000       order[            7 ]   0.1879549026489258
   order[            8 ]    0.000000000000000       order[            9 ]
     0.000000000000000       order[           10 ]    0.000000000000000
   order[           11 ]    0.000000000000000       order[           12 ]
     0.000000000000000
CG processor   0 is finishing work.
Warning: ieee_invalid is signaling
Warning: ieee_divide_by_zero is signaling
Warning: ieee_inexact is signaling
Bye Bye...
Warning: ieee_invalid is signaling
Warning: ieee_divide_by_zero is signaling
Warning: ieee_inexact is signaling
Bye Bye...
SUMSL return code 4
SUMSL return code 4
OK: absdiff= .156100, ene= -158.524100
OK: absdiff= .156100, ene= -158.524100
[dell15:14367] 1 more process has sent help message help-mpi-common-cuda.txt / dlopen failed
[dell15:14367] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages
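As the final message notes, Open MPI aggregates repeated help messages by default. A sketch of the suggested MCA setting to see every message individually (the binary name `./unres` is an illustrative assumption):

```shell
# Disable help-message aggregation so each process's CUDA dlopen
# warning is printed in full rather than summarized
mpirun --mca orte_base_help_aggregate 0 -np 2 ./unres
```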