Execution Time: 14.37s

Test: UNRES_M_MD_microcanonical (Failed)
Build: Linux-pgi-MPI E0LL2Y (dell15) on 2018-04-20 04:06:55
Repository revision: cfd3f3ea9ccac7a6b9cb0e802b4c9e5927a506f6

Exit Value: 1

Test output
CTEST_FULL_OUTPUT
--------------------------------------------------------------------------
The library attempted to open the following supporting CUDA libraries, 
but each of them failed.  CUDA-aware support is disabled.
libcuda.so.1: cannot open shared object file: No such file or directory
/usr/lib64/libcuda.so.1: cannot open shared object file: No such file or directory
If you are not interested in CUDA-aware support, then run with 
--mca mpi_cuda_support 0 to suppress this message.  If you are interested
in CUDA-aware support, then try setting LD_LIBRARY_PATH to the location
of libcuda.so.1 to get passed this issue.
--------------------------------------------------------------------------
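The warning above only means that Open MPI could not dlopen the CUDA driver library on this build node; it is informational and is not what fails the test. Below is a minimal sketch (assuming a stock Python 3 interpreter on the node; not part of the UNRES test suite) of the same dlopen probe, useful for checking a node before choosing between the two workarounds the message suggests:

# Probe whether the CUDA driver library can be dlopen'd, mirroring the
# check Open MPI performs when it prints the help message above.
import ctypes

def cuda_driver_available(libname="libcuda.so.1"):
    """Return True if the CUDA driver library can be loaded."""
    try:
        ctypes.CDLL(libname)
        return True
    except OSError as err:
        print(f"{libname}: {err}")
        return False

if __name__ == "__main__":
    if not cuda_driver_available():
        # Expected on a GPU-less node: silence the warning with
        #   mpirun --mca mpi_cuda_support 0 ...
        # or point LD_LIBRARY_PATH at the directory containing libcuda.so.1
        # if CUDA-aware support is actually wanted.
        print("CUDA-aware MPI support unavailable on this node.")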
 thetname_pdb
 /tmp/cdash/source/PARAM/thetaml.5parm
 thetname_pdb
 /tmp/cdash/source/PARAM/thetaml.5parm
           51  opened
           51  opened
 MPI: node=             0  iseed=                  -3059742
 indpdb=            0  pdbref=  T
 ns=            0
 Call Read_Bridge.
 ns=            0
[dell15:101665] 1 more process has sent help message help-mpi-common-cuda.txt / dlopen failed
[dell15:101665] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages
 Processor            1  CG group            0  absolute rank            1 
  leves ERGASTULUM.
 Total wall clock time    14.21797704696655       sec
 Processor            0  BROADCAST time   0.2226066589355469
                         REDUCE time      0.7640290260314941
                         GATHER time      0.000000000000000
                         SCATTER time     6.8907022476196289E-002
                         SENDRECV         0.000000000000000
                         BARRIER ene      7.8898191452026367E-002
                         BARRIER grad     7.5989961624145508E-002
CG processor   0 is finishing work.
 Processor            1  wait times for respective orders:
   order[  0 ]   0.4554140567779541
   order[  1 ]   0.1080951690673828
   order[  2 ]   0.000000000000000
   order[  3 ]   0.000000000000000
   order[  4 ]   1.316264152526855
   order[  5 ]   0.000000000000000
   order[  6 ]   0.000000000000000
   order[  7 ]   0.3815226554870605
   order[  8 ]   0.000000000000000
   order[  9 ]   0.000000000000000
   order[ 10 ]   0.000000000000000
Warning: ieee_invalid is signaling
Warning: ieee_divide_by_zero is signaling
Warning: ieee_inexact is signaling
Bye Bye...
Warning: ieee_invalid is signaling
Warning: ieee_divide_by_zero is signaling
Warning: ieee_inexact is signaling
Bye Bye...
average total energy  76.6542
standard deviation  1.31429
average total energy  76.6542
standard deviation  1.31429
standard deviation greater than 0.15
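This comparison is what actually fails the test: in a microcanonical (NVE) run the total energy should be conserved, and the checking step rejects the trajectory because the standard deviation of 1.31429 exceeds the 0.15 tolerance, returning a non-zero exit code. A minimal sketch of such a check follows (an assumption for illustration; the real CTest driver script is not shown in this log):

# Hypothetical energy-conservation check matching the criterion quoted above.
import statistics
import sys

TOLERANCE = 0.15  # threshold reported in the log

def check_energy_conservation(total_energies):
    avg = statistics.mean(total_energies)
    sd = statistics.pstdev(total_energies)  # spread of total energy over the run
    print(f"average total energy  {avg:.4f}")
    print(f"standard deviation  {sd:.5f}")
    if sd > TOLERANCE:
        print(f"standard deviation greater than {TOLERANCE}")
        return 1
    return 0

if __name__ == "__main__":
    # Hypothetical input: one total-energy value per MD step on stdin.
    energies = [float(line) for line in sys.stdin if line.strip()]
    sys.exit(check_energy_conservation(energies))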
-------------------------------------------------------
Primary job  terminated normally, but 1 process returned
a non-zero exit code.. Per user-direction, the job has been aborted.
-------------------------------------------------------
standard deviation greater than 0.15
--------------------------------------------------------------------------
mpirun detected that one or more processes exited with non-zero status, thus causing
the job to be terminated. The first process to do so was:

  Process name: [[11667,1],1]
  Exit code:    1
--------------------------------------------------------------------------