Execution Time: 44.95s

Test: UNRES_remd (Passed)
Build: Linux-pgi-MPI E0LL2Y (dell15) on 2017-10-20 04:11:04
Repository revision: cfd3f3ea9ccac7a6b9cb0e802b4c9e5927a506f6

Histograms 1L2Y_remd
Energy 1L2Y_remd


Test output
CTEST_FULL_OUTPUT
--------------------------------------------------------------------------
The library attempted to open the following supporting CUDA libraries, 
but each of them failed.  CUDA-aware support is disabled.
libcuda.so.1: cannot open shared object file: No such file or directory
/usr/lib64/libcuda.so.1: cannot open shared object file: No such file or directory
If you are not interested in CUDA-aware support, then run with 
--mca mpi_cuda_support 0 to suppress this message.  If you are interested
in CUDA-aware support, then try setting LD_LIBRARY_PATH to the location
of libcuda.so.1 to get passed this issue.
--------------------------------------------------------------------------
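The two remedies suggested in the Open MPI help message above can be sketched as command fragments (the executable name `unres` and process count are placeholders, not taken from this log):

```shell
# Option 1: CUDA-aware support is not needed -> silence the warning
# (MCA flag quoted from the Open MPI help message above)
mpirun --mca mpi_cuda_support 0 -np 8 ./unres

# Option 2: CUDA-aware support IS wanted -> let the loader find libcuda.so.1
# (directory is an assumption; use wherever the NVIDIA driver installed it)
export LD_LIBRARY_PATH=/usr/lib64:$LD_LIBRARY_PATH
mpirun -np 8 ./unres
```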
 Processor 5 out of 8  rank in CG_COMM 5  size of CG_COMM 8  size of FG_COMM 1  rank in FG_COMM1 0  size of FG_COMM1 1
 Processor 0 out of 8  rank in CG_COMM 0  size of CG_COMM 8  size of FG_COMM 1  rank in FG_COMM1 0  size of FG_COMM1 1
 Processor 1 out of 8  rank in CG_COMM 1  size of CG_COMM 8  size of FG_COMM 1  rank in FG_COMM1 0  size of FG_COMM1 1
 Processor 2 out of 8  rank in CG_COMM 2  size of CG_COMM 8  size of FG_COMM 1  rank in FG_COMM1 0  size of FG_COMM1 1
 Processor 3 out of 8  rank in CG_COMM 3  size of CG_COMM 8  size of FG_COMM 1  rank in FG_COMM1 0  size of FG_COMM1 1
 Processor 4 out of 8  rank in CG_COMM 4  size of CG_COMM 8  size of FG_COMM 1  rank in FG_COMM1 0  size of FG_COMM1 1
 Processor 6 out of 8  rank in CG_COMM 6  size of CG_COMM 8  size of FG_COMM 1  rank in FG_COMM1 0  size of FG_COMM1 1
 Processor 7 out of 8  rank in CG_COMM 7  size of CG_COMM 8  size of FG_COMM 1  rank in FG_COMM1 0  size of FG_COMM1 1
 Inside initialize
 Inside initialize
 Inside initialize
 Inside initialize
 Inside initialize
 Inside initialize
 Inside initialize
 Inside initialize
 thetname_pdb 
 /tmp/cdash/source/PARAM/thetaml.5parm
            51  opened
 thetname_pdb 
 /tmp/cdash/source/PARAM/thetaml.5parm
            51  opened
 thetname_pdb 
 /tmp/cdash/source/PARAM/thetaml.5parm
            51  opened
 thetname_pdb 
 /tmp/cdash/source/PARAM/thetaml.5parm
            51  opened
 thetname_pdb 
 /tmp/cdash/source/PARAM/thetaml.5parm
            51  opened
 thetname_pdb 
[dell15:124428] 7 more processes have sent help message help-mpi-common-cuda.txt / dlopen failed
[dell15:124428] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages
 /tmp/cdash/source/PARAM/thetaml.5parm
            51  opened
 thetname_pdb 
 /tmp/cdash/source/PARAM/thetaml.5parm
            51  opened
 thetname_pdb 
 /tmp/cdash/source/PARAM/thetaml.5parm
            51  opened
 MPI: node=             1  iseed=                 -12173295
 MPI: node=             7  iseed=                 -48693183
 MPI: node=             5  iseed=                 -36519887
 MPI: node=             3  iseed=                 -24346591
 indpdb=            0  pdbref=  T
 indpdb=            0  pdbref=  T
 indpdb=            0  pdbref=  T
 indpdb=            0  pdbref=  T
 Call Read_Bridge.
 ns=            0
 Call Read_Bridge.
 ns=            0
 Call Read_Bridge.
 ns=            0
 Call Read_Bridge.
 ns=            0
 Processor 1 0 0  ivec_start 1  ivec_end 21
 Processor 7 0 0  ivec_start 1  ivec_end 21
 Processor 5 0 0  ivec_start 1  ivec_end 21
 Processor 3 0 0  ivec_start 1  ivec_end 21
 Processor            7  rest  T                            restart1fie  T
 Processor            5  rest  T                            restart1fie  T
 Processor            1  rest  T                            restart1fie  T
 Processor            3  rest  T                            restart1fie  T
 MPI: node=             0  iseed=                  -6086647
 MPI: node=             6  iseed=                 -42606535
 MPI: node=             2  iseed=                 -18259943
 MPI: node=             4  iseed=                 -30433239
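The per-rank seeds logged above happen to follow a simple linear pattern in the rank number. This is an observation fitted to this run's eight logged values, not a documented UNRES guarantee:

```python
# Seeds copied verbatim from the "MPI: node=... iseed=..." lines above,
# keyed by MPI rank (node number)
logged = {0: -6086647, 1: -12173295, 2: -18259943, 3: -24346591,
          4: -30433239, 5: -36519887, 6: -42606535, 7: -48693183}

# Hypothesis: iseed(node) = iseed(0) + step * node, with step = -6086648
base, step = -6086647, -6086648
for node, iseed in logged.items():
    assert iseed == base + step * node, (node, iseed)

print("all 8 logged seeds fit the linear pattern")  # → all 8 logged seeds fit the linear pattern
```

Deterministic per-rank seeds like these make an REMD run reproducible across reruns on the same rank layout.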
 indpdb=            0  pdbref=  T
 Call Read_Bridge.
 ns=            0
 Processor 0 0 0  ivec_start 1  ivec_end 21
 indpdb=            0  pdbref=  T
 indpdb=            0  pdbref=  T
 indpdb=            0  pdbref=  T
 Call Read_Bridge.
 ns=            0
 Processor            0  rest  T                            restart1fie  T
 Call Read_Bridge.
 ns=            0
 Call Read_Bridge.
 ns=            0
 Processor 6 0 0  ivec_start 1  ivec_end 21
 Processor 2 0 0  ivec_start 1  ivec_end 21
 Processor 4 0 0  ivec_start 1  ivec_end 21
 Processor            6  rest  T                            restart1fie  T
 Processor            2  rest  T                            restart1fie  T
 Processor            4  rest  T                            restart1fie  T
            1  Before broadcast: file_exist  F
            7  Before broadcast: file_exist  F
            3  Before broadcast: file_exist  F
            5  Before broadcast: file_exist  F
            0  Before broadcast: file_exist  F
            0  After broadcast: file_exist  F
            3  After broadcast: file_exist  F
            7  After broadcast: file_exist  F
            5  After broadcast: file_exist  F
            1  After broadcast: file_exist  F
            4  Before broadcast: file_exist  F
            4  After broadcast: file_exist  F
            2  Before broadcast: file_exist  F
            2  After broadcast: file_exist  F
            6  Before broadcast: file_exist  F
            6  After broadcast: file_exist  F
CG processor   4 is finishing work.
CG processor   2 is finishing work.
CG processor   5 is finishing work.
CG processor   1 is finishing work.
CG processor   3 is finishing work.
Warning: ieee_invalid is signaling
Warning: ieee_divide_by_zero is signaling
Warning: ieee_inexact is signaling
Bye Bye...
Warning: ieee_invalid is signaling
Warning: ieee_divide_by_zero is signaling
Warning: ieee_inexact is signaling
Bye Bye...
Warning: ieee_invalid is signaling
Warning: ieee_divide_by_zero is signaling
Warning: ieee_inexact is signaling
Bye Bye...
Warning: ieee_invalid is signaling
Warning: ieee_divide_by_zero is signaling
Warning: ieee_inexact is signaling
Bye Bye...
Warning: ieee_invalid is signaling
Warning: ieee_divide_by_zero is signaling
Warning: ieee_inexact is signaling
Bye Bye...
Warning: ieee_invalid is signaling
Warning: ieee_divide_by_zero is signaling
Warning: ieee_inexact is signaling
Bye Bye...
Warning: ieee_invalid is signaling
Warning: ieee_divide_by_zero is signaling
Warning: ieee_inexact is signaling
Bye Bye...
Warning: ieee_invalid is signaling
Warning: ieee_divide_by_zero is signaling
Warning: ieee_inexact is signaling
Bye Bye...
CG processor   0 is finishing work.
CG processor   6 is finishing work.
CG processor   7 is finishing work.


ACC   1   240.00000     0.22111  199
ACC   2   260.00000     0.21106  199
ACC   3   280.00000     0.24121  199
ACC   4   300.00000     0.22613  199
ACC   5   320.00000     0.28643  199
ACC   6   340.00000     0.33668  199
ACC   7   360.00000     0.37688  199
average exchange = 0.271357
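The reported average exchange can be recovered from the ACC rows. Reading the columns as replica-pair index, temperature (K), exchange acceptance ratio, and number of exchange attempts is inferred from typical REMD output, so treat the column names as assumptions; the arithmetic below uses only the logged numbers:

```python
# Acceptance ratios for the 7 temperature pairs, copied from the ACC rows above
acc = [0.22111, 0.21106, 0.24121, 0.22613, 0.28643, 0.33668, 0.37688]

# The reported "average exchange" is the plain mean of the per-pair ratios
average_exchange = sum(acc) / len(acc)
print(f"{average_exchange:.6f}")  # → 0.271357
```

The mean matches the log's value to six decimal places, confirming it is an unweighted average over the seven temperature pairs.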