
Any additional requirements (such as scripts to source for MPI libraries or modules to load) will be listed in specific entries.

  • CHPC
  • XSEDE
  • Blue Waters

CHPC

/uufs/chpc.utah.edu/common/home/u0827715/Amber/GIT

  • All MPI-enabled Amber programs on CHPC resources require loading the appropriate module (mvapich2 by default; impi on lonepeak).
  • All python programs now require the python/2.7.3 module.
  • The minimum CUDA version for Amber is now 7.5: /usr/local/cuda-7.5 (kepler is the current exception). A combined job-script sketch follows this list.
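
A minimal job-script preamble sketch that follows the points above (illustrative only; use the machine-specific entries below for the exact modules and paths):

module load mvapich2                       # impi instead on lonepeak
module load python/2.7.3                   # needed by the Amber python tools
export CUDA_HOME=/usr/local/cuda-7.5       # GPU builds only; kepler still uses 5.0
export LD_LIBRARY_PATH=$CUDA_HOME/lib64:$LD_LIBRARY_PATH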

Ember (Intel)

amber-ember

Intel compilers

MPI: /uufs/ember.arches/sys/pkg/mvapich2/std_intel/etc/mvapich2.sh

Ember (GPU)

NOTE: MPI version has changed as of 3-13-2015 due to problems with GNU mvapich2 1.9.

amber-ember-gpu

GNU compilers: module swap intel gcc

MPI: module load mvapich2
(previously sourced from /uufs/ember.arches/sys/pkg/mvapich2/2.0/etc/mvapich2.sh)

CUDA 7.5 (previously CUDA 5.5, the system default):
export CUDA_HOME=/usr/local/cuda-7.5
export LD_LIBRARY_PATH=$CUDA_HOME/lib64:$LD_LIBRARY_PATH
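
As a usage illustration only (AMBERHOME is assumed to point at the amber-ember-gpu install, and the input/topology/coordinate file names are placeholders), a two-GPU run might look like:

module swap intel gcc
module load mvapich2
export CUDA_HOME=/usr/local/cuda-7.5
export LD_LIBRARY_PATH=$CUDA_HOME/lib64:$LD_LIBRARY_PATH
source $AMBERHOME/amber.sh
mpirun -np 2 $AMBERHOME/bin/pmemd.cuda.MPI -O -i md.in -p prmtop -c inpcrd -o md.out -r restrt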

Kingspeak

amber-kingspeak

Intel compilers

MPI: module load mvapich2

Tangent

Note: Unlike other CHPC resources, tangent does not appear to load the intel module by default.

amber-tangent

Intel compilers: module load intel

MPI: module load mvapich2
(previously sourced from /uufs/chpc.utah.edu/common/home/u0827715/Amber/GIT/tangent.mpi.sh)

Kepler (GPU)

NOTE: Kepler does NOT use a queuing system. Before running on Kepler, ensure no one else is running by checking with the 'top' command.
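
For example, one quick non-interactive check (a single snapshot of the busiest processes; any equivalent top/ps invocation works):

top -b -n 1 | head -n 20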

...

CUDA 5.0: /usr/local/cuda-5.0

No MPI

Lonepeak

...

amber-lonepeak

Intel compilers

MPI: module load impi

Use 'mpirun', not 'mpiexec'
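
A minimal run sketch for lonepeak (assuming the amber-lonepeak install has been sourced; the process count and file names are placeholders):

module load impi
mpirun -np 16 $AMBERHOME/bin/pmemd.MPI -O -i md.in -p prmtop -c inpcrd -o md.out -r restrt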

Bash Tricks

To ensure I'm always running the correct Amber install on CHPC resources, I add this line to my ~/.bashrc file:

source ~/.local_bashrc

This is where I keep all of my customizations. Then in my ~/.local_bashrc:

# Amber - machine-specific
if [[ ! -z `hostname | grep lonepeak` ]] ; then
  export AMBERHOME=/uufs/chpc.utah.edu/common/home/u0827715/Amber/GIT/amber-lonepeak
elif [[ ! -z `hostname | grep tangent` ]] ; then
  export AMBERHOME=/uufs/chpc.utah.edu/common/home/u0827715/Amber/GIT/amber-tangent
else
  export AMBERHOME=/uufs/chpc.utah.edu/common/home/u0827715/Amber/GIT/amber
fi
if [[ -f $AMBERHOME/amber.sh ]] ; then
  source $AMBERHOME/amber.sh
fi
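
After starting a new shell it is worth a quick sanity check that the expected install was picked up (this assumes amber.sh put $AMBERHOME/bin on the PATH, which it normally does):

echo $AMBERHOME
which pmemd.MPI    # should resolve under $AMBERHOME/bin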

XSEDE

Stampede

/work/00301/tg455746/GIT/amber-stampede

Intel compilers

Modules for GPU runs (default cuda, currently 6.5):
module load netcdf
module load cuda

Modules for CPU runs:
module load netcdf 
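
A minimal SLURM batch-script sketch for a CPU run on Stampede using TACC's ibrun launcher (the job name, queue, sizes, and file names are placeholders; amber.sh is assumed to live in the install directory above):

#!/bin/bash
#SBATCH -J amber-md          # placeholder job name
#SBATCH -p normal            # placeholder queue
#SBATCH -N 2
#SBATCH -n 32
#SBATCH -t 02:00:00
module load netcdf
source /work/00301/tg455746/GIT/amber-stampede/amber.sh
ibrun $AMBERHOME/bin/pmemd.MPI -O -i md.in -p prmtop -c inpcrd -o md.out -r restrt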

Comet (still needs testing)

/oasis/projects/nsf/slc216/droe/GIT

amber (CPU only)

Intel compilers

Default system NetCDF: "module load netcdf"

amber-gpu (Serial GPU only)

GNU compilers (system default, NOT gnu 4.9.2)

CUDA 6.5.0: Use "module load cuda/6.5.0"

Default MPI


The compiler setup is not compatible with CUDA 6.5; Comet staff are working on building an mvapich2 for the system-default GNU compiler (4.4.7). Until then there is no pmemd.cuda.MPI.

 

Blue Waters

/projects/sciteam/jn6/GITgk4

All compiles on BW use modules other than the default ones. Only the CUDA compiles should strictly require module loading, but the safest thing is to add the appropriate module commands to all run scripts (full static linking on BW has proven to be tricky at best). CUDA 5.0 is no longer supported on BW since the last SW upgrade, so the earlier note that the CUDA 5.0 compile should be faster than the default CUDA no longer applies.

amber-cpu (CPU only)

source /opt/modules/default/init/bash
module unload PrgEnv-cray
module load PrgEnv-pgi
module load netcdf

amber-gpu (GPU only, CUDA 5.0)

source /opt/modules/default/init/bash
module unload PrgEnv-cray
module load PrgEnv-gnu
module load cudatoolkit/5.0.35

amber-gnu (GPU only, default CUDA)

source /opt/modules/default/init/bash
module unload PrgEnv-cray
module load PrgEnv-gnu
module load netcdf
module load cudatoolkit
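
As an illustration of how these module blocks fit into a job script (the PBS directives, node request, and file names below are placeholders, and AMBERHOME is assumed to have been set/sourced for the build being used):

#!/bin/bash
#PBS -l nodes=1:ppn=16:xk    # placeholder: one XK (GPU) node
#PBS -l walltime=02:00:00
source /opt/modules/default/init/bash
module unload PrgEnv-cray
module load PrgEnv-gnu
module load netcdf
module load cudatoolkit
cd $PBS_O_WORKDIR
aprun -n 1 $AMBERHOME/bin/pmemd.cuda -O -i md.in -p prmtop -c inpcrd -o md.out -r restrt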