Amber Installs

NOTE: Each Amber directory contains scripts that you can source for setting up your environment:

  • amber.sh (bash)
  • amber.csh (csh)

Any additional requirements (such as scripts to source for MPI libraries or modules to load) will be listed in specific entries.
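
For example, a minimal bash setup looks like the following (sketch only; replace the placeholder path with one of the install directories listed in the entries below):

export AMBERHOME=/path/to/amber-install   # placeholder; use an install directory from the entries below
source $AMBERHOME/amber.sh                # csh/tcsh users would source $AMBERHOME/amber.csh instead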

Installs are documented for the following systems:

  • CHPC
  • XSEDE
  • Blue Waters

CHPC

All CHPC installs are located under /uufs/chpc.utah.edu/common/home/u0827715/Amber/GIT.

Ember (CPU), Kingspeak
  • Install: amber-ember
  • Intel compilers
  • MPI: /uufs/ember.arches/sys/pkg/mvapich2/std_intel/etc/mvapich2.sh

Ember (GPU)

NOTE: The MPI version changed as of 3-13-2015 due to problems with GNU mvapich2 1.9.

  • Install: amber-ember-gpu
  • GNU compilers
  • MPI: /uufs/ember.arches/sys/pkg/mvapich2/2.0/etc/mvapich2.sh
  • CUDA 5.5 (system default)
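
As an illustration (not an official recipe), an Ember GPU job script could set up its environment using the paths from this entry:

# MPI environment first, then Amber (paths taken from the Ember GPU entry above)
source /uufs/ember.arches/sys/pkg/mvapich2/2.0/etc/mvapich2.sh
export AMBERHOME=/uufs/chpc.utah.edu/common/home/u0827715/Amber/GIT/amber-ember-gpu
source $AMBERHOME/amber.sh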

Tangent
  • Install: amber-tangent
  • Intel compilers
  • MPI: /uufs/chpc.utah.edu/common/home/u0827715/Amber/GIT/tangent.mpi.sh

Kepler (GPU)

NOTE: Kepler does NOT use a queuing system. Before running on Kepler, use the 'top' command to make sure no one else's jobs are running.

  • Install: amber-kepler
  • GNU compilers
  • CUDA 5.0: /usr/local/cuda-5.0
  • No MPI

Lonepeak
  • Install: amber-lonepeak
  • Intel compilers
  • MPI: /uufs/chpc.utah.edu/sys/installdir/openmpi/std_intel/etc/openmpi.sh

Bash Tricks

To make sure I'm always running the correct Amber install on CHPC resources, I add this line to my ~/.bashrc file:

source ~/.local_bashrc

This is where I keep all of my customizations. Then in my ~/.local_bashrc:

# Amber - machine-specific
if [[ ! -z `hostname | grep lonepeak` ]] ; then
  export AMBERHOME=/uufs/chpc.utah.edu/common/home/u0827715/Amber/GIT/amber-lonepeak
elif [[ ! -z `hostname | grep tangent` ]] ; then
  export AMBERHOME=/uufs/chpc.utah.edu/common/home/u0827715/Amber/GIT/amber-tangent
else
  export AMBERHOME=/uufs/chpc.utah.edu/common/home/u0827715/Amber/GIT/amber
fi
if [[ -f $AMBERHOME/amber.sh ]] ; then
  source $AMBERHOME/amber.sh
fi

XSEDE

Stampede
  • Install: /work/00301/tg455746/GIT/amber-stampede
  • Intel compilers
  • CUDA 5.0: "module load cuda/5.0"
  • Default MPI
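
Putting those pieces together, a Stampede environment block might look like this (sketch only):

module load cuda/5.0                                      # CUDA module noted above
export AMBERHOME=/work/00301/tg455746/GIT/amber-stampede
source $AMBERHOME/amber.sh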

Comet (still needs testing)

All Comet installs are located under /oasis/projects/nsf/slc216/droe/GIT.

amber (CPU only)
  • Intel compilers
  • Default system NetCDF: "module load netcdf"

amber-gpu (serial GPU only)
  • GNU compilers (system default, NOT GNU 4.9.2)
  • CUDA 6.5: "module load cuda/6.5"

The compiler setup is not compatible with CUDA 6.5; Comet staff are working on building an mvapich2 for the system default GNU compiler (4.4.7). Until then there is no pmemd.cuda.MPI.
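
For the CPU build, a setup sketch (untested, per the "still needs testing" note above) would be:

module load netcdf                                           # system NetCDF, as noted above
export AMBERHOME=/oasis/projects/nsf/slc216/droe/GIT/amber   # CPU-only install
source $AMBERHOME/amber.sh

For the serial-GPU build, swap in "module load cuda/6.5" and the amber-gpu directory.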

 

Blue Waters

All Blue Waters installs are located under /projects/sciteam/jn6/GIT.

All compiles on BW use modules other than the default ones. Only the CUDA compiles should require module loading, but the safest thing to do is to add the appropriate module commands to all run scripts. The CUDA 5.0 compile should be faster than the default CUDA one; let me know if it is not.

amber-cpu (CPU only)

source /opt/modules/default/init/bash
module unload PrgEnv-cray
module load PrgEnv-pgi
module load netcdf
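
After those module commands, a run script would typically continue by sourcing the Amber environment and launching through aprun; the following is only a sketch (the task count and input file names are placeholders):

export AMBERHOME=/projects/sciteam/jn6/GIT/amber-cpu
source $AMBERHOME/amber.sh
aprun -n 32 $AMBERHOME/bin/pmemd.MPI -O -i md.in -o md.out -p prmtop -c inpcrd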

amber-gpu (GPU only, default CUDA)

source /opt/modules/default/init/bash
module unload PrgEnv-cray
module load PrgEnv-gnu
module load netcdf
module load cudatoolkit
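
The GPU build follows the same pattern; a sketch for a single-GPU run (input file names are placeholders):

export AMBERHOME=/projects/sciteam/jn6/GIT/amber-gpu
source $AMBERHOME/amber.sh
aprun -n 1 $AMBERHOME/bin/pmemd.cuda -O -i md.in -o md.out -p prmtop -c inpcrd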