
Compute


Introduction:

Meteorology, in collaboration with CHPC, has set up a specialized set of compute resources for use by the Meteorology department groups. These servers run Red Hat Enterprise Linux 4 with the latest update sets applied. They run in a special NFS root environment, which simply means the OS is not installed locally on each server's hard drives but instead comes from across the network. If you have used the CHPC Arches HPC clusters before, this is the same type of setup they use for the OS. The servers use our campus uNIDs as usernames and our campus passwords for authentication. The user environment on these systems is also set up the same way as on Arches. The following links will take you to the various CHPC general user documentation:

CHPC User Services Main Page
CHPC Getting Started Guide
Frequently Asked Questions
CHPC User Documentation

Creating a new account:

If you do not yet have an account with CHPC, the process is fairly simple. Please refer to this link from our FAQ and follow the instructions there:

CHPC Account Creation Form

User Environment Settings:

This is covered in the documentation linked above, but it is significant enough to deserve a spot of its own here. CHPC provides prepared dot files for the various shell types that will set up your environment for you. To get a copy of these files, whether the one you have is out of date or you just want a fresh copy, refer to this link:

Q: Are there default .cshrc and .login files available?

When a new user's account is created, we populate the home directory with the default .tcshrc or .login file. Existing users can update their .tcshrc/.login by pulling it from CHPC's website with one of the following commands:

wget http://www.chpc.utah.edu/docs/manuals/getting_started/code/chpc.tcshrc for the csh/tcsh .tcshrc, or

wget http://www.chpc.utah.edu/docs/manuals/getting_started/code/chpc.bashrc for the sh/bash .bashrc
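
For example, a tcsh user can back up the existing file and pull the fresh copy in one go (a minimal sketch; adjust the file names for bash):

    # keep a copy of the current file, then fetch the current CHPC template
    cp ~/.tcshrc ~/.tcshrc.bak
    wget -O ~/.tcshrc http://www.chpc.utah.edu/docs/manuals/getting_started/code/chpc.tcshrc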

The default shell init script files set most of the environment needed by both the meteorology compute servers and desktops. They also include setup for all of CHPC's other clusters. Each cluster/desktop has its own section, denoted by the $UUFSCELL variable. The meteo compute servers use the section called "hiddenarch.arches"; the meteo desktops use the section "chpc.utah.edu".

There are a few things to note about the .tcshrc/.login implementation.

First, we recommend making minimal changes to these files, as this helps minimize conflicts when the files are updated. For user customizations, we suggest using external files that are sourced from .tcshrc/.login.

There are really just two locations in these files where it makes sense to put customizations. One is within the section for a specific cluster or desktop. For example, if you want to add setup specific to the meteo desktops, at the end of the section:
    if ($UUFSCELL == "chpc.utah.edu") then
add something like:
    source ~/.my_desktop
and in the file ~/.my_desktop add the desired customizations.
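
As an illustration, a ~/.my_desktop file might contain nothing more than a couple of personal settings (the editor choice and alias below are only placeholders, not part of the CHPC setup):

    # ~/.my_desktop -- personal additions sourced from the chpc.utah.edu section of .tcshrc
    setenv EDITOR vim
    alias ll 'ls -l'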

The second location for customizations is at the very end of the .tcshrc/.login file, and is good for global aliases and commands that a power user may want available on any machine he or she is logged in to. For that, we have as the last lines in .tcshrc:
#load aliases
#source ~/.aliases

Uncomment the source line, create the file .aliases, and populate it with the desired aliases, shortcuts, etc.
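
For example, a minimal ~/.aliases file might look like the following (the entries are only illustrative):

    # ~/.aliases -- sourced from the end of .tcshrc on every machine
    alias h     'history 25'
    alias meteo 'ssh meteo07.chpc.utah.edu'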

Applications:

CHPC has installed many of the applications the MET groups require. The selection of installed applications should be considered a "work in progress": if an application you need is not present, please contact CHPC and we will work with you to get it installed. Refer to the "Getting Help" section below for ways to contact CHPC. For a current list of what is available, you can ls /uufs/chpc.utah.edu/sys/pkg to see the full set of applications installed on the system.
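
For example (IDL is used here only as a sample package name):

    ls /uufs/chpc.utah.edu/sys/pkg                   # full list of installed applications
    ls /uufs/chpc.utah.edu/sys/pkg | grep -i idl     # look for a specific package, e.g. IDL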

IDL environment:

We have had many questions about setting up the IDL environment correctly. The main IDL environment is loaded by default. However, there are two IDL locations. On the meteoxx servers, IDL is located in /uufs/chpc.utah.edu/sys/pkg/idl, and on the desktops it is in /uufs/chpc.utah.edu/sys/pkg/idl. Determining whether you are on a server or a desktop is easy with the command echo $UUFSCELL: hiddenarch.arches means server, chpc.utah.edu means desktop.

In order to add another local directory to IDL's path, open ~/.tcshrc, find the section that corresponds to either the meteoxx servers (hiddenarch.arches) or the desktops (chpc.utah.edu), and at the end of that section (below source ~/$met_include) add a line that defines the addition to the IDL path, e.g.
    setenv IDL_PATH /uufs/chpc.utah.edu/sys/pkg/idl/7.0/idl70/lib:+/uufs/chpc.utah.edu/common/home/u0123456/idllib
for the meteo servers, or
    setenv IDL_PATH /uufs/chpc.utah.edu/sys/pkg/idl/7.0/idl70/lib:+/uufs/chpc.utah.edu/common/home/u0123456/idllib
for the desktops.
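
Putting that together, the relevant piece of ~/.tcshrc on the meteo servers might end up looking roughly like the sketch below, where the "..." stands for the existing CHPC setup in that section and u0123456/idllib is the placeholder directory from the example above:

    if ($UUFSCELL == "hiddenarch.arches") then
        ...                                          # existing CHPC setup for the meteo servers
        source ~/$met_include
        setenv IDL_PATH /uufs/chpc.utah.edu/sys/pkg/idl/7.0/idl70/lib:+/uufs/chpc.utah.edu/common/home/u0123456/idllib
    endif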

Video Management Packages:

Both mencoder and ffmpeg are needed to generate movies on these nodes. Neither package builds cleanly against Red Hat, so for the time being we manually install the RPM packages, which can be sticky with dependencies. Future development will include specific Red Hat channels or yum repositories to facilitate package installation.
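
As a simple illustration of movie generation once ffmpeg is in place (the frame naming, frame rate, and codec here are only examples, not a CHPC-specific recipe):

    # stitch numbered PNG frames into an MPEG-4 movie at 10 frames per second
    ffmpeg -r 10 -i frame%04d.png -vcodec mpeg4 movie.avi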

Compute Server Hardware:

Currently we have 20 compute servers in place. These are all Dell 2950 servers with dual quad-core 2.6 GHz Intel processors (8 CPU cores per server), 16 GB of RAM, Gigabit Ethernet, and ~58 GB of local disk space mounted as /tmp. These boxes are currently broken up into two groups, WX and Meteo:

  • WX Nodes used by John Horel and Jim Steenburgh
    • wx1.chpc.utah.edu
    • wx2.chpc.utah.edu
    • wx3.chpc.utah.edu
    • wx4.chpc.utah.edu
  • Meteo General Compute Nodes
    • Jay Mace Meteo Nodes
      • meteo01.chpc.utah.edu ( Aliased as mace01.chpc.utah.edu )
      • meteo02.chpc.utah.edu ( Aliased as mace02.chpc.utah.edu )
      • meteo03.chpc.utah.edu ( Aliased as mace03.chpc.utah.edu )
      • meteo04.chpc.utah.edu ( Aliased as mace04.chpc.utah.edu ) (special features: runs mysql and apache)
    • Steven Krueger Meteo Nodes
      • meteo05.chpc.utah.edu ( Aliased as krueger01.chpc.utah.edu )
      • meteo06.chpc.utah.edu ( Aliased as krueger02.chpc.utah.edu )
    • General Meteo Nodes
      • meteo07.chpc.utah.edu
      • meteo08.chpc.utah.edu
      • atmos01.chpc.utah.edu
    • Zhaoxia Pu Nodes (with 32 GB of RAM)
      • meteo09.chpc.utah.edu (blade)
      • meteo10.chpc.utah.edu (blade)
      • meteo17.chpc.utah.edu (blade)
      • meteo18.chpc.utah.edu (R410 48 GB of Ram)
    • Tim Garrett Nodes (with 32 GB of RAM)
      • meteo11.chpc.utah.edu (blade)
      • meteo12.chpc.utah.edu (blade)
    • More Jay Mace Nodes (with 32 GB of RAM)
      • meteo13.chpc.utah.edu (blade)
      • meteo14.chpc.utah.edu (blade)
    • Liu Nodes
      • meteo15.chpc.utah.edu (blade)
      • meteo16.chpc.utah.edu (blade)
      • meteo20.chpc.utah.edu (R410)
      • meteo21.chpc.utah.edu (R410)
      • meteo22.chpc.utah.edu (R410)
    • John Horel
      • meteo19.chpc.utah.edu

Usage of these systems is to be managed among the users of the respective nodes designated to a particular group. CHPC has not put any additional limitations on the usage of these nodes. However, it is worth noting that CHPC periodically does server maintenance as part of scheduled downtimes. We will send out notifications in advance of these scheduled downtimes to keep users aware of the plans. There is also the chance that an emergency need will come up that requires an outage outside a scheduled downtime. We will make an effort to communicate with the groups involved if and when these conditions occur.

Adding Compute Servers:

If you want to add compute servers, or make some other change to an existing server, the proper procedure is to contact Jim Steenburgh to explore whether there is an opportunity to make a group purchase. CHPC can then work with all involved to get the proper configuration selected. To give you an idea of the current price point, as of March 2008 a Dell dual quad-core 2.6 GHz Intel server with 16 GB of RAM, an 80 GB local hard disk, and Gigabit Ethernet runs ~$2700.00. If a more custom setup is needed, that is not an issue: we will work with you to get the specifics you need and then request a quote from the vendor.


Storage


Introduction:

CHPC has a newer model for offering disk storage to research groups. This model uses a storage area network (SAN) to attach disk to servers. The goals of this model are to give us better scalability for growing storage and to keep administrative overhead at a minimum. Under this model, groups looking to add space simply work with CHPC to identify the capacity and any other characteristics they need for their storage, and a purchase is made. A few current data points regarding storage:

  • Costs are approx $1,000.00/TB of storage.
  • This includes 3 years of next business day warranty.
  • This storage is not backed up to tape. If backups are needed we will have to meet and address the need on a case by case basis.
Current Storage:

These are the current file systems we have in place, the path at which each is accessed on CHPC systems, and a note on whether the file system is used for $HOME space (a quick way to check how full a given file system is appears after the list):

  • Thomas Reichler:
    • 11TB file system accessed at /uufs/chpc.utah.edu/common/home/reichler_group as well as $HOME for the Thomas Reichler group.
    • 9.7TB file system accessed at /uufs/chpc.utah.edu/common/home/reichler_grp.
  • Zhaoxia Pu:
    • 3TB file system accessed at /uufs/chpc.utah.edu/common/home/zpu_group as well as $HOME for some of the Zhaoxia Pu group.
  • Jay Mace:
    • 9.7TB file system accessed at /uufs/chpc.utah.edu/common/home/mace_grp as well as $HOME for the Jay Mace group.
  • John Horel:
    • 8.6TB file system accessed at /uufs/chpc.utah.edu/common/home/metwx as well as $HOME for the John Horel group.
  • Steven Krueger:
    • 4.9TB file system accessed at /uufs/chpc.utah.edu/common/home/krueger_grp.
  • Tim Garrett:
    • 1.6TB file system accessed at /uufs/chpc.utah.edu/common/home/garrett_grp.
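
To check how full a group file system is, run df against its mount path; for example, using the Jay Mace group path from the list above:

    df -h /uufs/chpc.utah.edu/common/home/mace_grp
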
Adding/Expanding Storage:

If you want to add storage, or expand storage you already have, the proper procedure is to contact Jim Steenburgh to explore whether there is an opportunity to make a group purchase. CHPC can then work with all involved to get the proper configuration selected. Again, the current pricing, as of March 2008, is ~$1000.00/TB.

Backups

CHPC performs daily backups of home directories for users in the Meteorology department regardless of the location of their home directories. Currently CHPC has agreed to provide two terabytes of home directory backups for the entire Meteorology department. Individual group storage is not backed up. However, CHPC offers an archiving service that can provide archives of group storage as often as every three months. For more information about CHPC's archiving policy, please refer to the following web page: http://www.chpc.utah.edu/docs/policies/disk.html

Storage Reliability:

The CHPC file systems have some redundancy built in using RAID6. RAID6 has 2 parity drives, compared to 1 parity drive with RAID5. That means a RAID6 storage system can lose 2 drives, compared to 1 for RAID5. In addition to the parity drives, we have a hot spare drive set up for each array, so when a failure happens a rebuild to the spare starts automatically. Most of the motivation for RAID6 comes from the security bought by giving up an extra drive's worth of usable capacity. As drives have become larger and larger, the rebuild time after a drive failure has increased significantly. The window while a RAID5 array is rebuilding from a failed drive is the window in which another drive failure would kill the whole array. To help mitigate this, RAID6 has become popular, as it offers a bit more protection in such situations.
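
As a rough illustrative example, an array of twelve 1 TB drives yields roughly 10 TB of usable space under RAID6 (two drives' worth of parity) versus roughly 11 TB under RAID5 (one drive's worth), before setting aside the hot spare; the capacity given up buys the ability to survive a second drive failure during a rebuild.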

Getting Help:

The CHPC Help Desk is covered 8 am to 5 pm on University work days by CHPC staff. Please submit questions or problems in one of these three ways:

  • email: issues@chpc.utah.edu
  • telephone: (801) 581-6440
  • Visit us in 405 INSCC. (Please note the Help Desk is not a physical location. You will be re-directed to a person for help when you visit us.)

A beginner's guide to basic Linux commands for getting started at CHPC is evolving at Basic Linux Help

Meteorology users who wish to share knowledge, tips, and methods are welcome to add to the page for the Meteo Knowledge Pool

