  1. CHPC Home Directory File Systems
    Many of the CHPC home directory file systems are based on NFS (Network File System), and proper file management is critical to the performance of both applications and the network as a whole. All files in home directories are NFS mounted from a file server, so every request for data must travel over the network. It is therefore advised that all executables and input files be copied to a scratch directory before running a job on the clusters (a staging sketch appears at the end of the Scratch Disk Space section below).
    1. INSCC Building Residents
      1. A CHPC home directory file system is available to INSCC building residents.
      2. This file system is backed up to tape with weekly full backups and daily incrementals. These backups are retained for two weeks.
    2. The general CHPC home directory file system (CHPC_HPC) is available to users who have a CHPC account and do not have a department file system maintained by CHPC. This file system enforces a quota of 50 GB per user. If you need a temporary increase to this limit, let us know (issues@chpc.utah.edu) and we can raise it based on your needs. To apply for a permanent quota increase, the CHPC PI responsible for the user should contact CHPC (issues@chpc.utah.edu) and make a formal request that includes a justification for the increase. This file system is not backed up, and users are encouraged to move important data to a file system that is backed up, such as a department file server.
    3. Department-owned storage
      1. Departments can work with CHPC to procure storage to be used as CHPC Home Directory or Group Storage.
      2. Usage policies for this storage are set by the owning department/group.
      3. When shared infrastructure supports this storage, all groups are still expected to be 'good citizens': utilization should be moderate and should not impact other users of the file server.
      4. Quotas
        1. User and/or group quotas can be used to control usage.
        2. The quota layer is enabled to allow usage reporting even when quota limits are not set.
      5. Any backups run regularly by CHPC have a two-week retention period; see Backup Policies below.
      6. Life Cycle
        1. CHPC will support storage for the duration of the warranty period.
        2. Support beyond the warranty period is provided on a 'best effort' basis.
        3. Factors that would contribute to the termination of 'best effort' support include:
          1. General health of the device
          2. Potential impact of maintaining an unsupported device
          3. Ability to acquire and replace components
          4. Other considerations may apply as well.
    4. Web Support from home directories
      1. Place HTML files in a public_html directory in your home directory (see the sketch at the end of this section).
      2. Content is published at the URL "http://home.chpc.utah.edu/~<uNID>".
      3. You may request a more human-readable URL that redirects to something like "http://www.chpc.utah.edu/~<my_name>".
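    Below is a minimal Python sketch of publishing a test page from a home directory, assuming the conventional public_html permission scheme (a world-searchable home directory, a world-readable public_html directory, and world-readable files). The exact permission bits required by the CHPC web server are an assumption, not something stated on this page; verify them against CHPC's web documentation if your page does not appear.

      # Minimal sketch: publish a test page from a CHPC home directory.
      # The permission bits below follow the common public_html convention
      # and are an assumption, not a documented CHPC requirement.
      import os
      from pathlib import Path

      home = Path.home()
      public_html = home / "public_html"
      public_html.mkdir(exist_ok=True)

      # The web server needs search (execute) access to the home directory
      # and read access to public_html and the files inside it.
      os.chmod(home, 0o711)
      os.chmod(public_html, 0o755)

      index = public_html / "index.html"
      index.write_text("<html><body><h1>Hello from CHPC</h1></body></html>\n")
      os.chmod(index, 0o644)

      # The page should then be reachable at http://home.chpc.utah.edu/~<uNID>
      print("Published:", index)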
  2. Backup Policies
    1. /scratch file systems are not backed up
    2. The HPC general file system is not backed up
    3. Owned home directory space (Note: not all of an owner's CHPC home directory space is necessarily backed up. Because of cost and capacity constraints, the CHPC PI and CHPC staff negotiate the amount to be backed up when CHPC home directory space is purchased. For example, if a group purchases 5 TB of space, CHPC may agree to back up only 1 or 2 TB of it.)
      1. Full backup weekly
      2. Incremental backup daily
      3. Two week retention
    4. Archive Backup Service: While CHPC does NOT perform regular backups of the default HPC home directory space, we recognize that some groups need to protect their data. CHPC can make periodic archive backups of a research group's data to tape. These archives can be made no more frequently than once per quarter, and each research group is responsible for the cost of the tapes. Future archive backups can be written to the original tapes (more tapes may be needed if the data set has grown), or a new tape purchase can be made with CHPC's assistance. To schedule this service:
      1. Send email to issues@chpc.utah.edu.
      2. Purchase tapes (CHPC will assist you with tape requirements).
      3. CHPC will perform the archive backup to tape.
      4. Tapes can be stored at CHPC or by the PI.
      5. CHPC suggests that each group keep two sets of tapes, so that while a full backup is being written to one set the other set still holds a complete copy in case a disaster occurs mid-archive.
      6. **DISCLAIMER ON ARCHIVE BACKUPS**
        1. Periodic archive backups are not done automatically. If you request periodic archive backups, CHPC will send you a quarterly reminder that it is time to request a backup.
        2. These backups are made onto a high-density tape medium and are either retained in the CHPC backup library or can be delivered to the user for long-term storage at the user's discretion.
        3. Note that high-density tapes have a limited life span and should not be regarded as a reliable medium for archive recovery beyond a reasonable length of time. For this reason, any archive tapes that the user still wishes to keep available for possible recovery should be replaced with new media after two years.
        4. At that time, CHPC can furnish the user with a quote for the number of tapes that need to be replaced and can make new duplicate tapes of the archive, which again may be stored for up to two years.
  3. Scratch Disk Space
    Scratch space on each HPC system is architected differently.
    1. Local Scratch (/scratch/local):
      1. unique to each individual node and is not accessible from any other node. 
      2. cleaned aggressively: scrubbed weekly of files that have not been modified for more than 7 days (a sketch of this scrub logic appears at the end of this section).
      3. users are expected to clean this space (if used) at the end of every job. 
      4. there is no access to /scratch/local outside of a job. 
      5. this space will be the fastest, but not necessarily the largest. 
      6. users should use this space at their own risk.
      7. not backed up
    2. NFS Scratch:
      1. /scratch/serial is mounted everywhere.
      2. /scratch/general and /scratch/uintah are only mounted on updraft and interactive nodes.
      3. not intended for use as storage beyond the data's use in batch jobs
      4. scrubbed weekly of files that have not been modified for over 60 days 
      5. visible to all Arches nodes and can be accessed at the path /scratch/serial.
      6. each user is responsible for creating directories and cleaning up after their jobs (see the staging sketch at the end of this section).
      7. not backed up
      8. quota layer enabled to facilitate usage reporting
    3. Parallel Scratch (/scratch/ibrix/chpc_gen): this general space is only available on updraft and ember (and all interactive nodes).
      1. scrubbed weekly of files that have not been modified for over 60 days.
      2. not intended for use as storage beyond the data's use in batch jobs.
      3. not backed up.
      4. Quota layer enabled to facilitate usage reporting
    4. Owner Scratch Storage
      1. configured and made available per the owner group's requirements.
      2. not subject to the general scrub policies that CHPC enforces on CHPC-provided scratch space.
      3. owners/groups can request automatic scrub scripts to be run per their specifications on their scratch spaces.
      4. not backed up.
      5. quota layer enabled to facilitate usage reporting.
      6. quota limits can be configured per the owner/group's needs.
    5. CHPC offers no guarantee on the amount of /scratch disk space available at any given time.
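    As noted in the home directory and scratch descriptions above, jobs should stage executables and input files into a scratch directory and clean up afterward. The sketch below illustrates that pattern in Python; /scratch/serial is taken from the description above, while the project directory, file names, "my_program" executable, and the PBS-style job ID variable are placeholders for illustration, not CHPC-provided names.

      # Minimal sketch: stage inputs to scratch, run, copy results home, clean up.
      import getpass
      import os
      import shutil
      import subprocess

      user = getpass.getuser()
      job_id = os.environ.get("PBS_JOBID", "interactive")  # assumption: PBS-style job ID
      scratch_dir = os.path.join("/scratch/serial", user, job_id)
      project_dir = os.path.expanduser("~/my_project")      # hypothetical project directory

      # 1. Stage the executable and inputs into a job-specific scratch directory.
      os.makedirs(scratch_dir, exist_ok=True)
      for name in ("my_program", "input.dat"):              # hypothetical file names
          shutil.copy(os.path.join(project_dir, name), scratch_dir)

      # 2. Run from scratch so job I/O stays off the NFS home directory.
      subprocess.run(["./my_program", "input.dat"], cwd=scratch_dir, check=True)

      # 3. Copy results back to permanent (ideally backed-up) storage.
      shutil.copy(os.path.join(scratch_dir, "output.dat"), project_dir)

      # 4. Clean up the scratch directory, as the scratch policies expect.
      shutil.rmtree(scratch_dir)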
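    The scrub policy mentioned above (removing files not modified for more than 60 days, or 7 days on /scratch/local) can be illustrated with the sketch below. This is not CHPC's actual scrub tooling, only an approximation of the stated policy; owner groups requesting custom scrub scripts would specify their own paths and age thresholds.

      # Illustration of the scrub policy: delete files whose modification
      # time is older than a given threshold. Not CHPC's actual scrub tool.
      import os
      import time

      def scrub(root, max_age_days):
          cutoff = time.time() - max_age_days * 86400
          for dirpath, _dirnames, filenames in os.walk(root):
              for name in filenames:
                  path = os.path.join(dirpath, name)
                  try:
                      if os.lstat(path).st_mtime < cutoff:
                          os.remove(path)
                  except OSError:
                      pass  # file vanished or is not removable; skip it

      # Thresholds from the policy above (calls left commented out on purpose):
      # scrub("/scratch/local", 7)
      # scrub("/scratch/serial", 60)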
  4. File Transfer Services
    1. Guest File Transfer Policy