Latest revision as of 20:09, 27 September 2021

CIRCE Layout

The following page will describe the system layout with details on where files and directories are stored.

CIRCE Login Nodes

When you connect via SSH to circe.rc.usf.edu, you will land on one of several redundant, load-balanced login nodes. These nodes allow you to access your files, develop code, submit jobs, and review your data.

Storage

CIRCE has several storage areas, detailed below:

/home

/home is your primary storage location and the default location you are assigned when you log into the system.

  • 200 GB quota
  • /home resides on a GPFS clustered file server environment
  • Backed-up nightly

Home directories are laid out as follows:

/home/[a-z]/<netid>


where [a-z] is the first letter of the user’s NetID.
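The same first-letter naming scheme can be derived in a shell, as a quick sketch ("jdoe" is a hypothetical NetID used only for illustration):

```shell
# Home (and work) directories are bucketed by the first letter of
# the user's NetID.
netid="jdoe"
first="${netid:0:1}"            # bash substring expansion: "j"
echo "/home/${first}/${netid}"  # -> /home/j/jdoe
```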

/work

/work is the scratch storage space used when running jobs on the partitions listed on the SLURM Partition Layout page. Jobs read and write files in /work because it is designed to be fast and responsive, so that calculations are not held up by I/O.

/work is:

  • 2 TB quota
  • /work resides on a GPFS clustered file server environment
  • NOT backed up in any way
  • NOT replicated
  • Regularly purged of files that have not been accessed within 6 months; see the Guidelines for Running Jobs section of the CIRCE Data Management page for data retention and management procedures.

Work directories are laid out as follows:

/work/[a-z]/<netid>


where [a-z] is the first letter of the user’s NetID.
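A common pattern is to stage input into /work, run the calculation there, and copy results back to the backed-up /home. A minimal batch-script sketch, assuming a hypothetical program my_program, a hypothetical input_data directory, and a placeholder partition name (see the SLURM Partition Layout page for real partition names):

```shell
#!/bin/bash
#SBATCH --job-name=example        # hypothetical job name
#SBATCH --partition=circe         # placeholder; pick a real partition
#SBATCH --time=01:00:00

# Stage input into fast scratch, run there, then copy results back.
# Remember: /work is NOT backed up and is purged periodically.
SCRATCH="/work/${USER:0:1}/${USER}/job_${SLURM_JOB_ID}"
mkdir -p "$SCRATCH"
cp -r "$HOME/input_data" "$SCRATCH/"   # hypothetical input directory
cd "$SCRATCH"
./my_program input_data                # placeholder program
cp -r results "$HOME/"                 # hypothetical results directory
```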

/work_bgfs

/work_bgfs is the scratch storage space used when running jobs on the partitions listed on the SLURM Partition Layout page. Jobs read and write files in /work_bgfs because it is designed to be fast and responsive, so that calculations are not held up by I/O.

/work_bgfs is:

  • 2 TB quota
  • /work_bgfs resides on a BeeGFS clustered file server environment
  • NOT backed up in any way
  • NOT replicated
  • Regularly purged of files that have not been accessed within 6 months; see the Guidelines for Running Jobs section of the CIRCE Data Management page for data retention and management procedures.

It is mounted on the CIRCE login nodes to facilitate data transfer between the two file systems. Directories are created automatically for authorized users of the partitions listed above. For all intents and purposes, /work_bgfs behaves the same as /work.
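Because both scratch areas are mounted on the login nodes, moving data between them is an ordinary copy. A small sketch using a hypothetical helper function (real paths would use your own NetID and directories; `rsync -a` is also commonly used for larger transfers):

```shell
# Copy a directory tree from one scratch file system to the other.
copy_between_scratch() {
  local src="$1" dst="$2"
  mkdir -p "$dst"     # ensure the destination directory exists
  cp -r "$src" "$dst" # recursive copy; preserves the tree layout
}

# Example (illustrative paths only):
# copy_between_scratch "/work/j/jdoe/results" "/work_bgfs/j/jdoe"
```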

The BeeGFS work directories are laid out as follows:

/work_bgfs/[a-z]/<netid>


where [a-z] is the first letter of the user’s NetID.

/shares

/shares contains folders for group collaboration. If you are a member of any groups, or you create any groups, you will have a folder under /shares with the same name. /shares is:

  • /shares resides on a GPFS clustered file server environment
  • Backed-up nightly
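To see which /shares folders may apply to you, you can list your group memberships; a short sketch (the /shares path check is illustrative):

```shell
# Each group you belong to may have a folder of the same name
# under /shares. List your group memberships:
id -Gn

# Then check for a matching share folder (illustrative):
# ls -d /shares/<groupname>
```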

/apps

/apps contains most of the system’s third-party applications.

Clusters

CIRCE provides access to several compute cluster environments, some of which are accessible University-wide and others which are only accessible by certain research groups. Please see the queue layout documentation for more information.