SC Hardware

Advanced computing resources at the University of South Florida are administered by Research Computing (RC). RC hosts a student cluster computer (SC) that currently consists of approximately 22 nodes with nearly 444 processor cores running Red Hat Enterprise Linux 6. The cluster is built on the condominium model, with 1.296TB of memory shared across the nodes in various configurations. Seven of these nodes are also equipped with a single NVIDIA Tesla M2070 GPU. The nodes use QDR InfiniBand as the computational interconnect. For long-term storage, students share a 2.4PB replicated file system for home directories and shared files that is on a nightly backup cycle. RC also provides and supports more than 120 scientific software packages for use on a variety of platforms. Remote system and file access is available from essentially anywhere via a VPN connection. RC staff members are available to facilitate use of the cluster and to provide direct assistance, and user education and training sessions are offered several times per semester.

== Server Hardware ==


{| class=wikitable
|- style="background-color:#f1edbe;"
|'''Partition'''
|'''Nodes'''
|'''Core Count'''
|'''Processors'''
|'''Cores per Node'''
|'''Memory per Node'''
|'''GPUs per Node'''
|'''Interconnect'''
|-
|sc
|10
|240
|2x Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz
|24
|64GB
|none
|QDR InfiniBand (40Gb/s)
|-
|'''Totals'''
|10
|240
|
|
|640GB
|0
|
|}
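
The per-node figures in the table above can be sanity-checked from a shell on a compute node. The sketch below is a minimal example (plain C using sysconf; _SC_PHYS_PAGES is a glibc extension available on the cluster's Red Hat systems, and nothing else here is specific to RC's setup) that prints the core count and physical memory the operating system reports:

<syntaxhighlight lang="c">
/*
 * Minimal sketch: report the online core count and physical memory visible to
 * the OS on the node where it runs. On an sc-partition node this should show
 * roughly 24 cores and 64GB, matching the table above.
 */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    long cores     = sysconf(_SC_NPROCESSORS_ONLN);
    long pages     = sysconf(_SC_PHYS_PAGES);   /* glibc extension */
    long page_size = sysconf(_SC_PAGESIZE);

    double mem_gb = (double)pages * (double)page_size / (1024.0 * 1024.0 * 1024.0);

    printf("Online cores: %ld\n", cores);
    printf("Physical memory: %.1f GB\n", mem_gb);
    return 0;
}
</syntaxhighlight>
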
== GPU Hardware ==


{| class=wikitable
|- style="background-color:#f1edbe;"
|'''Card Model'''
|'''Quantity'''
|'''Memory'''
|'''Additional Info'''
|-
|NVIDIA Tesla M2070
|7
|5GB
|
|}
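
Each GPU node exposes its Tesla M2070 through the standard CUDA runtime. As a rough way to confirm what a GPU node reports, the following sketch (plain C against the CUDA runtime API; the compile line and include paths are assumptions and should be adjusted to whatever CUDA module RC provides) lists each visible device with its memory and compute capability:

<syntaxhighlight lang="c">
/*
 * Minimal sketch (not an official RC example): enumerate the GPUs visible on a
 * node and report their memory. On the GPU nodes above this should show one
 * Tesla M2070 with roughly 5GB of usable memory.
 *
 * Example build (paths are assumptions; use the cluster's CUDA module paths):
 *   gcc gpu_query.c -o gpu_query -I$CUDA_HOME/include -L$CUDA_HOME/lib64 -lcudart
 */
#include <stdio.h>
#include <cuda_runtime_api.h>

int main(void)
{
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        fprintf(stderr, "No CUDA devices visible on this node\n");
        return 1;
    }

    for (int i = 0; i < count; ++i) {
        struct cudaDeviceProp prop;
        if (cudaGetDeviceProperties(&prop, i) != cudaSuccess)
            continue;
        printf("GPU %d: %s, %.1f GB global memory, compute capability %d.%d\n",
               i, prop.name,
               (double)prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0),
               prop.major, prop.minor);
    }
    return 0;
}
</syntaxhighlight>
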
== File System Hardware ==
{| class=wikitable
|- style="background-color:#f1edbe;"
|'''File System Path'''
|'''File System Type'''
|'''Interconnect'''
|'''Available Size'''
|'''Backed Up?'''
|'''Long-Term Storage'''
|'''Additional Info'''
|-
|/home
|GPFS
|QDR Infiniband
|2.4PB
|Daily
|Yes
|home directory space for secure file storage
|}
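
For a quick view of how much space the shared file system is reporting from any node, something like the following can be used. This is a minimal sketch: the /home mount point is taken from the table above, everything else is generic POSIX, and per-user quotas (if any) are not reflected in these numbers.

<syntaxhighlight lang="c">
/*
 * Minimal sketch: report total and available space on the shared /home
 * file system using statvfs(3).
 */
#include <stdio.h>
#include <sys/statvfs.h>

int main(void)
{
    struct statvfs fs;
    if (statvfs("/home", &fs) != 0) {
        perror("statvfs /home");
        return 1;
    }

    double tb    = 1024.0 * 1024.0 * 1024.0 * 1024.0;
    double total = (double)fs.f_blocks * fs.f_frsize / tb;
    double avail = (double)fs.f_bavail * fs.f_frsize / tb;

    printf("/home: %.2f TB total, %.2f TB available\n", total, avail);
    return 0;
}
</syntaxhighlight>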


== Additional Information for Cluster Hardware and Usage ==


* [[SLURM_Partitions|SLURM Partition Layout]]
* [[SC_Layout|SC File System Layout]]
* [[SC Data Management|SC Data Management]]
