CIRCE Hardware

Advanced computing resources at the University of South Florida are administered by Research Computing (RC). RC hosts a cluster computer that currently consists of approximately 500 nodes with nearly 7200 processor cores running Red Hat Enterprise Linux 6. The cluster is built on the condominium model, with 24TB of memory shared across the nodes in various configurations. The most recent major addition to the cluster comprises 128 dual eight-core 2.6GHz Intel Sandy Bridge nodes with 32GB of RAM per node. Twenty of these nodes are also equipped with dual NVIDIA Kepler K20 GPUs, and an additional 8 NVIDIA Fermi GPGPUs are available. The nodes use SDR/DDR/QDR InfiniBand as the computational interconnect. A 250TB Lustre file system supports high-speed and I/O-intensive computations. For long-term storage, researchers share a 100TB replicated file system for home directories and shared files that is backed up nightly. RC also provides and supports more than 120 scientific software packages for use on a variety of platforms. Remote system and file access, including computational job submission and monitoring, is available from essentially anywhere via a web interface. RC staff members are available to facilitate use of the cluster, as well as to provide direct assistance with research projects that require high-performance computing or advanced visualization and analysis of data. User education and training sessions are also offered several times per semester.

  • The above statement, as well as any information below, may be used as part of the facilities description for grant proposals, provided it is acknowledged that these resources are administered by Research Computing at the University of South Florida.
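
To give a sense of how the multi-node, InfiniBand-connected hardware described above is typically exercised, the following is a minimal MPI "hello world" sketch in C. The compiler wrapper (for example, mpicc) and any environment-module setup are assumptions about the CIRCE software stack and may differ from the packages RC actually provides.

    #include <stdio.h>
    #include <mpi.h>

    /* Minimal MPI "hello world": each rank reports which node it landed on. */
    int main(int argc, char **argv)
    {
        int rank = 0, size = 0, len = 0;
        char host[MPI_MAX_PROCESSOR_NAME];

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's rank */
        MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total ranks in the job */
        MPI_Get_processor_name(host, &len);     /* hostname of the compute node */

        printf("Rank %d of %d running on %s\n", rank, size, host);

        MPI_Finalize();
        return 0;
    }

Launched across several of the nodes listed below, the per-rank hostnames make it easy to confirm that a job is actually spanning multiple InfiniBand-connected machines.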

Server Hardware

Nodes  Core Count  Processors                              Memory per Node  Interconnect    Additional Info
138    1656        2 x Intel Xeon E5649 (Six Core)         24GB             QDR InfiniBand
128    2048        2 x Intel Xeon E5-2670 (Eight Core)     32GB             QDR InfiniBand  2013 Expansion
68     816         2 x Intel Xeon E5-2630 (Six Core)       24GB             QDR InfiniBand
40     800         2 x Intel Xeon E5-2650 v3 (Ten Core)    128GB            QDR InfiniBand  hii02 partition
36     288         2 x AMD Opteron 2384 (Quad Core)        16GB             DDR InfiniBand
34     408         2 x AMD Opteron 2427 (Six Core)         24GB             DDR InfiniBand
20     320         2 x Intel Xeon E5-2650 v2 (Eight Core)  192GB            QDR InfiniBand  hii01 partition
20     320         2 x Intel Xeon E5-2650 v2 (Eight Core)  64GB             QDR InfiniBand  hii01 partition
16     192         2 x Intel Xeon E5-2620 (Six Core)       64GB             QDR InfiniBand  hii01 partition
4      48          2 x Intel Xeon E5649 (Six Core)         24GB             QDR InfiniBand  Login nodes
4      80          2 x Intel Xeon E5-2650 v3 (Ten Core)    512GB            QDR InfiniBand  2015 Large-memory nodes
2      32          2 x AMD Opteron 6128 (Eight Core)       192GB            DDR InfiniBand  Large-memory nodes
2      32          2 x AMD Opteron 6128 (Eight Core)       18GB             DDR InfiniBand
1      16          4 x Intel Xeon E7330 (Quad Core)        132GB            SDR InfiniBand  Large-memory node
1      16          2 x Intel Xeon E5-2650 (Eight Core)     32GB             QDR InfiniBand  Chemistry GPU node

Totals: 520 nodes, 7168 cores, 24.6TB total memory


GPU Hardware

Card Model         Quantity  Memory  Additional Info
NVIDIA Kepler K20  40        6GB     2013 Expansion
NVIDIA Fermi       8         2GB
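
As an illustrative check of the GPU hardware listed above, the sketch below queries the CUDA runtime for the devices visible to a job (for example, the two K20s on a GPU node). It is written in C against the CUDA runtime API; the include and link paths needed to build it on CIRCE are assumptions.

    #include <stdio.h>
    #include <cuda_runtime_api.h>

    /* List the CUDA devices visible to this process. */
    int main(void)
    {
        int count = 0;
        cudaError_t err = cudaGetDeviceCount(&count);
        if (err != cudaSuccess) {
            fprintf(stderr, "cudaGetDeviceCount failed: %s\n", cudaGetErrorString(err));
            return 1;
        }
        printf("Visible CUDA devices: %d\n", count);

        for (int i = 0; i < count; ++i) {
            struct cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, i);
            printf("  Device %d: %s, %.1f GB, compute capability %d.%d\n",
                   i, prop.name,
                   prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0),
                   prop.major, prop.minor);
        }
        return 0;
    }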


File System Hardware

File System Path  File System Type  Interconnect       Available Size  Backed Up?  Long-Term Storage  Additional Info
/home             NFS               NFS over Ethernet  55TB            Daily       Yes                Home directory space for secure file storage


Additional Information for Cluster Hardware and Usage