Latest revision as of 21:18, 13 February 2024
CIRCE Hardware
Advanced computing resources at the University of South Florida are administered by Research Computing (RC). RC hosts the CIRCE (Central Instructional and Research Computing Environment) cluster, which currently consists of approximately 350 nodes with over 9,000 processor cores running Red Hat Enterprise Linux v7. The cluster is built on the condominium model, with approximately 57TB of memory shared across the nodes in various configurations. Additionally, there are 158 GPUs available. The nodes use 100GB Omnipath, 100GB Infiniband, or 40GB Infiniband as the computational interconnect. Three parallel file systems are employed on CIRCE: a 2.9PB GPFS file system, an 819TB BeeGFS file system, and a 350TB encrypted BeeGFS file system. These file systems support high-speed and I/O-intensive computations, as well as long-term storage. RC also provides and supports more than 120 scientific software packages for use on a variety of platforms. Remote system and file access is available from essentially anywhere. RC staff members are available to facilitate use of the cluster, as well as to provide direct assistance with research projects that require high-performance computing or advanced visualization and analysis of data. User education and training sessions are also provided upon request.
- The above statement, as well as any information below, can be used as part of the facilities description for grant proposals, provided that it is acknowledged that these resources are administered by Research Computing at the University of South Florida.
Server Hardware
Partition | Nodes | Core Count | Processors | Cores per Node | Memory per Node | GPUs per Node | Interconnect |
amd_2021 | 4 | 512 | 2x AMD Epyc 7702 CPU @ 2.0GHz | 128 | 1TB | none | 100GB Infiniband |
amdwoods_2022 | 4 | 128 | 2x Intel(R) Xeon(R) Gold 6226R @ 2.90GHz | 32 | 192GB | none | 100GB Infiniband |
bfbsm_2019 | 12 | 288 | 2x Intel(R) Xeon(R) Silver 4214 CPU @ 2.20GHz | 24 | 192GB | none | 100GB Omnipath |
cbcs | 7 | 168 | 2x Intel(R) Xeon(R) Silver 4214 CPU @ 2.20GHz | 24 | 192GB | none | 100GB Infiniband |
cbcs | 1 | 32 | 2x Intel(R) Xeon(R) Silver 4214 CPU @ 2.20GHz | 32 | 192GB | 4x Tesla V100 | 100GB Infiniband |
charbonnier_2022 | 4 | 128 | 2x Intel(R) Xeon(R) Gold 6226R @ 2.90GHz | 32 | 96GB | none | 100GB Infiniband |
chbme_2018 | 12 | 240 | 2x Intel(R) Xeon(R) Silver 4114 CPU @ 2.20GHz | 20 | 96GB | 2x GTX 1080 Ti | 100GB Omnipath |
chbme_2018 | 2 | 48 | 2x Intel(R) Xeon(R) Gold 6136 CPU @ 3.00GHz | 24 | 96GB | 1x TITAN RTX | 100GB Omnipath |
chbme_2018 | 1 | 24 | 2x Intel(R) Xeon(R) Gold 6136 CPU @ 3.00GHz | 24 | 96GB | 1x TITAN V | 100GB Omnipath |
chbme_2018 | 4 | 96 | 2x Intel(R) Xeon(R) Gold 6136 CPU @ 3.00GHz | 24 | 96GB | 3x GTX 1080 Ti | 100GB Omnipath |
chbme_2018 | 1 | 24 | 2x Intel(R) Xeon(R) Gold 6136 CPU @ 3.00GHz | 24 | 96GB | 2x GTX 1080 Ti | 100GB Omnipath |
circe | 108 | 2592 | 2x Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz | 24 | 64GB | none | 40GB Infiniband |
cms_ocg | 9 | 216 | 2x Intel(R) Xeon(R) Silver 4214 CPU @ 2.20GHz | 24 | 192GB | none | 100GB Omnipath |
cms_ocg_2022 | 10 | 320 | 2x Intel(R) Xeon(R) Silver 4314 CPU @ 2.40GHz | 32 | 256GB | none | 100GB Infiniband |
cool2022 | 4 | 512 | 2x AMD Epyc 7702 CPU @ 2.0GHz | 128 | 1TB | none | 40GB/100GB Infiniband |
qcg_gayles_2022 | 8 | 256 | 2x Intel(R) Xeon(R) Gold 6226R @ 2.90GHz | 32 | 192GB | none | 100GB Infiniband |
hchg | 2 | 48 | 2x Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz | 24 | 128GB | none | 40GB Infiniband |
hii02 | 34 | 544 | 2x Intel(R) Xeon(R) CPU E5-2650 v2 @ 2.60GHz | 16 | 128GB | none | 40GB Infiniband |
hii02 | 32 | 640 | 2x Intel(R) Xeon(R) CPU E5-2650 v3 @ 2.30GHz | 20 | 128GB | none | 40GB Infiniband |
hii02 | 2 | 56 | 2x Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz | 28 | 1000GB | none | 40GB Infiniband |
himem | 4 | 80 | 2x Intel(R) Xeon(R) CPU E5-2650 v3 @ 2.30GHz | 20 | 512GB | none | 40GB Infiniband |
margres_2020 | 3 | 72 | 2x Intel(R) Xeon(R) Silver 4214 CPU @ 2.20GHz | 24 | 192GB | none | 100GB Infiniband |
muma_2021 | 5 | 640 | 2x AMD Epyc 7702 CPU @ 2.0GHz | 128 | 1TB | none | 100GB Infiniband |
muma_2021 | 1 | 32 | 2x Intel(R) Xeon(R) Silver 4314 CPU @ 2.40GHz | 32 | 256GB | 2x RTX A6000 | 100GB Infiniband |
simmons_itn18 | 20 | 480 | 2x Intel(R) Xeon(R) Gold 6136 CPU @ 3.00GHz | 24 | 96GB | 3x GTX 1080 Ti | 100GB Omnipath |
snsm_itn19 | 27 | 540 | 2x Intel(R) Xeon(R) Silver 4114 CPU @ 2.20GHz | 20 | 192GB | 1x GTX 1070 Ti | 100GB Omnipath |
snsm_itn19 | 10 | 200 | 2x Intel(R) Xeon(R) Silver 4114 CPU @ 2.20GHz | 20 | 192GB | 2x GTX 1070 Ti | 100GB Omnipath |
tfawcett | 2 | 40 | 2x Intel(R) Xeon(R) Silver 4114 CPU @ 2.20GHz | 20 | 192GB | 1x GTX 1080 Ti | 100GB Omnipath |
Totals | 333 | 8956 | 55952 GB | 155 |
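The node, core, and memory totals can be cross-checked against the per-partition rows above. The sketch below tabulates (nodes, total cores, memory per node in GB) for each row; note the assumption that the "1TB" entries mean 1024GB, which is what makes the published memory total come out exactly.

```python
# Per-partition rows from the table: (partition, nodes, total cores, memory per node in GB).
# Assumption: 1TB = 1024GB for the amd_2021, cool2022, and muma_2021 rows.
rows = [
    ("amd_2021",          4,  512, 1024),
    ("amdwoods_2022",     4,  128,  192),
    ("bfbsm_2019",       12,  288,  192),
    ("cbcs",              7,  168,  192),
    ("cbcs",              1,   32,  192),
    ("charbonnier_2022",  4,  128,   96),
    ("chbme_2018",       12,  240,   96),
    ("chbme_2018",        2,   48,   96),
    ("chbme_2018",        1,   24,   96),
    ("chbme_2018",        4,   96,   96),
    ("chbme_2018",        1,   24,   96),
    ("circe",           108, 2592,   64),
    ("cms_ocg",           9,  216,  192),
    ("cms_ocg_2022",     10,  320,  256),
    ("cool2022",          4,  512, 1024),
    ("qcg_gayles_2022",   8,  256,  192),
    ("hchg",              2,   48,  128),
    ("hii02",            34,  544,  128),
    ("hii02",            32,  640,  128),
    ("hii02",             2,   56, 1000),
    ("himem",             4,   80,  512),
    ("margres_2020",      3,   72,  192),
    ("muma_2021",         5,  640, 1024),
    ("muma_2021",         1,   32,  256),
    ("simmons_itn18",    20,  480,   96),
    ("snsm_itn19",       27,  540,  192),
    ("snsm_itn19",       10,  200,  192),
    ("tfawcett",          2,   40,  192),
]

nodes  = sum(r[1] for r in rows)           # total node count
cores  = sum(r[2] for r in rows)           # total core count
mem_gb = sum(r[1] * r[3] for r in rows)    # total memory across all nodes

print(nodes, cores, mem_gb)  # 333 8956 55952
```

The sums reproduce the Totals row (333 nodes, 8,956 cores, 55,952GB of memory), matching the "approximately 350 nodes with over 9,000 processor cores" and "approximately 57TB of memory" figures in the overview.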