Boston University Shared Computing Cluster (SCC)
Listed below are the technical details of the SCC login and compute nodes, including run-time limits, the SU charge rate for each node, and the configuration of the batch system. If your code is not able to run within these parameters, please don't hesitate to send email to help@scc.bu.edu.
Hardware Configuration
Host Name(s) & Node Type | # of Nodes | Processors / Node | Memory / Node | Scratch Disk / Node | Network | CPU Architecture | SU Charge per CPU hour |
---|---|---|---|---|---|---|---|
Login Nodes | | | | | | | |
scc1.bu.edu, scc2.bu.edu (General access, 32 cores) | 2 | 2 sixteen-core 2.8 GHz Intel Gold 6242 | 256 GB | 885 GB | 10 Gbps Ethernet | cascadelake | 0 |
geo.bu.edu (E&E Dept., 32 cores) | 1 | 2 sixteen-core 2.8 GHz Intel Gold 6242 | 256 GB | 885 GB | 10 Gbps Ethernet | cascadelake | 0 |
scc4.bu.edu (BUMC/dbGaP, 32 cores) | 1 | 2 sixteen-core 2.8 GHz Intel Gold 6242 | 256 GB | 885 GB | 10 Gbps Ethernet | cascadelake | 0 |
Compute Nodes – Shared | | | | | | | |
scc-aa1..aa8, scc-ab1..ab8, scc-ac1..ac8, scc-ad1..ad8, scc-ae1, scc-ae3, scc-ae4 (16 cores) | 35 | 2 eight-core 2.6 GHz Intel Xeon E5-2670 | 128 GB | 885 GB | 1 Gbps Ethernet, FDR Infiniband | sandybridge | 1.0 |
scc-ba2..ba8, scc-bb1..bb8, scc-bc1..bc4, scc-bd3..bd8 (16 cores) | 25 | 2 eight-core 2.6 GHz Intel Xeon E5-2670 | 128 GB | 885 GB | 1 Gbps Ethernet | sandybridge | 1.0 |
scc-ca1..ca8 (16 cores) | 8 | 2 eight-core 2.6 GHz Intel Xeon E5-2670 | 256 GB | 885 GB | 10 Gbps Ethernet | sandybridge | 1.0 |
scc-c01 (20 cores) | 1 with 2 K40m GPUs [2] | 2 ten-core 2.6 GHz Intel Xeon E5-2660v3 | 128 GB | 885 GB | 1 Gbps Ethernet | haswell | 1.0 |
scc-c06, scc-c07 (36 cores) | 2 | 2 eighteen-core 2.4 GHz Intel Xeon E7-8867v4 | 1024 GB | 1068 GB | 10 Gbps Ethernet | broadwell | 1.0 |
scc-c08..c11 (28 cores) | 4 with 2 P100 GPUs each [4] | 2 fourteen-core 2.4 GHz Intel Xeon E5-2680v4 | 256 GB | 849 GB | 10 Gbps Ethernet | broadwell | 1.0 |
scc-ed1..ee4, scc-ef1, scc-fa1..fi4 (32 cores) | 41 | 2 sixteen-core 2.8 GHz Intel Gold 6242 | 192 GB | 885 GB | 10 Gbps Ethernet | cascadelake | 1.0 [1] |
scc-ib1, scc-ib2 (68 cores) | 2 | 1 sixty-eight-core 1.4 GHz Intel Xeon Phi (Knights Landing) 7250 | 192 GB | 152 GB | 10 Gbps Ethernet | knl | 0.0 |
scc-ic1, scc-ic2 (28 cores) | 2 | 2 fourteen-core 2.4 GHz Intel Xeon E5-2680v4 | 128 GB | 885 GB | 10 Gbps Ethernet | broadwell | 1.0 |
scc-pa1..pa5, scc-pa7, scc-pa8, scc-pb1..pb4, scc-pb6..pb8, scc-pc1..pc8, scc-pd1..pd8, scc-pe1..pe8, scc-pf1..pf8, scc-pg1..pg8, scc-ph1..ph8, scc-pi1..pi4 (16 cores) | 66 | 2 eight-core 2.6 GHz Intel Xeon E5-2650v2 | 128 GB | 885 GB | 1 Gbps Ethernet | ivybridge | 1.0 |
scc-ua1..ua4, scc-ub1..ub4, scc-uc1..uc4, scc-ud1..ud4, scc-ue1..ue4, scc-uf1..uf4, scc-ug1..ug4, scc-uh1..uh4, scc-ui1..ui4 (28 cores) | 36 | 2 fourteen-core 2.4 GHz Intel Xeon E5-2680v4 | 256 GB | 885 GB | 10 Gbps Ethernet, EDR Infiniband | broadwell | 1.0 |
scc-v01 (12 cores) | 1 with 1 K2200 GPU [5] | 1 twelve-core 2.4 GHz Intel Xeon E5-2620v3 | 64 GB | 427 GB | 10 Gbps Ethernet | haswell | 1.0 |
scc-v02, scc-v03 (8 cores) | 2 with 1 M2000 GPU each [6] | 1 eight-core 2.1 GHz Intel Xeon E5-2620v4 | 128 GB | 427 GB | 10 Gbps Ethernet | broadwell | 1.0 |
scc-va1..va4, scc-wa1..wa4, scc-wb1..wb4, scc-wc1..wc4, scc-wd1..wd4, scc-we2..we4, scc-wf1, scc-wf2, scc-wf4, scc-wg2..wg4, scc-wh1..wh4, scc-wi1..wi4, scc-wl1..wl4, scc-wm1..wm4, scc-wn1..wn4 (28 cores) | 49 | 2 fourteen-core 2.4 GHz Intel Xeon E5-2680v4 | 256 GB | 885 GB | 10 Gbps Ethernet | broadwell | 1.0 |
scc-wj1..wj4, scc-wk1..wk4 (28 cores) | 8 | 2 fourteen-core 2.4 GHz Intel Xeon E5-2680v4 | 512 GB | 885 GB | 10 Gbps Ethernet | broadwell | 1.0 |
scc-x05, scc-x06 (28 cores) | 2 with 2 V100 GPUs each [7] | 2 fourteen-core 2.6 GHz Intel Gold 6132 | 192 GB | 885 GB | 10 Gbps Ethernet | skylake | 1.0 |
scc-ya1..ya4, scc-yb1..yb4, scc-yc1..yc4, scc-yd1..yd4, scc-ye1..ye4, scc-yf1..yf4, scc-yg1..yg4, scc-yh1..yh4, scc-yi1..yi4, scc-yp3, scc-yp4, scc-yr4 (28 cores) | 39 | 2 fourteen-core 2.6 GHz Intel Gold 6132 | 192 GB | 885 GB | 10 Gbps Ethernet | skylake | 1.0 |
scc-yj1..yj4, scc-yk1..yk4, scc-zk3 (28 cores) | 9 | 2 fourteen-core 2.6 GHz Intel Gold 6132 | 384 GB | 885 GB | 10 Gbps Ethernet | skylake | 1.0 |
scc-za1..za4, scc-zb1..zb4, scc-zc1..zc4, scc-zd1..zd4, scc-ze1..ze4, scc-zf1..zf4, scc-zg1..zg4, scc-zh1..zh4, scc-zi1..zi4 (28 cores) | 36 | 2 fourteen-core 2.6 GHz Intel Gold 6132 | 192 GB | 885 GB | 10 Gbps Ethernet, EDR Infiniband | skylake | 1.0 |
scc-211 (32 cores) | 1 with 4 A40 GPUs [16] | 2 sixteen-core 2.9 GHz Intel Gold 6326 | 256 GB | 859 GB | 10 Gbps Ethernet | icelake | 1.0 |
scc-212 (32 cores) | 1 with 4 A100 GPUs [17] | 2 sixteen-core 2.9 GHz Intel Gold 6326 | 256 GB | 859 GB | 10 Gbps Ethernet | icelake | 1.0 |
scc-4a01..4a12, scc-4b01..4b12, scc-4c01..4c12 (64 cores) | 36 | 2 thirty-two-core 2.6 GHz Intel Xeon Platinum 8358 | 512 GB | 849 GB | 25 Gbps Ethernet, 200 Gbps HDR Infiniband | icelake | 1.0 |
scc-501..506 (32 cores) | 6 with 4 L40S GPUs each [20] | 2 sixteen-core 2.5 GHz Intel Gold 6426Y | 256 GB | 839 GB | 25 Gbps Ethernet | sapphirerapids | 1.0 [1] |
Compute Nodes – Buy-In (buy-in nodes have no SU charge for use by their owners) | | | | | | | |
scc-ae5..ae7, scc-be4 (16 cores) | 4 | 2 eight-core 2.6 GHz Intel Xeon E5-2650v2 | 256 GB | 885 GB | 1 Gbps Ethernet | ivybridge | 1.0 [1] |
scc-bc5..bc8, scc-be1, scc-be2 (16 cores) | 8 | 2 eight-core 2.6 GHz Intel Xeon E5-2670 | 128 GB | 885 GB | 1 Gbps Ethernet | sandybridge | 1.0 [1] |
scc-be3, scc-be5..be8 (16 cores) | 5 | 2 eight-core 2.6 GHz Intel Xeon E5-2650v2 | 128 GB | 885 GB | 1 Gbps Ethernet | ivybridge | 1.0 [1] |
scc-c03 (16 cores) | 1 | 2 eight-core 2.0 GHz Intel Xeon E7-4809v3 | 1024 GB | 794 GB | 10 Gbps Ethernet | haswell | 1.0 [1] |
scc-c04, scc-c05 (20 cores) | 2 with 4 K40m GPUs each [3] | 2 ten-core 2.6 GHz Intel Xeon E5-2660v3 | 128 GB | 885 GB | 10 Gbps Ethernet | haswell | 1.0 [1] |
scc-c12..c14 (28 cores) | 3 with 4 P100 GPUs each [4] | 2 fourteen-core 2.4 GHz Intel Xeon E5-2680v4 | 256 GB | 849 GB | 10 Gbps Ethernet | broadwell | 1.0 |
scc-cb1..cb4 (16 cores) | 4 | 2 eight-core 2.6 GHz Intel Xeon E5-2670 | 256 GB | 885 GB | 10 Gbps Ethernet | sandybridge | 1.0 [1] |
scc-cb5..cb8 (16 cores) | 4 | 2 eight-core 2.6 GHz Intel Xeon E5-2650v2 | 256 GB | 885 GB | 10 Gbps Ethernet | ivybridge | 1.0 [1] |
scc-cc1..cc8 (16 cores) | 8 | 2 eight-core 2.6 GHz Intel Xeon E5-2650v2 | 128 GB | 885 GB | 10 Gbps Ethernet | ivybridge | 1.0 [1] |
scc-da1..da4, scc-db1, scc-db4, scc-df4 (16 cores) | 7 | 2 eight-core 2.7 GHz Intel Xeon E5-2680 | 128 GB | 427 GB | 1 Gbps Ethernet | sandybridge | 1.0 [1] |
scc-dc2, scc-dc4, scc-dd2..dd4, scc-df1 (16 cores) | 6 | 2 eight-core 2.7 GHz Intel Xeon E5-2680 | 64 GB | 427 GB | 1 Gbps Ethernet | sandybridge | 1.0 [1] |
scc-e01 (32 cores) | 1 with 9 A6000 GPUs [11] | 2 sixteen-core 2.9 GHz Intel Xeon Gold 6226R | 384 GB | 1729 GB | 10 Gbps Ethernet | cascadelake | 1.0 [1] |
scc-e02 (32 cores) | 1 with 10 A6000 GPUs [11] | 2 sixteen-core 2.9 GHz Intel Xeon Gold 6226R | 384 GB | 849 GB | 10 Gbps Ethernet | cascadelake | 1.0 [1] |
scc-e03 (32 cores) | 1 with 10 A6000 GPUs [11] | 2 sixteen-core 2.9 GHz Intel Xeon Gold 6226R | 384 GB | 1729 GB | 10 Gbps Ethernet | cascadelake | 1.0 [1] |
scc-e04 (32 cores) | 1 with 8 RTX 8000 GPUs [12] | 2 sixteen-core 2.9 GHz Intel Xeon Gold 6226R | 384 GB | 3312 GB | 10 Gbps Ethernet | cascadelake | 1.0 [1] |
scc-e05 (28 cores) | 1 with 10 TitanXp GPUs [13] | 2 fourteen-core 2.2 GHz Intel Xeon Gold 5120 | 384 GB | 3312 GB | 10 Gbps Ethernet | skylake | 1.0 [1] |
scc-ea1, scc-ea2, scc-eb1..3, scc-ec1..4, scc-gb1..4, scc-gc1, scc-gc2, scc-gd4, scc-gk1, scc-gk2, scc-gl1, scc-gl3, scc-gm2, scc-gm3, scc-gn1..4, scc-go1..4, scc-gr1, scc-gr2, scc-gr4, scc-zk4, scc-zp2 (32 cores) | 35 | 2 sixteen-core 2.8 GHz Intel Gold 6242 | 384 GB | 885 GB | 10 Gbps Ethernet | cascadelake | 1.0 [1] |
scc-eb4, scc-ef3, scc-ef4, scc-eg2, scc-eh2..4, scc-ei1, scc-ei2, scc-vb1..4, scc-vc1..3, scc-vd1..4, scc-ve1..4, scc-vf1..4, scc-vg1..4 (32 cores) | 32 | 2 sixteen-core 2.9 GHz Intel Xeon Gold 6226R | 192 GB | 885 GB | 10 Gbps Ethernet | cascadelake | 1.0 [1] |
scc-ef2, scc-eg1, scc-eg3, scc-eg4, scc-eh1, scc-ei3, scc-vc4 (32 cores) | 5 | 2 sixteen-core 2.9 GHz Intel Xeon Gold 6226R | 384 GB | 885 GB | 10 Gbps Ethernet | cascadelake | 1.0 [1] |
scc-ga1..4, scc-gc3, scc-gc4, scc-gd1..3, scc-ge1..4, scc-gf1..4, scc-gg1..4, scc-gh1..4, scc-gi1..4, scc-gj1..4, scc-gk3, scc-gk4, scc-gl2, scc-gl4, scc-gm1, scc-gm4, scc-gp1..gp4, scc-gq1..gq4, scc-zl1..4, scc-zm1..4, scc-zn1..4, scc-zo1..4, scc-zp1, scc-zp3, scc-zp4, scc-zq1..4, scc-gr3 (32 cores) | 71 | 2 sixteen-core 2.8 GHz Intel Gold 6242 | 192 GB | 885 GB | 10 Gbps Ethernet | cascadelake | 1.0 [1] |
scc-f01 (24 cores) | 1 with 8 TitanV GPUs [14] | 2 twelve-core 2.3 GHz Intel Xeon Gold 5118 | 384 GB | 1641 GB | 10 Gbps Ethernet | skylake | 1.0 [1] |
scc-f02 (32 cores) | 1 with 5 RTX 6000 GPUs [15] | 2 sixteen-core 2.8 GHz Intel Gold 6242 | 384 GB | 1802 GB | 10 Gbps Ethernet | cascadelake | 1.0 [1] |
scc-f03, scc-301, scc-302 (32 cores) | 3 with 6 A40 GPUs each [16] | 2 sixteen-core 2.9 GHz Intel Gold 6326 | 256 GB | 885 GB | 10 Gbps Ethernet | icelake | 1.0 [1] |
scc-f04 (32 cores) | 1 with 10 A40 GPUs [16] | 2 sixteen-core 2.9 GHz Intel Gold 6326 | 256 GB | 885 GB | 10 Gbps Ethernet | icelake | 1.0 [1] |
scc-f05 (32 cores) | 1 with 8 A40 GPUs [16] | 2 sixteen-core 2.9 GHz Intel Gold 6326 | 256 GB | 885 GB | 10 Gbps Ethernet | icelake | 1.0 [1] |
scc-h01..h24 (32 cores) | 24 | 2 sixteen-core 2.9 GHz Intel Gold 6326 | 256 GB | 1068 GB | 10 Gbps Ethernet, 200 Gbps HDR Infiniband | icelake | 1.0 [1] |
scc-h25 (32 cores) | 1 | 2 sixteen-core 2.9 GHz Intel Gold 6326 | 1024 GB | 7056 GB | 10 Gbps Ethernet | icelake | 1.0 [1] |
scc-h26..h29 (32 cores) | 4 | 2 sixteen-core 2.5 GHz Intel Gold 6426Y | 512 GB | 839 GB | 10 Gbps Ethernet | sapphirerapids | 1.0 [1] |
scc-h30 (32 cores) | 1 | 2 sixteen-core 2.5 GHz Intel Gold 6426Y | 1024 GB | 839 GB | 10 Gbps Ethernet | sapphirerapids | 1.0 [1] |
scc-j01..j08, scc-j10..j13 (32 cores) | 11 with 4 L40S GPUs each [20] | 2 sixteen-core 2.5 GHz Intel Gold 6426Y | 256 GB | 839 GB | 10 Gbps Ethernet | sapphirerapids | 1.0 [1] |
scc-j09, scc-j10 (32 cores) | 2 with 2 L40S GPUs each [20] | 2 sixteen-core 2.5 GHz Intel Gold 6426Y | 256 GB | 839 GB | 10 Gbps Ethernet | sapphirerapids | 1.0 [1] |
scc-k01, scc-k02 (28 cores) | 2 with 4 P100 GPUs each [9] | 2 fourteen-core 2.6 GHz Intel Gold 6132 | 192 GB | 189 GB | 10 Gbps Ethernet | skylake | 1.0 [1] |
scc-k03..k05 (28 cores) | 3 with 4 P100 GPUs each [9] | 2 fourteen-core 2.6 GHz Intel Gold 6132 | 192 GB | 189 GB | 10 Gbps Ethernet | skylake | 1.0 [1] |
scc-k06 (28 cores) | 1 with 1 P100 GPU [9] | 2 fourteen-core 2.6 GHz Intel Gold 6132 | 384 GB | 885 GB | 10 Gbps Ethernet | skylake | 1.0 [1] |
scc-k07..09, scc-k11 (28 cores) | 4 with 2 V100 GPUs each [7] | 2 fourteen-core 2.6 GHz Intel Gold 6132 | 192 GB | 885 GB | 10 Gbps Ethernet | skylake | 1.0 [1] |
scc-k10 (28 cores) | 1 with 1 V100 GPU [7] | 2 fourteen-core 2.6 GHz Intel Gold 6132 | 192 GB | 885 GB | 10 Gbps Ethernet | skylake | 1.0 [1] |
scc-kc1..kc3 (32 cores) | 3 | 2 sixteen-core 2.5 GHz Intel Gold 6426Y | 256 GB | 839 GB | 10 Gbps Ethernet | sapphirerapids | 1.0 [1] |
scc-mb3, scc-mb4, scc-mc1..mc8, scc-me1..me7, scc-mf4, scc-mf5, scc-mf8, scc-mg1..mg8, scc-mh1..mh6, scc-ne5..ne7, scc-pi5..pi8 (16 cores) | 41 | 2 eight-core 2.6 GHz Intel Xeon E5-2650v2 | 128 GB | 885 GB | 1 Gbps Ethernet | ivybridge | 1.0 [1] |
scc-mb5..mb8, scc-md1..md8, scc-me8, scc-mf1, scc-mf2, scc-mf6, scc-mf7, scc-mh7, scc-mh8, scc-ne4 (16 cores) | 20 | 2 eight-core 2.6 GHz Intel Xeon E5-2650v2 | 256 GB | 885 GB | 1 Gbps Ethernet | ivybridge | 1.0 [1] |
scc-na1..na8, scc-nb1..nb8, scc-nc1..nc8, scc-nd1..nd8, scc-ne1..ne3 (16 cores) | 35 | 2 eight-core 2.6 GHz Intel Xeon E5-2650v2 | 128 GB | 885 GB | 1 Gbps Ethernet, FDR Infiniband | ivybridge | 1.0 [1] |
scc-q09..q12 (32 cores) | 4 | 2 sixteen-core 2.8 GHz Intel Gold 6242 | 192 GB | 1793 GB | 10 Gbps Ethernet | skylake | 1.0 [1] |
scc-q17..q21 (28 cores) | 5 | 2 fourteen-core 2.6 GHz Intel Gold 6132 | 192 GB | 876 GB | 10 Gbps Ethernet | skylake | 1.0 [1] |
scc-q23, scc-209 (32 cores) | 2 with 1 V100 GPU each [7] | 2 sixteen-core 2.8 GHz Intel Gold 6242 | 192 GB | 885 GB | 10 Gbps Ethernet | cascadelake | 1.0 [1] |
scc-q24 (32 cores) | 1 with 1 V100 GPU [7] | 2 sixteen-core 2.8 GHz Intel Gold 6242 | 384 GB | 885 GB | 10 Gbps Ethernet | cascadelake | 1.0 [1] |
scc-q25..28 (32 cores) | 4 with 2 V100 GPUs each [7] | 2 sixteen-core 2.8 GHz Intel Gold 6242 | 192 GB | 885 GB | 10 Gbps Ethernet | cascadelake | 1.0 [1] |
scc-q29 (32 cores) | 1 with 1 V100 GPU [8] | 2 sixteen-core 2.8 GHz Intel Gold 6242 | 192 GB | 885 GB | 10 Gbps Ethernet | cascadelake | 1.0 [1] |
scc-q30 (32 cores) | 1 with 4 V100 GPUs [7] | 2 sixteen-core 2.6 GHz Intel Gold 6132 | 192 GB | 189 GB | 10 Gbps Ethernet | skylake | 1.0 [1] |
scc-q31..q36, scc-201..204 (32 cores) | 10 with 4 V100 GPUs each [7] | 2 sixteen-core 2.8 GHz Intel Gold 6242 | 192 GB | 189 GB | 10 Gbps Ethernet | cascadelake | 1.0 [1] |
scc-sa1..sa8, scc-sb1..sb8 (16 cores) | 16 | 2 eight-core 2.6 GHz Intel Xeon E5-2650v2 | 256 GB | 885 GB | 1 Gbps Ethernet, FDR Infiniband | ivybridge | 1.0 [1] |
scc-sc1, scc-sc2 (16 cores) | 2 with 2 K40m GPUs each [2] | 2 eight-core 2.6 GHz Intel Xeon E5-2650v2 | 128 GB | 885 GB | 1 Gbps Ethernet, FDR Infiniband | ivybridge | 1.0 [1] |
scc-sc3..sc6 (16 cores) | 4 | 2 eight-core 2.6 GHz Intel Xeon E5-2650v2 | 128 GB | 885 GB | 1 Gbps Ethernet, FDR Infiniband | ivybridge | 1.0 [1] |
scc-ta1..ta3, scc-to1 (20 cores) | 4 | 2 ten-core 2.6 GHz Intel Xeon E5-2660v3 | 128 GB | 885 GB | 10 Gbps Ethernet | haswell | 1.0 [1] |
scc-ta4, scc-tb1..tb3, scc-tc1..tc4, scc-td1..td4, scc-te1..te4, scc-tf1, scc-tf3, scc-tf4, scc-tg1..tg4, scc-th1..th4, scc-ti1..ti4, scc-tj1, scc-tj2, scc-tk1..tk4, scc-tl1..tl4, scc-tm1..tm4, scc-tn1, scc-tn2, scc-to2..to4 (20 cores) | 50 | 2 ten-core 2.6 GHz Intel Xeon E5-2660v3 | 256 GB | 885 GB | 10 Gbps Ethernet | haswell | 1.0 [1] |
scc-tj3, scc-tj4, scc-tr1, scc-tr3, scc-tr4, scc-wo1, scc-wo2, scc-wp3, scc-xa1..xa4, scc-xb1..xb4, scc-xc1..xc4, scc-xd1..xd4, scc-xe1..xe4, scc-xf1..xf4 (28 cores) | 32 | 2 fourteen-core 2.4 GHz Intel Xeon E5-2680v4 | 512 GB | 885 GB | 10 Gbps Ethernet | broadwell | 1.0 [1] |
scc-tn3, scc-tn4, scc-tp1..tp4, scc-tq1..tq4, scc-tr2, scc-uj1..uj4, scc-uk1..uk4, scc-ul1..ul4, scc-um1, scc-um2, scc-wo3, scc-wo4, scc-wp1, scc-wp2, scc-wp4, scc-wq1..wq4, scc-wr1..wr4, scc-xg1..xg4, scc-xh1..xh4, scc-xi1..xi4, scc-xj1..xj4, scc-xk1..xk4, scc-xl1..xl4, scc-xm1..xm4 (28 cores) | 66 | 2 fourteen-core 2.4 GHz Intel Xeon E5-2680v4 | 256 GB | 885 GB | 10 Gbps Ethernet | broadwell | 1.0 [1] |
scc-un1..4, scc-uo1..4, scc-yq1..4 (28 cores) | 12 | 2 fourteen-core 2.6 GHz Intel Gold 6132 | 192 GB | 885 GB | 10 Gbps Ethernet | skylake | 1.0 [1] |
scc-up1, scc-up2, scc-zk1, scc-zk2 (28 cores) | 4 | 2 fourteen-core 2.6 GHz Intel Gold 6132 | 384 GB | 885 GB | 10 Gbps Ethernet | skylake | 1.0 [1] |
scc-up3, scc-up4, scc-uq1..4, scc-yl1..4, scc-ym1..3, scc-yn1..4, scc-yo1, scc-yo3, scc-yo4, scc-yp2, scc-yr1..3, scc-zj1..4 (28 cores) | 28 | 2 fourteen-core 2.6 GHz Intel Gold 6132 | 192 GB | 885 GB | 10 Gbps Ethernet | skylake | 1.0 [1] |
scc-v04..v07 (8 cores) | 4 with 1 M2000 GPU each [6] | 1 eight-core 2.1 GHz Intel Xeon E5-2620v4 | 128 GB | 427 GB | 10 Gbps Ethernet | broadwell | 1.0 [1] |
scc-v08 (32 cores) | 1 | 2 sixteen-core 2.9 GHz Intel Gold 6326 | 1024 GB | 1068 GB | 10 Gbps Ethernet | icelake | 1.0 [1] |
scc-x01..x04 (28 cores) | 4 with 2 P100 GPUs each [4] | 2 fourteen-core 2.4 GHz Intel Xeon E5-2680v4 | 256 GB | 849 GB | 10 Gbps Ethernet | broadwell | 1.0 [1] |
scc-x07, scc-q22 (32 cores) | 2 | 2 sixteen-core 2.4 GHz AMD Epyc 7351 | 1024 GB | 885 GB | 10 Gbps Ethernet | epyc | 1.0 [1] |
scc-yp1 (28 cores) | 1 | 2 fourteen-core 2.6 GHz Intel Gold 6132 | 768 GB | 885 GB | 10 Gbps Ethernet | skylake | 1.0 [1] |
scc-205, scc-210 (32 cores) | 2 with 4 A100 GPUs each [10] | 2 sixteen-core 2.9 GHz Intel Gold 6326 | 256 GB | 859 GB | 10 Gbps Ethernet | icelake | 1.0 [1] |
scc-207 (32 cores) | 1 with 1 A100 GPU [10] | 2 sixteen-core 2.8 GHz Intel Gold 6242 | 768 GB | 885 GB | 10 Gbps Ethernet | cascadelake | 1.0 [1] |
scc-208 (32 cores) | 1 with 2 V100 GPUs [7] | 2 sixteen-core 2.8 GHz Intel Gold 6242 | 192 GB | 885 GB | 10 Gbps Ethernet | cascadelake | 1.0 [1] |
scc-213, scc-217, scc-218 (32 cores) | 3 with 2 A40 GPUs each [16] | 2 sixteen-core 2.9 GHz Intel Gold 6326 | 256 GB | 1014 GB | 10 Gbps Ethernet | icelake | 1.0 [1] |
scc-214..216 (32 cores) | 3 with 4 A40 GPUs each [16] | 2 sixteen-core 2.9 GHz Intel Gold 6326 | 256 GB | 859 GB | 10 Gbps Ethernet | icelake | 1.0 [1] |
scc-219, scc-221 (32 cores) | 2 with 2 A100 GPUs each [17] | 2 sixteen-core 2.9 GHz Intel Gold 6326 | 256 GB | 1014 GB | 10 Gbps Ethernet | icelake | 1.0 [1] |
scc-220 (32 cores) | 1 with 4 A100 GPUs [17] | 2 sixteen-core 2.9 GHz Intel Gold 6326 | 256 GB | 1435 GB | 10 Gbps Ethernet | icelake | 1.0 [1] |
scc-222 (96 cores) | 1 | 2 forty-eight-core 2.1 GHz Intel Platinum 8468 | 512 GB | 840 GB | 10 Gbps Ethernet | sapphirerapids | 1.0 [1] |
scc-303 (32 cores) | 1 with 10 A40 GPUs [16] | 2 sixteen-core 2.9 GHz Intel Gold 6326 | 1024 GB | 859 GB | 10 Gbps Ethernet | icelake | 1.0 [1] |
scc-304 (32 cores) | 1 with 6 L40 GPUs [18] | 2 sixteen-core 2.9 GHz Intel Gold 6326 | 512 GB | 859 GB | 10 Gbps Ethernet | icelake | 1.0 [1] |
scc-305, scc-306 (48 cores) | 2 with 4 A100 GPUs each [17] | 2 twenty-four-core 2.65 GHz AMD EPYC-7413 | 512 GB | 1426 GB | 10 Gbps Ethernet | epyc | 1.0 [1] |
scc-307 (32 cores) | 1 with 4 A6000 GPUs [11] | 2 sixteen-core 3.00 GHz AMD EPYC-7302 | 256 GB | 3480 GB | 10 Gbps Ethernet | epyc | 1.0 [1] |
scc-308 (32 cores) | 1 with 4 RTX 6000 Ada GPUs [19] | 2 sixteen-core 3.00 GHz AMD EPYC-7302 | 256 GB | 3480 GB | 10 Gbps Ethernet | epyc | 1.0 [1] |
scc-309 (64 cores) | 1 with 10 RTX 6000 Ada GPUs [19] | 2 thirty-two-core 3.25 GHz AMD EPYC 9354 | 1536 GB | 1720 GB | 10 Gbps Ethernet | epyc | 1.0 [1] |
scc-507 (32 cores) | 1 with 2 L40S GPUs [20] | 2 sixteen-core 2.5 GHz Intel Gold 6426Y | 256 GB | 839 GB | 25 Gbps Ethernet | sapphirerapids | 1.0 [1] |
scc-508, scc-509 (32 cores) | 2 with 2 L40S GPUs each [20] | 2 sixteen-core 2.5 GHz Intel Gold 6426Y | 256 GB | 839 GB | 25 Gbps Ethernet | sapphirerapids | 1.0 [1] |
scc-601 (32 cores) | 1 with 8 L40S GPUs [20] | 2 sixteen-core 2.5 GHz AMD EPYC 9124 | 768 GB | 1719 GB | 25 Gbps Ethernet | sapphirerapids | 1.0 [1] |
scc-602 (32 cores) | 1 with 8 RTX 6000 Ada GPUs [19] | 2 sixteen-core 2.5 GHz AMD EPYC 9124 | 768 GB | 1719 GB | 25 Gbps Ethernet | sapphirerapids | 1.0 [1] |
scc-603 (32 cores) | 1 with 5 A6000 GPUs [11] | 2 sixteen-core 2.5 GHz AMD EPYC 9124 | 384 GB | 1719 GB | 25 Gbps Ethernet | sapphirerapids | 1.0 [1] |
scc-604..608 (32 cores) | 5 with 8 A6000 GPUs each [11] | 2 sixteen-core 2.5 GHz AMD EPYC 9124 | 768 GB | 1719 GB | 25 Gbps Ethernet | sapphirerapids | 1.0 [1] |
scc-609 (64 cores) | 1 with 8 RTX 6000 Ada GPUs [19] | 2 thirty-two-core 3.25 GHz AMD EPYC 9354 | 1152 GB | 1719 GB | 25 Gbps Ethernet | sapphirerapids | 1.0 [1] |
[1] These machines have limited access; not all SCF users can fully utilize these systems. For those users with special access to these systems, the SU charge is 0.0 for these systems only.
[2] Each of these nodes has two NVIDIA Tesla K40m GPU Cards with 12 GB of Memory each.
[3] Each of these nodes has four NVIDIA Tesla K40m GPU Cards with 12 GB of Memory each.
[4] Each of these nodes has two or four (specified above) NVIDIA Tesla P100 GPU Cards with 12 GB of Memory each.
[5] This node is for support of VirtualGL and has one NVIDIA K2200 GPU Card with 4 GB of Memory.
[6] These nodes are for support of VirtualGL and each has one NVIDIA M2000 GPU Card with 4 GB of Memory.
[7] Each of these nodes has one, two, or four (specified above) NVIDIA Tesla V100 GPU Cards with 16 GB of Memory each.
[8] Each of these nodes has one, two, or four (specified above) NVIDIA Tesla V100 GPU Cards with 32 GB of Memory each.
[9] Each of these nodes has one, two, or four (specified above) NVIDIA Tesla P100 GPU Cards with 16 GB of Memory each.
[10] Each of these nodes has the indicated number of NVIDIA A100 GPU Cards with 40 GB of Memory each.
[11] Each of these nodes has the indicated number of NVIDIA A6000 GPU Cards with 48 GB of Memory each.
[12] Each of these nodes has the indicated number of NVIDIA RTX 8000 GPU Cards with 48 GB of Memory each.
[13] Each of these nodes has the indicated number of NVIDIA TitanXp GPU Cards with 12 GB of Memory each.
[14] Each of these nodes has the indicated number of NVIDIA TitanV GPU Cards with 12 GB of Memory each.
[15] Each of these nodes has the indicated number of NVIDIA RTX 6000 GPU Cards with 24 GB of Memory each.
[16] Each of these nodes has the indicated number of NVIDIA A40 GPU Cards with 48 GB of Memory each.
[17] Each of these nodes has the indicated number of NVIDIA A100 GPU Cards with 80 GB of Memory each.
[18] Each of these nodes has the indicated number of NVIDIA L40 GPU Cards with 48 GB of Memory each.
[19] Each of these nodes has the indicated number of NVIDIA RTX 6000 Ada GPU Cards with 48 GB of Memory each.
[20] Each of these nodes has the indicated number of NVIDIA L40S GPU Cards with 48 GB of Memory each.
All GPUs are currently in "friendly user" mode, so at the moment only CPU usage is charged on SCC nodes with GPUs.
The Knights Landing nodes (scc-ib1 and scc-ib2) are also currently in "friendly user" mode, with no charge for CPU usage on these nodes.
Depending on the speed, memory, and other factors, each node type is charged at a different SU rate per hour, as shown in the table above. Our allocations policy is explained in more detail here.
Batch System and Usage
The batch system on the SCC is the Open Grid Scheduler (OGS), an open-source batch system based on the Sun Grid Engine scheduler.
Job Run Time Limits
The limitations below apply to the shared SCC resources. Limitations on buy-in nodes are defined by their owners.
Limit | Description |
---|---|
15 minutes of CPU Time | Jobs running on the login nodes are limited to 15 minutes of CPU time and a small number of cores. |
12 hours default wall clock | Jobs on the batch nodes have a default wall clock limit of 12 hours but this can be increased, depending on the type of job. Use the qsub option -l h_rt=HH:MM:SS to ask for a higher limit. |
720 hours – serial job; 120 hours – MPI job; 48 hours – GPU job | Single-processor (serial) and OMP (multiple cores, all on one node) jobs can run for up to 720 hours, MPI jobs for up to 120 hours, and jobs using GPUs for up to 48 hours. |
1024 cores | An individual user is also only allowed to have 1024 shared cores maximum simultaneously in the run state. This limit does not affect job submission. |
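As a sketch of how these limits are requested in practice, a minimal OGS batch script might look like the following. The job name and core count are hypothetical; `#$` lines are qsub directives (ordinary comments to bash), and `omp` is the single-node parallel environment described above:

```shell
#!/bin/bash -l
# Hypothetical example job script for the SCC batch system (OGS).
# "#$" lines are qsub directives; bash treats them as comments.

#$ -N my_job            # job name (hypothetical)
#$ -l h_rt=48:00:00     # request a 48-hour wall-clock limit
#$ -pe omp 8            # request 8 cores on one node (omp parallel environment)
#$ -j y                 # merge stderr into stdout

# NSLOTS is set by the scheduler to the number of granted cores.
echo "Running on $(hostname) with ${NSLOTS:-1} slots"
```

The script would be submitted with `qsub my_job.sh`; without the `-l h_rt` request, the 12-hour default wall-clock limit applies.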
Note that usage on the SCC is charged by wall clock time: if you request 8 cores for 12 hours and your code finishes in 3 hours, you are charged 24 core-hours (8 cores × 3 hours), even if your job did not make use of all the requested cores. The charge in SUs is computed by multiplying those core-hours by the SU factor of the node(s) used.
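To make the arithmetic concrete, the charge from the example above can be sketched as a short shell calculation (the numbers come from the paragraph; an SU factor of 1.0 corresponds to most shared nodes in the table):

```shell
# SU charge = cores requested x actual wall-clock hours x node SU factor.
cores=8
hours=3        # actual wall-clock run time, not the 12-hour requested limit
su_factor=1.0  # e.g. a shared broadwell or cascadelake node (see table)

awk -v c="$cores" -v h="$hours" -v f="$su_factor" \
    'BEGIN { printf "%.1f SU\n", c * h * f }'
# prints: 24.0 SU
```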