North Dakota State University
Center for Computationally Assisted Science and Technology

Hardware

CCAST provides large-scale, state-of-the-art computing resources to its users:

Cluster 2

Cluster 2 consists of 32 compute nodes and 1 login node. The total aggregated theoretical peak performance is ~2.8 TFLOPS. The aggregated memory and aggregated local storage are ~1.0 TB and ~5.3 TB, respectively. The nodes are interconnected through Gigabit Ethernet switches. The system details are shown below:

  • Compute node: dual-socket Intel E5430 2.66GHz with 32GB ECC DDR2 667MHz (2GB DIMMs), 160GB 7.2K RPM SATA HDD, dual Gigabit Ethernet ports.
  • Login node: dual-socket Intel E5405 2.00GHz with 4GB ECC DDR2 667MHz (2GB DIMMs), 2x 146GB 15K RPM SAS HDDs, 3x Gigabit Ethernet ports.
  • Processor: quad-core Intel E5430 @ 2.66GHz (compute nodes); quad-core Intel E5405 @ 2.00GHz (login node).
  • Number of compute nodes: 32
  • Number of processor cores: 256
  • Number of login nodes: 1
  • Global scratch storage system: IBM GPFS
  • Global scratch storage maximum bandwidth: 1.25GB/s
  • Interconnect: Gigabit Ethernet
  • Network switches: Cisco 3750G, Force10 S60
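For reference, the quoted peak figure follows directly from the specifications above. A back-of-the-envelope sketch, assuming the 4 double-precision FLOPs per cycle per core typical of these SSE-generation Xeons (one 128-bit add plus one 128-bit multiply per cycle); counting the login node's 8 slower cores as well lands on the quoted ~2.8 TFLOPS:

    # Back-of-the-envelope peak-performance estimate for Cluster 2.
    # Assumption: 4 double-precision FLOPs/cycle/core (128-bit SSE,
    # one add + one multiply per cycle).
    compute_cores = 32 * 2 * 4   # 32 nodes, dual-socket, quad-core = 256
    login_cores = 1 * 2 * 4      # 1 node, dual-socket, quad-core = 8
    peak_gflops = compute_cores * 2.66 * 4 + login_cores * 2.00 * 4
    print(f"~{peak_gflops / 1000:.2f} TFLOPS")  # ~2.79 TFLOPS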

Cluster 3

Cluster 3 consists of 128 compute nodes and 1 login node. The total aggregated theoretical peak performance is ~10.9 TFLOPS. The aggregated memory and aggregated local storage are ~6.1 TB and ~20.5 TB, respectively. The nodes are interconnected via both ultra-low-latency, high-speed Myri-10G and Gigabit Ethernet. The system details are shown below:

  • Compute node: dual-socket Intel X5550 2.67GHz with 48GB ECC DDR3 1333MHz (8GB DIMMs), 160GB 7.2K RPM SATA HDD, 1x Myri-10G port, dual Gigabit Ethernet ports.
  • Login node: dual-socket Intel X5550 2.67GHz with 48GB ECC DDR3 1333MHz (8GB DIMMs), 160GB 7.2K RPM SATA HDD, 1x Myri-10G port, dual Gigabit Ethernet ports.
  • Processor: quad-core Intel X5550 @ 2.67GHz.
  • Number of compute nodes: 128
  • Number of processor cores: 1024
  • Number of login nodes: 1
  • Global scratch storage system: IBM GPFS
  • Global scratch storage maximum bandwidth: 5.0GB/s
  • Interconnect: Myri-10G, Gigabit Ethernet
  • Network switches: Myri-10G, Cisco 3750G
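The same arithmetic as above reproduces this cluster's peak figure: 1024 cores x 2.67GHz x 4 FLOPs/cycle ≈ 10.9 TFLOPS. The aggregated memory and local-storage figures likewise follow from the per-node specifications; a minimal sketch (decimal units, which is what the quoted figures use):

    # Aggregate memory and local storage over Cluster 3's 128 compute nodes.
    nodes = 128
    mem_tb = nodes * 48 / 1000    # 48GB RAM per node  -> ~6.1 TB
    disk_tb = nodes * 160 / 1000  # 160GB HDD per node -> ~20.5 TB
    print(f"memory ~{mem_tb:.1f} TB, local storage ~{disk_tb:.1f} TB")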

Thunder Cluster

The Thunder cluster consists of 53 compute nodes and 2 login nodes. The total aggregated theoretical peak performance is ~40 TFLOPS. All nodes are interconnected with FDR InfiniBand at a 56Gbit/s transfer rate. The system details are shown below:

  • Compute nodes: 48x dual-socket Intel Xeon E5-2670 v2 "Ivy Bridge" (10 cores per socket) 2.5GHz with 64GB DDR3 RAM at 1866MHz; 14 of these 48 nodes are equipped with Intel Xeon Phi (aka MIC) accelerator cards, each with 60 x86 cores at 1.047GHz and 7.5GB of RAM.
  • Large-memory nodes: 2x quad-socket Intel Xeon E5-4640 "Sandy Bridge" (8 cores per socket) with 1TB RAM at 1600MHz and 3.3TB of local SSD scratch.
  • Development nodes: 2x Intel Ivy Bridge nodes with Intel Phi cards, plus 1x Sandy Bridge node.
  • InfiniBand adapter: 1x Mellanox ConnectX-3 FDR IB on all 53 compute nodes.
  • Login nodes: 2x dual-socket Intel Xeon E5-2670 v2 "Ivy Bridge" (10 cores per socket) 2.5GHz, 64GB DDR3 1866MHz memory, 2x 1TB SATA HDDs, 1x Mellanox ConnectX-3 FDR IB adapter, and dual 10GbE ports.
  • Number of compute nodes: 53
  • Number of processor cores: 1080
  • Number of login nodes: 2
  • Storage system: two-tier IBM GPFS filesystem with a policy-driven HSM. Tier 1: 120x 10K RPM SAS drives at about 4GB/s. Tier 2: 80x 7.2K RPM SATA drives at about 2.6GB/s. Tape storage: a TS3584 L53 frame with 8x LTO-6 tape drives and 274 tape slots, plus a TS3584 S54 frame with 1340 tape slots. Included are 258x LTO-6 tape cartridges (645TB), expandable up to 4PB.
  • Interconnect: FDR InfiniBand at 56Gbit/s
  • Network switch: Mellanox FDR IB switch
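The 1080-core total can be reconstructed from the node inventory above. This is a sketch under two assumptions, since the development-node configurations are not fully specified: the two Ivy Bridge development nodes match the regular 20-core compute nodes, and the Sandy Bridge development node is dual-socket with 8 cores per socket:

    # Reconstructing Thunder's 1080 processor cores from the inventory.
    ivy_compute  = 48 * 2 * 10  # dual-socket, 10 cores/socket = 960
    large_memory = 2 * 4 * 8    # quad-socket, 8 cores/socket = 64
    dev_ivy      = 2 * 2 * 10   # assumed same as compute nodes = 40
    dev_sandy    = 1 * 2 * 8    # assumed dual-socket 8-core = 16
    print(ivy_compute + large_memory + dev_ivy + dev_sandy)  # 1080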

Server 1

Server 1 is a GPU-accelerated server; its theoretical peak performance is ~4.2 TFLOPS. The system specifications are shown below:

  • Processor: quad-core Intel X5560 @ 2.80GHz.
  • Memory: 96GB ECC DDR3 1333MHz (8GB DIMMs)
  • Local storage: 2x 146GB 15K RPM SAS HDDs
  • GPU: NVIDIA Tesla S1070
  • Number of GPUs: 4
  • Global scratch storage system: IBM GPFS
  • Global scratch storage maximum bandwidth: 1.25GB/s
  • Interconnect: Gigabit Ethernet
  • Network switches: Cisco 3750G, Force10 S60
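The ~4.2 TFLOPS figure is a single-precision peak dominated by the GPUs: NVIDIA rates the Tesla S1070's four GT200 GPUs at roughly 4.1 TFLOPS single precision at the top 1.44GHz shader clock. A hedged sketch follows; the 3 FLOPs/cycle reflects GT200's theoretical MAD + MUL dual issue, and the host-CPU term assumes a dual-socket configuration, which the specifications above do not state explicitly:

    # Rough single-precision peak for Server 1 (Tesla S1070 + host CPUs).
    gpu_tflops = 4 * 240 * 1.44 * 3 / 1000  # 4 GPUs x 240 SP cores x 1.44GHz x 3 FLOPs/cycle
    cpu_gflops = 2 * 4 * 2.80 * 4           # assumed dual-socket quad-core X5560, 4 DP FLOPs/cycle
    print(f"~{gpu_tflops + cpu_gflops / 1000:.1f} TFLOPS")  # ~4.2 TFLOPS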