
Newest CCAST HPC Resource Released to NDSU Researchers

ANNOUNCEMENTS - POSTED ON DEC 11, 2014

We are pleased to announce the immediate availability of a new CCAST resource – the Thunder Cluster. Thunder represents a new generation of advanced computing infrastructure at NDSU. It is built on several recent, state-of-the-art HPC technologies: a tiered General Parallel File System (GPFS) coupled seamlessly with Hierarchical Storage Management (HSM) tape storage for transparent, policy-based data movement; Intel Ivy Bridge CPUs and Phi co-processors; and an FDR InfiniBand interconnect. Thunder also provides large-memory nodes for in-situ “big data” analysis, as well as a set of development nodes for code development, optimization, and benchmarking.

From an architectural standpoint, Thunder is designed for multiple levels of expansion and, as such, offers significant economies of scale. On the storage side, at the lowest level, individual researchers and educators who require more storage than a “standard” account provides can “buy into” the system by supplying their own media (disks and tapes). At the highest level, entire trays of disks, disk controllers, or even library frames can be added by individual researchers and educators – all at a fraction of the cost of a self-contained system with similar capabilities. On the compute side, individual researchers and educators can add their own combinations of CPUs, Intel Phis, or NVIDIA GPGPUs, according to what best suits their computational tasks, with nearly exclusive control of the associated queuing parameters. Because of Thunder’s thoroughly modular design, its eventual storage and processing capacity is limited in practice only by the datacenter’s power and space envelope.

With regard to investment, the system embodies a philosophy of data centricity: roughly 50% of the initial investment went to the filesystems and roughly 50% to analytics. CCAST believes that this approach best addresses the dominant bottleneck in today’s high-performance computing – I/O latency and throughput. Thunder’s “shared scratch” filesystem sustains read and write throughput of more than 6.3 GByte/s and 4.1 GByte/s, respectively, and uses multiple fast SAS drives for increased IOPS. Unlike other parallel file systems in use today, Thunder’s GPFS is optimized for a broad spectrum of file types and sizes – the data profile typical of the demands placed on a campus-wide HPC resource at a research-intensive institution like NDSU.
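To put those throughput figures in perspective, the following is an illustrative back-of-envelope calculation (not a measured benchmark; the 1 TB dataset size is a hypothetical example) of how long a sequential stream would take at the quoted sustained rates:

```python
# Back-of-envelope: time to stream a dataset at Thunder's quoted
# sustained GPFS throughput (>6.3 GByte/s read, >4.1 GByte/s write).
READ_GBPS = 6.3   # sustained sequential read, GByte/s
WRITE_GBPS = 4.1  # sustained sequential write, GByte/s

def stream_seconds(size_gb: float, rate_gbps: float) -> float:
    """Seconds to move size_gb gigabytes at rate_gbps GByte/s."""
    return size_gb / rate_gbps

dataset_gb = 1000.0  # a hypothetical 1 TB dataset
print(f"read:  {stream_seconds(dataset_gb, READ_GBPS) / 60:.1f} min")   # → 2.6 min
print(f"write: {stream_seconds(dataset_gb, WRITE_GBPS) / 60:.1f} min")  # → 4.1 min
```

In other words, at these sustained rates a terabyte-scale dataset can be read from scratch in under three minutes, which is what makes in-situ “big data” analysis on the large-memory nodes practical.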

More details about the Thunder Cluster are available at our website.

Twenty-five leading NDSU computational researchers contributed to the proposal to the National Science Foundation, which was submitted in response to a competitive solicitation. Professor Dinesh Katti from Civil Engineering serves as principal investigator. Co-principal investigators include Anne Denton from Computer Science, Samee Khan from Electrical and Computer Engineering, Martin Ossowski (CCAST Director), and Wenfang Sun from Chemistry and Biochemistry. NDSU faculty contributing to the proposal include Cristinel Ababei, Iskander Akhatov, Adnan Akyuz, Bret Chisholm, Xuefeng Chu, Doğan Çömez, Sivaguru Jayaraman, Kalpana Katti, Svetlana Kilina, Ghodrat Karami, Muhammet Erkan Köse, Andrei Kryjevski, Juan Li, Simone A. Ludwig, William Perrizo, Saeed Salem, Alexander Wagner, Yechun Wang, Changhui Yan, Mijia Yang, and Mariusz Ziejewski.

“The success of the proposal illustrates the importance of computational science as a unifying driver for researchers across the university,” said Dinesh Katti, principal investigator for the Thunder project. “The rapid growth of computational power, along with important developments in computationally driven science and engineering, has aided, and will continue to aid, major discoveries in a wide variety of fields.”

“These facilities will allow researchers access to additional state-of-the-art research computing resources, where ‘big data’ analytics are transparently coupled to high-performance modeling and simulation environments,” said Martin Ossowski, CCAST Director. “What we are really excited about is that the system is designed to expand as NDSU’s computational needs grow, by using what’s called a ‘resort condominium model’ where individual researchers and research groups will be able to add their own hardware modules to Thunder Cluster, resulting in unprecedented economies of scale.”

With additional support, gratefully acknowledged, from the NDSU Office of the Provost and the U.S. Department of Energy, the new system is now available for use by NDSU researchers.

Happy computing!


CCAST Support
Phone: 701.231.5184
Physical/delivery address: 1805 NDSU Research Park Drive, Fargo, ND 58102
Mailing address: P.O. Box 6050, Dept. 4100, Fargo, ND 58108-6050