High Performance Computing
The High Performance Computing (HPC) Cluster is the core of our computational infrastructure. With 16,328 cores across 776 nodes, the HPC Cluster provides powerful, scalable resources for running large multi-threaded and distributed parallel computations.
What is the HPC?
The High Performance Computing Cluster at FSU is a tightly integrated system of uniform servers connected by a fast InfiniBand network and designed to run long, compute-intensive programs. This uniformity and integration make the system extremely well suited for workloads that would not scale on ordinary computers because of memory requirements or CPU limits.
What is it used for?
The HPC is used for long-running jobs that require large compute resources, such as many CPUs or large amounts of memory. To allow many users to run programs at the same time, HPC systems use non-interactive batch jobs that are scheduled on available resources. In a batch system, users describe the workflow of their program, and once a job is submitted, it runs independently of any user input until it finishes. Jobs can be monitored, but not interacted with.
Compute jobs running on the HPC can operate in parallel using popular frameworks like OpenMP and MPI.
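For instance, a shared-memory job might use OpenMP to spread a loop over the cores of a single node. The following is a minimal illustrative sketch; the file name and compiler invocation below are assumptions, and the exact compilers and modules available depend on the cluster environment:

```c
#include <stdio.h>
#include <omp.h>

/* Sum 1..N in parallel on a single node. OpenMP splits the loop
 * iterations among the available threads, and reduction(+:sum)
 * safely combines the per-thread partial sums. */
int main(void) {
    const long long N = 100000000LL;
    long long sum = 0;

    #pragma omp parallel for reduction(+:sum)
    for (long long i = 1; i <= N; i++) {
        sum += i;
    }

    printf("sum = %lld using up to %d threads\n", sum, omp_get_max_threads());
    return 0;
}
```

Compiled with a GCC-style toolchain (e.g., gcc -fopenmp sum.c -o sum) and submitted through the batch scheduler, the program uses whatever cores the job was allocated and runs to completion without further user input.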
Many users write and/or compile their own software to run on the HPC, and we provide a number of tools and libraries to support this. Other users run jobs in general-purpose applications, such as MATLAB.
Who has access?
Access to the HPC is available to all FSU faculty, and to students and staff with a faculty sponsor. Many faculty have invested in the HPC by purchasing nodes, which gives them priority access to resources.
Popular Technologies
The HPC can run many different types of jobs. Some popular, new, or noteworthy platforms and technologies include the following:
- Open MPI - Open MPI is an open-source implementation of the Message Passing Interface (MPI) model of parallel computing for distributed systems, with a wide range of powerful features (see the sketch after this list).
- Hadoop - The HPC now supports creating and compiling Hadoop jobs via our Slurm job scheduler. This means that you can take advantage of the entire HPC cluster for distributed Hadoop jobs written in Java.
- Python - The HPC provides a robust Python installation, which is increasingly used in computational science. We support a number of Python utilities, including tools for compiling Python code to C (e.g., Cython) and for working with Python visually.
- MATLAB - The HPC supports distributed MATLAB jobs. In addition, you can compile MATLAB code to C and run it on the HPC for higher performance and fewer license restrictions.
- GPUs and CUDA - The HPC cluster includes several GPU nodes equipped with NVIDIA GeForce GTX 1080 Ti cards.
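As noted in the Open MPI item above, an MPI program runs as a set of cooperating processes (ranks) that may span many nodes, communicating over the InfiniBand fabric. Below is a minimal hello-world sketch in C; mpicc and mpirun are the conventional Open MPI build and launch tools, though exact module names and launch commands (e.g., srun under Slurm) vary by cluster:

```c
#include <stdio.h>
#include <mpi.h>

/* Each MPI process (rank) reports its identity. Ranks may run on
 * different nodes and communicate over the cluster network. */
int main(int argc, char **argv) {
    int rank, size;

    MPI_Init(&argc, &argv);                 /* start the MPI runtime */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's id */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total process count */

    printf("Hello from rank %d of %d\n", rank, size);

    MPI_Finalize();                         /* shut down cleanly */
    return 0;
}
```

Built with mpicc hello.c -o hello and launched with, for example, mpirun -n 4 ./hello, each rank prints its line independently; under a batch scheduler, the number of ranks is taken from the resources requested by the job.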