The High Performance Computing (HPC) Cluster comprises computing resources well suited for distributed-memory parallel computations. The FSU HPC Cluster consists of individual servers linked by high-performance networks. A job scheduler controls access and priority to the parts of the cluster owned by FSU faculty members. Software written with the Message Passing Interface (MPI) can run across sets of processor cores spanning the cluster, or users can submit any number of independent jobs to run on the cluster.
The HPC provides 403 compute nodes and 6,464 CPU cores to promote the advancement of scientific research at Florida State University. Jobs are managed by the Moab and TORQUE scheduling software.
All faculty and students are eligible to use the HPC at any time through our general access compute nodes. Research groups with compute needs beyond the general access offerings may purchase dedicated nodes, which are configured and managed by the Research Computing staff.
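As a sketch of how work reaches the Moab/TORQUE scheduler described above, a minimal batch script for an MPI job might look like the following. The job name, resource requests, and executable name are illustrative placeholders, not actual FSU queue settings:

```shell
#!/bin/bash
# Hypothetical TORQUE/Moab submission script for an MPI job.
# All names and resource values below are illustrative only.
#PBS -N mpi_example          # job name
#PBS -l nodes=2:ppn=8        # request 2 nodes, 8 cores each (16 MPI ranks)
#PBS -l walltime=01:00:00    # wall-clock limit of one hour
#PBS -j oe                   # merge stdout and stderr into one output file

cd "$PBS_O_WORKDIR"          # run from the directory where qsub was invoked

# Launch one MPI rank per requested core.
mpirun -np 16 ./my_mpi_program
```

A script like this would be submitted with `qsub`, and its position in the queue can be checked with `qstat` or Moab's `showq`.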
| Specification | Value |
| --- | --- |
| Total Compute Nodes | 309 |
| MPI Interconnects (IB) | 40 & 56 Gbps (QDR & FDR) |
| Storage Interconnects (IP) | 10 Gbps |