Parallel Storage

RCC users have access to GPFS, a parallel storage system that provides fast I/O for our HPC and Spear systems. Data can be shared among research groups or kept in private home directories.

GPFS is a high-performance storage system that supports operations on our HPC and Spear systems. If you are logged into HPC or Spear, you are using GPFS. The GPFS storage system houses data for both research groups and private home directories. Individuals, departments, or research groups can purchase additional storage space and manage shared group access to files and folders.
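As a rough illustration of what shared group access looks like on a POSIX filesystem such as GPFS, the sketch below grants a group read/write access to a shared project directory and sets the setgid bit so new files inherit the group. The path and group name are hypothetical, and actual group management on RCC systems may use different tools and conventions.

    import grp
    import os
    import stat

    # Hypothetical shared directory and POSIX group; adjust for your site.
    SHARED_DIR = "/gpfs/research/my_group/shared"
    GROUP_NAME = "my_group"

    def make_group_shared(path, group):
        """Give a POSIX group read/write access to a directory tree.

        Sets the setgid bit on directories so files created later
        inherit the group, a common pattern for shared project space.
        """
        gid = grp.getgrnam(group).gr_gid
        for dirpath, dirnames, filenames in os.walk(path):
            os.chown(dirpath, -1, gid)  # -1 keeps the current owner
            os.chmod(dirpath, stat.S_IRWXU | stat.S_IRWXG | stat.S_ISGID)
            for name in filenames:
                fpath = os.path.join(dirpath, name)
                os.chown(fpath, -1, gid)
                os.chmod(fpath, stat.S_IRUSR | stat.S_IWUSR |
                                stat.S_IRGRP | stat.S_IWGRP)

    if __name__ == "__main__":
        make_group_shared(SHARED_DIR, GROUP_NAME)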

Who has access?

All RCC users are entitled to 150 GB of storage space on GPFS. This space is allocated when your RCC account is set up. We also offer a large, general-purpose scratch volume for job I/O on a per-request basis. If you are interested in gaining access to scratch space, send us a support request.
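As a rough illustration, the sketch below estimates how much of a 150 GB allocation a home directory is using by summing apparent file sizes. The quota value is taken from the text above, but the accounting is an assumption for the example; GPFS tracks quota at the filesystem level (for example, by allocated blocks), so its figures may differ from this estimate.

    import os

    QUOTA_BYTES = 150 * 1024**3       # the 150 GB home allocation
    HOME = os.path.expanduser("~")    # assumes your home lives on GPFS

    def directory_usage(path):
        """Sum apparent file sizes under a directory tree."""
        total = 0
        for dirpath, dirnames, filenames in os.walk(path):
            for name in filenames:
                fpath = os.path.join(dirpath, name)
                try:
                    total += os.lstat(fpath).st_size  # do not follow symlinks
                except OSError:
                    pass  # file vanished or is unreadable; skip it
        return total

    used = directory_usage(HOME)
    print(f"Used {used / 1024**3:.1f} GB of "
          f"{QUOTA_BYTES / 1024**3:.0f} GB "
          f"({100 * used / QUOTA_BYTES:.1f}%)")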

For research groups that need more parallel storage space, we offer options for purchasing storage, either on demand or on a five-year basis.

Panasas and Lustre Deprecated

In March 2018, we began phasing out our old parallel storage systems, Panasas and Lustre.

Panasas was the storage system for our High Performance Computing Cluster. The Panasas filesystem was connected to the HPC via a high-throughput InfiniBand network. The service contract for Panasas expired in 2017, and we replaced the system with GPFS.

Lustre was an alternative parallel storage system used primarily on our Spear system. It was not as performant as Panasas, but it was much more flexible. We used it to connect data collection instruments, such as the FEI Titan Krios Electron Microscope at the FSU Institute for Molecular Biophysics. We have since consolidated the functions of the Spear system, including integration with data collection instruments, into GPFS.