News Items

  • Partially Resolved: SSH Issues with Spear and Owner-Based Login Nodes

    Yesterday evening, we began experiencing issues with SSH on our Spear systems and owner-based login nodes. SSH itself is functional, but ECDSA host keys are not working correctly.

  • NoleStor NFS Mount Issues Resolved

    UPDATE (2015 Jun 19): All issues have been resolved. You may need to unmount and remount the NFS mounts on your servers. If you need further assistance, please let us know: support@rcc.fsu.edu. If you use NoleStor, you may notice a service disruption today. This is due to an issue on our …
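    Remounting by hand looks roughly like the following sketch. The mount point and server path are hypothetical placeholders; check /etc/fstab or your `mount` output for the actual values on your server.

```shell
# Hypothetical mount point and server path for illustration only;
# find your real values with `mount -t nfs` or in /etc/fstab.
#
#   sudo umount /mnt/nolestor
#   sudo mount -t nfs nolestor.rcc.fsu.edu:/export/nolestor /mnt/nolestor
#
# If the unmount reports "target is busy", a lazy unmount
# (`sudo umount -l /mnt/nolestor`) detaches it once open file
# handles are released.
```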

  • ECDSA Keys Updated

    Recently, we have been having a problem with the ECDSA keys for our Spear nodes and login nodes. If you have been using a recent version of Debian, Ubuntu, CentOS, or Red Hat, you may have seen a message similar to the following when trying to connect to our services: @@@@@@@@@@@@@@@@@@@@@@@@@@…
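    If you see that warning, the usual fix is to remove the stale cached key and accept the new one on your next connection. A minimal sketch, assuming a hypothetical hostname (substitute the Spear or login node you actually connect to):

```shell
# The hostname below is a hypothetical example; use your actual host.
host=spear-login.rcc.fsu.edu

# Remove any stale cached key for that host from your known_hosts file
# (a no-op if nothing is cached yet):
ssh-keygen -R "$host" -f ~/.ssh/known_hosts 2>/dev/null || true

# On your next `ssh` to that host, you will be prompted to verify and
# accept the new ECDSA host key.
```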

  • System Upgrades (including Slurm) to occur July 13 - 19

    We have scheduled our Slurm and RHEL7 upgrade to occur on Monday, July 13 through Sunday, July 19. During this time, our HPC and Spear systems will be unavailable.

  • MOAB to Slurm Migration Guide Published

    We have published our MOAB to Slurm migration guide! This guide provides an overview of how to migrate your MOAB scripts to use the new Slurm scheduler. Check it out: https://rcc.fsu.edu/docs/moab-slurm-migration-guide
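    For a rough sense of what the migration involves, the most common directive translations follow the standard Torque-to-Slurm equivalents. The job name, sizes, and queue name below are hypothetical examples; see the guide for the authoritative mapping.

```shell
# MOAB/Torque directive            # Slurm equivalent
#PBS -N myjob                      #SBATCH --job-name=myjob
#PBS -l nodes=2:ppn=8              #SBATCH --nodes=2 --ntasks-per-node=8
#PBS -l walltime=04:00:00          #SBATCH --time=04:00:00
#PBS -q backfill                   #SBATCH --partition=backfill
#
# Submission and monitoring commands change as well:
#   qsub job.sh   ->  sbatch job.sh
#   qstat         ->  squeue
```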

  • Planned Spear Outage - Friday, May 1: 5-8pm

    We are turning the Spear system off from 5-8pm on Friday, May 1 for a planned power outage.

  • Lustre Issues

    UPDATE Apr 21 - We are still working on our Lustre issues. Spear and Lustre remain unavailable. Yesterday (Sunday, April 19), the data center in Dirac suffered a brief power outage. Most systems are back online now, but there are lingering issues with the Lustre filesystem. This means that …

  • Scheduler Changes coming this Summer

    This summer, we are going to replace the scheduling and job management software on the High Performance Computing Cluster with a new package called Slurm. Slurm will replace the current MOAB/Torque workload manager that we have been using for the past eight years. This will affect all HPC users.

  • Journal publication by RCC staff member

    An article by Bin Chen, a member of our applications group, has been accepted for publication in the journal Astrophysical Journal Supplement Series. The article, titled "Algorithms And Programs For Strong Gravitational Lensing In Kerr Space-time Including Polarization", used the HPC, MATLAB and …

  • Benchmarking MATLAB and Python at RCC

    HPC jobs are traditionally compiled and run in low-level languages such as C and Fortran. The reason for this is speed: parallel libraries such as OpenMP and MPI, or GPGPU accelerators, enable code to run faster, in parallel, across more hardware. Conversely, higher-level interpreted languages like Python …
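    A toy illustration of the interpreter overhead the benchmarks measure: summing the same range with a pure-Python loop versus the built-in `sum`, whose loop runs in C inside the interpreter. This is a minimal sketch of the kind of comparison, not one of RCC's actual benchmarks.

```python
import time

def interpreted_sum(n):
    # Pure-Python loop: the interpreter dispatches bytecode per iteration.
    total = 0
    for i in range(n):
        total += i
    return total

n = 2_000_000

start = time.perf_counter()
loop_result = interpreted_sum(n)
loop_time = time.perf_counter() - start

start = time.perf_counter()
builtin_result = sum(range(n))  # same loop, but executed in C
builtin_time = time.perf_counter() - start

# Both paths compute the closed-form sum n*(n-1)/2.
assert loop_result == builtin_result == n * (n - 1) // 2
print(f"python loop: {loop_time:.3f}s  builtin sum: {builtin_time:.3f}s")
```

    Vectorized libraries like NumPy, or MATLAB's built-in matrix operations, exploit the same idea at scale: the heavy inner loops run in compiled code rather than the interpreter.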