The Condor service was consolidated into the High Performance Computing Cluster on October 30, 2015.
The Condor cluster was a free service that we provided from 2012 through 2015. It consisted of donated hardware and older hardware that we retired from the HPC Cluster, and it ran the HTCondor scheduling software.
In October 2015, we decided to consolidate the hardware in the Condor cluster into the High Performance Computing (HPC) cluster. There were several advantages to doing this:
In July 2015, we replaced the MOAB/Torque HPC scheduling software with Slurm. Slurm brought many new features to the HPC and, in the context of our implementation, has feature parity with Condor. As a result:

- HPC staff can more effectively manage a single scheduling and cluster environment, rather than two, while providing the same quality of service to RCC users;
- RCC users who used both systems no longer have to work around the complexities of submitting and managing jobs in two different environments. You can now focus on getting your work done in one: Slurm.

The resources that made up the Condor cluster are still available and remain free for all RCC users. We will continue to maintain the service in the form of an HPC partition named condor. Any job you would previously have submitted to the Condor cluster should now be submitted to the Slurm cluster using the condor partition.
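For example, a job that once ran on the Condor cluster can be submitted to the condor partition with a short Slurm batch script along these lines (the job name, time limit, and program are illustrative placeholders, not part of the original announcement):

```shell
#!/bin/bash
#SBATCH --job-name=my_condor_job   # illustrative job name
#SBATCH --partition=condor         # target the free condor partition
#SBATCH --ntasks=1                 # a single task on a single node
#SBATCH --time=7-00:00:00          # requested wall time (example value)

# Replace with your actual workload
./my_program input.dat
```

Save the script (for example as myjob.sh), submit it with "sbatch myjob.sh", and monitor it with "squeue -u $USER".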
The new condor partition has the following characteristics:
- All RCC users have access to this partition; this remains a free, permanent resource;
- Maximum job execution time is 90 days;
- MPI and OpenMP are available, but only for single-node jobs; these nodes are not connected by an InfiniBand (IB) fabric;
- Both Lustre and Panasas file systems are mounted on compute nodes in this partition;
- This partition is configured with 'fair-share' scheduling, similar to how Condor was managed: the more jobs you submit in a given time period, the lower your job priority becomes;
- The hardware available in this partition consists of the oldest, end-of-life HPC nodes and various other systems that were donated to the RCC over the past few years.
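Because jobs in this partition are limited to a single node, multi-threaded (OpenMP) workloads are a natural fit. A minimal sketch of an OpenMP submission script, with an illustrative thread count and program name, might look like:

```shell
#!/bin/bash
#SBATCH --partition=condor
#SBATCH --nodes=1                  # single-node jobs only in this partition
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=8          # illustrative thread count
#SBATCH --time=1-00:00:00          # example wall time, well under the 90-day limit

# Let OpenMP use the CPUs Slurm allocated to this task
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
./my_openmp_program                # illustrative program name
```

MPI programs can run the same way, provided all ranks fit on one node.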
For full details about the available hardware, log in to the HPC and run
rcctool my:partitions condor.