Using GPUs on the HPC Cluster

The HPC Cluster includes a specialty node, hpc-15-36, which contains two Tesla M2050 GPUs available for GPU parallel programming using CUDA. The CUDA version on this node is 6.0, and it includes the CUDA Toolkit and Computing SDK.

Many software packages are already optimized to take advantage of GPU accelerators, including MATLAB and Gaussian. A more comprehensive list of supported software is available from the NVIDIA website. Note: this list is provided by the GPU vendor, NVIDIA, and not all of the packages listed are available on the RCC. If you see one you are interested in that is not currently available, please let us know.

Submitting GPU Jobs

To submit a job to the GPU nodes, simply submit it to the general access gpu_q partition. An example submit script is below:

#!/bin/bash
#SBATCH --job-name=my_gpu_job
#SBATCH --nodes=1
#SBATCH -p gpu_q
#SBATCH --time=00:00:10

module load cuda
./my_gpu_program    # replace with the path to your own executable
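
Assuming the script above is saved as my_gpu_job.sh (any filename will do), submit it from a login node with sbatch:

$ sbatch my_gpu_job.sh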

Compiling GPU Software Using CUDA

You can compile your CUDA software on the HPC login nodes and then submit it to the gpu_q to be run on the CUDA processing nodes. To compile your CUDA software, log in to the HPC and load the CUDA module with the command:

$ module load cuda

This gives you access to all of the available resources for compiling and profiling CUDA programs, as well as to the uncompiled example programs included in the CUDA Computing SDK.
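
As a quick test of the toolchain, the short program below adds two vectors on the GPU. This is only a minimal sketch: the filename vector_add.cu and every name in it are illustrative, and the -arch=sm_20 flag in the compile command targets the compute capability 2.0 of the Tesla M2050 cards.

// vector_add.cu - minimal CUDA example (illustrative only)
#include <cstdio>
#include <cstdlib>

// Kernel: each thread adds one pair of elements.
__global__ void vector_add(const float *a, const float *b, float *c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        c[i] = a[i] + b[i];
}

int main()
{
    const int n = 1024;
    const size_t bytes = n * sizeof(float);

    // Allocate and fill host arrays.
    float *h_a = (float *)malloc(bytes);
    float *h_b = (float *)malloc(bytes);
    float *h_c = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { h_a[i] = (float)i; h_b[i] = 2.0f * i; }

    // Allocate device buffers.
    float *d_a, *d_b, *d_c;
    cudaMalloc((void **)&d_a, bytes);
    cudaMalloc((void **)&d_b, bytes);
    cudaMalloc((void **)&d_c, bytes);

    // Copy inputs to the GPU, launch the kernel, and copy the result back.
    cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_b, h_b, bytes, cudaMemcpyHostToDevice);
    vector_add<<<(n + 255) / 256, 256>>>(d_a, d_b, d_c, n);
    cudaMemcpy(h_c, d_c, bytes, cudaMemcpyDeviceToHost);

    printf("c[0] = %.1f, c[%d] = %.1f\n", h_c[0], n - 1, h_c[n - 1]);

    cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
    free(h_a); free(h_b); free(h_c);
    return 0;
}

With the cuda module loaded, it can be compiled on a login node with nvcc:

$ nvcc -arch=sm_20 -o vector_add vector_add.cu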

The nvcc compiler and associated profiling tools are located in the directory:

/usr/local/cuda-5.5/bin

and the example programs are located in:

/usr/local/cuda-5.5/samples

Compiling Sample Code

To compile a sample code package, copy the desired sample directory to your home directory. For example:

$ cp -r /usr/local/cuda-5.5/samples/1_Utilities/deviceQuery ~

When compiling, use the Makefile included with the samples:

$ cd ~/deviceQuery
$ make

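The compiled sample must be run on the GPU node rather than on a login node, so it is submitted through the gpu_q like any other GPU job. A minimal sketch of such a submit script, assuming the deviceQuery binary was built in ~/deviceQuery as above:

#!/bin/bash
#SBATCH --job-name=deviceQuery
#SBATCH --nodes=1
#SBATCH -p gpu_q
#SBATCH --time=00:05:00

module load cuda
~/deviceQuery/deviceQuery

deviceQuery prints the properties of each GPU it finds, which is a convenient way to confirm that both M2050 cards are visible to your job.
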
Learning More about GPU Programming

If you are new to GPU programming, NVIDIA provides a few tools for getting started with CUDA on its website.

More advanced users should refer to the CUDA Best Practices Guide.