The HPC now supports GPU jobs. Several processing nodes are equipped with NVIDIA GeForce GTX 1080 Ti GPU cards. Currently, the only general access (non-owner) partition that supports GPU jobs is the backfill partition, which means any GPU job you submit must run for no longer than four hours.
Submitting GPU Jobs
If you wish to submit your job to a node that supports GPU processing, add the following line to your Slurm submit script:
#SBATCH --gres=gpu:[1-4] # <-- Choose between 1 and 4 GPU cards to use.
Each GPU node contains four GPU cards. Specify the number of cards per node you wish to use after gpu: in the --gres option. For example, if your job uses all four GPU cards, specify 4:
#SBATCH --gres=gpu:4 # This job will reserve all four GPU cards.
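Inside a running job, Slurm (on typical GPU node configurations) exposes the cards you were granted through the CUDA_VISIBLE_DEVICES environment variable. A small sketch of counting them; the variable only exists inside a job, so an example value is used here:

```shell
# In a real job, Slurm sets CUDA_VISIBLE_DEVICES to the indices of the
# cards granted to you, e.g. "0,1,2,3" for a job that requested
# --gres=gpu:4. Example value used here; inside a job you would read
# devices="${CUDA_VISIBLE_DEVICES:-}" instead.
devices="0,1,2,3"

# Count the cards: number of commas in the list, plus one.
ngpus=$(( $(printf '%s' "$devices" | tr -cd ',' | wc -c) + 1 ))
echo "GPUs visible to this job: $ngpus"
```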
The following HPC job will run on a GPU node and simply print information about the available GPU cards:
#!/bin/bash
#SBATCH --job-name="gpu_test"
#SBATCH -n 1
#SBATCH --mail-type="ALL"
#SBATCH -t 1:00

# Here is the magic line to ensure we're running on a node with GPUs
#SBATCH --gres=gpu:1

# If your owner-based partition has access to GPU nodes, you can use that.
# For general access users, GPU jobs will run only on backfill.
#SBATCH -p backfill

# Not strictly necessary for this example, but most
# folks will want to load the CUDA module for GPU jobs
module load cuda

# Print out GPU information
/usr/bin/nvidia-smi -L
Your job output should look something like this:
GPU 0: GeForce GTX 1080 Ti (UUID: GPU-dc0def06-a6a8-e652-a626-967ca59ea0cd)
GPU 1: GeForce GTX 1080 Ti (UUID: GPU-cdb555e0-dce2-52c6-1029-361375ed79ce)
GPU 2: GeForce GTX 1080 Ti (UUID: GPU-a8606fee-ac2b-4763-fb0d-2f5c9f75244a)
GPU 3: GeForce GTX 1080 Ti (UUID: GPU-433b5155-9a11-055a-5970-770545ae6264)
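If a script needs the card count rather than the full listing, the nvidia-smi -L output can simply be counted. A minimal sketch, fed a captured two-line sample here because nvidia-smi is only present on the GPU nodes themselves:

```shell
# Each card appears on its own "GPU N: ..." line, so counting the lines
# that start with "GPU" gives the number of cards. Sample output used
# here stands in for a real listing (UUIDs elided).
sample='GPU 0: GeForce GTX 1080 Ti (UUID: GPU-...)
GPU 1: GeForce GTX 1080 Ti (UUID: GPU-...)'
ngpus=$(printf '%s\n' "$sample" | grep -c '^GPU')
echo "$ngpus"
```

On a GPU node the same count would come from piping the live listing: /usr/bin/nvidia-smi -L | grep -c '^GPU'.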
For more examples and information about GPU programming, refer to our CUDA software documentation.