CUDA is a parallel computing platform and application programming interface (API) for C, C++, and Fortran that allows software to use graphics processing units (GPUs) for general-purpose processing.
In order to use CUDA, you must first load the appropriate
environment module:
module load cuda
Warning
Due to disk space constraints, NVIDIA CUDA libraries are available only on the login nodes and GPU nodes.
They are not available on general-purpose compute nodes. Be sure to specify the Slurm --gres=gpu:[1-4] option when
submitting jobs to the cluster.
#!/bin/bash
#SBATCH --nodes 1
#SBATCH --ntasks 1
#SBATCH --account genacc_q
#SBATCH --gres=gpu:1   # Ensure your job is scheduled on a node with GPU resources

# Load the CUDA libraries into the environment
module load cuda

# Execute your CUDA code
srun -n 1 ./my_cuda_code < input.dat > output.txt
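The script above runs a compiled CUDA executable; my_cuda_code stands in for your own program. As a minimal sketch of what such a program might contain (the file name and kernel are hypothetical, not part of the cluster software), a vector-addition example looks like this:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Hypothetical kernel: element-wise addition of two vectors.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);

    float *a, *b, *c;
    // Unified memory keeps the sketch short; explicit
    // cudaMalloc/cudaMemcpy also works.
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(a, b, c, n);
    cudaDeviceSynchronize();

    printf("c[0] = %f\n", c[0]);
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

After loading the module, compile with nvcc (e.g. nvcc -o my_cuda_code my_cuda_code.cu) before submitting the job.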
device id 0, name GeForce GTX 1080 Ti
number of multi-processors = 28
Total constant memory: 64.00 KB
Shared memory per block: 48.00 KB
Total registers per block: 65536
Maximum threads per block: 1024
Maximum threads per multi-processor: 2048
Maximum number of warps per multi-processor: 64
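Device listings like the one above can be generated with the CUDA runtime's cudaGetDeviceProperties call. A sketch of such a query program (the source file name is hypothetical; the struct fields are standard cudaDeviceProp members):

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);
    for (int dev = 0; dev < count; ++dev) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, dev);
        printf("device id %d, name %s\n", dev, prop.name);
        printf("number of multi-processors = %d\n", prop.multiProcessorCount);
        printf("Total constant memory: %.2f KB\n", prop.totalConstMem / 1024.0);
        printf("Shared memory per block: %.2f KB\n", prop.sharedMemPerBlock / 1024.0);
        printf("Total registers per block: %d\n", prop.regsPerBlock);
        printf("Maximum threads per block: %d\n", prop.maxThreadsPerBlock);
        printf("Maximum threads per multi-processor: %d\n", prop.maxThreadsPerMultiProcessor);
        // Warps per SM = threads per SM divided by the warp size (32 on current hardware).
        printf("Maximum number of warps per multi-processor: %d\n",
               prop.maxThreadsPerMultiProcessor / prop.warpSize);
    }
    return 0;
}
```

Run it inside a GPU job (for example via the srun line in the batch script above) so a GPU is actually visible to the program.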