Container Reference Guide

Singularity allows you to run containers on the HPC.  Take complete control of your software runtime environment, and increase the portability of your workflow.  This document provides a reference for performing common tasks with containers at the RCC.  If you're new to containers, check out our introduction guide.

Create a Singularity image from a Docker image

The recommended way to create a Singularity image for execution on the HPC is to first create a Docker image, upload it to Docker Hub (or your private repo), and then import it onto the HPC.
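For example, building and publishing the Docker image on your own computer might look like the following.  The repository name 'youruser/cowsay' is only a placeholder; use your own Docker Hub account and image name:

# On your own computer, in the directory containing your Dockerfile:
$ docker build -t youruser/cowsay:latest .

# Log in to Docker Hub and push the image so the HPC can pull it later
$ docker login
$ docker push youruser/cowsay:latest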

You can convert any Docker image in Docker Hub, or in your own Docker image repository, into a Singularity image with a few commands, all of which are available on the HPC login nodes.

First, create a blank Singularity image to import the Docker image into:

# substitute your own image name for 'cowsay.img'; size is in MiB
$ singularity image.create --size 100 cowsay.img

Creating empty 100MiB image file: cowsay.img
Formatting image with ext3 file system
Image is done: cowsay.img

Next, instruct Singularity to download the Docker image and convert it into Singularity format:

# Use docker://[vendor]/imagename[:version] syntax
$ singularity build cowsay.img docker://dcshrum/singularity:cowsay

You now have a runnable Singularity image.  See below for how to execute it.
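If you would like to give the image a quick test before writing a submit script, you can also run it interactively on a login node.  The commands below assume the image behaves like the cowsay example described later in this guide (it writes its output to /results.txt inside the container):

# Bind a file in your home directory to the path the container writes to
$ touch ~/output.txt
$ singularity run --bind ~/output.txt:/results.txt ~/cowsay.img
$ cat ~/output.txt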

NOTE: It is possible to create Singularity images from scratch (without using Docker), but you will need to do so on your own computer.  The reason for this is that the HPC does not allow you to be root inside of your container.  Documentation is available on the Singularity website.
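As a rough sketch, a from-scratch build uses a Singularity definition file like the one below.  The package names and runscript are only illustrative; see the Singularity documentation for the full recipe format:

# cowsay.def -- an illustrative Singularity recipe (build this on your own computer)
Bootstrap: docker
From: centos:7

%post
    # Commands run inside the container at build time
    yum -y install epel-release
    yum -y install cowsay

%runscript
    # Command executed when the container is run
    exec cowsay "$@"

Then build the image with root privileges on your own workstation and copy it to the HPC:

$ sudo singularity build cowsay-scratch.img cowsay.def
$ scp cowsay-scratch.img <username>@hpc-login.rcc.fsu.edu:~/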

Submitting a job to the HPC using a Singularity Container

Once you have created or downloaded a Singularity image, you can submit a job to the HPC.

The following submit script (cowsay.sh) creates a job for the example cowsay.img we created in the previous example:

#!/bin/bash
#SBATCH --job-name="Test Singularity Job"
#SBATCH -n 1
#SBATCH -p quicktest
#SBATCH --mail-type="ALL"

# Since we bind the output to a file in our home directory (see below), we must ensure that
# it exists first.
touch ~/output.txt

# The --bind parameter will bind any file paths in your container to locations in your home
# or group directories on the host.  You can use this parameter multiple times for both
# input and output files.
singularity run --bind ~/output.txt:/results.txt ~/cowsay.img

Now, we can submit it as we would any other job:

$ sbatch cowsay.sh
Submitted batch job 234801

Standard output and error will be written as with any other Slurm job: either to the paths you specify in your job submit file, or to the default file path (slurm-[JOB_ID].out).
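For example, to send output and errors to specific files, you could add lines like the following to the submit script (the file names are just examples; %j expands to the job ID):

#SBATCH --output=cowsay-%j.out
#SBATCH --error=cowsay-%j.err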

NOTE: The cowsay example image writes output to a specific path inside of the container: /results.txt.  Thus, we must bind that path to a file in our home directory using the --bind parameter.  However, if the process inside your container writes to STDOUT or STDERR, Slurm will capture that output the normal way (by automatically writing it to files in your working directory).

Each Docker or Singularity image has its own input/output configuration, so you'll want to read the documentation for the image you are attempting to use.
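For example, if a container reads input from one path and writes results to another, you can bind both.  The image name and the container paths (/input.dat and /results.txt) below are hypothetical; check your image's documentation for the actual paths it uses:

# Bind an input file and an output file from your home directory into the container
$ singularity run \
    --bind ~/mydata.dat:/input.dat \
    --bind ~/output.txt:/results.txt \
    ~/your-image.img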

Using a Singularity Hub image

Another way to run Singularity on the HPC is to use pre-packaged Singularity images available at Singularity-hub.org.  This allows you to take advantage of existing Singularity images without having to build your own.

You can reference Singularity Hub containers directly in the singularity run command, but you must first pull the images on an HPC login node.  For example, if you wish to run the hello-world Singularity image, first run the following on the login node:

$ singularity pull shub://vsoch/singularity-hello-world

Once this is done, you can submit the following script to Slurm (hello.sh) in order to run the container:

#!/bin/bash
#SBATCH --job-name="Test Singularity Job"
#SBATCH -n 1
#SBATCH -p quicktest
#SBATCH --mail-type="ALL"

# This echo statement writes directly to the job's STDOUT, which Slurm captures in the
# job output file.  You could alternatively use `cat filename` to write the contents of a file.
echo "Hello There" 

# Since we have already pulled this container image using `singularity pull` on the login node,
# we can run it on HPC compute nodes.
singularity run shub://vsoch/singularity-hello-world

Then, submit it:

$ sbatch hello.sh
Submitted batch job 2753257

# wait a few seconds for the job to run...

$ more slurm-2753257.out
Hello There

This container will simply produce a Slurm .out file, since the output from the running container is written to STDOUT.

How to create and run an OpenMPI Singularity Container

A popular application of Singularity is to build a portable, reusable OpenMPI runtime inside of a Singularity Container.  By containerizing OpenMPI, you can build OpenMPI once and re-use it indefinitely regardless of kernel or environment changes on the host.

NOTE: Due to a quirk with the way MPI runs, the version of OpenMPI inside your container must match the version installed on the host.  This is the only constraint unique to MPI.  You can check the OpenMPI version currently installed on the HPC by running the command:

$ yum info openmpi | grep Version | head -n 1
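Once you have built your container (see the procedure below), you can confirm that the version inside it matches.  The path here assumes the /opt/hpc/gnu/openmpi install prefix used in the procedure:

$ singularity exec centos-mpi.img /opt/hpc/gnu/openmpi/bin/mpirun --version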

Procedure:

Run the following commands on your own workstation with Singularity installed:

# Create a base CentOS container
$ sudo singularity build --writable centos-mpi.img docker://openshift/base-centos7

# Expand it
$ sudo singularity expand centos-mpi.img
 
# We will install everything from within the container
$ sudo singularity shell --writable centos-mpi.img

# Update OS to the latest
$ yum update

# Copy the mpi tar ball from HPC
$ scp <username>@hpc-login.rcc.fsu.edu:/gpfs/research/software/singularity/openmpi.tar.bz2 .

# Extract it
$ tar -xjf openmpi.tar.bz2
$ cd openmpi-2.1.0rc3

# Create target location
$ mkdir -p /opt/hpc/gnu/openmpi

# Install OpenMPI
$ ./configure --prefix=/opt/hpc/gnu/openmpi
$ make
$ make install

# Exit the container shell, then copy the container to the HPC from your workstation
$ exit
$ scp centos-mpi.img <username>@hpc-login.rcc.fsu.edu:~/

On the HPC:

# Compile your code with mpicc FROM THE CONTAINER
[you@hpc-login] $ singularity exec centos-mpi.img /opt/hpc/gnu/openmpi/bin/mpicc -o hello hello.c
 
# ...and run it
[you@hpc-login] $ mpirun -np 4 singularity exec centos-mpi.img ./hello
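To run the same program as a batch job, a submit script along the following lines should work.  This is only a sketch; adjust the partition and task count for your own job, and make sure -n matches the number of ranks you pass to mpirun:

#!/bin/bash
#SBATCH --job-name="Singularity MPI Job"
#SBATCH -n 4
#SBATCH -p quicktest

# The host's mpirun launches one container instance per MPI rank; each rank runs
# the 'hello' binary we compiled with the container's mpicc.
mpirun -np 4 singularity exec centos-mpi.img ./hello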