ParaView is an open-source application for scalable parallel data processing and visualization, with an emphasis on distributed-memory implementations. It can rapidly build visualizations for data analysis using both qualitative and quantitative techniques. Data analysis and exploration in ParaView can be done either interactively in a 3D display or programmatically using its batch-processing capability. It is best suited for exploring and visualizing very large data files that are too large to fit on a single node. ParaView has three basic components: a data server for data processing, a render server for rendering, and a client for user interaction.
ParaView is installed on HPC under the gnu-openmpi compiler toolchain; loading that module enables you to use ParaView. On HPC, ParaView supports a few different operating modes.
Stand-alone Mode (Serial)
The user runs the application just like any other application, with all data residing on the same node where all processing is done. Note that using the ParaView GUI on login nodes is NOT recommended. However, the Python interface, pvpython, is available for use on HPC login nodes.
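As a minimal illustration of the stand-alone pvpython interface, a script like the following builds a simple pipeline and saves a screenshot. The source parameters and output filename here are only placeholder examples:

```python
# Minimal pvpython sketch: build a small pipeline and save a screenshot.
# Run with: pvpython this_script.py (after loading the gnu-openmpi module)
from paraview.simple import *  # ParaView's Python scripting interface

sphere = Sphere(ThetaResolution=32, PhiResolution=32)  # simple test source
Show(sphere)                   # add the source to the render view
Render()                       # render the scene (off-screen is fine)
SaveScreenshot('sphere.png')   # placeholder output filename
```

Because this runs serially on one node, it is only appropriate for small data and quick tests.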
Combined Server Mode
The user runs a client (the ParaView GUI) on their OWN computer and uses SSH tunneling, as described below, to connect to a server running on separate remote HPC compute nodes. The data and render servers can run on every compute node via MPI. The client can also be used to monitor a job in real time.
The first step is to submit the server job. This is accomplished by submitting an interactive job and then, once it is running, connecting to it from the ParaView GUI or from pvpython on a login node. An example job submission command would be:
srun --pty -t10:00 -n 2 -p quicktest /bin/bash
This requests two cores in the quicktest queue, which has a maximum runtime of 10 minutes. After the job starts running, you can start pvserver on the compute cores allocated to this job using:
module load gnu-openmpi
srun -n 2 pvserver
The pvserver will start and you will receive a message similar to the following:
Waiting for client...
Connection URL: cs://hpc-tc-2.local:11111
Accepting connection(s): hpc-tc-2.local:11111
Now, launch the ParaView GUI on YOUR computer. If you have not already done so, you can download ParaView 4.4 from the documentation. Make sure to select the proper ParaView version and operating system (for your computer). In a separate terminal, create an SSH tunnel using the command:
ssh -X -N -L localhost:11111:<main server node>:11111 <RCC id>@hpc-login-38.rcc.fsu.edu
Make sure to replace <main server node> with the actual node running the server (hpc-tc-2.local in the example above). You may have to enter your password if you have not configured passwordless login to the HPC login nodes. Then, launch the ParaView GUI, click File > Connect > Add Server from the menu, and enter the following in the window that pops up:
Name: RCC
Server Type: Client/Server
Host: localhost
Port: 11111
Click Configure, select Manual as the Startup Type, and click Save. Then select the RCC configuration you created above and click Connect. You will now see "Client connected" in the server terminal, and the Pipeline Browser in the ParaView client will change from builtin: to cs://hpc-tc-2.local:11111. You are now ready to run ParaView in combined server mode, where all processing happens on HPC compute nodes (servers).
Users can also use pvpython to connect to a remote job running pvserver; please refer to the documentation for further information.
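With the SSH tunnel from the steps above in place, a pvpython session can attach to the remote pvserver roughly as sketched below. The tunnel endpoint and port match the example above; the Wavelet source is just an illustrative built-in:

```python
# Sketch: connect pvpython to a remote pvserver through an SSH tunnel.
from paraview.simple import *

# The tunnel forwards localhost:11111 to the compute node running pvserver.
Connect('localhost', 11111)

# Any pipeline built now executes on the HPC compute nodes, not locally.
wavelet = Wavelet()   # a built-in synthetic data source, for illustration
Show(wavelet)
Render()

Disconnect()          # release the server connection when finished
```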
Batch Mode
This is the most suitable mode for running ParaView with a large number of CPU cores on HPC. The only difference between combined server mode and batch mode is that no client is connected to the data and render servers. Instead, ParaView reads a Python script and executes the commands it contains. A sample job submission script is as follows.
#!/bin/bash
#SBATCH --partition=genacc_q
#SBATCH -n 6
module load gnu-openmpi
srun -n 6 pvbatch my_pv_script.py
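A sketch of what my_pv_script.py might contain is shown below. The input filename, contour variable, and isovalue are hypothetical; adapt them to your own data:

```python
# Hypothetical my_pv_script.py for pvbatch: contour a dataset, save an image.
# pvbatch runs this in parallel across the cores requested from SLURM.
from paraview.simple import *

reader = OpenDataFile('my_data.vtk')        # hypothetical input file
contour = Contour(Input=reader)             # isosurface filter
contour.ContourBy = ['POINTS', 'pressure']  # hypothetical point variable
contour.Isosurfaces = [0.5]                 # hypothetical isovalue
Show(contour)
Render()
SaveScreenshot('contour.png')               # image written when rendering completes
```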
Split Server Mode
In this mode, the data server (pvdataserver) and the render server (pvrenderserver) are explicitly run on different nodes. This mode is suitable, for example, for running render jobs separately on GPU nodes.