A software package for visualizing scientific results

Running VisIt on RCC

Below are a number of ways to run VisIt using the Spear or HPC clusters. Note that if HPC resources are being accessed from a computer that is not on the FSU network, a VPN connection is required. An installer can be found here.

Running VisIt Remotely

The user has the option of using either an SSH client (PuTTY on Windows, or the standard terminal on Unix) or the NX NoMachine client, which must be installed separately. The NX client responds faster than a plain X-forwarded terminal because it compresses the images it sends, but that compression may not be ideal for a visualization program such as VisIt.


Pros:

  • Does not require the installation of VisIt on the local machine.

Cons:

  • Slow rendering of images.
  • Slow response time from the user interface.

SSH Client

The simplest way to run VisIt through an SSH client is on a Spear node. Although batch jobs can be launched from the command line over X forwarding, it is suggested that the user install the local client and run batch jobs through VisIt's internal scheduling routines.

  • Log in using ssh -Y.
    • The -Y option enables trusted X11 forwarding, which allows VisIt's GUI to be displayed on the local machine.
    • Other options may be used, but this is the basic command.
  • To run VisIt, navigate to the visit/bin directory and execute it with ./visit, as in the sketch below.
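
A minimal sketch of the whole sequence (the hostname is illustrative; substitute the actual Spear login address and your FSU HPC username):

ssh -Y username@spear.rcc.fsu.edu       # -Y enables trusted X11 forwarding
cd /panfs/storage.local/opt/visit/bin   # VisIt installation on Spear
./visit                                 # launch the VisIt GUI over X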

NX NoMachine

NX NoMachine is optimized for compressing and forwarding GUIs, so its performance will be somewhat better than plain X forwarding.

  • Navigate browser to the Spear Interactive Queueing System
  • Under "Cluster", click "General Access" or whichever cluster the user has access to.
  • Under "Resource", click "xterm"
  • Under "GPUs", make sure the option is set to 0.
  • This will download a .nxs file. Open it and log in. An NX client terminal will open, which can be treated as a normal terminal.
  • To run VisIt, navigate to the visit binary directory (/panfs/storage.local/opt/visit/bin) and execute with ./visit

Running VisIt on a Local Machine and Offloading to RCC Resources

Running VisIt on a user's local machine and offloading the processing to HPC or Spear gives the best performance and does not slow down UI response time. This is the preferred way to run VisIt for users who will use the program regularly on a local machine. Both approaches require that VisIt be installed locally and properly configured.


Pros:

  • Much faster rendering and UI response time.

Cons:

  • Installation may take a while, depending on the OS used.
  • Host and launch profiles must be set up.

Configuration for Spear

Configuring VisIt for Spear will allow the user to immediately connect to a Spear node and run in either serial or parallel. First, click 'Options' on the main VisIt UI and go to 'Host Profiles...'. This is where we will set up and save the configuration options for interactive visualization on a Spear node. Click 'New Host' at the bottom of the screen. Under the 'Host Settings' tab on the right, use the following entries, which name this host profile 'FSU Spear', specify the location of the remote copy of VisIt on Spear, and specify the maximum resources we can use.

Host nickname: FSU Spear
Remote host name:
Host name aliases: spear-##
(checked) Maximum nodes: 1
(checked) Maximum processors: 8
Path to VisIt installation: /panfs/storage.local/opt/visit
Username: [Enter your FSU HPC username here]
(checked) Tunnel data connections through SSH.

The remainder of the options do not need to be modified. Now move to the 'Launch Profiles' tab. Create a new profile and, under 'Profile name', call it 'Serial' with a timeout of 480 (this should be the default). This is the setup for serial jobs in VisIt. Now create another profile called 'Parallel', also with a timeout of 480. Under the 'Parallel' tab, use the following entries:

(checked) Launch parallel engine
(checked) Parallel launch method: mpirun
Default number of processors: 2
(checked) Default number of nodes: 1
(checked) Default machine file: /etc/visit-hostfile
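
For reference, these settings correspond roughly to the mpirun command VisIt assembles for its parallel compute engine, as in the sketch below (illustrative only; engine_par is VisIt's parallel engine executable, and VisIt builds and runs the real command itself):

mpirun -np 2 -machinefile /etc/visit-hostfile engine_par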

No other options need to be modified here. Note that the default number of processors is 2, but this value can be raised to anything up to 8 when opening a file (explained in the Example Visualization section). Running serial processes on Spear requires no further modifications. However, to run on Spear in parallel, the gnu-openmpi module must be loaded automatically at login. To accomplish this, a .bash_profile and .bashrc file must be created or modified in the user's home directory on Lustre (the default file system for Spear).

First create a .bash_profile file in your home directory and add the following lines:

# .bash_profile

# Get the aliases and functions

if [ -f ~/.bashrc ]; then
      . ~/.bashrc
fi

# User specific environment and startup programs

PATH=$PATH:$HOME/bin
export PATH

Then create a .bashrc file (or modify your existing one) and add the following lines:

# .bashrc

# Source global definitions

if [ -f /etc/bashrc ]; then
    . /etc/bashrc
fi

# User specific aliases and functions

module load gnu-openmpi
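
To verify that the module loads automatically, open a fresh SSH session on Spear and check:

module list    # gnu-openmpi should appear among the loaded modules
which mpirun   # should resolve to the OpenMPI installation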

From here, VisIt should work properly on Spear for both serial and parallel processes. To test your configuration and see an example visualization, try the example in the Example Visualization section below.

Configuration for HPC Queue

While we can immediately run interactive jobs on Spear, this limits processing to a single node. VisIt has the built-in capability of using a batch queue system to run jobs across multiple nodes in an HPC-like environment. Note that when using VisIt on a queue system, the same rules apply as for any other batch job: it may take a while for resources to become available, and there may be time limitations. However, once connected, no other changes are required, and VisIt functions much as it does on a local machine or on Spear, with the added advantage of more processing resources.

To run VisIt on the HPC queue, first choose a queue you have access to from the list of queues. Then, in VisIt, click 'Options' on the main UI window and go to 'Host Profiles...'. Create a new host and call it 'FSU [queue]' under 'Host nickname', where [queue] stands for the specific queue being used. In this example we'll use the scientific computing queue 'SC', so the host nickname will be 'FSU SC' and the remote host name will be the queue name (here, 'SC'). The 'Host name aliases' field does not need to be specified when using the queue system. The maximum numbers of nodes and processors depend on the limitations of the queue used. As a default, two nodes with eight processors are used in order to verify that the configuration works across multiple processors and nodes. For a generic queue, the final options should look similar to:

Host nickname: FSU [queue]
Remote host name: [queue]
(checked) Maximum nodes: 2
(checked) Maximum processors: 8
Path to VisIt installation: /panfs/storage.local/opt/visit
Username: [Enter your FSU HPC username here]
(checked) Tunnel data connections through SSH

No other options should need to be modified. Under the 'Launch Profiles' tab, create a new profile and call it 'Parallel' with a timeout of 480. Under 'Parallel', use the following options:

(checked) Launch parallel engine
(checked) Parallel launch method: msub
(checked) Partition / Pool / Queue: [queue used]
Default number of processors: 2
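
As with the Spear profile, these settings roughly correspond to a Moab submission, as in the sketch below (illustrative only; [VisIt engine launch script] is a placeholder, and VisIt generates and submits the actual engine job itself):

msub -q [queue] -l nodes=1:ppn=2 [VisIt engine launch script]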

Again, no other options should need to be modified for this tab. Finally go to the 'Advanced' tab and check 'Use VisIt script to set up parallel environment'. From here, running VisIt on a queue should be properly configured. Try the example visualization in the next section to verify that everything is set up correctly.

GPU acceleration is not currently supported on RCC resources, but should become available in the future.

Example Visualization

To connect to the Spear node or an HPC queue and run an example:

  • Go to File > Open File.
  • Under "Host", select "FSU Spear" or "FSU SC" (whichever was entered as the "Host nickname") from the drop-down menu.
  • Enter your HPC password when prompted. Make sure your username is also correct.
  • For the path, navigate to the examples directory: /panfs/storage.local/opt/visit/examples
  • Under Files on the right, open "crotamine.pdb" to view a crotamine molecule.
  • Under "Open file as type:" you can either leave it as "Guess from file name/extension" or select ProteinDataBank from the drop-down menu. Sometimes VisIt will not recognize a file type automatically, in which case the specific file type must be selected.
  • If you've already set up the launch profiles, you'll be presented with the option of running the job with the Serial or Parallel interactive profiles (Spear) or just the Parallel queue profile (HPC). Choose one, though it may be a good idea to eventually test each option.
  • Also note that if you choose the parallel option, the number of processors can be changed if you're running on Spear, or both the number of processors and nodes can be changed if you're running on an HPC queue.
  • Now under "Plots" on the main window, select Add > Molecule > element.
  • Finally, click "Draw" in the "Plots" section. (You may need to click the double arrows on the right side of the buttons to see "Draw" under the additional options.)
  • The molecule should now be properly displayed in the window.
  • NOTE: Running parallel jobs on Spear or the HPC may return a few error messages, which can be ignored.
  • If you are not yet familiar with VisIt, this YouTube link has information on manipulating the image using some of VisIt's tools.
  • To open another file, simply select "Open" under "Sources" on the main window, or use File > Open as before. Note that only a few of the files in the examples directory will work. (A command-line shortcut for the open step is sketched below.)
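
For the command-line shortcut mentioned above: VisIt's -o flag opens a file at launch. A sketch, assuming that the host-prefixed path accepted in the GUI's file dialog also works with -o (if your build rejects it, open the file through the GUI as described above):

visit -o "FSU Spear:/panfs/storage.local/opt/visit/examples/crotamine.pdb"

From there, the Molecule plot is added and drawn through the GUI exactly as in the steps above.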