Loughborough University
Leicestershire, UK
LE11 3TU
+44 (0)1509 222222

IT Services : High Performance Computing



ParaView is a system for rendering graphics from datasets. It can be run either standalone or as a client-server system.



Compiled with the Intel 2013sp1 compilers for speed, and with MPI support for use as a parallel render server.


Pre-compiled version.


Compiled with the GCC compilers for compatibility.


Preferred version.

Compiled with the Intel 2013sp1 compilers for speed, with MPI support for use as a parallel render server, and with FFmpeg video support.


The following examples assume the use of 4.2.0, but you may substitute any preferred version available on the system.

Standalone (Windows, Linux or Mac desktop) - version 1

Ensure that the results files from the simulations you wish to visualise are in an appropriate location, then run a copy of ParaView that you have installed locally on your desktop.

If you are visualising large amounts of data you might find that the data transfer rates over the mount result in poor performance.

Mount Hydra's file system onto your desktop machine. See Storage.

Currently this is the only method that works if your desktop is Windows.

Standalone (Linux or Mac desktop) - version 2

Log in to the Hydra head node using ssh -Y -C yourusername@hydra.

Load the required modules with:

module purge
module load paraview/4.2.0/intel-qt-4.8.6

Note that, unlike some modules which require the user to load prerequisite modules first, this module loads all of its prerequisites itself, since there are a large number of them. The purge command ensures no conflicting modules are present.

Run ParaView with the paraview command.

Since this is running on the head node, it is only appropriate for very light visualisation, and the client-server method below is preferred. A visualisation using significant resources on the head node may be killed without warning.

Client-Server, Client on Head Node

General Advice

This is appropriate when the visualisation is more demanding, or when you wish to visualise the output of a job while it is running.

For this you should run the client on the head node, as noted in Standalone (Linux or Mac desktop) - version 2 above, and then follow the instructions below to connect to a render server of the appropriate type running in the interior of the cluster. Running the client on the head node allows information from the interior of the cluster to be presented to you.

Dedicated Visualisation Node

This is most appropriate when you have already run a simulation and wish to explore the results. It combines 20 CPU cores for dealing with the data processing overhead with the hardware acceleration of these nodes, along with high memory for in-memory data manipulation.

Start the client on the hydra headnode (hydra5)

Use the Connect button on the client, and the resulting dialogue, to configure the client to accept reverse connections on a particular port.

The connect button is the icon immediately below the View menu item towards the top left of paraview.

Clicking on it brings up a Choose Server Configuration dialogue and you should pick the Add server button.

This brings up a further dialogue. Use reverse for the name. For Server Type, use Client / Server (reverse connection). For Port, pick a random port greater than 11111 (see below for more details and the security implications).

Hit the Configure button and then Save.

Now select the connection you just created, hit the Connect button, and proceed to the server start stage below.
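If you prefer not to click through the dialogue each time, the same reverse-connection setup can be saved as a ParaView server-configuration file and loaded via the Choose Server Configuration dialogue's Load Servers button. A minimal sketch follows; the file name reverse.pvsc and port 11112 are examples, and you should check the .pvsc format against your ParaView version before relying on it:

```shell
# Write a server-configuration file describing a reverse (csrc://) connection
# on port 11112; the client listens and the server connects back to it.
cat > reverse.pvsc <<'EOF'
<Servers>
  <Server name="reverse" resource="csrc://hydra5:11112">
    <ManualStartup/>
  </Server>
</Servers>
EOF
```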

Run a server

Start a paraview server with the following:

#!/bin/bash -l
# other SLURM commands
#SBATCH --partition=visual
module purge
module load paraview/4.2.0/intel-qt-4.8.6
mpirun -np 20 pvserver -display :0 \
  -rc --client-host=hydra5 

If you have issues seeing the render then use

#!/bin/bash -l
# other SLURM commands
#SBATCH --partition=visual
module purge
module load paraview/4.2.0/intel-qt-4.8.6
mpirun -np 20 pvserver -display :0 \
  -rc --client-host=hydra5 --use-offscreen-rendering

The latter forces software rendering, so it does not make use of the GPUs in the visualisation nodes, but it still uses the high memory and the multiple cores, and it improves compatibility.

Use sbatch to submit the job, adding any required options such as the account name (#SBATCH --account=youraccountname), job name and time limit. See the SLURM documentation as a whole for more information.

Since someone else may be running a render server on port 11111 (the default ParaView port), you are encouraged to pick a random port above 11111 to avoid this conflict; pass it to pvserver with --server-port and use the same port number in the client connection configuration.
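The scripts above can be adapted to take the port as an argument so that one script serves any port choice. A minimal sketch, where the file name pvserver.sh and the port range are illustrative:

```shell
# Generate a job script whose first argument ($1) is the pvserver port.
cat > pvserver.sh <<'EOF'
#!/bin/bash -l
#SBATCH --partition=visual
module purge
module load paraview/4.2.0/intel-qt-4.8.6
mpirun -np 20 pvserver --server-port=$1 -display :0 \
  -rc --client-host=hydra5
EOF

# Pick a pseudo-random port above 11111 (the range 11112-12111 is arbitrary).
PORT=$(( 11112 + RANDOM % 1000 ))
echo "Submit with: sbatch pvserver.sh $PORT"
```

Remember to configure the client connection with the same port number.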


Be aware that there is no security set on the ports: if you run a job using port number 11112, then any job that runs a server on port 11112 may be intercepted by your job. (And if you didn't use a reverse connection, anyone could guess a node and port and connect to any node.)

Be aware that the time limit for these nodes' use is short.

There are only three dedicated nodes. If you need one at a specific time, you can use an allocation to reserve it. You can then submit an sbatch job into the allocation, or append srun to the end of the salloc command to run pvserver directly, provided you have the paraview module loaded. You will need to specify an account name, and so on, for the salloc command, as you would for sbatch.
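As a sketch, the salloc-plus-srun approach above might look like the following; the account name and time limit are placeholders to adjust for your own project:

```shell
# Write the reservation command to a small script (hypothetical values
# throughout): salloc reserves a visualisation node, and srun starts
# pvserver directly inside the allocation.
cat > reserve_vis.sh <<'EOF'
#!/bin/bash
salloc --partition=visual --time=2:0:0 --account=youraccountname \
  srun bash -lc 'module load paraview/4.2.0/intel-qt-4.8.6 && pvserver -rc --client-host=hydra5'
EOF
chmod +x reserve_vis.sh   # run it from the head node when you need the node
```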

Remember that if you reserve a node and don't use it at that time you will still be charged for its use.

General Compute Node

If you urgently need visualisation and the dedicated nodes are full you can use the compute nodes. They do not have hardware acceleration so performance is reduced.

Use the following script:

#!/bin/bash -l
# other SLURM commands
#SBATCH --partition=compute-12
module purge
module load bullxmpi
module load paraview/4.2.0/intel-qt-4.8.6
xvfb-run mpirun -np $SLURM_JOB_CPUS_PER_NODE pvserver -display :0

For other steps in the process, follow the advice above.

Live visualisation, Computational Steering
Use Case

In this scenario a simulation is running and you wish to visualise the information as it is running.

Note that if you do this the Research Computing team would be interested to hear of your usage.


Because the run time of the visualisation nodes is limited, you cannot run (nor is it appropriate to run) the computation on a visualisation node. This means that the two options below are your best options.

Visualisation Via Periodic File Output

In this scenario your job periodically outputs data you can visualise; you then look at these output data, with full hardware acceleration, by creating a series of allocations on the visualisation nodes.

Continuous Visualisation, Client-Server

In this scenario you submit a job via sbatch to start pvserver on a compute node, as above. You then create a client connection on the head node, connect to that server instance, and use pvbatch (see page 193 of the current ParaView guide) to run a Python script that performs the computation you need, along with the visualisation.

Note that this does not use hardware acceleration, but it is the only way to run long or large jobs of this kind.

Client-Server, Client on Desktop

Normally the server side is on a compute or visualisation node in the interior of the cluster. In theory tunnelling can be used to allow connections from a desktop client through to these nodes. However, this is not currently supported by the SSH configuration on the cluster.

All Elements on Compute Nodes (Client and Server)

There is little benefit to doing this as the machine on which pvserver is running does most of the work.

Secure Paraview

Using a visualisation node

Because the client-server system is not secure, you should not use it if you need to view data with security or commercial confidentiality concerns. Instead, use the following steps.

Log on to hydra: ssh -Y -C hydra

Create an allocation on the visualisation servers: salloc --partition=visual --time=3:0:0 --account=youraccountname and use squeue to find where it is running (e.g. hydra220).

ssh -Y -C hydra220; module load paraview/4.2.0; paraview

This runs paraview directly on a render node, albeit single-core and without much in the way of hardware acceleration, but it does take the strain off the Hydra head node.

Local rendering

The other option is to render on a local machine (your desktop), if it also has ParaView installed. Although you can mount Hydra's files onto your local machine, the visualisation files you use with ParaView are often so large that this is not practicable, and you will need to copy the data to your desktop.


OpenFOAM comes with its own version of ParaView.


Note that client-server operation is insecure and not encrypted. If you need secure operation, additional instructions on setting up SSH tunnels can be provided, but be aware that this will slow down communication.


  • Fails to start up server side, saying that the port is blocked.

    This sometimes happens if a pvserver job fails unexpectedly. In this case you should modify the scripts above to use something along the lines of mpirun -np 20 pvserver --server-port=$1 -display :0 and then submit with sbatch yourscript some_number, where some_number is 11112 or greater (pvserver uses 11111 by default), and ensure that you use the same port number in the client connection options. Note that it may be a previous user's job that caused the issue.
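Before reusing a port, you can probe whether it is already taken on a host. A bash-only sketch using the shell's built-in /dev/tcp redirection (no extra tools assumed; hostname and port are examples):

```shell
# Returns success (0) if a TCP connection to host $1 port $2 succeeds,
# i.e. something is already listening there; failure otherwise.
port_in_use() {
  (exec 3<>"/dev/tcp/$1/$2") 2>/dev/null
}

if port_in_use localhost 11111; then
  echo "port 11111 is in use"
else
  echo "port 11111 looks free"
fi
```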

Tutorials and Documentation


  • User Guide

  • Running pvserver


Partially tested. Some option combinations cause client-side or server-side crashes, and there have been reports of issues with some filters. Please report issues in the usual manner.