
ParaView


This is a short introduction to the post-processing of our data with ParaView.

Contents

  1. Preparations
  2. Hypnos Users @HZDR
  3. Extended Technical Documentation

Preparations

Software

On your local computer, download and install the ParaView 3.98 binaries. Additionally, install ssh-askpass, e.g. with

sudo apt-get install ssh-askpass
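
A quick check that the helper is found afterwards (the install path may differ per distribution):

which ssh-askpass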

Download the PIConGPU ParaView server configuration file

https://raw.githubusercontent.com/ComputationalRadiationPhysics/picongpu/dev/src/tools/share/paraview.pvsc

Start the downloaded (and unzipped) paraview binary in bin/, go to

Connect -> Load Servers and select the paraview.pvsc file.

Select the new configuration and hit Connect. In the next window, adjust the settings as appropriate, such as your username on hypnos. After a moment, the new connection should show up in ParaView's Pipeline Browser window. After you close ParaView, the established connection must be closed manually by killing the respective process.
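
One way to do this, assuming the connection is kept open by an ssh process on your machine (a sketch; the exact command line of the process may differ):

# find the process that keeps the connection open ...
ps ux | grep ssh
# ... and terminate it via its PID
kill <PID>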

Simulation

Let us assume you created your output with libSplash using the --hdf5.period <N> output flag.
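
This flag is typically set via the plugin variables of your run's *.cfg file, for example like this (a sketch; the period value 100 is just an illustration):

# excerpt from a PIConGPU *.cfg file (illustrative period)
TBG_hdf5="--hdf5.period 100"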

Afterwards, run our small script that analyses your output's metadata and creates an XDMF file for it:

python pic2xdmf.py -t <PATH-to-your-run>/simOutput/h5

Note: <SPLASH_INSTALL>/bin/ needs to be in your PYTHONPATH in order for pic2xdmf.py to find the required libSplash Python module.
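
For example (assuming libSplash was installed to <SPLASH_INSTALL>):

export PYTHONPATH=$PYTHONPATH:<SPLASH_INSTALL>/bin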

Hypnos Users @HZDR


Extended Technical Documentation

This is the extended technical documentation of the reverse-connected remote rendering setup. If you are not a maintainer, this is not for you.

Prepare Tunnels

Assuming your cluster looks more or less like this:

  [YOUR-COMPUTER]
       |               ~
*big-bad-internet*     ~ [ClusterNode001] [ClusterNode002]
       |               ~ [ClusterNode003] [ClusterNode004]
       V               ~ [ClusterNode005] [ClusterNodeXYZ]
  [LOGIN-NODE]         ~~~~~~~ some more firewall'n ~~~~~~~~
       |                       ^
~~~~FIREWALL~~~~              /
       |                  *batch system*
       \_->  [HEAD-NODE] ---/

In case you are working at HZDR, replace the placeholders like this:

  • LOGIN-NODE -> uts.fz-rossendorf.de
  • HEAD-NODE -> hypnos2

Open a Tunnel to the HEAD-NODE:

# forward local port 44333 through the LOGIN-NODE to the HEAD-NODE's ssh port
ssh -f -L 44333:<HEAD-NODE>:22 <user>@<LOGIN-NODE> -N
# this makes logins easier:
ssh-copy-id -i ~/.ssh/id_rsa.pub -p 44333 <user>@localhost
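
To verify that the tunnel is up, a trivial command can be run through it (a quick sanity check):

# should print the HEAD-NODE's hostname
ssh -p 44333 <user>@localhost hostname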

Prepare Start Scripts

Log into the HEAD-NODE

ssh -p 44333 <user>@localhost

and prepare a job script startPV.sh like this one (assuming a CPU-only queue):

#PBS -q laser
#PBS -l walltime=01:00:00
#PBS -N pvserver
#PBS -l nodes=8:ppn=32
#PBS -d .

#PBS -o stdout
#PBS -e stderr
echo 'Running reverse pvserver...'

cd .

export MODULES_NO_OUTPUT=1
source ~/picongpu.profile
unset MODULES_NO_OUTPUT
module load tools/mesa/7.8
module load analysis/paraview/4.0.1.laser

#set user rights to u=rwx;g=r-x;o=---
umask 0027
sleep 2

mpiexec --prefix $MPIHOME -x LIBRARY_PATH -x LD_LIBRARY_PATH -npernode 32 -n 256 \
    pvserver --use-offscreen-rendering -rc -ch=hypnos2
# some interesting flags one can use:
#   --mca mpi_yield_when_idle 1
#       reduces load while idle (no busy loop)
#       http://www.open-mpi.org/faq/?category=running#force-aggressive-degraded
#   -am $HOME/openib.conf
#       in case you send HUGE data chunks over infiniband
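
For a manual test outside of ParaView, the job script can be submitted and monitored with the usual batch commands (assuming TORQUE/PBS, as used below):

qsub startPV.sh
# watch the job until the pvserver processes are up
qstat -u <user>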

Configure a Server in ParaView

You might have spotted it already above: we are starting parallel data/render servers on the cluster nodes, which normally cannot be reached from the head node (e.g. you cannot connect to them via ssh). But there is a trick: the cluster nodes themselves can connect in reverse to the head node (-rc -ch=<HEAD-NODE>).

Now start ParaView and hit Connect

We are going to Add Server to configure a new reverse connection:

Now go to Configure and add the start-up mode Command:

  ssh -p 44333 <user>@localhost "/opt/torque/bin/qsub ~/startPV.sh"

Set the field below to wait for 5.0 seconds and save.

Start up!

All right! Now go to Connect again, select your server and start it with Connect.

Now run locally:

  # to <HEAD-NODE> loopback:
  ssh -p 44333 -f -R 11112:localhost:11111 <user>@localhost -N
  # make public on <HEAD-NODE>
  ssh -p 44333 <user>@localhost  "ssh -g -f -L 11111:localhost:11112 localhost -N"

This forwards your local port 11111 to the HEAD-NODE, where it listens for connections from the pvserver processes running on the cluster nodes. Finally, we bind it to the so-called wildcard address so that external connections are accepted.
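
To check on the HEAD-NODE that port 11111 is indeed listening on all interfaces, something like the following can be used (a sketch; the exact netstat output format may differ):

ssh -p 44333 <user>@localhost "netstat -tln | grep 11111"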

The last two commands can be shortened, by setting GatewayPorts clientspecified in the HEAD-NODE's /etc/ssh/sshd_config, to:

# forward local 11111 port to <HEAD-NODE> and make public for all interfaces
ssh -p 44333 -R :11111:localhost:11111 <user>@localhost -N
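
For reference, the corresponding sshd_config snippet looks like this (editing it requires root on the HEAD-NODE, and the ssh daemon has to be reloaded afterwards, e.g. with service ssh reload, depending on the distribution):

# /etc/ssh/sshd_config on the <HEAD-NODE>
GatewayPorts clientspecified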