will limit the number of simultaneously running tasks from this job array to 25.
We recommend using this option.
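The job-array throttle recommended above (limiting the number of simultaneously running tasks, e.g. to 25) can be sketched in a batch script. This is a minimal illustration: the job name, array range, time limit, and the echoed payload are placeholder assumptions; only the `%25` throttle is the point.

```shell
#!/bin/bash
#SBATCH --job-name=array-demo        # hypothetical job name
#SBATCH --array=1-100%25             # 100 tasks, at most 25 running at once
#SBATCH --time=0:10:0
#SBATCH --mem=200M

# Each array task gets its own index via SLURM_ARRAY_TASK_ID
echo "Processing chunk ${SLURM_ARRAY_TASK_ID}"
```

Submit it with `sbatch`; `squeue` will then never show more than 25 tasks of this array in the running state at the same time.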
|
|
|
|
|
|
## Remote GPU-accelerated Visualization on res-hpc-gpu01
|
|
|
|
|
|
At this moment the login node res-hpc-lo01 does not have a powerful GPU card.
|
|
If you want to run a graphical program that shows 3D animation, movies, or any other kind of simulation/visualization, we have reserved one GPU on the second login node **res-hpc-gpu01** for this.

This server has a powerful Tesla T4 GPU card (16 GB memory).
Steps for setting up a remote GPU-accelerated visualization:
|
|
|
|
|
|
* login to res-hpc-gpu01

* start a vncserver

* connect from your remote desktop to the vncserver

* start your visualization program (with "vglrun" in front of it if needed)
|
|
|
|
|
|
|
|
### login to res-hpc-gpu01
|
|
Once you are in your remote desktop, open a terminal.

As a normal user, you are not allowed to log in to a node directly, in this case res-hpc-gpu01.

You first have to start a small job that allocates resources on res-hpc-gpu01; this allocation allows you to log in to res-hpc-gpu01.
|
|
|
|
|
|
|
|
Run:
|
|
|
|
```
|
|
|
|
srun -n 1 -c 1 -p gpu -w res-hpc-gpu01 -t 2:0:0 --mem=200M --pty bash
|
|
|
|
```
|
|
|
|
You are now logged in on res-hpc-gpu01.
|
|
|
|
|
|
|
|
### create a vncserver setup and start a vncserver session
|
|
|
|
If you are setting up a vncserver session for the first time, it will ask you a few questions; after that, you have to adapt the configuration and startup files:
|
|
|
|
|
|
|
|
```
|
|
|
|
vncserver
|
|
|
|
|
|
|
|
You will require a password to access your desktops.
|
|
|
|
|
|
|
|
Password:
|
|
|
|
Verify:
|
|
|
|
Would you like to enter a view-only password (y/n)? n
|
|
|
|
|
|
|
|
Desktop 'TurboVNC: res-hpc-gpu01.researchlumc.nl:1 (username)' started on display res-hpc-gpu01.researchlumc.nl:1
|
|
|
|
|
|
|
|
Creating default startup script /home/username/.vnc/xstartup.turbovnc
|
|
|
|
Starting applications specified in /home/username/.vnc/xstartup.turbovnc
|
|
|
|
Log file is /home/username/.vnc/res-hpc-gpu01.researchlumc.nl:1.log
|
|
|
|
|
|
|
|
```
|
|
|
|
You should choose a strong password; it should not be the same as your user login password.
|
|
|
|
|
|
|
|
Kill the vncserver connection:
|
|
|
|
* vncserver -kill :1
|
|
|
|
|
|
|
|
Now adapt the **xstartup.turbovnc** file:
|
|
|
|
* vi $HOME/.vnc/xstartup.turbovnc
|
|
|
|
```
|
|
|
|
#!/bin/sh
|
|
|
|
|
|
|
|
unset SESSION_MANAGER
|
|
|
|
unset DBUS_SESSION_BUS_ADDRESS
|
|
|
|
XDG_SESSION_TYPE=x11; export XDG_SESSION_TYPE
|
|
|
|
|
|
|
|
exec icewm-session
|
|
|
|
```
|
|
|
|
|
|
|
|
Adapt/create a **turbovncserver.conf** file for the vncserver with some useful settings:
|
|
|
|
* vi $HOME/.vnc/turbovncserver.conf
|
|
|
|
```
|
|
|
|
$geometry="1280x1024"
|
|
|
|
$depth=24
|
|
|
|
```
|
|
|
|
Now start the vncserver:
|
|
|
|
```
|
|
|
|
vncserver
|
|
|
|
|
|
|
|
Desktop 'TurboVNC: res-hpc-gpu01.researchlumc.nl:1 (username)' started on display res-hpc-gpu01.researchlumc.nl:1
|
|
|
|
|
|
|
|
Starting applications specified in /home/username/.vnc/xstartup.turbovnc
|
|
|
|
Log file is /home/username/.vnc/res-hpc-gpu01.researchlumc.nl:1.log
|
|
|
|
```
|
|
|
|
You can list your vncserver sessions with the following command:
|
|
|
|
* vncserver -list
|
|
|
|
```
|
|
|
|
vncserver -list
|
|
|
|
TurboVNC sessions:
|
|
|
|
|
|
|
|
X DISPLAY # PROCESS ID
|
|
|
|
:1 33915
|
|
|
|
|
|
|
|
```
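If you need the display number in a script, you can parse it out of this listing. The sketch below works on a captured copy of the sample output above; in a real session you would capture the output of `vncserver -list` itself:

```shell
# Captured sample of the 'vncserver -list' output shown above;
# on the cluster you would use:  list=$(vncserver -list)
list='TurboVNC sessions:

X DISPLAY #	PROCESS ID
:1		33915'

# Take the first field of the first line starting with ':', minus the colon
display=$(printf '%s\n' "$list" | awk '/^:/ { sub(":", "", $1); print $1; exit }')

echo "display is :$display"   # prints "display is :1"
```

This makes it easy to build the viewer target (e.g. `res-hpc-gpu01.researchlumc.nl:$display`) without reading the number off by hand.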
|
|
|
|
|
|
|
|
You can/should kill your vncserver session when you are done running your application:
|
|
|
|
* vncserver -kill :1
|
|
|
|
```
|
|
|
|
vncserver -kill :1
|
|
|
|
Killing Xvnc process ID 47947
|
|
|
|
```
|
|
|
|
|
|
|
|
#### vncserver and port numbers
|
|
|
|
Every time someone starts a VNC session while another session is already running, the display number (and the corresponding port number) increases.

The first session will be on display **:1**, which is port 5900 + 1 = 5901; port 5900 is the standard port number for VNC.
|
|
|
|
For example:
|
|
|
|
```
|
|
|
|
Desktop 'TurboVNC: res-hpc-gpu01.researchlumc.nl:3 (username)' started on display res-hpc-gpu01.researchlumc.nl:3
|
|
|
|
```
|
|
|
|
In this case, the display to connect to is number **:3** (TCP port 5903).

So you connect to "res-hpc-gpu01.researchlumc.nl:3".
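The display-to-port mapping can be verified with one line of shell arithmetic (using display number 3 from the example above):

```shell
# A VNC display :N listens on TCP port 5900 + N
display=3
port=$((5900 + display))
echo "display :$display -> port $port"   # prints "display :3 -> port 5903"
```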
|
|
|
|
|
|
|
|
You can always list your own open VNC sessions with the **vncserver -list** command.
|
|
|
|
|
|
|
|
Remember to kill your VNC session when you are done running your application.
|
|
|
|
|
|
|
|
### connect from your remote desktop to the vncserver
|
|
|
|
If you are running the **X2Go** client, open a terminal.
|
|
|
|
Then run the command:
|
|
|
|
```
|
|
|
|
/opt/TurboVNC/bin/vncviewer res-hpc-gpu01.researchlumc.nl:1
|
|
|
|
```
|
|
|
|
It will ask you for your vnc password:
|
|
|
|
|
|
|
|
![alt text](images/vncviewer-02.gif "vncviewer-02")
|
|
|
|
|
|
|
|
Or, if you run "/opt/TurboVNC/bin/vncviewer" without a hostname, you have to enter the hostname yourself: "res-hpc-gpu01.researchlumc.nl:1".
|
|
|
|
|
|
|
|
![alt text](images/vncviewer-01.gif "vncviewer-01")
|
|
|
|
|
|
|
|
This will then open a (second) icewm window manager session. From there you can launch your accelerated visualization.
|
|
|
|
For the GPU acceleration, you have to put the VirtualGL command **vglrun** in front of the actual program you want to run.
|
|
|
|
|
|
Examples:
|
|
|
|
|
With the "glxinfo" program, you should check for the strings:
|
|
* direct rendering: Yes

* OpenGL renderer string: Tesla T4/PCIe/SSE2
|
|
|
|
|
|
|
|
```
|
|
|
|
vglrun glxinfo | egrep "rendering|OpenGL"
|
|
|
|
direct rendering: Yes
|
|
|
|
OpenGL vendor string: NVIDIA Corporation
|
|
|
|
OpenGL renderer string: Tesla T4/PCIe/SSE2
|
|
|
|
```
|
|
|
|
Without **vglrun**, the same command falls back to software rendering (llvmpipe), which means the GPU is not being used:
|
|
|
```
|
|
|
|
glxinfo | egrep "rendering|OpenGL"
|
|
|
|
direct rendering: Yes
|
|
|
|
OpenGL vendor string: VMware, Inc.
|
|
|
|
OpenGL renderer string: llvmpipe (LLVM 9.0.0, 256 bits)
|
|
|
|
```
|
|
|
|
|
|
#### Programs that can run with "vglrun"
|
|
|
|
|
![alt text](images/fsleyes-01.gif "fsleyes-01")
|
|
|
|
|
|
Check for: **OpenGL renderer: Tesla T4/PCIe/SSE2**
|
|
|
|
|
|
With the **nvidia-smi** command, you can also check if your program is running on the GPU.
|
|
Below you can see two programs running on the GPU: the Xorg server and the fsleyes program:
|
```
nvidia-smi
+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|    0   N/A  N/A     25565      G   /usr/libexec/Xorg                  25MiB |
|    0   N/A  N/A     28059      G   ...ng/fsl/FSLeyes/bin/python        2MiB |
+-----------------------------------------------------------------------------+
```
|
|
|
|
|
|
##### Bugs
|
|
|
|
* Problem: sometimes when you start the program, it seems very slow and runs at a frame rate of about 1 frame per second
|
|
|
|
* Solution: stop the program and start it again; the second time it should run at normal, smooth speed
|
|
|
|
|
|
|
|
##### Remember:
|
|
|
|
* kill your own VNC session with "vncserver -kill :[display number]" when you are done with your application
|
|
|
|
* do not close (exit or ^D) the **bash shell** which you got with the "srun" command before you are done with your application
|
|
|
|
* if you close your bash shell (srun), the vnc session will be terminated.
|
|
|
|
You will see something like this if you run the "vncserver -list" command:
|
|
|
|
```
|
|
|
|
vncserver -list
|
|
|
|
|
|
|
|
TurboVNC sessions:
|
|
|
|
|
|
|
|
X DISPLAY # PROCESS ID
|
|
|
|
|
|
|
|
Warning: res-hpc-gpu01.researchlumc.nl:1 is taken because of /tmp/.X1-lock
|
|
|
|
Remove this file if there is no X server res-hpc-gpu01.researchlumc.nl:1
|
|
|
|
:1 31636
|
|
|
|
```
|
|
|
|
In this case, you have to manually remove the lock file (here /tmp/.X1-lock).
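To reduce the chance of leaving a stale session and lock file behind, you could let the shell clean up for you with an EXIT trap. This is a minimal sketch, not part of the official instructions: the display number `:1` is an assumption, and the `echo` stands in for the real `vncserver -kill` call so the sketch runs anywhere.

```shell
#!/bin/bash
# Kill the VNC session automatically when this shell exits,
# even if the srun allocation is closed unexpectedly.

VNC_DISPLAY=":1"   # adapt to the display that 'vncserver' reported

cleanup() {
    # On res-hpc-gpu01 this would be:  vncserver -kill "$VNC_DISPLAY"
    echo "killing VNC session $VNC_DISPLAY"
}
trap cleanup EXIT

echo "session running on $VNC_DISPLAY"
# ... start your visualization work here ...
```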
|
|
|
|
|
|
|
|
### Remote visualization with a "reverse SSH tunnel"
|
|
|
|
|
|
You can skip a step in the explanation above of how to set up a remote visualization.
|
|
|
|
With a reverse SSH tunnel you can make a quick connection to a remote desktop.
|
|
We assume that you have already set up your **vncserver** correctly.
|
|
|
|
|
We are using the SSH proxy server for this:
|
#### Setting up the reverse SSH tunnel
|
|
|
|
|
|
Follow these steps:
|
|
* srun -n 1 -c 1 -p gpu -w res-hpc-gpu01 -t 2:0:0 --mem=200M --pty bash
|
|
* [on res-hpc-gpu01:] vncserver
|
|
|
|
|
|
|
|
```
|
|
Desktop 'TurboVNC: res-hpc-gpu01.researchlumc.nl:1 (username)' started on display res-hpc-gpu01.researchlumc.nl:1
|
|
|
|
|
|
Starting applications specified in /home/username/.vnc/xstartup.turbovnc
|
|
Log file is /home/username/.vnc/res-hpc-gpu01.researchlumc.nl:1.log
|
|
```
|
|
|
|
|
|
* [on gpu01:] ssh -R 8899:localhost:5901 -l username 145.88.35.10
|