You can see from the output that we have **3 GPUs: Cuda devices: 0,1,2**.
#### Compiling and running GPU programs
First, download and compile the sample programs from NVIDIA:
```
module purge
module add library/cuda/10.1/gcc.8.3.1
...
cd
git clone https://github.com/NVIDIA/cuda-samples.git
cd cuda-samples/Samples/UnifiedMemoryPerf/
make
```
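Module versions differ per cluster; if the exact module above is not present, you can first list the installed CUDA toolkits (a quick check, assuming the **library/cuda** naming shown above):
```
# Show which CUDA modules are available before picking one to load
module avail library/cuda
```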
Create a Slurm batch script:
```
cat gpu-test.slurm
#!/bin/bash
#
...
module add library/cuda/10.2/gcc.8.3.1
hostname
echo "Cuda devices: $CUDA_VISIBLE_DEVICES"
$HOME/cuda-samples/Samples/UnifiedMemoryPerf/UnifiedMemoryPerf
```
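The **#SBATCH** header lines are elided above. For reference, a minimal sketch of such a script, assuming a partition named gpu and a single-GPU request (both are assumptions; adjust them to your cluster):
```
#!/bin/bash
#SBATCH --job-name=gpu-test
#SBATCH --partition=gpu    # assumed partition name; check sinfo for the real one
#SBATCH --gres=gpu:1       # assumed request for a single GPU
#SBATCH --time=00:10:00    # assumed wall-time limit

module purge
module add library/cuda/10.2/gcc.8.3.1

hostname
echo "Cuda devices: $CUDA_VISIBLE_DEVICES"
$HOME/cuda-samples/Samples/UnifiedMemoryPerf/UnifiedMemoryPerf
```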
Submit the job with **sbatch gpu-test.slurm**.
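To see which node the job landed on, query Slurm (206625 here matches the job ID in the output file below):
```
squeue -u $USER                           # job state and the allocated node
scontrol show job 206625 | grep NodeList  # or inspect one job directly
```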
While the job is running, ssh to the node (in this case res-hpc-gpu01) and run **nvidia-smi**. This will show that the "UnifiedMemoryPerf" program is running on a GPU.
```
[user@res-hpc-gpu01 GPU]$ nvidia-smi
Tue Apr 14 16:06:06 2020
+-----------------------------------------------------------------------------+
...
|=============================================================================|
| 1 29726 C ...les/UnifiedMemoryPerf/UnifiedMemoryPerf 145MiB |
+-----------------------------------------------------------------------------+
```
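To keep monitoring the GPU for the duration of the job, you can let **watch** re-run **nvidia-smi** periodically:
```
watch -n 2 nvidia-smi   # refresh the GPU status every 2 seconds; Ctrl-C to stop
```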
Output:
```
cat slurm-206625.out
res-hpc-gpu01.researchlumc.nl
Cuda devices: 0
...
```
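By default, Slurm writes the job's stdout to slurm-&lt;jobid&gt;.out in the directory you submitted from, so you can also follow the output live while the job runs:
```
tail -f slurm-206625.out   # stream new output as the job writes it
```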