### Slurm Environment Variables
Available environment variables include the following (a short usage sketch follows the table):
| Variable | Meaning |
| --- | --- |
| SLURM_CPUS_ON_NODE | processors available to the job on this node |
| SLURM_JOB_ID | job ID of the executing job |
| SLURM_LAUNCH_NODE_IPADDR | IP address of the node where the job was launched |
| SLURM_NNODES | total number of nodes |
| SLURM_NODEID | relative node ID of the current node |
| SLURM_NODELIST | list of nodes allocated to the job |
| SLURM_NTASKS | total number of processes in the current job |
| SLURM_PROCID | MPI rank (or relative process ID) of the current process |
| SLURM_SUBMIT_DIR | directory from which the job was launched |
| SLURM_TASK_PID | process ID of the task started |
| SLURM_TASKS_PER_NODE | number of tasks to be run on each node |
| CUDA_VISIBLE_DEVICES | which GPUs are available for use |
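As a minimal sketch, a batch script can read these variables at run time to report where and how it is running. The job name and resource requests in the `#SBATCH` lines below are placeholders for illustration, not values taken from this page:

```bash
#!/bin/bash
#SBATCH --job-name=env-demo   # placeholder job name
#SBATCH --nodes=2             # placeholder resource request
#SBATCH --ntasks=4            # placeholder resource request

# Print a few of the Slurm-provided variables for this job
echo "Job ID:        $SLURM_JOB_ID"
echo "Submit dir:    $SLURM_SUBMIT_DIR"
echo "Node list:     $SLURM_NODELIST"
echo "Total tasks:   $SLURM_NTASKS"
echo "CPUs on node:  $SLURM_CPUS_ON_NODE"
echo "GPUs visible:  ${CUDA_VISIBLE_DEVICES:-none}"

# Work relative to the directory the job was submitted from
cd "$SLURM_SUBMIT_DIR"
```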
## More Slurm info