Connect with SSH through shark or the SSH jump server, using your research username and password.
Slurm shark login node IP address: 22.214.171.124
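As a sketch, logging in looks like this (jdoe is a placeholder; use your own research username):

```shell
# Connect to the Slurm login node with your research username and password
ssh jdoe@22.214.171.124
```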
From here you have two options:
- interactive login: srun --pty /bin/bash, or if you need a GPU: srun --partition=gpu --gres=gpu:1 --pty /bin/bash
- batch submission: sbatch
The directories /home, /exports, /bam-export are all the same as on Shark.
Please keep in mind that some programs create config files inside your home directory.
This can cause binaries, pipelines, etc. to use the wrong config file.
The script directive for Slurm is #SBATCH; you will need to rewrite your submission scripts to work with Slurm.
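As a starting point, a minimal Slurm submission script could look like the sketch below (the job name, resource values, and output path are placeholders; adjust them to your own pipeline):

```shell
#!/bin/bash
#SBATCH --job-name=myjob        # placeholder job name
#SBATCH --ntasks-per-node=2     # request 2 cores
#SBATCH --mem=8G                # request 8G of memory
#SBATCH --time=01:00:00         # wall-clock time limit
#SBATCH --output=myjob_%j.out   # %j expands to the job ID

# your pipeline commands go here
echo "Running on $(hostname)"
```

Submit it with: sbatch myjob.sh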
cgroups (control groups) limit, isolate, and measure the resource usage of a group of processes.
Slurm uses cgroups to enforce your resource limits: what you requested (or were given by default) is all you can use.
Let's say you ask for 8G of memory with --mem=8G and 2 cores with --ntasks-per-node=2;
when this job starts running, cgroups will ensure that the only resources available to you are 2 cores and 8G of memory in total.
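To see the enforced limits from inside a running job, you can inspect the cgroup the job was placed in. The path below is an assumption for a cgroup v1 layout; the exact location depends on the cgroup version and the cluster's Slurm configuration:

```shell
# From inside an interactive srun session:
# show the memory limit cgroups enforces for this job (cgroup v1 layout, path may differ)
cat /sys/fs/cgroup/memory/slurm/uid_$(id -u)/job_${SLURM_JOB_ID}/memory.limit_in_bytes

# show which CPU cores this shell is confined to
taskset -cp $$
```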