## Data storage / access

The Shark cluster offers multiple types of data storage.
### Storage solutions

* HPC Isilon storage

This is fast storage for direct access to your data on the cluster, which can be purchased from the IT&DI department through [Topdesk](https://topdesk.lumc.nl). Once purchased, this storage will be NFS v4 mounted on all the nodes of the cluster. The default mountpoint will be **/exports/<storage-share-name>**.
Access to this mountpoint is handled by an Active Directory group. The default mount access rights are set by an Ansible playbook. To grant users access to this share, have them added to the Active Directory group attached to the share. To find out which group is attached to your data storage, use the following command: `ls -aldh /exports/<storage-share-name> | awk '{print $4}'` (see the example below).
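As a minimal sketch of this lookup, assuming a hypothetical share name `my-share` and a made-up group name `hpc-my-share-users`:

```bash
# Replace "my-share" with the name of your own storage share.
# Print the Active Directory group that owns the mounted share:
ls -aldh /exports/my-share | awk '{print $4}'

# Check whether your own account is already a member of that group
# ("hpc-my-share-users" is a made-up group name for illustration):
id -Gn "$USER" | tr ' ' '\n' | grep -x 'hpc-my-share-users'
```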
* Research LTS Isilon storage

This is slow storage for archiving data, which can be purchased from the IT&DI department through [Topdesk](https://topdesk.lumc.nl). Once purchased, this storage will be NFS v4 mounted on all the execution/gpu/mem nodes of the cluster with read-only access; on the login nodes you will have read and write access. The default mountpoint will be **/exports/archive/<storage-share-name>**.

Access to this mountpoint is handled by an Active Directory group. The default mount access rights are set by an Ansible playbook. To grant users access to this share, have them added to the Active Directory group attached to the share. To find out which group is attached to your data storage, use the following command: `ls -aldh /exports/archive/<storage-share-name> | awk '{print $4}'`
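Because the archive share is writable only on the login nodes, results are typically copied there from a login node. A minimal sketch, assuming hypothetical share names `my-share` (HPC) and `my-lts-share` (LTS):

```bash
# Run this on a login node, where /exports/archive/... is writable.
# "my-share" and "my-lts-share" are placeholder share names.
rsync -av --progress \
    /exports/my-share/finished-project/ \
    /exports/archive/my-lts-share/finished-project/
```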
* BeeGFS

BeeGFS is a parallel file system, created within the LUMC from 4 virtual machines (* vCPUs and 32 GB memory each), each with 5 TB of Unity storage attached. Each virtual machine is connected to the Shark Slurm cluster via 10 Gb Ethernet. Together the 4 VMs with their 5 TB of storage provide a total of 20 TB of BeeGFS storage, mounted on **/scratch/shared/**.
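If you want to check that the scratch space is mounted and how full it currently is, a quick look could be the following (the reported size and usage will of course differ):

```bash
# Show the file system type, total size, and current usage of the shared scratch space.
df -hT /scratch/shared/
```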
This BeeGFS scratch space is faster than the HPC Isilon when you work with many small files and perform a lot of read/write operations.

This storage is a so-called scratch space: everyone can use it for **staging** data. Please **do not** use this scratch space for **storage**. The HPC Isilon and the LTS Isilon are for storage; the BeeGFS is only for staging.

Staging your data is a normal procedure on large clusters.
How to stage your data, from within your submit script (a complete example is shown below):

1. Create a directory on /scratch/shared: `mkdir -p /scratch/shared/$USER/$SLURM_JOB_ID`
2. Change the permissions so that only the owner can view the files: `chmod 700 /scratch/shared/$USER`
3. Copy your data: `cp <path>/to/Data /scratch/shared/$USER/$SLURM_JOB_ID/`
4. Execute your binary and save the output to /scratch/shared/$USER/$SLURM_JOB_ID/
5. At the end of your sbatch script, move or copy your output to your HPC/LTS storage.
6. Remove the complete directory on the BeeGFS: `rm -Rf /scratch/shared/$USER/$SLURM_JOB_ID/`
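Putting these steps together, a minimal sketch of such a submit script could look like the one below. The partition name, module-free program call, and input/output paths are hypothetical placeholders; only the staging pattern itself comes from the steps above.

```bash
#!/bin/bash
#SBATCH --job-name=staging-example
#SBATCH --partition=all              # hypothetical partition name
#SBATCH --time=01:00:00
#SBATCH --mem=4G

# Steps 1 and 2: create a private job directory on the shared scratch space.
STAGE_DIR="/scratch/shared/$USER/$SLURM_JOB_ID"
mkdir -p "$STAGE_DIR"
chmod 700 "/scratch/shared/$USER"

# Step 3: stage the input data (the source path is a placeholder).
cp -r /exports/my-share/my-input-data "$STAGE_DIR/"

# Step 4: run the analysis and write the output inside the staging directory.
# "my_program" is a placeholder for your own binary or script.
cd "$STAGE_DIR"
mkdir -p results
my_program --input my-input-data --output results/

# Step 5: copy the results back to persistent HPC/LTS storage (placeholder path).
cp -r results/ /exports/my-share/my-results/

# Step 6: clean up the staging directory on the BeeGFS scratch space.
rm -Rf "$STAGE_DIR"
```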
#### Important information for BeeGFS /scratch/shared

* Data on /scratch/shared is scratch and will be treated as such.
* If needed, data can be removed by the admins without notice!
* There is no backup for /scratch/shared.
* Data and empty folders will be removed **automatically after 21 days** (see the example below for checking the age of your files).
* There is no quota; do not abuse this file system. Data can and will be removed if necessary.
* Set the security to read, write, and execute for the owner only (`chmod 700`).
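As a minimal sketch of how to keep an eye on the 21-day limit, the following lists files under your own scratch directory that have not been modified for more than 14 days (the 14-day threshold is only an example):

```bash
# List files in your scratch directory that were last modified more than 14 days ago.
find "/scratch/shared/$USER" -type f -mtime +14 -ls
```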
### Special directories on the cluster
* /bam-export/