Imagine I have a rule which should run mytool via MPI on 8 processes.
Locally I would start this with e.g.: mpirun -np 8 mytool.
On a Slurm cluster I would write a submit script and then, inside the script, start the tool with srun, e.g.:
#!/bin/bash
#SBATCH --job-name=myjob1
#SBATCH -n 20
...
srun mytool

For MPI and Slurm it is crucial to start the parallelized application (mytool) via srun, see also https://slurm.schedmd.com/mpi_guide.html.
I assumed that when using this profile to submit jobs to a Slurm cluster with Snakemake, the following would happen:
- A submit script similar to the one above is created
- The submit script is submitted via sbatch
But instead Snakemake does the following:
- It creates a Slurm job
- Inside the job, snakemake is run again with lots of parameters to ensure that only the current rule is run (basically the contents of {exec_job} in https://github.com/Snakemake-Profiles/slurm/blob/master/%7B%7Bcookiecutter.profile_name%7D%7D/slurm-jobscript.sh), as sketched below
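For reference, the linked jobscript template is essentially just a thin wrapper around that placeholder. A simplified sketch (the {properties} and {exec_job} placeholders are filled in by Snakemake at submission time):

#!/bin/bash
# properties = {properties}
{exec_job}

So the script that sbatch actually runs never calls srun itself; it only re-invokes snakemake for the single rule.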
So my questions are:
- Is it possible to modify this profile to get the behavior I originally assumed (directly executing srun inside the job)? One possible shape of this at the rule level is sketched below.
- Have others successfully run MPI-parallelized jobs on multiple nodes with Snakemake?
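For concreteness, a minimal sketch of what "directly executing srun inside the job" could look like if it were written into the rule itself (rule name, file names, and rank count are hypothetical; how the requested ranks would be translated into the sbatch allocation is exactly the open question here):

rule run_mytool:
    input:
        "input.dat"        # hypothetical input file
    output:
        "output.dat"       # hypothetical output file
    threads: 8             # number of MPI ranks to launch
    shell:
        "srun -n {threads} mytool {input} {output}"  # start mytool via srun inside the allocation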
Explicitly pinging @jdblischak because you may have experience on this topic as well.
cc: @TobiasMeisel