Basic Usage: CPU Based Resources With Slurm

From KENET Training
Revision as of 14:22, 8 May 2025


Simple commands with SLURM

You can obtain information on the Slurm partitions that accept jobs using the sinfo command:

   $ sinfo
   PARTITION AVAIL  TIMELIMIT  NODES  STATE NODELIST
   test         up       1:00      1   idle gnt-usiu-gpu-00.kenet.or.ke
   gpu1         up 1-00:00:00      1   idle gnt-usiu-gpu-00.kenet.or.ke
   normal*      up 1-00:00:00      1   idle gnt-usiu-gpu-00.kenet.or.ke


The test partition is reserved for testing and has a very short time limit. The normal partition is for CPU-only jobs, and the gpu1 partition is reserved for GPU jobs. Both production partitions limit individual jobs to 24 hours at a time.
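Because sinfo prints plain text, you can post-process it with standard tools. A minimal sketch, using the sample snapshot from above embedded as a variable so the pipeline runs even without a Slurm installation:

```shell
# Sample sinfo snapshot from above, embedded so this runs without Slurm.
sinfo_output='PARTITION AVAIL  TIMELIMIT  NODES  STATE NODELIST
test         up       1:00      1   idle gnt-usiu-gpu-00.kenet.or.ke
gpu1         up 1-00:00:00      1   idle gnt-usiu-gpu-00.kenet.or.ke
normal*      up 1-00:00:00      1   idle gnt-usiu-gpu-00.kenet.or.ke'

# List each partition with its time limit (skip the header row).
echo "$sinfo_output" | awk 'NR > 1 { print $1, $3 }'
```

On a live cluster you can also ask sinfo directly for a per-partition view, e.g. `sinfo --partition=normal`, which is a standard Slurm option.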

Showing The Queue

The Slurm squeue command lists all submitted jobs and gives you an indication of how busy the cluster is, as well as the status of all running or waiting jobs. Jobs that have completed exit the queue and will not appear in this list.

   $ squeue 
   JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)
    63    normal     gpu1   jotuya  R       0:03      1 gnt-usiu-gpu-00.kenet.or.ke
   $
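On a busy cluster you usually only care about your own jobs; squeue accepts a `--user` (`-u`) filter for exactly this. The runnable sketch below gets the same effect by filtering the sample output above with awk, embedded here so it works without Slurm:

```shell
# Sample squeue snapshot from above, embedded so this runs without Slurm.
squeue_output='JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)
 63    normal     gpu1   jotuya  R       0:03      1 gnt-usiu-gpu-00.kenet.or.ke'

# Keep the header row plus any rows whose USER column matches "jotuya"
# (on a live cluster, `squeue -u $USER` does this for you server-side).
echo "$squeue_output" | awk 'NR == 1 || $4 == "jotuya"'
```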


Submitting Your First Job

Create a Submission Script for Quantum Espresso

You require a submission script, which is a plain text file with all the instructions for the command you intend to run. Retrieve the example files into your scratch directory from this GitHub repository: https://github.com/Materials-Modelling-Group/training-examples

 cd ~/localscratch/
 git clone https://github.com/Materials-Modelling-Group/training-examples.git
 cd training-examples

and in this directory we will place the following content in a file:

 #!/bin/bash

 #SBATCH -J testjob            # Job name
 #SBATCH -o job.%j.out         # Name of stdout output file (%j expands to jobId)
 #SBATCH -e %j.err             # Name of stderr output file
 #SBATCH --partition=cpu_only  # Queue
 #SBATCH --nodes=1             # Total number of nodes requested
 #SBATCH --ntasks=4            # Total number of MPI tasks (matches mpirun -np 4 below)
 #SBATCH --cpus-per-task=1     # CPU cores per MPI task
 #SBATCH --time=00:03:00       # Run time (hh:mm:ss) - 3 minutes
  
 # Launch MPI-based executable
 module load applications/qespresso/7.3.1 
 
 cd $HOME/localscratch/training-examples 
 mpirun -np 4  pw.x <al.scf.david.in > output.out

Put this in a file called *test.slurm*

Submitting the Job to the Queue

The slurm sbatch command provides the means to submit batch jobs to the queue:

   $ sbatch test.slurm 
   Submitted batch job 64
   $

This will run the named program on 4 cores. Note that the parallelism is built into the program: if the program itself is not parallelised, running on multiple cores will not provide any benefit.
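Once submitted, the job ID (64 in the example above) is the handle for everything else: `squeue -j 64` shows the job's status, `scancel 64` cancels it, and, because of the `-o job.%j.out` directive in the script, stdout lands in `job.64.out`. A minimal sketch of deriving that output filename from a job ID:

```shell
# The -o job.%j.out directive expands %j to the job ID, so for job 64
# the stdout file is job.64.out. Reconstruct that name in the shell:
job_id=64
stdout_file="job.${job_id}.out"
echo "$stdout_file"   # prints "job.64.out"
```

After the job finishes, inspect that file (e.g. `cat job.64.out`) to see the program's output and any messages from Slurm.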

Next: Basic_Usage:_GPU_Based_Resources_With_Slurm

Up: HPC_Usage