GPU Cloud VMs

Preconfigured GPU appliances

KENET provides a set of preconfigured Virtual Machine appliances with the following codes preinstalled:

  1. Quantum Espresso
  2. YAMBO
  3. SIESTA
  4. GROMACS
  5. TensorFlow
  6. PyTorch

To request access, please apply through this form: [1]. The appliances require no user configuration; each appliance listed above comes with its code preinstalled and ready with GPU support.

The codes can be run directly from the terminal. Alternatively, the SLURM job scheduler is also installed on the VM, so the codes can be run through the scheduler instead.

GROMACS GPU VM usage

In the GROMACS GPU VM, GROMACS and MPI are available. To run GROMACS, you can use the following:

   $ mpirun -np 1  /usr/local/bin/gmx_mpi 
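
For example, a typical GPU-accelerated run might look like the following. This is a minimal sketch: the input name md is an assumption for illustration, and -nb gpu asks mdrun to offload the nonbonded calculations to the GPU:

   $ mpirun -np 1 /usr/local/bin/gmx_mpi mdrun -deffnm md -nb gpu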

Advanced usage with SLURM

To run GROMACS on the GPU VM with SLURM, create a submission script with the following contents:

  #!/bin/bash
  #SBATCH --job-name="example-name"
  #SBATCH --get-user-env
  #SBATCH --output=_scheduler-stdout.txt
  #SBATCH --error=_scheduler-stderr.txt
  #SBATCH --nodes=1
  #SBATCH --ntasks-per-node=1
  #SBATCH --cpus-per-task=2
  #SBATCH --time=23:58:20
  #SBATCH --partition=jobs

  # Use as many OpenMP threads as CPUs requested with --cpus-per-task above
  export OMP_NUM_THREADS=2
  mpirun -np 1 gmx_mpi  ...

Give the file a name like job.mpi, edit the last line to include your GROMACS command, and submit it with SLURM:

  sbatch job.mpi
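
After submission, you can monitor the job with SLURM's standard squeue command; once the job runs, its output is written to the _scheduler-stdout.txt file named in the script above:

   $ squeue -u $USER
   $ cat _scheduler-stdout.txt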