GPU Cloud VMs

Preconfigured GPU appliances

KENET provides a set of preconfigured virtual machine (VM) appliances, each with one of the following codes preinstalled:

  1. Quantum Espresso
  2. YAMBO
  3. SIESTA
  4. GROMACS
  5. TensorFlow
  6. PyTorch

To request access, please apply through this form: [1]. The appliances require no user configuration; each of the appliances listed above comes with its code ready to run with GPU support.

The codes can be run directly from the terminal. Alternatively, the SLURM job scheduler is also installed on each VM, and the codes can be run through the scheduler instead.
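
For example, once logged in to an appliance you can confirm that the GPU is visible and that the scheduler is running. This is a quick sanity check, assuming the standard NVIDIA driver tooling is present on the VM:

   $ nvidia-smi     # show the GPU and driver status
   $ sinfo          # list SLURM partitions and node states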

GROMACS GPU VM usage

In the GROMACS GPU VM, GROMACS and MPI are available. To run GROMACS, you can use the following:

   $ mpirun -np 1  /usr/local/bin/gmx_mpi 
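
Running gmx_mpi on its own only prints the help text; a real simulation invokes one of its subcommands. A minimal sketch of a GPU run, assuming a prepared run input file named topol.tpr (the file name and flag choices here are examples, not fixed by the appliance):

   $ mpirun -np 1 gmx_mpi mdrun -deffnm topol -nb gpu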

Advanced usage with SLURM:

To run GROMACS in the GPU VM with SLURM, create a submission script with the following contents:

  #!/bin/bash
  #SBATCH --job-name="example-name"
  #SBATCH --get-user-env
  #SBATCH --output=_scheduler-stdout.txt
  #SBATCH --error=_scheduler-stderr.txt
  #SBATCH --nodes=1
  #SBATCH --ntasks-per-node=1
  #SBATCH --cpus-per-task=1
  #SBATCH --time=23:58:20
  #SBATCH --partition=jobs

  export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK   # match threads to the allocated CPUs
  mpirun -np 1 gmx_mpi  ...

Give the file a name like job.mpi, edit the last line to include your GROMACS command, and submit it with SLURM:

  sbatch job.mpi
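
After submitting, the job can be followed with the usual SLURM tools; the output file name below is the one set in the script above:

   $ squeue -u $USER                  # check the state of your job
   $ tail -f _scheduler-stdout.txt    # follow the job output as it runs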

Quantum Espresso GPU VM usage

In the QE GPU VM, Quantum Espresso and MPI are available. To run it, you can use the following:

   $ mpirun -np 1  /usr/local/bin/pw.x 
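
pw.x reads its input from a file passed with -in (or from standard input). A minimal sketch of an SCF run, where scf.in and scf.out are example file names:

   $ mpirun -np 1 /usr/local/bin/pw.x -in scf.in > scf.out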

Advanced usage with SLURM:

To run Quantum Espresso in the GPU VM with SLURM, create a submission script with the following contents:

  #!/bin/bash
  #SBATCH --job-name="example-name"
  #SBATCH --get-user-env
  #SBATCH --output=_scheduler-stdout.txt
  #SBATCH --error=_scheduler-stderr.txt
  #SBATCH --nodes=1
  #SBATCH --ntasks-per-node=1
  #SBATCH --cpus-per-task=1
  #SBATCH --time=23:58:20
  #SBATCH --partition=jobs

  mpirun -np 1 pw.x  ...

Give the file a name like job.mpi, edit the last line to include your commands to pw.x, and submit it with SLURM:

  sbatch job.mpi


YAMBO GPU VM usage

In the YAMBO GPU VM, YAMBO and MPI are available. To run YAMBO, you can use the following:

   $ mpirun -np 1  /usr/local/bin/yambo
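
YAMBO is normally pointed at an input file and a job label. A minimal sketch, assuming the SAVE databases have already been generated (for example with p2y from a Quantum Espresso run); yambo.in and gw_run are example names:

   $ mpirun -np 1 yambo -F yambo.in -J gw_run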

Advanced usage with SLURM:

To run YAMBO in the GPU VM with SLURM, create a submission script with the following contents:

  #!/bin/bash
  #SBATCH --job-name="example-name"
  #SBATCH --get-user-env
  #SBATCH --output=_scheduler-stdout.txt
  #SBATCH --error=_scheduler-stderr.txt
  #SBATCH --nodes=1
  #SBATCH --ntasks-per-node=1
  #SBATCH --cpus-per-task=1
  #SBATCH --time=23:58:20
  #SBATCH --partition=jobs

  mpirun -np 1 yambo  ...

Give the file a name like job.mpi, edit the last line to include your commands to yambo, and submit it with SLURM:

  sbatch job.mpi
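
When the job completes, its output is in the files named in the script:

   $ less _scheduler-stdout.txt    # application output
   $ less _scheduler-stderr.txt    # errors, if any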

Up: HPC_Usage