GPU Cloud VMs


Preconfigured GPU appliances

KENET provides a set of preconfigured Virtual Machine appliances with the following codes:

  1. Quantum Espresso
  2. YAMBO
  3. SIESTA
  4. GROMACS
  5. Tensorflow
  6. PyTorch

To request access, please apply through this form: [1]. The appliances require no user configuration; each of the appliances listed above has the corresponding code preinstalled with GPU support.

The codes can be run directly on the terminal; alternatively, since the SLURM job scheduler is also installed on each VM, they can be submitted through the scheduler.
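Before submitting anything, you can confirm the scheduler is up and see the available partitions (the submission scripts below assume a partition named jobs):

 $ sinfo     # list partitions and node states
 $ squeue    # list queued and running jobs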


Gromacs GPU VM usage

In the Gromacs GPU VM, GROMACS and MPI are available. To run GROMACS, you can use the following:

   $ mpirun -np 1  /usr/local/bin/gmx_mpi 
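To confirm that this build has GPU acceleration compiled in, you can inspect the version banner; recent GROMACS builds print a "GPU support" line there:

 $ /usr/local/bin/gmx_mpi --version | grep -i "GPU support"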

We can retrieve some examples to work with:

 $ mkdir ~/membrane
 $ cd ~/membrane
 $ wget https://gitlab.com/gromacs/online-tutorials/membrane-protein/-/archive/main/membrane-protein-main.zip 
 $ unzip membrane-protein-main.zip
 $ mv  membrane-protein-main/*  .
 $ mkdir run
 $ cd run 
 $ cp -rf ../data/input/charmm-gui-1MAL/gromacs/{step5_input.gro,step5_input.pdb,topol.top,index.ndx,toppar}  . 
 $ cp ../data/input/mdp/*.mdp . 

and finally use grompp to assemble the energy-minimization run input (minimization.tpr):

 $ mpirun -np 1  /usr/local/bin/gmx_mpi grompp -f step6.0_minimization.mdp -o minimization.tpr -c step5_input.gro  -r step5_input.gro -p topol.top
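grompp only preprocesses the input files into the portable run file minimization.tpr; the minimization itself is then executed with mdrun, for example using the standard -deffnm flag to name all input and output files after the minimization prefix:

 $ mpirun -np 1  /usr/local/bin/gmx_mpi mdrun -v -deffnm minimization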

Advanced usage with SLURM:

To run GROMACS on the GPU VM with SLURM, create a submission script with the following contents:

#!/bin/bash
#SBATCH --job-name="example-name"
#SBATCH --get-user-env
#SBATCH --output=_scheduler-stdout.txt
#SBATCH --error=_scheduler-stderr.txt
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=1
#SBATCH --time=23:58:20
#SBATCH --partition=jobs

export OMP_NUM_THREADS=2
cd ~/membrane/run
mpirun -np 1 gmx_mpi grompp -f step6.0_minimization.mdp -o minimization.tpr -c step5_input.gro -r step5_input.gro -p topol.top

Give the file a name like job.mpi, edit the last line to contain your GROMACS commands, and submit it with SLURM:

 sbatch  job.mpi
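Once submitted, the job can be monitored with the standard SLURM commands, and its output will appear in the files named by the --output and --error directives:

 $ squeue                       # check the job's position in the queue
 $ scontrol show job <jobid>    # detailed state of a single job
 $ tail -f _scheduler-stdout.txt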

Watch Gromacs Demo


Quantum Espresso GPU VM usage

In the QE GPU VM, Quantum ESPRESSO and MPI are available. To run it, you can use the following:

   $ mpirun -np 1  /usr/local/bin/pw.x 

We can retrieve some examples to work with:

 mkdir ~/examples
 cd ~/examples/
 git clone https://github.com/Materials-Modelling-Group/training-examples.git
 cd  training-examples
 
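Since pw.x reads its input on standard input, a calculation can be run directly from the terminal by redirecting an input file into it; for example, with the al.scf.david.in input used in the SLURM script below:

 $ mpirun -np 1  /usr/local/bin/pw.x < al.scf.david.in > al.scf.david.out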

Advanced usage with SLURM:

To run Quantum ESPRESSO on the GPU VM with SLURM, create a submission script with the following contents:

#!/bin/bash
#SBATCH --job-name="example-name"
#SBATCH --get-user-env
#SBATCH --output=_scheduler-stdout.txt
#SBATCH --error=_scheduler-stderr.txt
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=1
#SBATCH --time=23:58:20
#SBATCH --partition=jobs

cd $HOME/examples/training-examples
mpirun -np 1 pw.x < al.scf.david.in > output.out

Give the file a name like job.mpi, edit the last line to contain your pw.x commands, and submit it with SLURM:

  sbatch  job.mpi

Watch Quantum Espresso Demo


YAMBO GPU VM usage

In the YAMBO GPU VM, YAMBO and MPI are available. To run YAMBO, you can use the following:

   $ mpirun -np 1  /usr/local/bin/yambo

We can retrieve some examples to work with:

mkdir examples
cd examples
wget https://media.yambo-code.eu/educational/tutorials/files/Silicon.tar.gz
tar -xf Silicon.tar.gz
cd Silicon/YAMBO/4x4x4

We have some prepared inputs we can use to run a convergence calculation; we can run one as follows:

mpirun -np 1 yambo -F Inputs/01HF_corrections -J HF_XXRy
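YAMBO labels the run's databases and outputs with the string passed to -J and writes a human-readable report file; the exact file names depend on the runlevel, but they follow YAMBO's r-* (report) and o-* (output) naming convention, so something along these lines should show the results:

 $ ls r-HF_XXRy* o-HF_XXRy*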

Advanced usage with SLURM:

To run YAMBO on the GPU VM with SLURM, create a submission script with the following contents:

#!/bin/bash
#SBATCH --job-name="example-name"
#SBATCH --get-user-env
#SBATCH --output=_scheduler-stdout.txt
#SBATCH --error=_scheduler-stderr.txt
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=1
#SBATCH --time=23:58:20
#SBATCH --partition=jobs

cd $HOME/examples/Silicon/YAMBO/4x4x4
mpirun -np 1 yambo -F Inputs/01HF_corrections -J HF_XXRy

Give the file a name like job.mpi, edit the last line to contain your YAMBO commands, and submit it with SLURM:

  sbatch  job.mpi

Watch Yambo Demo


Tensorflow GPU VM usage

We will try out the TensorFlow MNIST example from the documentation: [2]. After logging in, there are instructions on how to activate the preconfigured Conda environment:

   $ conda activate tf

This environment is preconfigured with TensorFlow and CUDA support. Next, we need to get the data and code to run, starting with the tensorflow_datasets package:

   $ pip3 install tensorflow_datasets
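Before starting a long run it is worth confirming that TensorFlow actually sees the GPU; tf.config.list_physical_devices is the standard API for this and should list at least one GPU device:

 $ python3 -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"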

We can now attempt to run some code. Place the following code in a plain text file and call it `example.py`:

import tensorflow as tf
import tensorflow_datasets as tfds

(ds_train, ds_test), ds_info = tfds.load(
    'mnist',
    split=['train', 'test'],
    shuffle_files=True,
    as_supervised=True,
    with_info=True,
)

# Training pipeline
def normalize_img(image, label):
  """Normalizes images: `uint8` -> `float32`."""
  return tf.cast(image, tf.float32) / 255., label

ds_train = ds_train.map(
    normalize_img, num_parallel_calls=tf.data.AUTOTUNE)
ds_train = ds_train.cache()
ds_train = ds_train.shuffle(ds_info.splits['train'].num_examples)
ds_train = ds_train.batch(128)
ds_train = ds_train.prefetch(tf.data.AUTOTUNE)

# Evaluation pipeline
ds_test = ds_test.map(
    normalize_img, num_parallel_calls=tf.data.AUTOTUNE)
ds_test = ds_test.batch(128)
ds_test = ds_test.cache()
ds_test = ds_test.prefetch(tf.data.AUTOTUNE)

# Create and train the model: 
model = tf.keras.models.Sequential([
  tf.keras.layers.Flatten(input_shape=(28, 28)),
  tf.keras.layers.Dense(128, activation='relu'),
  tf.keras.layers.Dense(10)
])
model.compile(
    optimizer=tf.keras.optimizers.Adam(0.001),
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=[tf.keras.metrics.SparseCategoricalAccuracy()],
)
 
model.fit(
    ds_train,
    epochs=6,
    validation_data=ds_test,
)

And now we can test it:

 $ python  example.py
  ...
 Epoch 1/6 
 469/469 [==============================] - 3s 2ms/step - loss: 0.3494 - sparse_categorical_accuracy: 0.9040 - val_loss: 0.1970 - 
 val_sparse_categorical_accuracy: 0.9431
 Epoch 2/6
 469/469 [==============================] - 1s 2ms/step - loss: 0.1655 - sparse_categorical_accuracy: 0.9530 - val_loss: 0.1394 - 
 val_sparse_categorical_accuracy: 0.9576
 Epoch 3/6
 469/469 [==============================] - 1s 2ms/step - loss: 0.1189 - sparse_categorical_accuracy: 0.9660 - val_loss: 0.1096 - 
 val_sparse_categorical_accuracy: 0.9666
 ...
 Epoch 6/6
 469/469 [==============================] - 1s 2ms/step - loss: 0.0599 - sparse_categorical_accuracy: 0.9827 - val_loss: 0.0775 - 
 val_sparse_categorical_accuracy: 0.9769

We have run TensorFlow+Keras on the MNIST dataset, with a final validation accuracy of about 98%.

Watch Tensorflow Demo


PyTorch GPU VM Usage

We will run the MNIST example from the PyTorch documentation, available here: [3]. Once you have logged in, there are instructions on how to activate the preconfigured PyTorch Conda environment.

 $ conda activate pt
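As on the TensorFlow VM, you can quickly verify that PyTorch can see the GPU; torch.cuda.is_available() is the standard check and should print True:

 $ python3 -c "import torch; print(torch.cuda.is_available())"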

Once this is activated, we can retrieve the Python code for the example, place it in a directory, and run it:

 $ mkdir mnist    # creating a working dir
 $ cd  mnist      # changing directory to the working dir
 $ wget https://raw.githubusercontent.com/pytorch/examples/refs/heads/main/mnist/main.py

and finally, we are ready to run it:

 $ python  main.py
  Downloading https://ossci-datasets.s3.amazonaws.com/mnist/t10k-labels-idx1-ubyte.gz
  Downloading https://ossci-datasets.s3.amazonaws.com/mnist/t10k-labels-idx1-ubyte.gz to ../data/MNIST/raw/t10k-labels-idx1-ubyte.gz
  100.0%
  Extracting ../data/MNIST/raw/t10k-labels-idx1-ubyte.gz to ../data/MNIST/raw
  Train Epoch: 1 [0/60000 (0%)]	Loss: 2.277304
  Train Epoch: 1 [640/60000 (1%)]	Loss: 1.823465
  ...
  Train Epoch: 14 [58880/60000 (98%)]	Loss: 0.013244
  Train Epoch: 14 [59520/60000 (99%)]	Loss: 0.000718
  Test set: Average loss: 0.0268, Accuracy: 9918/10000 (99%)

This code downloads the MNIST training data, trains a convolutional-neural-network model, and prints a summary of the accuracy at the end (99%). There is no need to install PyTorch since it is already preconfigured in the `pt` environment.
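The upstream script is driven by argparse, so its behaviour can be adjusted from the command line; the pytorch/examples MNIST script exposes flags such as --epochs and --save-model, though the exact set may change upstream, so check the help first:

 $ python main.py --help
 $ python main.py --epochs 5 --save-model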

Watch PyTorch Demo
