Module system

From KENET Training

Revision as of 08:29, 30 April 2025

HPC facilities provide a user-friendly environment for managing a large number of codes and the different versions of those codes. On this cluster, the Lmod [1] module system is available to manage the user environment. To see which codes are available, use the module command:

 $ module avail
  -------------------------- /usr/share/modulefiles -------------------------------------
  mpi/openmpi-x86_64
  ------------------------ /opt/ohpc/pub/modulefiles --------------------------------------
  applications/gpu/gromacs/2024.4    applications/gpu/qespresso/7.3.1

The module system also provides a set of additional commands for managing modules: module load activates an environment:

   module load applications/gpu/gromacs/2024.4

module unload deactivates it:

   module unload applications/gpu/gromacs/2024.4

module purge unloads all currently loaded modules at once:

   module purge

and module list shows the currently loaded modules:

   module list
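A typical session combining these commands might look like the following sketch. The module names are taken from the module avail listing above; the exact output will vary from cluster to cluster, and these commands only work on a host where Lmod is installed:

```shell
# Load the Gromacs module listed under /opt/ohpc/pub/modulefiles
module load applications/gpu/gromacs/2024.4

# Confirm it is active
module list

# Swap it out for Quantum Espresso
module unload applications/gpu/gromacs/2024.4
module load applications/gpu/qespresso/7.3.1

# Return to a clean environment
module purge
module list
```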

  1. Available Codes

Here are some of the pre-configured GPU capable codes available on the cluster:

  • Quantum Espresso
  • Gromacs
  • TensorFlow (in the conda-25.1.1-python-3.9.21 module)
  • PyTorch (in the conda-25.1.1-python-3.9.21 module)
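As a sketch of how one of these codes might be used in a batch job, the fragment below loads the conda module mentioned above and checks whether PyTorch can see a GPU. The job name is hypothetical, the GPU request syntax depends on the site's Slurm configuration, and the exact module path should be confirmed against the module avail output on this cluster:

```shell
#!/bin/bash
#SBATCH --job-name=torch-gpu-test   # hypothetical job name
#SBATCH --gres=gpu:1                # request one GPU (site-specific; check local docs)

# Load the conda environment that provides PyTorch and TensorFlow
# (module path assumed from the list above -- verify with `module avail`)
module load conda-25.1.1-python-3.9.21

# Report whether PyTorch can see a CUDA device
python -c "import torch; print(torch.cuda.is_available())"
```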

Next: Advanced_Usage

Up: HPC_Usage