Difference between revisions of "Module system"
From KENET Training
Revision as of 08:26, 30 April 2025
HPC facilities provide a user-friendly environment for managing a large number of codes, and multiple versions of those codes. On this cluster, the Lmod [1] module system is available to manage the user environment.
To see which codes are available, use the module command:
$ module avail
-------------------------- /usr/share/modulefiles -------------------------------------
mpi/openmpi-x86_64
------------------------ /opt/ohpc/pub/modulefiles --------------------------------------
applications/gpu/gromacs/2024.4 applications/gpu/qespresso/7.3.1
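If you are looking for a specific code, you can pass its name to module avail to narrow the listing (a sketch; gromacs here is just an example search term):

```shell
# List only modules whose names match "gromacs".
module avail gromacs

# "module spider" performs a broader search across all module trees.
module spider gromacs
```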
The module system also provides a set of additional commands for managing modules:
module load, to activate an environment:
module load applications/gpu/gromacs/2024.4
module unload, to deactivate an environment:
module unload applications/gpu/gromacs/2024.4
module purge, to unload all currently loaded modules at once:
module purge
and module list, to show the currently loaded modules:
module list
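Put together, a typical session might look like the following sketch, using the Gromacs module shown by module avail above:

```shell
# Activate the Gromacs environment.
module load applications/gpu/gromacs/2024.4

# Confirm that the module is loaded.
module list

# Deactivate just this module, or clear everything with purge.
module unload applications/gpu/gromacs/2024.4
module purge
```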
Here are some of the pre-configured GPU-capable codes available on the cluster:
- Quantum Espresso
- Gromacs
- TensorFlow (in the conda-25.1.1-python-3.9.21 module)
- PyTorch (in the conda-25.1.1-python-3.9.21 module)
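To use TensorFlow or PyTorch, the conda module mentioned above must be loaded first. A minimal sketch, assuming the module appears under the name listed above (check module avail for the exact path on your cluster):

```shell
# Load the conda distribution that provides TensorFlow and PyTorch.
module load conda-25.1.1-python-3.9.21

# Verify that the Python packages are importable.
python -c "import tensorflow; print(tensorflow.__version__)"
python -c "import torch; print(torch.__version__)"
```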
Next: Advanced_Usage
Up: HPC_Usage