Module system
[[File:Lmod-logo.jpeg|250px]]
HPC facilities provide a user-friendly environment for managing a large number of codes, and multiple versions of those codes. On this cluster, the Lmod [https://modules.readthedocs.io/en/latest/] module system is available to manage the user environment.
To see which codes are available, use the <code>module</code> command:
<code bash>
$ module avail
-------------------------- /usr/share/modulefiles -------------------------------------
mpi/openmpi-x86_64
------------------------ /opt/ohpc/pub/modulefiles --------------------------------------
applications/gpu/gromacs/2024.4    applications/gpu/qespresso/7.3.1
</code>
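To inspect what loading a particular module would change in your environment (paths, environment variables, dependencies), <code>module show</code> prints the contents of its modulefile. The exact output depends on how the modulefile was written on this cluster, so treat this as an illustration:
<code bash>
$ module show applications/gpu/gromacs/2024.4
</code>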
The module system also provides a set of additional commands for managing modules.

<code>module load</code> activates an environment:
<code bash>
module load applications/gpu/gromacs/2024.4
</code>

<code>module unload</code> deactivates it again:
<code bash>
module unload applications/gpu/gromacs/2024.4
</code>

<code>module purge</code> unloads all currently loaded modules:
<code bash>
module purge
</code>

and <code>module list</code> shows the loaded modules:
<code bash>
module list
</code>
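Putting these commands together, a typical interactive session might look like the sketch below. The module names are the ones listed by <code>module avail</code> above; the particular sequence is just an example:
<code bash>
# start from a clean environment
module purge

# load the GROMACS build listed by `module avail`
module load applications/gpu/gromacs/2024.4

# confirm what is currently loaded
module list

# switch codes: drop GROMACS, pick up Quantum ESPRESSO
module unload applications/gpu/gromacs/2024.4
module load applications/gpu/qespresso/7.3.1
</code>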
== Available Codes ==
Here are some of the pre-configured GPU-capable codes available on the cluster (a sketch of loading one of them inside a batch job follows this list):
* '''Quantum Espresso'''
* '''Gromacs'''
* '''Tensorflow''' (in the conda-25.1.1-python-3.9.21 module)
* '''PyTorch''' (in the conda-25.1.1-python-3.9.21 module)
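In practice these modules are usually loaded inside a Slurm batch script rather than interactively. The sketch below is an illustration only: the partition name, GPU request syntax, job resources, and input file are assumptions, not cluster-specific values from this page.
<code bash>
#!/bin/bash
#SBATCH --job-name=gmx-test
#SBATCH --partition=gpu        # hypothetical partition name; check your cluster's docs
#SBATCH --gres=gpu:1           # request one GPU (syntax may differ per site)
#SBATCH --time=01:00:00

# start clean, then load the GROMACS module listed above
module purge
module load applications/gpu/gromacs/2024.4

# run a GROMACS molecular dynamics step; topol.tpr is a placeholder input file
gmx mdrun -s topol.tpr
</code>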
== [https://asciinema.org/a/NXTdW10S7kgN31i4PhtWmqK32 Watch Demo] ==
Next:
[[Debuging_and_Interactive_Slurm_Jobs|Debugging and Interactive Slurm Jobs]]

Up:
[[HPC_Usage|HPC Usage]]