diff --git a/docs/environment/spack.md b/docs/environment/spack.md
new file mode 100644
index 000000000..f989e887e
--- /dev/null
+++ b/docs/environment/spack.md
@@ -0,0 +1,280 @@
+# Spack: A Package Manager on the UL HPC Platform
+
+## Introduction to Spack
+
+[Spack](https://spack.io/about/) is an open-source package manager designed for installing, building, and managing scientific software across a wide range of systems, from personal computers to supercomputers. It supports multiple versions, compilers, and configurations of software packages, all coexisting on a single system without conflict. Spack provides [over 8,500](https://packages.spack.io/) official software packages as of the `v1.0.0` release. Additionally, users can create [custom packages](https://spack-tutorial.readthedocs.io/en/latest/tutorial_packaging.html) via `package.py` files for software not yet available among the pre-defined Spack [packages](https://spack.readthedocs.io/en/latest/package_fundamentals.html).
+
+??? question "Why use automatic building tools like [Easybuild](https://docs.easybuild.io) or [Spack](https://spack.io/) on HPC environments?"
+
+    While it may seem obvious to some of you, scientific software is often surprisingly difficult to build. Not all software packages rely on standard build tools like Autotools/Automake (the famous `configure; make; make install`) or CMake. Even with standard build tools, parsing the available options to ensure that the build matches the underlying hardware is time-consuming and error-prone. Furthermore, scientific software often contains hardcoded build parameters, or the documentation on how to optimize the build is poorly maintained.
+
+    Software build and installation frameworks like Easybuild or Spack allow reproducible builds, handle complex dependency trees, and automatically generate the corresponding environment modulefiles (e.g., for LMod) for easy usage. On the ULHPC platform, EasyBuild is the primary tool used to ensure optimized software builds. However, Spack is also available and can be valuable in more flexible or user-driven contexts. Some HPC sites use both [1].
+
+    _Resources_
+
+    1. See for instance this [talk](https://www.archer2.ac.uk/training/courses/200617-spack-easybuild/) by William Lucas at EPCC.
+
+??? question "When should users consider [Spack](https://spack.io/)?"
+
+    While EasyBuild is the primary and most integrated software management system on ULHPC, there are specific scenarios where users should consider using Spack.
+
+    Spack is particularly suitable when users need greater flexibility and customization in building software. For example, if a user needs to select specific compilers, enable or disable features like MPI or CUDA support, or manage large and complex dependency chains, Spack offers a more configurable environment than EasyBuild. While EasyBuild is often favored by HPC system administrators for its robust and repeatable system-wide deployments, Spack is more oriented towards HPC developers and advanced users due to its easier-to-tweak nature.
+
+    Additionally, Spack supports user-level installations, allowing users to create isolated environments without administrative privileges, making it highly suitable for personal or experimental setups. Spack's environment definition files (e.g., `spack.yaml`) further enhance its utility by allowing users to precisely replicate the same software stack elsewhere.
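+    As an illustration, a minimal `spack.yaml` of this kind could look as follows (the package list is just an example):
+
+    ``` { .yaml }
+    spack:
+      specs:
+      - hdf5@1.14 +mpi
+      - py-numpy
+      view: true
+      concretizer:
+        unify: true
+    ```
+
+    Carrying such a file along with a project lets the same stack be rebuilt elsewhere by activating the environment and running `spack install`.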
+    In essence, Spack is the better choice when customization, portability, or broader package availability is required beyond what EasyBuild typically offers.
+
+## Setting up Spack
+
+!!! warning "Connect to a compute node"
+
+    For all tests and compilations with Spack, it is essential to run on a [**compute node**](../systems/iris/compute.md), not on a [**login/access node**](../connect/access.md). For detailed information on resource allocation and job submission, visit the [**Slurm Job Management System**](../slurm/index.md).
+
+### Clone & Setup Spack
+
+Cloning and setting up Spack in the `$HOME` directory is recommended, as it provides significantly better performance for handling small files compared to `$SCRATCH`.
+
+To clone the Spack repository:
+
+``` { .sh .copy }
+cd $HOME
+git clone --depth=2 --branch=releases/v1.0.0 https://github.com/spack/spack.git
+```
+
+A plain clone of the Spack repository creates a directory named `spack` and checks out the `develop` branch by default. For improved stability, switching to the latest official [release](https://github.com/spack/spack/releases) is recommended; the command above therefore clones the `releases/v1.0.0` branch directly. To pin the checkout to the current release tag, `v1.0.0`:
+
+``` { .sh .copy }
+cd spack
+git checkout v1.0.0
+```
+
+To make Spack available in the shell session, source its environment setup script:
+
+``` { .sh .copy }
+source $HOME/spack/share/spack/setup-env.sh
+```
+
+For convenience, this line can be added to the `.bashrc` file to make Spack automatically available in every new shell session.
+
+??? note "Verifying the Spack installation"
+
+    Once Spack is sourced, the following command should display the path to the Spack executable, confirming that the environment is correctly set up:
+
+    ``` { .sh .copy }
+    which spack
+    ```
+
+    !!! note "Expected output resembles:"
+
+        ``` { .sh }
+        spack ()
+        {
+            : this is a shell function from: /home/users/<username>/spack/share/spack/setup-env.sh;
+            : the real spack script is here: /home/users/<username>/spack/bin/spack;
+            _spack_shell_wrapper "$@";
+            return $?
+        }
+        ```
+
+    This confirms that Spack's environment is correctly loaded and ready for use.
+
+### Spack Configuration Scopes
+
+Spack's behavior is controlled by [configuration files](https://spack.readthedocs.io/en/latest/configuration.html) in different scopes, which determine settings such as installation paths, compilers, and package preferences. Spack's default configuration settings reside in `$SPACK_ROOT/etc/spack/defaults`. Spack provides six distinct configuration scopes to handle this customization, applied in order of decreasing priority:
+
+| Scope        | Directory                                         |
+|--------------|---------------------------------------------------|
+| Environment  | In environment base directory (`spack.yaml`)      |
+| Custom       | Custom directory, specified with `--config-scope` |
+| User         | `~/.spack/`                                       |
+| Site         | `$SPACK_ROOT/etc/spack/`                          |
+| System       | `/etc/spack/`                                     |
+| Defaults     | `$SPACK_ROOT/etc/spack/defaults/`                 |
+
+The user configuration scope, stored in `~/.spack/`, is ideal for defining personal preferences, compiler settings, and package defaults that apply across multiple projects and environments. The settings of this scope affect all instances of Spack.
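+To see which value a setting effectively takes and which scope it comes from, the merged configuration can be inspected with Spack's standard query commands, for example:
+
+``` { .sh .copy }
+# print the merged `config` section, combining all scopes
+spack config get config
+
+# annotate each line of the `packages` section with the file it comes from
+spack config blame packages
+```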
+For more details, see the [official tutorials](https://spack-tutorial.readthedocs.io/en/isc22/tutorial_configuration.html#configs-tutorial).
+
+### Define System-Provided Packages
+
+Spack allows fine-grained control over how software is built through the [`packages.yaml`](https://spack.readthedocs.io/en/latest/packages_yaml.html) configuration file. It enables users to choose preferred implementations for virtual dependencies, select particular compilers, and even configure Spack to use externally installed software that is already available on the system, avoiding the need to rebuild everything from source.
+
+Spack's build defaults are in the `etc/spack/defaults/packages.yaml` file. Most commonly, users define custom preferences in a user-level [configuration scope](https://spack.readthedocs.io/en/latest/configuration.html#configuration-scopes), which should be placed at `~/.spack/packages.yaml`.
+
+!!! question "Why is it crucial for users to define external packages in packages.yaml?"
+
+    While Spack can build everything from source, fundamental libraries like [MPI](../software/swsets/mpi.md) and [BLAS](https://www.netlib.org/blas/)/[LAPACK](https://www.netlib.org/lapack/) are often highly optimized and meticulously tuned by system administrators to leverage the specific hardware capabilities of the HPC clusters (e.g., network interconnects, CPU features, GPU architectures).
+
+    Using Spack's generic builds for these core libraries often results in sub-optimal performance compared to the finely tuned system-provided versions. Declaring optimized external packages in `packages.yaml` ensures that Spack-built applications link against the most performant versions available in the [ULHPC software collection](https://hpc-docs.uni.lu/software/), thereby maximizing the efficiency of scientific computations. It also avoids the overhead of rebuilding everything from source unnecessarily and guarantees that user code benefits from the HPC system's specialized hardware optimizations.
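+!!! note "Detecting externals automatically"
+
+    As a starting point, Spack can detect a number of common tools already installed on the system and register them in the user-scope `packages.yaml` (this only works for packages whose recipes support detection; module-based externals such as the compiler and MPI entries shown below still have to be declared manually):
+
+    ``` { .sh .copy }
+    spack external find
+    ```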
+To create a `packages.yaml` file at the user-level configuration scope `~/.spack/`:
+
+``` { .sh .copy }
+mkdir -p $HOME/.spack/
+touch $HOME/.spack/packages.yaml
+```
+
+Then, add the following contents, which instruct Spack to use the system-provided versions of `GCC`, `binutils`, and `OpenMPI` configured with the native fabrics:
+
+``` { .yaml .copy }
+packages:
+  gcc:
+    externals:
+    - spec: gcc@13.2.0+binutils languages:='c,c++,fortran'
+      modules:
+      - compiler/GCC/13.2.0
+      extra_attributes:
+        compilers:
+          c: /opt/apps/easybuild/systems/aion/rhel810-20250405/2023b/epyc/software/GCCcore/13.2.0/bin/gcc
+          cxx: /opt/apps/easybuild/systems/aion/rhel810-20250405/2023b/epyc/software/GCCcore/13.2.0/bin/g++
+          fortran: /opt/apps/easybuild/systems/aion/rhel810-20250405/2023b/epyc/software/GCCcore/13.2.0/bin/gfortran
+    buildable: false
+  binutils:
+    externals:
+    - spec: binutils@2.40
+      modules:
+      - tools/binutils/2.40-GCCcore-13.2.0
+    buildable: false
+  libevent:
+    externals:
+    - spec: libevent@2.1.12
+      modules:
+      - lib/libevent/2.1.12-GCCcore-13.2.0
+    buildable: false
+  libfabric:
+    externals:
+    - spec: libfabric@1.19.0
+      modules:
+      - lib/libfabric/1.19.0-GCCcore-13.2.0
+    buildable: false
+  libpciaccess:
+    externals:
+    - spec: libpciaccess@0.17
+      modules:
+      - system/libpciaccess/0.17-GCCcore-13.2.0
+    buildable: false
+  libxml2:
+    externals:
+    - spec: libxml2@2.11.5
+      modules:
+      - lib/libxml2/2.11.5-GCCcore-13.2.0
+    buildable: false
+  hwloc:
+    externals:
+    - spec: hwloc@2.9.2+libxml2
+      modules:
+      - system/hwloc/2.9.2-GCCcore-13.2.0
+    buildable: false
+  mpi:
+    buildable: false
+  munge:
+    externals:
+    - spec: munge@0.5.13
+      prefix: /usr
+    buildable: false
+  numactl:
+    externals:
+    - spec: numactl@2.0.16
+      modules:
+      - tools/numactl/2.0.16-GCCcore-13.2.0
+    buildable: false
+  openmpi:
+    variants: fabrics=ofi,ucx schedulers=slurm
+    externals:
+    - spec: openmpi@4.1.6
+      modules:
+      - mpi/OpenMPI/4.1.6-GCC-13.2.0
+    buildable: false
+  pmix:
+    externals:
+    - spec: pmix@4.2.6
+      modules:
+      - lib/PMIx/4.2.6-GCCcore-13.2.0
+    buildable: false
+  slurm:
+    externals:
+    - spec: slurm@23.11.10 sysconfdir=/etc/slurm
+      prefix: /usr
+    buildable: false
+  ucx:
+    externals:
+    - spec: ucx@1.15.0
+      modules:
+      - lib/UCX/1.15.0-GCCcore-13.2.0
+    buildable: false
+  zlib:
+    externals:
+    - spec: zlib@1.2.13
+      modules:
+      - lib/zlib/1.2.13-GCCcore-13.2.0
+    buildable: false
+```
+
+!!! note "Defining CUDA as an External Package"
+
+    Similarly, users can configure Spack to use a system-provided CUDA toolkit by adding an entry of the following form to the `packages.yaml` file, replacing the `cuda@<version>` spec and the module name placeholders with the CUDA toolkit available on the cluster. This avoids rebuilding CUDA from source and ensures compatibility with the system GPU drivers and libraries:
+
+    ``` { .yaml .copy }
+    packages:
+      cuda:
+        externals:
+        - spec: cuda@<version>
+          modules:
+          - <cuda module name>
+        buildable: false
+    ```
+
+## Installing Software with Spack
+
+### Spack Environments
+
+A Spack [environment](https://spack.readthedocs.io/en/latest/environments.html) lets users manage software and dependencies in an isolated and reproducible way.
+
+!!! info
+
+    On shared clusters, it is highly recommended to use Spack environments to keep installations clean, avoid conflicts, and simplify sharing or reproduction.
+
+To create and activate a Spack [environment](https://spack.readthedocs.io/en/latest/environments.html):
+
+``` { .sh .copy }
+spack env create test-env
+spack env activate test-env
+```
+
+This command creates a Spack environment in the directory `$SPACK_ROOT/var/spack/environments/test-env`. It also generates a `spack.yaml` file, the main configuration file where users specify the packages to install, the compilers to use, and other settings specific to the `test-env` environment. For more details, see the official [Spack Environments Tutorial](https://spack-tutorial.readthedocs.io/en/latest/tutorial_environments.html).
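+The state of the active environment can be checked at any time, and the environment can be left again, using standard Spack commands, for instance:
+
+``` { .sh .copy }
+spack env status      # show which environment is currently active
+spack find            # list the packages installed in the active environment
+spack env deactivate  # leave the environment
+```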
+### Spack Package Installation
+
+Spack makes it easy to install software [packages](https://spack-tutorial.readthedocs.io/en/pearc22/tutorial_packaging.html#what-is-a-spack-package) from its extensive repository. To [install any package](https://spack.readthedocs.io/en/latest/package_fundamentals.html#installing-and-uninstalling) listed by `spack list`, use the following command: `spack install <package-name>`
+
+!!! note "Spack Package Specs"
+
+    Spack uses a specific syntax to describe [package](https://spack.readthedocs.io/en/latest/packaging_guide_creation.html#structure-of-a-package) configurations during installation. Each configuration is called a [spec](https://spack.readthedocs.io/en/latest/spec_syntax.html): a concise way to define package versions, compiler choices, variants, and dependencies.
+
+    ``` { .sh .copy }
+    spack install hdf5@1.10.7 +mpi ^mpich@3.3.2 ^zlib@1.2.11 %gcc@13.2.0
+    ```
+
+    This installs the `HDF5` package at version `1.10.7` with MPI support, explicitly specifying `mpich` version 3.3.2 and `zlib` version 1.2.11 as dependencies, all built with GCC 13.2.0.
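+Before settling on a spec, the available packages, versions, and variants can be inspected with Spack's query commands, for example:
+
+``` { .sh .copy }
+spack list hdf5     # search package names matching "hdf5"
+spack info hdf5     # show available versions, variants, and dependencies
+spack spec hdf5+mpi # preview the concretized build before installing
+```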
+### Creating your own packages
+
+
+### Spack Binary Cache
diff --git a/docs/software/cae/fenics.md b/docs/software/cae/fenics.md
index 40fabfbb7..551b6a0c6 100644
--- a/docs/software/cae/fenics.md
+++ b/docs/software/cae/fenics.md
@@ -1,162 +1,135 @@
 [![](https://fenicsproject.org/pub/tutorial/sphinx1/_static/fenics_banner.png){: style="width:200px;float: right;" }](https://fenicsproject.org/)
-[FEniCS](https://fenicsproject.org/) is a popular open-source (LGPLv3) computing platform for
-solving partial differential equations (PDEs).
-FEniCS enables users to quickly translate scientific models
-into efficient finite element code. With the high-level
-Python and C++ interfaces to FEniCS, it is easy to get started,
-but FEniCS offers also powerful capabilities for more
-experienced programmers. FEniCS runs on a multitude of
-platforms ranging from laptops to high-performance clusters.
-
-## How to access the FEniCS through [Anaconda](https://www.anaconda.com/products/individual)
-The following steps provides information about how to installed
-on your local path.
-```bash
-# From your local computer
-$ ssh -X iris-cluster # OR ssh -Y iris-cluster on Mac
-
-# Reserve the node for interactive computation with grahics view (plots)
-$ si --x11 --ntasks-per-node 1 -c 4
-# salloc -p interactive --qos debug -C batch --x11 --ntasks-per-node 1 -c 4
-
-# Go to scratch directory
-$ cds
-
-/scratch/users/ $ Anaconda3-2020.07-Linux-x86_64.sh
-/scratch/users/ $ chmod +x Anaconda3-2020.07-Linux-x86_64.sh
-/scratch/users/ $ ./Anaconda3-2020.07-Linux-x86_64.sh
-
-Do you accept the license terms? [yes|no]
-yes
-Anaconda3 will now be installed into this location:
-/home/users//anaconda3
-
-  - Press ENTER to confirm the location
-  - Press CTRL-C to abort the installation
-  - Or specify a different location below
-
-# You can choose your path where you want to install it
-[/home/users//anaconda3] >>> /scratch/users//Anaconda3
-
-# To activate the anaconda
-/scratch/users/ $ source /scratch/users//Anaconda3/bin/activate
-
-# Install the fenics in anaconda environment
-/scratch/users/ $ conda create -n fenicsproject -c conda-forge fenics
-
-# Install matplotlib for the visualization
-/scratch/users/ $ conda install -c conda-forge matplotlib
-```
-Once you have installed the anaconda, you can always
-activate it by calling the `source activate` path where `anaconda`
-has been installed.
-
-## Working example
-### Interactive mode
-```bash
-# From your local computer
-$ ssh -X iris-cluster # or ssh -Y iris-cluster on Mac
-
-# Reserve the node for interactive computation with grahics view (plots)
-$ si --ntasks-per-node 1 -c 4 --x11
-# salloc -p interactive --qos debug -C batch --x11 --ntasks-per-node 1 -c 4
-
-# Activate anaconda
-$ source /${SCRATCH}/Anaconda3/bin/activate
-
-# activate the fenicsproject
-$ conda activate fenicsproject
-
-# execute the Poisson.py example (you can uncomment the plot lines in Poission.py example)
-$ python3 Poisson.py
-```
-
-### Batch script
-```bash
-#!/bin/bash -l
-#SBATCH -J FEniCS
-#SBATCH -N 1
-###SBATCH -A
-###SBATCH --ntasks-per-node=1
-#SBATCH -c 1
-#SBATCH --time=00:05:00
-#SBATCH -p batch
-
-echo "== Starting run at $(date)"
-echo "== Job ID: ${SLURM_JOBID}"
-echo "== Node list: ${SLURM_NODELIST}"
-echo "== Submit dir. : ${SLURM_SUBMIT_DIR}"
-
-# activate the anaconda source
-source ${SCRATCH}/Anaconda3/bin/activate
-
-# activate the fenicsproject from anaconda
-conda activate fenicsproject
-
-# execute the poisson.py through python
-srun python3 Poisson.py
-```
-
-### Example (Poisson.py)
-```bash
-# FEniCS tutorial demo program: Poisson equation with Dirichlet conditions.
-# Test problem is chosen to give an exact solution at all nodes of the mesh.
-# -Laplace(u) = f in the unit square
-# u = u_D on the boundary
-# u_D = 1 + x^2 + 2y^2
-# f = -6
-
-from __future__ import print_function
-from fenics import *
-import matplotlib.pyplot as plt
-
-# Create mesh and define function space
-mesh = UnitSquareMesh(8, 8)
-V = FunctionSpace(mesh, 'P', 1)
-
-# Define boundary condition
-u_D = Expression('1 + x[0]*x[0] + 2*x[1]*x[1]', degree=2)
-
-def boundary(x, on_boundary):
-    return on_boundary
-
-bc = DirichletBC(V, u_D, boundary)
-
-# Define variational problem
-u = TrialFunction(V)
-v = TestFunction(V)
-f = Constant(-6.0)
-a = dot(grad(u), grad(v))*dx
-L = f*v*dx
-
-# Compute solution
-u = Function(V)
-solve(a == L, u, bc)
-
-# Plot solution and mesh
-#plot(u)
-#plot(mesh)
-
-# Save solution to file in VTK format
-vtkfile = File('poisson/solution.pvd')
-vtkfile << u
-
-# Compute error in L2 norm
-error_L2 = errornorm(u_D, u, 'L2')
-
-# Compute maximum error at vertices
-vertex_values_u_D = u_D.compute_vertex_values(mesh)
-vertex_values_u = u.compute_vertex_values(mesh)
-import numpy as np
-error_max = np.max(np.abs(vertex_values_u_D - vertex_values_u))
-
-# Print errors
-print('error_L2 =', error_L2)
-print('error_max =', error_max)
-
-# Hold plot
-#plt.show()
-```
+
+[FEniCS](https://fenicsproject.org/) is a popular open-source computing platform for solving partial differential equations (PDEs) using the finite element method ([FEM](https://en.wikipedia.org/wiki/Finite_element_method)). Originally developed in 2003, the earlier version is now known as legacy FEniCS. In 2020, the next-generation framework [FEniCSx](https://docs.fenicsproject.org/) was introduced, with the latest stable [release v0.9.0](https://fenicsproject.org/blog/v0.9.0/) published in October 2024. FEniCSx builds on legacy FEniCS but introduces significant improvements over the legacy libraries. It is composed of the following building blocks, which support the typical workflow: [UFL](https://github.com/FEniCS/ufl) → [FFCx](https://github.com/FEniCS/ffcx) → [Basix](https://github.com/FEniCS/basix) → [DOLFINx](https://github.com/FEniCS/dolfinx). New users are encouraged to adopt [FEniCSx](https://fenicsproject.org/documentation/) for its modern features and active development support.
+
+FEniCSx can be installed on [ULHPC](https://www.uni.lu/research-en/core-facilities/hpc/) systems using [Easybuild](https://docs.easybuild.io) or [Spack](https://spack.io/). Below are detailed instructions for each method.
+
+### Building FEniCS With Spack
+
+Building FEniCSx with Spack on the [ULHPC](https://www.uni.lu/research-en/core-facilities/hpc/) system requires that users have already installed Spack and sourced its environment on the cluster. If Spack is not yet configured, follow the [Spack documentation](../../environment/spack.md) for installation and configuration.
+
+!!! note
+
+    Spack is a good choice for building FEniCSx because it automatically manages complex dependencies, isolates all installations in a dedicated environment, leverages the system-provided packages declared in `~/.spack/packages.yaml` for optimal performance, and simplifies reproducibility and maintenance across different systems.
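+Before starting, make sure the session runs on a compute node and that Spack is available in the current shell; for example, assuming Spack was cloned to `$HOME/spack` as described in the [Spack documentation](../../environment/spack.md):
+
+    source $HOME/spack/share/spack/setup-env.sh
+    spack --version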
+#### Create and Activate a Spack Environment
+
+To maintain an isolated installation, create a dedicated Spack environment in a chosen directory.
+The following example sets up the stable release of FEniCSx `v0.9.0` in the `fenicsx-test` directory inside the home directory:
+
+    cd ~
+    spack env create -d fenicsx-test/
+    spack env activate fenicsx-test/
+
+Add the core FEniCSx components and common dependencies:
+
+    spack add py-fenics-dolfinx@0.9.0+petsc4py fenics-dolfinx+adios2+petsc adios2+python petsc+mumps
+
+!!! info "Additional details"
+
+    The `spack add` command adds abstract package specs to the currently active environment and registers them as root specs in the environment's `spack.yaml` file. Alternatively, packages can be predefined directly in the `spack.yaml` file located in `$SPACK_ENV`:
+
+        spack:
+          # add package specs to the `specs` list
+          specs:
+          - py-fenics-dolfinx@0.9.0+petsc4py
+          - fenics-dolfinx+adios2+petsc
+          - petsc+mumps
+          - adios2+python
+          view: true
+          concretizer:
+            unify: true
+
+    !!! note
+
+        Replace `@0.9.0` with a different version to install another release.
+
+??? question "Why unify: true?"
+
+    `unify: true` ensures all packages share the same dependency versions, preventing multiple builds of the same library. Without it, each spec could resolve its dependencies independently, leading to potential conflicts and redundant installations.
+
+Once package specs have been added to the current environment, they need to be concretized and installed:
+
+    spack concretize
+    spack install -j16
+
+!!! note
+
+    Here, [`spack concretize`](https://spack.readthedocs.io/en/latest/environments.html#spec-concretization) resolves all dependencies and selects compatible versions for the specified packages. The `spack install` command then installs the entire environment at once, and the `-j16` option sets the number of CPU cores used for building, which can speed up the installation. Once installed, the FEniCSx environment is ready to use on the cluster.
+
+The following common dependencies, often used in FEniCS scripts, can also be added:
+
+    spack add gmsh+opencascade py-numba py-scipy py-matplotlib
+
+It is possible to build a specific version (git ref) of DOLFINx. Note that the hash must be the full commit hash, and it is best to specify matching git refs on all components:
+
+    # This is a Spack Environment file.
+    # It describes a set of packages to be installed, along with
+    # configuration settings.
+    spack:
+      # add package specs to the `specs` list
+      specs:
+      - fenics-dolfinx@git.4f575964c70efd02dca92f2cf10c125071b17e4d=main+adios2
+      - py-fenics-dolfinx@git.4f575964c70efd02dca92f2cf10c125071b17e4d=main
+      - py-fenics-basix@git.2e2a7048ea5f4255c22af18af3b828036f1c8b50=main
+      - fenics-basix@git.2e2a7048ea5f4255c22af18af3b828036f1c8b50=main
+      - py-fenics-ufl@git.b15d8d3fdfea5ad6fe78531ec4ce6059cafeaa89=main
+      - py-fenics-ffcx@git.7bc8be738997e7ce68ef0f406eab63c00d467092=main
+      - fenics-ufcx@git.7bc8be738997e7ce68ef0f406eab63c00d467092=main
+      - petsc+mumps
+      - adios2+python
+      view: true
+      concretizer:
+        unify: true
+
+It is also possible to build only the C++ layer. Note that `py-fenics-ffcx` is still listed, because generating the C form code consumed by C++ applications relies on the FFCx compiler, which is a Python tool:
+
+    spack add fenics-dolfinx@0.9.0+adios2 py-fenics-ffcx@0.9.0 petsc+mumps
+
+To rebuild FEniCSx from the main branches inside an existing environment:
+
+    spack install --overwrite -j16 fenics-basix py-fenics-basix py-fenics-ffcx fenics-ufcx py-fenics-ufl fenics-dolfinx py-fenics-dolfinx
+
+#### Testing the build
+
+Quickly test the build with:
+
+    srun python -c "from mpi4py import MPI; import dolfinx"
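+For non-interactive runs, a minimal batch script of the following form can be used (a sketch only: adjust the resources to your needs, and replace `poisson.py`, a hypothetical name here, with an actual DOLFINx script):
+
+    #!/bin/bash -l
+    #SBATCH -J FEniCSx
+    #SBATCH -N 1
+    #SBATCH --ntasks-per-node=4
+    #SBATCH --time=00:10:00
+    #SBATCH -p batch
+
+    # make Spack and the FEniCSx environment available
+    source $HOME/spack/share/spack/setup-env.sh
+    spack env activate $HOME/fenicsx-test/
+
+    # run the solver in parallel across the allocated tasks
+    srun python poisson.py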
info "Try the Build Explicitly" + + After installation, the [FEniCSx](https://fenicsproject.org/documentation/) build can be tried explicitly by running the demo problems corresponding to the installed release version, as provided in the [FEniCSx documentation](https://docs.fenicsproject.org/). + For [DOLFINx](https://docs.fenicsproject.org/dolfinx/main/python/) Python bindings, see for example the demos in the [stable release v0.9.0](https://docs.fenicsproject.org/dolfinx/v0.9.0/python/demos.html). + + +#### Known issues + +Workaround for inability to find gmsh Python package: + + export PYTHONPATH=$SPACK_ENV/.spack-env/view/lib64/:$PYTHONPATH + +Workaround for inability to find adios2 Python package: + + export PYTHONPATH=$(find $SPACK_ENV/.spack-env -type d -name 'site-packages' | grep venv):$PYTHONPATH + + + + +### Building FEniCS With EasyBuild + + + ## Additional information FEniCS provides the [technical documentation](https://fenicsproject.org/documentation/), diff --git a/mkdocs.yml b/mkdocs.yml index f2c736140..4f777ff25 100644 --- a/mkdocs.yml +++ b/mkdocs.yml @@ -66,6 +66,7 @@ nav: - Overview: 'environment/index.md' - Modules: 'environment/modules.md' - Easybuild: 'environment/easybuild.md' + - Spack: 'environment/spack.md' - Containers: 'environment/containers.md' - Conda: 'environment/conda.md' ###########