Exa-DI: Facilitating the deployment of HPC applications with Package Managers
Exa-DI is proud to present its series of training courses for users of package managers, designed to optimise their user experience.
Deploying and porting applications on supercomputers remains a complex and time-consuming task. NumPEx encourages users to leverage package managers, allowing for precise and direct control of their software stack, with a particular focus on Guix and Spack.
A series of training courses and support events has been organised to assist users:
• Tutorial: Introduction to Guix – October 2025
• Tutorial @ Compass25: Guix-deploy – June 2025
• Coding session: Publishing packages on Guix-Science – May 2025
• Tutorial: Spack for beginners (online) – April 2025
• Tutorial: Using Guix and Spack for deploying applications on supercomputers – February 2025
Switching to new deployment methods takes time. NumPEx supports users by offering training, support, software packaging, tool improvements, and partnerships with computing centres to optimise the user experience.
For more information: https://numpex-pc5.gitlabpages.inria.fr/tutorials/webinar/index.html
Photo credit: Mohammad Rahmani / Unsplash
Strategy for the interoperability of digital scientific infrastructures
Find all the information about Exa-AToW here.
The evolution of data volumes and computing capabilities is reshaping the scientific digital landscape. To fully leverage this potential, NumPEx and its partners are developing an open interoperability strategy connecting major instruments, data centers, and computing infrastructures.
Driven by data produced by large instruments (telescopes, satellites, etc.) and artificial intelligence, the digital scientific landscape is undergoing a profound transformation, fuelled by rapid advances in computing, storage and communication capabilities. The scientific potential of this inherently multidisciplinary revolution lies in the implementation of hybrid computing and processing chains, increasingly integrating HPC infrastructures, data centres and large instruments.
Anticipating the arrival of the Alice Recoque exascale machine, NumPEx’s partners and collaborators (SKA-France, MesoCloud, PEPR NumPEx, Data Terra, Climeri, TGCC, Idris, Genci) have decided to coordinate their efforts to propose interoperability solutions that will enable the deployment of processing chains that fully exploit all research infrastructures.
The aim of the work is to define an open strategy for implementing interoperability solutions, in conjunction with large scientific instruments, in order to facilitate data analysis and enhance the reproducibility of results.

Impacts-HPC: a Python library for measuring and understanding the environmental footprint of scientific computing
Find all the information about Exa-AToW here.
The environmental footprint of scientific computing goes far beyond electricity consumption. Impacts-HPC introduces a comprehensive framework to assess the full life-cycle impacts of HPC, from equipment manufacturing to energy use, through key environmental indicators.
The environmental footprint of scientific computing is often reduced to electricity consumption during execution. However, this only reflects part of the problem. Impacts-HPC aims to go beyond this limited view by also incorporating the impact of equipment manufacturing and broadening the spectrum of indicators considered.
This tool also makes it possible to trace the stages of a computing workflow and document the sources used, thereby enhancing transparency and reproducibility. In a context where the environmental crisis is forcing us to consider climate, resources and other planetary boundaries simultaneously, such tools are becoming indispensable.
The Impacts-HPC library covers two stages of the life cycle: equipment manufacturing and use. It provides users with three essential indicators:
• Primary energy (MJ): more relevant than electricity alone, as it includes conversion losses throughout the energy chain.
• Climate impact (gCO₂eq): calculated by aggregating and converting different greenhouse gases into CO₂ equivalents.
• Resource depletion (g Sb eq): reflecting the use of non-renewable resources, in particular metallic and non-metallic minerals.
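The accounting behind these indicators can be illustrated with a minimal sketch: a usage-phase impact derived from electricity consumption, plus an amortised share of the manufacturing footprint. This is an illustrative example, not the Impacts-HPC API, and all numeric factors below are made up for the demonstration.

```python
# Illustrative life-cycle accounting in the spirit of Impacts-HPC
# (hypothetical function and factors, not the actual library).

def job_climate_impact_gco2eq(
    node_power_w: float,        # average node power draw during the job
    runtime_h: float,           # job duration in hours
    grid_gco2_per_kwh: float,   # carbon intensity of the energy mix
    node_embodied_gco2: float,  # manufacturing footprint of one node
    node_lifetime_h: float,     # assumed service lifetime of the node
) -> float:
    # Usage phase: electricity consumed times the grid's carbon intensity.
    usage = (node_power_w / 1000) * runtime_h * grid_gco2_per_kwh
    # Manufacturing phase: embodied footprint amortised over the node lifetime.
    manufacturing = node_embodied_gco2 * (runtime_h / node_lifetime_h)
    return usage + manufacturing

# Example: 500 W node, 10 h job, 50 gCO2/kWh mix, 1 tCO2eq embodied, 6-year life.
impact = job_climate_impact_gco2eq(500, 10, 50, 1_000_000, 6 * 365 * 24)  # ~440 gCO2eq
```

Even this toy calculation shows why the manufacturing phase cannot be neglected: on a low-carbon energy mix it is of the same order as the usage-phase emissions.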
This is the first time that such a tool has been offered for direct use by scientific computing communities, with an integrated and documented approach.
This library paves the way for a more detailed assessment of the environmental impacts associated with scientific computing. The next steps include integrating it into digital twin environments, adding real-time data (energy mix, storage, transfers), and testing it on a benchmark HPC centre (IDRIS).

Figure: Overview of Impacts-HPC.
© PEPR NumPEx
Storing massive amounts of data: better understanding for better design and optimisation
Find all the information about Exa-DoST here.
An understanding of how scientific applications read and write data is key to designing storage systems that truly meet HPC needs. Fine-grained I/O characterisation helps guide both optimisation strategies and the architecture of future storage infrastructures.
Data is at the heart of scientific applications, whether it be input data or processing results. For several years, data management (reading and writing, also known as I/O) has been a barrier to the large-scale deployment of these applications. In order to design more efficient storage systems capable of absorbing and optimising this I/O, it is essential to understand how applications read and write data.
Thanks to the various tools and methods we have developed, we are able to produce a detailed characterisation of the I/O behaviour of scientific applications. For example, based on supercomputer execution data, we can show that less than a quarter of applications perform regular (periodic) accesses, or that concurrent accesses to the main storage system are less common than expected.
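One simple way to see what "regular (periodic) accesses" means in practice is to test whether the gaps between successive I/O bursts are nearly constant. The sketch below is illustrative only, not the Exa-DoST characterisation tooling:

```python
# Hypothetical periodicity check for an application's I/O bursts:
# the access pattern is deemed periodic if every gap between consecutive
# bursts stays within a tolerance of the mean gap.

def is_periodic(burst_times, tolerance=0.1):
    gaps = [b - a for a, b in zip(burst_times, burst_times[1:])]
    if len(gaps) < 2:
        return False  # too few bursts to call the pattern periodic
    mean = sum(gaps) / len(gaps)
    return all(abs(g - mean) <= tolerance * mean for g in gaps)
```

A checkpointing application writing every 10 minutes would pass this test, while an application with bursty, data-dependent I/O would not.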
This type of result is decisive in several respects. For example, it allows us to propose I/O optimisation methods that respond to clearly identified application behaviours. Such characterisation is also a concrete element that influences the design choices of future storage systems, always with the aim of meeting the needs of scientific applications.

Figure: Step of data classification.
© PEPR NumPEx
A new generation of linear algebra libraries for modern supercomputers
Find all the information about Exa-SofT here.
Linear algebra libraries lie at the core of scientific computing and artificial intelligence. By rethinking their execution on hybrid CPU/GPU architectures, new task-based models enable significant gains in performance, portability, and resource utilisation.
Libraries for solving or manipulating linear systems are used in many fields of numerical simulation (aeronautics, energy, materials) and artificial intelligence (training). We seek to make these libraries as fast as possible on supercomputers combining traditional processors and graphics accelerators (GPUs). To do this, we use asynchronous task-based execution models that maximise the utilisation of computing units.
This is an active area of research, but most existing approaches face the difficult problem of dividing the work into the ‘right granularity’ for heterogeneous computing units. Over the last few months, we have developed several extensions to a task-based parallel programming model called STF (Sequential Task Flow), which allows complex algorithms to be implemented in a much more elegant, concise and portable way. By combining this model with dynamic and recursive work partitioning techniques, we significantly increase performance on supercomputers equipped with accelerators such as GPUs, in particular thanks to the ability to dynamically adapt the granularity of calculations according to the occupancy of the computing units. For example, thanks to this approach, we have achieved a 2x speedup compared to other state-of-the-art libraries (MAGMA, Parsec) on a hybrid CPU/GPU computer.
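The central idea of the STF model is that tasks are submitted in program order together with their data-access modes, and the runtime infers the dependency graph automatically. The following is a deliberately minimal sketch of that idea (it tracks only flow dependencies through the last writer of each datum), not the actual Exa-SofT or StarPU API:

```python
# Minimal Sequential Task Flow (STF) sketch: dependencies are inferred
# from declared read/write accesses, in submission order.

class STFRuntime:
    def __init__(self):
        self.last_writer = {}  # datum -> task that last wrote it
        self.deps = {}         # task -> set of tasks it must wait for

    def submit(self, task, reads=(), writes=()):
        # Any access to a datum depends on that datum's last writer.
        deps = {self.last_writer[d] for d in (*reads, *writes)
                if d in self.last_writer}
        for d in writes:
            self.last_writer[d] = task
        self.deps[task] = deps
        return deps

# A fragment of a tile Cholesky factorisation, written sequentially;
# the runtime discovers that trsm waits on potrf, and syrk on trsm.
rt = STFRuntime()
rt.submit("potrf(A00)", writes=["A00"])
rt.submit("trsm(A00,A10)", reads=["A00"], writes=["A10"])
rt.submit("syrk(A10,A11)", reads=["A10"], writes=["A11"])
```

Because the program is written as if sequential, the same source can be partitioned recursively into finer or coarser tasks, which is what makes the dynamic granularity adaptation described above possible.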
Linear algebra operations are often the most costly steps in many scientific computing, data analysis and deep learning applications. Therefore, any performance improvement in linear algebra libraries can potentially have a significant impact for many users of high-performance computing resources.
The proposed extensions to the STF model are generic and can also benefit many computational codes beyond the scope of linear algebra.
In the next period, we wish to study the application of this approach to linear algebra algorithms for sparse matrices as well as to multi-linear algebra algorithms (tensor calculations).

Figure: Adjusting the grain size allows smaller tasks to be assigned to CPUs, which will not take up too much of their time, thus avoiding delays for the rest of the machine, while continuing to assign large tasks to GPUs so that they remain efficient.
© PEPR NumPEx
From Git repository to large-scale runs: Exa-MA industrialises the deployment of NumPEx-compliant HPC applications
Find all the information about Exa-MA here.
By unifying workflows and automating key stages of the HPC software lifecycle, the Exa-MA framework contributes to more reliable, portable and efficient application deployment on national and EuroHPC systems.
HPC applications require reproducibility, portability and large-scale testing, but the transition from code to computer remains lengthy and heterogeneous depending on the site. The objective is to unify the Exa-MA application framework and automate builds, tests and deployments in accordance with NumPEx guidelines.
An Exa-MA application framework has been set up, integrating the management of templates, metadata and verification and validation (V&V) procedures. At the same time, a complete HPC CI/CD chain has been deployed, combining Spack, Apptainer/Singularity and automated submission via ReFrame/SLURM orchestrated by GitHub Actions. This infrastructure operates seamlessly on French national computers and EuroHPC platforms, with end-to-end automation of critical steps.
In the first use cases, the time between code validation and large-scale execution has been reduced from several days to less than 24 hours, without any manual intervention on site. Performance is now monitored by non-regression tests (strong/weak scaling) and will soon be enhanced by profiling artefacts.
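As an illustration of the kind of scaling check such a non-regression suite performs, a run can be flagged as a regression when its parallel efficiency drops below a bound. The function names and threshold below are hypothetical, not the actual Exa-MA ReFrame checks:

```python
# Hypothetical strong-scaling non-regression check: compare the measured
# time on p processes against the ideal speedup from the 1-process time.

def strong_scaling_efficiency(t1: float, tp: float, p: int) -> float:
    """Parallel efficiency on p processes (ideal = 1.0)."""
    return t1 / (p * tp)

def passes(t1: float, tp: float, p: int, floor: float = 0.7) -> bool:
    """Accept the run only if efficiency stays above `floor`."""
    return strong_scaling_efficiency(t1, tp, p) >= floor
```

For example, a job that takes 100 s on one process and 14 s on eight processes has an efficiency of about 0.89 and passes; one that still takes 25 s on eight processes (efficiency 0.5) would be flagged.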
The approach deployed is revolutionising the integration of Exa-MA applications, accelerating onboarding and ensuring controlled quality through automated testing and complete traceability.
The next phase of the project involves putting Exa-MA applications online and deploying a performance dashboard.

Figure: Benchmarking website page with views by application, by machine, and by use case.
© PEPR NumPEx
From urban data to watertight multi-layer meshes, ready for city-scale energy simulation
This highlight is based on the work of Christophe Prud'homme, Vincent Chabannes, Javier Cladellas, and Pierre Alliez.
This research was carried out by the Exa-MA project, in collaboration with the HiDALGO2 Centre of Excellence and the Ktirio and CGAL projects. Find all the information about Exa-MA here.
How can we model an entire city to better understand its energy, airflow, and heat dynamics? Urban data are abundant — buildings, roads, terrain, vegetation — but often inconsistent or incomplete. A new GIS–meshing pipeline now makes it possible to automatically generate watertight, simulation-ready city models, enabling realistic energy and microclimate simulations at the urban scale.
Urban energy/wind/heat modeling requires closed and consistent geometries, while the available data (buildings, roads, terrain, hydrography, vegetation) are heterogeneous and often non-watertight. The objective is therefore to reconstruct watertight urban meshes at LoD-0/1, interoperable and enriched with physical attributes and models.
A GIS–meshing pipeline has been developed to automate the generation of closed urban models. It integrates data ingestion via Mapbox, robust geometric operations using Ktirio-Geom (based on CGAL), as well as multi-layer booleans ensuring the topological closure of the scenes. Urban areas covering several square kilometers are thus converted into consistent solid LoD-1/2 models (buildings, roads, terrain, rivers, vegetation). The model preparation time is reduced from several weeks to a few minutes, with a significant gain in numerical stability.
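Watertightness has a precise topological meaning that can be checked cheaply: in a closed triangle mesh, every edge must be shared by exactly two triangles. The sketch below illustrates that check; it is a toy example, not Ktirio-Geom or CGAL code:

```python
# Illustrative watertightness test: a triangle mesh is topologically
# closed only if each undirected edge belongs to exactly two triangles.
from collections import Counter

def is_watertight(triangles):
    edges = Counter()
    for a, b, c in triangles:
        for u, v in ((a, b), (b, c), (c, a)):
            edges[tuple(sorted((u, v)))] += 1  # undirected edge key
    return all(count == 2 for count in edges.values())

# A tetrahedron (4 triangles over vertices 0..3) is closed;
# removing any face opens a boundary and breaks watertightness.
tet = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]
```

This is exactly the property that the multi-layer boolean operations must preserve when buildings, roads, terrain and rivers are fused into a single solid model.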
The outputs are interoperable with the Urban Building Model (Ktirio-UBM) and compatible with energy and CFD solvers.
This development enables rapid access to realistic urban cases, usable for energy and microclimate simulations, while promoting the sharing of datasets within the HiDALGO2 Centre of Excellence ecosystem.
The next step is to publish reference datasets, watertight models and associated scripts, on the CKAN platform (n.hidalgo2.eu). This work opens the way to coupling between CFD and energy simulation, and to the creation of tools dedicated to the study and reduction of urban heat islands.
Figures: Reconstruction of the city of Grenoble within a 5 km radius, including the road network, rivers and bodies of water. Vegetation has not been included in order to reduce the size of the mesh, which here consists of approximately 6 million triangles — a figure that would at least double if vegetation were included.
© PEPR NumPEx
2025 InPEx workshop
Find all the presentations on the InPEx website here.
From April 14th to 17th, 2025, the InPEx global network of experts (Europe, Japan and USA) gathered in Kanagawa, Japan. Hosted by RIKEN R-CCS and Japanese universities with the support of NumPEx, the InPEx 2025 workshop was dedicated to the challenges of the post-Exascale era.
Find all NumPEx contributions below:
- Introduction, with Jean-Yves Berthou (Inria), director of NumPEx and representative for Europe
- AI and HPC: Sharing AI-centric benchmarks of hybrid workflows, co-chaired by Jean-Pierre Vilotte (CNRS)
- Software Production and Management, co-chaired by Julien Bigot (CEA)
- AI and HPC: Generative AI for Science, co-chaired by Alfredo Buttari (IRIT) and Thomas Moreau (Inria)
- Digital Continuum and Data Management, co-chaired by Gabriel Antoniu (Inria)
If you want to know more, all presentations are available on the InPEx website.
Photo credit: Corentin Lefevre/Neovia Innovation/Inria
NumPEx holds its first General Assembly
Bringing together 130 researchers, engineers, and partners at Inria Saclay, the 2025 NumPEx General Assembly was a key step for the future of NumPEx.
Over two days, participants engaged in discussions, workshops, and guest talks to explore the challenges of integrating Exascale computing into a broader digital continuum. The first day was marked by the live announcement that France had been selected to host one of the European AI Factories.
This General Assembly was also the perfect occasion to introduce YoungPEx to the entire PEPR community through a presentation and one of its first workshops. YoungPEx is a new initiative aimed at fostering collaboration among young researchers, including PhD students, post-docs, engineers, and volunteer permanent researchers. It will serve as a dynamic platform for networking, knowledge exchange, and interdisciplinary collaboration across the HPC and AI communities.
We were also pleased to welcome the TRACCS and Cloud research programs, which presented both ongoing and potential collaborations with NumPEx.
With this first General Assembly, NumPEx strengthens its community and continues its path to Exascale and beyond.
© PEPR NumPEx
NumPEx newsletter n°2 - 2025