Exa-DI: the first mini-application resulting from co-development is now available!
Find all the information about Exa-DI here.
Following the Exa-DI general meetings, working groups were formed to produce applications on four major themes. The first mini-application, on high-precision discretisation, is now available.
Following the Exa-DI workshops, four working groups (WGs) were formed, bringing together all the players involved in co-design and co-development: Exa-DI’s Computational and Data Science (CDT) team, members of the various targeted NumPEx projects, and application demonstration teams. These groups focus on efficient discretisation, unstructured meshes, block-structured AMR, and AI applied to linear inverse problems at exascale, and are now actively moving forward.
Thanks to these WGs, the first shared mini-applications, representative of the technical challenges of exascale applications, are currently being developed. They integrate high value-added software components (libraries, frameworks, tools) provided by other NumPEx teams. In this context, the first mini-application on high-precision discretisation is now available, with others to follow soon.
A documentation hub, set up in early 2025, is gradually centralising tutorials and technical documents of general interest for NumPEx Exa-DI. It includes: the NumPEx software catalogue, webinars and training courses, documentation on co-design and CDT packaging, and much more.
Feel free to consult it to stay up to date on the tools and resources available.

Exa-DI: Facilitating the deployment of HPC applications with Package Managers
Exa-DI is proud to present its series of training courses for users of package managers, designed to optimise their user experience.
Deploying and porting applications on supercomputers remains a complex and time-consuming task. NumPEx encourages users to leverage package managers, allowing for precise and direct control of their software stack, with a particular focus on Guix and Spack.
A series of training courses and support events has been organised to assist users:
• Tutorial: Introduction to Guix – October 2025
• Tutorial @ Compass25: Guix-deploy – June 2025
• Coding session: Publishing packages on Guix-Science – May 2025
• Tutorial: Spack for beginners (online) – April 2025
• Tutorial: Using Guix and Spack for deploying applications on supercomputers – February 2025
Switching to new deployment methods takes time. NumPEx supports users by offering training, support, software packaging, tool improvements, and partnerships with computing centres to optimise the user experience.
For more information: https://numpex-pc5.gitlabpages.inria.fr/tutorials/webinar/index.html
Photo credit: Mohammad Rahmani / Unsplash
Exa-DI: the co-design and co-development in NumPEx is moving forward
Find all the information about Exa-DI here.
The implementation of the co-design and co-development process within NumPEx is one of Exa-DI’s objectives for the production of augmented and productive software. To this end, Exa-DI has organised three working groups open to all NumPEx members.
The Exa-DI project is responsible for implementing the co-design and co-development process within NumPEx, with the aim of producing augmented and productive exascale software that is science-driven. In this context, Exa-DI has already organised three workshops: one on “Efficient discretisation for exascale PDEs”, another on “Block-structured AMR at exascale” and a third on “Artificial intelligence for exascale HPC”. These two-day in-person workshops brought together Exa-DI members, members of other NumPEx projects, teams demonstrating applications from various sectors of research and industry, and experts.
Discussions focused on:
- Challenges related to the co-design and co-development process
- Key issues: the most pressing questions for collective development and for strengthening links between NumPEx and applications
- Initiatives promoting the sustainability of exascale software and performance portability.
A very interesting and stimulating result was the establishment of working groups focused on a set of shared and well-specified mini-applications representing the cross-cutting computational and communication patterns identified. Several application teams have expressed interest in participating in these groups. To date, four working groups are actively engaged in the co-design and co-development of mini-applications, with a view to integrating and evaluating the logical sets of software components developed in the NumPEx projects.
Strategy for the interoperability of digital scientific infrastructures
Find all the information about Exa-AToW here.
The evolution of data volumes and computing capabilities is reshaping the scientific digital landscape. To fully leverage this potential, NumPEx and its partners are developing an open interoperability strategy connecting major instruments, data centers, and computing infrastructures.
Driven by data produced by large instruments (telescopes, satellites, etc.) and artificial intelligence, the digital scientific landscape is undergoing a profound transformation, fuelled by rapid advances in computing, storage and communication capabilities. The scientific potential of this inherently multidisciplinary revolution lies in the implementation of hybrid computing and processing chains, increasingly integrating HPC infrastructures, data centres and large instruments.
Anticipating the arrival of the Alice Recoque exascale machine, NumPEx’s partners and collaborators (SKA-France, MesoCloud, PEPR NumPEx, Data Terra, Climeri, TGCC, Idris, Genci) have decided to coordinate their efforts to propose interoperability solutions that will enable the deployment of processing chains that fully exploit all research infrastructures.
The aim of the work is to define an open strategy for implementing interoperability solutions, in conjunction with large scientific instruments, in order to facilitate data analysis and enhance the reproducibility of results.

Impacts-HPC: a Python library for measuring and understanding the environmental footprint of scientific computing
Find all the information about Exa-AToW here.
The environmental footprint of scientific computing goes far beyond electricity consumption. Impacts-HPC introduces a comprehensive framework to assess the full life-cycle impacts of HPC, from equipment manufacturing to energy use, through key environmental indicators.
The environmental footprint of scientific computing is often reduced to electricity consumption during execution. However, this only reflects part of the problem. Impacts-HPC aims to go beyond this limited view by also incorporating the impact of equipment manufacturing and broadening the spectrum of indicators considered.
This tool also makes it possible to trace the stages of a computing workflow and document the sources used, thereby enhancing transparency and reproducibility. In a context where the environmental crisis is forcing us to consider climate, resources and other planetary boundaries simultaneously, such tools are becoming indispensable.
The Impacts-HPC library covers two stages of the life cycle: equipment manufacturing and use. It provides users with three essential indicators, illustrated by the sketch after this list:
• Primary energy (MJ): more relevant than electricity alone, as it includes conversion losses throughout the energy chain.
• Climate impact (gCO₂eq): calculated by aggregating and converting different greenhouse gases into CO₂ equivalents.
• Resource depletion (g Sb eq): reflecting the use of non-renewable resources, in particular metallic and non-metallic minerals.
This is the first time that such a tool has been offered for direct use by scientific computing communities, with an integrated and documented approach.
This library paves the way for a more detailed assessment of the environmental impacts associated with scientific computing. The next steps include integrating it into digital twin environments, adding real-time data (energy mix, storage, transfers), and testing it on a benchmark HPC centre (IDRIS).

Figure: Overview of Impacts-HPC.
© PEPR NumPEx
Storing massive amounts of data: better understanding for better design and optimisation
Find all the information about Exa-DoST here.
An understanding of how scientific applications read and write data is key to designing storage systems that truly meet HPC needs. Fine-grained I/O characterisation helps guide both optimisation strategies and the architecture of future storage infrastructures.
Data is at the heart of scientific applications, whether it be input data or processing results. For several years, data management (reading and writing, also known as I/O) has been a barrier to the large-scale deployment of these applications. In order to design more efficient storage systems capable of absorbing and optimising this I/O, it is essential to understand how applications read and write data.
Thanks to the various tools and methods we have developed, we are able to produce a detailed characterisation of the I/O behaviour of scientific applications. For example, based on supercomputer execution data, we can show that less than a quarter of applications perform regular (periodic) accesses, or that concurrent accesses to the main storage system are less common than expected.
This type of result is decisive in several respects. For example, it allows us to propose I/O optimisation methods that respond to clearly identified application behaviours. Such characterisation is also a concrete element that influences the design choices of future storage systems, always with the aim of meeting the needs of scientific applications.

Figure: Data classification step.
© PEPR NumPEx
A new generation of linear algebra libraries for modern supercomputers
Find all the information about Exa-SofT here.
Linear algebra libraries lie at the core of scientific computing and artificial intelligence. By rethinking their execution on hybrid CPU/GPU architectures, new task-based models enable significant gains in performance, portability, and resource utilisation.
Libraries for solving or manipulating linear systems are used in many fields of numerical simulation (aeronautics, energy, materials) and artificial intelligence (training). We seek to make these libraries as fast as possible on supercomputers combining traditional processors and graphics accelerators (GPUs). To do this, we use asynchronous task-based execution models that maximise the utilisation of computing units.
This is an active area of research, but most existing approaches face the difficult problem of dividing the work into the ‘right granularity’ for heterogeneous computing units. Over the last few months, we have developed several extensions to a task-based parallel programming model called STF (Sequential Task Flow), which allows complex algorithms to be implemented in a much more elegant, concise and portable way. By combining this model with dynamic and recursive work partitioning techniques, we significantly increase performance on supercomputers equipped with accelerators such as GPUs, in particular thanks to the ability to dynamically adapt the granularity of calculations according to the occupancy of the computing units. For example, thanks to this approach, we have achieved a 2x speedup compared to other state-of-the-art libraries (MAGMA, Parsec) on a hybrid CPU/GPU computer.
Linear algebra operations are often the most costly steps in many scientific computing, data analysis and deep learning applications. Therefore, any performance improvement in linear algebra libraries can potentially have a significant impact for many users of high-performance computing resources.
The proposed extensions to the STF model are generic and can also benefit many computational codes beyond the scope of linear algebra.
In the next period, we wish to study the application of this approach to linear algebra algorithms for sparse matrices as well as to multi-linear algebra algorithms (tensor calculations).

Figure: Adjusting the grain size allows smaller tasks to be assigned to CPUs, which will not take up too much of their time, thus avoiding delays for the rest of the machine, while continuing to assign large tasks to GPUs so that they remain efficient.
© PEPR NumPEx
From Git repository to mass run: Exa-MA industrialises the deployment of NumPEx-compliant HPC applications
Find all the information about Exa-MA here.
By unifying workflows and automating key stages of the HPC software lifecycle, the Exa-MA framework contributes to more reliable, portable and efficient application deployment on national and EuroHPC systems.
HPC applications require reproducibility, portability and large-scale testing, but the transition from code to computer remains lengthy and heterogeneous depending on the site. The objective is to unify the Exa-MA application framework and automate builds, tests and deployments in accordance with NumPEx guidelines.
An Exa-MA application framework has been set up, integrating the management of templates, metadata and verification and validation (V&V) procedures. At the same time, a complete HPC CI/CD chain has been deployed, combining Spack, Apptainer/Singularity and automated submission via ReFrame/SLURM orchestrated by GitHub Actions. This infrastructure operates seamlessly on French national computers and EuroHPC platforms, with end-to-end automation of critical steps.
In the first use cases, the time between code validation and large-scale execution has been reduced from several days to less than 24 hours, without any manual intervention on site. Performance is now monitored by non-regression tests (strong/weak scaling) and will soon be enhanced by profiling artefacts.
The approach deployed is revolutionising the integration of Exa-MA applications, accelerating onboarding and ensuring controlled quality through automated testing and complete traceability.
The next phase of the project involves putting Exa-MA applications online and deploying a performance dashboard.

Figure: Benchmarking website page with views by application, by machine, and by use case.
© PEPR NumPEx
From urban data to watertight multi-layer meshes, ready for city-scale energy simulation
This highlight is based on the work of Christophe Prud'homme, Vincent Chabannes, Javier Cladellas and Pierre Alliez.
This research was carried out by the Exa-MA project, in collaboration with the CoE HiDALGO2 and the Ktirio and CGAL projects. Find all the information about Exa-MA here.
How can we model an entire city to better understand its energy, airflow, and heat dynamics? Urban data are abundant — buildings, roads, terrain, vegetation — but often inconsistent or incomplete. A new GIS–meshing pipeline now makes it possible to automatically generate watertight, simulation-ready city models, enabling realistic energy and microclimate simulations at the urban scale.
Urban energy/wind/heat modeling requires closed and consistent geometries, while the available data (buildings, roads, terrain, hydrography, vegetation) are heterogeneous and often non-watertight. The objective is therefore to reconstruct watertight urban meshes at LoD-0/1, interoperable and enriched with physical attributes and models.
A GIS–meshing pipeline has been developed to automate the generation of closed urban models. It integrates data ingestion via Mapbox, robust geometric operations using Ktirio-Geom (based on CGAL), as well as multi-layer booleans ensuring the topological closure of the scenes. Urban areas covering several square kilometers are thus converted into consistent solid LoD-1/2 models (buildings, roads, terrain, rivers, vegetation). The model preparation time is reduced from several weeks to a few minutes, with a significant gain in numerical stability.
The outputs are interoperable with the Urban Building Model (Ktirio-UBM) and compatible with energy and CFD solvers.
This development enables rapid access to realistic urban cases, usable for energy and microclimatic simulations, while promoting the sharing of datasets within the HiDALGO2 Centre of Excellence ecosystem.
The next step is to publish reference datasets — watertight models and associated scripts — on the CKAN platform (n.hidalgo2.eu). These works open the way to coupling between CFD and energy simulation, and to the creation of tools dedicated to the study and reduction of urban heat islands.
Figures: Reconstruction of the city of Grenoble within a 5 km radius, including the road network, rivers and bodies of water. Vegetation has not been included in order to reduce the size of the mesh, which here consists of approximately 6 million triangles — a figure that would at least double if vegetation were included.
© PEPR NumPEx
The 2025 annual meeting of Exa-SofT
The 2025 Exa-SofT Annual Assembly took place from 19 to 21 October 2025, bringing together more than 60 researchers and engineers from academia and industry to discuss progress on scientific computing software, share results from work packages, and welcome the latest recruits.
Thursday, 19 October 2025
- General presentation of the project within its broader context (NumPEx), by Raymond Namyst (professor at University of Bordeaux) and Alfredo Buttari (CNRS research scientist)
- Overview of each Work Package (WP), by the WP leaders:
  - WP1 – Efficient and composable programming models, by Marc Pérache (CEA) and Christian Perez (Inria)
  - WP2 – Compilation and Automatic Code Optimization, by Philippe Clauss (Inria)
  - WP3 – Runtime Systems at Exascale, by Samuel Thibault (University of Bordeaux)
  - WP4 – Numerical Libraries, by Marc Baboulin (Université Paris-Saclay) and Abdou Guermouche (University of Bordeaux)
  - WP5 – Performance analysis and prediction, by François Trahay (Télécom SudParis)
  - WP6 – Digital for Exascale: Energy management, by Georges Da Costa (Université de Toulouse) and Amina Guermouche (Inria)
- Focus on 3 scientific results by recruits: Ugo Battiston (WP1 & WP2), Alec Sadler (WP2), Erwan Auer (WP2)
Friday, 20 October 2025
- Focus on 3 scientific results by recruits: Raphaël Colin (WP2), Thomas Morin (WP3), Karmijn Hoogveld (WP4)
- Talk by David Goudin, Eviden
- Introduction to our latest recruits, with 3 minutes each to introduce themselves and their research: Nicolas Ducarton (WP3), Brieuc Nicolas (WP4), Matthieu Robeyns (WP4), Samuel Mendoza (WP4), Jules Evans (WP6), Assia Mighis (WP6)
- 3 presentations of mini-apps by Exa-DI (20 minutes per mini-app: presentation + discussion on objectives, coding, status, needs, bottlenecks, and support from PC2):
  - Proxy-Geos, by Henri Calandra (TotalEnergies)
  - Dyablo, by Arnaud Durocher (CEA)
  - Unstructured mesh generation for Exascale systems: a proxy application approach, by Julien Vanharen (Inria)
- Breakout sessions for regular members / private exchanges for Board members, with Henri Calandra, Arnaud Durocher and Julien Vanharen: technical discussions between Exa-SofT members and mini-app developers
- Board meeting feedback to Exa-SofT leaders (session restricted to Board members, Exa-SofT leaders, and ANR)
Saturday, 21 October 2025
- Breakout sessions feedback
- Focus on 3 scientific results by recruits: Catherine Guelque (WP5), Jules Risse (WP5 & WP6), Albert d'Aviau (WP6)
- Final word: next milestones and deliverables, by Raymond Namyst (professor at University of Bordeaux) and Alfredo Buttari (CNRS research scientist)
Attendees
- Emmanuel Agullo, Inria
- Erwan Auer, Inria
- Ugo Battiston, Inria
- Marc Baboulin, Université Paris-Saclay
- Vicenç Beltran Querol, BSC
- Jean-Yves Berthou, Inria
- Julien Bigot, CEA
- Jérôme Bobin, CEA
- Valérie Brenner, CEA
- Elisabeth Brunet, Telecom SudParis
- Alfredo Buttari, CNRS
- Henri Calandra, TotalEnergies
- Jérôme Charousset, CEA
- Philippe Clauss, Inria
- Raphaël Colin, Inria
- Albert d’Aviau de Piolant, Inria
- Georges Da Costa, Université de Toulouse
- Marco Danelutto, University of Pisa
- Stéphane de Chaisemartin, IFPEN
- Alexandre Denis, Inria
- Nicolas Ducarton, Inria
- Arnaud Durocher, CEA
- Assia Mighis, CNRS
- Bernd Mohr, Jülich
- Thomas Morin, Inria
- Jules Evans, CNRS
- Clémence Fontaine, ANR
- Nathalie Furmento, CNRS
- David Goudin, Eviden
- Catherine Guelque, Telecom SudParis
- Abdou Guermouche, Université de Bordeaux
- Amina Guermouche, Inria
- Julien Herrmann, CNRS
- Valentin Honoré, ENSIIE
- Karmijn Hoogveld, CNRS
- Félix Kpadonou, CEA
- Jerry Lacmou Zeutouo, Université de Picardie
- Sherry Li, Lawrence Berkeley National Laboratory
- Marc Pérache, CEA
- Théo Mary, CNRS
- Samuel Mendoza, Inria
- Julienne Moukalou, Inria
- Raymond Namyst, Université de Bordeaux
- Brieuc Nicolas, Inria
- Alix Peigue, INSA
- Christian Perez, Inria
- Lucas Pernollet, CEA
- Jean-Marc Pierson, IRIT
- Pierre-Etienne Polet, Inria
- Marie Reinbigler, Inria
- Vincent Reverdy, CNRS
- Jules Risse, Inria
- Matthieu Robeyns, IRIT
- Alexandre Roget, CEA
- Philippe Swartvagher, Inria
- Boris Teabe, ENSEEIHT
- Samuel Thibault, Université de Bordeaux
- François Trahay, Telecom SudParis
- Julien Vanharen, Inria
- Jean-Pierre Vilotte, CNRS
- Pierre Wacrenier, Inria
© PEPR NumPEx









