Read the original version here (French)
Computations and algorithms running on supercomputers must be optimized to achieve the best possible performance. Théo Mary, a CNRS research scientist at the LIP6 laboratory (CNRS/Sorbonne University), develops approximations that make it possible to solve problems involving up to hundreds of millions of unknowns in a reasonable time and with controlled loss of precision.
Exact sciences sometimes have much to gain from well-chosen approximations. Théo Mary, a research scientist at the LIP6 Laboratory, enhances algorithmic performance through such methods. His contributions span fields like approximate computing, scientific computing, and high-performance computing (HPC), which powers supercomputers.
“With increasingly powerful machines, scaling up computations presents a major challenge,” explains Théo Mary. “The methodology of approximate computing uses approximations and relaxes certain constraints on precision or robustness, in order to save time, storage, and energy.”
Indeed, the HPC community is preparing for the arrival of exascale machines—supercomputers capable of exceeding one exaflop, or a billion billion floating-point operations per second. Current algorithms must be optimized to run efficiently on such systems. France’s first exascale supercomputer is expected to go online in 2025.
Théo Mary focuses on two main types of approximations: computational and mathematical, which he develops and combines. Computational approximations involve switching from standard 64-bit number formats to more compact formats such as 32-bit or 16-bit. Mathematical approximations compress data at the algorithmic level, often by leveraging structured matrices—such as sparse or low-rank matrices. The acceptable level of precision loss is defined by users’ concrete needs.
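To make these two families of approximation concrete, here is a minimal NumPy sketch (the synthetic matrix, the 1e-4 tolerance, and the variable names are illustrative assumptions, not taken from the projects discussed in this article): it stores the same matrix in 32-bit and 16-bit formats, then replaces it with a low-rank factorization accurate to a chosen tolerance.

```python
# Illustrative sketch only: a synthetic matrix, not data from any real application.
import numpy as np

rng = np.random.default_rng(0)

# Build a matrix whose singular values decay quickly, as often happens in practice.
n = 512
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = 10.0 ** -np.arange(n, dtype=float)        # fast singular-value decay
A = (U * s) @ V.T

# 1) Computational approximation: keep the same numbers in a more compact format.
A32 = A.astype(np.float32)                    # 64-bit -> 32-bit: half the storage
A16 = A.astype(np.float16)                    # 64-bit -> 16-bit: a quarter of the storage

# 2) Mathematical approximation: replace A by a low-rank factorization L @ R
#    that is accurate to a chosen tolerance.
tol = 1e-4
Uf, sf, Vtf = np.linalg.svd(A)
r = int(np.sum(sf > tol * sf[0]))             # numerical rank at tolerance tol
L, R = Uf[:, :r] * sf[:r], Vtf[:r, :]         # 2*n*r numbers instead of n*n


def rel_err(B):
    return np.linalg.norm(A - B) / np.linalg.norm(A)


print("relative error, 32-bit copy :", rel_err(A32))
print("relative error, 16-bit copy :", rel_err(A16))
print(f"relative error, rank-{r} form:", rel_err(L @ R))
print("low-rank compression ratio  :", 2 * n * r / (n * n))
```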
Théo Mary has made several contributions—both theoretical and practical—to the field of approximate algorithms. On one hand, he studies their numerical stability, ensuring they still deliver reliable results when values are rounded or approximated. On the other hand, he implements these algorithms on parallel supercomputers and optimizes their performance. His work has been applied to geophysics projects, including seismic imaging, as well as to fluid mechanics.
“I think, for instance, about how many significant digits we really need to retain in a calculation, knowing that traditional computations carry about sixteen of them,” explains Théo Mary. “For geophysical models, four digits are often enough. These compressions can lead to performance improvements by several orders of magnitude. If you can successfully approximate a quadratic-complexity function with a linear one, the speedup can be up to a thousandfold. In geophysics, within the WIND project, this allowed us to solve a problem involving 500 million unknowns—likely a world record—which required the use of 50,000 processor cores.”
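To put rough numbers on that quote, the short NumPy sketch below (pi and n = 1000 are arbitrary illustrative choices, not figures from the WIND project) relates each storage format to the number of significant digits it carries and shows what keeping only four of them costs in relative error.

```python
# Illustrative only: relating floating-point formats to significant digits.
import numpy as np

# Unit roundoff u of each format; it carries roughly -log10(u) significant digits.
for name, dtype in [("64-bit", np.float64), ("32-bit", np.float32), ("16-bit", np.float16)]:
    u = np.finfo(dtype).eps / 2
    print(f"{name}: unit roundoff {u:.1e} -> about {-np.log10(u):.0f} significant digits")

# Keeping only 4 significant digits, as is often enough for geophysical models:
x = np.pi
x4 = float(f"{x:.4g}")                 # 3.142
print(f"pi to 4 digits: {x4}, relative error {abs(x4 - x) / x:.1e}")

# Replacing an O(n^2) step by an O(n) one gives a speedup on the order of n,
# i.e. roughly a thousandfold for n = 1000.
n = 1000
print(f"quadratic -> linear complexity: ideal speedup on the order of {n}x")
```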
“I bring to these projects the benefits of approximate computing.”
This major achievement was made possible by the open-source software MUMPS (Multifrontal Massively Parallel Sparse Direct Solver), in which Théo Mary is deeply involved. MUMPS is a software library that has been used to solve sparse linear algebra problems for over 30 years. Théo has been contributing to it for about a decade, notably by introducing both mathematical and computational compression techniques. He has also recently joined the NumPEx Exascale Computing Research Program (PEPR NumPEx). This large-scale initiative aims to optimize HPC, artificial intelligence, and high-performance data analytics (HPDA) to fully harness the potential of the exascale era.
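MUMPS itself is called through its Fortran and C interfaces, and the compression features mentioned above are part of the library rather than shown here. As a simple stand-in, the sketch below uses SciPy's generic sparse direct solver on a small model Laplacian (an assumption chosen purely for illustration) to show the kind of problem such a solver addresses: a linear system Ax = b whose matrix is mostly zeros.

```python
# Illustrative stand-in: SciPy's generic sparse direct solver, not the MUMPS API.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# A small 2-D Laplacian: a classic sparse, symmetric positive definite test matrix.
n = 100                                   # grid is n x n, so n*n unknowns
I = sp.identity(n, format="csr")
T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csr")
A = sp.kron(I, T) + sp.kron(T, I)         # n^2 x n^2 sparse matrix
b = np.ones(A.shape[0])

lu = spla.splu(A.tocsc())                 # sparse LU factorization (direct method)
x = lu.solve(b)

print("unknowns:", A.shape[0], " nonzeros:", A.nnz)
print("relative residual:", np.linalg.norm(A @ x - b) / np.linalg.norm(b))
```

A direct solver factorizes the matrix once and can then solve for many right-hand sides; the approximations described above compress that factorization, which is what makes solves at the scale of hundreds of millions of unknowns feasible.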
These contributions have earned Théo Mary the CNRS Bronze Medal. “I’m deeply honored by this recognition,” he says. “I’m glad for the opportunity to highlight the topics I work on, and I firmly believe that approximate computing will offer promising solutions in the exascale era.”
Read the original version on CNRS Informatics