Postdoctoral position

Applications should be sent to Fabienne Jéquel, [email protected].

Context

Launched in 2023 for a duration of six years, the NumPEx PEPR aims to contribute to the design and development of the numerical methods and software components that will equip future European exascale and post-exascale machines. NumPEx also aims to help scientific and industrial applications fully exploit the potential of these machines.

Exa-MA aims to revolutionize methods and algorithms for exascale computing: discretization, solvers, learning and model-order reduction, inverse problems, optimization and uncertainty quantification. We are contributing to the software stack of future European exascale computers.

Mission

Nowadays, most numerical simulations are carried out in IEEE 754 binary64 (double precision) arithmetic [3]. This approach can be costly in terms of computing time, memory transfers and energy consumption. A better strategy is to use no more precision than needed to achieve the desired accuracy of the computed result. The challenge of mixed precision is to determine which variables may be represented in lower precision (such as single or half precision) and which ones should remain in higher precision (double precision).
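As a toy illustration of this trade-off (not part of the project codes): evaluating the smaller root of x^2 - 10^4 x + 1 = 0 with the textbook quadratic formula suffers catastrophic cancellation in binary32, while binary64 still delivers many correct digits. The `f32` helper below simulates binary32 by rounding every intermediate result.

```python
import math
import struct

def f32(x):
    """Round a Python float (binary64) to the nearest binary32 value."""
    return struct.unpack('f', struct.pack('f', x))[0]

def small_root(a, b, c, rnd=lambda x: x):
    """Smaller root of a*x**2 - b*x + c = 0 (b > 0) via the textbook
    formula, rounding every intermediate result with `rnd`."""
    disc = rnd(rnd(b * b) - rnd(4.0 * a * c))
    return rnd(rnd(b - rnd(math.sqrt(disc))) / 2.0)

print(small_root(1.0, 1e4, 1.0))           # binary64: ~1.00000001e-04
print(small_root(1.0, 1e4, 1.0, rnd=f32))  # binary32: 0.0 (total cancellation)
```

In binary32 the discriminant 10^8 - 4 rounds to 10^8 (the spacing between consecutive binary32 values near 10^8 is 8), so the subtraction b - sqrt(disc) cancels exactly and every significant digit of the root is lost.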
Several precision auto-tuning tools, such as FloatSmith [5], Precimonious [6] and PROMISE [2, 4, 1], have been proposed. From an initial user program, PROMISE (PRecision OptiMISEd) automatically modifies the precision of variables, taking into account an accuracy requirement on the computed results. To estimate the numerical quality of the results, PROMISE uses Discrete Stochastic Arithmetic (DSA) [7], which controls round-off errors in simulation programs. The search for a suitable precision configuration is performed with a reasonable complexity thanks to the Delta Debug algorithm [8], based on a hypothesis-trial-result loop.
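The hypothesis-trial-result principle can be sketched as follows. This is a deliberately naive illustration, not the actual PROMISE implementation: the toy program, its variables and the greedy one-variable-at-a-time loop are invented for the example, a double-precision run stands in for DSA as the accuracy reference, and the real Delta Debug algorithm tests groups of variables to reduce the number of trials.

```python
import struct

def f32(x):
    """Round a binary64 value to the nearest binary32 value."""
    return struct.unpack('f', struct.pack('f', x))[0]

# Toy "program": a sum whose variables may individually be stored in binary32.
VALUES = {'a': 1 / 3, 'b': 2 / 3, 'c': 0.1, 'd': 0.5}

def run(lowered):
    return sum(f32(v) if name in lowered else v for name, v in VALUES.items())

REFERENCE = run(set())          # all-double result, used as accuracy reference

def passes(lowered, tol=1e-10):
    return abs(run(lowered) - REFERENCE) <= tol

def tune(variables):
    """Greedy hypothesis-trial-result loop: try to lower one variable at a
    time, keeping a change only if the accuracy requirement still holds."""
    lowered = set()
    for v in variables:
        if passes(lowered | {v}):   # hypothesis: v may be single precision
            lowered.add(v)          # trial succeeded: keep it lowered
    return lowered

print(tune(VALUES))  # only 'd' (= 0.5, exact in binary32) can be lowered
```

Here 1/3, 2/3 and 0.1 are all inexact in binary32, so lowering any of them violates the 1e-10 accuracy requirement, whereas 0.5 is exactly representable and can safely be stored in single precision.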

During this postdoc, several directions will be explored to improve algorithms for precision auto-tuning and numerical validation.

We plan to design a novel auto-tuning algorithm that will automatically produce arbitrary precision codes, given a required accuracy on the computed results. Because of the number of possible type configurations, particular attention will be paid to the performance of the algorithm.
The type configurations produced will then make it possible to reduce storage cost, as well as execution time, taking into account the numerical formats available on the target architectures.
We plan to combine mixed precision algorithms and precision auto-tuning tools. Such automatic tools may be useful in the design of mixed precision linear algebra algorithms.
Conversely, the performance of precision auto-tuning tools may be improved thanks to mixed precision algorithms. Linear algebra kernels could be automatically identified in simulation codes and replaced by their mixed precision versions, in order to reduce the exploration space for precision tuning.
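A classic example of such a mixed precision linear algebra kernel is iterative refinement: the expensive factorization and triangular solves are done in binary32, while residuals are accumulated in binary64 to recover double-precision accuracy. The sketch below uses NumPy for illustration only; the project targets far larger, architecture-aware implementations.

```python
import numpy as np

def mixed_precision_solve(A, b, iters=5):
    """Solve A x = b: cheap binary32 solves, binary64 residual correction.
    (A real implementation would factor A32 once and reuse the LU factors;
    np.linalg.solve refactors at each call, which keeps this sketch short.)"""
    A32 = A.astype(np.float32)
    x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)
    for _ in range(iters):
        r = b - A @ x                                   # residual in binary64
        d = np.linalg.solve(A32, r.astype(np.float32))  # correction in binary32
        x = x + d.astype(np.float64)
    return x

rng = np.random.default_rng(0)
n = 100
A = rng.standard_normal((n, n)) + n * np.eye(n)   # well-conditioned test matrix
b = rng.standard_normal(n)
x = mixed_precision_solve(A, b)
print(np.linalg.norm(A @ x - b) / np.linalg.norm(b))  # ~double-precision residual
```

For a well-conditioned matrix, each refinement step reduces the error by a factor of roughly the binary32 unit round-off times the condition number, so a few iterations recover full binary64 accuracy while the dominant cost stays in single precision.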
The precision auto-tuning algorithms designed during this postdoc will be validated on large-scale programs developed by partners of the NumPEx project. Furthermore, new methodologies will be proposed to auto-tune both numerical formats and performance parameters, in collaboration with experts in coupled physics simulations.

Required Skills

Candidates must have a PhD in Computer Science, Applied Mathematics or another relevant field, with good programming skills. Developments will be carried out in C++ and Python, so programming expertise in at least one of these languages is required. Good knowledge of numerical algorithms and floating-point computation is also required.

More information and references