The second co-design and co-development workshop of Exa-DI on "Block-structured AMR @Exascale"

The second co-design/co-development workshop of the Exa-DI project (Development and Integration) of the PEPR NumPEx was dedicated to the computation and communication motif “Block-structured AMR @Exascale”. It took place on February 6 and 7, 2024 at the “Grand Amphi” of the “Institut de Physique du Globe de Paris” in Paris.

This face-to-face workshop brought together, for two days, Exa-DI members, members of the other NumPEx projects (Exa-MA: Methods and Algorithms for Exascale, Exa-SoFT: HPC Software and Tools, Exa-DoST: Data-oriented Software and Tools for the Exascale and Exa-AToW: Architectures and Tools for Large-Scale Workflows), application demonstrators (ADs) from various research and industry sectors, and experts to discuss advancements and future directions for block-structured AMR at exascale.


This workshop is the second in a series of co-design/co-development workshops whose main objective is to promote software stack co-development strategies to accelerate exascale development and performance portability of computational science and engineering applications. Discussions covered challenges in the co-design and co-development process, key questions and the most urgent issues for collective exploration to build links across NumPEx and the applications, and initiatives promoting exascale software stack sustainability, emphasizing collaboration and innovation.

Key sessions included

  • Introduction and Context: Setting the stage for the workshop’s main theme.
  • Attendees Self-Introduction: Allowing attendees to introduce themselves and their interests.
  • Various Technical Sessions: These sessions featured talks on topics such as exascale performance evaluation and advancements in exascale simulations for applications including astrophysics, flame fronts and gas/liquid interfaces, as well as long molecular dynamics simulations with polarizable force fields. In addition, two experts gave presentations on the SAMURAI and Hercule libraries, and a developer of the massively parallel open-source WarpX Particle-In-Cell code presented his feedback on the AMReX framework.
  • Discussions and RoundTables: These sessions provided opportunities for attendees to engage in discussions and share insights on the presented topics.

Invited speakers

  • Jean-Pierre Vilotte from CNRS, member of Exa-DI, who provided the introductory context for the workshop.
  • Maxime Delorme & Arnaud Durocher from CEA, presenting Dyablo, an AMR code for astrophysics simulations in the exascale era.
  • Loic Strafella from École Polytechnique, discussing optimizing I/O performance for AMR codes.
  • Igor Chollet from Sorbonne Université, presenting ANKH, a scalable alternative to FFT-based approaches for energy computation on accelerator-based exascale architectures.
  • Loic Gouarin from École Polytechnique, presenting SAMURAI: Structured Adaptive mesh and Multi Resolution on Algebra of Intervals.
  • Luca Fedeli from CEA, discussing the implementation of AMReX for WarpX, a Particle-In-Cell code for the exascale era.
  • Vincent Moureau from CNRS addressing Dynamic Mesh Adaptation of massive unstructured grids for the simulation of flame fronts and gas/liquid interfaces.

Outcomes and impacts

A very interesting and stimulating outcome discussed and decided during this workshop is the set-up of a working group addressing a suite of shared and well-specified proxy-apps and mini-apps for this co-design computation and communication motif. Several teams of ADs have expressed their interest in participating in this working group, which is being formed and whose first meeting should take place soon.

The discussions allowed us to determine the different goals of this working group. In particular, the criteria for the common mini-apps and proxy-apps to be built were defined. They have to (i) represent algorithms, data structures and layouts, and other computational and communication characteristics across the different application demonstrators, (ii) leverage and integrate logical suites of software components (libraries, frameworks, tools), (iii) measure interoperability levels, performance gains and/or trade-offs between components, performance portability, scalability and software quality, and (iv) develop collaborative and shared continuous integration and benchmarking methodologies with standardized performance tools to guide optimizations, together with reference meta-data and specification models.

The second main goal of this working group, that is also a main goal of the workshop series, is to identify the human resources and expertise in the Computational and Data Team (CDT) that Exa-DI needs to deploy. In the co-design/co-development process, the CDT will ensure the interface between the NumPEx projects and the AD teams to support the co-design and co-development of the mini-apps and proxy-apps suite, together with reference data models for sharing specifications and benchmarking/testing results.

Attendees

  • Jean-Pierre Vilotte, CNRS and member of Exa-DI
  • Valérie Brenner, CEA and member of Exa-DI
  • Jérôme Bobin, CEA and member of Exa-DI
  • Mark Asch, Université de Picardie Jules Verne and member of Exa-DI
  • Julien Bigot, CEA and member of Exa-DI
  • Karim Hasnaoui, CNRS and member of Exa-DI
  • Christophe Prud’homme, Université de Strasbourg and member of Exa-MA
  • Hélène Barucq, Inria and member of Exa-MA
  • Isabelle Ramière, CEA and member of Exa-MA
  • Vincent Faucher, CEA and member of Exa-MA
  • Christian Perez, Inria and member of Exa-MA
  • Raymond Namyst, Université de Bordeaux and member of Exa-SoFT
  • Alfredo Buttari, CNRS and member of Exa-SoFT
  • Marius Garenaux, Université de Rennes and member of Exa-AToW
  • Olivier Martineau, Université de Rennes and member of Exa-AToW
  • Vincent Moureau, CNRS and application demonstrator
  • Maxime Delorme, CEA and application demonstrator
  • Arnaud Durocher, CEA and application demonstrator
  • Allan Sacha, CEA and application demonstrator
  • Damien Chapon, CEA and application demonstrator
  • Grégoire Doeble, CEA and application demonstrator
  • Dominique Aubert, Université de Strasbourg and application demonstrator
  • Olivier Marchal, Université de Strasbourg and application demonstrator
  • Igor Chollet, Université Paris 13 and application demonstrator
  • Jean Philippe Piquemal, Sorbonne Université and application demonstrator
  • Louis Lagardère, Sorbonne Université and application demonstrator
  • Olivier Adjoua, Sorbonne Université and application demonstrator
  • Stefano Frambati, Total Energies and application demonstrator
  • Luca Fedeli, CEA
  • Loic Strafella, École polytechnique
  • Loic Gouarin, CNRS
  • Marc Massot, École polytechnique
  • Pierre Matalon, École polytechnique
  • Geoffroy Lesur, CNRS and member of the PEPR Origins



The first co-design and co-development workshop of Exa-DI on "Efficient Discretisation for PDE@Exascale"

The first co-design/co-development workshop of the Exa-DI project (Development and Integration) of the PEPR NumPEx had the topic “Efficient Discretisation for PDE@Exascale” and took place on November 7 and 8, 2023 at the Amphithéâtre J. Talairach (Neurospin) at CEA Saclay in Gif-sur-Yvette.

This face-to-face workshop brought together, for two days, Exa-DI members, members of the other NumPEx projects (Exa-MA: Methods and Algorithms for Exascale, Exa-SoFT: HPC Software and Tools, Exa-DoST: Data-oriented Software and Tools for the Exascale and Exa-AToW: Architectures and Tools for Large-Scale Workflows), application demonstrators (ADs) from various research and industry sectors, and experts to discuss advancements and future directions for efficient discretisation of physics-based partial differential equations (PDEs) at exascale.


This workshop is the first in a series of co-design/co-development workshops whose main objective is to promote software stack co-development strategies to accelerate exascale development and performance portability of computational science and engineering applications. Discussions covered challenges in the co-design and co-development process, key questions and the most urgent issues for collective exploration to build links across NumPEx and the applications, and initiatives promoting exascale software stack sustainability, emphasizing collaboration and innovation.

Key sessions included

  • Introduction and Context: Setting the stage for the workshop’s main theme.
  • Attendees Self-Introduction: Allowing attendees to introduce themselves and their interests.
  • Various Technical Sessions: These sessions featured talks on topics such as exascale performance evaluation and advancements in exascale simulations for applications including a durable aircraft prototype, CO2 sequestration, turbomachinery, Earth dynamo simulations, dynamic energy simulation for urban buildings, structural and fluid mechanics simulations, geoscience simulations and, finally, plasma turbulence simulations. In addition, an expert gave a presentation on Kokkos.
  • Discussions and RoundTables: These sessions provided opportunities for attendees to engage in discussions and share insights on the presented topics.

Invited speakers

  • Jean-Pierre Vilotte from CNRS, member of Exa-DI who provided the introductory context for the workshop.
  • Eric Savin from ONERA, discussing exascale performance evaluation for a durable aircraft prototype.
  • Henri Calandra from TotalEnergies, discussing exascale multiphysics simulators for CO2 sequestration and monitoring.
  • Christian Trott from SNL, presenting on Kokkos.
  • Julien Vanharen & Loic Marechal from Inria, addressing exascale simulations for turbomachinery.
  • Nathanaël Schaeffer & Hugo Frezat from CNRS, exploring machine learning applications in Earth dynamo simulations.
  • Vincent Chabannes & Christophe Prud’homme from Université de Strasbourg, discussing dynamic energy simulation for urban buildings.
  • Olivier Jamond from CEA, presenting a new generation HPC PDE solver targeting industrial applications in structural and fluid mechanics, the MANTA project.
  • Soleiman Yousef from IFP Energies nouvelles, discussing performance issues in geoscience applications.
  • Virginie Grandgirard from CEA, discussing the GYSELA code for plasma turbulence simulations.

Outcomes and impacts

A very interesting and stimulating outcome discussed and decided during this workshop is the set-up of a working group addressing a suite of shared and well-specified proxy-apps and mini-apps for this co-design computation and communication motif. Several teams of ADs have expressed their interest in participating in this working group, which is being formed and whose first meeting should take place next January.

The discussions allowed us to determine the different goals of this working group. In particular, the criteria for the common mini-apps and proxy-apps to be built were defined. They have to (i) represent algorithms, data structures and layouts, and other computational and communication characteristics across the different application demonstrators, (ii) leverage and integrate logical suites of software components (libraries, frameworks, tools), (iii) measure interoperability levels, performance gains and/or trade-offs between components, performance portability, scalability and software quality, and (iv) develop collaborative and shared continuous integration and benchmarking methodologies with standardized performance tools to guide optimizations, together with reference meta-data and specification models.

The second main goal of this working group, which is also a main goal of the workshop series, is to identify the human resources and expertise in the Computational and Data Team (CDT) that Exa-DI needs to deploy. In the co-design/co-development process, the CDT will ensure the interface between the NumPEx projects and the AD teams to support the co-design and co-development of the mini-apps and proxy-apps suite, together with reference data models for sharing specifications and benchmarking/testing results.

Attendees

  • Jean-Pierre Vilotte, CNRS and member of Exa-DI
  • Valérie Brenner, CEA and member of Exa-DI
  • Jérôme Bobin, CEA and member of Exa-DI
  • Mark Asch, Université de Picardie Jules Verne and member of Exa-DI
  • Julien Bigot, CEA and member of Exa-DI
  • Karim Hasnaoui, CNRS and member of Exa-DI
  • Christophe Prud’homme, Université de Strasbourg and member of Exa-MA
  • Hélène Barucq, Inria and member of Exa-MA
  • Guillaume Latu, CEA and member of Exa-MA
  • Raymond Namyst, Université de Bordeaux and member of Exa-SoFT
  • Joshua Bowen, Inria and member of Exa-DoST
  • Christian Robert Trott, Sandia National Laboratories
  • Virginie Grandgirard, CEA and application demonstrator
  • Soleiman Yousef, IFPEN and application demonstrator
  • Stéphane de Chaisemartin, IFPEN and application demonstrator
  • Ani Anciaux Sedrakian, IFPEN and application demonstrator
  • Julien Vanharen, Inria and application demonstrator
  • Loic Marechal, Inria and application demonstrator
  • Nathanaël Schaeffer, CNRS and application demonstrator
  • Hugo Frezat, CNRS and application demonstrator
  • Eric Savin, Onera and application demonstrator
  • Denis Gueyffier, Onera and application demonstrator
  • Henri Calandra, Total Energies and application demonstrator
  • Stefano Frambati, Total Energies and application demonstrator
  • Olivier Jamond, CEA and application demonstrator
  • Nicolas Lelong, CEA and application demonstrator
  • Vincent Chabannes, Université de Strasbourg and application demonstrator


The world's most powerful supercomputer coming soon? Elon Musk's optimistic bet with Dojo

Article originally published on the "L'Usine nouvelle" website here

Jules Verne, Frontier, Jupiter… They all have one thing in common: they’re supercomputers, capable of performing 1 billion billion floating-point operations per second. And they will soon be joined by Dojo, Elon Musk’s supercomputer, via his Tesla company. Dojo is meant to be the world’s most powerful supercomputer and will be used to train the artificial intelligence models behind the autopilot systems in Tesla vehicles.  But with commissioning scheduled for late 2024, will the billionaire’s ambition be achievable?

In this L’Usine nouvelle article, Inria researcher Jean-Yves Berthou and CEA researcher Jérôme Bobin, both program directors of the NumPEx PEPR, discuss the subject. For the two researchers, the project is not impossible but will undoubtedly involve compromises. Will Tesla be able to assemble all the hardware needed to build the supercomputer by October 2024? What will the supercomputer’s real capabilities be?

Photo credit Imgix / UnSplash


The return of the supercomputer race between the United States, China, and Europe

Article originally published on the "Le Monde" website here

Today’s booming digital technologies require the processing and storage of vast amounts of data. For this, supercomputers are essential. The development of artificial intelligence, machine learning and quantum computing all depend on these structures. Whether it’s the rapid development of medicines and vaccines, improving the aerodynamics of aircraft and other vehicles to reduce energy consumption, or simulating a nuclear explosion for deterrence purposes, the applications are diverse. But above all, they raise political and societal stakes for the major powers: Europe, the USA and China are engaged in a frantic race for supercomputers.

© Christian MOREL / LISN / CNRS Images


NumPEx BOF@SC23

The International Conference for High Performance Computing, Networking, Storage, and Analysis (SC23) will take place in Denver from November 12 to 17, 2023.

During this conference, a Birds of a Feather (BoF) session connected to the NumPEx program is scheduled, allowing conference attendees to openly discuss current topics of interest to the HPC community.

Several (trans-)national initiatives have recognized the crucial importance of co-design between hardware, software and application stakeholders on the path to Exascale and beyond. It is seen as indispensable for the efficient exploitation of Exascale computing resources in the development of large-scale application demonstrators, but also to prepare complex applications to fully exploit the capacity of Exascale and post-Exascale systems. Among these projects, we can cite the French NumPEx project (41M€), the EuroHPC program, the ECP project in the US and the FugakuNEXT project in Japan. However, these efforts are somewhat disconnected, while the community would benefit from sharing lessons learned and common know-how, and from advancing on the new problems arising as Exascale machines become more and more available.

Building on earlier efforts of the International Exascale Software Project (IESP), the European EXtreme Data and Computing Initiative (EXDCI), and the BDEC community, we will work on the implementation of an international, shared, high quality computing environment that focuses on co-design principles and practices. US, European and Japanese partners have already met and identified a set of areas for international coordination (among others):

  • Software production and management: packaging, documentation, builds, results, catalogs, continuous integration, containerization, LLVM, parallel tools, etc.
  • Software sustainability
  • Future and disruptive SW&HW technologies and usages (investments and roadmaps)
  • Mapping of missing capabilities (driven by both Apps and SW)
  • Roadmap of near-term HW targets
  • HPC/AI convergence: ML, open models and datasets for AI training,
  • FAIR data stewardship
  • Digital Continuum and Data management
  • Benchmarks and evaluation, co-design (HW, SW, applications)
  • Energy and environmental impact and sustainability
  • Collaboration/Partnership factory: establish collaborations at international level
  • Training

The proposed BoF will offer an overview of the different Exascale programs and initiatives from the perspective of co-design (mixing application, software-stack and hardware perspectives). Then, international partners representing major computing centers from Europe, the USA and Japan will present and discuss the common issues and questions.

BoF leaders will seek feedback from attendees on the objectives of co-design and the efficient exploitation and coordination of existing efforts in Europe, the US and Japan, dedicated to exascale. BoF participants will be invited to share their views on the above problems and issues. We will also discuss candidates for collaborative application demonstrators, of which the LHC project of CERN, IPCC climate models, and the SKA project are some representative examples. Finally, we will call for contributions from participants.

At the BoF, a panel of experts from NumPEx, ECP, Riken-CC, BSC, JSC and from selected application communities, will raise issues and solicit input in each of their respective areas. Representatives of all the involved funding agencies will be invited to participate and contribute.

The ultimate goal of this BoF is to launch a new series of workshops dedicated to international collaborations between Europe, USA and Japan on Exascale and post-Exascale computing.




Unleashing the Power of Exascale Computing: Genci's 2022 Activity Report

We are thrilled to present the highly anticipated activity report of Genci for the year 2022.

As a leading organization responsible for providing powerful computational and data processing resources, Genci has been instrumental in driving scientific research and innovation at both the national and European levels.

With a mission to promote the use of supercomputing coupled with artificial intelligence, Genci has made significant strides in benefitting scientific research communities, academia, and industrial sectors. Join us as we explore the remarkable achievements showcased in this 68-page report.

Launching Innovative Programs and Initiatives:

Genci’s commitment to pushing the boundaries of computational capabilities is evident through the launch of several groundbreaking programs and initiatives. The report highlights key projects, such as:

  1. NumPEx: The NumPEx initiative aims to harness the power of supercomputing and AI to drive scientific progress. By providing researchers with cutting-edge computational resources, Genci enables them to tackle complex challenges across various scientific domains.
  2. Jules Verne Consortium for Exascale: Genci’s partnership in the Jules Verne Consortium demonstrates their dedication to advancing exascale computing. This collaboration fosters innovation and propels research in areas that were once unimaginable.
  3. CLUSSTER Project: The CLUSSTER project focuses on integrating cloud computing solutions into Genci’s infrastructure. By embracing the cloud, Genci enhances flexibility and scalability, enabling researchers to tackle data-intensive workloads with ease.
  4. New Supercomputer “Adastra”: Genci’s introduction of the state-of-the-art supercomputer “Adastra” marks a significant milestone. With its remarkable computational power, Adastra empowers researchers to tackle complex simulations, accelerate data analysis, and drive scientific breakthroughs.

Driving Quantum Computing Advancements:

Genci recognizes the immense potential of quantum computing and has made significant progress in this field. The report highlights notable achievements, including:

  1. National Hybrid Quantum Computing Platform: Genci has played a pivotal role in launching this platform. This initiative fosters collaboration and enables researchers to explore the capabilities of quantum computing in solving real-world problems.
  2. Integration of Quantum Systems: Genci has acquired its first quantum systems, marking a significant step towards enabling researchers to harness the power of quantum computing. These systems pave the way for groundbreaking research and innovation in quantum-enabled applications.
  3. The Quantum Package: Genci’s Quantum Package (Pack Quantique) provides researchers with the necessary tools and resources to explore hybrid quantum computing systems. This initiative promotes the development of novel algorithms and applications that bridge classical and quantum computing.

Advancements in Artificial Intelligence:

Genci has embraced the transformative potential of artificial intelligence, as highlighted in the report:

Bloom Model: Genci’s Bloom Model showcases their efforts to develop cutting-edge AI algorithms and frameworks. By combining supercomputing with AI, Genci facilitates breakthrough research in machine learning, deep learning, and data analytics.

Contributing to Scientific Research and Industry:

Genci is dedicated to supporting scientific research communities, academia, and industrial sectors through different initiatives, as exemplified by their efforts in:

  1. Reusing Waste Heat: Genci’s innovative approach includes the valorization of waste heat generated by the Jean Zay supercomputer. This environmentally friendly initiative showcases Genci’s dedication to sustainability and efficient resource utilization.
  2. Grands Challenges: Genci actively supports researchers in tackling grand challenges, providing them with the computational resources needed to address complex problems across diverse scientific disciplines.
  3. Exemplary Simulations: The report presents compelling examples of simulations conducted with Genci’s resources, showcasing the impactful discoveries and advancements made possible through their support.
  4. Community of Large Industrial Groups: Genci’s collaboration with large industrial groups highlights their commitment to bridging the gap between academia and industry. By fostering partnerships, Genci facilitates the transfer of cutting-edge research and technological advancements into real-world applications.

Genci’s Regional and European Ecosystem:

The report highlights Genci’s active involvement in regional and European initiatives:

  1. Regional Initiatives: Genci actively contributes to regional development through initiatives like SiMSEO, Competence Center, and MesoNet. These programs encourage cooperation among research institutions and industries, which promotes innovation and contributes to economic growth.
  2. European Collaborations: Genci’s participation in European collaborations, such as PRACE, EuroHPC, EUPEX, and EPI SGA, underscores their commitment to establishing a strong European ecosystem for high-performance computing. These collaborations facilitate knowledge exchange, resource sharing, and foster a vibrant European research community.

The 2022 Activity Report by Genci demonstrates their commitment to empowering scientific research and driving innovation by integrating exascale computing, artificial intelligence, and quantum computing.

Through the launch of groundbreaking programs, the introduction of cutting-edge technologies, and collaborations with research communities and industry, Genci has made significant contributions to advancing scientific frontiers.

Their commitment to sustainable practices and regional and European partnerships further solidifies their position as a leading provider of computational resources.

As we look to the future, Genci continues to pave the way for transformative discoveries and breakthroughs in scientific research and technological innovation.



NumPEx Launches into Action with an Ambitious Kick-Off Agenda in Perros-Guirec

In a series of dynamic sessions hosted from June 26th to 28th in the charming town of Perros-Guirec, NumPEx embarked on an intensive kick-off event, setting the stage for a transformative journey in Exascale computing. Leaders, experts, and collaborators convened to delve into an agenda rich with insights, workshops, and collaborative initiatives.

All the NumPEx Kick-Off participants

The kick-off began with a comprehensive introduction outlining the objectives and significance of the NumPEx program, which aims to establish a common vision and foster collaboration to implement a coherent software stack and related processes by 2025, benefiting not only France but also Europe, in preparation for the Exascale machine. Key figures such as Jérôme Bobin, Michel Daydé, and Jean-Yves Berthou elaborated on the program's goals and organizational structure. Board members shared their perspectives on the Exascale vision and roadmaps:

GENCI's Exascale Vision and Roadmap:

  • Presentation of GENCI's role and missions, including hosting the Exascale project for EuroHPC.
  • European HPC initiative partnership with EuroHPC and others, leveraging PRACE and GEANT.
  • Introduction of the Jules Verne consortium, highlighting international and industrial partnerships.
  • Vision of the European Exascale machine: addressing societal challenges, fostering innovation, and emphasizing HPC/AI data-centric convergence.
  • Collaboration plans with NumPEx, including building a functional program, benchmark development, and product promotion.

Eviden Exascale Vision and Roadmap:

  • Eviden's comprehensive approach involving HPC, HPDA, AI, and quantum technologies, with a focus on sovereign and European components.
  • Involvement in the European integrated processor for Exascale machines (SiPearl) and collaborations with various technology projects.
  • Collaboration with CEPP for application support and participation in technology projects related to Exascale, quantum, cloud, and more.

National and European Ecosystem:

  • Introduction of EUPEX, a 4-year project with a budget similar to NumPEx, aiming to deploy a modular Exascale system using the OpenSequana architecture.
  • Collaboration with NumPEx, potential for shared experiments and results, and exploration of common dissemination.
  • Presentation of Data Direct Network (DDN) with a focus on AI and Lustre parallel file system, highlighting challenges and the importance of understanding NumPEx applications.

The afternoon continued with a tour of the five projects (PCs) within the NumPEx program:

  • Exa-MA, which aims to design scalable algorithms and numerical methods for forthcoming exascale machines. Led by Christophe Prud'homme (Université de Strasbourg) and Hélène Barucq (Inria).
  • Exa-SoFT, to develop a coherent, portable, efficient, and resilient software stack for exascale. Led by Raymond Namyst (Inria) and Alfredo Buttari (CNRS - Centre national de la recherche scientifique).
  • Exa-DoST, to overcome challenges relating to data, notably storage, I/O, in situ processing, and smart analytics, in exascale supercomputers. Led by Gabriel Antoniu (Inria) and Julien Bigot (CEA).
  • Exa-AToW, to deal with large-scale workflows involving exascale machines. Led by François Bodin (Université de Rennes), Mark Asch (Université de Picardie Jules Verne (UPJV)), and Thierry Deutsch (CEA).
  • Exa-DI, to ensure transverse co-design and software productivity for exascale supercomputers. Led by Jean-Pierre Vilotte (CNRS) and Valérie Brenner (CEA).

The day concluded with an emphasis on the collaborative efforts between NumPEx and other initiatives, with a focus on benchmark development, software-hardware links, and the overall goal of preparing for the challenges of the Exascale era.


The second day kicked off with an invigorating early morning jog along the seashore, setting a vibrant tone for a day filled with thematic workshops. Participants engaged in focused discussions on energy synergies, GPU integration, applications, co-design, gender/diversity/equity, software production integration, training, resilience, international collaborations, and artificial intelligence. Thematic workshops, led by domain experts, fostered collaboration within smaller groups, emphasizing the program's commitment to a transverse approach to Exascale challenges.



The final day commenced with a synthesis of workshop outcomes, highlighting the depth of discussions within each thematic area. Workshop leaders consolidated insights, offering a panoramic view of challenges and opportunities. Here is an overview of the key insights and strategic actions discussed during these workshops:

GPU Accelerators Workshop

In a dedicated workshop on GPU Accelerators, experts emphasized the pivotal role of Graphics Processing Units (GPUs) in achieving exascale computing. With 90-99% of large machine performance attributed to GPU acceleration, the workshop highlighted the need for applications to explore the potential of these powerful processors. Challenges discussed included new programming paradigms, code portability, data management, and the hardware landscape driven by gaming and artificial intelligence. The workshop outlined a comprehensive plan, including future workshops, analysis papers, tutorials, hackathons, and examples of successfully ported mini-apps.

Energy Workshop

The Energy Workshop focused on achieving Exascale computing within a power consumption limit of 20MW. Experts delved into environmental, scientific, technical, and societal dimensions, providing a roadmap for the HPC community. Key challenges identified included modeling system consumption, real-time measurement tools, resource prioritization based on societal impact, and the broader environmental impact of research activities. The action plan involves developing a performance and consumption model, optimization strategies, tools for users, and fostering links with external entities to incorporate energy considerations.

Gender Equity and Diversity Seminar

The action plan includes the establishment of a Code of Conduct, assessment of gender distribution, creation of a web platform for resources, education and training initiatives, awareness and outreach programs, and dedication to accessibility and recognition. NumPEx aims to create an inclusive and collaborative future, inviting all stakeholders to contribute to the initiatives.

AI Workshop

The AI Workshop explored the critical intersection of HPC and AI, addressing challenges and outlining a strategic plan for collaborative exploration. Key discussions included decision support tools for AI applications in HPC, optimizing runtimes for AI models, and converging HPC and AI usages. The action plan involves establishing an AI Working Group, conducting transversal workshops, and developing fundamental building blocks for a convergent future.

Training Strategies Workshop

The Training Strategies Workshop addressed the complexities of training in the context of the impending exascale era. Discussions included the scope and subjects of training programs, the creation of sustainable training models, and economic considerations in training initiatives. The workshop emphasized collaborative and inclusive training initiatives to prepare the scientific community for the challenges and opportunities of exascale computing.

International Collaborations Workshop

The International Collaborations Workshop focused on identifying challenges and setting objectives for enhanced collaborative frameworks on a European and global scale. Discussions covered scientific and technological challenges, the design and development of the exascale software stack, and strategic action plans. The outlined roadmap includes hosting workshops, exchanging insights and experiences, and strengthening collaborations with international entities.

National Centers Integration Workshop

The National Centers Integration Workshop aimed to align NumPEx with HPC infrastructures, emphasizing operational elements between computing centers and NumPEx's targeted projects. Discussions covered operational assessment, cybersecurity, job profiling, and traceability. The workshop set a plan for regular video conferences, ensuring ongoing communication and collaboration.

Software Production Workshop

The Software Production Workshop focused on streamlining software development practices in the HPC domain. Challenges discussed included bridging silos, enforcing good practices, and amplifying impact. Insights and conclusions highlighted diverse development practices, sustainability models, and the deployment of continuous integration and certification. NumPEx's commitment to advancing software production practices aims to foster innovation, collaboration, and sustainable development in HPC.

Exascale Resilience Workshop

The Exascale Resilience Workshop navigated complexities associated with exascale application deployment. Discussions covered diverse approaches across NumPEx PCs, key challenges, and strategic choices. The action plan involves listing and analyzing application needs, analyzing barriers to library adoption, and scrutinizing international solutions. NumPEx aims to foster collaborative solutions for enhanced application resilience at a global scale.

Applications and Co-Design Workshop

The Applications and Co-Design Workshop promoted co-development strategies for advanced application development. Discussions included challenges in co-design, key questions for collective exploration, building connections, and initiatives for sustainability. The workshop set the stage for upcoming co-development project workshops, emphasizing collaboration and innovation.

As the leaders bid farewell to Perros-Guirec, NumPEx looks ahead to transforming shared visions and insights into tangible actions in the realm of Exascale computing. The kick-off marked the initiation of a collaborative journey, and NumPEx is poised to lead the charge in scientific innovation.

For the latest updates and progress on the NumPEx program, stay tuned to our news section. The journey to Exascale has begun, and NumPEx is at the forefront of this pioneering expedition.



The French and Dutch governments welcome the decision of the EuroHPC joint venture to host and operate a new European Exascale supercomputer in France

Article originally published on the enseignementsup website here

Once acquired by EuroHPC, the supercomputer will be hosted at CEA's TGCC by the end of 2025. The Jules Verne consortium aims to deploy a world-class Exascale supercomputer based on European technologies. It will address major societal and scientific challenges such as climate change, new materials, and personalized medicine. The total cost amounts to 542 million euros, funded by EuroHPC, France, and the Netherlands. The NumPEx program will contribute to software development for these machines.

“The approval by EuroHPC of the Jules Verne consortium’s application is excellent news for French and European research. It marks a significant step forward in securing funding for an Exascale-class supercomputer, with a total value of 542 million euros. These computing resources will be necessary to tackle the scientific and technological challenges ahead of us, such as climate change, energy transition, or healthcare,” said Sylvie Retailleau, French Minister of Higher Education and Research.

Robbert Dijkgraaf, Dutch Minister of Education, Culture and Science, added:

“It’s excellent news that the European scientific community, led by France and the Netherlands, has joined forces to build the supercomputer proposed by the Jules Verne consortium. Europe is thus reaffirming its position in the global research arena. […] With this immense computing power, scientists have a glimpse of the future, enabling them to help solve fundamental societal problems in areas such as healthcare or the fight against climate change.”

Photo credit: Chris Liverani/Unsplash



What is exascale ?

In today’s world, information has become an essential resource. Massive amounts of data are produced every day, from various sources such as social networks, sensors, scientific simulations, and many more. To efficiently process this data and meet the complex challenges of our time, it is crucial to have powerful computing capabilities.

This is where exascale comes in. Exascale denotes a computing power of one exaflop: one quintillion (10^18) floating point operations per second, or a billion billion calculations per second. This performance is simply astounding and far exceeds that of previous generations of supercomputers.
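To give a feel for that scale, the sketch below compares an exascale machine with a typical laptop on the same workload; the laptop figure (~100 GFLOPS) is an assumed round number for comparison, not a measurement:

```python
# How long would a large workload take at exascale versus on a laptop?
# The 100 GFLOPS laptop figure is an assumed round number, chosen only
# to illustrate the gap between 10^18 and everyday hardware.

EXASCALE_FLOPS = 1e18   # exascale: 10^18 operations per second
LAPTOP_FLOPS = 100e9    # assumed ~100 GFLOPS for a typical laptop

workload = 1e21         # a workload of 10^21 floating point operations

seconds_exascale = workload / EXASCALE_FLOPS            # 1000 s, under 17 minutes
years_laptop = workload / LAPTOP_FLOPS / (86400 * 365)  # roughly 317 years

print(f"Exascale machine: {seconds_exascale:.0f} seconds")
print(f"Laptop:           {years_laptop:.0f} years")
```

A job that an exascale system finishes over a coffee break would, under these assumptions, occupy a single laptop for centuries.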

Discover the exascale: The computing power of the future


The race to exascale

Since the first electronic computers, the computing power of machines has grown exponentially thanks to the advancement of technologies. As computational demands grew more complex, researchers and engineers set themselves the goal of achieving exascale. This has given rise to a veritable race for innovation in the field of supercomputers.

 

Technological challenges

Achieving exascale is not just a matter of increasing processor speed. It requires a multidimensional approach spanning several research areas. One of the main challenges is to design more energy-efficient processors, capable of performing billions of billions of calculations per second while minimizing power consumption.

In addition, the architecture of supercomputers must be redesigned to fully exploit the performance of processors. Parallel and distributed architectures, as well as the use of specialized processors like graphics accelerators (GPUs), play a key role in achieving exascale.

 

Exascale applications

Exascale opens the way to many possibilities across fields. In science and research, it will enable faster and more accurate simulations, driving significant advances in areas such as medical research, meteorology, materials physics, and astrophysics.

Exascale is also essential for the development of artificial intelligence and machine learning. Deep learning models, which require massive amounts of data and computation, can be trained much faster, enabling more rapid advances in these areas.



PEPR NumPEx : High-Performance Computing and European Sovereignty

Article originally published on the CNRS website here

The exploratory Priority Research Program (PEPR) NumPEx, led by CEA, CNRS, and Inria, aims to develop software for future exascale supercomputers, crucial for scientists and industry to leverage these powerful machines.

With a budget of €40.8 million over eight years, PEPR is part of the Exascale France Project and the EuroHPC initiative, which seeks to create a leading supercomputer ecosystem in Europe. In an interview, Michel Daydé, former co-director of the program, explains that PEPR focuses on adapting algorithms and developing new ones to meet the architectural demands of exascale computing, addressing energy consumption challenges, and creating a robust French software stack. This effort will significantly impact fields like climate science, medicine, and industrial applications, bolstering European scientific and industrial competitiveness.