
The world's most powerful supercomputer coming soon? Elon Musk's optimistic bet with Dojo

Article originally published on the "L'Usine nouvelle" website here

Tesla’s “Dojo” supercomputer, construction of which began this summer, is presented as the world’s most powerful future supercomputer… by far. Elon Musk has announced that, with this cutting-edge equipment, he will be able to reach 100 exaflops before the end of 2024, thanks to a billion-dollar investment. An ambitious gamble: today, the most advanced computer is 100 times less powerful.

The American supercomputer Frontier passed the symbolic 1 exaflop mark for the first time in history in June 2022.

Invest twice as much to do a hundred times better? That’s Elon Musk’s challenge with “Dojo”. This supercomputer, whose construction was announced by Tesla on July 19, will benefit from an investment of $1 billion over three years. The ambition: to create the world’s most powerful computer. And by far. With 100 exaflops expected for October 2024, the computer would be 100 times more powerful than the world’s most powerful computer to date. It is destined to drive the artificial intelligence models behind self-driving cars.

The exaflop barrier, surpassed for the first time in 2022 by the American computer Frontier, corresponds to the execution of one billion billion (10^18) floating-point operations per second. Germany and France are set to join the coveted club of exaflop supercomputers, in 2024 and 2025 respectively, with the European supercomputers Jupiter and Jules Verne. Production of Dojo began this summer, with the aim of entering the world’s top 5 at over 0.2 exaflops by January 2024. But the technical reality promises to be more complex.

A tight deadline

“Reaching 100 exaflops by the end of 2024 seems ambitious”, says Jean-Yves Berthou, director of the Inria centre in Saclay, France. It took Frontier almost a year to pass the exaflop mark. “Here, we would go from a situation with no machines to 100 exaflops in one year. It’s not impossible, but it’s very optimistic”, agrees François Bodin, professor at the Université de Rennes. The scientist points, for example, to the uncertainties surrounding the supply of GPU graphics chips, the pantagruelian data storage requirements, and the supply of electricity. Indeed, the average consumption of a supercomputer is around 20 megawatts, according to the CEA.
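The 20-megawatt figure puts the power concern in perspective. As a rough, illustrative calculation (using only the numbers quoted in the article, nothing from Tesla), sustaining exaflop-class performance inside such a power envelope demands extreme energy efficiency:

```python
def required_efficiency(exaflops: float, megawatts: float) -> float:
    """GFlop/s per watt needed to sustain a target rate within a power budget."""
    flops = exaflops * 1e18   # 1 exaflop = 10^18 operations per second
    watts = megawatts * 1e6
    return flops / watts / 1e9

# 1 EFlop/s in a 20 MW envelope requires ~50 GFlop/s per watt;
# 100 EFlop/s in the same envelope would require 100x that.
print(required_efficiency(1, 20))    # 50.0
print(required_efficiency(100, 20))  # 5000.0
```

In other words, hitting 100 exaflops without a hundredfold jump in power draw would require a hundredfold jump in energy efficiency, which is one way to read the experts’ caution.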

Tesla’s announcement also leaves the nature of these 100 exaflops unclear. The company has not specified whether this is a theoretical or actual capacity. The difference can be significant: corrected for factors such as memory latency or inter-node communication, Frontier achieves an actual capacity of 1.1 exaflops for a theoretical peak of 1.7 exaflops. To evaluate computer performance under real-life conditions, supercomputers run “Linpack”, a benchmark that brings together several programs and software libraries. “It’s not the same thing to have a machine producing 100 exaflops by running the Linpack algorithm as to have a machine with a theoretical peak power of 100 exaflops”, sums up François Bodin.
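The gap Bodin describes can be quantified with the Frontier figures quoted above; this short sketch uses only the article’s numbers:

```python
def linpack_efficiency(rmax_exaflops: float, rpeak_exaflops: float) -> float:
    """Fraction of theoretical peak (Rpeak) actually sustained on Linpack (Rmax)."""
    return rmax_exaflops / rpeak_exaflops

# Frontier, from the article: 1.1 EFlop/s measured vs 1.7 EFlop/s theoretical peak.
print(f"{linpack_efficiency(1.1, 1.7):.0%}")  # prints "65%"
```

So even the best machine in the world sustains only about two-thirds of its theoretical peak on the benchmark, which is why “100 exaflops” means very different things depending on how it is measured.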

A supercomputer like no other

Nevertheless, the major strength of Dojo’s computing power lies in its specialization. Frontier, Jupiter or Jules Verne have a wide range of applications – research in the fields of space, climate, pharmaceuticals, energy and many more. Dojo, on the other hand, has just one: training artificial intelligence models for autonomous cars. This type of calculation requires less precision than scientific calculations.

This difference enables Tesla to build a specialized processor architecture, with a lower number of bits (and therefore a lower precision of calculation). “In all likelihood, they will run on 8-bit computation, rather than 64-bit as in classical scientific computing”, explains Jean-Yves Berthou. “They will therefore gain a factor of 8 in power.” In other words, by agreeing to divide its calculation precision by 8, Dojo will be able to carry out 8 times as many instructions. “We shouldn’t think that Europe is lagging behind, because it would invest half as much for a hundred times less performance”, argues the scientist. The exaflop metric is simply not used in the same context here.
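The factor-of-8 claim follows from simple arithmetic: at a fixed datapath and memory width, narrower operands mean proportionally more operations per cycle. This is an idealized sketch of that trade-off, not a model of Dojo’s actual hardware:

```python
def relative_throughput(bits: int, reference_bits: int = 64) -> float:
    """Idealized throughput gain from narrower operands at fixed hardware width."""
    return reference_bits / bits

print(relative_throughput(8))   # 8-bit AI arithmetic: 8x the 64-bit rate
print(relative_throughput(32))  # single precision: 2x
```

This is also why comparing Dojo’s headline exaflops with Frontier’s 64-bit Linpack exaflops is comparing two different currencies.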

Tesla’s gamble has already had a significant impact: the rise in its stock market price. A Morgan Stanley report estimates that Tesla’s market capitalization could rise by almost $600 billion (562 billion euros) thanks to the potential of its supercomputer, which is expected to help create robotaxis. As is often the case, Elon Musk’s announcements win over investors. But beware: the billionaire already said he was “very confident” about the appearance of robotaxis in… 2020. His ambitions for Dojo could be just as over-optimistic.


25th anniversary of the High Performance Computing Center Stuttgart

The return of the supercomputer race between the United States, China, and Europe.

Article originally published on the "Le Monde" website here

In the era of electronic miniaturization, supercomputers are making a comeback, driven by the rise of artificial intelligence. A strategic battle is unfolding among major powers.

They were once thought to be obsolete, but supercomputers are back in action. Today, they benefit from a thriving global market, thanks to the exploitation of the 21st century’s black gold: data for artificial intelligence (AI). They have even become a matter of sovereignty, to the point that the United States, in its trade war with China, goes as far as denying its Asian rival the microprocessors intended for these machines. In Europe, the stakes are plain to see, starting with the French state, which is closely monitoring the negotiations surrounding the dismantling of the Atos Group. Its subsidiary Eviden holds a gem to be cherished, among the few that survive in Europe: supercomputers stemming from the acquisition of Bull in 2013. A new factory is even expected to be built in Angers by 2027.

These behemoths are being called upon to develop medicines and vaccines more rapidly, particularly against Covid-19, refine weather forecasts in the face of climate change, improve the aerodynamics of aircraft and other vehicles to consume less energy, combat increasingly formidable cyberattacks, or simulate a nuclear explosion in the name of deterrence. Machine learning and quantum computing (massive simultaneous calculations at the atomic scale) need them.

Long confined to academic research or industrial and nuclear simulation (defense), the lineage of supercomputers is gaining strength with artificial intelligence. “Large-scale AI models are growing very rapidly, and new buyers are starting to use high-capacity machines, with selling prices ranging from tens of millions to hundreds of millions of dollars each. Fighting against cyberattacks will also require major computing power,” predicts Earl Joseph, CEO of Hyperion Research, an American research company specializing in the global HPC (high-performance computing) market. Because the more powerful the computers, the more expensive they are.


NumPEx BOF@SC23

The International Conference for High Performance Computing, Networking, Storage, and Analysis SuperComputing 2023 will take place in Denver from November 12 to 17, 2023.

During this conference, a Birds of a Feather (BoF) session connected to the NumPEx program is scheduled, allowing conference attendees to openly discuss current topics of interest to the HPC community.

Several (trans-)national initiatives have recognized the crucial importance of co-design between hardware, software and application stakeholders on the path to Exascale and beyond. It is seen as indispensable for the efficient exploitation of Exascale computing resources in the development of large-scale application demonstrators, but also to prepare complex applications to fully exploit the capacity of Exascale and post-Exascale systems. Among these projects, we can cite the French NumPEx project (€41M), but also the EuroHPC program, the ECP project in the US, and the FugakuNEXT project in Japan. However, these efforts are somewhat disconnected, while the community would benefit from sharing feedback, common know-how, and advancing on the new problems arising as exascale machines become more and more available.

Building on earlier efforts of the International Exascale Software Project (IESP), the European EXtreme Data and Computing Initiative (EXDCI), and the BDEC community, we will work on the implementation of an international, shared, high quality computing environment that focuses on co-design principles and practices. US, European and Japanese partners have already met and identified a set of areas for international coordination (among others):

  • Software production and management: packaging, documentation, builds, results, catalogs, continuous integration, containerization, LLVM, parallel tools, etc.
  • Software sustainability
  • Future and disruptive SW&HW technologies and usages (investments and roadmaps)
  • Mapping of missing capabilities (driven by both Apps and SW)
  • Roadmap of near-term HW targets
  • HPC/AI convergence: ML, open models and datasets for AI training
  • FAIR data stewardship
  • Digital Continuum and Data management
  • Benchmarks and evaluation, co-design (HW, SW, applications)
  • Energy and environmental impact and sustainability
  • Collaboration/Partnership factory: establish collaborations at international level
  • Training

The proposed BoF will offer an overview of the different Exascale programs and initiatives from the perspective of co-design (mixing application, software-stack and hardware perspectives). Then, international partners representing major computing centers from Europe, the USA and Japan will present and discuss the common issues and questions.

BoF leaders will seek feedback from attendees on the objectives of co-design and the efficient exploitation and coordination of existing efforts in Europe, the US and Japan, dedicated to exascale. BOF participants will be invited to share their views on the above problems and issues. We will also discuss candidates for collaborative application demonstrators, of which the LHC project of CERN, IPCC climate models, and the SKA project are some representative examples. Finally, we will call for contributions from participants.

At the BoF, a panel of experts from NumPEx, ECP, Riken-CC, BSC, JSC and from selected application communities, will raise issues and solicit input in each of their respective areas. Representatives of all the involved funding agencies will be invited to participate and contribute.

The ultimate goal of this BOF is to launch a new series of workshops dedicated to international collaborations between Europe, USA and Japan on Exascale and post-Exascale computing.


GENCI

Unleashing the Power of Exascale Computing: Genci's 2022 Activity Report

We are thrilled to present the highly anticipated activity report of Genci for the year 2022.

As a leading organization responsible for providing powerful computational and data processing resources, Genci has been instrumental in driving scientific research and innovation at both the national and European levels.

With a mission to promote the use of supercomputing coupled with artificial intelligence, Genci has made significant strides in benefitting scientific research communities, academia, and industrial sectors. Join us as we explore the remarkable achievements showcased in this 68-page report.

Launching Innovative Programs and Initiatives:

Genci’s commitment to pushing the boundaries of computational capabilities is evident through the launch of several groundbreaking programs and initiatives. The report highlights key projects, such as:

  1. NumPEx: The NumPEx initiative aims to harness the power of supercomputing and AI to drive scientific progress. By providing researchers with cutting-edge computational resources, Genci enables them to tackle complex challenges across various scientific domains.
  2. Jules Verne Consortium for Exascale: Genci’s partnership in the Jules Verne Consortium demonstrates their dedication to advancing exascale computing. This collaboration fosters innovation and propels research in areas that were once unimaginable.
  3. CLUSSTER Project: The CLUSSTER project focuses on integrating cloud computing solutions into Genci’s infrastructure. By embracing the cloud, Genci enhances flexibility and scalability, enabling researchers to tackle data-intensive workloads with ease.
  4. New Supercomputer “Adastra”: Genci’s introduction of the state-of-the-art supercomputer “Adastra” marks a significant milestone. With its remarkable computational power, Adastra empowers researchers to tackle complex simulations, accelerate data analysis, and drive scientific breakthroughs.

Driving Quantum Computing Advancements:

Genci recognizes the immense potential of quantum computing and has made significant progress in this field. The report highlights notable achievements, including:

  1. National Hybrid Quantum Computing Platform: Genci has played a pivotal role in the launch of this platform. This initiative fosters collaboration and enables researchers to explore the capabilities of quantum computing in solving real-world problems.
  2. Integration of Quantum Systems: Genci has acquired its first quantum systems, marking a significant step towards enabling researchers to harness the power of quantum computing. These systems pave the way for groundbreaking research and innovation in quantum-enabled applications.
  3. The Quantum Package: Genci’s Quantum Package (PAck Quantique) provides researchers with the necessary tools and resources to explore hybrid quantum computing systems. This initiative promotes the development of novel algorithms and applications that bridge classical and quantum computing.

Advancements in Artificial Intelligence:

Genci has embraced the transformative potential of artificial intelligence, as highlighted in the report:

Bloom Model: Genci’s Bloom Model showcases their efforts to develop cutting-edge AI algorithms and frameworks. By combining supercomputing with AI, Genci facilitates breakthrough research in machine learning, deep learning, and data analytics.

Contributing to Scientific Research and Industry:

Genci is dedicated to supporting scientific research communities, academia, and industrial sectors through different initiatives, as exemplified by their efforts in:

  1. Reusing Waste Heat: Genci’s innovative approach includes the valorization of waste heat generated by the Jean Zay supercomputer. This environmentally friendly initiative showcases Genci’s dedication to sustainability and efficient resource utilization.
  2. Grands Challenges: Genci actively supports researchers in tackling grand challenges, providing them with the computational resources needed to address complex problems across diverse scientific disciplines.
  3. Exemplary Simulations: The report presents compelling examples of simulations conducted with Genci’s resources, showcasing the impactful discoveries and advancements made possible through their support.
  4. Community of Large Industrial Groups: Genci’s collaboration with large industrial groups highlights their commitment to bridging the gap between academia and industry. By fostering partnerships, Genci facilitates the transfer of cutting-edge research and technological advancements into real-world applications.

Genci’s Regional and European Ecosystem:

The report highlights Genci’s active involvement in regional and European initiatives:

  1. Regional Initiatives: Genci actively contributes to regional development through initiatives like SiMSEO, Competence Center, and MesoNet. These programs encourage cooperation among research institutions and industries, which promotes innovation and contributes to economic growth.
  2. European Collaborations: Genci’s participation in European collaborations, such as PRACE, EuroHPC, EUPEX, and EPI SGA, underscores their commitment to establishing a strong European ecosystem for high-performance computing. These collaborations facilitate knowledge exchange, resource sharing, and foster a vibrant European research community.

The 2022 Activity Report by Genci demonstrates their commitment to empowering scientific research and driving innovation by integrating exascale computing, artificial intelligence, and quantum computing.

Through the launch of groundbreaking programs, the introduction of cutting-edge technologies, and collaborations with research communities and industry, Genci has made significant contributions to advancing scientific frontiers.

Their commitment to sustainable practices and regional and European partnerships further solidifies their position as a leading provider of computational resources.

As we look to the future, Genci continues to pave the way for transformative discoveries and breakthroughs in scientific research and technological innovation.



NumPEx Launches into Action with an Ambitious Kick-Off Agenda in Perros-Guirec

In a series of dynamic sessions hosted from June 26th to 28th in the charming town of Perros-Guirec, NumPEx embarked on an intensive kick-off event, setting the stage for a transformative journey in Exascale computing. Leaders, experts, and collaborators convened to delve into an agenda rich with insights, workshops, and collaborative initiatives.

All the NumPEx Kick-Off participants

The kick-off began with a comprehensive introduction outlining the objectives and significance of the NumPEx program, which aims to establish a common vision and foster collaboration to implement a coherent software stack and related processes by 2025, benefiting not only France but also Europe, in preparation for the Exascale machine. Key figures such as Jerome Bobin, Michel Dayde, and Jean-Yves Berthou elaborated on the program's goals and organizational structure. Board members then shared their perspectives on the Exascale vision and roadmaps:

GENCI's Exascale Vision and Roadmap:

  • Presentation of GENCI's role and missions, including hosting the Exascale project for EuroHPC.
  • European HPC initiative partnership with EuroHPC and others, leveraging PRACE and GEANT.
  • Introduction of the Jules Verne consortium, highlighting international and industrial partnerships.
  • Vision of the European Exascale machine: addressing societal challenges, fostering innovation, and emphasizing HPC/AI data-centric convergence.
  • Collaboration plans with NumPEx, including building a functional program, benchmark development, and product promotion.

Eviden Exascale Vision and Roadmap:

  • Eviden's comprehensive approach involving HPC, HPDA, AI, and quantum technologies with a focus on sovereign and European components.
  • Involvement in the European integrated processor for Exascale machines (SiPearl) and collaborations with various technology projects.
  • Collaboration with CEPP for application support and participation in technology projects related to Exascale, quantum, cloud, and more.

National and European Ecosystem:

  • Introduction of EUPEX, a 4-year project with a budget similar to NumPEx, aiming to deploy a modular Exascale system using the OpenSequana architecture.
  • Collaboration with NumPEx, potential for shared experiments and results, and exploration of common dissemination.
  • Presentation of Data Direct Network (DDN) with a focus on AI and Lustre parallel file system, highlighting challenges and the importance of understanding NumPEx applications.

The afternoon continued with a tour of the five projects (PCs) within the NumPEx program:

  • Exa-MA, which aims to design scalable algorithms and numerical methods for forthcoming exascale machines. Led by Christophe Prudhomme (Université de Strasbourg) and Helene Barucq (Inria).
  • Exa-Soft, to develop a coherent, portable, efficient, and resilient software stack for exascale. Led by Raymond Namyst (Inria) and Alfredo Buttari (CNRS - Centre national de la recherche scientifique).
  • Exa-DoST, to overcome challenges relating to data, notably storage, I/O, in situ processing, and smart analytics, in exascale supercomputers. Led by Gabriel Antoniu (Inria) and Julien Bigot (CEA).
  • Exa-ATOW, to deal with large-scale workflows involving exascale machines. Led by François Bodin (Université de Rennes), Mark Asch (Université de Picardie Jules Verne (UPJV)), and Thierry Deutsch (CEA).
  • Exa-DI, to ensure transverse co-design and software productivity for exascale supercomputers. Led by Jean-Pierre Vilotte (CNRS) and Valérie Brenner (CEA).

The day concluded with an emphasis on the collaborative efforts between NumPEx and other initiatives, with a focus on benchmark development, software-hardware links, and the overall goal of preparing for the challenges of the Exascale era.


The second day kicked off with an invigorating early morning jog along the seashore, setting a vibrant tone for a day filled with thematic workshops. Participants engaged in focused discussions on energy synergies, GPU integration, applications, co-design, gender/diversity/equity, software production integration, training, resilience, international collaborations, and artificial intelligence. Thematic workshops, led by domain experts, fostered collaboration within smaller groups, emphasizing the program's commitment to a transverse approach to Exascale challenges.

 


The final day commenced with a synthesis of workshop outcomes, highlighting the depth of discussions within each thematic area. Workshop leaders consolidated insights, offering a panoramic view of challenges and opportunities. Here is an overview of the key insights and strategic actions discussed during these workshops:

GPU Accelerators Workshop

In a dedicated workshop on GPU Accelerators, experts emphasized the pivotal role of Graphics Processing Units (GPUs) in achieving exascale computing. With 90-99% of large machine performance attributed to GPU acceleration, the workshop highlighted the need for applications to explore the potential of these powerful processors. Challenges discussed included new programming paradigms, code portability, data management, and the hardware landscape driven by gaming and artificial intelligence. The workshop outlined a comprehensive plan, including future workshops, analysis papers, tutorials, hackathons, and examples of successfully ported mini-apps.

Energy Workshop

The Energy Workshop focused on achieving Exascale computing within a power consumption limit of 20MW. Experts delved into environmental, scientific, technical, and societal dimensions, providing a roadmap for the HPC community. Key challenges identified included modeling system consumption, real-time measurement tools, resource prioritization based on societal impact, and the broader environmental impact of research activities. The action plan involves developing a performance and consumption model, optimization strategies, tools for users, and fostering links with external entities to incorporate energy considerations.

Gender Equity and Diversity Seminar

The action plan includes the establishment of a Code of Conduct, assessment of gender distribution, creation of a web platform for resources, education and training initiatives, awareness and outreach programs, and dedication to accessibility and recognition. NumPEx aims to create an inclusive and collaborative future, inviting all stakeholders to contribute to the initiatives.

AI Workshop

The AI Workshop explored the critical intersection of HPC and AI, addressing challenges and outlining a strategic plan for collaborative exploration. Key discussions included decision support tools for AI applications in HPC, optimizing runtimes for AI models, and converging HPC and AI usages. The action plan involves establishing an AI Working Group, conducting transversal workshops, and developing fundamental building blocks for a convergent future.

Training Strategies Workshop

The Training Strategies Workshop addressed the complexities of training in the context of the impending exascale era. Discussions included the scope and subjects of training programs, the creation of sustainable training models, and economic considerations in training initiatives. The workshop emphasized collaborative and inclusive training initiatives to prepare the scientific community for the challenges and opportunities of exascale computing.

International Collaborations Workshop

The International Collaborations Workshop focused on identifying challenges and setting objectives for enhanced collaborative frameworks on a European and global scale. Discussions covered scientific and technological challenges, the design and development of the exascale software stack, and strategic action plans. The outlined roadmap includes hosting workshops, exchanging insights and experiences, and strengthening collaborations with international entities.

National Centers Integration Workshop

The National Centers Integration Workshop aimed to align NumPEx with HPC infrastructures, emphasizing operational elements between computing centers and NumPEx's targeted projects. Discussions covered operational assessment, cybersecurity, job profiling, and traceability. The workshop set a plan for regular video conferences, ensuring ongoing communication and collaboration.

Software Production Workshop

The Software Production Workshop focused on streamlining software development practices in the HPC domain. Challenges discussed included bridging silos, enforcing good practices, and amplifying impact. Insights and conclusions highlighted diverse development practices, sustainability models, and the deployment of continuous integration and certification. NumPEx's commitment to advancing software production practices aims to foster innovation, collaboration, and sustainable development in HPC.

Exascale Resilience Workshop

The Exascale Resilience Workshop navigated complexities associated with exascale application deployment. Discussions covered diverse approaches across NumPEx PCs, key challenges, and strategic choices. The action plan involves listing and analyzing application needs, analyzing barriers to library adoption, and scrutinizing international solutions. NumPEx aims to foster collaborative solutions for enhanced application resilience at a global scale.

Applications and Co-Design Workshop

The Applications and Co-Design Workshop promoted co-development strategies for advanced application development. Discussions included challenges in co-design, key questions for collective exploration, building connections, and initiatives for sustainability. The workshop set the stage for upcoming co-development project workshops, emphasizing collaboration and innovation.

As the leaders bid farewell to Perros-Guirec, NumPEx looks ahead to transforming shared visions and insights into tangible actions in the realm of Exascale computing. The kick-off marked the initiation of a collaborative journey, and NumPEx is poised to lead the charge in scientific innovation.

For the latest updates and progress on the NumPEx program, stay tuned to our news section. The journey to Exascale has begun, and NumPEx is at the forefront of this pioneering expedition.




The French and Dutch governments welcome the decision of the EuroHPC joint venture to host and operate a new European Exascale supercomputer in France

Article originally published on the enseignementsup website here

The European joint venture EuroHPC announces today that it has selected, for the future European Exascale supercomputer, the project carried out in France by the Jules Verne consortium, which brings together France, represented by GENCI (Grand équipement national de calcul intensif) as hosting entity, in collaboration with the French Alternative Energies and Atomic Energy Commission (CEA) as the hosting site, and the Netherlands, represented by SURF, the Dutch National Supercomputing Center.


After being acquired by the EuroHPC joint venture, this supercomputer will be hosted from the end of 2025 at the CEA’s Very Large Computing Center (TGCC). It will benefit from the expertise of the latter’s High Performance Computing (HPC) division in the operation of large-scale systems such as Joliot-Curie (GENCI, for open research) and Topaze (CCRT, Center for Computing Research and Technology, for industrial research).

The main objective of the Jules Verne consortium is to deploy a world-class Exascale supercomputer, based on European hardware and software technologies. It will make it possible to respond to major societal and scientific challenges via the convergence, at Exascale, of digital simulation, massive data analysis and artificial intelligence.

Indeed, this project responds to major societal and global challenges corresponding to the national strategies of the Netherlands and France, in particular within the framework of France 2030 for the latter. The supercomputer will act as a sovereign accelerator in the finer modeling of the effects of climate change, in the development of new materials, energies and low-carbon mobility solutions, in the creation of digital twins of the human body allowing personalized medicine, or in training the next generation of generative AI and multimodal models. It will also address the challenges related to the explosion of data generated by scientific instruments (telescopes, satellites, sequencers, microscopes, sensor networks, etc.), by IoT/Internet devices, or by large simulations. This avalanche of data makes the use of these supercomputers crucial for science, industry and decision-makers, in order to process this data in competitive timeframes and in the most energy-efficient way possible.

After the deployment of EuroHPC systems such as JUPITER (in Germany), the first Exascale system in Europe in 2024, Jules Verne will provide European, French and Dutch researchers with an unprecedented computing capacity of more than 1 Exaflop/s, i.e. one billion billion (“1” followed by 18 zeros) operations per second, equivalent to over 5 million modern laptops, and over 300 PB of storage.
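The laptop comparison above can be sanity-checked with back-of-the-envelope arithmetic, using only the two figures quoted in the press release:

```python
EXAFLOP = 1e18          # 1 Exaflop/s: one billion billion operations per second
LAPTOPS = 5_000_000     # the press release's point of comparison

# How fast would each laptop need to be for 5 million of them to match 1 EFlop/s?
per_laptop = EXAFLOP / LAPTOPS
print(f"{per_laptop:.0e} op/s per laptop")  # prints "2e+11 op/s per laptop"
```

That works out to about 200 GFlop/s per laptop, a plausible order of magnitude for a modern multi-core machine, so the comparison holds up.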

Beyond the machine itself, the Jules Verne consortium, in conjunction with other EuroHPC consortia, will provide support to European researchers for the porting and optimization of their applications on the supercomputer, as well as for training. In this perspective, the Jules Verne consortium will collaborate with all European Centers of Excellence (CoE) and end users for the implementation of the system. It has already established relationships with national R&D Exascale projects (such as the France 2030 NumPEx research program). As a reminder, the NumPEx program aims to design and develop software components that will equip future Exascale machines and prepare the major fields of scientific and industrial applications to fully exploit the capabilities of these machines. The NumPEx program has a budget of 40.8 million euros over 5 years.

The total cost of acquiring and operating the supercomputer for 5 years amounts to 542 million euros. Of this total, 271 million euros are provided by EuroHPC JU, 8 million euros by the Dutch Ministry of Culture, Education and Science and 263 million euros provided by the French Government. ONERA and IFPEN have expressed their intention to join the French part of the consortium, paving the way for other research institutes and French industrialists.

Beyond France and the Netherlands, the Jules Verne consortium is ready to welcome other countries, as partners sharing the same vision in the service of science, innovation and sovereign technologies.

Quotes

EuroHPC’s approval of the Jules Verne consortium’s application is excellent news for French and European research. This is another important step in securing financing for an Exascale-class supercomputer, worth a total of €542 million.

These means of calculation will be necessary to meet the scientific and technological challenges that await us, such as climate change, energy transition or health. The supercomputer will therefore play a key role in guaranteeing our technological sovereignty and our industrial competitiveness, and I hope that new public and private partners will join the consortium in the coming weeks.

Sylvie Retailleau, French Minister for Higher Education and Research

It is excellent news that the European scientific community, with France and the Netherlands in the lead, is joining forces to produce the supercomputer proposed by the Jules Verne consortium. Europe is thus reaffirming its position on the global research scene. Author Jules Verne has piqued our curiosity with stories about a technological future where one can travel to the Moon or the deep sea. Thanks to this supercomputer, we are doing it again. With this immense computing power, scientists have a glimpse of the future, allowing them to help solve fundamental societal problems in areas such as health or the fight against climate change.

Robbert Dijkgraaf, Dutch Minister of Education, Culture and Science

A billion billion operations per second to accelerate the advent of the future. GENCI is delighted with EuroHPC's announcement of the selection of the Franco-Dutch Jules Verne consortium to host and operate an exascale-class supercomputer. It is an international recognition of French scientific and technical expertise in combining digital simulation, massive data analysis, artificial intelligence and, soon, hybrid quantum computing, implemented with European hardware and software.

Above all, these are the first steps into the exascale era, which will allow our national research communities to realize the dream of simulating complex phenomena to solve long-standing scientific puzzles, and to be creative in designing solutions to the industrial and societal challenges of energy, innovative materials and health, such as the treatment of neurodegenerative diseases.

Philippe Lavocat, CEO of GENCI

This supercomputer will be an exceptional instrument for European research at the service of European society and sovereignty. It will enable major advances in many fields that are at the heart of the CEA's research activities, such as high-resolution climate modelling, fusion for energy, innovative materials, human digital twins and personalized medicine. It will provide our researchers and industrialists with world-class computing resources to exploit, autonomously, the deluges of data linked to the deployment of new digital systems, and thus remain in the global race. The CEA has long experience in designing and implementing pre-exascale supercomputers in state-of-the-art computing centers.

We will put all our expertise in the design and operation of computing centers at the service of this project, with performance and control of energy consumption as our objectives.

François Jacq, General Administrator of the CEA

We are proud to work together in the Jules Verne consortium to significantly advance research on societal challenges. This supercomputer will help Dutch researchers perform complex simulations in areas ranging from climate science to medicine and astronomy. We are very proud that our large-scale computing experts at SURF can contribute to this and thus help researchers in their work.

Jet de Ranitz, CEO of SURF



What is exascale?

In today’s world, information has become an essential resource. Massive amounts of data are produced every day, from various sources such as social networks, sensors, scientific simulations, and many more. To efficiently process this data and meet the complex challenges of our time, it is crucial to have powerful computing capabilities.

This is where exascale comes in. Exascale is a measure of computing power representing one billion billion (10^18) floating-point operations per second. This astounding level of performance was first reached by the American supercomputer Frontier in 2022 and remains beyond all but the very latest machines.

Discover the exascale: The computing power of the future


The race to exascale:

Since the first electronic computers, the computing power of machines has grown exponentially thanks to the advancement of technologies. As computational demands grew more complex, researchers and engineers set themselves the goal of achieving exascale. This has given rise to a veritable race for innovation in the field of supercomputers.

 

Technological challenges:

Achieving exascale is not just a matter of increasing processor speed. It requires a multidimensional approach spanning several research areas. One of the main challenges is to design more energy-efficient processors, capable of carrying out these enormous volumes of calculation while minimizing power consumption.

In addition, the architecture of supercomputers must be redesigned to fully exploit processor performance. Parallel and distributed architectures, as well as specialized processors such as graphics accelerators (GPUs), play a key role in achieving exascale.
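The scale of the energy challenge can be made concrete with simple arithmetic. EuroHPC's cited power envelope for an exascale system is around 20 megawatts; dividing the compute target by the power budget gives the hardware efficiency such a design must reach:

```python
# Efficiency implied by delivering 1 EFLOP/s within a 20 MW power envelope.
EXAFLOP = 1e18        # floating-point operations per second
POWER_BUDGET = 20e6   # watts (20 megawatts)

efficiency = EXAFLOP / POWER_BUDGET        # FLOP/s per watt
print(f"Required efficiency: {efficiency / 1e9:.0f} GFLOP/s per watt")
```

That works out to roughly 50 GFLOP/s per watt across the whole machine, orders of magnitude beyond what general-purpose processors alone deliver, which is one reason specialized accelerators are central to exascale designs.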

 

Exascale applications:

Exascale opens the way to many possibilities in various fields. In science and research, it will enable more accurate and faster simulations, bringing significant advances in fields such as medical research, meteorology, materials physics and astrophysics.

Exascale is also essential for the development of artificial intelligence and machine learning. Deep learning models, which require massive amounts of data and computation, can be trained much faster, enabling more rapid progress in these areas.



PEPR NumPEx: High-Performance Computing and European Sovereignty

Article originally published on the CNRS website here

The exploratory Priority Research Program and Equipment (PEPR) "Digital for Exascale" – piloted by the CNRS, the CEA and Inria – aims to design and develop the software building blocks that will equip future exascale machines. It thus helps prepare future users, scientists and industrialists alike, to exploit the capabilities of these machines. The program has a budget of 40.8 million euros over 8 years. Michel Daydé, co-director of the program for the CNRS, explains.

The PEPR NumPEx – which you coordinate with Jean-Yves Berthou (for Inria) and Jérôme Bobin (for the CEA) – is an integral part of the Exascale France Project, itself coordinated with the European EuroHPC initiative. Can you tell us about the challenges of these projects and the particular role the PEPR will play in this context?

Michel Daydé:

The EuroHPC program is a joint initiative between the European Union, European countries and private partners to develop a world-class supercomputer ecosystem in Europe by 2025. It will rely on two exascale computers, that is, machines capable of performing 1 billion billion operations per second, under the energy constraint of not exceeding a consumption of 20 megawatts. The first machine will be in Germany; France responded to the second call for projects to host the second. To this end, the French community has structured itself around a national exascale project led by GENCI.

However, the impact of these supercomputers will depend entirely on the applications that run on them. There is therefore a strong need to build an ecosystem of applications, and of people who will adapt those applications to the new machines. Indeed, exascale computers represent a major architectural evolution, with tens of thousands of graphics processing units (GPUs) massively accelerating calculations. This requires modifying existing algorithms or even developing new ones. The PEPR NumPEx responds to this specific need through interdisciplinary research bringing together mathematicians, computer scientists and researchers from different fields of application.
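As a toy illustration (not NumPEx code) of the algorithmic rewriting described here, the same averaging stencil can be expressed element by element or as a single data-parallel array operation, the pattern that maps naturally onto thousands of GPU cores. NumPy stands in for a GPU programming model in this sketch:

```python
import numpy as np

x = np.linspace(0.0, 1.0, 100_000)

# Serial formulation: one scalar update at a time, the style
# many legacy codes use.
y_serial = np.empty_like(x)
for i in range(1, len(x) - 1):
    y_serial[i] = 0.5 * (x[i - 1] + x[i + 1])

# Data-parallel formulation: one whole-array operation with no
# explicit loop, the kind of kernel that accelerators execute
# across many cores at once.
y_parallel = np.empty_like(x)
y_parallel[1:-1] = 0.5 * (x[:-2] + x[2:])

assert np.allclose(y_serial[1:-1], y_parallel[1:-1])
```

The two formulations compute the same result; the point is that exposing the whole computation as bulk array operations, rather than a sequence of scalar steps, is what lets GPU-class hardware accelerate it.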

How does PEPR plan to address these challenges?

M.D.:

The change in architecture of these ultra-powerful computers means that the entire software stack [1] has to be adapted or created: a paradigm shift is taking place. The backbone of the PEPR therefore rests on fundamental projects around the development of methods, algorithms, software and data-processing tools adapted to exascale. There are also issues specific to the energy consumption of exascale, which are all the more significant today: applications must consume as little energy as possible to arrive at the solution of a given problem. All of this research will feed demonstrators covering a representative number of major fields of application.

Through this PEPR, we therefore aim to develop a French software stack with new resolution methods and new tools covering computation, data processing, artificial intelligence, runtime support and monitoring, which could be adopted, as far as possible, at the French and European levels. Our ambition is to set up a coherent and efficient software environment ranging from runtime support to applications.

How will the results be transferred and to which application areas?

M.D.: During this work, we will identify concerns or needs common to various applications. This may concern, for example, algorithmic approaches shared by several applications, or storage devices with particular protocols. The idea is to implement cross-cutting solutions across several areas. A PEPR unit will be responsible for helping the application teams integrate the innovations of the targeted projects into demonstrators and, more broadly, for training users in their use.

Manufacturers have already expressed their interest, notably Atos Bull and SiPearl, which are heavily involved in the EuroHPC program on the design and manufacture of future European processors and machines. Several centers of excellence are also participating in this PEPR with their applications. We therefore have a good guarantee of transfer to the scientific and industrial communities. We also rely on the areas identified in the Exascale France Project report: approximately 80 applications related to the sciences of the universe, high-energy and particle physics, life sciences, energy, the industry of the future and fundamental research.

What will this race for power bring to our society?

M.D.: High-performance computing is a discovery engine for research. It makes it possible to get closer to complex physical phenomena, and it is useful for advancing knowledge on large-scale challenges: climate change, prediction of natural disasters and energy saving, but also issues of societal resilience and industrial competitiveness.

The development of new materials, personalized medicine, drug design, etc., are all applications in which supercomputers will play a major role. By accelerating the calculations associated with critical areas and issues, we support business competitiveness and the sovereignty of our society.

Especially since high performance computing is currently at the heart of important geopolitical issues.

M.D.: Indeed, in the HPC sector, exascale is the next step to take. It is the subject of major competition between the United States, Japan, China and Europe: a real strategic competition, linked both to the societal challenges mentioned above and to exascale's potential for sensitive applications such as defense.
Furthermore, the Chips Act, which aims to rebuild a semiconductor industry in Europe, and the recent Covid-19 and energy crises have highlighted Europe's dependencies. These findings all reinforce the importance of coordination between the EuroHPC project and its national counterparts, via the exascale plan and this PEPR.


Notes

  1. A group of programs that work together to produce a result or achieve a common goal.



49th ORAP Forum: The PEPR NumPEx "Digital for Exascale"

29 November 2022, Maison de la simulation

The Digital PEPR for Exascale (NumPEx) aims to design and develop the software components that will equip future exascale machines and to prepare the major application areas to fully exploit the capabilities of these machines.

Major fields of application that relate to both scientific research and the industrial sector.

Discover the program of the 49th ORAP forum on NumPEx