Article originally published on the "L'Usine nouvelle" website here

Jules Verne, Frontier, Jupiter… They all have one thing in common: they are supercomputers capable of performing one billion billion floating-point operations per second. They will soon be joined by Dojo, the supercomputer developed by Elon Musk's company Tesla. Dojo is meant to become the world's most powerful supercomputer and will be used to train the artificial intelligence models behind the Autopilot systems in Tesla vehicles. But with commissioning scheduled for late 2024, is the billionaire's ambition achievable?
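For reference, that figure corresponds to the exascale threshold, i.e. one exaflop per second, as the short calculation below makes explicit:

$$
1~\text{EFLOP/s} = 10^{9} \times 10^{9}~\text{FLOP/s} = 10^{18}~\text{FLOP/s}
$$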

In this L’Usine nouvelle article, Inria researcher Jean-Yves Berthou and CEA researcher Jérôme Bobin, both program directors of the NumPEx PEPR, discuss the subject. For the two researchers, the project is not impossible, but it will undoubtedly involve compromises. Will Tesla be able to assemble all the hardware needed to build the supercomputer by October 2024? And what will the machine's real capabilities be?

Photo credit: Imgix / Unsplash

