Nvidia, which recently announced its plan to acquire Arm, is in the news once again after revealing that it will power the world’s fastest AI supercomputer, Leonardo. The Italy-based supercomputer will be part of an expansive network of similar machines across Europe.
Leonardo is being developed by CINECA, a consortium of Italian universities and research centers. It will be built around nearly 14,000 Nvidia Ampere-architecture GPUs.
Leonardo is expected to deliver up to 10 exaflops of FP16 AI performance and handle complex AI and HPC (High-Performance Computing) workloads. Its purpose is to tackle large-scale problems across multiple disciplines, ranging from high-energy physics to climate change.
CINECA’s Sanzio Bassini says, “The Leonardo supercomputer is the result of our long-term commitment to pushing the boundaries of what a modern exascale supercomputer can be.”
Apart from Nvidia, Atos is also providing integral components for the system. Leonardo will use Atos’s BullSequana XH2000 supercomputer nodes, each equipped with four Nvidia GPUs and an Intel CPU.
Nvidia-powered Leonardo Is Part Of A Bigger Plan
Leonardo is another piece of the puzzle for the European countries working closely to set up a supercomputer network across the continent. The European Union and the participating governments plan to emerge as a global force in exascale supercomputing.
Moreover, under the EuroHPC collaboration, Leonardo’s development is being funded by the European Commission via the Italian Ministry of University and Research.
According to Nvidia’s Marc Hamilton, “The EuroHPC technology roadmap for exascale in Europe is opening doors for rapid growth and innovation in HPC and AI.” The American firm is also providing its Mellanox HDR InfiniBand interconnect, with speeds of up to 200 Gb/s, for the supercomputer network.
Leonardo is one of four new supercomputers in Europe, alongside MeluXina (Luxembourg), Vega (Slovenia), and EURO_IT4I (Czech Republic). In the future, four more systems in Spain, Portugal, Finland, and Bulgaria will join the lineup.