Intel has been working on its “OneAPI” project for some time now. It is a unified programming model that aims to simplify application development across different computing architectures.
However, Nvidia CEO Jen-Hsun Huang doubts the viability of Intel’s unreleased OneAPI model. On Nvidia’s Q2 2019 earnings call, Huang slammed Intel’s ongoing project.
Huang said that he doesn’t understand how one programming approach or API can “make seven different types of weird things work together” because “programming isn’t as simple as a PowerPoint slide.”
The comment came when Huang was asked about competitors’ moves in datacentre software stacks, and he had plenty of opinions on Intel’s unified programming model.
Further discussing the hurdles of a unified programming approach for different computing architectures, Huang said, “going from single CPU to multi-core CPUs was a great challenge.”
He went on to explain how the jump from multi-core CPUs to multi-node, multi-core systems was a huge challenge as well.
“And yet, when we created CUDA, in our GPUs, we went from one processor core to a few, to now in the case of large scale systems, millions of processor cores, and how do you program such a computer across multi-GPU multi-node…it’s a concept that is not easy to crack,” he added.
Huang said he has a hard time wrapping his head around how a single programming approach could fit seven different types of processors. Since nothing like it has been done before, he added, only time will tell whether it is a viable method.
Intel OneAPI and Nvidia’s CUDA
With the increasing usage of complex CPUs, FPGAs, GPUs, and accelerators, there is an ever-growing need for overarching software that can cater to the needs of modern data centers.
Keeping this in mind, Intel unveiled the OneAPI project at its Architecture Day last year. Under the OneAPI initiative, Intel also plans to launch a single, cross-industry programming language (Data Parallel C++) — which rubs Nvidia and its CUDA platform the wrong way.
The purpose behind Nvidia’s proprietary CUDA platform is to help users get the most out of Nvidia GPUs. All sorts of API tools are available to developers to harness maximum performance from the accelerators in GPGPU workloads.
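To illustrate the kind of GPU-specific programming CUDA enables, here is a minimal vector-addition sketch (an illustrative example, not Nvidia sample code — the kernel and variable names are our own):

```cuda
#include <cuda_runtime.h>
#include <cstdio>

// Each GPU thread adds one pair of elements.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);
    float *a, *b, *c;
    // Managed (unified) memory is accessible from both host and device.
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    // Launch enough 256-thread blocks to cover all n elements.
    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(a, b, c, n);
    cudaDeviceSynchronize();

    printf("c[0] = %f\n", c[0]);
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

The same algorithm under OneAPI would instead be written once in Data Parallel C++ (a SYCL-based language) and dispatched to a CPU, GPU, or FPGA — exactly the kind of cross-architecture abstraction Huang is sceptical of.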
With the increasing demand for parallel processing on GPUs in datacentres and in AI workloads, Nvidia’s CUDA API and libraries have become an indispensable tool.
Whether Intel’s OneAPI changes the game remains to be seen. It will be one hell of a victory if the company can actually pull it off. Hopefully, we will hear more soon, as OneAPI enters its beta phase later this year.