Last week, we brought you word of IBM’s stunning announcement at CES: the company has built, and will soon offer, the first quantum computer for private, enterprise applications. The IBM Q System One really could be a game changer, not just in artificial intelligence but across many of the outer limits of computational possibility. A computer with that kind of power in a relatively modest form factor could be the first step in an arms race that remakes our digital future yet again. Not content to be outdone, Intel announced its own AI breakthrough at CES, code-named ‘Nervana’.
At CES last week, Intel announced the Nervana Neural Network Processor (NNP-I), an AI chip designed for inference workloads that fits into a GPU-like form factor.
For a great primer on inference, check out this article from Nvidia itself.
Inference comes in when it’s time to transition an AI system out of learning mode and into practical application. With this announcement, Intel is signaling that it wants to be at this particular frontier with hardware in tow (or for sale, in this case, but you get the idea). Much as Intelligence Processing Units (IPUs, for short) could supersede traditional CPU+GPU node architectures by tackling AI problems with chips built precisely for AI purposes, an inference chip like Nervana could do the same for the second stage of a practical AI system’s operation.
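To make the training-versus-inference distinction concrete, here is a minimal sketch in plain Python with NumPy. It has nothing to do with Nervana’s actual hardware or software stack; it simply shows that training iteratively adjusts parameters (compute-heavy, gradient-driven), while inference is a single forward pass with those parameters frozen — the workload chips like the NNP-I are built to accelerate.

```python
import numpy as np

# Toy model: fit a linear map, then run inference with the frozen weights.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w

# --- Training phase: repeatedly compute gradients and update weights ---
w = np.zeros(3)
for _ in range(500):
    grad = X.T @ (X @ w - y) / len(X)  # gradient of mean squared error
    w -= 0.1 * grad                    # gradient descent step

# --- Inference phase: one forward pass, no gradients, no weight updates ---
def infer(x, weights):
    return x @ weights

prediction = infer(np.array([1.0, 1.0, 1.0]), w)
```

The asymmetry is the point: training loops hundreds of times over the data, while deployed systems run `infer` once per request, which is why inference-specific silicon is a market of its own.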
Nervana is optimized for image recognition, Intel said at the press event at CES. Apparently, Nervana’s architecture is pretty distinct from other chips, too: “It lacks a standard cache hierarchy, and on-chip memory is managed by software directly”, VentureBeat reported. “Additionally, because of its high-speed on- and off-chip interconnects, it’s able to distribute neural network parameters across multiple chips, achieving very high parallelism.”
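The parallelism VentureBeat describes — spreading one network’s parameters across multiple chips — is the general idea of model parallelism. A hypothetical sketch, using plain NumPy arrays to stand in for chips (this illustrates the concept only, not Nervana’s architecture or any Intel API): a layer’s weight matrix is partitioned, each “chip” computes its slice of the output, and the slices are stitched back together.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=(4,))      # input activation
W = rng.normal(size=(4, 8))    # full weight matrix of one layer

# Split the layer's output dimension across two simulated chips
W_chip0, W_chip1 = W[:, :4], W[:, 4:]

out0 = x @ W_chip0             # computed on "chip 0"
out1 = x @ W_chip1             # computed on "chip 1"
out_parallel = np.concatenate([out0, out1])

out_reference = x @ W          # single-device result for comparison
```

Both paths yield the same layer output; the win comes from each chip holding only part of the parameters and working in parallel, which is where fast on- and off-chip interconnects matter.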
What does that mean in the real world? Well, according to the same VentureBeat article, “Nervana’s neural network processor — the processor announced today — can reportedly deliver up to 10 times the AI training performance of competing graphics cards.”
Navin Shenoy, Intel’s executive vice president in the Data Center Group, said Nervana will go into production this year. He also said the company expects to have a neural network processor for training, code-named “Spring Crest,” available later this year.
“For Intel, Nervana is yet another step toward its ambitious goal of capturing the $200 billion AI market,” VentureBeat concluded. And Shenoy seems to agree with that assessment:
“After 50 years, this is the biggest opportunity for the company,” Shenoy said at the company’s Data Centric Innovation Summit this year. “We have 20 percent of this market today … Our strategy is to drive a new era of data center technology.”
The AI market at these levels is a huge and growing opportunity that neither hardware nor software companies can ignore; it will, almost assuredly, dictate the digital economy of the near future sooner than most realize. And as such, it definitely comes in handy to have an innovation expert at your side (nudge nudge, wink wink).