When it comes to AI processing, Nvidia has been the 'leader in the clubhouse', as it were, for a few years now. The company had a significant first-mover advantage built in: its years of advanced GPU research, development and manufacturing produced hardware much better suited to AI tasks than traditional CPUs (even though it was originally built to help render video games, not power the AI revolution, but who's counting?). To that end, we write about Nvidia a lot because they really do set the tempo and tenor of the AI arms race, both present and near-future. Their GPU arrays, GAN experiments, and the like have shown many the way when it comes to advancing AI beyond theory into real-world applications. But on the horizon is a new player, Graphcore, which claims that it, and not Nvidia, will conquer AI chip manufacturing now and into the future.
As I'm sure you'll be shocked to read, we've written about Graphcore before, but in a wholly different context: Graphcore has taken the idea of AI-specific hardware a step further by engineering a new chip architecture altogether for AI applications. This 'Intelligence Processing Unit', or IPU as they dub it, is not only great branding from a sales perspective but also a better descriptor of what these companies are really trying to build: next-gen processors designed specifically for AI applications.
But does that mean they’ll be able to actually compete with Nvidia?
A lot of major financial backers certainly seem to think so.
Just this month, Graphcore took on another $200M+ in VC funding from the likes of BMW and Microsoft, after having closed two additional, independent 8-figure private investments during last year alone. According to ZDNet, “Graphcore is now officially a unicorn, with a valuation of $1.7 billion. Graphcore’s partners such as Dell, the world’s largest server producer, Bosch, the world’s largest supplier of electronics for the automotive industry, and Samsung, the world’s largest consumer electronics company, have access to its chips already.”
That certainly sounds like Graphcore is large enough and appropriately poised to take on Nvidia. And Graphcore's CEO makes a pretty compelling case to the casual (or, in our case, obsessed) observer to that very end:
“When Graphcore began there was no TensorFlow or PyTorch, but it was clear that in order to target this emerging world of knowledge models we had to rethink the traditional microprocessor software stack. The world has moved from developers defining everything in terms of vectors and scalars to one of graphs and tensors.
In this new world, traditional tool chains do not have the capabilities required to provide an easy and open platform for developers. The models and applications of Compute 2.0 are massively parallel and rely on millions of identical calculations to be performed at the same time.
These workloads dictate that for maximum efficiency models must stay resident and must allow the data to stream through them. Existing architectures that rely on streaming both application code and data through the processor to implement these models are inefficient for this purpose both in hardware construct and in the methodologies used in the tool chains that support them.”
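To make the quote's idea concrete, here's a toy sketch (ours, not Graphcore's actual software stack or API) of computation expressed as a graph of tensor operations: the graph is built once and stays resident as the "model", batches of data stream through it, and each operation performs many identical elementwise calculations that a parallel processor could run at the same time.

```python
# Toy illustration of the "graphs and tensors" model the quote describes.
# This is our own sketch for exposition; it does not reflect Graphcore's
# IPU tool chain.

class Node:
    """One operation in a resident computation graph."""
    def __init__(self, op, inputs):
        self.op = op          # "input", "add", or "mul"
        self.inputs = inputs  # feed name for inputs, else upstream Nodes

    def eval(self, feeds):
        if self.op == "input":
            return feeds[self.inputs[0]]
        vals = [n.eval(feeds) for n in self.inputs]
        # Elementwise ops: millions of identical calculations in a real
        # workload, each independent and thus massively parallelizable.
        if self.op == "add":
            return [a + b for a, b in zip(vals[0], vals[1])]
        if self.op == "mul":
            return [a * b for a, b in zip(vals[0], vals[1])]
        raise ValueError("unknown op: " + self.op)

# Build the graph once: this is the resident "model" ...
x = Node("input", ["x"])
w = Node("input", ["w"])
z = Node("add", [Node("mul", [x, w]), w])  # z = x * w + w, elementwise

# ... then stream successive batches of data through it.
print(z.eval({"x": [1, 2, 3], "w": [10, 20, 30]}))  # [20, 60, 120]
print(z.eval({"x": [0, 1, 0], "w": [5, 5, 5]}))     # [5, 10, 5]
```

The contrast with the "application code and data both stream through the processor" approach the quote criticizes is that here only the data changes between calls; the graph structure is fixed up front, which is what lets an architecture keep the model on-chip.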
Graphcore and their purpose-built IPUs truly could be the real deal. Wall Street already thinks so, given Graphcore's current $1.7 billion valuation and recent influx of major capital. And with the industry clamoring for access to these chips, it's no wonder Nvidia might be feeling a bit of the heat.
We're certainly not going to make a call on which competitor will eventually emerge victorious, but it appears it may no longer be a one-horse race after all.