In the world of advanced computing and next-generation software development, any move Nvidia makes is noteworthy. Given that much of the newer, more compute-intensive artificial intelligence, neural network, and deep learning work increasingly falls on GPUs in addition to (or in lieu of) CPUs, when Nvidia changes its game, it essentially changes the entire game.
Welp. Nvidia’s newest release has been ten years in the making. So, yeah… I’d say this is a pretty big one.
Just last month, Nvidia released its newest graphics architecture, and it’s a complete departure from what’s come before it from the company.
The concept is simple enough: it’s no longer just a game of packing the most computing power into a processor, or the most processing cores into a machine, to achieve maximal performance. To truly see next-level performance increases (and enable new functionalities), the top companies have to completely rethink how CPUs and GPUs are architected, how they work in concert, how caching can be optimized, and so on. “Nvidia, Intel, AMD, Samsung, Apple, and many others will increasingly need to do more with the existing transistors on chips instead of continuing to shrink their size. Nvidia has clearly realized this inevitability, and it’s time for a change of pace,” Tom Warren writes for The Verge.
So that’s just what Nvidia did.
Nvidia is calling its new GPU architecture ‘Turing’ after Alan Turing, the father of modern computing. The GPU sports both Tensor cores and RT cores: the RT cores are dedicated to ray tracing, while the Tensor cores are focused on AI processing. Turing also comes with major upgrades to the GPU’s caches. “Nvidia has moved to a unified memory structure with larger unified L1 caches and double the amount of L2 cache,” Warren continued. “It’s essentially a rewrite of the memory architecture, and the company claims the result is 50 percent performance improvement per core.”
So why does this all matter beyond some basic performance enhancements?
Because ray tracing is the holy grail of hyperrealistic gameplay, and the RTX 2080 could be a breakthrough in that direction.
Ray tracing, for the uninitiated (which, to be honest, is probably 99+% of people), is a rendering technique used by movie studios to generate realistic light reflections and cinematic effects. Essentially, if there’s an explosion in Avengers: Infinity War, light doesn’t just come directly from the flames. It bounces off the shiny spaceships, reflects off water in the scene, and so on.
For movies, you can do this because the rendering doesn’t have to happen in real time. You can program it into the graphical edit, let your beast of a machine chomp on the render overnight, and then export a beautiful, cinema-quality final cut with ray tracing included.
Doing it in a video game is a completely different animal. The GPU has to track your movement, point of view, gameplay, variable action, and on and on while it attempts to render ray-traced, hyperrealistic imagery… IN REAL TIME. That’s no joke.
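To make the core idea concrete, here’s a toy sketch (not Nvidia’s implementation, just the textbook technique) of what a ray tracer does at its heart: fire one ray per pixel and test it against scene geometry, here a single sphere. A real renderer repeats this for millions of rays per frame and lets rays bounce for reflections, which is exactly why doing it in real time is so demanding.

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Return the distance along the ray to the sphere, or None on a miss."""
    # Solve |origin + t*direction - center|^2 = radius^2 for t (a quadratic in t).
    oc = [o - c for o, c in zip(origin, center)]
    a = sum(d * d for d in direction)
    b = 2.0 * sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return None  # the ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / (2 * a)
    return t if t > 0 else None

def render(width, height):
    """Shoot one ray per pixel at a sphere; '#' marks hits, '.' marks misses."""
    rows = []
    for y in range(height):
        row = ""
        for x in range(width):
            # Map the pixel to a point on an image plane in front of the camera.
            u = (x + 0.5) / width * 2 - 1
            v = (y + 0.5) / height * 2 - 1
            hit = ray_sphere_hit((0, 0, 0), (u, v, -1.0), (0, 0, -3), 1.0)
            row += "#" if hit else "."
        rows.append(row)
    return rows

for line in render(24, 12):
    print(line)  # a small ASCII image with a round blob of '#' in the middle
```

Even this stripped-down version does a quadratic solve per pixel; add shadows, reflections, and millions of pixels at 60+ frames per second, and the appeal of dedicated RT cores becomes obvious.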
But if Nvidia’s RTX 2080 delivers on what the company promises, we might be there.
But that’s not all! Nvidia is also taking its deep reservoir of AI expertise and applying it to the new chip architecture and user experience. From The Verge again:
Nvidia Deep Learning Super-Sampling (DLSS) could be the most important part of the company’s performance improvements. DLSS is a method that uses Nvidia’s supercomputers and a game-scanning neural network to work out the most efficient way to perform AI-powered antialiasing. The supercomputers will work this out using early access copies of the game, with these instructions then used by Nvidia’s GPUs. Think of it like the supercomputer working out the best way to render graphics, then passing that hard-won knowledge onto users’ PCs. It’s a complex process, but the end result should be improved image quality and performance whether you’re playing online or offline.
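For context on what DLSS is approximating, here’s a generic sketch of conventional supersampling antialiasing: take several samples at sub-pixel offsets and average them, so pixels straddling an edge get smooth intermediate values instead of a hard jagged step. This is the brute-force baseline, not Nvidia’s pipeline; DLSS aims to get a similar result while replacing the extra samples with a neural network’s prediction. The `scene` function is a hypothetical stand-in for a renderer.

```python
def scene(x, y):
    """A stand-in 'scene': white (1.0) above the diagonal edge, black (0.0) below."""
    return 1.0 if y > x else 0.0

def render_pixel(px, py, samples_per_axis=1):
    """Average samples_per_axis^2 sub-pixel samples for the pixel at (px, py)."""
    n = samples_per_axis
    total = 0.0
    for sy in range(n):
        for sx in range(n):
            # Offset each sample within the pixel's 1x1 footprint.
            total += scene(px + (sx + 0.5) / n, py + (sy + 0.5) / n)
    return total / (n * n)

# One sample per pixel: a pixel on the edge snaps to pure black or white.
aliased = render_pixel(3, 3, samples_per_axis=1)
# Sixteen samples per pixel: the same pixel lands on an in-between gray.
smooth = render_pixel(3, 3, samples_per_axis=4)
print(aliased, smooth)
```

The catch is cost: 16x the samples means roughly 16x the shading work, which is why offloading that work to a pre-trained network is such an attractive trade.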
What all that amounts to is that Nvidia is making a play to change the way we think about GPUs and gaming, as well as what AI computing could look like well into the future. If the RTX 2080 and its brethren (and the new and improved Turing-architected GPUs to come) can deliver the goods, they very well could change the face of modern computing once more.