8 things I learned about NVIDIA's new tech at CES 2025
DLSS 4, Blackwell, neural rendering, digital humans - what does it all mean?
I was one of a couple of hundred media attendees at NVIDIA's Editor's Day at CES 2025: a full day of showcases, speeches and demonstrations covering every aspect of the new suite of graphics cards coming from NVIDIA in 2025, the RTX 50 series.
This includes the GeForce RTX 5070, 5070 Ti, 5080 and the new flagship model, the 5090, all of which are likely to bother our list of the best graphics cards over the coming year.
Now, as with seemingly every single exhibitor at CES, NVIDIA has focused a lot on AI capabilities in its launch announcements, and after spending a day listening to, looking at and even getting to test some of that hardware and software myself, I came away a lot more informed, in many ways impressed and in some ways a little wary...
These are the 8 most important things I learned about the new GeForce RTX hardware and software at CES 2025 and NVIDIA's Editor's Day.
Neural rendering looks real impressive
The day started with an overview of NVIDIA's current developments in graphics rendering, and how the company has arrived at this point.
Exploring the advancements in graphics rendering technologies, the first part of the day focused on programmable shaders, neural shading and RTX innovations. I learned a lot about the evolution of shaders, the introduction of neural shading with the new Blackwell architecture used in the latest generation of GeForce RTX cards, and the impact of the Cooperative Vectors API on accessing Tensor Cores (the central 'AI' element of NVIDIA's hardware). The session also covered Neural Radiance Cache, RTX Skin for real-time subsurface scattering (more realistic skin effects) and RTX Mega Geometry for handling complex scenes, and touched on RTX Remix's influence on the modding community, highlighting its integration with industry-standard tools.
All of these elements, and more, make up NVIDIA's neural rendering approach with the new Blackwell architecture.
Programmable shaders allow developers to customise the appearance of pixels on the screen, moving beyond fixed-function shaders. They've come a long way since they first appeared with the GeForce 3 a long, long time ago; today we're seeing neural shading, which uses neural networks to enhance graphics rendering, allowing for more realistic textures and materials.
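If you want a rough feel for what 'neural shading' actually means, here's a toy sketch I put together (my own Python, nothing to do with NVIDIA's actual implementation, and the network weights are just random placeholders where the real thing would use trained ones): instead of a hand-written lighting formula, a tiny neural network gets evaluated for each pixel.

```python
# Toy illustration only (not NVIDIA's code): a 'neural shader' is, at heart,
# a tiny neural network evaluated per pixel instead of a hand-written
# lighting formula. The weights below are random placeholders; in practice
# they'd be trained to approximate a complex material.
import numpy as np

rng = np.random.default_rng(0)

# A tiny 2-layer MLP: inputs are per-pixel features (e.g. UV coordinates,
# view/light angles), output is an RGB colour.
W1, b1 = rng.normal(size=(4, 16)), np.zeros(16)
W2, b2 = rng.normal(size=(16, 3)), np.zeros(3)

def neural_shade(features):
    """Evaluate the tiny network for one pixel's feature vector."""
    hidden = np.maximum(features @ W1 + b1, 0.0)        # ReLU layer
    rgb = 1.0 / (1.0 + np.exp(-(hidden @ W2 + b2)))     # squash to 0..1
    return rgb

# 'Render' an 8x8 patch: one network evaluation per pixel.
image = np.zeros((8, 8, 3))
for y in range(8):
    for x in range(8):
        features = np.array([x / 8, y / 8, 0.5, 0.5])   # uv plus dummy angles
        image[y, x] = neural_shade(features)

print(image.shape, image.min(), image.max())
```

The point is that the per-pixel 'shader' becomes a set of learned weights plus a few small matrix multiplies, which is exactly the kind of work Tensor Cores are built to chew through.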
This is obviously most pertinent to game developers, for the purposes of creating more realistic, more immersive, more convincing game worlds, but the demos we saw gave us a glimpse that we could be on the verge of a big inflexion point not just for gamers, but for most 3D modelling. Let's look at the main elements...
Neural Radiance Cache makes lighting more realistic
The 50 series of GeForce RTX cards will support Neural Radiance Cache, a technology that trains in real time on the gamer's GPU to build a model that caches light transport throughout a scene. In English, as I understand it at least, that means the card can learn how ray-tracing and path-tracing behave in the 3D world you're inhabiting, and refine those light paths to the point where you can have effectively infinite bounces of light, making every piece of lighting more realistic. The demos we saw showed eye-catching improvements in shading, texture lighting and scene ambience.
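To make the caching idea a little more concrete, here's a deliberately crude sketch (my own simplification in Python, a plain lookup table rather than anything like NVIDIA's actual neural cache): expensive traced-light samples get averaged into coarse cells of the scene, and later rays can read the cache instead of tracing yet more bounces.

```python
# Greatly simplified stand-in for a radiance cache (not NVIDIA's NRC):
# instead of tracing every light bounce every frame, store a running
# average of the light arriving at coarse grid cells, and reuse it.
from collections import defaultdict

cache = defaultdict(lambda: [0.0, 0])   # cell -> [accumulated radiance, samples]

def cell(position, size=1.0):
    """Quantise a 3D position into a coarse grid-cell key."""
    return tuple(int(c // size) for c in position)

def record_sample(position, radiance):
    """Feed one traced-light sample into the cache ('training' it over time)."""
    entry = cache[cell(position)]
    entry[0] += radiance
    entry[1] += 1

def lookup(position):
    """Return the cached average radiance, or None if the cell is unseen."""
    entry = cache.get(cell(position))
    if not entry or entry[1] == 0:
        return None
    return entry[0] / entry[1]

# A few expensive path-traced samples land in the cache...
record_sample((0.2, 1.0, 3.7), radiance=0.8)
record_sample((0.4, 1.1, 3.9), radiance=0.6)

# ...and later bounces can terminate early by reading the cache instead.
print(lookup((0.3, 1.05, 3.8)))   # 0.7
```

The real version replaces the lookup table with a small neural network that keeps learning while you play, but the pay-off is the same: fewer bounces traced from scratch, more realistic light for the same budget.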
RTX Neural Materials promise film-quality shading
RTX Neural Materials uses NVIDIA's AI cores to compress the kind of complex, multi-layered shader code typically reserved for offline-rendered materials. The examples we saw on screen included tricky materials such as porcelain and silk. Material processing is claimed to be up to 5x faster, making it possible to render film-quality assets at game-ready frame rates.
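The underlying trick, as I understand it, is to replace an expensive layered material evaluation with a small learned approximation that costs only a handful of operations at runtime. Here's a toy version of the idea (my own, in Python, using a simple polynomial fit as a stand-in for the neural network, with a made-up 'layered material'):

```python
# Toy sketch of the material-compression idea (not NVIDIA's code):
# approximate an expensive, multi-layered material function with a small,
# cheap model that can be evaluated at game-ready frame rates.
import numpy as np

def expensive_layered_material(angle):
    """Pretend offline material: several 'layers' combined per sample."""
    base = 0.04 + 0.6 * np.cos(angle) ** 2            # base layer
    sheen = 0.3 * np.exp(-4.0 * (angle - 1.0) ** 2)   # sheen layer
    coat = 0.1 * np.cos(3.0 * angle)                  # clear-coat wobble
    return base + sheen + coat

# 'Bake' a small, cheap stand-in for the layered evaluation.
angles = np.linspace(0.0, np.pi / 2, 200)
target = expensive_layered_material(angles)
basis = np.vander(angles, 6)                  # 6 cheap polynomial features
coeffs, *_ = np.linalg.lstsq(basis, target, rcond=None)

def cheap_material(angle):
    """Runtime replacement: a handful of multiply-adds per sample."""
    return np.vander(np.atleast_1d(angle), 6) @ coeffs

err = np.max(np.abs(cheap_material(angles) - target))
print(f"max approximation error: {err:.4f}")
```

Swap the polynomial for a tiny neural network running on the Tensor Cores and you have the gist of what NVIDIA is claiming: film-style layered looks evaluated at game frame rates.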
The above are aimed at gamers and game developers, but as we're seeing more and more crossover between creative and game use for software (Duncan Jones' Rogue Trooper using Unreal Engine for its animation is just one example) this will have an effect outside the game-dev space very soon, I imagine.
RTX Mega Geometry will handle more complex scenes than ever
Another game-changer in ray- and path-tracing, I suspect, will be RTX Mega Geometry. It handles complex scenes with high polygon counts in ray tracing and path tracing by enabling the use of full-resolution meshes without proxy meshes (the simplified stand-ins that have been used to save memory, given the sheer number of triangles/polygons in any complex 3D scene). It also efficiently compresses and caches clusters of geometry over time, which NVIDIA claims will speed things up both in gameplay and on the development side.
This tech is coming soon to the NVIDIA RTX Branch of Unreal Engine (NvRTX), so developers can use Nanite and fully ray trace every triangle in their projects.
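For a rough sense of what the cluster approach buys you, here's a toy sketch (my own, in Python, nothing like NVIDIA's implementation): triangles are grouped into clusters, each cluster's bounding data is cached, and when the geometry changes only the affected cluster gets refreshed rather than everything.

```python
# Conceptual sketch only (nothing like NVIDIA's implementation): group a
# full-resolution mesh's triangles into clusters, cache a bounding box per
# cluster, and when geometry changes refresh only the affected clusters
# instead of rebuilding data for the whole mesh.
import random

random.seed(1)
CLUSTER_SIZE = 64

# A 'mesh': each triangle is three (x, y, z) vertices.
triangles = [[(random.random(), random.random(), random.random())
              for _ in range(3)] for _ in range(1000)]

clusters = [list(range(i, min(i + CLUSTER_SIZE, len(triangles))))
            for i in range(0, len(triangles), CLUSTER_SIZE)]

def cluster_bounds(cluster):
    """Axis-aligned bounding box over every vertex in the cluster."""
    verts = [v for idx in cluster for v in triangles[idx]]
    lo = tuple(min(v[a] for v in verts) for a in range(3))
    hi = tuple(max(v[a] for v in verts) for a in range(3))
    return lo, hi

bounds_cache = [cluster_bounds(c) for c in clusters]   # built once

# An animation/edit step moves one triangle; only its cluster is refreshed.
triangles[5] = [(2.0, 2.0, 2.0), (2.1, 2.0, 2.0), (2.0, 2.1, 2.0)]
dirty = 5 // CLUSTER_SIZE
bounds_cache[dirty] = cluster_bounds(clusters[dirty])
print(f"refreshed 1 of {len(clusters)} clusters")
```

Scale that idea up to ray-tracing acceleration structures with millions of triangles and you can see why it matters for both framerates and iteration times.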
Does DLSS 4 mean we don't need higher VRAM specs?
I noticed something interesting both at the event and afterwards during chats with some people who are a lot smarter than I am. The new NVIDIA graphics cards don’t seem to show a huge jump in VRAM compared to the last generation. For example, the 5090 tops out at 32GB, while a lot of the NVIDIA laptop GPUs in the 50-series come with either 12GB or 16GB of VRAM.
So, why are the numbers so low, I hear you groan.
From what I gathered, DLSS 4 could be a big reason. DLSS, or Deep Learning Super Sampling, has just entered its fourth generation at NVIDIA. This new version aims to boost performance and image quality in real-time graphics using AI. With DLSS 4, the trade-offs between image quality, smoothness and responsiveness in rendering graphics might become a lot less important.
By using AI to predict and reconstruct frames more efficiently, based on data from the game itself, DLSS cuts down on the need for raw computational power. It essentially takes advantage of redundancy in rendering workloads, which means less VRAM might be needed while still delivering great performance.
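Some quick back-of-envelope arithmetic makes the logic easier to see (my figures and buffer sizes are purely illustrative): rendering internally at 1440p and upscaling to 4K means shading far fewer pixels, and keeping smaller buffers around while you do it.

```python
# Back-of-envelope arithmetic only: why rendering internally at a lower
# resolution and upscaling to the output resolution reduces per-frame
# pixel work and buffer memory. The bytes-per-pixel figure is illustrative.
def frame_cost(width, height, bytes_per_pixel=16):
    pixels = width * height
    return pixels, pixels * bytes_per_pixel / 2**20   # (pixels, MiB)

native = frame_cost(3840, 2160)     # shade every 4K pixel the hard way
internal = frame_cost(2560, 1440)   # shade at 1440p, upscale to 4K

print(f"native 4K: {native[0]:>9,} px, {native[1]:.0f} MiB of buffers")
print(f"upscaled:  {internal[0]:>9,} px, {internal[1]:.0f} MiB of buffers")
print(f"pixel work saved: {1 - internal[0] / native[0]:.0%}")
```

It's not the whole story, of course (the game's textures and geometry still need to live somewhere), but it shows why NVIDIA leans on smarter reconstruction rather than simply piling on more memory.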
The demos we were shown took a before-and-after approach with some AAA games, with DLSS 4 switched off and then on. With DLSS 4 switched on, the games' visual detail was noticeably improved, especially in motion, thanks to the multi-frame generation offered by DLSS 4 essentially having several frames ready and waiting depending on where you moved the camera (using fancy AI trickery).
But not only did the graphics improve and get smoother, the framerate went up too, so the game was running more efficiently while showing more detail.
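The framerate side is easier to picture with a little arithmetic (illustrative figures, not benchmarks): NVIDIA says DLSS 4's multi-frame generation can produce up to three extra frames for every traditionally rendered one, so a modest rendered framerate turns into a much higher displayed one.

```python
# Simple frame-pacing arithmetic (illustrative numbers, not benchmarks):
# if the GPU traditionally renders N frames per second and AI generates
# k extra frames per rendered frame, the display sees roughly N * (k + 1).
def displayed_fps(rendered_fps, generated_per_rendered):
    return rendered_fps * (generated_per_rendered + 1)

for k in range(4):   # 0 = frame generation off, 3 = DLSS 4's stated maximum
    print(f"{k} generated per rendered frame -> {displayed_fps(30, k)} fps")
```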
That's where the Blackwell architecture's development seems to have been focused: on efficiency rather than brute force or simply pumping in more horsepower.
Video rendering is about to take a big leap
NVIDIA did touch upon the widespread adoption of generative AI (whether I like it personally or not), and how the latest iteration of its graphics cards will aid generative extension of sequences and reframing of shots, perhaps eliminating the need for costly re-takes in some filmmaking scenarios. But what intrigued me most were the developments in video rendering.
With multi-camera set-ups for shows, interviews, video podcasts and even on-site reports (we saw an example from a vlogger's racetrack visit with his nine cameras) on the rise, the need for more streamlined video rendering and editing is growing fast. With the improvements brought by the new Blackwell architecture, standard 4K video rendering is set to be up to 40% faster, and thanks to added support for 4:2:2 camera footage, multiple videos can be rendered and edited concurrently and in near real-time, which NVIDIA says amounts to an up-to-11-times faster rendering and editing workflow for that footage.
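Since '4:2:2' gets thrown around a lot, a quick aside: it describes how much colour information a camera stores relative to brightness, and that extra colour data is exactly why hardware decode support matters for smooth editing. Some general video maths (nothing NVIDIA-specific) shows the difference:

```python
# Quick reference arithmetic for chroma subsampling (general video facts,
# not NVIDIA-specific): '4:2:2' footage stores colour at half the
# horizontal resolution of brightness, so it carries more data per frame
# than the more common 4:2:0, and less than full 4:4:4.
schemes = {"4:4:4": 3.0, "4:2:2": 2.0, "4:2:0": 1.5}   # samples per pixel

width, height, fps, bit_depth = 3840, 2160, 30, 10      # 4K30, 10-bit
for name, samples in schemes.items():
    mbps = width * height * fps * samples * bit_depth / 1e6
    print(f"{name}: {samples} samples/px, ~{mbps:,.0f} Mbit/s uncompressed")
```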
In addition, we saw improvements to voice and image enhancement, with Studio Voice helping cut out noise and improve voice quality for podcasts and videography, and Virtual Keylight helping balance out unevenly or unflatteringly lit scenes, especially for streamers or video creators who may not have the luxury of a full lighting kit for their recordings.
Digital humans are still a little too uncanny
One thing computers, CGI and AI-generated videos have struggled with, and continue to struggle with, is creating convincing-looking and naturally moving humans.
At the NVIDIA showcase, and indeed at several places throughout the expo, I saw 'digital humans', 'autonomous game characters', 'neural faces', and even an 'intelligent streaming assistant' from Logitech, to use as your 'companion' during game streams. All of these are different approaches to creating a 'UI for your AI', and while remarkable improvements have been made in some respects, we're still deep in the Uncanny Valley. To its credit, NVIDIA acknowledges this, admitting that rendering human faces convincingly is just about the hardest thing you can do in any digital space.
One thing that's moving the needle for NVIDIA this year is RTX Skin, a part of the Neural Rendering suite of developments. Most rendering methods we've seen don't accurately simulate how light interacts with human skin, which is partly translucent, and that can, and frequently does, result in a plastic-like look. Subsurface Scattering (SSS, get ready to note down all those acronyms in your notebook, there will be a test) simulates how light penetrates beneath the surface of translucent materials and scatters internally, creating a softer, more natural appearance.
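To show roughly what subsurface scattering changes, here's the classic cheap approximation known as 'wrap lighting' (my own toy example, emphatically not RTX Skin): ordinary diffuse lighting cuts off hard where a surface turns away from the light, while the wrapped version lets light appear to bleed past that edge, which is a big part of what stops skin looking like plastic.

```python
# Not RTX Skin, just the classic cheap trick ('wrap lighting') that hints
# at what subsurface scattering changes: instead of light cutting off
# sharply where a surface turns away from the light, some of it appears
# to leak past the terminator, softening the look of skin.
import math

def plastic_diffuse(cos_angle):
    """Standard diffuse: hard cutoff once the surface faces away."""
    return max(cos_angle, 0.0)

def wrapped_diffuse(cos_angle, wrap=0.5):
    """Wrap lighting: shift and rescale so light 'bleeds' past the edge."""
    return max((cos_angle + wrap) / (1.0 + wrap), 0.0)

for degrees in (0, 60, 90, 110):
    c = math.cos(math.radians(degrees))
    print(f"{degrees:>3} deg: plastic={plastic_diffuse(c):.2f}  "
          f"skin-ish={wrapped_diffuse(c):.2f}")
```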
The example we saw showed fantastic improvements in how ears and other translucent parts of human skin are rendered, along with more convincing skin and more natural, realistic lighting as light bounces off the 'human' skin. But it's abundantly clear we're still not there when it comes to natural facial movements, so I'm not gonna get replaced by a digital avatar in the workplace just yet...
I wouldn't panic about 'lazy devs' just yet
As we were shown a developer demo from the makers of Doom: The Dark Ages, we saw how DLSS 4, neural rendering, improved path-tracing and ray-tracing and other introductions from NVIDIA have helped them create a more immersive, photorealistic and convincingly lived-in (and died-in, ey?) world.
An argument I'm seeing in several pieces around the interwebz is that AI rendering of frames, multi-frame generation and other tools being introduced will lead to lazy devs pushing out poorly developed AI-reliant slop. And yes, that's definitely gonna happen. But we also get lazy devs making poorly programmed slop now. And we also got lazy devs making poorly programmed slop in 1996, when I was just getting into PC gaming. We've always had those. Thankfully, most of that work gets forgotten and buried.
But in the end, it's not them who matter. The ones who matter are the talented, hard-working, visionary devs who see these developments for what they are: potential tools to create new, richer, bigger and more immersive worlds, whether in gaming, filmmaking or the 3D modelling/graphic design space. And I'm excited to explore all of those.
Erlingur is the Tech Reviews Editor on Creative Bloq. Having worked on magazines devoted to Photoshop, films, history, and science for over 15 years, as well as working on Digital Camera World and Top Ten Reviews in more recent times, Erlingur has developed a passion for finding tech that helps people do their job, whatever it may be. He loves putting things to the test and seeing if they're all hyped up to be, to make sure people are getting what they're promised. Still can't get his wifi-only printer to connect to his computer.