
At SIGGRAPH this week, both AMD and Nvidia are announcing various hardware and software technologies. SIGGRAPH is an annual show that focuses on computer graphics and advances in rendering techniques. At the show this year, Nvidia showcased ways AI could be used to improve gaming or to create extremely realistic images, without the enormous computational horsepower that would be required to brute-force certain visual standards.

This last bit is of more than incidental concern. The problem is simple: If you compare a top-shelf character animation from 2017 against the best that 2005 hardware could deliver, you'll plainly notice the difference. At the same time, however, you're unlikely to be fooled into thinking that even the most astonishing CG is real footage. Slowing silicon advances make it less and less likely that we'll ever be able to simply brute-force the result computationally. Perhaps more to the point, even if we could, brute-forcing a solution is rarely the best way to solve it.

To be clear, this is an ongoing research project, not a signal that Nvidia will be launching a new GTX 1100 AI Series in a few weeks. But some of the demos Nvidia has released are quite impressive in their own right, including a few that suggest there might be a way to integrate ray tracing into gaming and real-time 3D rendering much more smoothly than what we've seen in the past.

A new blog post from the company illustrates this point. Aaron Lefohn reports on how Nvidia worked with Remedy Entertainment to train GPUs to produce facial animations directly from actor videos. He writes:

Instead of having to perform labor-intensive data conversion and touch-up for hours of actor videos, NVIDIA's solution requires only five minutes of training data. The trained network automatically generates all facial animation needed for an entire game from a simple video stream. NVIDIA's AI solution produces animation that is more consistent and retains the same fidelity as existing methods.
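Nvidia's post doesn't spell out the network architecture, but the general idea is a model that regresses facial-animation parameters directly from video frames. The sketch below is a minimal, hypothetical illustration of that idea in PyTorch; the class name, frame window, blendshape count, and training loss are all assumptions for demonstration, not details from Nvidia's work.

```python
# Hypothetical sketch: a small convolutional network that maps short clips of
# actor video to facial-animation parameters (e.g., blendshape weights).
# This is NOT Nvidia's published model; shapes, names, and losses are assumptions.
import torch
import torch.nn as nn

class VideoToFaceRig(nn.Module):
    def __init__(self, frames=5, n_blendshapes=50):
        super().__init__()
        # Treat a short window of grayscale frames as input channels.
        self.features = nn.Sequential(
            nn.Conv2d(frames, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Regress animation parameters from the pooled features.
        self.head = nn.Linear(64, n_blendshapes)

    def forward(self, clip):            # clip: (batch, frames, H, W)
        x = self.features(clip).flatten(1)
        return self.head(x)             # (batch, n_blendshapes)

model = VideoToFaceRig()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

# Toy training step on random tensors standing in for video clips paired with
# artist-approved animation curves (the "five minutes of training data").
clips = torch.rand(8, 5, 128, 128)
target_weights = torch.rand(8, 50)
optimizer.zero_grad()
loss = loss_fn(model(clips), target_weights)
loss.backward()
optimizer.step()
```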

But drawing animations isn't the only thing Nvidia thinks AI can do. One of the reasons ray tracing has never been adopted as a primary method of drawing graphics in computer games is that it's incredibly computationally expensive. Ray tracing refers to the practice of creating scenes by tracing the path of light as it leaves a (simulated) light source and interacts with other objects nearby.
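To make the cost concrete, here's a toy ray tracer in Python that casts a single ray per pixel at one hard-coded sphere and applies simple diffuse shading. Everything in it (scene, camera, light) is made up for illustration; production renderers trace many rays per pixel, follow bounces, and handle vastly more geometry, which is where the expense comes from.

```python
# Minimal toy ray tracer: one ray per pixel, one sphere, one point light.
# Real renderers fire many rays per pixel and follow bounces, which is where
# the huge computational cost comes from. All scene values are made up.
import numpy as np

WIDTH, HEIGHT = 320, 240
sphere_center = np.array([0.0, 0.0, 3.0])
sphere_radius = 1.0
light_pos = np.array([2.0, 2.0, 0.0])

def trace(origin, direction):
    # Ray/sphere intersection via the quadratic formula.
    oc = origin - sphere_center
    b = 2.0 * np.dot(oc, direction)
    c = np.dot(oc, oc) - sphere_radius ** 2
    disc = b * b - 4.0 * c
    if disc < 0:
        return 0.0                       # ray misses the sphere: background
    t = (-b - np.sqrt(disc)) / 2.0
    if t <= 0:
        return 0.0
    hit = origin + t * direction
    normal = (hit - sphere_center) / sphere_radius
    to_light = light_pos - hit
    to_light /= np.linalg.norm(to_light)
    return max(np.dot(normal, to_light), 0.0)   # simple diffuse shading

image = np.zeros((HEIGHT, WIDTH))
origin = np.zeros(3)
for y in range(HEIGHT):
    for x in range(WIDTH):
        # Map each pixel to a direction through a simple pinhole camera.
        direction = np.array([x / WIDTH - 0.5, (y / HEIGHT - 0.5) * HEIGHT / WIDTH, 1.0])
        direction /= np.linalg.norm(direction)
        image[y, x] = trace(origin, direction)
```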

A realistic ray-traced scene requires a very large number of rays. Performing those calculations to the degree required to make ray tracing preferable to the technique used today, known as rasterization, has generally been beyond modern GPU hardware. That's not to say that ray tracing is never used, but it's typically deployed in limited ways or via hybrid approaches that blend aspects of ray tracing and rasterization together. The work Nvidia is showing at SIGGRAPH this week is an example of how AI can take a relatively crude image (the image on the left, top) and predict its final form much more quickly than actually performing enough ray traces to generate that result through brute force.

Using AI to denoise an image.

Ray tracing isn't the only field that could benefit from AI. As shown above, it's possible to use AI to remove noise from an image, something that could be incredibly useful in the future, for example, when watching lower-quality video or playing games at a low resolution due to panel limitations.
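Nvidia doesn't detail its denoising network here, but the basic recipe behind learned denoisers is straightforward: train a convolutional network to map a noisy, low-sample render to a clean reference. The sketch below is a minimal illustration of that recipe; the architecture, noise model, and training loop are assumptions for demonstration, not Nvidia's implementation.

```python
# Conceptual sketch of an image denoiser: a small convolutional network trained
# to map noisy renders to clean reference images. Illustration only, not
# Nvidia's published architecture.
import torch
import torch.nn as nn

denoiser = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, kernel_size=3, padding=1),
)
optimizer = torch.optim.Adam(denoiser.parameters(), lr=1e-3)

# Stand-in data: a "clean" image and a noisy version of it, playing the role
# of a converged render and a cheap low-sample render of the same frame.
clean = torch.rand(4, 3, 64, 64)
noisy = clean + 0.1 * torch.randn_like(clean)

# Train the network to predict the residual noise, which is then subtracted.
for _ in range(10):
    optimizer.zero_grad()
    predicted_noise = denoiser(noisy)
    loss = nn.functional.mse_loss(noisy - predicted_noise, clean)
    loss.backward()
    optimizer.step()

denoised = noisy - denoiser(noisy)   # inference: clean up a new noisy frame
```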

AI can apparently also be used for antialiasing purposes.

In fact, AI can also be used to perform antialiasing more accurately. If you follow the topic of AA at all, you're likely aware that every method of performing antialiasing (which translates to "smoothing out the jagged pixels that drive you nuts") has drawbacks. Supersampled antialiasing (SSAA) provides the best overall image quality, but it sometimes renders an image blurry depending on the grid order and imposes a huge performance penalty. Multisample antialiasing (MSAA) reduces the performance impact but doesn't fully supersample the entire image. Other methods of approximating AA, like FXAA or SMAA, are much less computationally expensive but also don't offer the same level of visual improvement. If Nvidia is right about using AI to generate AA, it could solve a problem that's vexed GPU hardware designers and software engineers for decades.
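To see why supersampling is so costly, consider a stripped-down example: sample the scene on a grid several times finer than the output image, then box-filter it back down. The shading work grows with the square of the supersampling factor. The NumPy sketch below is purely illustrative; the "scene" is just a hard-edged disc standing in for a real shader.

```python
# Why SSAA is expensive: take factor x factor samples per output pixel, then
# average them. The shading work scales with factor**2.
import numpy as np

def shade(x, y):
    # Stand-in for an expensive per-sample shading function: a hard-edged disc
    # that shows jagged edges when sampled only once per pixel.
    return 1.0 if (x - 0.5) ** 2 + (y - 0.5) ** 2 < 0.16 else 0.0

def render(width, height, factor=1):
    # Sample the scene on a grid "factor" times finer than the output image...
    hi_res = np.array([[shade((x + 0.5) / (width * factor), (y + 0.5) / (height * factor))
                        for x in range(width * factor)]
                       for y in range(height * factor)])
    # ...then box-filter back down to the output resolution.
    return hi_res.reshape(height, factor, width, factor).mean(axis=(1, 3))

aliased = render(64, 64)             # 1 sample per pixel: jagged edges
ssaa_4x = render(64, 64, factor=2)   # 4 samples per pixel: smoother, ~4x the shading work
```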