NVIDIA Turing GPUs

Ravenwing
Posts: 335
Joined: 02 Jan 2016, 02:51

NVIDIA Turing GPUs

Post by Ravenwing »

Hey, so I’ve been seeing stuff about the new Turing GPUs and not really understanding the fuss as a whole. I get that ray tracing is expensive, but what is typically used instead, and how can something be computationally cheaper if it still looks realistic? I’m also not really understanding what they’ve changed in the hardware that makes this possible now when it wasn’t before. Lastly, I’d be interested in how this can help us, and gaming in general, over the next few years.

Figured this would be the place to ask since we have a nice selection of graphics nerds :geek: If you have any less technical links that still get into details, that would be awesome too! Just didn’t really know where to look.

Thanks!
AnyOldName3
Posts: 2666
Joined: 26 Nov 2015, 03:25

Re: NVIDIA Turing GPUs

Post by AnyOldName3 »

Ray tracing requires that, for every object in the scene, you have a function that takes a ray and tells you whether it hits the object. You then shoot a ray from the camera through each pixel and remember the nearest object it hits. It produces good results, especially for effects like reflections and refractions, as it's no more complicated to bounce or bend a ray when it hits something. However, it's slow, as everything needs to be processed individually (although if you have a bazillion cores, you could do it in parallel).
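
Very roughly, the core loop looks something like this (a toy C++ sketch with made-up Vec3/Ray/Sphere types and a hard-coded pinhole camera, not any real renderer's API):

#include <cmath>
#include <limits>
#include <vector>

// A sphere is one of the simplest shapes to write a "does this ray hit it?" function for.
struct Vec3 { float x, y, z; };
static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

struct Ray { Vec3 origin, dir; };          // dir assumed normalised
struct Sphere { Vec3 centre; float radius; };

// Distance along the ray to the nearest hit, or a negative value on a miss.
float hit(const Sphere& s, const Ray& r)
{
    Vec3 oc = sub(r.origin, s.centre);
    float b = dot(oc, r.dir);
    float c = dot(oc, oc) - s.radius * s.radius;
    float disc = b * b - c;
    if (disc < 0.0f)
        return -1.0f;
    return -b - std::sqrt(disc);
}

// For every pixel: shoot one ray, remember the closest object it hits.
void render(const std::vector<Sphere>& scene, int width, int height)
{
    for (int y = 0; y < height; ++y)
    {
        for (int x = 0; x < width; ++x)
        {
            // Map the pixel to a direction through a simple pinhole camera.
            Vec3 dir = { (x + 0.5f) / width - 0.5f,
                         0.5f - (y + 0.5f) / height,
                         -1.0f };
            float len = std::sqrt(dot(dir, dir));
            Ray ray = { {0, 0, 0}, {dir.x / len, dir.y / len, dir.z / len} };

            float nearest = std::numeric_limits<float>::max();
            const Sphere* nearestObj = nullptr;
            for (const Sphere& s : scene)
            {
                float t = hit(s, ray);
                if (t > 0.0f && t < nearest) { nearest = t; nearestObj = &s; }
            }
            // nearestObj (if any) is what this pixel shows; shading, shadows,
            // reflections etc. would all start from the hit point here.
            (void)nearestObj;
        }
    }
}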

Most software, like games, uses a different approach involving rasterisation and a z-buffer. You feed the GPU all the vertices for each object, and it processes them with a vertex shader to work out where they appear on the screen. The GPU is also told which vertices join together to make triangles, lines and quadrilaterals, so now that it knows where the vertices of each shape are, it actually draws them on the screen (rasterises them). This also involves interpolating the properties of the vertices across the surface of each shape so that every pixel gets an appropriate value. The depth of each pixel of each shape gets compared against a z-buffer (something with as many pixels as the screen, but each storing a depth value) to see if it's closer to the camera than the last value written there. You also colour in each pixel with a fragment shader based on the interpolated vertex values, and then move on to the next object.

It's faster because all the vertices for an object can be processed at once on a SIMD/vector processor (which does the same thing to lots of different data at once), the rasterisation step can be done quickly by fixed-function hardware, and the fragment shader for each object can also be run in parallel on a SIMD processor. However, it can't do lots of effects like reflections without cheating, and it only looks good because graphics programmers have had lots of practice at cheating.
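
The z-buffer bit boils down to a per-pixel depth comparison. A toy sketch of just that part (on a real GPU this is fixed-function hardware and massively parallel; the names here are made up):

#include <limits>
#include <vector>

// Simplified sketch of the per-fragment depth test.
struct Framebuffer
{
    int width, height;
    std::vector<float> depth;      // one depth value per pixel (the "z-buffer")
    std::vector<unsigned> colour;

    Framebuffer(int w, int h)
        : width(w), height(h),
          depth(w * h, std::numeric_limits<float>::max()),   // start "infinitely far away"
          colour(w * h, 0) {}

    // Called for every pixel covered by every rasterised triangle, in whatever
    // order the objects happen to be drawn.
    void writeFragment(int x, int y, float z, unsigned rgba)
    {
        int i = y * width + x;
        if (z < depth[i])          // closer than whatever was drawn there before?
        {
            depth[i] = z;
            colour[i] = rgba;      // in a real pipeline, the fragment shader's output
        }
    }
};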

Right now, I'm running with a two-month lag on tech news, but knowing Nvidia, this isn't going to be all that impressive for consumers for quite some time, and it's something we'd have had five years ago if their anti-competitive and anti-consumer behaviour (plus ATI/AMD's mistakes) hadn't left them with an effective monopoly and no need to compete except with their past selves.
raven
Posts: 66
Joined: 26 May 2016, 09:54

Re: NVIDIA Turing GPUs

Post by raven »

They've added a bunch of dedicated ray-triangle intersection units to the GPU, and implementing an algorithm in dedicated hardware is generally much more efficient than running it on general-purpose processing units.
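
For a sense of what those units are doing, the equivalent software test looks roughly like this (a standard Möller-Trumbore-style ray-triangle intersection in C++; Nvidia haven't published exactly what the RT cores compute, so treat it purely as an illustration):

#include <array>
#include <cmath>

// One ray-triangle intersection test. Turing's RT cores do this kind of work --
// plus traversal of the bounding volume hierarchy around the triangles -- in
// dedicated hardware instead of in shader code.
struct Vec3 { float x, y, z; };
static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 cross(Vec3 a, Vec3 b)
{
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Returns true and writes the hit distance to t if the ray (orig, dir) hits the triangle.
bool rayHitsTriangle(Vec3 orig, Vec3 dir, const std::array<Vec3, 3>& tri, float& t)
{
    const float eps = 1e-7f;
    Vec3 edge1 = sub(tri[1], tri[0]);
    Vec3 edge2 = sub(tri[2], tri[0]);

    Vec3 pvec = cross(dir, edge2);
    float det = dot(edge1, pvec);
    if (std::fabs(det) < eps)
        return false;               // ray is parallel to the triangle's plane
    float invDet = 1.0f / det;

    Vec3 tvec = sub(orig, tri[0]);
    float u = dot(tvec, pvec) * invDet;
    if (u < 0.0f || u > 1.0f)
        return false;               // hit point is outside the triangle

    Vec3 qvec = cross(tvec, edge1);
    float v = dot(dir, qvec) * invDet;
    if (v < 0.0f || u + v > 1.0f)
        return false;

    t = dot(edge2, qvec) * invDet;  // distance along the ray to the hit
    return t > eps;
}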

Btw, Nvidia aren't the first to offer hardware-accelerated ray tracing: PowerVR have had ray tracing units in their GPUs for years. But they don't do desktop graphics, so it didn't see much use, if any (afaik).
Ravenwing
Posts: 335
Joined: 02 Jan 2016, 02:51

Re: NVIDIA Turing GPUs

Post by Ravenwing »

Thanks guys! It's making quite a bit more sense now. I also found this video, which isn't about Turing but is still relevant, and I thought it was helpful too: https://www.youtube.com/watch?v=lFnWy0Odsh8 It certainly helped with the "why" of everything.

I think I was actually more confused about how stuff is done currently and didn't know it. It's surprising to me because the way things are currently done sounds way more complicated and ray tracing seems more intuitive. I'm honestly surprised that this wasn't (successfully) done before now in the professional realm since basically all prerendered stuff has been doing this for years. It will be interesting to see where this goes for gaming.
Chris
Posts: 1625
Joined: 04 Sep 2011, 08:33

Re: NVIDIA Turing GPUs

Post by Chris »

Ravenwing wrote: 19 Aug 2018, 02:16
It's surprising to me because the way things are currently done sounds way more complicated and ray tracing seems more intuitive. I'm honestly surprised that this wasn't (successfully) done before now in the professional realm since basically all prerendered stuff has been doing this for years. It will be interesting to see where this goes for gaming.
There are a few reasons why things are the way they are. First is inertia. When computer graphics first started becoming a thing, triangle rasterization was the only real way to do it at real-time speeds. Ray-tracing required way too much time and resources to pull off with a decent amount of quality. Over time, improvements were made to triangle rasterization to make it even more efficient and better looking. People got used to triangle rasterization as "the way to do things", so they kept improving it further. More time and research was put into triangle rasterization than into ray-tracing because the former had more practical application -- people knew and could use triangle rasterization, there were a bunch of tools and production pipelines all designed around it, hardware specifically designed to do it, and it could be used to create good looking visuals now rather than later -- and taking a bit of time to further research and improve it was better than tossing it all out and restarting with ray-tracing.

The second issue is processing capability. The concept of ray-tracing is more intuitive and physically accurate, but that accuracy comes at the cost of needing to process a lot more data. To even have a hope of it working in a real-time scenario required skimping on various details, such as diffuse reflections and soft shadows (each "bounce" of a ray requires an exponentially increasing number of rays to correctly render dynamic geometry -- at, say, eight secondary rays per bounce, three bounces is already 512 rays for a single pixel). I remember one QuakeCon where John Carmack mentioned some tricks ray-tracing would employ to get good-looking lighting in pre-rendered videos, by 'focusing' higher-order bounces toward known light sources, because even if you're not generating frames in real time, the less time you take per frame, the less money needs to be spent on it (making 4 videos a month with some small compromises is generally considered better than making 1 video a month with no compromises).

Ray-tracing has been technically possible for a long time, but it requires either slower-than-real-time rendering (having predetermined scenes and camera angles also helps avoid exposing visual oddities) or not looking as good as the method promises. It also scales really poorly with resolution, as each new row of pixels to render is a whole new row of rays that need processing. There are also issues with aliasing as a result of each ray being a discrete spatial sample (generally requiring something akin to SSAA to properly smooth out). Even today, tricks need to be employed to deal with the processing costs; for instance, I somewhat recently saw a video about using machine learning as a post-process to clean up real-time ray-traced frames based on what it knows about how the scene should look. And even then you could still see some artifacts.
Ravenwing
Posts: 335
Joined: 02 Jan 2016, 02:51

Re: NVIDIA Turing GPUs

Post by Ravenwing »

Chris wrote: 19 Aug 2018, 04:18
I remember one QuakeCon where John Carmack mentioned some tricks ray-tracing would employ to get good-looking lighting in pre-rendered videos, by 'focusing' higher-order bounces toward known light sources, because even if you're not generating frames in real time, the less time you take per frame, the less money needs to be spent on it (making 4 videos a month with some small compromises is generally considered better than making 1 video a month with no compromises).
I imagine this is an exciting time for the two sides of the graphics industry. Before, they were almost entirely separated by the necessities of what they were trying to accomplish, with time being the most valuable resource for real-time stuff and fidelity being the most important for pre-rendered stuff. Now the technology and research on each side is maturing in a way that allows them to start reaching across and using the other's accumulated knowledge. Hopefully the cheats the pre-rendered side has figured out speed things along for gaming. Even in the video, the Nvidia guy said they're going to have to rely on hybrid approaches for the foreseeable future.
Chris wrote: 19 Aug 2018, 04:18
Ray-tracing has been technically possible for a long time, but it requires either slower-than-real-time rendering (having predetermined scenes and camera angles also helps avoid exposing visual oddities) or not looking as good as the method promises. It also scales really poorly with resolution, as each new row of pixels to render is a whole new row of rays that need processing. There are also issues with aliasing as a result of each ray being a discrete spatial sample (generally requiring something akin to SSAA to properly smooth out). Even today, tricks need to be employed to deal with the processing costs; for instance, I somewhat recently saw a video about using machine learning as a post-process to clean up real-time ray-traced frames based on what it knows about how the scene should look. And even then you could still see some artifacts.
This is something I didn't know about until now, but it makes sense. I think the video touches on this, as part of Turing's significance is its hardware-accelerated neural network stuff. It reminds me of the neural network tools for upscaling textures that I tried out a few months ago: fill in the missing information more intelligently. It's fun to watch this stuff mature! Just wish it would happen faster :?

On a partially unrelated note, just learned about a new feature in Photoshop that allows you to "uncrop" images by intelligently extending similar image information outside of what was taken. It looked pretty impressive, but I'm sure it mostly only works on visually bland areas like sand and clouds. I think it was to help reduce the amount you have to crop out of pictures when correcting rotation slightly.
afritz1
Posts: 47
Joined: 05 Sep 2016, 01:18

Re: NVIDIA Turing GPUs

Post by afritz1 »

My engine for Arena used to do ray tracing on the GPU, but I removed that renderer in favor of a simpler CPU-based 2.5D one that's easier to prototype and maintain. I don't have to deal with shader compiling, platform incompatibilities, version mismatches, etc. The ray tracer had pretty nice dynamic hard shadows, though :(

Ray tracing does take a lot of computation. Each ray is independent of the others, so the ability to interpolate or predict results between two rays is a gray area. There's cone tracing and beam tracing, but I think the general consensus is that those are hard to implement and too expensive for real-time applications. Ray differentials are a thing, but I haven't delved into those yet; I think they're mainly used for texture filtering.

There are all sorts of techniques for increasing the efficiency of ray tracing without sacrifices:
- SIMD/vector operations: do calculations on multiple rays with one instruction
- Packets and tiling: adjacent rays often hit the same object = potential for cache coherency and work balancing
- Multi-threading: very easy to throw dozens/hundreds of threads at the problem
- Stochastic sampling: mix random and uniform sampling to get the best of both worlds -- predictable samples without visible repetition to reduce the number of rays required for the same result
- Spreading work out over several frames
- Adaptive anti-aliasing: testing intermediate results to avoid doing extra work on something like a clear sky or solid color
- Acceleration structure: testing only the shapes a ray comes close to (rough sketch below)
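
To make the last one concrete: the usual building block of an acceleration structure is a cheap ray-vs-bounding-box "slab" test, so if a ray misses the box around a group of shapes, none of the shapes inside need their own intersection test. A rough sketch (simplified, with made-up names):

#include <algorithm>
#include <limits>

struct Vec3 { float x, y, z; };

// Axis-aligned bounding box around a group of shapes (e.g. a chunk of triangles).
struct Aabb { Vec3 min, max; };

// "Slab" test: clip the ray against the box's x, y and z slabs and see whether
// the resulting intervals still overlap. If this returns false, every shape
// inside the box can be skipped without its own ray-shape test.
bool rayHitsBox(Vec3 orig, Vec3 dir, const Aabb& box)
{
    float tMin = 0.0f;
    float tMax = std::numeric_limits<float>::max();

    const float o[3]  = { orig.x, orig.y, orig.z };
    const float d[3]  = { dir.x, dir.y, dir.z };
    const float lo[3] = { box.min.x, box.min.y, box.min.z };
    const float hi[3] = { box.max.x, box.max.y, box.max.z };

    for (int axis = 0; axis < 3; ++axis)
    {
        // Distances along the ray to the two planes bounding this axis.
        float t0 = (lo[axis] - o[axis]) / d[axis];
        float t1 = (hi[axis] - o[axis]) / d[axis];
        if (t0 > t1) std::swap(t0, t1);
        tMin = std::max(tMin, t0);
        tMax = std::min(tMax, t1);
        if (tMin > tMax)
            return false;   // the intervals no longer overlap: the ray misses
    }
    return true;
}

In a BVH (bounding volume hierarchy) these boxes are nested, so a single miss near the root lets you skip a huge number of triangles at once.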

I did some experiments with adaptive anti-aliasing a couple years ago (images attached). While the solution I found probably isn't very good in the general case, it might be useful in games that use no texture filtering, like TES: Arena and Minecraft. One version of it colors a pixel blue if it needs anti-aliasing, and the other colors a pixel anywhere from black to white depending on how much anti-aliasing it needs. I don't remember how large or small of an improvement in performance it was, but it was an interesting experiment nonetheless :P
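
The general idea (heavily simplified -- this isn't my actual code) is to render one sample per pixel first, then flag only the pixels that differ noticeably from a neighbour for extra samples:

#include <cmath>
#include <vector>

// Hypothetical sketch of flagging pixels for adaptive anti-aliasing: after a
// one-sample-per-pixel pass, mark pixels that differ strongly from the pixel to
// their right or below. Only marked pixels get extra rays (or, in the debug
// views above, get painted blue / shaded by intensity).
std::vector<bool> flagPixelsForAA(const std::vector<float>& luminance,
                                  int width, int height, float threshold)
{
    std::vector<bool> needsAA(width * height, false);
    for (int y = 0; y < height - 1; ++y)
    {
        for (int x = 0; x < width - 1; ++x)
        {
            int i = y * width + x;
            float dx = std::fabs(luminance[i] - luminance[i + 1]);       // right neighbour
            float dy = std::fabs(luminance[i] - luminance[i + width]);   // neighbour below
            if (dx > threshold || dy > threshold)
                needsAA[i] = true;   // an edge probably passes through here
        }
    }
    return needsAA;
}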

The moral of the story is that there are many ways to optimize a ray tracer so it isn't outrun by a rasterizer quite so badly, and that using a ray tracer in a game is not fiction anymore :). However, the fact remains that much of ray tracing relies on approximating integrals that operate over a non-local range in 3D space (hence the global nature), and it's hard to fit all that global data into cache ;)
Attachments
Arena adaptive AA blue.PNG
Arena adaptive AA intensity.PNG
Tes96
Posts: 232
Joined: 29 Feb 2012, 03:45
Location: Off-grid

Re: NVIDIA Turing GPUs

Post by Tes96 »

So is ray tracing something the OpenMW engine could utilize if the computer setup was beefy enough?
AnyOldName3
Posts: 2666
Joined: 26 Nov 2015, 03:25

Re: NVIDIA Turing GPUs

Post by AnyOldName3 »

There's an awful lot that would need to change before it's even something we could consider, and right now the only real-time raytracing libraries have licences that are incompatible with OpenMW's GPL. Something compatible may materialise in a few years (as in ten to fifteen) for games built for OpenMW, but it's unlikely it'll ever be usable in Morrowind itself even if it does, as all the assets would need remaking with more modern materials.