Figured this would be the place to ask since we have a nice selection of graphics nerds

Thanks!
There are a few issues with why things are the way they are. First is inertia. When computer graphics first started becoming a thing, triangle rasterization was the only real way to do it at real-time speeds; ray-tracing required far too much time and too many resources to pull off with a decent amount of quality. Over time, improvements were made to triangle rasterization to make it more efficient and better looking. People got used to triangle rasterization as "the way to do things", so they kept refining it. More time and research was put into triangle rasterization than ray-tracing because the former had more practical application -- people knew and could use triangle rasterization, we have a bunch of tools and production pipelines designed around it, we have hardware specifically built to do it, and it could be used to create good-looking visuals now rather than later -- and taking a bit more time to research and improve it was better than tossing it all out and restarting with ray-tracing.
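To make the cost difference concrete, here's a toy sketch (nobody's actual renderer, and it flattens everything to screen space so it stays tiny; a real ray tracer fires 3D rays through a camera and intersects them against the scene, usually through an acceleration structure). The point is just the shape of the loops: rasterization iterates over triangles and only touches the pixels each one covers, while ray-tracing iterates over every pixel and asks the whole scene what that pixel's ray hits.

WIDTH, HEIGHT = 320, 240

def edge(ax, ay, bx, by, px, py):
    # Signed area: which side of the edge (a -> b) the point p falls on.
    return (px - ax) * (by - ay) - (py - ay) * (bx - ax)

def covers(tri, x, y):
    # Point-in-triangle test, accepting either winding order.
    (x0, y0), (x1, y1), (x2, y2) = tri
    w0 = edge(x1, y1, x2, y2, x, y)
    w1 = edge(x2, y2, x0, y0, x, y)
    w2 = edge(x0, y0, x1, y1, x, y)
    return (w0 >= 0 and w1 >= 0 and w2 >= 0) or (w0 <= 0 and w1 <= 0 and w2 <= 0)

def rasterize(triangles):
    # Outer loop over primitives: each triangle only visits its own bounding box.
    framebuffer, tests = {}, 0
    for tri in triangles:
        xs = [p[0] for p in tri]
        ys = [p[1] for p in tri]
        for y in range(max(int(min(ys)), 0), min(int(max(ys)), HEIGHT - 1) + 1):
            for x in range(max(int(min(xs)), 0), min(int(max(xs)), WIDTH - 1) + 1):
                tests += 1
                if covers(tri, x, y):
                    framebuffer[(x, y)] = tri      # "shade" the covered pixel
    return framebuffer, tests

def ray_trace(triangles):
    # Outer loop over pixels: every pixel queries the whole scene, hit or miss.
    framebuffer, tests = {}, 0
    for y in range(HEIGHT):
        for x in range(WIDTH):
            for tri in triangles:
                tests += 1
                if covers(tri, x, y):
                    framebuffer[(x, y)] = tri
                    break
    return framebuffer, tests

scene = [((20.0, 20.0), (120.0, 40.0), (60.0, 110.0))]
fb_r, tests_r = rasterize(scene)
fb_t, tests_t = ray_trace(scene)
print(len(fb_r) == len(fb_t), tests_r, tests_t)   # same coverage, many more tests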
Ravenwing wrote: ↑19 Aug 2018, 02:16
It's surprising to me because the way things are currently done sounds way more complicated, and ray tracing seems more intuitive. I'm honestly surprised that this wasn't (successfully) done before now in the professional realm, since basically all prerendered stuff has been doing this for years. It will be interesting to see where this goes for gaming.

I imagine this is an exciting time for the two sides of the graphics industry. Before, they were almost entirely separated by the necessities of what they were trying to accomplish, with time being the most valuable resource for real-time work and fidelity being the most important for pre-rendered work. Now the technology and research on each side is maturing in a way that lets them start reaching across and using the other's accumulated knowledge. Hopefully the cheats the pre-rendered side has figured out will speed things along for gaming. Even in the video, the Nvidia guy said they're going to have to rely on hybrid approaches for the foreseeable future.
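Roughly what a hybrid approach looks like, as a minimal sketch (a made-up scene and made-up numbers, not the pipeline from the video or anyone else's): rasterization answers "what's directly visible at this pixel", since it's cheap at that, and rays get spent only on the effects rasterization has to fake, like shadows. The "rasterized" pass here is trivial because the whole view is one flat floor; on real hardware it would be the usual triangle pipeline writing out a G-buffer.

import math

# Toy hybrid frame: a flat floor seen from above, one sphere floating over it,
# one point light. Step 1 stands in for rasterization (primary visibility),
# step 2 ray-traces shadows only. Scene and numbers are arbitrary assumptions.

WIDTH, HEIGHT = 48, 24
LIGHT = (3.0, -3.0, 6.0)                    # point light position
SPHERE_C, SPHERE_R = (0.0, 0.0, 2.0), 1.2   # occluder hovering above the floor

def rasterize_gbuffer():
    # Stand-in for the raster pass: per pixel, the world-space point that is
    # directly visible (here, just a point on the floor z = 0).
    gbuffer = {}
    for py in range(HEIGHT):
        for px in range(WIDTH):
            wx = (px / (WIDTH - 1)) * 8.0 - 4.0
            wy = (py / (HEIGHT - 1)) * 8.0 - 4.0
            gbuffer[(px, py)] = (wx, wy, 0.0)
    return gbuffer

def shadow_ray_blocked(origin, target):
    # Classic ray/sphere intersection along the segment origin -> target.
    dx, dy, dz = (target[i] - origin[i] for i in range(3))
    dist = math.sqrt(dx * dx + dy * dy + dz * dz)
    dx, dy, dz = dx / dist, dy / dist, dz / dist
    ox, oy, oz = (origin[i] - SPHERE_C[i] for i in range(3))
    b = ox * dx + oy * dy + oz * dz
    c = ox * ox + oy * oy + oz * oz - SPHERE_R * SPHERE_R
    disc = b * b - c
    if disc < 0.0:
        return False
    t = -b - math.sqrt(disc)
    return 1e-4 < t < dist              # blocked if the hit lies before the light

def trace_shadows(gbuffer):
    # The ray-traced half of the hybrid: one shadow ray per visible point.
    return {p: shadow_ray_blocked(pos, LIGHT) for p, pos in gbuffer.items()}

shadow = trace_shadows(rasterize_gbuffer())
for py in range(HEIGHT):
    print("".join("." if shadow[(px, py)] else "#" for px in range(WIDTH)))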
Chris wrote: ↑19 Aug 2018, 04:18
I remember one QuakeCon where John Carmack mentioned some tricks ray-tracing would employ to get good-looking lighting in pre-rendered videos, by 'focusing' higher-order bounces toward known light sources, because even if you're not generating frames in real-time, the less time you spend on a frame, the less money needs to be spent on it (making 4 videos a month with some small compromises is generally considered better than making 1 video a month with no compromises).

This is something I didn't know about until now, but it makes sense. I think the video touches on this, as part of Turing's significance is its hardware-accelerated neural network stuff. It reminds me of the neural network tools for upscaling textures that I tried out a few months ago: fill in the missing information more intelligently. It's fun to watch this stuff mature! Just wish it would happen faster.

Chris wrote: ↑19 Aug 2018, 04:18
Ray-tracing has been technically possible for a long time, but it requires either slower-than-real-time rendering (having predetermined scenes and camera angles can also help avoid exposing visual oddities) or not looking as good as the method promises. It scales really poorly with resolution too, as each new row of pixels to render is a whole new row of rays that need processing. There are also issues with aliasing as a result of each ray being a discrete spatial sample (generally requiring something akin to SSAA to properly smooth out). Even today, tricks need to be employed to deal with the processing costs; for instance, I somewhat recently saw a video about using machine learning to apply a post-process that cleans up real-time ray-traced frames based on what it knows about how the scene should look. And even then you could still see some artifacts.
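To put rough numbers on that scaling (illustrative assumptions, not measurements from any real renderer): one ray per pixel grows linearly with pixel count, the SSAA-style supersampling needed to hide the hard sample edges multiplies it, and secondary bounces multiply it again. The second half shows why the supersampling is needed at all: a single jittered sample per pixel lands on one side of an edge or the other, while averaging many samples gives the in-between coverage values that read as a smooth edge.

import random

# Rough ray-count arithmetic: resolution x samples-per-pixel x bounces.
# The 4x SSAA and "2 secondary rays per sample" figures are assumptions
# for illustration only.
def rays_per_frame(width, height, samples_per_pixel=1, secondary_per_sample=0):
    return width * height * samples_per_pixel * (1 + secondary_per_sample)

for w, h in [(1280, 720), (1920, 1080), (3840, 2160)]:
    print(f"{w}x{h}: {rays_per_frame(w, h):>11,} primary only, "
          f"{rays_per_frame(w, h, 4):>11,} with 4x SSAA, "
          f"{rays_per_frame(w, h, 4, 2):>12,} adding 2 bounces per sample")

# Why the supersampling is needed: each ray is a yes/no spatial sample, so a
# pixel straddling an edge (here the line x = 4.5) is either fully "hit" or
# fully "miss" with one sample, but averages to partial coverage with many.
random.seed(1)

def coverage(pixel_x, samples):
    hits = sum(1 for _ in range(samples) if pixel_x + random.random() < 4.5)
    return hits / samples

print([coverage(x, 1) for x in range(9)])              # hard staircase at the edge
print([round(coverage(x, 64), 2) for x in range(9)])   # smoothed partial coverage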