Any engineer worth their salt working in AR/VR/XR, product design or simulation should become familiar with ray tracing. It is one of the most significant advances in graphics since the beginning of 3D itself, and it is about to move from the rarefied world of movies and advertising into embedded categories such as mobile, wearables and automotive, as new, more efficient ways of processing ray tracing come to market. Kristof Beets, Senior Director of Product Management at Imagination Technologies, explains.
If you look at any 3D scene, the level of realism depends greatly on the lighting. In traditional graphics rendering (rasterisation), light maps and shadow maps are pre-calculated and then applied to the scene to approximate how it should look. While great-looking results can be achieved, these techniques are always limited by the fact that they merely simulate light. Ray tracing instead mimics how light behaves in the real world, producing physically accurate reflections, transparency, illumination and materials.
In real life, a beam of light travels from its source onto an object. The light interacts with that object and, depending on the surface properties of that object, bounces onto another surface. The light then carries on bouncing around, creating light and shade.
In ray tracing on a computer, or more accurately ‘path tracing’, the process is reversed from how light travels in the real world. Rays are virtually emitted from the camera viewpoint into the scene, and algorithms calculate how the light would interact with each surface it hits, depending on that surface’s properties. Each ray’s path is then traced from object to object back to the light source. The result is a scene lit just as it would be by the sun in the real world: with realistic reflections and shadows.
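The backward-tracing loop described above can be sketched in a few lines of Python. This is a toy illustration, not a production renderer: the scene (a single sphere), the camera and the light position are all illustrative assumptions, and a real ray tracer would trace many rays per pixel, in full colour, with recursive bounces. Here, one primary ray is traced from the camera; if it hits the sphere, a secondary “shadow ray” is cast toward the light to decide whether the hit point is lit or in shadow.

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def normalize(v):
    n = math.sqrt(dot(v, v))
    return tuple(x / n for x in v)

def intersect_sphere(origin, direction, centre, radius):
    """Return the distance to the nearest hit, or None if the ray misses."""
    oc = sub(origin, centre)
    b = 2.0 * dot(oc, direction)
    c = dot(oc, oc) - radius * radius
    disc = b * b - 4.0 * c          # direction is unit length, so a == 1
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0 else None

def shade(camera, direction, sphere, light_pos):
    """Trace one primary ray from the camera; on a hit, trace a
    shadow ray toward the light (toy scene: a single sphere)."""
    centre, radius = sphere
    d = normalize(direction)
    t = intersect_sphere(camera, d, centre, radius)
    if t is None:
        return "background"         # the ray escaped the scene
    hit = tuple(camera[i] + t * d[i] for i in range(3))
    normal = normalize(sub(hit, centre))
    # Offset slightly along the normal so the shadow ray doesn't
    # immediately re-hit the surface it started on.
    origin = tuple(hit[i] + 1e-4 * normal[i] for i in range(3))
    to_light = normalize(sub(light_pos, origin))
    if intersect_sphere(origin, to_light, centre, radius) is not None:
        return "shadowed"            # something blocks the light
    # Lit: simple Lambertian term, brightness follows the angle to the light
    return max(0.0, dot(normal, to_light))
```

For example, with a sphere at (0, 0, 5) and a light up and to the side, a ray straight down the z-axis hits the sphere and returns a lit brightness; move the light directly behind the sphere and the same hit point is reported as shadowed, exactly the effect that pre-baked shadow maps can only approximate.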
Traditionally it hasn’t been possible to do this in real-time on embedded devices (or even high-end workstations) because the computational load was too high.
While ray tracing in gaming is very new, many people are familiar with it from animated 3D movies. The light reflecting in the rain puddles in the opening shot of Toy Story 4 is one recent example. However, those scenes take months to render on dedicated server farms, which doesn’t work for games: frames must be generated in real-time at a minimum of 30 frames per second, and preferably double that or more.
Real-time ray tracing in gaming hasn’t been possible before because of the huge computational cost involved, but this has changed thanks to hybrid approaches that combine the speed of rasterisation with the visual accuracy of ray tracing. How great would it be to have ray traced games on your mobile? That is something that will become a reality in the not-too-distant future.
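The idea behind such hybrid approaches can be sketched simply: keep the fast raster pass for primary visibility, and spend expensive rays only where they pay off, such as on reflective surfaces. The sketch below is an illustrative assumption of that per-pixel dispatch, not any vendor’s API: the `pixel` dictionary stands in for the rasteriser’s G-buffer output, and `trace_reflection` stands in for a real ray-traced lookup.

```python
def hybrid_shade(pixel, trace_reflection):
    """Shade one G-buffer pixel (hypothetical layout): start from the
    cheap rasterised colour, and add a ray-traced reflection only
    where the material is reflective."""
    colour = pixel["albedo"]            # cheap: produced by the raster pass
    k = pixel["reflectivity"]
    if k > 0.0:                         # expensive: trace only when needed
        refl = trace_reflection(pixel["position"], pixel["normal"])
        colour = tuple((1.0 - k) * c + k * r for c, r in zip(colour, refl))
    return colour

# Stand-in for a real ray-traced query: every reflection sees the sky.
def sky_reflection(position, normal):
    return (0.5, 0.7, 1.0)

matte = {"albedo": (0.8, 0.2, 0.2), "reflectivity": 0.0,
         "position": (0, 0, 0), "normal": (0, 1, 0)}
mirror = {"albedo": (0.1, 0.1, 0.1), "reflectivity": 1.0,
          "position": (0, 0, 0), "normal": (0, 1, 0)}
```

The matte pixel keeps its rasterised colour untouched while the mirror pixel takes the ray-traced result, which is why the hybrid scheme costs so much less than tracing every pixel: most of a typical frame never needs a ray at all.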
One issue is that while real-time ray tracing solutions for mobile have been around for some time, there hasn’t been an ecosystem to support them – but this is changing. In 2018 NVIDIA released hybrid real-time ray tracing capable hardware for the desktop PC market, aimed particularly at gamers. However, even NVIDIA didn’t have games at launch, highlighting just how difficult it is to create an ecosystem for a new technology. Today, however, many game developers are on board, such as Bethesda and Unity, and next-generation games consoles in 2020 will include some ray tracing features. Not long after that, ray tracing will start to appear in other markets too, such as AR/VR.
The upshot is that with ray tracing back on the agenda and people starting to experience it on their PCs, they are going to want it, and indeed expect it, on their mobile devices, VR headsets and consoles. Therefore, to keep up, engineers need to start familiarising themselves with the latest advancements in this industry-changing technology.