Photorealism – the future of video game visuals

Imagine looking into the eyes of a video game character and knowing that they have lied to you, or that they’re scared, or that they love you.

Right now, even with the astonishing power of current multi-core processors and graphics chipsets, the people we encounter in visually beautiful games like Far Cry 4, Assassin’s Creed: Unity and Tomb Raider lack something in their faces, some spark of humanity. The phenomenon has a well-known name, the Uncanny Valley, coined by robotics professor Masahiro Mori. His hypothesis, first put forward in 1970, was that as human reproductions get closer to authenticity, the tiny inaccuracies become increasingly disturbing. Video game characters look so real, but not real enough, and we recoil from them.

Video game worlds are similarly abstracted. The city of Los Santos in Grand Theft Auto V; the bustling Paris of Assassin’s Creed: Unity … all the surface details are there, but these are just virtual film sets. Most of the doors are locked, and if you point GTA’s most powerful rocket launcher at any building, the explosive impact will do no damage at all. The computational cost of simulating collapsing masonry is huge.

The problems of reality

These are the challenges facing the makers of video game graphics cards, the chunks of complex hardware that slot into your computer or console, and that work alongside the central processing unit to produce those breathtaking visuals. And things are changing. “Game developers are increasingly interested in highly simulated and dynamic worlds that allow players to create, destroy, and interact with the environment in interesting ways,” says Tony Tamasi, senior vice president of content and technology at graphics hardware specialist Nvidia. “By having more dynamic experiences, or simulation driven design, the worlds become more alive and involving, without necessarily having to be carefully – and expensively – modelled, animated and scripted.”

Instead of constructing environments as glorified 3D film sets, game developers are now building them as groups of simulated physical objects that react to player actions. There have been moves in this direction for several years: titles like Red Faction, Far Cry, Crysis and Breach have all featured destructible scenery, but it has been tightly controlled and isolated to specific areas. Even Battlefield 4’s visually impressive Levolution system, which creates authentic moments of massive scenic destruction, operates only with certain buildings. However, forthcoming titles such as Microsoft’s reboot of the open-world action adventure Crackdown and the multiplayer tactical shooter Rainbow Six Siege will both feature large-scale destruction that completely alters the environment for players.
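One common way to decide what should collapse (a sketch of the general technique, not necessarily how Crackdown or Rainbow Six Siege implement it) is to treat the scenery as a graph of connected blocks: when blocks are destroyed, anything no longer linked back to the ground loses its support and falls.

```python
from collections import deque

def find_falling(adjacency, grounded, destroyed):
    """Return the blocks no longer connected to a grounded block
    once the `destroyed` blocks are removed."""
    standing = set(b for b in grounded if b not in destroyed)
    queue = deque(standing)
    while queue:
        block = queue.popleft()
        for neighbour in adjacency.get(block, ()):
            if neighbour not in destroyed and neighbour not in standing:
                standing.add(neighbour)
                queue.append(neighbour)
    return set(adjacency) - destroyed - standing

# A three-storey wall: the ground floor supports the first floor,
# which in turn supports the roof.
wall = {"ground": {"first"}, "first": {"ground", "roof"}, "roof": {"first"}}
# Blowing out the first floor leaves the roof unsupported.
print(find_falling(wall, grounded={"ground"}, destroyed={"first"}))  # {'roof'}
```

Real engines then hand the detached pieces to the rigid-body solver; the connectivity test itself is cheap enough to run whenever something is damaged.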

This has been made possible by advances in real-time physics calculations from both graphics card manufacturers and the coders behind advanced 3D game engines like CryEngine and Unreal Engine 4. We’re heading into a new era of physically based rendering, where the properties of individual textiles and materials are simulated to produce natural, reactive environments. “We’ve had to develop GPU-friendly, highly parallel physical simulation algorithms, which in many cases has required us to rethink algorithms at a fundamental level,” says Tamasi. “We’re at the point where these things can produce truly involving experiences. The Batman and Borderlands franchises have really taken this capability to heart.”

Switching on the lights

Another key part of this move toward naturalistic simulated environments is lighting. For many years, light and shadow were “baked” into video game scenes, meaning they were precomputed and painted onto the landscape by artists. Now, games feature realistic physical light sources that simulate the interaction between light rays and objects. Characters cast shadows as they pass under street lamps; flames glint off metal swords. The holy grail is “global illumination”, essentially a system that accurately simulates how light reflects and refracts between different surfaces, creating an array of indirect lighting sources. This is common in animated movies, but film studios can render every scene in advance, using whole farms of powerful computers; games have to render every moment in real time on a single PC or console.
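The direct part of that calculation is cheap: for a matte (Lambertian) surface it is essentially one dot product per light. What global illumination adds is the indirect contribution – light that arrives via other surfaces. A toy sketch, with illustrative albedo and transfer values that are not from the article:

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def lambert(normal, to_light, light_intensity):
    """Direct diffuse lighting: brightness falls off with the angle to the light."""
    return light_intensity * max(0.0, dot(normal, to_light))

# Direct light on a floor lit from straight above...
floor_direct = lambert((0, 1, 0), (0, 1, 0), 1.0)   # fully lit
# ...and on a wall that the same overhead light only grazes.
wall_direct = lambert((1, 0, 0), (0, 1, 0), 1.0)    # receives nothing directly
# One indirect bounce: the wall picks up a fraction of the light the floor
# reflects towards it (albedo x direct light x a mock transfer factor).
albedo, transfer = 0.6, 0.3
wall_total = wall_direct + albedo * floor_direct * transfer
print(wall_total)  # ~0.18: the wall is lit only by light bounced off the floor
```

Without that second term the wall would be pitch black – which is exactly how pre-global-illumination games looked in unlit corners.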

“We’ve made amazing advances in real time rendering capability over the years,” says Tamasi. “Shadows have gone from simple blobs, to hard-edged cast shadows, to soft-edged shadow maps, to a form of global illumination called ambient occlusion. Many films with extensive computer graphics work use advanced forms of rendering called path tracing. These simulate the actual light propagation, shooting billions of rays into the scene, bouncing them around, and physically simulating that light’s interaction with a surface. These techniques can produce great results, but are incredibly computationally expensive. They can be done on GPUs though; Nvidia has Iray, a physically correct, photo-realistic rendering solution, and OptiX, a ray-tracing engine elevating you to a new level of interactive realism.”
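At its core, the path tracing Tamasi describes is a Monte Carlo method: fire random rays, let them bounce, and average the energy they gather. A deliberately tiny, seeded sketch – a uniformly glowing grey enclosure with albedo 0.5, not any real renderer – shows why the bounces matter: the converged answer, emission / (1 − albedo), is double what direct light alone gives.

```python
import random

def trace_path(emission, albedo, rng):
    """Follow one light path: gather emission at each bounce, and keep
    bouncing with probability `albedo` (Russian-roulette termination
    keeps the estimator unbiased)."""
    radiance = emission
    while rng.random() < albedo:   # the path survives to another bounce
        radiance += emission
    return radiance

rng = random.Random(42)            # seeded so the estimate is repeatable
emission, albedo = 1.0, 0.5
samples = [trace_path(emission, albedo, rng) for _ in range(100_000)]
estimate = sum(samples) / len(samples)
exact = emission / (1 - albedo)    # geometric series over all bounces = 2.0
print(round(estimate, 2), exact)
```

The expense Tamasi mentions comes from needing millions of such paths per frame – each one, in a real scene, involving ray–geometry intersection tests rather than a one-line loop.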

We’ll see the fruits of these advances throughout 2015. The Witcher 3, Bloodborne and Project CARS all feature complex real-time lighting and shadow effects, and promise to innovate on elements such as dynamic weather, day/night cycles and subtle particle effects. “Imagine truly simulation-based volumetric dust clouds, kicked up by a landing chopper, with player characters moving through those swirls of dust, interacting with them,” says Tamasi. “Or characters blowing holes in the walls of a castle, allowing the sunlight to stream in from outside, bouncing from the floor, to the walls, to the drapes.”

The human question

But there is still that big question hovering over the inhabitants of those environments. While the faces of Joel and Ellie in The Last of Us, and Kevin Spacey’s digital double in Call of Duty: Advanced Warfare, have moments of genuine humanity, there is still a haunting dislocation. A gap.

Nvidia sees it. “Skin is a computationally hard problem,” says Phil Scott, Nvidia’s lead technical evangelist in Europe. “Skin will accept light from the environment and then the light scatters around within the first few millimetres of flesh. Then it reemerges, coloured by what it has encountered. If I was to hold my hand up to the light, you’ll see a red glow around the edges: in games, we’re really getting into those important subtleties now.”
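A common real-time approximation of the effect Scott describes is a diffusion profile: blur the incoming light across the skin, with red scattering further through flesh than green or blue. A toy one-dimensional version, with made-up scatter widths rather than measured skin data, reproduces the reddish glow at a shadow edge:

```python
import math

def gaussian_blur(signal, sigma):
    """1D Gaussian blur - a stand-in for light diffusing through skin."""
    radius = int(3 * sigma)
    weights = [math.exp(-0.5 * (i / sigma) ** 2) for i in range(-radius, radius + 1)]
    total = sum(weights)
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, w in enumerate(weights, start=-radius):
            k = min(max(i + j, 0), len(signal) - 1)   # clamp at the edges
            acc += w * signal[k]
        out.append(acc / total)
    return out

# A hard lighting edge: lit on the left, in shadow on the right.
incident = [1.0] * 20 + [0.0] * 20
# Red light scatters further beneath the surface than blue (illustrative sigmas).
red = gaussian_blur(incident, sigma=4.0)
blue = gaussian_blur(incident, sigma=1.0)
# Just inside the shadow, scattered red outweighs blue - the red glow
# Scott sees around the edge of a backlit hand.
print(red[24] > blue[24])  # True
```

Production skin shaders do the same thing with sums of several Gaussians per colour channel, applied across the rendered face rather than a 1D strip.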

Hair, too, has always been a problem – rendering thousands of strands to create something that doesn’t look like a plastic-moulded wig has taxed many development teams. AMD, another graphics card manufacturer and Nvidia’s main rival in the space, has developed its TressFX technology to help solve the problem – it produced Lara Croft’s convincingly bouncy ponytail in the Tomb Raider reboot.
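Hair systems of this kind typically simulate each strand as a chain of particles: a Verlet integration step for motion, then a few passes that pull each segment back to its rest length. A minimal single-strand sketch of that general approach (illustrative numbers, not AMD’s actual code):

```python
def step_strand(points, prev_points, gravity=-0.1, iterations=5, rest=1.0):
    """One Verlet step for a hair strand; point 0 stays pinned to the scalp."""
    new = []
    for i, ((x, y), (px, py)) in enumerate(zip(points, prev_points)):
        if i == 0:
            new.append((x, y))                          # root never moves
        else:
            new.append((2 * x - px, 2 * y - py + gravity))  # inertia + gravity
    for _ in range(iterations):                          # enforce segment lengths
        for i in range(1, len(new)):
            (ax, ay), (bx, by) = new[i - 1], new[i]
            dx, dy = bx - ax, by - ay
            dist = (dx * dx + dy * dy) ** 0.5 or rest
            scale = (dist - rest) / dist
            if i == 1:                                   # don't drag the root
                new[i] = (bx - dx * scale, by - dy * scale)
            else:
                new[i - 1] = (ax + 0.5 * dx * scale, ay + 0.5 * dy * scale)
                new[i] = (bx - 0.5 * dx * scale, by - 0.5 * dy * scale)
    return new, points

# A strand sticking straight out sideways; gravity makes it swing down.
strand = [(float(i), 0.0) for i in range(4)]
cur, prev = strand, strand
for _ in range(10):
    cur, prev = step_strand(cur, prev)
print(cur[-1][1] < -0.5)   # True: the free end has swung downward
```

The real systems run tens of thousands of these chains in parallel on the GPU, plus collision against the head and shoulders – but the per-strand maths is this simple.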

It seems like peripheral stuff, but it isn’t. “If video games are to tell more emotional stories then this is important,” says Scott. “Better skin makes characters more believable; and the eyes are the gateway to the soul. As humans we look at each other in the eyes, we communicate through the eyes; so even though they take a tiny portion of the screen space, developers need to spend a lot of rendering effort on making them realistic.”

Jon Jansen, Nvidia’s developer technology lead for Europe, picks up the theme. “Eyes are so complex because of the way they interact with light,” he says. “There are a lot of internal reflections, because it’s a lens system. Look at the way that the automotive industry tries to render headlights in their simulations – that’s a real test for ray tracing because you have so many internal reflections. There’s something of that going on with eyes.

“There’s also the issue of sub-surface scattering of light within the cornea, because it is not a perfectly opaque surface. There are all these subtle, physical phenomena going on in the eye, and if you devote time to getting that right, you’re rewarded – you can get stunning results.”
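The surface reflections Jansen describes are governed by the Fresnel equations; real-time renderers almost always use Schlick’s cheap approximation instead. A sketch using the human cornea’s refractive index (about 1.376) shows why eyes glint so strongly at glancing angles:

```python
def schlick_reflectance(cos_theta, n1=1.0, n2=1.376):
    """Schlick's approximation to Fresnel reflectance at an air-cornea
    boundary; cos_theta is the cosine of the viewing angle."""
    r0 = ((n1 - n2) / (n1 + n2)) ** 2    # reflectance looking head-on
    return r0 + (1 - r0) * (1 - cos_theta) ** 5

# Looking straight into the eye, only ~2.5% of light reflects off the surface...
head_on = schlick_reflectance(1.0)
# ...but at a glancing angle the cornea becomes almost mirror-like.
glancing = schlick_reflectance(0.1)
print(f"{head_on:.3f} {glancing:.3f}")  # 0.025 0.601
```

The rest of the light enters the eye, which is where the sub-surface scattering and internal reflections he mentions take over.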

Emotions as game design

Of course, destructible worlds with realistic physical properties, and beautiful human characters with doleful eyes, won’t make bad games good or boring stories exciting, but they may allow developers to explore more subtle narratives. Rockstar’s 2011 crime drama L.A. Noire, which made extensive use of facial motion capture, asked players to work out whether suspects were lying during interrogation by reading their expressions and body language. The emotional fidelity wasn’t quite there, but it could be in a few years’ time. Late last year, Supermassive Games released a demo of its upcoming Until Dawn, showing off its impressive use of contextual facial animation.

Maybe we will one day look into the physically rendered eyes of game characters, and understand, from the way light is glinting off their corneas, that they are upset or desperate. Maybe emotional intelligence will one day be as important a game skill as hand-eye coordination.

But will we ever be at the stage where we mistake game environments, and game characters, for the real thing? “We’re a long way from photorealism in real-time rendering,” says Tamasi. “We’re just getting to the point in movies where limited amounts of human CGI passes [as photo-realistic] for most audiences. Those are movies with effectively infinite render times per frame, and artists carefully scanning, crafting and animating a single performance.

“Real-time graphics is probably a decade or more behind film in terms of what can be conveyed visually, and film isn’t ‘done’ yet either. I’d say we’ve got decades more innovation to come.”
