A couple of days ago, we had the pleasure of interviewing Crytek’s Technical Director, Rok Erjavec. We talked with Rok about DX12, Vulkan, Async Compute, physically-based rendering, the Global Illumination solution of CRYENGINE, the possibility of a full ray-tracing lighting system, and the future of Crytek’s engine. Enjoy the interview after the jump!
DSOGaming: Before we begin, please introduce yourselves to our readers.
Rok Erjavec: Hi, I’m Rok Erjavec, Technical Director at Crytek. I’ve been in the games industry since 1999 and have worked on games spanning PlayStation 2, PlayStation 3, and PC. Currently I’m overseeing our game development teams in Frankfurt for the VR titles The Climb and Robinson: The Journey, as well as the previously-announced HUNT.
DSOGaming: A lot of engines already support DX12, and CRYENGINE is one of them. Have you experimented with this new API and what are the performance benefits of it? What is your general opinion on DX12?
Rok Erjavec: Yes, the engine team at Crytek actually started work on the DX12 code-path sometime mid-last year, and we’re already seeing various benefits from this work in the recent release. The key aspect of working with lower-level APIs is explicit developer control over the rendering pipeline, which allows for better utilization of multi-processor resources, reduces the traditionally expensive draw-call and driver overhead on PC platforms, and opens up possibilities for new workflows that were not viable with higher-abstraction APIs. One of the big differences from a game-development perspective is that it brings the PC graphics pipeline closer to what we have already been able to leverage on consoles with their close-to-hardware APIs, effectively allowing us to change how we approach cross-platform development.
DSOGaming: The first DX12 games that have been released on the PC were a bit underwhelming (in terms of performance and visuals compared to DX11). Is that normal? When can PC gamers expect to see games that will take advantage of DX12?
Rok Erjavec: As with every new technology stack, there’s a learning curve that we’re currently seeing and working through. DX10-11 pipelines have had the benefit of multiple years of focused driver optimization and work-arounds specific to their workflow, and the paradigm shift that comes with DX12 effectively erases some of the benefits of this legacy.
In conjunction with the fact that the early DX12 titles have effectively been ports of DX11-optimized assets and codebases, we are not yet seeing software that really targets DX12 strengths, and in some ways, teams have to specifically re-learn how to fix things that used to be the responsibility of driver development teams in DX11. The situation will continue to improve throughout 2016, and as more titles shift toward targeting low-level APIs from the start of development, the differences and benefits will become more obvious.
DSOGaming: Asynchronous Compute is a feature that has been in the spotlight since the announcement of DX12. Have you experimented with it and how would this feature benefit future games?
Rok Erjavec: Currently we’re using Asynchronous Compute in VR development, where a technique known as Async-Timewarp/Reprojection is used by VR APIs to reduce latency and offset the negative effect of occasional frame drops. We’re also in the process of evaluating and experimenting with other uses for it. The possibilities are interesting, but at least for the time being, it also introduces non-trivial added complexity in PC space due to lack of standardization and the relative immaturity of the hardware that supports it, where many GPUs come with sometimes problematic restrictions on pre-emption granularity, etc.
DSOGaming: Apart from the performance boosts, what visual improvements can PC gamers expect from DX12 games?
Rok Erjavec: I would say this is less about a specific list of new features and more about possibilities that are opened up when working with closer access to the hardware. Certainly for many teams this can just mean leveraging performance gains – more densely populated environments with more dynamic and interactive elements. But I have no doubt we will see a lot more novel approaches and deviations from established rendering pipelines, something that has already been happening to an extent.
There’s a wealth of academic research that was ill-suited to practical application through a combination of hardware and software abstraction limitations. With the latter getting out of the way, and hardware flexibility continually improving – more than things just getting iteratively “prettier” – we’re looking at a rise in experimentation again. In a way, the industry has circled back to a place similar to where we were in the early-mid 90s, when software rasterization didn’t really impose any restraints on creativity beyond the 17ms barrier.
DSOGaming: What’s your opinion on UWP? Do you see it as something that will limit your future audience in case Microsoft locks DX12 behind it?
Rok Erjavec: UWP as a platform offers interesting possibilities, including the potential to expand beyond traditional PC audiences depending on how Microsoft’s execution follows through. While DX12 currently reaches only a portion of the PC user base, it’s already shared with other devices like Xbox One.
DSOGaming: Does CRYENGINE support Vulkan and have you experimented with it? If you had to choose one, would you go with DX12 or Vulkan?
Rok Erjavec: The current trajectory of Vulkan provides a path to reach PC users across all the widely used OS platforms, including legacy Windows versions, as well as mobile devices, so if I was shipping a title in 2017 or beyond, Vulkan looks like an appealing choice.
If we implemented Vulkan in CRYENGINE, we wouldn’t have to choose one, as titles built with our tech would work seamlessly with both, and thus leave this choice with the users instead. 😉
DSOGaming: Physically-based rendering is the next big ‘graphical’ thing. Can you explain the benefits of using this rendering technique to both our advanced-tech readers and the average Joe?
Rok Erjavec: PBR brings benefits on multiple sides – for development, it’s a workflow that allows our content-creation teams to utilize real-world data directly, with real-time output matching the behaviors and properties of real-world materials, light sources, etc. This removes a big part of the guesswork from artists’ hands and also allows them to directly adopt things from other industry domains (like film/offline rendering) and to do so more cost-effectively. For the end-users, obviously this leads to content that resembles photo-realism more than ever, which has been one of the big driving forces behind real-time rendering research pretty much since the beginning.
To go a bit more in-depth, the driving principle behind PBR is that the entire pipeline (rendering and asset creation workflow) respects energy conservation. This takes away a lot of the “cheats” that artists used in the past, where values could be used in “any” range as long as the output looked “good” to the artist’s eyes – but this came with the side effect that most of the time, such hacks would only work in a specific lighting environment or with specific camera views. A classic example of this is manual control over Fresnel – which dictates the amount of reflected light based on the viewer angle. In PBR this is a fixed property of real materials and no longer at the discretion of the artist. The benefit is that with access to a database of these values for real materials, it can be as simple as looking at a real object to decide whether the material suits your needs or not.
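To illustrate the Fresnel behavior Rok describes, here is a minimal sketch of Schlick’s approximation, the formulation commonly used in real-time PBR pipelines (this is a generic textbook example, not CRYENGINE’s actual shader code; the function name and default values are our own):

```python
def fresnel_schlick(cos_theta: float, f0: float) -> float:
    """Schlick's approximation of Fresnel reflectance.

    f0 is the reflectance at normal incidence -- in a PBR pipeline this is
    a measured property of the material (roughly 0.04 for most dielectrics,
    much higher for metals), not an artist-tweaked value.
    cos_theta is the cosine of the angle between view direction and normal.
    """
    return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5

# Viewing a surface head-on, reflectance is just the material's f0:
head_on = fresnel_schlick(1.0, 0.04)   # -> 0.04
# At grazing angles, every material approaches a perfect mirror:
grazing = fresnel_schlick(0.0, 0.04)   # -> 1.0
```

Because the curve is fully determined by `f0`, an artist only needs the measured normal-incidence value for a material – the angle-dependent behavior follows automatically, which is exactly the “no longer at the discretion of the artist” point above.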
I’d add here that PBR is not always the correct answer for computer graphics – not every visual style needs to be physically based, or physically correct for that matter, but there’s a clear benefit to making this a binary choice rather than spending the rest of development trying to emulate it.
DSOGaming: Let’s talk about ray-tracing. Does CRYENGINE support ray-tracing? How far away are we from a fully real-time ray-tracing (or path-tracing) lighting system in video games?
Rok Erjavec: While various subsystems use a variety of ray-marching and ray-casting algorithms, and we can technically render the entire scene that way, CRYENGINE does not support ray-tracing in the traditional definition of the term. For the second part of the question: a big part of the answer is actually how we define having achieved “a fully real-time ray-tracing lighting system in video games.” If we define it as “the majority of games are using it,” it is still a very long way away, in part because the majority of games made today are not even targeting high-end visuals. If we define it as “some games are using it,” that has already happened, even though the count is very limited so far.
What generally gets glossed over in this discussion is that with the decades of research that have gone into improving real-time rasterization, the quality bar for alternate approaches is already quite high. We could ship a reasonably complex modern game ray-traced “today” – if we were willing to make quality concessions on the end result.
So the question really should be, at what point will we achieve performance that allows real-time ray-tracing to generate visually competitive results in comparison to traditional rasterization? The answer to this will somewhat depend on adoption of new emerging technologies like VR, where latency dominance forces us to avoid certain rendering approaches all over again, as well as the direction in which graphics hardware evolves. But I would wager a guess that the time where we could hope for mass adoption of real-time ray-tracing (especially against traditionally built polygonal assets) is still at least a decade away.
With all this said, progress is iterative, and using a variety of acceleration structures, more and more techniques are now at least partly in the domain of tracing rays. Our new Global Illumination solution using voxel tracing, for instance, is a great example of using it to get a part of the solution.
DSOGaming: Photogrammetry is a technique that has impressed everyone these past few years. Does CRYENGINE support it and what’s your opinion on it?
Rok Erjavec: We’re currently using photogrammetry in all of our announced titles (The Climb, Robinson, and HUNT) so yes, we fully support it in CRYENGINE. Like PBR, it’s been a great addition for our art-teams to produce higher quality assets more effectively. Also like PBR, a big part of what makes photogrammetry work is adopting the right workflow, as well as having the right capture equipment.
DSOGaming: Can you share more tech details about CRYENGINE’s Global Illumination solution?
Rok Erjavec: As of last year, we have a new GI solution available that uses voxel tracing to light the scene, which was first released as an experimental feature sometime in summer 2015. This algorithm works in two steps, an async-job that continually voxelizes relevant scene geometry, and the actual lighting stage which traces rays against the generated voxel data to compute the light bounces. Without going into too much detail (you can read more on the CRYENGINE documentation site), default configuration provides AO and indirect light contribution – without the usual quality compromises that come with screen-space methods, and still with performance usable with midrange graphics hardware, including the current generation of game consoles.
Having this data set also opens additional possibilities – we can add traced-reflections and cone-traced shadows, which depending on the use-case, complement or even supplant existing real-time shadow and reflection methods.
As with everything in CRYENGINE, this is real-time with no reliance on pre-computed data – artists can switch between the old and new systems at will, giving them a quick overview of how things change when using it, and it also greatly helps with isolating and fixing any issues that only arise in practical production use of the technique.
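The two-step flow Rok describes – voxelize the scene, then trace rays against the voxel data – can be sketched in miniature as follows. This is purely illustrative: CRYENGINE’s implementation runs on the GPU with cone tracing against a sparse voxel structure, while here we use a dense boolean grid and a naive ray march, and all names are our own:

```python
GRID = 16  # resolution of the toy voxel volume per axis

def voxelize(occupied_cells):
    """Step 1: bake scene geometry into an occupancy grid.

    'occupied_cells' is simplified to a list of voxel coordinates;
    a real voxelizer would rasterize triangles into the volume,
    and CRYENGINE does this continually in an async job.
    """
    grid = [[[False] * GRID for _ in range(GRID)] for _ in range(GRID)]
    for (x, y, z) in occupied_cells:
        grid[x][y][z] = True
    return grid

def trace_occlusion(grid, origin, direction, max_steps=GRID * 2):
    """Step 2: march a ray through the grid, returning 0..1 occlusion.

    A lighting pass would fire several such rays (or cones of growing
    radius) per shading point and use the result for AO / bounce light.
    """
    x, y, z = origin
    dx, dy, dz = direction
    for _ in range(max_steps):
        x, y, z = x + dx, y + dy, z + dz
        ix, iy, iz = int(x), int(y), int(z)
        if not (0 <= ix < GRID and 0 <= iy < GRID and 0 <= iz < GRID):
            return 0.0  # ray escaped the volume: unoccluded
        if grid[ix][iy][iz]:
            return 1.0  # hit an occupied voxel: occluded along this ray
    return 0.0
```

For example, with a single occupied voxel at (8, 8, 8), a ray marched from (8.0, 8.0, 0.0) in direction (0, 0, 1) reports occlusion 1.0, while the opposite direction leaves the volume immediately and reports 0.0. Averaging many such rays over a hemisphere is, in essence, how the voxel data yields ambient occlusion and indirect light.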
DSOGaming: Let’s talk about the future of CRYENGINE. What are the biggest future tech features that PC gamers can expect from CRYENGINE (apart from VR)?
Rok Erjavec: The team is always hard at work on new improvements, from optimizations to feature requests, and of course a big part of our focus has been to improve accessibility of the tech as well, especially now that it’s been made available to everyone. Unfortunately, I can’t currently share much in the way of details regarding the most recent roadmap developments, but we have a number of initiatives under way that aim to raise the bar in graphic and animation fidelity, among other things.
DSOGaming: Pop-in of objects is an issue that has not been addressed yet. While some games have higher LOD levels, every game – more or less – suffers from it. What’s really puzzling us is that while dynamic tessellation basically solves this problem by varying the level of detail on the fly, no one has ever used it. What’s your opinion on dynamic tessellation and have you experimented with it in order to eliminate object/environmental pop-ins?
Rok Erjavec: Well – to begin with, dynamic tessellation is a single term that encapsulates a long list of different algorithms and methods, all of which work on a subset of the problem domain we’re trying to address. We’ve actually had access to hardware-accelerated tessellation since the early 2000s in some consumer-grade hardware, but most tessellation algorithms and their associated adaptive methods to determine required detail density are highly topology specific and do not work as a general-purpose solution for most in-game assets. This results in the use-cases being selective, and applied in very different ways in a subset of total games released.
The problem is also a little bit analogous to the ray-tracing question – except here we’re not answering the question of only the rendering costs, but also the costs of revamping and retraining the entire art production to work in a pipeline specifically designed for scalable assets. These costs generally greatly outweigh any perceived benefits to image quality.
Therefore, you do see dynamic tessellation used, but commonly only for things where we have good, mature solutions that don’t require big, disruptive workflow changes – like CRYENGINE’s own terrain for instance.
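The kind of mature, non-disruptive use case Rok mentions for terrain typically boils down to picking a subdivision factor from how large an edge appears on screen. The sketch below shows that basic idea under a simple pinhole-projection assumption; every parameter name and default here is illustrative, not taken from CRYENGINE:

```python
import math

def tess_factor(edge_len, distance, fov_deg=60.0, screen_h=1080,
                target_px=8.0, max_factor=64.0):
    """Choose a tessellation factor so each subdivided edge spans roughly
    'target_px' pixels on screen -- the core of distance-adaptive terrain
    tessellation. edge_len and distance are in world units.
    """
    # Approximate the projected edge length in pixels via a pinhole camera.
    half_fov = math.radians(fov_deg) / 2.0
    projected_px = edge_len * screen_h / (2.0 * distance * math.tan(half_fov))
    # Subdivide until pieces are ~target_px, clamped to hardware limits
    # (64 is the Direct3D 11 hull-shader maximum).
    return max(1.0, min(max_factor, projected_px / target_px))
```

A nearby terrain patch (say a 10-unit edge at distance 10) clamps to the maximum factor, while the same edge thousands of units away gets no subdivision at all – which is precisely why terrain, with its uniform grid topology, is such a comfortable fit, while arbitrary game assets are not.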
DSOGaming: A lot of PC gamers rated the first Crysis game as the best in the franchise, and have been asking for a remaster in the latest version of CRYENGINE. Is this something that Crytek considers?
Rok Erjavec: I agree that the first part of the Crysis series is a great game. While we would never rule out going back to it, at the moment we are focused on our current VR projects The Climb and Robinson: The Journey.
DSOGaming: Crytek is currently working on The Climb. Apart from this VR game, can PC gamers expect a new game that will push the graphical boundaries as much as Crysis did back when it was originally released?
Rok Erjavec: We are developing HUNT: Horrors of the Gilded Age for PC at the moment. We will be able to share more information on this game at a later stage, but I can tell you that it looks really impressive already.
DSOGaming: Lately we’ve seen a number of games being downgraded from their E3/GDC showcases. What’s your opinion on this? Should game developers showcase games that are not possible even on high-end PCs?
Rok Erjavec: I think in many cases, the criticism is misplaced. Trade-show demos are often carefully crafted experiences where a highly-polished single area is shown during the demo. It’s easy to blame hardware targets when the final game isn’t quite the same – but what you are shown is a 15-minute part of the game that was built in three to six months. Meanwhile, the rest of the game’s 10+ hours are built over something in the range of 12 to 24 months. In many cases there was no downgrade as such – just the reality of producing much more content in what effectively amounts to (much) less time.
I do find gameplay showcases that aren’t running on actual hardware targets or are entirely pre-recorded a bit pointless, since games are fundamentally about interactive experiences.
DSOGaming: Thank you very much for the interview.