id Software Tech Interview: DX12, Vulkan, Mega-Textures, PBR, Global Illumination & more


A couple of weeks ago, we had the pleasure of interviewing id Software’s Tiago Sousa and Axel Gneiting. We talked with them about some of the key features of id Tech 6, the evolution of their Mega-Textures tech, as well as both DirectX 12 and Vulkan. Enjoy the interview after the jump!

DSOGaming: Before we begin, please introduce yourselves to our readers.

Tiago Sousa: My name is Tiago Sousa, Lead Renderer Programmer at id Software, working on DOOM. Before id, I was at Crytek for over a decade working on CryENGINE and games like Far Cry and Crysis.

Axel Gneiting: Hello, my name is Axel Gneiting and I’ve been working on engine technology at id Software for about one year. Before that I was part of the team that shipped Ryse at Crytek Frankfurt. At id I have been involved in general engine work, optimizing it to run at 60 Hz on both CPU and GPU, and I implemented most of the Vulkan renderer.

DSOGaming: DOOM is the first game powered by id Tech 6, and we have to say that we were really amazed by it. Can you share more details about the key graphical/tech features of id Tech 6?

Tiago: Sure. Besides the renderer, there was a fairly huge refactor / update across the entire engine. On the renderer side, we are now a hybrid forward and deferred renderer, supporting dynamic lighting, reflections, shadows and a decaling solution across all surface types, including transparencies.

High Dynamic Range rendering and physically based shading were also relatively big introductions into idTech 6, also art-wise – if you remember, previous engine iterations were LDR, with less shading happening. Then there are things like advanced post-processing and anti-aliasing.

I guess you could say the key tech “feature” is more how everything is designed to look, perform and scale well at 60+ Hz across all platforms.

DSOGaming: Does DOOM use Mega-texturing? And how were you able to overcome the previous engine limitations regarding dynamic lighting and multi-GPU support that affected all id Tech 5 games?

Tiago: To support dynamic lighting and improve texture quality, we evolved Mega-Textures into a simpler, more performant solution that is more disk friendly and has faster iteration times for art as well. Simple features like tiling were added; a new dynamic decaling solution was also introduced; and texture inputs are now a form of baked G-buffer input – the common PBR inputs like albedo, smoothness, normals and so on.
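
The baked G-buffer idea can be pictured as each virtual-texture page storing the usual PBR channels directly, while tiling simply wraps the UVs before the page lookup. Here is a minimal illustrative sketch (all names hypothetical, not actual idTech code):

```cpp
#include <cassert>
#include <cmath>

// Hypothetical baked per-texel PBR inputs: stored directly in the
// virtual-texture pages instead of being composed at runtime.
struct BakedTexel {
    float albedo[3];   // base color
    float smoothness;  // 1 - roughness
    float normal[3];   // tangent-space normal
};

// Tiling support: wrap a UV coordinate into [0, 1) before the
// virtual-texture page lookup, so a surface can repeat a texture.
float wrapCoord(float u) {
    return u - std::floor(u);
}
```

With wrapping in place, a surface larger than its source texture can repeat it instead of requiring a uniquely painted mega-texture region.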

For multi-GPU we tried to mitigate inter-frame dependencies as much as possible to help IHVs achieve a fairly decent scaling.

DSOGaming: DOOM looks beautiful and runs incredibly well on the PC. How were you able to achieve something like that?

Tiago: Thank you. Not having to support a million different games and platforms helps the team focus on long-term results and on things that matter, plus it keeps code quality fairly high and minimalistic due to much less legacy / code entropy.

Performance was a huge team focus, and I mean all departments, not just code. On the code performance side, I could say that our philosophy was “early optimization is the root of all good”, and the results speak for themselves. For performance, I must also give credit to the consoles’ profiling tools; this is an area where PC has a lot of catching up to do.

Another important detail was that the consoles set the lower platform bound at DX11-class hardware, so we didn’t have to support extremely old hardware. That, plus the fact that current consoles have a relatively generous amount of memory, actually helped a fair bit with our design choices.

On the visual side, it’s really a symbiotic relationship: our performant rendering design and tools helping the amazing art department at id Software achieve their vision.

DSOGaming: Physically-based rendering is the next big ‘graphical’ thing. Can you explain the benefits of using this rendering technique to both our advanced-tech readers and the average Joe?

Tiago: My favorite part of PBR is that it helps enforce sanity / a good methodology for artists to work in a consistent manner, rather than every single artist randomly creating textures without much physical meaning.

In simple terms, a non-PBR solution can of course work, but as lighting conditions change you’ll start seeing lighting inconsistencies, which inevitably cause art to enter a rework loop. For example: why do things look OK on an artist’s machine but not as good under different lighting conditions? Another example could be why some nearby assets look completely out of place.

These types of inconsistencies in the past would have created a lot of work for the art departments. Looking back, it’s hard to grasp how the Movie and Games industry ever managed to work without PBR.
That said, I believe we can still improve on a variety of different types of inputs and cases.

To summarize, one could say the main advantages are consistency of visuals/results across a wide variety of lighting conditions, generally better visuals in the end due to that consistency, and, very importantly, saved art time.
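
One way to see the consistency argument: with a physically based model, the material response is defined independently of the lighting, so for instance doubling the light intensity exactly doubles the reflected radiance instead of requiring the artist to retune the texture. A toy Lambertian example (purely illustrative, not idTech shading code):

```cpp
#include <cassert>
#include <cmath>

// Toy Lambertian diffuse term: outgoing radiance is linear in the light
// intensity, so the material (albedo) never needs retuning per lighting setup.
float lambertDiffuse(float albedo, float nDotL, float lightIntensity) {
    const float kPi = 3.14159265358979f;
    return (albedo / kPi) * std::fmax(nDotL, 0.0f) * lightIntensity;
}
```

An ad-hoc, non-physical shading formula typically breaks this linearity, which is exactly why the same asset can look right in one level and wrong in another.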

DSOGaming: Is id Tech 6 able to handle huge open-world environments (Skyrim-like) with the same quality of visuals that is available in DOOM?

Tiago: The engine can support relatively open worlds, but that wasn’t our goal with the current idTech iteration – it was all about what worked for DOOM.

DSOGaming: Can you share more tech details about id Tech 6’s Global Illumination solution?

Tiago: To maximize performance, on this iteration we kept to a mix of pre-baked and real-time components, with all inputs in HDR. For diffuse indirect lighting we ended up using a lightmap for the static geometry. For all types of dynamic geometry we made a derivation of irradiance volumes. For indirect specular it is an image-based lighting solution. The real-time components are approximated in screen space for performance; we ended up using a fairly fast derivation of Screen Space Directional Occlusion / Reflections.
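
For readers unfamiliar with irradiance volumes: dynamic objects fetch pre-baked lighting from a regular 3D grid of probes with trilinear filtering. The sketch below is a simplification under stated assumptions (scalar irradiance per cell; a real probe stores a directional basis, and this is not idTech’s actual data layout):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Simplified irradiance volume: a regular nx*ny*nz grid of pre-baked
// scalar irradiance values, sampled with trilinear filtering.
struct IrradianceVolume {
    int nx, ny, nz;
    const float* probes; // nx*ny*nz values, x-major then y then z

    float at(int x, int y, int z) const {
        return probes[(z * ny + y) * nx + x];
    }

    // p is in grid space, each component in [0, n-1].
    float sample(float px, float py, float pz) const {
        int x0 = (int)px, y0 = (int)py, z0 = (int)pz;
        int x1 = std::min(x0 + 1, nx - 1);
        int y1 = std::min(y0 + 1, ny - 1);
        int z1 = std::min(z0 + 1, nz - 1);
        float fx = px - x0, fy = py - y0, fz = pz - z0;
        // Interpolate along x, then y, then z.
        float c00 = at(x0, y0, z0) * (1 - fx) + at(x1, y0, z0) * fx;
        float c10 = at(x0, y1, z0) * (1 - fx) + at(x1, y1, z0) * fx;
        float c01 = at(x0, y0, z1) * (1 - fx) + at(x1, y0, z1) * fx;
        float c11 = at(x0, y1, z1) * (1 - fx) + at(x1, y1, z1) * fx;
        float c0 = c00 * (1 - fy) + c10 * fy;
        float c1 = c01 * (1 - fy) + c11 * fy;
        return c0 * (1 - fz) + c1 * fz;
    }
};
```

Because the grid is baked offline, the per-frame cost for dynamic objects is just this cheap filtered fetch.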

DSOGaming: If you had to choose only one tech feature of id Tech 6, what would it be?

Tiago: Hard to pick but I guess it would be how all lighting/shading works consistently and efficiently across a wide variety of surface types. As with most things, we can improve it, but I’m happy with the results.

DSOGaming: While testing DOOM, we noticed some scenes in which our GPU was not used to its fullest (even though we were not CPU or VRAM limited). Is this a limitation of the OpenGL API and can we expect better GPU utilization in these scenes with Vulkan?

Axel: From the data we collected with our profiling tools, we do not expect the GPU to be idle for significant periods of time in a GPU-limited scenario. OpenGL does not have any limitation in this regard; in general, it’s a safe assumption that it behaves similarly to DirectX 11. However, we do limit the number of frames in flight to only one, because latency is very important for a very fast-paced shooter like DOOM. This could lead to slightly lower GPU usage compared to an engine that allows more frames to be buffered. We expect Vulkan to help leverage the remaining free GPU time by running asynchronous compute workloads, if the hardware and driver support it.
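
The “one frame in flight” policy described here can be expressed as a simple admission rule: the CPU may begin frame i only once the GPU has finished frame i − N, with N = 1 for DOOM. A hypothetical sketch of that pacing decision (not idTech code; in a real engine this wait is implemented with GPU fences):

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical frame-pacing rule. With maxFramesInFlight = 1 the CPU
// stalls until the previous frame's GPU work is done, minimizing input
// latency at the cost of some potential GPU idle time.
bool cpuMayStartFrame(int64_t frameIndex,
                      int64_t lastGpuCompletedFrame,
                      int64_t maxFramesInFlight) {
    // Frames already started but not yet finished by the GPU.
    int64_t inFlight = (frameIndex - 1) - lastGpuCompletedFrame;
    return inFlight < maxFramesInFlight;
}
```

Allowing N = 2 or 3 buffers more work and smooths GPU utilization, but every extra buffered frame adds roughly one frame of latency, which is why a fast shooter keeps N small.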

In case a scene is limited by CPU draw calls, Vulkan will certainly help immensely. DOOM on Vulkan will also distribute rendering across more than one thread, which will help CPUs with more than two cores. Please keep in mind that there is still game-related code like AI, pathfinding and physics which cannot be sped up by using a different graphics API.

(Editor In Chief: we can confirm that this scene runs way better in Vulkan.)

DSOGaming: Since you’ve worked on both DirectX and OpenGL, what’s your opinion on both of these APIs? And do you consider Vulkan to be better than DirectX 12?

Axel: Overall, we had very few issues developing DOOM with OpenGL. DirectX 11 might have an advantage when it comes to optimizations on the driver side, as the IHVs have put more effort into that over the years compared to OpenGL. Admittedly, there is also a broader range of tools that work with DirectX than with OpenGL at this point in time. On the other hand, OpenGL has a bigger feature set through extensions, and it is less restrictive in terms of certain GPU operations.

DirectX 12 and Vulkan are conceptually very similar and both clearly inherited a lot from AMD’s Mantle API efforts. The low-level nature of those APIs moves a lot of the optimization responsibility from the driver to the application developer, so we don’t expect big differences in speed between the two APIs in the future. On the tools side, there is very good Vulkan support in RenderDoc now, which covers most of our debugging needs. We chose Vulkan because it allows us to support Windows 7 and 8, which still have significant market share and would be excluded with DirectX 12. On top of that, Vulkan has an extension mechanism that allows us to work very closely with AMD, NVIDIA and Intel to do very specific optimizations for each vendor’s hardware.

DSOGaming: Photogrammetry is a technique that has impressed everyone these past few years. Does id Tech 6 support it and what’s your opinion on it?

Tiago: It’s not really an engine-specific feature. Photogrammetry had been around for decades before it became a buzzword; essentially it means using photography as a means of data acquisition. One could say that back in 2006, Crysis used photogrammetry for vegetation, Parallax Occlusion Mapped rocks, ground detail and some other assets.

But it has been continuously refined. I’m a fan of real-world data measurement / acquisition; I think it has some potential for speeding up production. One thought, particularly for games aiming at a realistic art direction: it doesn’t really make sense to me to remake assets over and over again, so why not have a database?

From that perspective, I wouldn’t be surprised if someone made a business out of it. Then there are things like entirely scanning certain scenes; it could be interesting / doable for certain restricted cases – when I say restricted, I mean that the amount of data storage required for a high-fidelity / high-density / HDR-encoded capture is a bit insane.

Then there are things like character scanning and other types of data acquisition; it’s definitely an interesting research field these days.

DSOGaming: Does id Tech 6 support destructible environments? And can we expect an id Tech 6 game to feature a destruction system as advanced as the one found in Frostbite 3 games (like Battlefield 3 or 4)?

Tiago: We support destructible environments. It’s something that is definitely on our list to improve. I think that particularly these days, fairly good and performant results can be achieved with geometry-cache types of solutions for massive-scale destruction, mixed with physics for the lower scale.

DSOGaming: Doom 3 – powered by id Tech 4 – was one of the few games in which almost all light sources cast dynamic shadows. It was mind-blowing back in 2004. However, we’ve noticed that in newer games, a lot of lights do not cast shadows. Is id Tech 6 able to render dynamic shadows on all light sources? And what’s the maximum number of light sources that can cast dynamic shadows in id Tech 6?

Tiago: The engine actually supports a fairly generous number of light sources – conceptually > 100k, but we do clamp to a sane maximum of about 8k, although if you’re using that many light sources, you’re likely doing it wrong! The number of shadow casters is limited by the video memory we have to cache our shadows, so for a single frustum, and for performance, we do clamp to 256 shadow-casting lights. That could be increased, but at a performance cost – and again, if you’re using that many, you are likely doing it wrong for shadow-map-based approaches.

Per frame, only a very small subset of these is actually updated, since costs are constantly amortized over frames, and shadows are at times cached entirely if nothing updated in a light’s frustum.
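
This kind of amortization can be sketched as a per-frame budget: lights whose frustum contents changed are marked dirty, and only the stalest dirty lights re-render their shadow maps this frame, while everything else keeps its cached map. An illustrative version (not idTech’s actual scheduler; names are hypothetical):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Illustrative shadow-map update scheduler: each frame, pick at most
// `budget` dirty lights (stalest first) to re-render; clean lights keep
// their cached shadow maps untouched.
struct ShadowLight {
    int id;
    bool dirty;            // something moved inside this light's frustum
    int framesSinceUpdate; // staleness, used for prioritization
};

std::vector<int> pickShadowUpdates(std::vector<ShadowLight> lights, int budget) {
    // Most-stale lights first, so update cost is spread fairly over frames.
    std::sort(lights.begin(), lights.end(),
              [](const ShadowLight& a, const ShadowLight& b) {
                  return a.framesSinceUpdate > b.framesSinceUpdate;
              });
    std::vector<int> toUpdate;
    for (const ShadowLight& l : lights) {
        if ((int)toUpdate.size() >= budget) break;
        if (l.dirty) toUpdate.push_back(l.id);
    }
    return toUpdate;
}
```

With a fixed budget, the shadow cost per frame stays roughly constant no matter how many lights the scene contains; stale lights simply catch up a few frames later.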

DSOGaming: Let’s talk about ray-tracing. Does id Tech 6 support ray-tracing? How far away are we from a fully real-time ray-tracing (or path-tracing) lighting system in video games?

Tiago: You could say our dynamic reflections are a ray-tracing approximation in an efficient / friendly space for real-time rendering, but that’s different from a fully-fledged ray-tracing solution. There is a funny saying that goes something like “Ray tracing is the future, and always will be” (particularly path tracing) – and I think that’s going to remain the case for a long time for fairly complex games, if we are talking about an entirely photo-real approximation, particularly with the current push for 4K and wider resolutions.

For the shorter term, I could totally envision smart path-tracing approximations for less intensive / simpler environments, particularly solutions computed at a lower frequency than screen pixel density, of course.

DSOGaming: Pop-in of objects is an issue that has not been addressed yet. While some games have higher LOD levels, every game – more or less – suffers from it. What’s really puzzling us is that while dynamic tessellation basically solves this problem by varying the level of detail on the fly, no one has ever used it. What’s your opinion on dynamic tessellation and have you experimented with it in order to eliminate object/environmental pop-ins?

Tiago: Back in my Crytek days, we did use tessellation on Crysis 2 for certain types of simple assets, with a dynamically adjusted tessellation ratio. Generalizing it is actually a relatively complex problem, starting with asset creation, where artists usually want full control of the result. Regular real-time tessellation (Phong and similarly simple derivatives) tends to smooth out details artists want to control, so a simple approach doesn’t suffice for the general case, particularly for character close-ups – for example, if we start from the low-resolution mesh, a lot of the data needed for a relatively good-looking animation will be missing.

This is why in games we mostly see tessellation used for visual fluff – simpler / controllable cases such as environmental effects: things that topologically are essentially a derivation of displacement / vector displacement mapping (water, rocky surfaces, footprints in snow and so on).

Then there are some more fundamental issues, like tessellation having a cost attached to it, and things not interacting entirely properly with the displaced surface. E.g. imagine a chair on a displaced tile floor – there will be clipping.

But for eliminating pop-in there is actually quite a variety of approaches – if popping is noticeable in a game these days, it is generally because techniques are switched off via a binary transition rather than a gradual one.
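
A common example of such a gradual transition is a distance-based cross-fade: instead of swapping LODs at a hard threshold, the engine blends over a distance band (often with screen-door dithering) so the change is imperceptible. A minimal sketch of the blend factor (illustrative only):

```cpp
#include <cassert>
#include <cmath>

// Blend factor for a gradual LOD transition: 0 = fully the near LOD,
// 1 = fully the far LOD, fading linearly across [fadeStart, fadeEnd].
// In practice this factor typically drives a dithered (screen-door)
// cross-fade between the two LOD meshes.
float lodFade(float distance, float fadeStart, float fadeEnd) {
    float t = (distance - fadeStart) / (fadeEnd - fadeStart);
    return std::fmin(std::fmax(t, 0.0f), 1.0f);
}
```

A hard switch corresponds to the degenerate case where the fade band has zero width, which is exactly the binary transition that makes pop-in visible.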

DSOGaming: Thank you very much for the interview, any last words you want to share with our fans?

Tiago: I would just like to say a big thank you to all our fans for their support of DOOM.