Unity Technologies on DX12, Vulkan, Ray-tracing, Physically-based Rendering, Dynamic LOD, E3 Demos & more


A couple of days ago, we had the pleasure of interviewing Unity Technologies’ Field Engineer, Mathieu Muller. We talked with Mathieu about a lot of interesting subjects, such as the performance issues that affected many Unity 4 games, the Global Illumination solution for Unity 5, DX12, Vulkan, ray-tracing, and the future of the Unity Engine. Enjoy the interview after the jump!

DSOGaming: Before we begin, please introduce yourselves to our readers.

Mathieu Muller: My name is Mathieu Muller. I am a Field Engineer at Unity Technologies, and my job is to travel across Europe and Africa to represent the development team to the people using Unity, and vice versa. I have been working in the simulation and game industry for about 15 years as a graphics and AI software engineer.

DSOGaming: A lot of engines already support DX12, and Unity 5 is one of them. Have you experimented with this new API and what are the performance benefits of it? What is your general opinion on DX12?

Mathieu Muller: We started work on DX12, with support from Microsoft and hardware manufacturers, about two years ago. We first focused on introducing functional, experimental support in Unity 5.2, and we could already see slight performance improvements in some tests. In parallel, we have been working on a new graphics architecture to support graphics jobs that can be used across all the new low-level graphics APIs. This is currently available as experimental in the 5.4 beta, and we are getting some performance benefits, but not yet nearly as much as we could, so we are reworking even more of our code to do better. Moreover, most of the performance benefits currently come when you are CPU-bound, whereas PC games tend to be GPU-bound.
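As a rough illustration of that last point: frame time is set by whichever side is slower, so cheaper command submission only shows up in CPU-bound frames. Here is a toy model (an illustrative assumption, not engine code):

```python
# Toy model: assume CPU preparation and GPU rendering fully overlap,
# so the slower of the two sides sets the frame time.
def frame_ms(cpu_ms, gpu_ms):
    return max(cpu_ms, gpu_ms)

print(frame_ms(cpu_ms=20, gpu_ms=10))  # CPU-bound: 20 ms, lower API overhead helps
print(frame_ms(cpu_ms=8, gpu_ms=16))   # GPU-bound: 16 ms, cutting CPU cost changes little
```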

DSOGaming: The first DX12 games that have been released on the PC were a bit underwhelming (in terms of performance and visuals compared to DX11). Is that normal? When can PC gamers expect to see games that will take advantage of DX12?

Mathieu Muller: Yes, quite normal. Most engines are in a similar situation to the one I just described: the performance potential of DX12 is there, but it is hard to beat five-plus years of driver optimization. To some extent, this reminds me of the PS3: the potential of the architecture was outstanding, but it also required deep changes. You get very close to the hardware, but it takes time to master that power and to weigh the end benefits against the overhead of the systems built around the new paradigm. A decade ago, many studios spent a huge chunk of development time optimizing their software for the PS3’s SPUs and ended up with smaller improvements than if they had simply optimized their main algorithms. After some time, the architecture and the drivers matured sufficiently, and great results could be achieved. The same thing should happen with DX12 in the coming years.

DSOGaming: Asynchronous Compute is a feature that has been in the spotlight since the announcement of DX12. Have you experimented with it and how can this feature benefit future games?

Mathieu Muller: The potential is certainly there, but currently only AMD GPUs execute it efficiently, and we haven’t implemented async compute support yet.

DSOGaming: Apart from the performance boosts, what visual improvements can PC gamers expect from DX12 games?

Mathieu Muller: Most of the new features (conservative rasterization, etc.) are coming to DX11.3 too. So DX12 is very much about (CPU) performance and frame rate stability.

DSOGaming: What’s your opinion on UWP? Do you see it as something that will limit your future audience in case Microsoft locks DX12 behind it?

Mathieu Muller: For us, UWP is a platform that offers great opportunities to reach the Xbox One without a dev kit, which is especially empowering for developers. DirectX 12 is currently not limited to UWP, and we don’t foresee it becoming unavailable to classic Win32 applications.

[Embedded video: CGI Animated Short & Tech Demo: "The Blacksmith" - by Unity Technologies | TheCGBros]

DSOGaming: Does Unity 5 support Vulkan and have you experimented with it? If you had to choose one, would you go with DX12 or Vulkan?

Mathieu Muller: We just announced officially at Google I/O that we will be supporting Vulkan, and we cannot wait to put it into the hands of our users! We have been working on DirectX 12 for longer, and it will take some time for our Vulkan support to be as mature. On the other hand, many concepts are similar, which should help bring the two technologies on par as quickly as possible. We are looking forward to getting Vulkan integrated so we can bring multi-threaded rendering to a huge number of mobile devices. So far, it is too early in the process to draw any definitive conclusions.

DSOGaming: Physically-based rendering is the next big ‘graphical’ thing. Can you explain the benefits of using this rendering technique to both our advanced-tech readers and the average Joe?

Mathieu Muller: Physically-based rendering was originally introduced to remove the need for artists to iterate on textures whenever the lighting changes, by giving physical properties to a material (e.g. roughness or smoothness, and how metallic it is). If the environment is modified (a forest covered with snow, or a warm desert), a piece of aluminium will always look like aluminium. Another benefit is being able to mix assets from different sources, as long as they are photorealistic and well calibrated. But it is not just about materials; light is also very important. Currently, light intensity is just an arbitrary value, and just like mixing music, you often end up saturating your mix. The promise of physically-based lighting is being able to work with real-life light values in order to get more control over the result. This is one of the research areas that our graphics team is working on tirelessly.
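To make that separation between material and lighting concrete, here is a minimal, deliberately simplified sketch in Python (not Unity’s shader code; the GGX distribution term is standard, but the Fresnel and geometry terms of a full BRDF are omitted). The aluminium material keeps the same physical parameters while only the light changes:

```python
import math

def ggx_d(n_dot_h, roughness):
    # GGX/Trowbridge-Reitz normal distribution term: shapes the highlight.
    a2 = (roughness * roughness) ** 2
    d = n_dot_h * n_dot_h * (a2 - 1.0) + 1.0
    return a2 / (math.pi * d * d)

def shade(base_color, metallic, roughness, n_dot_l, n_dot_h, light):
    # Metals have no diffuse response; their base color tints the specular.
    diffuse = [c * (1.0 - metallic) / math.pi for c in base_color]
    f0 = [0.04 * (1.0 - metallic) + c * metallic for c in base_color]
    spec = ggx_d(n_dot_h, roughness)
    return [light * max(n_dot_l, 0.0) * (d + f * spec)
            for d, f in zip(diffuse, f0)]

# Same aluminium, two environments: only the light intensity changes.
aluminium = dict(base_color=[0.91, 0.92, 0.92], metallic=1.0, roughness=0.35)
print(shade(**aluminium, n_dot_l=0.8, n_dot_h=0.95, light=1.0))   # overcast forest
print(shade(**aluminium, n_dot_l=0.8, n_dot_h=0.95, light=10.0))  # desert sun
```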

DSOGaming: Let’s talk about Ray-tracing. Does Unity 5 support ray-tracing? How far away are we from a fully real-time ray-tracing (or path-tracing) lighting system in video-games?

Mathieu Muller: Unity does not support ray-tracing as a final real-time rendering algorithm. However, ray-tracing is increasingly used in various parts of the engine and toolset. For example, global illumination lightmap baking is done using path-tracing, and real-time global illumination uses ray-tracing during the offline precomputation step. Similarly, post-FX such as SSRR (Screen-Space Raytraced Reflections), found in our cinematic effects, and volumetric effects (e.g. atmospheric scattering, volumetric fog) do a form of ray-tracing (ray marching). Pure real-time ray-traced rendering, however, is in my opinion quite far away. Computational power will have a hard time keeping up with the combined impact of increasing resolutions, better anti-aliasing, and stereo rendering on ray-tracing complexity. Imagine the power required to render a VR image: two 1K renders at 90 frames per second. For a ray-traced camera with this configuration, you need 180 MRays/sec to render primary rays only. An optimized GPU ray tracer can do ~300 MRays/sec, so adding a couple of shadow rays breaks the budget. An order of magnitude or more of computing power is needed.
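For a sense of scale, the arithmetic behind those figures works out as follows (a back-of-the-envelope sketch; reading “1K” as roughly one megapixel per eye, and the two shadow rays, are illustrative assumptions):

```python
pixels_per_eye = 1_000_000  # ~1K x 1K image, one primary ray per pixel
eyes = 2
fps = 90

primary = pixels_per_eye * eyes * fps
print(f"primary rays: {primary / 1e6:.0f} MRays/sec")  # 180 MRays/sec

# An optimized GPU tracer manages ~300 MRays/sec, so even two shadow
# rays per primary hit triples the demand and overruns the budget.
budget = 300e6
total = primary * (1 + 2)
print(f"with 2 shadow rays: {total / 1e6:.0f} MRays/sec "
      f"({total / budget:.1f}x the budget)")
```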

The current trend is actually the opposite: industries like VFX are increasingly adopting real-time rendering technologies to improve their iteration times and reduce their render farm workload. Our demos, such as The Blacksmith and ADAM, show that real-time rendering is getting close to a quality level high enough for creating feature films. Caustics, complex VFX, hair, dense vegetation and water simulation remain huge challenges for real-time rendering, and I bet that more progress will be made there than in fully real-time ray-tracing. Our graphics lab team will, for example, present a method at SIGGRAPH this summer for rendering area lights, usually a ray-tracer feature, at real-time frame rates.

That said, we are doing R&D in this area, working with partners such as Imagination Technologies and AMD to look into uses of real-time ray tracing. Short-term, fast ray tracing can be used to make lightmap baking an interactive process. Mid-term, hybrid rendering algorithms using ray tracing could be employed to resolve shadows and reflections. And longer-term, fully ray-traced cameras could be considered. Again, though, it is hard to compete with an entire industry that has been focused on rasterization for several decades.

DSOGaming: Photogrammetry is a technique that has impressed everyone these past few years. Does Unity 5 support it and what’s your opinion on it?

Mathieu Muller: There is no real engine support needed for photogrammetry, since its output is nothing more than meshes and textures. However, we have a team within the graphics lab actively working on photogrammetry. Photogrammetry and physically-based rendering are closely tied together. When both are mastered, one should be able to take a reference picture of a scene, scan the objects, export the geometry and materials to Unity, place the lights to match the reference environment, and visualise almost exactly the same scene in-engine, in real time. For us, this is key, because it will be the reference against which we measure the quality of our rendering pipeline. This is not for now, but we should publish very interesting things along the way.

DSOGaming: Can you share more tech details about Unity 5’s Global Illumination solution?

Mathieu Muller: Unity’s global illumination is currently fully based on Enlighten by Geomerics, which delivers real-time global illumination across almost all of our platforms. It is a fantastic technology that manages very complex parallel, time-sliced systems to smoothly compute global illumination at runtime when the lighting or materials change, even on high-end mobile platforms. Having said that, it also demands attention, and we have seen some of our users struggle on the workflow side. It can be hard to author for correctly, with typical pitfalls (too many small objects, poor-quality UV unwraps, inappropriate parameters, …) that can lead to long baking times. It is also mostly a global process, which makes it hard to iterate selectively on the lighting. This is why, in partnership with Imagination Technologies, we are working on a progressive lightmapper, which builds the lightmaps incrementally, focusing first on the visible part of the scene and then on the parts outside the view. This way, objects and lights can be moved around, and users directly see the impact on the lighting interactively while the solution converges. It was presented at our GDC keynote this year, and we hope to provide an alpha build in the coming months. In addition to this, we will improve the authoring workflow for real-time GI using Enlighten.
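As a rough illustration of that scheduling idea, here is a minimal sketch (hypothetical logic, not Unity’s or Imagination’s implementation) in which visible texels are baked first and every texel’s estimate is refined pass by pass, so a partial result can be displayed immediately:

```python
def bake_progressively(texels, is_visible, trace, passes=4, samples_per_pass=16):
    # Schedule visible texels before off-screen ones.
    ordered = sorted(texels, key=lambda t: not is_visible(t))
    estimate = {t: 0.0 for t in texels}
    count = {t: 0 for t in texels}
    for _ in range(passes):
        for t in ordered:
            for _ in range(samples_per_pass):
                count[t] += 1
                # Fold each new path-traced sample into a running mean.
                estimate[t] += (trace(t) - estimate[t]) / count[t]
        # A partial, noisy-but-improving lightmap the editor can display.
        yield dict(estimate)
```

Moving a light simply changes what trace(t) returns and the loop re-converges; the point is that feedback arrives after every pass rather than after one monolithic bake.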

[Embedded video: Unite 2015 - Advanced Global Illumination in Unity 5]

DSOGaming: Let’s talk about the future of the Unity 5 engine. What are the biggest future tech features that PC gamers can expect from Unity 5 (apart from VR)?

Mathieu Muller: There is a lot of exciting work in progress that is, or should become, available in the coming months or year: motion vectors, AAA-quality cinematic post-FX (temporal anti-aliasing and new tone-mapping and color-grading tools are just some examples), a progressive lightmapper, a cinematic sequencer for authoring and playing back sequences of animation and audio clips, runtime building of the NavMesh, a whole lot of 2D tools, and much more. For more detail, take a look at our roadmap. We have also recruited a lot of very talented and experienced people across the world who have previously worked on the best engines and at the best studios. They are helping us get on par with the quality expectations of the AAA industry, and they are also already thinking about, and working on, what will be the best tech of the coming years.

DSOGaming: Pop-in of objects is an issue that has not been addressed yet. While some games have more LOD levels than others, every game – more or less – suffers from it. What really puzzles us is that while dynamic tessellation basically solves this problem by varying the level of detail on the fly, hardly anyone has used it. What’s your opinion on dynamic tessellation, and have you experimented with it in order to eliminate object/environment pop-in?

Mathieu Muller: You almost guessed them all! Level of detail is again one of the research areas our graphics lab is working on. There are many possibilities here, and not only triangles have to be considered: reducing the number of materials, the complexity of shaders, and the amount of detail in the diffuse or normal maps is also necessary to prevent aliasing or flickering. We have been experimenting internally with some really interesting concepts recently, and we hope they will find their way into the engine in the near future.
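For readers unfamiliar with how discrete LOD selection typically works, here is a minimal sketch (hypothetical thresholds and logic, not Unity’s LODGroup implementation) that picks a detail level from the fraction of the screen an object’s bounding sphere covers:

```python
import math

def screen_fraction(bounds_radius, distance, fov_deg):
    # Approximate fraction of the screen height covered by a bounding sphere.
    return bounds_radius / (distance * math.tan(math.radians(fov_deg) / 2))

def select_lod(bounds_radius, distance, fov_deg=60.0, thresholds=(0.5, 0.2, 0.05)):
    h = screen_fraction(bounds_radius, distance, fov_deg)
    for lod, t in enumerate(thresholds):
        if h >= t:
            return lod               # 0 = full detail
    return len(thresholds)           # below the last threshold: cull or impostor

print(select_lod(bounds_radius=2.0, distance=5.0))    # large on screen -> LOD 0
print(select_lod(bounds_radius=2.0, distance=200.0))  # tiny on screen  -> 3 (cull)
```

Pop-in happens exactly at those hard threshold crossings; dynamic tessellation or gradual transitions such as dithered cross-fading trade the discrete jump for a continuous change.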

DSOGaming: A lot of PC gamers have complained about framerate issues in games powered by the Unity Engine, even on high-end systems. Moreover, some games that were ported to the latest version of Unity 5 benefited from better framerates. What were the culprits behind the underwhelming performance in older games powered by the Unity Engine, and what steps did you take to improve the overall performance of your engine?

Mathieu Muller: Unity 4 was more or less a single-threaded engine: we had a few multi-threaded components and a render thread. Unity 5’s core has been rebuilt with multithreading in mind. At release time, we had multi-threaded physics, occlusion culling, real-time GI and skinning. Every release since then has seen other components become multithreaded: UI batching, frustum culling, culling groups, sorting, graphics jobs, transform evaluation, etc. We have also put a lot of refactoring effort into core components to get better data caching and SIMD optimization, which is key for high performance. And we created a team dedicated to performance analysis and regression tracking.
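To illustrate the shape of that kind of work-splitting, here is a minimal sketch (illustrative only; Unity’s job system is native code, and the function names here are hypothetical) that culls bounding spheres against a single plane in parallel chunks over flat arrays, the cache-friendly layout such refactoring aims for:

```python
from concurrent.futures import ThreadPoolExecutor

def cull_chunk(centers, radii, plane):
    # Test each bounding sphere against one frustum plane (nx, ny, nz, d).
    nx, ny, nz, d = plane
    return [i for i, ((x, y, z), r) in enumerate(zip(centers, radii))
            if nx * x + ny * y + nz * z + d > -r]

def parallel_cull(centers, radii, plane, workers=4):
    n = len(centers)
    if n == 0:
        return []
    step = (n + workers - 1) // workers
    with ThreadPoolExecutor(max_workers=workers) as pool:
        offsets = range(0, n, step)
        jobs = [pool.submit(cull_chunk, centers[o:o + step], radii[o:o + step], plane)
                for o in offsets]
        # Map each chunk's local indices back to global object indices.
        return [o + i for o, job in zip(offsets, jobs) for i in job.result()]
```

In CPython the GIL means this shows the structure rather than a real speedup; in an engine, the same split runs as native jobs across cores.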

[Embedded video: Unity GDC demo - Adam - Part I]

DSOGaming: Does Unity 5 support multi-GPU setups? And will you ever release your “Adam” short film as a real-time benchmark so that PC gamers can test their systems?

Mathieu Muller: We support multi-GPU in the sense of SLI/Crossfire, which does not always guarantee better performance. It’s mostly transparent to the engine/application, and how well it scales depends largely on the application, the drivers, and the phase of the moon.

Multi-GPU can also take the form of “I have two different GPUs in my machine” (e.g. an integrated Intel one and a discrete one, or two different discrete GPUs). DX12 allows using that, but we don’t support it yet.

Concerning our short film ADAM, we are still working very hard on ADAM Part 2, and we plan to release as much of this content as possible in a third phase.

DSOGaming: Lately we’ve seen a number of games being downgraded from their E3/GDC showcases. What’s your opinion on this? Should game developers showcase games that are not possible even on high-end PCs?

Mathieu Muller: Setting expectations too high can backfire. On the other hand, trying to show the best of your technology is always interesting, because that is how you push the limits and differentiate yourself from others. Another factor to take into account is that shipping a game can sometimes be almost the hardest thing on earth, and almost everyone has to cut a lot of things in the last months of development. Experienced studios have developed tools and methodologies to reduce that risk. For indies, things can go wild if they have not set up unit testing, continuous integration, and tools to easily analyse and reproduce problems. And the Number One rule is: do not develop your game on a high-end PC!

DSOGaming: Thank you very much for the interview, any last words you want to share with our fans?

Mathieu Muller: Thank you for the great questions! Unite Europe starts May 31st – see you there or watch online!