Last week, Epic Games announced the public release of Unreal Engine 5’s Early Access version. And a couple of days ago, MAWI United shared a video showing its Birch Forest map/biome in Unreal Engine 5 with Nanite and Lumen.
MAWI United has managed to enable Nanite for every static mesh in this map. Moreover, the map uses Lumen with hardware ray tracing enabled, so it can give you a glimpse of what to expect from the technology. Not only that, but MAWI has enabled the new Virtual Shadow Maps so that every mesh can cast shadows.
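For readers who want to try something similar in their own UE5 Early Access projects, the features listed above roughly map to the following renderer settings in `DefaultEngine.ini`. This is a hedged sketch: the console-variable names below match shipping UE5 builds and may differ slightly between Early Access revisions, and Nanite itself is toggled per asset rather than project-wide.

```ini
[/Script/Engine.RendererSettings]
; Use Lumen for dynamic global illumination and reflections
r.DynamicGlobalIlluminationMethod=1
r.ReflectionMethod=1
; Let Lumen use hardware ray tracing when the GPU supports it
r.RayTracing=True
r.Lumen.HardwareRayTracing=True
; Enable the new Virtual Shadow Maps
r.Shadow.Virtual.Enable=1
```

Nanite, by contrast, is enabled per Static Mesh via the “Enable Nanite Support” option in the Static Mesh editor (or in bulk through the asset context menu), which is presumably how MAWI converted every static mesh in the map.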
As MAWI United noted, this map runs at around 30fps on an NVIDIA GeForce RTX 2080. Obviously, we expect the final version of Unreal Engine 5 to run games smoother than that. Additionally, NVIDIA GPU owners can expect a performance boost once – and if – Epic adds support for DLSS in Unreal Engine 5.
This map looks great, so go ahead and take a look.
Lastly, MAWI shared a video showing how you can import photogrammetry data into Unreal Engine 5 with Nanite, using 3ds Max.

John is the founder and Editor-in-Chief at DSOGaming. He is a PC gaming fan and highly supports the modding and indie communities. Before creating DSOGaming, John worked on numerous gaming websites. While he is a die-hard PC gamer, his gaming roots can be found on consoles. John loved – and still does – the 16-bit consoles, and considers the SNES to be one of the best consoles. Still, the PC platform won him over consoles. That was mainly due to 3DFX and its iconic dedicated 3D accelerator graphics card, the Voodoo 2. John has also written a higher degree thesis on “The Evolution of PC Graphics Cards.”
The Elder Scrolls 6… is that you?
It is… after you apply the usual 100+ mods! 😉
Not a chance; their games always look 10 years old on release.
Elder Scrolls 6 will be top of the line for last generation graphics LOL.
The latest desert demo using Nanite is 100 GB of data and needs 32 GB of system memory as a minimum requirement, with 64 GB recommended. A game the size of The Elder Scrolls using Nanite would need a supercomputer from NASA to work.
The main problem with Nanite is that the technology takes a lot of memory and the world must be static. The latest UE5 demo used 20 GB of system memory and required a GPU with at least 10 GB of video memory. On GPUs with 8 GB, the demo showed warnings about low memory. That’s a lot of memory for a small demo.
https://www.youtube.com/watch?v=UwHjuad47TE
The second problem is that this won’t work with ray tracing. Hardware ray tracing on AMD and NVIDIA cards works on triangles, and a game with millions of triangles would be really slow. This is why Epic created a voxel cone tracing solution called Lumen: a software emulation of basic ray tracing working on voxels (boxes), which is not accelerated by existing hardware. Voxels can’t do reflections or correct shadows like RTX.
Wait, so this article says this demo is using hardware ray tracing and you’re saying it’s software only…
Not sure who’s right…
EDIT: in UE5 there is a checkbox, “Use Hardware Ray Tracing when available,” for Lumen.
So I guess it’s going to use RT cores if you have them.
Is there a visual difference between software and hardware Lumen, or is it just the performance?
The settings menu would imply it’s the same thing calculated with either RT cores or normal cores.
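For anyone who wants to check this themselves, the two paths can be compared at runtime from the in-game console. The cvar name below is the one used in shipping UE5 builds and is assumed to carry over to Early Access; software Lumen falls back to screen traces and distance fields rather than triangle ray tracing:

```
r.Lumen.HardwareRayTracing 0
r.Lumen.HardwareRayTracing 1
stat gpu
```

Setting the cvar to 0 forces the software path, 1 enables the hardware path where supported, and `stat gpu` overlays per-pass GPU timings, so the performance (and any visual) difference between the two is directly visible.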
Imagine being a console peasant in 2021
Wonder how it looks and runs without the RTX.
Looks amazing!
The one good thing Epic Games has these days!
oh grow up!
Consoles are at least as powerful as the 2080 running this demo.
Turn off ray tracing and you would get 60fps on consoles.
Okay, and in CoD: Cold War the PS5 is faster than the 2080S, so what does that prove?
We can both cherry pick games you know?
Those two games you listed are super early PS5 ports that clearly aren’t optimized for it, and I think they got better with updates.
Those games don’t need a fast CPU, so what? Some games do, some don’t.
Both the 2080 and the PS5 are around 10 TFLOPs, and the architectures used have pretty much the same perf/TFLOP.
The 6700 XT has a very slightly lower TFLOP count than the 2080 Ti and very slightly less performance.
How is that not comparable?
Has nothing to do with Ampere…
Do you think AMD gimped console GPUs for some reason, so they somehow perform worse than their desktop counterparts?
That’s an earlier version that didn’t have Nanite enabled for most static objects.