
Here is a beautiful forest map in Unreal Engine 5 with Lumen and Nanite

Last week, Epic Games announced the public release of Unreal Engine 5's Early Access build. And a couple of days ago, MAWI United shared a video showing its Birch Forest Map/Biome running in Unreal Engine 5 with Nanite and Lumen.

MAWI United has managed to enable Nanite for every static mesh in this map. Moreover, the map has Lumen with hardware ray tracing enabled, so it can give you a glimpse of what you can expect from it. Not only that, but MAWI has also enabled the new Virtual Shadow Maps, so that every mesh can cast shadows.
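If you want to experiment with a similar setup in your own project, the sketch below shows how these features could be toggled from the editor's Python console. To be clear, this is an illustration, not MAWI's actual setup: the console variable and property names are the ones exposed in the Early Access build and may change, and the /Game/Forest content path is a placeholder.

```python
# Minimal sketch for the UE5 Early Access editor's Python console.
# Cvar and property names are assumptions based on the Early Access build.
import unreal

# Route dynamic GI and reflections through Lumen, prefer hardware ray
# tracing where the GPU supports it, and enable Virtual Shadow Maps.
for cmd in (
    "r.DynamicGlobalIlluminationMethod 1",  # 1 = Lumen GI
    "r.ReflectionMethod 1",                 # 1 = Lumen reflections
    "r.Lumen.HardwareRayTracing 1",         # use RT hardware when available
    "r.Shadow.Virtual.Enable 1",            # Virtual Shadow Maps
):
    unreal.SystemLibrary.execute_console_command(None, cmd)

# Enable Nanite on every static mesh under a placeholder content folder.
for path in unreal.EditorAssetLibrary.list_assets("/Game/Forest", recursive=True):
    asset = unreal.EditorAssetLibrary.load_asset(path)
    if isinstance(asset, unreal.StaticMesh):
        nanite = asset.get_editor_property("nanite_settings")
        nanite.enabled = True
        asset.set_editor_property("nanite_settings", nanite)  # rebuilds the mesh
        unreal.EditorAssetLibrary.save_loaded_asset(asset)
```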

As MAWI United noted, this map runs at around 30fps on an NVIDIA GeForce RTX 2080. Now, obviously, we expect the final version of Unreal Engine 5 to run games smoother than that. Additionally, NVIDIA GPU owners can expect a performance boost once (and if) Epic adds support for DLSS in Unreal Engine 5.

This map looks great, so go ahead and take a look.

MAWI Birch Forest Biome | Unreal Engine 5 with Nanite & Lumen

Lastly, MAWI shared a video showing how you can import photogrammetry data into Unreal Engine 5 with Nanite, using 3ds Max.
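If you want to try the same workflow, the editor's Python API can automate the import step. Here is a rough sketch, assuming a photogrammetry scan exported from 3ds Max as FBX; the file and destination paths are placeholders.

```python
# Rough sketch: automated FBX import into the UE5 editor via Python.
# Both paths are placeholders; the scan is assumed to come from 3ds Max.
import unreal

task = unreal.AssetImportTask()
task.filename = "C:/scans/birch_trunk.fbx"      # placeholder FBX export
task.destination_path = "/Game/Photogrammetry"  # placeholder content folder
task.automated = True                           # skip the import dialog
task.save = True                                # save the new asset to disk

unreal.AssetToolsHelpers.get_asset_tools().import_asset_tasks([task])
```

Once imported, the mesh can be flagged for Nanite the same way as in the earlier sketch.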

16 thoughts on “Here is a beautiful forest map in Unreal Engine 5 with Lumen and Nanite”

    1. The latest desert demo using Nanite is 100 GB of data and lists 32 GB of system memory as the minimum requirement, with 64 GB recommended. A game the size of The Elder Scrolls using Nanite would need a supercomputer from NASA to work.

  1. The main problem with Nanite is that the technology takes a lot of memory and the world must be static. The latest UE5 demo used 20 GB of system memory and required a GPU with at least 10 GB of video memory. On GPUs with 8 GB, the demo shows warnings about low memory. That's a lot of memory for a small demo.

    https://www.youtube.com/watch?v=UwHjuad47TE

    The second problem is that this won't work with ray tracing. Hardware ray tracing on AMD and NVIDIA cards works on triangles, so a game with millions of triangles will be really slow. This is why Epic created the voxel cone tracing approach called Lumen: a software emulation of basic ray tracing that works on voxels (boxes), which is not accelerated by existing hardware. Voxels can't do reflections or correct shadows like RTX.

    1. Wait, so this article says this demo is using hardware ray tracing and you're saying it's software only…
      Not sure who's right…
      EDIT: in UE5 there is a checkbox "use hardware ray tracing when available" for Lumen.
      So I guess it's gonna use RT cores if you have them.

      1. Is there a visual difference between software and hardware Lumen, or is it just the performance?
        The settings menu would imply it's the same thing calculated with either RT cores or normal cores.
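For anyone who wants to check this on their own hardware, Lumen's backend can be flipped at runtime via a console variable and the GPU cost compared with the stat display. The cvar name below is the one exposed in the Early Access build and may change.

```python
import unreal

# Compare Lumen's software (distance-field) and hardware (RT core) paths.
# 0 = software tracing; run again with 1 for hardware tracing.
unreal.SystemLibrary.execute_console_command(None, "r.Lumen.HardwareRayTracing 0")
unreal.SystemLibrary.execute_console_command(None, "stat gpu")  # show per-pass GPU timings
```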

  2. Consoles are at least as powerful as the 2080 running this demo.
    Turn off ray tracing and you would get 60fps on consoles.

    1. Okay, and in CoD: Cold War the PS5 is faster than the 2080S, so what does that prove?
      We can both cherry-pick games, you know?

      Those two games you listed are super early PS5 versions that clearly aren't optimized for it, and I think they got better with updates.
      Those games don't need a fast CPU, so what? Some games do, some don't.

      Both the 2080 and the PS5 are around 10 TFLOPs, and the architectures used have pretty much the same perf/TFLOP.

      1. The 6700 XT has a very slightly lower TFLOP count than the 2080 Ti and very slightly less performance.

        How is that not comparable?
        It has nothing to do with Ampere…

        Do you think AMD gimped console GPUs for some reason, so they somehow perform worse than their desktop counterparts?
