The Matrix Awakens Megacity

Unreal Engine 5 The Matrix Awakens Tech Demo runs at 30-50fps on an NVIDIA RTX 3090 in 4K

A few days ago, we informed you about the release of the Megacity Sample Project (which is basically the city that Epic Games used for The Matrix Awakens demo). This Megacity uses Lumen, Nanite and MetaHumans, and showcases what Unreal Engine 5 is capable of. And, from the looks of it, this is one of the most demanding tech demos we've seen.

For starters, this tech demo is really heavy on the CPU. As such, even with high-end AMD Ryzen CPUs, a lot of gamers will encounter CPU bottlenecks, even at 4K. Things get better with Intel's latest high-end CPUs, though.

According to Beyond3D’s Andrew Lauritzen, the tech demo runs at 40-50fps when flying around on an Intel Core i9-12900K with 32GB of RAM and an NVIDIA GeForce RTX 3090.

YouTube’s AJ Gaming tested the demo on a similar system, showcasing it running at 30-50fps in 4K.

Before closing, we should note that this tech demo appears to lack any major PC optimizations, which could explain the underwhelming performance even on high-end systems. There are also major stuttering issues when driving, most likely for the same reason. Nevertheless, it’s cool witnessing this tech demo in action.

Enjoy!

Matrix Awakens City Demo on PC | i9 12900K - RTX3090

30 thoughts on “Unreal Engine 5 The Matrix Awakens Tech Demo runs at 30-50fps on an NVIDIA RTX 3090 in 4K”

  1. Is this native 4K? Because you can see RT reflections and GI being used, so at this point which game can run at 4K + RT features at 50fps without tricks like DLSS?

    1. There are some reconstruction artifacts (you can notice them when panning the camera), so my guess is that this is using TSR by default.

      1. I have tested this demo and even without TSR there are artifacts during motion. In the UE5 demo Valley of the Ancient I was able to get rid of motion artifacts by disabling motion blur, but this particular “Matrix” demo doesn’t have any menus, so I don’t know what command line to use in order to disable it. But even with motion blur disabled there will be some ghosting from Lumen GI.
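
        If the demo respects UE’s standard -ExecCmds launch switch, something like this might work for a packaged build (untested, and the executable name is just a guess based on the project’s name):

          CitySample.exe -ExecCmds="r.MotionBlurQuality 0, r.AntiAliasingMethod 2"

        r.MotionBlurQuality 0 turns motion blur off, and r.AntiAliasingMethod 2 drops from TSR back to plain TAA.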

    2. Calling DLSS a trick is f*king stupid. It’s the best tech to come out in years. Quality and Ultra Quality modes are just better than native because of TAA blur. The other modes can be used with DLDSR to find a balance between quality and framerate.

      1. Calm your t*ts bro, I never said it’s bad, I just asked if it was native 4K or using any tricks to achieve it.

  2. This demo would benefit from DirectStorage and HW decompression, because that’s the reason why even high-end CPUs struggle during movement.

    https://youtu.be/vXI1FjLOlbY

    This demo is extremely demanding, but it looks jaw-dropping as well. Hopefully even GTA6 will look like that.

    1. I read that UE5 already uses DirectStorage. That was the main reason why Epic waited to release this demo until both AMD and Nvidia supported DirectStorage in their drivers.

      Nvidia added support for DirectStorage in September 2021.
      AMD has supported it since March 14, 2022.

      “DirectStorage API promises to help developers improve the gaming experience by providing an optimized file I/O and GPU resource loading API designed for modern hardware. Due to our unique position of providing CPU, GPU, and chipsets, AMD is collaborating closely with Microsoft, storage hardware vendors such as Phison, and game studios to ensure that DirectStorage is optimized for our entire hardware and software stack”

      Of course, you need current GPU drivers, an NVMe SSD and Windows 11. Older Windows 10 supports the API only as a facade, with all the new calls translated to standard Win32 calls.
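
      For anyone curious what the API actually looks like from the code side, here is a minimal sketch of a DirectStorage file-to-GPU-buffer read, modeled on Microsoft’s public HelloDirectStorage sample (error handling omitted; the file name and sizes are placeholders):

        #include <dstorage.h>    // DirectStorage for Windows SDK
        #include <wrl/client.h>
        using Microsoft::WRL::ComPtr;

        // Assumes an existing D3D12 device and a GPU buffer large enough for the file.
        void LoadAssetViaDirectStorage(ID3D12Device* device, ID3D12Resource* gpuBuffer, UINT32 size)
        {
            ComPtr<IDStorageFactory> factory;
            DStorageGetFactory(IID_PPV_ARGS(&factory));

            ComPtr<IDStorageFile> file;
            factory->OpenFile(L"asset.bin", IID_PPV_ARGS(&file));

            // Requests are batched on a queue and handed to the OS in one Submit().
            DSTORAGE_QUEUE_DESC queueDesc{};
            queueDesc.Capacity   = DSTORAGE_MAX_QUEUE_CAPACITY;
            queueDesc.Priority   = DSTORAGE_PRIORITY_NORMAL;
            queueDesc.SourceType = DSTORAGE_REQUEST_SOURCE_FILE;
            queueDesc.Device     = device;

            ComPtr<IDStorageQueue> queue;
            factory->CreateQueue(&queueDesc, IID_PPV_ARGS(&queue));

            // The read goes straight from the file into the GPU buffer,
            // bypassing the usual Win32 read + CPU-side copy path.
            DSTORAGE_REQUEST request{};
            request.Options.SourceType      = DSTORAGE_REQUEST_SOURCE_FILE;
            request.Options.DestinationType = DSTORAGE_REQUEST_DESTINATION_BUFFER;
            request.Source.File.Source      = file.Get();
            request.Source.File.Offset      = 0;
            request.Source.File.Size        = size;
            request.Destination.Buffer.Resource = gpuBuffer;
            request.Destination.Buffer.Offset   = 0;
            request.Destination.Buffer.Size     = size;

            queue->EnqueueRequest(&request);
            queue->Submit();  // completion is normally tracked via EnqueueSignal + ID3D12Fence
        }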

      1. DirectStorage helps with reading speed and also reduces CPU usage by around 20-40% (according to MS engineers), but ultimately something has to decompress all these textures. Without HW decompression even 16c/32t CPUs struggle in this UE5 tech demo, so people get sub-60fps no matter what resolution and GPU they use.

      2. This demo doesn’t use DirectStorage, and it did not use the equivalent on Xbox or PS5 either.

        The performance issues have nothing to do with storage or I/O; Nanite does not use more than SATA 3 levels of bandwidth.

      3. I would call it 1/2 of DirectStorage currently, as its main piece is still missing: hardware decompression. Will be interesting to see what that, along with the upcoming ~14GB/s “raw” PCIe 5 SSDs, will bring to the table.

        Judging by the “up to” marketing of the consoles, I bet we’ll see claims along the lines of “up to 50GB/sec”, but in reality I would expect around 2x practical compression, so around “up to” 30GB/sec depending on the type of compression used (few games use textures containing only 0’s or 1’s :)).

        Will be interesting to see whether the GPU’s decompression throughput or the SSD/Optane drives will be the bottleneck.
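
        If/when the hardware decompression half arrives, the same DSTORAGE_REQUEST used for a plain read would presumably just carry a compression format plus an uncompressed size, with the GPU doing the inflating. A sketch of that shape (GDeflate is the GPU format DirectStorage eventually shipped; the field names below come from its public header, everything else is placeholder):

          // Same DSTORAGE_REQUEST as a plain read, plus compression fields.
          DSTORAGE_REQUEST request{};
          request.Options.SourceType        = DSTORAGE_REQUEST_SOURCE_FILE;
          request.Options.DestinationType   = DSTORAGE_REQUEST_DESTINATION_BUFFER;
          request.Options.CompressionFormat = DSTORAGE_COMPRESSION_FORMAT_GDEFLATE;
          request.Source.File.Source = file.Get();         // IDStorageFile, as in any read
          request.Source.File.Size   = compressedSize;     // bytes on disk
          request.UncompressedSize   = uncompressedSize;   // bytes after GPU decompression
          request.Destination.Buffer.Resource = gpuBuffer; // ID3D12Resource*
          request.Destination.Buffer.Size     = uncompressedSize;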

    2. Not only do you have some real anger issues and can’t formulate even a single sentence like a normal human being, but you also post totally inaccurate claims and opinions. Pretty much all your claims are pure lies, and when people ask you for any evidence to support them you stop responding. You are the biggest clown here. No sane person would say RT makes no real difference, or compare GTA5 with mods to this UE5 tech demo, but it looks like in your crazy mind everything is upside down :P.

      DirectStorage is just software, but it cuts CPU usage by around 20-40%, and that’s not my opinion. You are a clueless moron, but MS engineers know very well what they are talking about.

      https://www.tweaktown.com/news/85277/directstorage-reduces-cpu-overhead-by-40-on-windows-11/index.html

      “Also since when do we get CPU issues at 4K, esp when talking about any processor above 8C/16T since Ryzen 3000 / Intel 9th gen. It’s always a GPU situation”

      This UE5 demo destroys even 16c/32t CPUs (high CPU usage, sub-60fps and low GPU usage), let alone 8c/16t CPUs. XSX runs this demo only at 30fps, but unlike on PC I can’t see these streaming stutters, so obviously DirectStorage and HW decompression help. Unlike you, I can provide proof for my claims, so here you go: sub-60fps on an RTX 3090 with low GPU usage. This tech demo is obviously CPU-limited, and that’s because even such a high-end CPU struggles with decompression.

      https://www.youtube.com/watch?v=3yBfykhdHmI

      “And still both consoles lose out to all GPUs – Check GamersNexus PS5 benchmarks, it loses out to a 1080 rofl”

      That’s not true either. For example, Cyberpunk 2077 runs at 1440p high settings with a locked 60fps on both PS5/XSX, while even a 1080 Ti (a 30% stronger GPU than the standard GTX 1080) gets only 45-55fps at medium settings, and here’s the proof.

      https://www.youtube.com/watch?v=th7TBvMVzwA

      I’m using a standard GTX 1080 with a 2GHz OC (10TF), and there’s no way I can match XSX/PS5 settings, especially if the game is using RT.

      1. So typical of you: when my links show how wrong you are, you try to talk about something else, or you pretend it’s just a single example LOL. If you don’t believe the MS engineers you can also watch DirectStorage benchmarks. According to Linus Tech Tips’ benchmarks this API clearly improved loading times on the same hardware (around 3x in their benchmark), and on top of that it requires fewer CPU resources. MS clearly improved I/O bottlenecks on PC with DirectStorage, but I guess you will always find something to hate.

        Decompression requires significant compute resources, and that’s a fact. This UE5 “Matrix” demo clearly shows even a 16c/32t processor can’t decompress data fast enough (without bottlenecking the GPU). If hardware decompression frees up the CPU from this task, then obviously people can expect better performance. Nvidia already built HW decompression technology into their GPUs, but we still need to wait for RTX IO support in games and benchmarks.

        I don’t own current-gen consoles, however I often look at XSX/PS5 comparisons and compare them to my own results. I’m using exactly a GTX 1080, so I know your estimations about this GPU aren’t accurate. What GPU are you using? If you can match XSX/PS5 results, then it must be something much better than my GTX 1080. I know my GPU is old, but I play at 1440p max, so I still get decent performance and I’m more than happy with the GTX 1080’s results (although I plan to buy a 4060/4070 next year because I want to play with RT features).

        Dude, I don’t hate you because you always complain about everything. In fact, complaining can be a good thing, because only then will companies like MS, Intel or Nvidia listen and improve their products. I hate you because you behave like a jerk and your opinions are far from accurate (and that’s politely speaking). Listen man, I rarely reply to your comments, but I can see you always fighting with people here. Can’t you see there’s a problem with you?

    3. Nanite does not use or need more than SATA 3 bandwidth.
      This demo was built on a beta version of UE5 and it lacks any optimizations on PC; they did some for the consoles.
      DF tested this demo on a gimped NVMe made to run at SATA 3 speeds and it was identical, which is not surprising, as Nanite does not stream that much data.

      1. “DF tested this demo on a gimped NVME made to run at SATA 3 speed”

        An NVMe drive capped at 500 MB/s will still be 2-3x faster than SATA 3 at 500 MB/s when you use DirectStorage. DirectStorage is low-level access to NVMe with much lower input latency and CPU usage than the old single-threaded SATA 3 protocol.

      2. It isn’t just about bandwidth, but the performance implications. This demo is destroying even high-end CPUs because of the decompression task.

    4. Yeah, some GPUs have HW decompression ready, just waiting for the API to enable that feature. Will be fun to see what PCIe 5 SSDs/Optane + HW decompression will bring to the table for high-end PC gaming, and I suspect that will happen before the typical game takes advantage of it anyhow… many still code for the laughable storage of the PS4, old Xboxes and potato PCs with their packaged approach, so once that is gone and HW decompression is enabled and used in DirectStorage-enabled titles, we are in for a treat.

    1. 1440p was confirmed by the author of this video. This is native 1440p with UE Temporal Super Resolution disabled.

      “Hi, Did u do pixel counting for this? I wonder if TSR is on”
      “Yes I did, it is 1440p. By the looks of it – it isn’t on”
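
      (For reference, pixel counting works by measuring how many output pixels one rendered pixel spans along a hard edge: at 4K output, if each staircase step on a near-vertical edge is 1.5 output pixels tall, the internal resolution is 2160 / 1.5 = 1440 lines, i.e. 1440p; steps of exactly 1 pixel would mean native 4K.)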

  3. 30-35 fps on a Ryzen 5800X + RTX 3070 Ti system @ 2560×1440 res… absolutely heavy and unoptimized!

    1. This isn’t a game. This is a graphics demo. This is basically “look at what this engine can do and run in real time”.

      You can’t say this isn’t impressive. Cannot wait to see what AAA studios can do with this in four years.
