
The Last of Us, Uncharted 2 & Uncharted 3 see huge performance improvements in RPCS3

The RPCS3 team has released a new video highlighting some major performance improvements in three Naughty Dog games: The Last of Us, Uncharted 2 and… wait for it… Uncharted 3.

RPCS3 is the best PlayStation 3 emulator currently available for PC. Its team has been constantly improving overall performance and game compatibility, and we may soon be able to play these three games on PC.

Now do note that these major performance improvements come from new patches/hacks, which players will have to apply in order to get the following performance boosts. Moreover, these patches disable some graphical effects, so that's something that may disappoint some of you. And while these games can run at higher framerates than on the PS3, they are not yet fully playable; according to the RPCS3 team, there are still some unavoidable crashes.

Nevertheless, it's great to see these games running at higher framerates in RPCS3. Will we be able to fully enjoy them by the end of 2020? Well, we hope so, though nothing is certain at this point.

[youtube]https://youtu.be/VgHb6ITdOtY[/youtube]

Last but not least, Aphelion Gaming has shared a video showing the settings you'll need to use in order to play The Last of Us on PC.

Have fun!

How to Play The Last of Us on RPCS3 - New Patches, Settings, and Performance

32 thoughts on “The Last of Us, Uncharted 2 & Uncharted 3 see huge performance improvements in RPCS3”

  1. Asking from pure ignorance, but wouldn't it be easier to emulate The Last of Us from the PS4, given that it has a far more similar architecture to PCs? Is it a development or a legal problem? I'd love to know.

    1. I mean, even Naughty Dog themselves had to change much of the underlying code to port it to the PS4; sounds like they did half the job for us.

    2. Because there’s no architecture manual for the PS4’s CPU, whereas the PS3 uses the IBM Cell Broadband Engine CPU, which has a fully public architecture manual.

      That’s why development of the Xenia Xbox 360 emulator isn’t as fast.

    3. The OS and game code are still always going to be the issue to emulate.
      And then there is the hardware necessary to run the emulator:
      emulators usually require hardware with more than 5-10x the power of the base console they emulate.

    4. No, emulation means emulating the entire hardware on top of yours; this is why performance is so bad. The only reason PS3/X360 emulation is progressing so fast is the low-level Vulkan API.

      1. That may be right for RPCS3, but Xenia progressed mostly through its DX12 renderer, which is low-level too, and Cemu progressed nicely with OpenGL years before the Vulkan API was added. The other reason is that current CPUs are powerful enough to brute-force a lot of the limitations, as these consoles are now pretty old.

    5. The fact that it’s an x86 architecture does not make it any easier to emulate.

      It still uses custom-made AMD hardware, which has significant differences from its desktop counterparts, and a unified memory design, which is not a thing on PC.

      It’s completely closed source, and unlike the PS3 and Xbox 360, there is zero documentation on the architecture.

    6. Reading your comments reminds me of a guy who tried to port Beyond: Two Souls from PS3 to PC. He had already finished 60-70% of the project before Quantic Dream stopped him. Based on that, I wonder if it would be possible to port TLoU to another engine rather than emulate it…

    7. From the developer of the PS4 emulator Orbital:

      This is a misconception that appears often in threads about PS4 and Xbox emulation. Even though these consoles share the same architecture as the PC, I’m afraid it’s not that simple. There are two approaches for us:

      Translation: In most host userland environments it is not possible to directly execute certain guest x86 instructions (not even some guest userland instructions!). You might consider just patching those few incompatible instructions, but it’s impossible to fit a substitute in the few bytes they originally occupy, and if you go beyond that size you’ll need to patch relocations and relative offsets as well. Eventually, regardless of HLE or LLE, you’ll end up resorting to full translation (e.g. lifting to LLVM), which is essentially what other non-x86 emulators, including PS3 ones, do. Writing an x86 translator is not significantly easier than writing a PPC+SPU translator.
      Virtualization: To prevent translation overhead, some architectures, including x86, offer extensions for hardware-accelerated virtualization. Since both host and guest share the same architecture, we can use this to run the PS4 kernel. However, writing a hypervisor is definitely not an easy task: their code is largely specific to the host kernel and runs in ring 0. Debugging them is a nightmare, and if something fails, your kernel panics. Luckily there are a few existing hypervisors for x86, but none of them worked with the PS4 kernel out of the box. I needed to work on Intel HAXM for 6 months to get it working, which is comparable to the time I’d have needed to write a JIT translator for x86.
      So, in summary, CPU-wise there are no significant advantages to having guest x86 code (PS4) compared to other architectures. If anything, it will be negligibly easier…
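
      [Editor's note: to make the "Translation" route above concrete, here is a deliberately tiny sketch in C (not Orbital's code) of the decode-and-reimplement idea: instead of letting the host CPU execute guest bytes directly, the emulator decodes each guest instruction and runs equivalent host code. A real translator would lift to an IR such as LLVM and emit host machine code rather than interpret, but the shape is the same. The five-byte guest program is hypothetical, chosen purely for illustration.]

        /* Toy "translation" sketch: decode guest x86 bytes and run
         * equivalent host code, rather than executing them directly.
         * Illustration only -- real emulators lift to an IR and JIT. */
        #include <stdint.h>
        #include <stdio.h>

        typedef struct { uint8_t al; int halted; } Guest;

        /* Decode one guest instruction, perform its effect on the
         * guest state, and return how many bytes it occupied. */
        static int step(Guest *g, const uint8_t *code) {
            switch (code[0]) {
            case 0xB0: g->al = code[1]; return 2;   /* mov al, imm8 */
            case 0xE6:                              /* out imm8, al */
                printf("guest wrote 0x%02x to port 0x%02x\n", g->al, code[1]);
                return 2;
            case 0xF4: g->halted = 1; return 1;     /* hlt */
            default:                                /* unhandled opcode */
                fprintf(stderr, "unknown opcode 0x%02x\n", code[0]);
                g->halted = 1; return 1;
            }
        }

        int main(void) {
            /* Guest program: mov al, 0x42; out 0x10, al; hlt */
            const uint8_t code[] = { 0xB0, 0x42, 0xE6, 0x10, 0xF4 };
            Guest g = {0};
            for (int pc = 0; !g.halted; pc += step(&g, &code[pc]))
                ;
            return 0;
        }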
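
      [Editor's note: as a similarly hedged illustration of the "Virtualization" route (again, not Orbital's code), the same five guest bytes can be run under hardware virtualization through Linux's /dev/kvm API. This sketch assumes a Linux host with VT-x/AMD-V and omits all error handling; a real PS4 hypervisor would additionally have to cope with the guest kernel, devices and memory layout, which is where the difficulty the developer describes comes in.]

        /* Minimal KVM sketch: run a few bytes of guest x86 code under
         * hardware-accelerated virtualization. Linux-only; no error checks. */
        #include <fcntl.h>
        #include <linux/kvm.h>
        #include <stdint.h>
        #include <stdio.h>
        #include <string.h>
        #include <sys/ioctl.h>
        #include <sys/mman.h>

        int main(void) {
            /* Same hypothetical guest program: mov al, 0x42; out 0x10, al; hlt */
            const uint8_t code[] = { 0xB0, 0x42, 0xE6, 0x10, 0xF4 };

            int kvm = open("/dev/kvm", O_RDWR);
            int vm  = ioctl(kvm, KVM_CREATE_VM, 0);

            /* One page of guest "RAM" at guest-physical address 0x1000. */
            void *mem = mmap(NULL, 0x1000, PROT_READ | PROT_WRITE,
                             MAP_SHARED | MAP_ANONYMOUS, -1, 0);
            memcpy(mem, code, sizeof code);
            struct kvm_userspace_memory_region region = {
                .slot = 0,
                .guest_phys_addr = 0x1000,
                .memory_size = 0x1000,
                .userspace_addr = (uint64_t)mem,
            };
            ioctl(vm, KVM_SET_USER_MEMORY_REGION, &region);

            int vcpu = ioctl(vm, KVM_CREATE_VCPU, 0);
            int run_sz = ioctl(kvm, KVM_GET_VCPU_MMAP_SIZE, 0);
            struct kvm_run *run = mmap(NULL, run_sz, PROT_READ | PROT_WRITE,
                                       MAP_SHARED, vcpu, 0);

            /* Start the vCPU in 16-bit real mode at our code. */
            struct kvm_sregs sregs;
            ioctl(vcpu, KVM_GET_SREGS, &sregs);
            sregs.cs.base = 0; sregs.cs.selector = 0;
            ioctl(vcpu, KVM_SET_SREGS, &sregs);
            struct kvm_regs regs = { .rip = 0x1000, .rflags = 0x2 };
            ioctl(vcpu, KVM_SET_REGS, &regs);

            /* The guest code runs natively; port I/O exits back to us. */
            for (;;) {
                ioctl(vcpu, KVM_RUN, 0);
                if (run->exit_reason == KVM_EXIT_HLT) break;
                if (run->exit_reason == KVM_EXIT_IO) {
                    uint8_t *data = (uint8_t *)run + run->io.data_offset;
                    printf("guest wrote 0x%02x to port 0x%x\n",
                           *data, run->io.port);
                }
            }
            return 0;
        }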

      However, the real problem lies in the GPU; it has absolutely nothing to do with the Nvidia RSX used by the PS3. It manages to blow my mind every single day. There are multiple CUs, each running multiple wavefronts, where one of them can have up to 256 vector registers, each 2048 bits wide (64 dwords). Resource constants can be fetched dynamically from memory, and even the GCN bytecode for shaders could be self-modifying (although I haven’t seen this in practice). There are so many stages, which are nearly impossible to replicate on any existing graphics API. There are compute shaders. There are so many workarounds I needed to implement to get to this point, some ugly, some somewhat elegant. But yeah, my point is, none of this ever existed on the PS3 (and I never had to deal with such complexity back when I worked on RPCS3).

      As you can see, GPU-wise things have become way more complex than on the PS3, or any previous console.

      1. Yep, ironic how the studio was good and not politicized when they had an actual woman writing their games…

        1. I can see why you think that, but the overwhelming majority of people still believe in Valve and CD Projekt RED. Can’t say the same about other once top-tier developers.

          1. People stopped believing in Valve after they realized there won’t be an HL2: Episode 3 or HL3. And the few who still believed stopped after Artifact.

        2. CDP has GOG, and so far all four of their games are good, so we have to give them the benefit of the doubt on CP2077. As for Valve, even though they were absent as a dev for nearly a decade, they took the PC platform to a stratospheric level with Steam, and HLA is a blast for anyone who owns a VR headset.

          1. Maybe for Valve, but we don’t really know about CDP; the last big game they made is from 2015.

    1. Don’t worry about TLOU and UC, you’re not missing anything. In this video they’re running on multiple PCs, including an i7 8700K and an i9 9900K, both OCed to death, and yet they’re still running like crap in multiple places, and with AO disabled. And TLOU is not playable from start to finish yet.

    2. RPCS3 is actually quite light on minimum specs, depending on the game. My i3 6100 could run Skate 3 very well, Ninja Gaiden Sigma also runs perfectly, and Demon’s Souls is also quite smooth. Give it a try and you will be surprised; don’t expect first-party games to run without problems, though.
