GDC 2016 – New Video Showcases NVIDIA’s PhysX GPU Rigid Body Simulation

PhysXInfo has shared a video showing a real-time demonstration of NVIDIA's new hybrid CPU/GPU rigid body solver, based on the experimental branch of PhysX SDK 3.4. As stated, GPU acceleration (as showcased in the video) can provide a significant performance boost in heavy scenes, while offering the same behavior and features as the CPU execution path.

As PhysXInfo claimed:

“PhysX GRB is the new GPU accelerated Rigid Body simulation pipeline. It is based on heavily modified branch of PhysX SDK 3.4, but has all the features of the standard SDK and almost identical API. PhysX GRB is currently utilizing CUDA and requires NVIDIA card for GPU acceleration.”
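Since GRB is described as having "almost identical API" to the standard SDK, the setup presumably mirrors how GPU rigid bodies are enabled in the released PhysX 3.4: opting in per scene with a GPU broad phase and a CUDA dispatcher. A minimal sketch, assuming the public PhysX 3.4 headers (names such as `PxSceneFlag::eENABLE_GPU_DYNAMICS`, `PxBroadPhaseType::eGPU` and `PxCreateCudaContextManager` come from that SDK, not from this article):

```cpp
// Hedged sketch: creating a GPU-accelerated rigid body scene with PhysX 3.4.
// Requires the PhysX 3.4 SDK and a CUDA-capable NVIDIA GPU.
#include <PxPhysicsAPI.h>

using namespace physx;

PxScene* createGpuScene(PxPhysics& physics, PxFoundation& foundation)
{
    // The CUDA context manager drives the GPU simulation pipeline.
    PxCudaContextManagerDesc cudaDesc;
    PxCudaContextManager* cudaCtx = PxCreateCudaContextManager(foundation, cudaDesc);

    PxSceneDesc sceneDesc(physics.getTolerancesScale());
    sceneDesc.gravity = PxVec3(0.0f, -9.81f, 0.0f);
    sceneDesc.cpuDispatcher = PxDefaultCpuDispatcherCreate(4);
    sceneDesc.gpuDispatcher = cudaCtx ? cudaCtx->getGpuDispatcher() : nullptr;

    // Opt in to GPU rigid body dynamics and the GPU broad phase.
    sceneDesc.flags |= PxSceneFlag::eENABLE_GPU_DYNAMICS;
    sceneDesc.broadPhaseType = PxBroadPhaseType::eGPU;

    // With no GPU dispatcher available, the SDK falls back to CPU execution,
    // which matches the "same behavior and features" claim above.
    return physics.createScene(sceneDesc);
}
```

The per-scene flag is what makes this a hybrid pipeline: the same API and behavior, with the solver work moved to the GPU only when the hardware supports it.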

Yes yes, another NVIDIA-exclusive feature. Still, it does look impressive, so enjoy the video!

PhysX GPU Rigid Body simulation

31 thoughts on “GDC 2016 – New Video Showcases NVIDIA’s PhysX GPU Rigid Body Simulation”

    1. That’s the kicker, they have all of these amazing tech demos but I never see them implemented anywhere. Nvidia has some really incredible fluid simulation demos, but that tech has never touched a game.

      1. Yeah, I was just about to say the same thing. We've had some really great-looking physics for many years now, but games barely use them; at best they use maybe 5–10% of what we're usually shown in these videos. A shame, really.

        1. The problem is that PhysX was very badly optimized for many years. It's probably much better today, but it still doesn't support most of the market, so if you want to make a game you can use PhysX, but on most devices you'll have to fall back to CPU-only execution, and that limits you quite heavily. And if your game is cross-platform (consoles, PCs with AMD/Intel GPUs, Linux, Android), you can't really use GPU PhysX as your general physics engine; it always has to be an effects layer on top of the base physics. But that's what happens to tech that's artificially limited to specific hardware in open-platform or cross-platform development.

          1. “still they don’t support most of the market”

            AMD is not most of the market. 🙂

          2. PhysX being badly optimized is a very old story. Even the late versions of PhysX 2 weren't that bad. To date, PhysX is most likely the only physics engine able to rival Havok's performance; it got good enough that Havok started making public announcements about which games and engines use their solution (they didn't bother with that in the past, given their dominance). Also, developers in general aren't interested in GPU-accelerated physics. AMD admitted this publicly when their Bullet physics initiative (using OpenCL) was completely ignored by developers.

          3. I didn't think of that. Being limited to NVIDIA sure is a sucky thing. The consoles have their wars, we got the gfx card wars -_-
            At least we don't have any NVIDIA-only games.

            Lol @ the card-exclusive games idea. I hope that never happens.

          1. Yeah, it’s surprising that a game which isn’t so good-looking to begin with has one element that is ahead of nearly every title.

        1. Fallout 4 uses it for weapon debris only. I'm talking about really using it for tons of stuff, not just minor visuals.

    2. Wreckfest and BeamNG have soft-body flex physics, with BeamNG being far more accurate but also far heavier to run. The difference is that they both still use the CPU for all the calculations.

  1. Dedicated PhysX is not a form of multi-GPU, hence they didn't need an SLI-certified motherboard to begin with. They just offload the PhysX calculations to a specific GPU instead of running them on the primary one.

  2. And since PhysX still only has CUDA-based compute, not DirectCompute or OpenCL, it cannot be used as a general physics engine for most of the market, not at the same quality across all of it, anyway. Which is a shame, really; the whole GameWorks policy is a shame for the gaming community.

      1. Or you can keep them and expect them to be used only by those who pay for it. I see no problem there; I'm just stating a fact.

    1. A PC game. Oh wait! Shame that all we get now are console ports. So yeah, maybe in 15 years console developers will give a damn about this tech. Right now it's: consoles cannot support this, so there's no point adding it.

  3. I wish they showed this off in-engine, in UE4 for example, on a properly rendered scene built out of cement, wood, glass and rocks instead of stacked Legos.

  4. 0:38

    Mark Hanna: No, what do you do? You get another
    brilliant idea, a special idea. Another situation, another stock to
    reinvest his earnings and then some. And he will, every single time.
    Mark Hanna: Keep the client on the ferris wheel, and it
    goes, the park is open twenty-four, seven, three, six, five. Every
    decade, every Goddamn century. That’s it.
    [they clink their glasses together to toast]
    Mark Hanna: Name of the game.

  5. NVIDIA does do async with Maxwell 2; they just went for efficiency instead of dedicated async hardware units on their die, which at the time no one was using. There's simply no need to spend die area on stuff that isn't used in the API.

    1. Are you serious? Stop shilling for NVIDIA everywhere you go. First YouTube, now here. You do know Maxwell cards were sold as DX12 cards, right? They even went a step further to support feature level 12_1, so what the hell are you talking about when you say there's no need to spend die area on things not used in the API? You do know Asynchronous Compute is part of DX12_0, right?
