What has happened is that the nVidia Developer Forum, where I'm a minor member, has in fact given me tech support. This is what they wrote:
Posted on 4/15/2013 9:57 AM
Dirk,
Apex 1.0 PhysX Lab does not have any GPU-accelerated physics, so it will always run in "CPU mode."
If you want PhysX to run on the GPU and you are developing software (as you indicate), there are a few steps that may be required.
When initializing the SDK:
1. Clear the NX_SDKF_NO_HARDWARE bit in the NxPhysicsSDKDesc::flags member.
2. Set the GPU Heap Size: NxPhysicsSDKDesc::gpuHeapSize = 64;
Then, when you create a fluid or cloth effect, ensure that you set the hardware flags in those descriptors:
1. Set the NX_FF_HARDWARE bit in the NxFluidDesc::flags.
2. Set the NX_CLF_HARDWARE bit in the NxClothDesc::flags.
Also, if your _old_ PhysX 2.8.4 is _old enough_, it may not have support for the newest hardware, so you may want to update it.
I have followed all their advice, and now my NxOGRE demos, which were actually Betajaen's demos,
do run with Cloth and Fluid in GPU mode.
I did catch the fact that under PhysX 2.8.x, only cloth and fluids will receive H/W support, because they use the particle-oriented method of doing physics (which conventional rigid bodies do not). And soft bodies are really an extension of cloth. Also, since my frame rates for the OGRE demos tend to be in the thousands of FPS, under virtually no load, it's understood that they won't change much if I switch on H/W support. But for any more serious uses, this could still come in handy.
What I've done now is to add my own custom 'unsigned int' member in NxOgreWorldDescription.h, which I named 'mGpuHeapSize', and then in NxOgreWorldDescription.cpp , I added a default of 64 (MB) for mGpuHeapSize to the function WorldDescription::reset() .
Next, in NxOgreWorld.cpp I put:
NxPhysicsSDKDesc sdk_description;
sdk_description.cookerThreadMask = description.mCookerThreadMask;
sdk_description.flags = 0;
sdk_description.flags |= NX_SDKF_PER_SCENE_BATCHING;
sdk_description.flags |= NX_SDKF_NO_HARDWARE;
if ( !(description.mNoHardware) ) {
    // NX_SDKF_NO_HARDWARE was set just above, so XOR-toggling it clears it.
    sdk_description.flags ^= NX_SDKF_NO_HARDWARE;
    sdk_description.gpuHeapSize = description.mGpuHeapSize;
}
And finally, in NxOgreCloth.cpp, NxOgreFluid.cpp and NxOgreSoftBody.cpp respectively, I simply added a line thus:
NxClothDesc desc;
::NxOgre::Functions::PrototypeFunctions::ClothDescriptionToNxClothDesc(description, desc);
desc.flags |= NX_CLF_HARDWARE;
(...)
fd.flags |= NX_FF_HARDWARE;
(...)
NxSoftBodyDesc desc;
::NxOgre::Functions::PrototypeFunctions::SoftBodyDescriptionToNxSoftBodyDesc(description, desc);
desc.flags |= NX_SBF_HARDWARE;
to avoid requiring a per-object, redefined ClothDescription, FluidDescription, or SoftBodyDescription object. Yet I felt that the need might arise to set a non-default GPU Heap Size. And finally, I changed the default for mUseHardware to true in SceneDescription::reset(). And then, amazingly, I also discovered that my header-file declaration and source-file use of WorldDescription fully work, implying no silly mistakes.
Dirk
P.S. Now, one plausible reason not to have SceneDescription::reset() activate mUseHardware by default would crop up if any part of NxOgre were to call this function outside of initializing a default Scene Description, for example if an animation had crashed and wanted to restart. Yet, because my code lops off 64MB of GPU heap anyway, regardless of whether hardware is going to be used, I decided to have it enabled by default.
The NxOgre demos would all have overridden it to false as-is, for which reason I edited each one.