Well this is interesting. PhysX/CUDA on ATI?

betajaen

28-06-2008 08:53:28

Jerusalem (Israel) – This one did not take long: We already knew that Nvidia is working on a CUDA version for x86 CPUs, but said it would leave a modification for ATI GPUs to others. Eran Badit of NGOHQ.com told us that he has done it already and was able to get the Nvidia PhysX layer to run on ATI Radeon cards.

http://www.tgdaily.com/html_tmp/content ... 7-135.html

- I suppose this begs the question: would nVidia make CUDA an open standard like OpenGL, rather than something tied to nVidia cards, and allow it on competitors' hardware?

ncomputerm4

28-06-2008 16:37:10

It would only stand to make PhysX stronger, and seeing as nVidia holds PhysX and can change whatever they need to make it better for nVidia's cards (while still working on ATI's), why not? The new marketing angle would be not only the best graphics, but "Games run better with nVidia".
I hope they are smart about it :D

kojack

29-06-2008 07:27:26

So far the only proof of existence is a single screenshot posted by the author. He refuses to give the port out because AMD has annoyed him recently and he doesn't want to help them, and because they won't give him free ATI cards.

Until someone actually sees it running (not just a pic which shows the physx control panel with the physx logo replaced with the author's website logo, which is probably a legally iffy action), I'll remain a bit skeptical. But if it's true, why isn't the geforce 8800 supported by nvidia???
If an ATI card can run it, why does nvidia only let the 9800 gtx and gtx 2x0 cards work with this driver release? Shouldn't cuda be pretty much the same on them all (besides the number of shader units used)?

Of course I think physx needs to be supported by ATI as well as nvidia. Otherwise game makers will just use it for special effects instead of game altering main physics. But without any verifiable proof, so far it just looks like an easy method to bump the page hits on some website I've never heard of. The author said he might release it in a week or 2, so maybe we'll see soon if it's real.

betajaen

29-06-2008 07:35:17

Thing is; I've always understood that with a graphics card, putting things in is faster than getting things out. It's a device to shove vertices, faces, and textures into, and you're not concerned about anything else.

So how would CUDA manage to get all the physics data (nearly 60 times a second I may add) back to the processor to be cleaned up and then rendered? Speed may be the issue and perhaps that's why it's on the newer cards.

kojack

29-06-2008 09:58:50

There are 2 main factors: the bus and stalling.
Older buses sucked at retrieving data from a graphics card. PCI and PCI Express aren't just for graphics, they handle other stuff like network and sound cards too, so they handle bi-directional transfers much better than something like AGP.
Stalling comes from the graphics card queuing up operations. When you tell the card to render something, it might reorder the operation or put it off for later in order to optimise things. But when you try to read back a pixel, the card has to flush all pending operations to get the final result, causing a stall while the card catches up.

Cuda on a geforce typically runs over pci-express, and since it isn't built on a standard gfx API, it can handle readbacks to the cpu much better.
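Roughly, the explicit readback described above looks something like this in CUDA. This is just a minimal sketch with made-up names (nothing to do with the actual PhysX kernels): launch a kernel, then pull the results straight back over the bus with cudaMemcpy instead of going through a glReadPixels-style pipeline flush.

```
// Minimal, illustrative sketch of a CUDA kernel launch followed by an
// explicit device-to-host readback of the results.
#include <cuda_runtime.h>
#include <cstdio>

__global__ void integrate(float* pos, const float* vel, float dt, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        pos[i] += vel[i] * dt;   // trivial "physics" step per element
}

int main()
{
    const int n = 1024;
    float hostPos[n], hostVel[n];
    for (int i = 0; i < n; ++i) { hostPos[i] = 0.0f; hostVel[i] = 1.0f; }

    float *devPos, *devVel;
    cudaMalloc(&devPos, n * sizeof(float));
    cudaMalloc(&devVel, n * sizeof(float));
    cudaMemcpy(devPos, hostPos, n * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(devVel, hostVel, n * sizeof(float), cudaMemcpyHostToDevice);

    integrate<<<(n + 255) / 256, 256>>>(devPos, devVel, 1.0f / 60.0f, n);

    // Explicit readback: this is the per-frame device-to-host transfer the
    // physics results would need (it also waits for the kernel to finish).
    cudaMemcpy(hostPos, devPos, n * sizeof(float), cudaMemcpyDeviceToHost);

    printf("pos[0] after one step: %f\n", hostPos[0]);
    cudaFree(devPos);
    cudaFree(devVel);
    return 0;
}
```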


Hmm, while reading the latest Cuda programming guide just now, I spotted something I haven't heard mentioned anywhere else yet: the new geforce gtx 260 and 280 (damn nvidia messing with naming formats again) support double precision floats! Cool.
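For what it's worth, a hedged sketch of what that enables (illustrative kernel, not from PhysX or the programming guide): on those cards a kernel can use real 64-bit doubles, provided it's compiled for compute capability 1.3; on older GeForces the doubles get silently demoted to single precision.

```
// Illustrative only: a kernel using 64-bit doubles. Build with something like
//   nvcc -arch=sm_13 -c double_demo.cu
// to keep real doubles on a GTX 260/280; older targets demote them to float.
__global__ void accumulate(const double* values, double* sum, int n)
{
    // naive single-thread reduction, just to show the double type in use
    if (threadIdx.x == 0 && blockIdx.x == 0)
    {
        double total = 0.0;
        for (int i = 0; i < n; ++i)
            total += values[i];
        *sum = total;
    }
}
```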

betajaen

29-06-2008 10:16:07

Ahh, that certainly clears things up.

And I agree. CPU, GPU and Motherboard manufacturers should name their products based on the year.

kojack

29-06-2008 12:30:00

But then you get products like Max 2009, which is out now because they've already had a Max 2008. :)

Nvidia have always loved changing the meanings of the extensions around (the gts, gtx, gs, mx, etc.), but the number was always the major part and the extension separated different models of the same family. Now they've redefined gtx as the family name and use the number for the card (gtx 280). That's rather stupid.

But enough name ranting, back to physics. :)

betajaen

29-06-2008 12:55:15

But I love ranting as much as physics!

Bob

29-06-2008 15:49:52

PhysX will run on G80 cards, but one week later than on the new cards.

:wumpus:

29-06-2008 19:49:25

"So far the only proof of existence is a single screenshot posted by the author. He refuses to give the port out because AMD has annoyed him recently and he doesn't want to help them, and because they won't give him free ATI cards."
Indeed!

As someone who has quite a bit of experience working with CUDA, I was really, really surprised when I heard that someone had managed to port CUDA to ATI. I mean, it's not impossible, but the author stating that it is 'easy to activate PhysX on ATI' makes me wonder if he isn't just kidding us.

An added complication is that PhysX for CUDA is compiled CUDA code; they don't give out the source, AFAIK. This complicates porting enormously. The architectures are also a bit different between ATI and NVidia, so this would be like converting ARM machine code to x86 without performance loss. Possible, sure, but "easy"?!?! It's certainly more than 'rewriting a few GPU calls'.
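To illustrate the point (with a made-up kernel, nothing to do with PhysX itself): nvcc lowers CUDA source to PTX and then to a GPU-specific cubin, and if the GPU PhysX layer really does ship only in that compiled form, a port to ATI would have to start from NVIDIA-specific binaries rather than from C-like source such as this.

```
// Illustrative only: a trivial kernel (saxpy.cu). nvcc lowers this C-like
// source to PTX (NVIDIA's virtual ISA) and then to a cubin binary for a
// specific GPU generation, e.g.
//   nvcc -ptx   saxpy.cu -o saxpy.ptx
//   nvcc -cubin saxpy.cu -o saxpy.cubin
// A library distributed only in compiled form gives a would-be porter the
// PTX/cubin to work from, not source like this.
__global__ void saxpy(float a, const float* x, float* y, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}
```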

We want proof :) I'd be more happy with a screenshot of his program converting a CUDA kernel to ATI.

kojack

29-06-2008 21:51:46

I've never used cuda, but I've skimmed the docs a few times. I thought it did direct low level access to the gpu, to bypass all the shader stuff (that's how it gets around all the cg/hlsl limits). If so, porting would be insanely painful, since ATI's gpu wouldn't be anything like a Nvidia gpu internally. If on the other hand cuda was just a wrapper for shader assembly, then porting would be easier since both gpus are more compatible at that level.

Imagine the physics a Nvidia Tesla S1070 could do! :)
It's the one with 960 cores, 4 teraflops of performance and 16GB of ram.

syedhs

30-06-2008 06:39:50

This is definitely a good move by Nvidia. I could smell something not right back when Ageia began promoting the PhysX card. The market simply wasn't there, in the sense that you had to buy a whole new card just for it. People are generally lazy, and that is what prompts them to use IE (whose icon can be found on most Windows desktops) instead of Netscape, which they would have to download first.

If ATI (aka AMD) is not careful, ATI may end up as simply a supplementary 3D chip for AMD, meaning the card gets positioned as a low-cost solution integrated with AMD's chips, maybe like the Intel GMA series but a lot better of course. If that happens, Nvidia will be the undisputed monopolist in the 3D accelerator business, which is not good for consumers. And some games may offer a 'Physics Accelerator' option, very much like what buyers of Ageia's PPU got.

But for the foreseeable future, I like how things are going. :)