GC_PaNzerFIN
Work and study @ Technical Uni
+528|6693|Finland

Look what nVIDIA just did!
bye bye competition in the physics business....

http://www.nvidia.com/object/io_1202161567170.html
3930K | H100i | RIVF | 16GB DDR3 | GTX 480 | AX750 | 800D | 512GB SSD | 3TB HDD | Xonar DX | W8
NeXuS
Shock it till ya know it
+375|6620|Atlanta, Georgia
So my physics card means nothing?
GC_PaNzerFIN
Work and study @ Technical Uni
+528|6693|Finland

NeXuS4909 wrote:

So my physics card means nothing?
yeah... pwnt... the 8800 series has no problem doing your PhysX card's job
3930K | H100i | RIVF | 16GB DDR3 | GTX 480 | AX750 | 800D | 512GB SSD | 3TB HDD | Xonar DX | W8
elbekko
Your lord and master
+36|6680|Leuven, Belgium
Sweet.
Byebye ATI now
Naturn
Deeds, not words.
+311|6884|Greenwood, IN

elbekko wrote:

Sweet.
Byebye ATI now
I would not be so sure of that. They've got some good stuff in the works.
Gooners
Wiki Contributor
+2,700|6911

NVidia made the world's first GPU?
trippy982
Member
+34|6676

rdx-fx wrote:

Just a formality, so there weren't any troublesome patent issues for Nvidia, in my opinion.

Graphics cards excel as specialized mathematical processors (matrix mathematics and matrix manipulation being foremost on the list).

Graphics card = specialized math coprocessor.

Nvidia adding a few more transistors to do a few physics equations isn't any sort of a stretch for them.


Put another way:
My TI-89 calculator, built around the same Motorola 68000 that powered the original Apple Macintosh, plus a small math coprocessor, can do differential calculus equations.  An Nvidia GPU has a few orders of magnitude more transistors.  [TI-89 CPU = 70,000 transistors.  Nvidia 8800 GPU = 681 million transistors.]  Adding some transistors to do a little physics math?  Not even a stretch for Nvidia.

Yet a 3rd perspective:
You can already do SETI@Home number crunching on your Graphics card.
If you can do that math already, basic Newtonian collision physics isn't a stretch.
Um....so what u sayin'??!?1  you saying they gonna come out with geForce cards for teh calculat0rz?
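
(rdx-fx's point is easy to make concrete. Below is a minimal sketch, assuming CUDA and a toy particle system, of the kind of basic Newtonian math a GPU chews through in parallel; the array names and sizes are illustrative, not from any real engine.)

Code:

#include <cuda_runtime.h>

// One thread integrates one particle under gravity: exactly the sort of
// simple, massively parallel arithmetic a GPU is built for.
__global__ void integrate(float3 *pos, float3 *vel, int n, float dt)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    vel[i].y -= 9.81f * dt;      // constant gravitational acceleration
    pos[i].x += vel[i].x * dt;   // explicit Euler position update
    pos[i].y += vel[i].y * dt;
    pos[i].z += vel[i].z * dt;
}

int main()
{
    const int n = 1 << 20;       // a million particles (illustrative)
    float3 *pos, *vel;
    cudaMalloc(&pos, n * sizeof(float3));
    cudaMalloc(&vel, n * sizeof(float3));
    cudaMemset(pos, 0, n * sizeof(float3));
    cudaMemset(vel, 0, n * sizeof(float3));

    // Step the whole system once at 60 Hz.
    integrate<<<(n + 255) / 256, 256>>>(pos, vel, n, 1.0f / 60.0f);
    cudaDeviceSynchronize();

    cudaFree(pos);
    cudaFree(vel);
    return 0;
}
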
aimless
Member
+166|6403|Texas
I know Nvidia's CUDA technology can be used to render audio if you use a digital audio workstation, e.g. Ableton Live or FL Studio...
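
(As a hedged illustration only: the simplest audio operation you could push through CUDA is a gain stage over a sample buffer, as below. This is a sketch of the general idea, not how Ableton Live or FL Studio actually use the GPU.)

Code:

#include <cuda_runtime.h>

// Scale every PCM sample by a gain factor; each thread handles one sample.
__global__ void apply_gain(float *samples, int n, float gain)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        samples[i] *= gain;
}

int main()
{
    const int n = 44100;            // one second of mono audio at 44.1 kHz
    float *buf;
    cudaMalloc(&buf, n * sizeof(float));
    cudaMemset(buf, 0, n * sizeof(float));

    apply_gain<<<(n + 255) / 256, 256>>>(buf, n, 0.5f);  // roughly -6 dB
    cudaDeviceSynchronize();

    cudaFree(buf);
    return 0;
}
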
unnamednewbie13
Moderator
+2,054|7050|PNW

rdx-fx wrote:

trippy982 wrote:

rdx-fx wrote:

Just a formality, so there weren't any troublesome patent issues for Nvidia, in my opinion.

Graphics cards excel as specialized mathematical processors (matrix mathematics and matrix manipulation being foremost on the list).

Graphics card = specialized math coprocessor.

Nvidia adding a few more transistors to do a few physics equations isn't any sort of a stretch for them.


Put another way:
My TI-89 calculator, built around the same Motorola 68000 that powered the original Apple Macintosh, plus a small math coprocessor, can do differential calculus equations.  An Nvidia GPU has a few orders of magnitude more transistors.  [TI-89 CPU = 70,000 transistors.  Nvidia 8800 GPU = 681 million transistors.]  Adding some transistors to do a little physics math?  Not even a stretch for Nvidia.

Yet a 3rd perspective:
You can already do SETI@Home number crunching on your Graphics card.
If you can do that math already, basic Newtonian collision physics isn't a stretch.
Um....so what u sayin'??!?1  you saying they gonna come out with geForce cards for teh calculat0rz?
No, I'm saying that if a humble 70,000-transistor CPU can handle differential calculus, then a 681-million-transistor monster like the Nvidia 8800 (or its descendants) should be able to incorporate some basic physics algebra.  Nvidia buying out Ageia likely had more to do with making sure they were covered regarding patents than with Ageia actually having any technology that Nvidia couldn't figure out on a lunch break.
Another way to put it is that NVIDIA would rather stay compatible with existing programs that already make use of PhysX than call developers and tell them to rewrite their code for some new type of hardware.
Poseidon
Fudgepack DeQueef
+3,253|6816|Long Island, New York
An nvidia card + Physics card all in one = hax.
GC_PaNzerFIN
Work and study @ Technical Uni
+528|6693|Finland

Poseidon wrote:

An nvidia card + Physics card all in one = hax.
that will never happen tho. Why add a physics chip to a gfx card when the GPU can already do its job..... but they might optimize the GPU more for physics calculations...

Last edited by GC_PaNzerFIN (2008-02-05 01:20:22)

3930K | H100i | RIVF | 16GB DDR3 | GTX 480 | AX750 | 800D | 512GB SSD | 3TB HDD | Xonar DX | W8
CrazeD
Member
+368|6951|Maine
I bet they'll just make a separate card instead; they get more money that way....
Nappy
Apprentice
+151|6508|NSW, Australia

An SLI vid card with a physics card would be a good idea (from nvidia's perspective) MAYBE

but gg with the Ageia-ness
Mekstizzle
WALKER
+3,611|6900|London, England

trippy982 wrote:

Um....so what u sayin'??!?1  you saying they gonna come out with geForce cards for teh calculat0rz?
lmfao
Nikola Bathory
Karkand T-90 0wnage
+163|7065|Bulgaria

Poseidon wrote:

An nvidia card + Physics card all in one = hax.
I agree. nVIDIA should really do this now, put both chips on one card! That would be worth buying (methinks).
topal63
. . .
+533|6997

GC_PaNzerFIN wrote:

Poseidon wrote:

An nvidia card + Physics card all in one = hax.
that will never happen tho. Why add a physics chip to a gfx card when the GPU can already do its job..... but they might optimize the GPU more for physics calculations...
Why? That can't be a serious question, nor can it be a serious rhetorical comment.

Here is why: pathways (parallel processing). Multi-threading in applications, as a norm, is really not that far off. A few years from now it will be necessary so that game creators can keep up with the competition in terms of performance.

Having a separate CISC* pathway means the GPU, which is a tessellation engine/rendering engine/pixel-shading engine (FPU**), won't have to do it. The physics simulation will have a separate pathway and run concurrently. I have a physics card and can tell you that it doesn't matter whether the GPU has the potential to do physics calculations/simulations; it works better (there is better performance) when a separate pathway is doing the physics calcs. The problem with physics simulation on a card is the lack of games supporting it, not the implementation/design of the physics card.

If games could make better use of threaded code (take, for example, AI), then on a Skulltrail platform you would get incredible environment behavior combined with incredible performance. If/when they actually use (finally apply as a norm) multi-core processing (threading, multi-pathway, parallelism, etc.).

*CISC: Complex Instruction Set Computer.
**FPU: Floating-Point Unit.

Last edited by topal63 (2008-02-05 09:58:27)
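
(topal63's "separate pathway" argument, reduced to a host-side sketch: physics stepping on its own thread while the main thread renders. simulate_step and render_frame are hypothetical stubs, not any real engine's API; this is plain host-side C++ as compiled by nvcc.)

Code:

#include <atomic>
#include <thread>

std::atomic<bool> running{true};

// Hypothetical stand-ins for real engine work.
void simulate_step() { /* advance the physics world */ }
void render_frame()  { /* draw the current state */ }

// The physics pathway: runs concurrently with rendering instead of
// stealing time from it.
void physics_loop()
{
    while (running)
        simulate_step();
}

int main()
{
    std::thread physics(physics_loop);   // dedicated physics pathway
    for (int frame = 0; frame < 1000; ++frame)
        render_frame();                  // render pathway, never blocked
    running = false;
    physics.join();
    return 0;
}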

GC_PaNzerFIN
Work and study @ Technical Uni
+528|6693|Finland

topal63 wrote:

GC_PaNzerFIN wrote:

Poseidon wrote:

An nvidia card + Physics card all in one = hax.
that will never happen tho. Why add a physics chip to a gfx card when the GPU can already do its job..... but they might optimize the GPU more for physics calculations...
Why? That can't be a serious question, nor can it be a serious rhetorical comment.

Here is why: pathways (parallel processing). Multi-threading in applications, as a norm, is really not that far off. A few years from now it will be necessary so that game creators can keep up with the competition in terms of performance.

Having a separate CISC* pathway means the GPU, which is a tessellation engine/rendering engine/pixel-shading engine (FPU**), won't have to do it. The physics simulation will have a separate pathway and run concurrently. I have a physics card and can tell you that it doesn't matter whether the GPU has the potential to do physics calculations/simulations; it works better (there is better performance) when a separate pathway is doing the physics calcs. The problem with physics simulation on a card is the lack of games supporting it, not the implementation/design of the physics card.

If games could make better use of threaded code (take, for example, AI), then on a Skulltrail platform you would get incredible environment behavior combined with incredible performance. If/when they actually use (finally apply as a norm) multi-core processing (threading, multi-pathway, parallelism, etc.).

*CISC: Complex Instruction Set Computer.
**FPU: Floating-Point Unit.
that's exactly why nvidia bought AGEIA. They want PhysX support for their gfx cards. Physics calculation on the GPU is in its early stages tho. True next-generation gfx cards with multiple cores can do it much better. There might even be a separate core in the GPU that does the physics calculations. There aren't many games supporting the AGEIA or gfx-card physics yet, but it is more than likely that companies like nvidia will eventually get more and more support for that. Atm it is pretty tough to add a separate physics chip to a gfx card, and that would cost quite a lot too. It is more likely that they start selling geforce cards that are optimized for physics only.
3930K | H100i | RIVF | 16GB DDR3 | GTX 480 | AX750 | 800D | 512GB SSD | 3TB HDD | Xonar DX | W8
topal63
. . .
+533|6997
I doubt it... you're confused.

First off, PhysX is primarily a software physics SDK, like Havok is a software physics SDK. Games are already developed without the need for a PPU (Physics Processing Unit); there are already a bunch of PhysX and Havok titles that use one or the other SDK. That is the point of and reason for the SDK accelerator card: to off-load the calcs from the CPU and/or the GPU. The Ageia PhysX SDK (formerly NovodeX), as a card, is an accelerator that off-loads the physics calcs that the CPU would otherwise be doing (in the SDK). Having the GPU do that will LOWER the performance of the GPU card and overall system performance when compared to a system with 3 separate dedicated pathways: CPU, GPU, PPU.

Nvidia bought a software SDK and a PPU-card implementation. If you think they want to kill parallel processing by incorporating physics calcs all-in-one on a GPU, then I think you are mistaken. That is the point of the PPU. The business IFs are things like: how do you make the PPU a more or less standard design; can you incorporate it onto a mainboard (thus not wasting a PCI slot); can you incorporate the chip and a pathway on a GPU card (without interfering with the render pipeline).

If a Geforce card is optimized for physics, then it will be for the PhysX SDK (the one Nvidia bought); but that already exists: it's called an Ageia PhysX PPU card. Instead of re-inventing the wheel, I think they will simply improve the technology that already exists (hardware support for an SDK).

Last edited by topal63 (2008-02-05 12:33:03)
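
(The SDK-first point above can be sketched in code. Below, hypothetically, a physics SDK exposes one step() call and picks the pathway behind it: a CPU fallback, or a dedicated accelerator if one is present. None of these names are the real Ageia/NVIDIA API.)

Code:

// Hypothetical SDK-style dispatch; illustrative names only.
struct PhysicsBackend {
    virtual void step(float dt) = 0;
    virtual ~PhysicsBackend() {}
};

struct CpuBackend : PhysicsBackend {
    void step(float) override { /* integrate on the CPU (fallback) */ }
};

struct PpuBackend : PhysicsBackend {
    void step(float) override { /* off-load to dedicated hardware */ }
};

// Pretend device probe; a real SDK would query the driver.
bool ppu_present() { return false; }

int main()
{
    CpuBackend cpu;
    PpuBackend ppu;

    // Game code only ever talks to the SDK interface; the SDK decides
    // which pathway services the call.
    PhysicsBackend &backend = ppu_present()
        ? static_cast<PhysicsBackend &>(ppu)
        : static_cast<PhysicsBackend &>(cpu);

    for (int i = 0; i < 60; ++i)
        backend.step(1.0f / 60.0f);
    return 0;
}

Game code compiled against the interface keeps working whether the calcs land on the CPU, a PPU, or a GPU, which is why the SDK, not the card, is the asset.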

GC_PaNzerFIN
Work and study @ Technical Uni
+528|6693|Finland

topal63 wrote:

I doubt it... you're confused.

First off, PhysX is primarily a software physics SDK, like Havok is a software physics SDK. Games are already developed without the need for a PPU (Physics Processing Unit); there are already a bunch of PhysX and Havok titles that use one or the other SDK. That is the point of and reason for the SDK accelerator card: to off-load the calcs from the CPU and/or the GPU. The Ageia PhysX SDK (formerly NovodeX), as a card, is an accelerator that off-loads the physics calcs that the CPU would otherwise be doing (in the SDK). Having the GPU do that will LOWER the performance of the GPU card and overall system performance when compared to a system with 3 separate dedicated pathways: CPU, GPU, PPU.

Nvidia bought a software SDK and a PPU-card implementation. If you think they want to kill parallel processing by incorporating physics calcs all-in-one on a GPU, then I think you are mistaken. That is the point of the PPU. The business IFs are things like: how do you make the PPU a more or less standard design; can you incorporate it onto a mainboard (thus not wasting a PCI slot); can you incorporate the chip and a pathway on a GPU card (without interfering with the render pipeline).

If a Geforce card is optimized for physics, then it will be for the PhysX SDK (the one Nvidia bought); but that already exists: it's called an Ageia PhysX PPU card.
so you are saying they won't make a PPU of their own, which is what I meant?

edit: a geforce optimized for physics = a PPU

Last edited by GC_PaNzerFIN (2008-02-05 10:38:54)

3930K | H100i | RIVF | 16GB DDR3 | GTX 480 | AX750 | 800D | 512GB SSD | 3TB HDD | Xonar DX | W8
topal63
. . .
+533|6997
PhysX is an SDK. The Ageia PhysX card is hardware support for that SDK. What aren't you understanding?
GC_PaNzerFIN
Work and study @ Technical Uni
+528|6693|Finland

topal63 wrote:

PhysX is an SDK. The Ageia PhysX card is hardware support for that SDK. What aren't you understanding?
So? I know AGEIA's PPU is hardware support for the SDK....
Let's make my points simpler.

1) Next-gen GPUs have multiple cores, so what exactly is stopping them from using one of them for physics? Just like a multi-core CPU can do physics on one core and everything else on the remaining cores. (One PPU core; see the sketch after this post.)

2) nvidia had plans for a physics card long before they bought AGEIA, which means they could be planning on releasing one. It is easier to put the PPU on a different card, so it is unlikely that they just stick a PPU on the gfx card with the GPU (the reason why I said it is unlikely to happen). That is, until it is possible to put a PPU core in the GPU.

Anyway, there is one big company that will probably strike back at nvidia's big physics plans: Intel, with Havok.
3930K | H100i | RIVF | 16GB DDR3 | GTX 480 | AX750 | 800D | 512GB SSD | 3TB HDD | Xonar DX | W8
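
(The "one core for physics" idea maps naturally onto what CUDA exposes today: independent streams, so physics work and graphics work can overlap on the same board. A minimal sketch, with trivial placeholder kernels standing in for real workloads:)

Code:

#include <cuda_runtime.h>

__global__ void physics_kernel(float *v, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) v[i] += 1.0f;          // stand-in for a physics step
}

__global__ void graphics_kernel(float *p, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) p[i] *= 0.5f;          // stand-in for shading work
}

int main()
{
    const int n = 1 << 20;
    float *a, *b;
    cudaMalloc(&a, n * sizeof(float));
    cudaMalloc(&b, n * sizeof(float));

    cudaStream_t physics, graphics;
    cudaStreamCreate(&physics);
    cudaStreamCreate(&graphics);

    // Each stream is an independent pathway; the hardware may overlap them.
    physics_kernel<<<(n + 255) / 256, 256, 0, physics>>>(a, n);
    graphics_kernel<<<(n + 255) / 256, 256, 0, graphics>>>(b, n);

    cudaDeviceSynchronize();
    cudaStreamDestroy(physics);
    cudaStreamDestroy(graphics);
    cudaFree(a);
    cudaFree(b);
    return 0;
}
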
topal63
. . .
+533|6997
That's exactly the point. A multi-core GPU/PPU card would be EXACTLY like having an Ageia PhysX card built into a Geforce card (if they make one), considering Nvidia now owns the PhysX SDK.

And if Nvidia releases their own separate PPU card, it will support the Ageia PhysX SDK; maybe it will be called the "Ageia PhysX card, powered by Nvidia" instead? What's in a name? It will still serve nearly the same function as the currently existing Ageia PhysX card.

____

Tech ARP: Wow.. That was some purchase!

NVIDIA: Yes, but it was in the pipeline for some time now. We only finalized the deal with AGEIA moments ago.

Tech ARP: Really? Why? Don't you already have SLI Physics?

NVIDIA: We decided that running physics on the GPUs just wasn't going to cut it, not even with the amount of power our GPUs have.

NVIDIA:  We will continue to support SLI Physics and Havok FX, but eventually we want the PhysX silicon to handle physics acceleration, leaving the GPU to do what it does best - graphics.
____

PS: What would be a good thing is this: a PPU that is hardware support for both the PhysX and the Havok SDK, all in one.

Last edited by topal63 (2008-02-05 12:25:44)

GC_PaNzerFIN
Work and study @ Technical Uni
+528|6693|Finland

topal63 wrote:

That's exactly the point. A multi-core GPU/PPU card would be EXACTLY like having an Ageia PhysX card built into a Geforce card (if they make one), considering Nvidia now owns the PhysX SDK.

And if Nvidia releases their own separate PPU card, it will support the Ageia PhysX SDK; maybe it will be called the "Ageia PhysX card, powered by Nvidia" instead? What's in a name? It will still serve nearly the same function as the currently existing Ageia PhysX card.
Why are you thinking that they would use the AGEIA PPU? There is nothing superior about their tech compared to what nvidia can do.

PS: What would be a good thing is this: a PPU that is hardware support for both the PhysX and the Havok SDK, all in one.
Now THAT is what I have been trying to say, since the PPU doesn't have to be the original AGEIA one.

Last edited by GC_PaNzerFIN (2008-02-05 11:15:19)

3930K | H100i | RIVF | 16GB DDR3 | GTX 480 | AX750 | 800D | 512GB SSD | 3TB HDD | Xonar DX | W8
topal63
. . .
+533|6997
I don't necessarily think they will even use the Ageia PhysX PPU. I think they will glean all the good ideas from the current design and design a new chip (IMO). It might not even be hardware/software compatible with the current Ageia PhysX card implementation (but it might be), considering there is no real reason to be backward compatible with older or fading games (ones that people play less and less). I am thinking this is a great step forward for PPU design and implementation. Ageia alone could not push this computing model and the technology as far as Nvidia can, and hopefully will.

I was really just disagreeing with you on this one minor point: that GPUs (which can already do physical simulations; FP operations) would be a good solution for physics calculations. Optimizing any single GPU, as they currently exist, to do more physical simulation would NOT be that good a solution. A multi-core PPU/GPU = yes, a good solution. A separate PPU on a mainboard or on a card (like the currently existing Ageia PhysX card) = yes, a good solution. But a single-core GPU doing both physics and rendering = no, not that good a solution.

Last edited by topal63 (2008-02-05 11:32:24)

geNius
..!.,
+144|6721|SoCal

Gooners wrote:

NVidia made the world's first GPU?
Nvidia bought 3dfx.
