In layman's terms please.
8 threads, loads of memory bandwidth
once upon a midnight dreary, while i pron surfed, weak and weary, over many a strange and spurious site of ' hot xxx galore'. While i clicked my fav'rite bookmark, suddenly there came a warning, and my heart was filled with mourning, mourning for my dear amour, " 'Tis not possible!", i muttered, " give me back my free hardcore!"..... quoth the server, 404.
Which means...
max wrote:
8 threads, loads of memory bandwidth
it's like super fast
Lol. So it doesn't have 8 cores like I was (Falsely) told?
max wrote:
it's like super fast
4 cores, 8 threads, and on-chip memory controller for assloads of memory bandwidth, iirc.
Sir Schmoopy wrote:
Lol. So it doesn't have 8 cores like I was (Falsely) told?
max wrote:
it's like super fast
Each core doing the evolutionary equivalent of Hyper-Threading to kinda look like it's 2 cores.
(metric assloads, not american assloads, BTW)
Last edited by rdx-fx (2008-12-18 10:38:17)
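If you want to see those 8 threads for yourself, here is a minimal sketch, assuming Linux and glibc (compile with gcc), that just asks the OS how many logical processors it can schedule on:

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* Logical processors the scheduler can use right now.  On a 4-core i7
     * with Hyper-Threading enabled this prints 8, because each physical
     * core shows up as two hardware threads. */
    long logical = sysconf(_SC_NPROCESSORS_ONLN);
    printf("OS sees %ld logical processors\n", logical);
    return 0;
}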
It is like freaking fast with SLi and CF.
Last edited by GC_PaNzerFIN (2008-12-18 10:38:38)
3930K | H100i | RIVF | 16GB DDR3 | GTX 480 | AX750 | 800D | 512GB SSD | 3TB HDD | Xonar DX | W8
What are these threads and hyper-threading you speak of?
rdx-fx wrote:
4 cores, 8 threads, and on-chip memory controller for assloads of memory bandwidth, iirc.
Sir Schmoopy wrote:
Lol. So it doesn't have 8 cores like I was (Falsely) told?
max wrote:
it's like super fast
Each core doing the evolutionary equivalent of Hyper-Threading to kinda look like it's 2 cores.
(metric assloads, not american assloads, BTW)
You remember how AMD was so fucking awesome in the P4/Athlon days because of the onboard memory controller?
Put that on a Quad Core, and throw on something reminiscent of Hyper Threading plus a few other advances = i7.
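To get a feel for what "assloads of memory bandwidth" means in practice, here is a rough STREAM-style sketch, assuming gcc -O2 and a machine with a spare quarter-gigabyte of RAM; the array size and the 3.0 factor are arbitrary. The loop does almost no math, so the GB/s figure it prints is mostly a memory-controller number.

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N 10000000                       /* 10M doubles per array, ~80 MB each */

int main(void)
{
    double *a = malloc(N * sizeof *a);
    double *b = malloc(N * sizeof *b);
    double *c = malloc(N * sizeof *c);
    if (!a || !b || !c) return 1;

    for (size_t i = 0; i < N; i++) { b[i] = 1.0; c[i] = 2.0; }

    clock_t t0 = clock();
    for (size_t i = 0; i < N; i++)
        a[i] = b[i] + 3.0 * c[i];        /* "triad": two reads, one write per element */
    double secs = (double)(clock() - t0) / CLOCKS_PER_SEC;

    double gb = 3.0 * N * sizeof(double) / 1e9;   /* bytes actually moved */
    printf("moved ~%.2f GB in %.3f s  =>  ~%.2f GB/s (check: %.1f)\n",
           gb, secs, gb / secs, a[N / 2]);

    free(a); free(b); free(c);
    return 0;
}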
It makes it a pseudo 8-core chip. My understanding is that this allows the operating system to use a bit more of the chip that would be idle without HT. It doesn't match the difference of single -> dual core, but it does help.
Sir Schmoopy wrote:
What are these threads and hyper-threading you speak of?
Last edited by Defiance (2008-12-18 10:51:52)
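A toy way to feel that "use the idle bits" effect, assuming Linux and gcc (the iteration count is made up): build with gcc -O2 -pthread, then compare wall time with the time command for 4 and for 8 threads. On a 4-core/8-thread chip the 8-thread run usually comes out ahead; how much depends entirely on the workload.

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

#define TOTAL_ITERS 800000000UL          /* total busy-work, shared by all threads */

static void *worker(void *arg)
{
    unsigned long iters = *(unsigned long *)arg;
    volatile unsigned long x = 0;        /* volatile so the loop isn't optimised away */
    for (unsigned long i = 0; i < iters; i++)
        x += i ^ (x >> 3);
    return NULL;
}

int main(int argc, char **argv)
{
    int nthreads = (argc > 1) ? atoi(argv[1]) : 4;
    if (nthreads < 1 || nthreads > 64)
        nthreads = 4;

    pthread_t tid[64];
    unsigned long per_thread = TOTAL_ITERS / (unsigned long)nthreads;

    for (int i = 0; i < nthreads; i++)
        pthread_create(&tid[i], NULL, worker, &per_thread);
    for (int i = 0; i < nthreads; i++)
        pthread_join(tid[i], NULL);

    printf("finished on %d threads\n", nthreads);
    return 0;
}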
It's two better than the i5.
Right now, I think they're what the Pentium D was back in the day; Big, beefy, expensive, hot, and not really necessary.
The idea of any hi-fi system is to reproduce the source material as faithfully as possible, and to deliberately add distortion to everything you hear (due to amplifier deficiencies) because it sounds 'nice' is simply not high fidelity. If that is what you want to hear then there is no problem with that, but by adding so much additional material (by way of harmonics and intermodulation) you have a tailored sound system, not a hi-fi. - Rod Elliot, ESP
but it's ever so sexy
Freezer7Pro wrote:
Right now, I think they're what the Pentium D was back in the day; Big, beefy, expensive, hot, and not really necessary.
So was the P-D
max wrote:
but it's ever so sexy
Freezer7Pro wrote:
Right now, I think they're what the Pentium D was back in the day; Big, beefy, expensive, hot, and not really necessary.
Hot? Those mofuggas are on the 45nm process; if those are getting hot, then at least you know there's some serious shit going on.
Waiting to see how the Deneb stacks up against them... especially price-wise.
80°C load is no problem at all
Mekstizzle wrote:
Hot? Those mofuggas are on the 45nm process, now if those are getting hot, then at least you know there's some serious shit going on
135W TDP, go figure
Mekstizzle wrote:
Hot? Those mofuggas are on the 45nm process, now if those are getting hot, then at least you know there's some serious shit going on
main battle tank karthus medikopter 117 megamegapowershot gg
I regret I asked..
And oh man, not to mention how fast it is! SLi and CF get a huge boost in performance compared to C2Q.
max wrote:
but it's ever so sexy
Freezer7Pro wrote:
Right now, I think they're what the Pentium D was back in the day; Big, beefy, expensive, hot, and not really necessary.
It's newer and it's faster.
Sir Schmoopy wrote:
In layman's terms
/thread
Plus girlies will sleep with you.
Bell wrote:
It's newer and it's faster.
Sir Schmoopy wrote:
In layman's terms
/thread
All those bazillions upon bazillions of transistors, and they're still stuck shoving the majority of the data through the same damn 32 bits of an EAX register as ever...
Millions and millions of possible parallel execution units, but due to the linear programming mentality, we're stuck shoving data through the same little tiny 32 bit window as ever.
731 million transistors in the i7, which is 146,000 TIMES the transistor count of an old-school MOS 6502 (Apple II) or 731 times the transistor count of the 32-bit version of the Motorola 68K.
...and programmers today are kvetching, bitching, and whining about the difficulties of coding for 4 or 8 threads...
My point:
i7, cool bit of tech that it is, is an extreme example of developing along a path out of sheer "that's the way it's always been done" rather than adopting massively parallel design because it's more efficient/elegant/simple/cheaper.
It's a solution in hardware to an argument of software and logic.
I would love to see a RISC-like massively multi-core design benchmarked against an i7.
(say, instruction set of 80 commands, 20 commands being basic array/vector/calculus ops. 100,000 transistors per core, and a total of 7310 cores to match the transistor count of the i7 CPU)
...oh, wait. Never mind. Already been done.
Ask Max about how the parallel architecture GPUs do against the monolithic architecture CPUs in Folding@Home type applications....
Last edited by rdx-fx (2008-12-18 12:28:40)
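For contrast, here is what that data-parallel style looks like from the software side: a minimal OpenMP sketch, assuming gcc with -fopenmp and an arbitrary array size. The same loop runs unchanged whether there are 4, 8 or, in principle, 7310 hardware threads; the runtime just chops the iterations up.

#include <omp.h>
#include <stdio.h>
#include <stdlib.h>

#define N 20000000

int main(void)
{
    double *a = malloc(N * sizeof *a);
    double *b = malloc(N * sizeof *b);
    if (!a || !b) return 1;
    for (long i = 0; i < N; i++) { a[i] = 0.5; b[i] = 2.0; }

    double sum = 0.0;
    /* The pragma is the whole "parallel programming": the runtime splits the
     * iterations across however many hardware threads the machine has. */
    #pragma omp parallel for reduction(+:sum)
    for (long i = 0; i < N; i++)
        sum += a[i] * b[i];

    printf("dot product = %.1f using up to %d threads\n", sum, omp_get_max_threads());
    free(a); free(b);
    return 0;
}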
*cough ia-64*
we all know how well that worked out. No reason for the chipmakers to adopt something new if the majority of PCs run NT 3.1 with a couple of patches
i7 is neat, cool, and fast.
The future of computing, however, is in the architecture of the current Graphics chips.
That's where the really cool developments in hardware are taking place.
With monolithic CPUs and their few big cores, the innovation is in managing their complexity.
In contrast, the GPU in your video card has the real innovation in solving shit-tons of complex mathematics simultaneously, rapidly, and in bulk quantity.
you know why?
rdx-fx wrote:
i7 is neat, cool, and fast.
The future of computing, however, is in the architecture of the current Graphics chips.
That's where the really cool developments in hardware are taking place.
Monolithic core CPUs with a few cores, the innovation is in the management of their complexity.
In contrast, the GPU in your video card has the real innovation in solving shit-tons of complex mathematics simultaneously, rapidly, and in bulk quantity.
A CPU needs to do a massive number of different kinds of calculations. GPUs can be optimized to do certain calculations faster. Besides, graphics rendering just goes well with massive multithreading. Making software for, let's say, a 100-core CPU would be VERY tricky.
Last edited by GC_PaNzerFIN (2008-12-18 12:45:59)
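Roughly why rendering soaks up cores while a lot of ordinary code can't: a made-up toy contrast (both formulas are arbitrary, purely for illustration).

#include <stdio.h>

#define W 640
#define H 480

static unsigned char frame[H][W];

int main(void)
{
    /* 1) Graphics-style work: every pixel depends only on its own (x, y).
     *    Any pixel can be computed on any core in any order, which is why a
     *    GPU with hundreds of small units chews through it. */
    for (int y = 0; y < H; y++)
        for (int x = 0; x < W; x++)
            frame[y][x] = (unsigned char)((x ^ y) & 0xFF);

    /* 2) General-purpose work: each step needs the previous result, so
     *    iteration i can't start before iteration i-1 finishes.  Extra cores
     *    don't help here, which is what makes a 100-core CPU hard to feed. */
    double v = 1.0;
    for (int i = 0; i < 1000000; i++)
        v = v * 1.0000001 + 0.000001;    /* loop-carried dependency */

    printf("pixel[0][0] = %u, serial result = %.6f\n", (unsigned)frame[0][0], v);
    return 0;
}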