Originally posted by Jagga
Why do you think 2MB will be too big for a small server/desktop? The original Xserve dual G4 had 2MB of L3 RAM per CPU. Also, the 933MHz Quicksilver G4 had 2MB, and the original dual 1.0GHz had 2MB per CPU as well?!!
Those were L3 cache chips sitting alongside the CPU. I'm talking about adding an extra 1.5 MB of L2 on the same die as the CPU.
IBM could easily make a PowerPC with 2MB of L2 cache, even on the .13 micron process, but the chip would be a lot bigger physically. It would take up more space on an expensive multi-layer wafer, which drives down the number of chips per wafer and drives up the cost of each processor. It would also increase the heat dissipation of the processor.
IBM will take multiple criteria into consideration when upping the cache:
- what is an optimal CPU die size (mm^2) to work with; dies that are too small are difficult to handle and difficult to connect to the packaging
- what amount of L2 is reasonable for typical desktop use at a given processor speed (with a fast system bus)
- at what point do increasing L2 sizes get outweighed by increasing cost
...
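The chips-per-wafer/cost tradeoff above can be sketched numerically. All the figures here (wafer size, wafer cost, die areas) are illustrative assumptions, not real IBM numbers:

```python
# Rough sketch of the die-size/cost tradeoff: bigger L2 -> bigger die ->
# fewer dies per wafer -> higher cost per processor.
# Every constant below is a made-up, illustrative assumption.
import math

WAFER_DIAMETER_MM = 200      # assumed 200 mm wafer
WAFER_COST = 4000            # assumed cost per processed wafer (USD)
BASE_DIE_AREA_MM2 = 120      # assumed core + 512 KB L2 die area
AREA_PER_MB_L2_MM2 = 50      # assumed extra area per MB of L2 at .13 micron

def dies_per_wafer(die_area_mm2):
    """Crude estimate: usable wafer area over die area,
    with a simple correction for edge loss (ignores yield)."""
    wafer_area = math.pi * (WAFER_DIAMETER_MM / 2) ** 2
    edge_loss = math.pi * WAFER_DIAMETER_MM / math.sqrt(2 * die_area_mm2)
    return int(wafer_area / die_area_mm2 - edge_loss)

for extra_l2_mb in (0.0, 0.5, 1.5):
    area = BASE_DIE_AREA_MM2 + extra_l2_mb * AREA_PER_MB_L2_MM2
    n = dies_per_wafer(area)
    print(f"+{extra_l2_mb} MB extra L2: {area:.0f} mm^2, "
          f"{n} dies/wafer, ${WAFER_COST / n:.2f} per die")
```

Even with this toy model the trend is clear: adding 1.5 MB of L2 roughly halves the dies per wafer, so the per-die cost roughly doubles (and real yield losses make big dies even worse than this).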
Isn't the process (embedding copper into silicon to stretch the copper, then fold-shrinking it) actually part of the architecture of the processor?? Why else do it other than to make electrons travel faster?! If it were that alone, the G5 should already be at, say, 3GHz. Shrinking the die size does that, but architecture is how they build it, hence SOI, right?!
Well, the copper is far too small to fold and shrink. CPUs aren't made like horseshoes or swords; the metal layers are patterned by etching with light (pretty cool). CPU manufacturing is closer to silkscreening T-shirts than to hammering out a metal frame.
As far as "architecture" goes, I don't know of anyone who would describe a CPU's architecture by including the process it's produced on. The process matters in determining what architecture is feasible, but when I say architecture I'm basically referring to the layout of the processor logic, not the silicon.
I may not know as much as you do, but on this paraphrased quote: haven't generations of the G4 in PowerBooks, and now the iBook, used less power with increasing clock speeds??
Not the same thing. The transistor count of the G4 family has increased, but that happened as the process evolved and shrank, and (yes) as the architecture evolved. Motorola designs the G4 to be a low-power embedded processor just as much as (if not more than) it designs it to be a desktop processor.
OTOH, if you hold everything else constant (process size, process materials...) and you make the CPU bigger, or raise the core voltage to push the clock speed up, you'll generate more heat, not less.
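The "more voltage, more heat" point follows from the standard dynamic-power relation for CMOS, P = C * V^2 * f. A quick sketch with hypothetical numbers (the capacitance, voltages and clocks below are made up, not specs of any real G4/G5):

```python
# Dynamic CMOS power scales roughly as P = C * V^2 * f
# (switched capacitance x voltage squared x clock frequency).
# All numbers are illustrative assumptions.

def dynamic_power(cap_farads, volts, hertz):
    """Dynamic power in watts for the classic CMOS approximation."""
    return cap_farads * volts ** 2 * hertz

base = dynamic_power(20e-9, 1.3, 1.0e9)        # hypothetical baseline chip
overvolted = dynamic_power(20e-9, 1.5, 1.25e9)  # raise V to reach a higher clock

print(f"baseline:   {base:.1f} W")
print(f"overvolted: {overvolted:.1f} W "
      f"({overvolted / base:.2f}x the heat for 1.25x the clock)")
```

Because voltage enters squared, pushing the clock via a voltage bump costs disproportionately more heat than the clock gain alone would suggest.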
Um, bitness of the bus is relevant, especially with PCI-X and AGP 8x, plus memory, DVD-R and SATA drives, all jockeying for the maximum bits/bytes to transfer data to and from the CPU, DDR memory and each other!! Why else do you think the Power5, G5, G4, P4, and A64/Opteron need the extra bandwidth compared to, say, a P2 or G3?!?!
I think you missed the point here. The original argument was NOT that bitness is irrelevant, but that you can't rely on bit width alone. It's ALL ABOUT BANDWIDTH. A 1-bit serial bus running very fast can have more bandwidth than a 64-bit bus that runs much slower. This is why Serial ATA is faster than older parallel ATA.
It doesn't really matter whether one processor has 32-bit-wide system pipes or 128-bit-wide pipes... what it comes down to is how much data can be pushed through those pipes.
So... bitness isn't directly relevant to all those peripherals and buses you mentioned... the available bandwidth is. This is why 16-bit HyperTransport links are more than fast enough for all the external buses on a modern machine (USB, FireWire, SATA...).
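The width-vs-clock tradeoff is just multiplication: bandwidth is bus width times transfer rate. A sketch using ballpark figures (ATA/66-style parallel link vs. a SATA-style 1.5 Gb/s serial line rate; these are rough examples, not exact specs):

```python
# Bandwidth = width x clock (x transfers per cycle), so a narrow fast
# link can beat a wide slow one. Figures are ballpark, not exact specs.

def bandwidth_mb_s(width_bits, clock_mhz, transfers_per_cycle=1):
    """Peak bandwidth in MB/s for a link of the given width and clock."""
    return width_bits / 8 * clock_mhz * transfers_per_cycle

parallel_ata = bandwidth_mb_s(16, 33, 2)   # 16-bit parallel, two transfers/cycle
serial_ata = bandwidth_mb_s(1, 1500)       # 1-bit serial at a 1.5 Gb/s line rate

print(f"16-bit parallel link: {parallel_ata:.0f} MB/s")
print(f"1-bit serial link:    {serial_ata:.1f} MB/s")
```

The one-bit link wins despite being 1/16th the width, because it runs so much faster. (Real SATA loses some of that raw rate to line encoding, but the point stands.)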
Also, I thought I'd mention that a lot of the stuff you mentioned doesn't need much bandwidth to the CPU. DMA (direct memory access) is used by all modern peripherals, and AGP is designed to let video cards access system memory directly if needed. Just picking nits.