Originally posted by gopher
Maybe we have, but nobody has provided compelling evidence to the contrary.
You must be joking. Reference after reference has been provided and you simply break from the thread, only to re-emerge in another thread later. This has happened at least twice now that I can remember.
Originally posted by gopher
The Mac hardware is capable of 18 billion floating calculations a second. Whether the software takes advantage of it that's another issue entirely.
My arse is capable of making 8-pound turds, but whether or not I eat enough baked beans to take advantage of that is another issue entirely. In other words,
18 gigaflops = about as likely as an 8-pound turd in my toilet. Possible, yes (under the most severely ridiculous conditions). Real-world, no.
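And since nobody ever shows their work: here's roughly where a peak number like that comes from. This is a back-of-envelope sketch; the dual-CPU count and the 1.25GHz clock are my assumptions about what the marketing math used, not anything Apple has published:

    #include <stdio.h>

    int main(void)
    {
        /* vec_madd works on 4 single-precision floats and does a multiply
           plus an add on each lane: 8 flops per instruction, per CPU. */
        double flops_per_cycle = 4 * 2;  /* lanes * (mul + add) */
        double cpus = 2.0;               /* assumed dual-processor machine */
        double clock_hz = 1.25e9;        /* assumed 1.25GHz G4s */
        printf("theoretical peak: %.1f gigaflops\n",
               cpus * clock_hz * flops_per_cycle / 1e9);
        return 0;  /* prints 20.0 */
    }

Note what's baked into that number: single precision only, plus the assumption that a vector multiply-add issues every single cycle with no loads, stores, branches, or stalls. That's the 8-pound turd scenario.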
Originally posted by gopher
If someone is going to argue that Macs don't have good floating point performance, just look at the specs.
For the - what is this, fifth? - time now: AltiVec is incapable of double precision, and is capable of accelerating only that code which is written specifically to take advantage of it. Which is some of it. Which means any high "gigaflops" performance quotes deserve large asterisks next to them.
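To make the "written specifically to take advantage of it" point concrete, here's a minimal sketch of what AltiVec code looks like (the function name, the alignment assumption, and n being a multiple of 4 are all mine - this is an illustration, not production code):

    #include <altivec.h>

    /* Scale-and-add, four floats at a time. Everything here is
       "vector float"; AltiVec has no vector double type at all, so
       double-precision code gets zero benefit from any of this. */
    void saxpy4(float *y, const float *x, float a, int n)
    {
        /* assumes x and y are 16-byte aligned and n is a multiple of 4 */
        vector float va = (vector float){a, a, a, a};
        for (int i = 0; i < n; i += 4) {
            vector float vx = vec_ld(0, &x[i]);  /* load 4 floats */
            vector float vy = vec_ld(0, &y[i]);
            vy = vec_madd(va, vx, vy);           /* y = a*x + y, 4 lanes */
            vec_st(vy, 0, &y[i]);                /* store 4 floats */
        }
    }

Nobody writes their hot loops like that by default, which is why "just recompile" doesn't get you AltiVec numbers; somebody has to restructure the code by hand. And since there is no vector double, every double-precision scientific or engineering code falls back to the plain scalar FPU no matter how clever the developer is.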
Originally posted by gopher
If they really want good performance and aren't getting it they need to contact their favorite developer to work with the specs and Apple's developer relations.
Exactly - this is the whole problem: if a developer wants good performance and can't get it, they have to jump through hoops and spend time and money they shouldn't have to spend.
Originally posted by gopher
Apple provides the hardware, it is up to developer companies to utilize the hardware the best way they can. If they can't utilize Apple's hardware to its most efficient mode, then they should find better developers.
Way to encourage Mac development, huh? "Hey guys, come develop for our platform! We've got a 3.5% national desktop market share and a 2% world desktop market share, and we have an uncertain future! We want YOU to spend time and money porting your software to OUR platform, and on top of that, we want YOU to go the extra mile to waste time and money that you shouldn't have to waste just to ensure that your code doesn't run like a dog on our ancient wack-job hack of a processor!"
Originally posted by gopher
If you are going to complain that Apple doesn't have good floating point performance, don't use a PC biased spec like Specfp.
"PC biased spec like SPECfp?" Yes, the reason PPC does so poorly in SPEC is because SPECfp is biased towards Intel, AMD, Sun, MIPS, HP/Compaq, and IBM (all of whose chips blow the G4 out of the water, and not only the x86 chips - the workstation and server chips too, literally ALL of them), and Apple's miserable performance is a conspiracy engineered by The Man, right?
Originally posted by gopher
Go by actual floating point calculations a second.
Why? FLOPS is as dumb a benchmark as MIPS: a raw operation count tells you nothing about the code those operations come from. That's exactly why cross-platform benchmarks exist.
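Here's one illustration of why a raw operation count misleads (a sketch in plain C; the array size and loop bodies are arbitrary choices of mine). Both loops execute the same 2*N floating point operations, but the first is one long serial dependency chain while the second pipelines freely:

    #include <stdio.h>
    #include <time.h>

    #define N (1 << 20)
    static double a[N], b[N];

    int main(void)
    {
        for (int i = 0; i < N; i++) { a[i] = 1.0000001; b[i] = 0.9999999; }

        /* Loop 1: every iteration depends on the previous one through
           "sum", so each add has to finish before the next can start. */
        clock_t t0 = clock();
        double sum = 0.0;
        for (int i = 0; i < N; i++)
            sum += a[i] * b[i];
        clock_t t1 = clock();

        /* Loop 2: the same 2*N flops, but iterations are independent
           and can overlap in the pipeline. */
        for (int i = 0; i < N; i++)
            a[i] = a[i] * b[i] + 1.0;
        clock_t t2 = clock();

        printf("dependent:   %ld ticks (sum = %f)\n", (long)(t1 - t0), sum);
        printf("independent: %ld ticks (a[0] = %f)\n", (long)(t2 - t1), a[0]);
        return 0;
    }

Same flop count, wildly different wall time, and two different chips can rank differently on each loop. A single "calculations per second" figure can't capture that, which is why cross-platform suites like SPEC measure whole real applications instead.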
Originally posted by gopher
Nobody has shown anything to say that PCs can do more floating point calculations a second. And until someone does I stand by my claim.
An Athlon 1700+ scores about what, 575 in SPECfp2000 (depending on the system)? Results for the 1.25GHz G4 are unavailable (because Apple is ashamed to publish them), but the 1GHz does about 175. Let's be very gracious and assume the new GCC has got the 1.25GHz G4 up to 300. That's STILL terrible. So how about an accurate summary of the G4's performance:
On the whole, poor.******
* Very strong on applications well-suited to AltiVec and optimized to take advantage of it.
** But these are relatively few.
*** And optimizing for it costs time and money.
**** Miserable on all double-precision floating point under all circumstances.
***** Miserable on all integer code under all circumstances.
****** Miserable on standard cross-platform code.
There you have it, G4 performance in a nutshell.