
dgdosen

macrumors 68030
Dec 13, 2003
2,751
1,388
Seattle
They aren't trying to be Nvidia at AI training. It's not that they aren't competing with Nvidia anywhere Nvidia plays; it's just that where Nvidia has a giant 'moat' around their product, Apple isn't trying to go head-to-head.

Which isn't 'new'. Apple stopped signing Nvidia drivers years ago at this point, in part because Nvidia was more intent on building that moat than on doing what Apple wanted them to do.

The workstation market isn't all Nvidia any more than Bitcoin is all of finance. Nvidia's AI business is the trendy, 'sexy hot' topic, but it isn't the market's single cornerstone. And it isn't workstations.

" ... SMC reportedly pledged to process an extra 10,000 CoWoS wafers for Nvidia throughout the duration of 2023. Given Nvidia gets about 60-ish A100/H100 GPUs per wafer (H100 is only slightly smaller), that would mean an additional ~600,000 top-end data center GPUs. .. "

Those 600,000 H100s are mostly not going into workstations. Most H100 modules are the OAM standard (or Nvidia's proprietary SXM variation of it). Those are not PCI-e cards.
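Quick sanity check on that quote's math, sketched in Python (both inputs come straight from the quote; neither is an official figure):

```
# Sanity check of the quoted CoWoS math -- both inputs come straight
# from the quote, neither is an official figure.
extra_wafers = 10_000      # extra CoWoS wafers reportedly pledged for 2023
gpus_per_wafer = 60        # "about 60-ish" A100/H100-class dies per wafer

print(f"~{extra_wafers * gpus_per_wafer:,} extra data center GPUs")  # ~600,000
```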


Apple's run rate on Mac Pros probably isn't even a third of that. Apple talked about AI/ML in that session, in the Craig F. portion of the talk. Apple is leaning way more into inference than training, and there Nvidia isn't really the exclusive player. Lots of folks are doing inference without Nvidia, and that segment is only growing, not shrinking.

Stepping up to PCI-e v4 isn't a performance increase over the last Mac Pro? It is still a performance gain.

At one point (in the 90s, I think) Jobs said something along the lines of "we have to get past the notion that for Apple to win, Microsoft has to lose". Folks going down 'rabbit holes' on Nvidia are doing the same thing.

Apple is taking on Nvidia + Intel + AMD + Qualcomm + Microsoft ... they can't afford to get drawn into a "land war in Asia", like in the game of Risk, where they try to do everything for everybody. Apple isn't making luggable desktop-replacement laptops ... are they doomed? Nope. Same thing here. Smartly pick your battles where you have a tactical advantage, and leave the "do everything for everybody" thing to other folks.

Chasing Nvidia because they caught a sizzle-stock price bump is the same kind of reasoning that got Apple into the "Apple Car", which is likely a bridge to nowhere. It isn't 'create a useful product' focused.
Such pop culture...
 

deconstruct60

macrumors G5
Mar 10, 2009
12,309
3,900
I have been suggesting this solution since the Mac Studio was first announced: most of us would be happy to sacrifice a Thunderbolt port (or two) to gain a fully fledged PCIe gen 5 slot, or dual industry-standard M.2 slots.

A couple of things.

One, getting an x16 PCI-e controller out of the space of one (or two) TB controllers, each with just an x4 PCI-e v3 controller embedded inside, would be a problem. Getting to an x16 controller would more likely mean four TB controllers go away, and that is probably more than most folks (who have multiple monitors and devices) would want to give up. Nor would Apple.

I do think that more than four ports is kind of out there on the fringe. However, the fringe buying 3-4 XDRs is attractive to Apple. And dropping four from the Max Mac Studio would wind the port count all the way back to zero.

Second, Apple doesn't have to sacrifice any of the TB controllers if they just add the PCI-e controller on another chiplet, which appears to be what they did.


If Apple did PCI-e v5, it would probably be done more to get more bandwidth out through fewer lanes (making the impact on Apple's die edge space smaller). They'd run that out to an external switch and fan out from there. Conceptually, Apple could have driven all of these PCI-e v4 slots from a single x16 PCI-e v5 backhaul. I suspect it is two x16 PCI-e v4 links, but on TSMC N3 or later implementations they'd slide backwards in lane count and up in backhaul speed.
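Rough per-lane numbers make the backhaul trade-off concrete; the switch topology below is my speculation, not anything Apple has published, sketched in Python:

```
# Approximate usable bandwidth per PCI-e lane per generation (GB/s,
# one direction, after encoding overhead).
GBPS_PER_LANE = {"v3": 0.985, "v4": 1.97, "v5": 3.94}

def link_bw(gen, lanes):
    """Usable one-direction bandwidth of a PCI-e link in GB/s."""
    return GBPS_PER_LANE[gen] * lanes

backhaul = link_bw("v5", 16)       # ~63 GB/s out of the die
slots = 2 * link_bw("v4", 16)      # ~63 GB/s of downstream slot bandwidth
print(f"x16 v5 backhaul {backhaul:.0f} GB/s vs two x16 v4 links {slots:.0f} GB/s")
# A switch fanning out from one x16 v5 backhaul only oversubscribes if
# both downstream x16 v4 links run flat out simultaneously.
```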

PCI-e v5 (and v6) without CXL 2.0 (and up) doesn't make a lot of sense to me, and I don't see Apple saying much about being enamored with CXL. So I don't expect Apple to get there anytime soon, except perhaps for shrinking the backhaul lane footprint. Decent chance CXL conflicts with their PCI-e security model, and v5 SERDES probably has problems with their perf/watt goals.


Apple seems intent on wholly kneecapping the Apple silicon lineup. What is the point of an 800GB/s bus when the solitary internal SSD can only handle at best 7GB/s, and the Thunderbolt ports only half that each?

The point of the 800GB/s bus is the GPU. Period. The other stuff, including the CPU, is segmented into more limited bandwidth subsets, with QoS (quality of service) metrics all around (so nobody starves anybody else for data).
It appears that Apple moved a decent amount of the intra-GPU communication traffic off to a separate mesh, so it is easier to deliver better QoS to the other competing compute clusters of different types. Still 800GB/s, but GPU performance jumped up significantly.
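A toy sketch of that idea in Python; every cap number below is invented purely for illustration, not Apple's actual QoS budget:

```
# Toy model of bandwidth segmentation with QoS caps: the full fabric is
# there for the GPU, while other requesters are capped so nobody starves
# anybody else. Every cap below is an invented, illustrative number.
FABRIC_BW = 800  # GB/s aggregate (M2 Ultra class)

qos_caps = {
    "gpu": FABRIC_BW,  # GPU can approach the full aggregate
    "cpu": 250,        # CPU clusters capped well below it (illustrative)
    "ane": 100,        # Neural Engine (illustrative)
    "io":  60,         # SSD / Thunderbolt / etc. (illustrative)
}

def grant(requests):
    """Clamp each requester to its cap (crude; ignores real arbitration)."""
    return {name: min(asked, qos_caps[name]) for name, asked in requests.items()}

print(grant({"gpu": 780, "cpu": 400, "ane": 50, "io": 90}))
# -> {'gpu': 780, 'cpu': 250, 'ane': 50, 'io': 60}
```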

As for a 'faster' SSD: well, with the Mac Pro you can put one in the PCI-e v4 x8 or x16 slots (an x8 v4 slot is good for roughly 15.8 GB/s, more than double the internal SSD).

The rest of the lineup? Thunderbolt 5 is a bit logjammed. That will probably come with faster LPDDR5 and an even more upgraded internal mesh backhaul for inter-cluster data traffic. Once the internal mesh aggregate bandwidth is up, the SSD controller will probably get a bump.


TBv5 is probably coming over the next two years. The internal SSD could/should get a bump with the M3 generation family.

[ Some of that is likely also down to Intel and USB-IF USB4 v2 stumbling blocks. Apple can't make the USB-IF move faster, and Intel is running behind schedule too. It looks like Meteor Lake won't have TBv5, so Intel won't move forward until 2H 2024 at the earliest. ]




There is no way to keep the CPU and GPU fully saturated with data.

Aggregate memory bandwidth is a shared resource, and pragmatically there is limited die edge space. The 'poor man's HBM' approach is wide, and you can only get so many memory controllers out of a fixed die size (which also has to carry other I/O that isn't memory).

That is what caches are for. So every core doesn't have to go to memory all the time.
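For reference, the arithmetic behind the 800GB/s figure, in Python (the bus width is the published M2 Ultra spec; the rest is just unit conversion):

```
# Where "800GB/s" comes from: a very wide LPDDR5 bus at a moderate rate,
# not a narrow bus clocked to the moon.
bus_width_bits = 1024   # M2 Ultra LPDDR5 bus width (published spec)
transfer_rate = 6.4     # LPDDR5-6400, gigatransfers per second

bandwidth_gbs = (bus_width_bits / 8) * transfer_rate
print(f"{bandwidth_gbs:.0f} GB/s")  # 819 GB/s, marketed as 800GB/s

# Every extra memory channel costs die-edge space that also has to host
# PCI-e, display, and other I/O -- that shared edge is the practical cap.
```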


I suppose that is why Apple silicon only clocks at a relatively slow rate, and at such low temps. The board architecture fails to deliver the raw data those chips need to really perform.

Apple Silicon clocks at a slower rate (than a top-fuel-drag-racing desktop) because the whole system is designed around wide issue and moderately high clocks, rather than narrow channels and fanatical clock speeds that bleed away extra power.

GPUs don't clock at desktop CPU rates and still get more aggregate FLOPs of work done.

Apple's P cores can issue around 10 uOps per clock; AMD/Intel around 6-8. So Apple is roughly 25% wider on issue. If AMD/Intel clocks are 20% higher, are they really doing more work every second?
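Back-of-envelope in Python, using ballpark width and clock figures (not measured numbers):

```
# Peak issue throughput = issue width x clock. Ballpark figures only.
apple_width, apple_clock_ghz = 10, 3.7   # ~10-wide P core, approximate clock
x86_width, x86_clock_ghz = 8, 4.4        # ~8-wide, ~20% higher clock

apple_peak = apple_width * apple_clock_ghz  # ~37 billion uOps/s ceiling
x86_peak = x86_width * x86_clock_ghz        # ~35 billion uOps/s ceiling
print(f"Apple ~{apple_peak:.0f} vs x86 ~{x86_peak:.0f} G-uOps/s")
# The 20% clock edge does not buy more peak work once the other design
# is meaningfully wider -- and peak issue is only a ceiling anyway.
```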

The other problem is that a faster clock often means a bigger area footprint for the core. Intel has to double or triple up on E-cores in part because their P-cores are so big. And those E-cores can't 'top fuel drag race' clock either.
 

Boil

macrumors 68040
Oct 23, 2018
3,251
2,876
Stargate Command
  • Generation 2 ASi Mac Pro
  • 3nm process / A17-based cores
  • M3 Ultra / M3 Extreme SoCs
  • Hardware Ray-Tracing
  • ECC LPDDR5X RAM
  • PCIe 5.0
  • ASi GPGPUs
 

ZombiePhysicist

macrumors 68030
May 22, 2014
2,785
2,684
Folks thinking that Apple is going to whittle down future versions so the Vision is just as light as normal glasses ... blah blah blah. Same boat as whittling down the Mac Pro so it is a 12" MacBook. Just not going to happen and still be the same product coverage.

Not much difference these days, sadly.
 

ZombiePhysicist

macrumors 68030
May 22, 2014
2,785
2,684
Again out of context. Gruber made a fallacious statement of effectively "GPU == AI training processing". Really a rather shallow read on the topic, which eventually gets outlined later by all the layers of AI going on in the Vision Pro.
So the "renting GPUs on the cloud" is really "renting GPUs on the cloud to do AI". Nvidia offers AI cloud compute. Apple doesn't. Apple didn't in the Intel era. Apple isn't in the M-series era. No deep mystery there unless drifting off into alternative universes.

Apple isn't in the AWS/Azure/Google Cloud business at all. Apple has some modest iCloud services for your devices that largely do not involve 'heavy compute services'. And they have Xcode Cloud ... which is not particularly different from what folks like MacStadium/Macminicolo/etc. have offered for years: "rent a Mac at a remote location". (Xcode Cloud has specifics built into Xcode to make the whole remote build/test/etc. process easier/cleaner, but it is really renting headless remote Macs.)


Folks are smoking a lot of strong something if they think Apple is going to get into the business of renting out computer subcomponents over the Internet. It is really not what they do.

Yea, I think you're the one in the fog on this one.
 