
majus

Contributor
Original poster
Mar 25, 2004
480
427
Oklahoma City, OK
Will someone with a close professional connection to Gurman, Kuo or anyone else please ask them to start spilling what they know about the new Mac Pro? I am so tired of waiting. Date, technical details, and pricing if possible. I know this is a big ask but the information is there somewhere.

Thank you,
 

deconstruct60

macrumors G5
Mar 10, 2009
12,309
3,902
Will someone with a close professional connection to Gurman, Kuo or anyone else please ask them to start spilling what they know about the new Mac Pro?

They can't spill what they don't know. There are signs now that back in March, when Apple said something to the effect of "we will talk about the Mac Pro later", they didn't have a tight grip on when 'later' was either. Nobody can leak a date that perhaps no one inside Apple knows. (When was AirPower going to work? The real answer was "don't know", even while Apple was sneak-peeking it and ordering up boxes to ship it in.)

Kuo has hardly any record of accurate Mac Pro leaks. I suspect the suppliers he mainly tracks, the ones with tight Apple supply chain tie-ins, don't feed much into the Mac Pro supply chain, because it is substantively different from phones, iPads, and lower-end Macs. Or the ones that do know realize they'd be quickly identified as Kuo's leakers, so they say nothing (their paycheck is worth more than the enjoyment of gossiping with Kuo). Pretty sure he had nothing on the Pro Display XDR either (again, an extremely narrow supply chain).

Gurman has about zero motivation to sit on anything. He has a weekly gossip column to fill. Often it is padded with "I think" and "I expect" when he has no truly new leaked info to pass along. So again: if they don't know, they can't tell.


If Apple's original plan was an M1 Ultra/Quad for the Mac Pro, and they now don't have the Quad, then they could have painted themselves into another corner and gotten 'stuck'. They would still be figuring out how to get 'unstuck' themselves.

Gurman does appear, at times, to be an unofficial mouthpiece used by Apple sales/marketing to steer expectations. But if Apple has nothing they want to steer, or has much higher-priority products to steer, then they won't feed Gurman much for expectation management.





I am so tired of waiting. Date, technical details, and pricing if possible. I know this is a big ask but the information is there somewhere.

If you can't buy it until mid-Spring 2023 (or later), what is the real problem?

If you need a price point to budget to... just pick one of the current ones. Apple says that the 16-core W5700 MP 2019 is the most popular configuration. They are extremely likely to have some configuration that hits that price point. Budget to that and you will have approximately enough money, if you have to "save up for it".

The only thing likely to see a radical drop in price is a current MP 2019 configuration that has an Afterburner card. That functionality will be baked in "for free" (you can already see it in the Ultra SoC, so it should come as no shocker).

Also very likely, the Mac Pro will start out in entry configurations with an Ultra; so something incrementally better than the current Ultra. (We can see M2 is a bump; A15 -> A16 is a bump.) Not sure why anyone would expect otherwise.
 

Colstan

macrumors 6502
Jul 30, 2020
330
711
I'll get right on it, just after I have my psychic prognosticator channel JFK, Caligula, Sun Tzu, and Pope Gregory III.

Assuming you're looking for an actual answer: the Mac Pro and Mac mini are low-volume products, which makes it harder to glean information about them. Also, one of the more reliable leakers, Ross Young, is a display analyst, so he has decent scoops on Macs with built-in screens. Obviously, neither the Mac Pro nor the Mac mini has one.

We have one forum member who knew about the Mac Studio a week or so before it was announced. They said that the Apple Silicon Mac Pro had a single PCIe slot, and when an AMD graphics card was plugged into it, it didn't work. However, this was likely a pre-release product that may never see the light of day, much like the rumored Xeon refresh whose day has passed.

Anything else is just conjecture. I tend to think that the 2019 Mac Pro is the final stand of the x86 Mac, the Last of the Mohicans, and we shan't see the likes of it again. If you want the analysis of an actual CPU engineer, then consider former Opteron architect Cliff Maier, who knows the engineers at Apple from his time at AMD and Exponential and talks with his old colleagues regularly. When I asked him about the Apple Silicon Mac Pro, he replied with the following:

It’s possible that apple allows slotted ram and puts its own gpu on a separate die, sure. But if it does that it will still be a shared memory architecture. I would say there’s a 1 percent chance of slotted RAM. An independent GPU is more likely; the technical issues with that are not very big, but the economics don’t make much sense given apple’s strategy of leveraging its silicon across all products. Still, I’d give that a 33 percent chance. And it wouldn’t be a plug in card or anything - just a separate GPU die in the package using something like fusion interconnect. Maybe for iMac Pro, Mac studio and Mac Pro.

So, he believes there is a 1% chance of DIMMs, and a 33% chance of a discrete GPU that is still on-package, just not integrated into the SoC.

Replying to my follow-up question about the GPU being third-party or Apple-designed, his response was:

Yeah, definitely their own design. I’m quite convinced they like their architecture, and that they have been working on ray tracing. Given how parallelizable GPU stuff is, it’s quite possible that they simply put together a die that is just made up of a ton of the same GPU cores they have on their SoCs. You could imagine that, for modular high end machines, instead of partitioning die like: [CPU cores+GPU cores][CPU cores+GPU cores]… it may make more economic sense to do [CPU cores][CPU cores]…[GPU cores][GPU cores]…. (Or, even, [CPU cores+GPU cores][CPU cores+GPU cores]…[GPU cores]…

As far as the economics are concerned:

It may also make more engineering sense, in terms of latencies, power supply, and cooling, too. Of course, Apple wouldn’t do that if it was only for Mac Pro (probably) because the economies of scale wouldn’t work (plus, now, supply chains are fragile). They might do it if it made sense to use this type of partitioning for iMacs, iMac Pros, Studios, Mac Pros, and maybe high end MacBook Pros, while using the current partitioning for iPads, iPhone Pros (maybe), Mac Minis, MacBook Pros, MacBooks, and maybe low end iMacs.

Not saying they will, but at least i give it a chance. More of a chance than RAM slots or third-party GPUs.

So, according to this veteran CPU architect, if Apple does include a GPU alongside the SoC, it's going to be their own design, not AMD or Nvidia; it won't be available as an add-on board; and Apple will only implement it if they can leverage it in multiple products.

If you want further clarification, feel free to ask him yourself; he's quite chatty and answers all questions.

I tend to agree with everything he says, and I think the current Mac Pro is a product of Intel's design philosophy with the Xeon platform, not Apple's new approach with Apple Silicon. With the current design, Apple was able to leverage Intel's niche, but still high-volume, Xeon family: Apple designed the case and implemented the MPX modules using mostly third-party chips. The Apple Silicon Mac Pro is going to belong to a niche of a niche, so I think it will likely be a different product from what we are currently familiar with, one more along the lines of the entire Apple Silicon Mac lineup as it currently stands.

And I say all this as someone who just got a 2019 Mac Pro two days ago.
 

deconstruct60

macrumors G5
Mar 10, 2009
12,309
3,902
When I asked him about the Apple Silicon Mac Pro, he replied with the following:

... It’s possible that apple allows slotted ram and puts its own gpu on a separate die, sure. But if it does that it will still be a shared memory architecture. ..... An independent GPU is more likely; the technical issues with that are not very big, but the economics don’t make much sense given apple’s strategy of leveraging its silicon across all products. Still, I’d give that a 33 percent chance. And it wouldn’t be a plug in card or anything - just a separate GPU die in the package using something like fusion interconnect ...

So, he believes there is a 1% chance of DIMMs, and a 33% chance of a discrete GPU that is still on-package, just not integrated into the SoC.

To be a discrete GPU (dGPU) it would have to meet two criteria. First, it would have to be a separate die. Second, it would have to have its own independent memory. If it doesn't have both, then it isn't really discrete in any substantive sense. (There might be a corner case if it sat on a physically separate board that you could pull from the main logic board.)

Only the die criterion fits there. Apple shipping a "GPU cores only" die, with all the other aspects of the two-die M1 Ultra package, would still be an integrated GPU (iGPU). Two, three, or four dies inside the same package is just a manufacturing disaggregation mechanism (it can be cheaper to make several smaller chips and 'glue' them together), but that doesn't have to have any impact on the integrated/discrete status of the RAM. If it is all 100% shared memory, then it is integrated.
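To make that concrete, here is a quick sketch of the two-criteria test (my own illustrative framing; the type and names are hypothetical, not anything from Apple):

```swift
// Sketch: a GPU is only "discrete" if it is BOTH a separate die
// AND has its own independent memory pool. (Illustrative only.)
struct GPUTopology {
    let separateDie: Bool
    let independentMemory: Bool

    var isDiscrete: Bool { separateDie && independentMemory }
}

// A hypothetical "GPU cores only" chiplet inside an Ultra-style package:
// separate die, but 100% shared unified memory -> still an iGPU.
let gpuChiplet = GPUTopology(separateDie: true, independentMemory: false)
print(gpuChiplet.isDiscrete) // false

// A classic PCIe card with its own VRAM meets both criteria.
let pcieCard = GPUTopology(separateDie: true, independentMemory: true)
print(pcieCard.isDiscrete)   // true
```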


Replying to my follow-up question about the GPU being third-party or Apple-designed, his response was:
... Given how parallelizable GPU stuff is, it’s quite possible that they simply put together a die that is just made up of a ton of the same GPU cores they have on their SoCs. You could imagine that, for modular high end machines, instead of partitioning die like: [CPU cores+GPU cores][CPU cores+GPU cores]… it may make more economic sense to do [CPU cores][CPU cores]…[GPU cores][GPU cores]…. (Or, even, [CPU cores+GPU cores][CPU cores+GPU cores]…[GPU cores]…


As far as the economics are concerned:

What that is covering is a possibly more economical way of making a very large iGPU, not a discrete one at all. If Apple built two CPU-focused chiplets (10 and 20 cores) and two GPU-focused chiplets (32 and 64 cores), they could mix and match across a wider set of products:

(C = CPU die, G = GPU die)

10C + 32G
10C + 64G
10C + 10C + 10C + 32G
10C + 32G + 32G + 32G
20C + 32G
20C + 20C + 10C + 32G
20C + 64G
20C + 64G + 64G + 64G
20C + 10C + 32G + 32G

Instead of having to make nine different die masks and produce just the right number of dies in each category, they could create just four dies and combine them into different packages from a shared pool of finished dies. (If one package starts to outsell another... just make more of it from the stockpiled pool of dies.)
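Rough sketch of that shared-pool idea, using the hypothetical core counts from the list above (just arithmetic, not a leak):

```swift
// Four die types, combined into packages from one shared pool.
// Core counts are the hypothetical numbers from the list above.
struct Die {
    let name: String
    let cpuCores: Int
    let gpuCores: Int
}

let cpuDies = [Die(name: "10C", cpuCores: 10, gpuCores: 0),
               Die(name: "20C", cpuCores: 20, gpuCores: 0)]
let gpuDies = [Die(name: "32G", cpuCores: 0, gpuCores: 32),
               Die(name: "64G", cpuCores: 0, gpuCores: 64)]

// Aggregate core counts for any package assembled from the pool.
func describe(_ package: [Die]) -> String {
    let cpu = package.reduce(0) { $0 + $1.cpuCores }
    let gpu = package.reduce(0) { $0 + $1.gpuCores }
    let parts = package.map(\.name).joined(separator: " + ")
    return "\(parts) -> \(cpu) CPU cores, \(gpu) GPU cores"
}

// Every 2-die package (one CPU die + one GPU die)...
for c in cpuDies {
    for g in gpuDies {
        print(describe([c, g]))
    }
}
// ...and a 4-die example from the same pool, no new mask needed.
print(describe([cpuDies[1], cpuDies[0], gpuDies[0], gpuDies[0]]))
```

Four masks, many SKUs; the bin of finished dies absorbs demand shifts between packages.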


All of that would be packaged up as dual or quad dies in a single SoC. If all of these dies are mounted inside the SoC package, then nothing is 'slotted' in, especially if Apple is trying to hit industry-leading Perf/Watt. The 'modularity' is in the package-construction context, not in the end-user deployment sense.

If they hang the I/O ports off a different die than the CPU, then they don't end up with redundant elements (multiple Secure Enclaves, multiple SSD controllers, multiple 4x1 PCIe v4 complexes, etc.). That is where some end-usage modularity might appear, if Apple bumped up the common I/O subset to 2-4 x8/x16 PCIe v4 complexes to provision out far more generally useful lanes.
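For scale, some back-of-envelope lane math (the complex counts are this post's speculation, not a spec; the per-lane figure is standard PCIe 4.0):

```swift
import Foundation

// Aggregate bandwidth for a few hypothetical I/O-die configurations.
// PCIe 4.0: 16 GT/s per lane with 128b/130b encoding ≈ 1.97 GB/s/lane.
let gbPerLane = 1.97
let configs = [
    ("4 x1 complexes (current-style)", 4 * 1),
    ("2 x8 complexes (speculative)", 2 * 8),
    ("4 x16 complexes (speculative)", 4 * 16),
]
for (label, lanes) in configs {
    let gbps = Double(lanes) * gbPerLane
    print("\(label): \(lanes) lanes ≈ \(String(format: "%.0f", gbps)) GB/s")
}
```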



So, according to this veteran CPU architect, if Apple does include a GPU alongside the SoC, it's going to be their own design,

When he says "a separate die inside the package", that isn't "alongside" at all in any significant sense. You are conflating disaggregation with disintegration. Those are two different things.



not AMD or Nvidia; it won't be available as an add-on board; and Apple will only implement it if they can leverage it in multiple products.

There is not a lot of good evidence for an Apple "add-on" board either. There is lots of evidence that Apple likes their own GPU more, and they are spending tons of effort, deploying new tools, and running developer education campaigns to push folks to optimize for iGPUs. All the inertia of the Apple GPU is as an iGPU (the optimizations are about leveraging the shared and tile memory). Tile memory is the only sub-area where things get somewhat akin to discrete-GPU optimizations, since that GPU-local area of memory is mostly separate; but it is more of a cache than a large independent working pool, and it isn't really shared across all the GPU cores either.

Most of Apple's upper-half desktop lineup isn't going to take the very top end of what AMD or Nvidia are going to offer going forward either. There is no room in a Studio, and a pretty good chance of no room in an "iMac Pro", even if they return with one in a slimmed-down chassis in a year or two.

The problem at the moment is that they are dependent upon the upper-end laptops. Doubling a laptop-optimized Max-class die happens to work OK for an Ultra inside a Mac Studio. But if Apple is going to get to a broader upper-half desktop lineup, they need something modularly configurable for a wider range than what the laptop die is targeted at. However, it still would have to cover much more than just the Mac Pro; the Mac Studio would have to be inside the scope to get enough volume, and perhaps some other desktops too.

If they can make the disaggregated dies and packaging very Perf/Watt effective, then perhaps they can take out the monolithic Max-class die also. That would boost the chiplet volume produced even more, which would be a more stable economic foundation over the long term. But it isn't going to create Threadripper or Xeon W 3x00 "killer" SoCs, or some deep commitment to AMD dGPUs for GUI workloads.

Where Apple's approach has bigger issues is compute workloads that can easily be distributed over multiple GPGPU cards: not just a single-GPU workload, but ones that partition and farm out work in closer to "embarrassingly parallel" fashion. Apple's GPU can't scale past a package. Apple can make a very big package with TSMC's latest CoWoS-LSI packaging, but it is going to remain limited.

Compare 2-3 very high-end AMD/Nvidia GPUs lashed together with Infinity Fabric/NVLink; there is no good indication that Apple's UltraFusion links can scale like IF/NVLink can. Metal is somewhat of a problem too, because it tends to intertwine GUI with compute. (Deprecating OpenCL is a limitation; having no portable, compute-focused API is a problem.)
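The pattern in question, as a sketch (CPU threads stand in for GPGPU cards here, and every name is hypothetical; a real setup would dispatch Metal/CUDA kernels per device): the work splits into fully independent slices, so it scales across devices with no inter-device traffic, which is exactly where a single shared-memory package tops out.

```swift
import Foundation

// "Embarrassingly parallel" farm-out: each device gets an independent
// slice, no cross-device communication needed. Illustration only.
let deviceCount = 3       // e.g. three cards lashed together
let totalItems = 12_000   // independent work items

DispatchQueue.concurrentPerform(iterations: deviceCount) { device in
    let chunk = totalItems / deviceCount
    let slice = (device * chunk)..<((device + 1) * chunk)
    var acc = 0.0
    for i in slice { acc += Double(i).squareRoot() } // stand-in kernel
    print("device \(device): \(slice.count) items, partial sum \(acc)")
}
```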
 

Colstan

macrumors 6502
Jul 30, 2020
330
711
When he says "a separate die inside the package", that isn't "alongside" at all in any significant sense. You are conflating disaggregation with disintegration. Those are two different things.
Thanks for the explanation, but I was speaking in generalities as a layman. I'm just the messenger, relaying what a veteran CPU architect, the man who wrote the draft for x86-64, who happens to be friends with many of the Apple Silicon engineers and talks with them regularly, has to say on the matter. While I appreciate your feedback here, I suggest talking with him about it. Personally speaking, I would be much more interested in reading an in-depth conversation between the two of you, rather than correcting my inaccuracies on the subject, which I fully admit is a result of my ignorance in regards to semiconductor design and implementation.
 

Boil

macrumors 68040
Oct 23, 2018
3,283
2,899
Stargate Command
12C/40G | 12C/40G
80G | 80G

And maybe we also get PCIe add-in cards from Apple, MPX GPGPUs with 160G & 320G options...?
 

deconstruct60

macrumors G5
Mar 10, 2009
12,309
3,902
Thanks for the explanation, but I was speaking in generalities as a layman. I'm just the messenger, relaying what a veteran CPU architect,

That is the point. You were not just relaying. He was saying there probably isn't a path to a discrete GPU, and you are trying to take his words and push that rock up the hill. That doesn't work. It isn't just you: the notion of "well, Apple has gotta have a dGPU" runs rampant through many threads here at MR. People keep repeating it over and over, as if saying it enough times will make it true. That is not necessarily going to happen.


the man who wrote the draft for x86-64, who happens to be friends with many of the Apple Silicon engineers and talks with them regularly, has to say on the matter. While I appreciate your feedback here, I suggest talking with him about it.

If it is your misinterpretation of his words, what does talking to him accomplish? I understand what he is saying: Apple is extremely happy with their iGPU approach. He has said that in multiple threads in the Apple Silicon forums at MacRumors (MR). Apple's "problem" is how to make one really big GPU, and his points address how to get there with some economic reality. That still doesn't push them into dGPU land.

The China Times "Lifuka" rumors said that the GPU would be coming with the iMac. Well, the M1-generation iMacs have already shipped:

iMac 21.5" --> iMac 24" M1 ( iGPU)
iMac 27" ---> Mac Studio ( iGPU . one of those is a two die iGPU. Two dies but not discrete in the classic GPU sense. ).

Lifuka is an island in the Tongan island system; in the sense of which country it belongs to, it is not discrete. Apple has shipped their iMac replacements, so what they are going to do really shouldn't be in "rumor" status anymore. It is done. Could there have been a prototype with an alternative dual-die package (a CPU-GPU combo die plus a GPU-cores-only die)? Perhaps, but if it was based on the same principles and used UltraFusion as a foundation, it wouldn't be much different from the Ultra that shipped, in the "is it an iGPU?" sense. That only helps explain how it could have been described inaccurately. Apple has shipped no dGPU drivers at all for macOS on Apple Silicon.

And yet the "Lifuka dGPU" rumor keeps rising from the dead like a vampire. That China Times rumor was/is a "miss". I understand why the hyper-modularity folks want something like that to be true, but at this point there are no credible leaks supporting it.


Personally speaking, I would be much more interested in reading an in-depth conversation between the two of you, rather than correcting my inaccuracies on the subject, which I fully admit is a result of my ignorance in regards to semiconductor design and implementation.

I think Apple's big disconnect from dGPUs going forward is at least as much software-based as it is about silicon implementation details. If, over the very long term, Apple completely gives up on dGPUs or dGPGPUs, then software issues will explain more of the core motivations. As for the notion of Apple putting all the cores in a single (perhaps large) package, there is not a big gap between Maier and me on that particular issue.



P.S. From those other threads you linked, the Craig Hunter charts on the "off the charts" Ultra performance on NASA's USM3D CFD code are nice, and they light a fire under "what is Apple going to do with quad dies", but they miss the pragmatic question of a supercomputer with no ECC memory. That is the quandary Apple is in. They can push the envelope, but it trends toward covering a narrower area. Keeping up in the "GFLOPS for pixel processing" wars with AMD/Nvidia? Yes. Going to see Macs in TOP500 supercomputer deployments? No.
 

Colstan

macrumors 6502
Jul 30, 2020
330
711
That is the point. You were not just relaying. He was saying there probably isn't a path to a discrete GPU, and you are trying to take his words and push that rock up the hill. That doesn't work. It isn't just you: the notion of "well, Apple has gotta have a dGPU" runs rampant through many threads here at MR. People keep repeating it over and over, as if saying it enough times will make it true. That is not necessarily going to happen.
I'm confused: do you think that I expect a discrete GPU from Apple, or that I don't expect one? Personally, I don't know what Apple is going to do; that's why I asked Cliff and posted his response here. If I gave the impression that I knew the answer, or believed I knew the answer, then I apologize. My intention wasn't to mislead anyone, including myself.
If it is your misinterpretation of his words, what does talking to him accomplish? I understand what he is saying.
Again, if I was misleading anyone, then I apologize. That has nothing to do with me suggesting you speak with him. I suggested such because he is a knowledgeable person, and you certainly seem to be, so I'd be curious to see you two interact. It was intended as an invitation to join us over at that forum, take it or leave it, but it was never anything more than that.
And yet the "Lifuka dGPU" rumor keeps rising from the dead like a vampire. That China Times rumor was/is a "miss". I understand why the hyper-modularity folks want something like that to be true, but at this point there are no credible leaks supporting it.
Okay? I'm not sure what we are arguing about. Are we actually arguing about something? If you think that I posted Cliff's words to support the idea that a discrete GPU is going to be made, then that wasn't my intention. If you think that I posted Cliff's words to undermine the idea that a discrete GPU is going to be made, then that wasn't my intention.

I've seen the notion that a dGPU is going to be released with the Apple Silicon Mac Pro many times, just as you have. I was curious about the opinions of an actual CPU architect, so I asked one, then reposted his response here. I never intended to go beyond that. Again, if I gave that impression by being unintentionally imprecise, then I apologize.
 