
Longplays

Suspended
May 30, 2023
1,308
1,156
It is just so incredibly stupid that Apple couldn't add some DDR5 slots that would be much slower for some extended memory in addition to the fast SOC RAM. This isn't an unsolvable problem like the lack of PCIe GPU support, it's just something they refused to do despite that it's the ONE thing some audio pros really need, and the one thing that would have made this new Mac an actual "pro" machine. PCIe slots can be replaced by thunderbolt devices, despite what so many people claim... audio hardware has largely moved on to thunderbolt solutions, and there's always external thunderbolt PCIe cages if you want to keep using non-thunderbolt devices. But RAM for sample libraries is actually something that's important for some people and there's no easy way around that. Some computers since the '80s have had different banks of RAM that function at different speeds or have more direct access than other memory... this really is just negligence and denial on the part of Apple and it's going to lead to the Mac Pro line being eventually discontinued because it won't be taken seriously anymore. Crying about lack of Nvidia cards is a waste of time, as that ship sailed LONG ago... the RAM situation is the real problem, and it would have been so simple for that not to be the case. What a joke.
Why spend extra R&D on the ~15,000 Mac Pros sold per year?

The SoC approach makes the ~28.6 million Mac buyers per year happy.

That's the approach of

- Qualcomm NUVIA ARM laptops for Windows 11 in 2024
- >1.2 billion smartphones annually
- Nintendo Switch & APUs in PlayStation & Xbox

Apple standardizing on a single LPDDR5 memory type lets it leverage its ~320 million annual Apple device shipments when buying materials.
 
Last edited:
  • Like
Reactions: AlphaCentauri

Serqetry

macrumors 6502
Feb 26, 2023
341
526
Why spend extra R&D on the ~15,000 Mac Pros sold per year?

The SoC approach makes the ~28.6 million Mac buyers per year happy.
Because as it stands now, the Mac Pro is pointless. Either the Mac Studio is good enough or the Mac as a whole isn't good enough... depending on your needs.

It would not have been hard for Apple to extend the SOC design with external slower RAM in addition to the built-in 192GB. There's no excuse.
 

Longplays

Suspended
May 30, 2023
1,308
1,156
Because as it stands now, the Mac Pro is pointless. Either the Mac Studio is good enough or the Mac as a whole isn't good enough... depending on your needs.

It would not have been hard for Apple to extend the SOC design with external slower RAM. There's no excuse.
Swappable parts have never made Apple much revenue, if any.

It actually extends the Mac replacement cycle from 4-6 years to 14-16 years via OCLP.

This is in spite of almost all Mac Pros getting a decade of official support.

The Mac Pro's PCIe slots serve a critical need for the 2023 model's target use case.

How would slower external RAM improve efficiency?

I agree that Apple should use larger LPDDR5 memory capacities like 48GB instead of 12GB. Doing so would allow for 768GB RAM with the M2 Ultra.
 
  • Like
Reactions: heretiq

Quu

macrumors 68040
Apr 2, 2007
3,424
6,832
Connected to what...? The SoC is designed to use LPDDR RAM mounted directly on the package.

As crazy as it sounds, you can actually redesign things to do different things.

What he's suggesting is similar to Intel's Sapphire Rapids HBM chips, which have HBM stacked memory on the package next to the die but also support DDR5 memory slots at the same time, so you use the on-package memory and the slotted memory simultaneously and the system moves data around to make the most use of the faster memory.

Another example is the Intel and AMD support for NVDIMMs where you have 4-8 high-speed DDR4 memory modules and then you have 2 or more NVDIMMs which use Optane memory or NAND flash and offer insanely high capacities (512GB per module etc).

In those cases again the system prioritises data on the actual real RAM modules and uses the NVDIMM as slower memory essentially. Just like tiered caching with the least accessed contents in memory being evicted from DDR to NVDIMM.
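As a very rough sketch of that tiered-caching idea in plain Python (made-up capacities and a simple LRU policy; real systems do this in the memory controller or the OS, not in application code):

```python
# Two-tier memory toy model: a small "fast" tier (on-package RAM / HBM) holds
# the most recently touched pages; cold pages get demoted to a larger "slow"
# tier (DDR5 DIMMs, NVDIMMs, CXL memory...). Capacities are made-up numbers.
from collections import OrderedDict

class TieredMemory:
    def __init__(self, fast_pages=4, slow_pages=16):
        self.fast = OrderedDict()   # page -> data, kept in LRU order
        self.slow = {}              # overflow tier
        self.fast_pages = fast_pages
        self.slow_pages = slow_pages

    def touch(self, page, data=None):
        """Access a page, promoting it to the fast tier."""
        if page in self.fast:
            self.fast.move_to_end(page)           # mark as most recently used
        else:
            self.fast[page] = self.slow.pop(page, data)
            if len(self.fast) > self.fast_pages:
                cold_page, cold_data = self.fast.popitem(last=False)
                if len(self.slow) >= self.slow_pages:
                    raise MemoryError("both tiers are full")
                self.slow[cold_page] = cold_data  # demote least-recently-used
        return self.fast[page]

mem = TieredMemory()
for p in range(8):                    # touch more pages than the fast tier holds
    mem.touch(p, data=f"sample-{p}")
print(sorted(mem.fast), sorted(mem.slow))   # hot pages stay fast, cold pages demoted
```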
 
  • Like
Reactions: ZombiePhysicist

Longplays

Suspended
May 30, 2023
1,308
1,156
As crazy as it sounds, you can actually redesign things to do different things.

What he's suggesting is similar to Intel's Sapphire Rapids HBM chips, which have HBM stacked memory on the package next to the die but also support DDR5 memory slots at the same time, so you use the on-package memory and the slotted memory simultaneously and the system moves data around to make the most use of the faster memory.

Another example is the Intel and AMD support for NVDIMMs where you have 4-8 high-speed DDR4 memory modules and then you have 2 or more NVDIMMs which use Optane memory or NAND flash and offer insanely high capacities (512GB per module etc).

In those cases again the system prioritises data on the actual real RAM modules and uses the NVDIMM as slower memory essentially. Just like tiered caching with the least accessed contents in memory being evicted from DDR to NVDIMM.
How many millions of chips will it cover?

Optimistically, Ultra SKUs likely do not move more than 75,000 units annually.

Where are the economies of scale for that R&D spend?

If not making that R&D investment means <20% of Mac Pro users buy in, but it's a net savings for the whole Mac desktop business unit, then it's a good call for Apple.

Any business looking to cut overhead would phase out the bottom 20% of its least profitable customers/product segments.
 
  • Like
Reactions: heretiq

Quu

macrumors 68040
Apr 2, 2007
3,424
6,832
How many millions of chips will it cover?

Optimistically, Ultra SKUs likely do not move more than 75,000 units annually.

Where are the economies of scale for that R&D spend?

If not making that R&D investment means <20% of Mac Pro users buy in, but it's a net savings for the whole Mac desktop business unit, then it's a good call for Apple.

Any business looking to cut overhead would phase out the bottom 20% of its least profitable customers/product segments.

Well here's the thing, they put a high-bandwidth interconnect on every M1/M2 Max chip and possibly on every Pro chip too. So that they could be interconnected to create an M1 Ultra or an M2 Ultra.

What use is that interconnect? The M1 Max was, I believe, 56 billion transistors, making it very expensive. But it has a niche use case: the M1 Ultra (and now M2 Ultra) chips, of which they'll likely sell fewer than 250,000 units.

But regardless, you can engineer around this problem and make it cost effective. If you look at what AMD did with EPYC, they made their physical layer interchangeable; by that I mean the PCIe connectivity on their chips (128 PCIe lanes) can be used instead for Infinity Fabric chip-to-chip communication in a dual-socket system. Those lanes can also be used for CXL, which allows memory devices to be connected to expand system memory.

So what AMD has done here is they have 1 physical set of pins that can be used for PCIe, Chip communication (like Apple does with 2xM2 Max's to create an M2 Ultra) or CXL which can be used to directly attach more memory.

Apple could have done the same thing. Now I admit the M2 Ultra doesn't have an extra physical layer just sitting around doing nothing like the M1/M2 Max chips do. But they do have PCIe lanes that are wired into the PCIe slots of the Mac Pro and in an AMD system which supports CXL you can actually plug memory modules that are PCIe physical cards into PCIe slots and have that memory accessible to the CPU as normal system memory.

Apple could do that; they chose not to. In fact, I believe the Mac Pro is PCIe 4.0, with no 5.0 or CXL compatibility, and to be honest I don't expect them to ever add CXL support.
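A toy model of that "one physical layer, several protocols" idea, just to make the trade-off concrete (the lane budget is EPYC-like and the function is invented for this sketch, not any vendor's API):

```python
# A fixed budget of SerDes lanes that platform firmware can assign to PCIe,
# a chip-to-chip fabric, or CXL-attached memory. Purely illustrative.
TOTAL_LANES = 128

def allocate(assignments):
    """assignments: dict mapping protocol name -> lanes requested."""
    used = sum(assignments.values())
    if used > TOTAL_LANES:
        raise ValueError(f"over budget: {used} > {TOTAL_LANES} lanes")
    return {**assignments, "unused": TOTAL_LANES - used}

# Single-socket box: mostly PCIe slots plus some CXL memory expanders.
print(allocate({"pcie": 96, "cxl.mem": 32}))
# Dual-socket box: half the lanes become the socket-to-socket fabric instead.
print(allocate({"pcie": 64, "infinity_fabric": 64}))
```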
 

Longplays

Suspended
May 30, 2023
1,308
1,156
Well here's the thing, they put a high-bandwidth interconnect on every M1/M2 Max chip and possibly on every Pro chip too. So that they could be interconnected to create an M1 Ultra or an M2 Ultra.

What use is that interconnect? The M1 Max was, I believe, 56 billion transistors, making it very expensive. But it has a niche use case: the M1 Ultra (and now M2 Ultra) chips, of which they'll likely sell fewer than 250,000 units.

But regardless, you can engineer around this problem and make it cost effective. If you look at what AMD did with EPYC, they made their physical layer interchangeable; by that I mean the PCIe connectivity on their chips (128 PCIe lanes) can be used instead for Infinity Fabric chip-to-chip communication in a dual-socket system. Those lanes can also be used for CXL, which allows memory devices to be connected to expand system memory.

So what AMD has done here is they have 1 physical set of pins that can be used for PCIe, Chip communication (like Apple does with 2xM2 Max's to create an M2 Ultra) or CXL which can be used to directly attach more memory.

Apple could have done the same thing. Now I admit the M2 Ultra doesn't have an extra physical layer just sitting around doing nothing like the M1/M2 Max chips do. But they do have PCIe lanes that are wired into the PCIe slots of the Mac Pro and in an AMD system which supports CXL you can actually plug memory modules that are PCIe physical cards into PCIe slots and have that memory accessible to the CPU as normal system memory.

Apple could do that; they chose not to. In fact, I believe the Mac Pro is PCIe 4.0, with no 5.0 or CXL compatibility, and to be honest I don't expect them to ever add CXL support.
AMD probably moves more Threadripper or EPYC chips in a quarter than Apple sells Ultra chips in a generation.

So where are the economies of scale for that?

One thing that mystifies me is why Apple did not opt for 48GB LPDDR5 packages. That could bump the Ultra's RAM limit from 192GB to 768GB, thus addressing the #1 nitpick that could be fixed without negatively impacting Apple all that much.
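The arithmetic behind that 768GB figure, using this thread's own numbers (the 16-package count is implied by the earlier 192GB-from-12GB-packages figure, not an Apple spec):

```python
# Back-of-the-envelope: if today's 192GB limit comes from 12GB LPDDR5 packages,
# swapping in hypothetical 48GB packages at the same count quadruples the cap.
packages = 192 // 12      # -> 16 packages (assumption from the thread's figures)
print(packages * 48)      # -> 768 (GB)
```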
 
  • Like
Reactions: heretiq

Quu

macrumors 68040
Apr 2, 2007
3,424
6,832
AMD probably moves more Threadripper or EPYC chips in a quarter than Apple sells Ultra chips in a generation.

So where are the economies of scale for that?

Although AMD sells a lot more, the transistor count is a lot lower. Some of their EPYC chips, even the high core count ones, are under 40 billion transistors. Apple's cost to fab an M2 Ultra would be a lot higher for two main reasons: firstly it's 134 billion transistors, and secondly it uses two large monolithic dies vs AMD using 8 small dies and 1 medium die (higher yields, cheaper cost).

So basically AMD's profit margins for their chips are very high. But like I say, you can use one physical layer to allow for multiple use cases and the die area needed to switch how they work is negligible.

If Apple wanted to allow CXL support they could certainly have done so. In fact, I believe they're using a PCIe switch chip on the Mac Pro to take 16 PCIe lanes from their SoC and split them up across the 6 or 7 PCIe slots (I forget how many) the new Mac Pro has. Those chips actually are very expensive. If they're not getting a great deal it could be a thousand bucks just for one of those. But ya know, this is why the systems all cost $6999+.

I don't think these systems are profitable enough on a per-unit cost analysis for Apple to even consider making them, but they bring other benefits. Maybe a big studio buys them and stays on the Mac platform, maybe they port their games and apps to macOS, etc. There are auxiliary, less tangible benefits to having a super tower available in the market.

One thing that mystifies me is why Apple did not opt for 48GB LPDDR5 packages. That could bump the Ultra's RAM limit from 192GB to 768GB, thus addressing the #1 nitpick that could be fixed without negatively impacting Apple all that much.

I totally agree with you. There is likely a technical reason; perhaps Micron is unwilling to make stacked packages that high in capacity due to defect rates spoiling too much production, etc. I'm just guessing.
 

mcnallym

macrumors 65816
Oct 28, 2008
1,182
911
It is just so incredibly stupid that Apple couldn't add some DDR5 slots that would be much slower for some extended memory in addition to the fast SOC RAM. This isn't an unsolvable problem like the lack of PCIe GPU support, it's just something they refused to do despite that it's the ONE thing some audio pros really need, and the one thing that would have made this new Mac an actual "pro" machine. PCIe slots can be replaced by thunderbolt devices, despite what so many people claim... audio hardware has largely moved on to thunderbolt solutions, and there's always external thunderbolt PCIe cages if you want to keep using non-thunderbolt devices. But RAM for sample libraries is actually something that's important for some people and there's no easy way around that. Some computers since the '80s have had different banks of RAM that function at different speeds or have more direct access than other memory... this really is just negligence and denial on the part of Apple and it's going to lead to the Mac Pro line being eventually discontinued because it won't be taken seriously anymore. Crying about lack of Nvidia cards is a waste of time, as that ship sailed LONG ago... the RAM situation is the real problem, and it would have been so simple for that not to be the case. What a joke.
Whilst it's certainly possible, what were you doing prior to the Mac Pro 2019 if you needed more than 128GB of RAM?
The iMac 2020 I believe topped out at 128GB, as did the Mac Pro 5,1 with dual CPUs and the trash can. It's only since the 2019 model that you could load it all into RAM. Or were you not using a Mac before the Mac Pro 2019?

Not saying you wouldn't benefit, but wondering what you were using prior to the Mac Pro 2019.

There is the Intel Sapphire Rapids HBM, which I see was already mentioned whilst I was typing, so there's no real technical reason why Apple couldn't do it if they wanted to. However, I believe those are specific Xeon processors as opposed to every Xeon processor that shipped. I believe that with the latest i9-13900 Intel added ECC support via the W680 chipset rather than offering an entry-level Xeon SKU based on the i9, so even Intel is cutting costs with Xeon: instead of a same-socket Xeon version of the i9, just use the i9.

With the 2023 Mac Pro using the M2 Ultra that it shares with the Studio, and the Ultra being two fused Max dies, Apple would have to have either:

1.) A dedicated SoC for the Mac Pro - I don't see the return on investment for Apple in a dedicated SoC. That's likely another reason the Mx Extreme hasn't arrived.
2.) Support in every Max and Ultra SoC whether it's used or not, which is going to push up costs further on those SoCs, and for what number of users? Intel doesn't seem to think it's cost effective to put it on every Xeon chip, and I don't know how many Xeons Intel ships compared to Apple's Mx Max and Ultra, but I suspect more Xeons worldwide.

This is what I found; I'm not sure how accurate it is, but I presume others who regularly follow this may have better sources.


According to this, Intel shipped 7,710,000 server CPUs in Q4 2021.

According to this, Apple shipped 7.8 million Macs in Q4 2021.

So Intel shipped about as many Xeon chips (I don't know of other server CPUs they would ship) as Apple shipped Macs in total in Q4 2021, near as dammit.

Based on that, I don't see it being economical for Apple to add this to all Mx Max and Ultras if it isn't economical for Intel to do on Xeon. And that Apple Mac figure includes base Mx and Mx Pro models as well, which I would expect make up the majority of Macs shipped.

So I would say it's just not economical for Apple to do so for the numbers required.

I see other people basically said all this whilst I was typing.
 

JouniS

macrumors 6502a
Nov 22, 2020
615
379
One thing that mystifies me is why Apple did not opt for 48GB LPDDR5 packages. That could bump the Ultra's RAM limit from 192GB to 768GB, thus addressing the #1 nitpick that could be fixed without negatively impacting Apple all that much.
Do those 48 GB chips actually exist? Or are they yet another case of someone confusing bits and bytes?

I've understood that 192 GB is as much RAM as can physically fit inside the M2 Ultra package using currently available memory chips. Pretty much everyone who wants more memory uses some kind of memory modules. Not necessarily because they want a modular solution, but because it's not possible to fit enough memory close enough to the processor on a two-dimensional surface.
 

Longplays

Suspended
May 30, 2023
1,308
1,156
Whilst it's certainly possible, what were you doing prior to the Mac Pro 2019 if you needed more than 128GB of RAM?
The iMac 2020 I believe topped out at 128GB, as did the Mac Pro 5,1 with dual CPUs and the trash can. It's only since the 2019 model that you could load it all into RAM. Or were you not using a Mac before the Mac Pro 2019?

Not saying you wouldn't benefit, but wondering what you were using prior to the Mac Pro 2019.

There is the Intel Sapphire Rapids HBM, which I see was already mentioned whilst I was typing, so there's no real technical reason why Apple couldn't do it if they wanted to. However, I believe those are specific Xeon processors as opposed to every Xeon processor that shipped. I believe that with the latest i9-13900 Intel added ECC support via the W680 chipset rather than offering an entry-level Xeon SKU based on the i9, so even Intel is cutting costs with Xeon: instead of a same-socket Xeon version of the i9, just use the i9.

With the 2023 Mac Pro using the M2 Ultra that it shares with the Studio, and the Ultra being two fused Max dies, Apple would have to have either:

1.) A dedicated SoC for the Mac Pro - I don't see the return on investment for Apple in a dedicated SoC. That's likely another reason the Mx Extreme hasn't arrived.
2.) Support in every Max and Ultra SoC whether it's used or not, which is going to push up costs further on those SoCs, and for what number of users? Intel doesn't seem to think it's cost effective to put it on every Xeon chip, and I don't know how many Xeons Intel ships compared to Apple's Mx Max and Ultra, but I suspect more Xeons worldwide.

This is what I found; I'm not sure how accurate it is, but I presume others who regularly follow this may have better sources.


According to this, Intel shipped 7,710,000 server CPUs in Q4 2021.

According to this, Apple shipped 7.8 million Macs in Q4 2021.

So Intel shipped about as many Xeon chips (I don't know of other server CPUs they would ship) as Apple shipped Macs in total in Q4 2021, near as dammit.

Based on that, I don't see it being economical for Apple to add this to all Mx Max and Ultras if it isn't economical for Intel to do on Xeon. And that Apple Mac figure includes base Mx and Mx Pro models as well, which I would expect make up the majority of Macs shipped.

So I would say it's just not economical for Apple to do so for the numbers required.

I see other people basically said all this whilst I was typing.
Apple Mac chip distribution is likely this

- M*: ~80%
- M* Pro: ~17%
- M* Max: ~2%
- M* Ultra: ~1%

2022 worldwide Mac shipments were estimated at 28.6 million, split into ~5.72 million desktops (~20%) and ~22.88 million laptops (~80%).

Or ~22.88 million M chips vs ~5.72 million Pro/Max/Ultra chips.

Why do M chips sell better than their higher-end counterparts? They're "good enough" and start at under $1.3k.

When you break down the annual shipment numbers, you start seeing why spending extra, unnecessary R&D on ~1% of all Macs sold does not make any financial sense.
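Roughly, using the 28.6 million estimate and the share guesses above (all of these are estimates, not Apple data):

```python
# Rough breakdown of estimated 2022 Mac shipments by chip class.
total_macs = 28_600_000
shares = {"M*": 0.80, "M* Pro": 0.17, "M* Max": 0.02, "M* Ultra": 0.01}
for chip, share in shares.items():
    print(f"{chip:9s} ~{int(total_macs * share):>10,} units/year")
# M* Ultra works out to only ~286,000 units/year, and the Mac Pro is a
# fraction of that - the base over which any Mac Pro-only R&D must be amortized.
```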

Abandoning the bottom 20% of that ~1% of Ultra users would yield a net savings for Apple.

Sure, the Ultra will never win the synthetic benchmarks that only online forum users care about, but the SoC approach improves the user experience of the >99% of Mac users who likely buy more frequently than every 4-6 years.

Who wants to service a customer who replaces their Mac every 14-16 years because OCLP allows for more than a decade of unofficial macOS updates, and whose swappable CPU, dGPU, eGPU, RAM, SSD & logic board part sales Apple will never enjoy?
 
Last edited:

Longplays

Suspended
May 30, 2023
1,308
1,156
Do those 48 GB chips actually exist? Or are they yet another case of someone confusing bits and bytes?

I've understood that 192 GB is as much RAM as can physically fit inside the M2 Ultra package using currently available memory chips. Pretty much everyone who wants more memory uses some kind of memory modules. Not necessarily because they want a modular solution, but because it's not possible to fit enough memory close enough to the processor on a two-dimensional surface.
 

Serqetry

macrumors 6502
Feb 26, 2023
341
526
Swappable parts have never made Apple much revenue, if any.
This has nothing to do with making Apple more revenue, other than people deciding to buy the new Mac Pro because it will actually meet their needs. This is about extending the maximum RAM the Mac Pro can use, nothing more... because it is necessary for certain applications.

The Mac Pro's PCIe slots serve a critical need for the 2023 model's target use case.
Much less critical than being able to go above 192GB of RAM. The need for PCIe slots is exaggerated by people who don't even know what actual Mac Pro owners want or need.

How would slower external RAM improve efficiency?
Exactly what I said, and many other audio people have said. You need it to load huge sample libraries into memory. Obviously there are many other uses as well. You don't need fast RAM for something like that. It could be slower than the memory in a 2019 Mac Pro and you wouldn't notice the difference in audio applications.

You're just defending Apple's dumb decision to leave the maximum RAM cut off at what the SOC supports. It's a dumb decision, period... and it's causing a lot of people to stick with their 2019 Mac Pro because the new one just won't work for them at all.
 

macguru9999

macrumors 6502a
Aug 9, 2006
786
363
So... audio engineers need more RAM to load plug-ins? When I did some googling, I read that the M1 memory bandwidth is about 68GB/s and the M2 about 100GB/s (that's bytes, not bits). I THINK I read PCIe x16 is about a tenth of this (64Gb/s or 8GB/s).

So, question: would a 512GB x16 PCIe 4 cache card in a 2023 Mac Pro be able to provide the RAM cache performance that these users require? Just asking; I figured that at 10% of the speed of the onboard RAM it would still be roughly 10x the speed of an SSD. Experts, please comment!
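For scale, a rough back-of-the-envelope (PCIe 4.0 is usually quoted at roughly 2GB/s per lane, so an x16 link is nearer 32GB/s than the 8GB/s guessed above; the SSD and unified-memory figures are ballpark too):

```python
# Rough bandwidth comparison for a hypothetical PCIe 4.0 x16 RAM card.
pcie4_x16 = 16 * 2.0        # ~32 GB/s of usable throughput for a x16 slot
m2_ultra_ram = 800.0        # GB/s advertised unified memory bandwidth
fast_nvme_ssd = 7.0         # GB/s for a top-end PCIe 4.0 NVMe SSD
print(pcie4_x16 / m2_ultra_ram)   # ~0.04: about 4% of on-package bandwidth
print(pcie4_x16 / fast_nvme_ssd)  # ~4.6: a few times faster than a fast SSD
```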
 

Serqetry

macrumors 6502
Feb 26, 2023
341
526
So... audio engineers need more RAM to load plug-ins? When I did some googling, I read that the M1 memory bandwidth is about 68GB/s and the M2 about 100GB/s (that's bytes, not bits). I THINK I read PCIe x16 is about a tenth of this (64Gb/s or 8GB/s).

So, question: would a 512GB x16 PCIe 4 cache card in a 2023 Mac Pro be able to provide the RAM cache performance that these users require? Just asking; I figured that at 10% of the speed of the onboard RAM it would still be roughly 10x the speed of an SSD. Experts, please comment!
RAM expansion PCIe cards could certainly solve the problem, if DAWs and plugins supported them. When is the last time someone made a RAM expansion card? All I can think of is things like the Apple IIgs and Macintosh IIci... lol.

RAM expansion PCIe cards could solve a lot of things, including swap space that doesn't wear down SSDs, but it's sad that it's come to this. All Apple had to do was add some DIMM slots and their machine would actually be "pro".
 
  • Like
Reactions: macguru9999

Longplays

Suspended
May 30, 2023
1,308
1,156
Such peculiarities like RAM expansion cards existed back in the days:
Dude... be careful about posting YouTube videos on MR... very loud users will start hating you for it, even when the video explains your ideas with higher production value.
 

eflx

macrumors regular
May 14, 2020
190
207
It's not a major selling point and there really is no difference. RAM is RAM. I dunno how these things start but those of us who use hundreds of gigabytes of memory for our tasks are under no illusions.

The main difference that Apple has been touting is the CPU and GPU being able to access the same allotment of memory, a unified memory architecture. And combined with those specs are the fast SSD's they've put on the machines that make accessing data more reliable with consistent latency and bandwidth.

But those SSD's are not special. They put out the same kinds of numbers as other high-end SSD's in Windows machines. And in-fact Apple has been halving the amount of NAND packages on some of their lower-end MacBook's like the new MacBook Air if you have a lower capacity SSD option and that one performs slower than commercially available M.2 drives as a result.

Another thing to keep in mind is we are all looking at these fantastically high memory bandwidth numbers like 800GB/s on the M2 Ultra. But that entire memory bandwidth is not actually available to the CPU.

If you look at the research done on the M1 Max, for example (400GB/s), the CPU can only use 224GB/s maximum. That's still insanely fast, don't get me wrong. But it's in the realm of high-end workstation CPUs like Threadripper Pro.

When it comes to the M2 Ultra, I would say, based on what we know from the M1 Ultra, you will be able to hit 224GB/s with up to 4 cores. And if you are utilising 8 cores, 4 from each of the M1 Max dies that make up the M1 Ultra, you can hit 450GB/s - so a little over half the advertised 800GB/s.

To utilise all of it you would need to utilise the GPU. Again these numbers are great. 450GB/s for example would be class-leading and double a Threadripper Pro system. It's just not quite the 800GB/s Apple touts because that is only accessible under certain load scenarios, mainly ones that utilise the GPU.

But looking past all this memory bandwidth talk. The real crux is the quantity. There has been talk in this thread about people just not having the right workflow, that their setup is unoptimised etc

To that I say, kind of but not really. See, if you're constantly reading things in from the SSD into memory, you just halved your memory bandwidth, so straight away you lost performance. It's much more performant to only be reading from memory for your app, not performing reads and writes simultaneously. Memory is not magic; there is a penalty to simultaneous access in this manner.

Secondly to that, we in this thread are mostly talking about end users, not the software developers who are actually making the apps. If your workflow demands a specific application and the developer is unwilling to rewrite how it functions to better make use of swap and instead expects you to have hundreds of gigabytes of memory for storing assets then you're just out of luck.

In my opinion, the Mac Pro and the G5 PowerMac before it were great workstations that could be used to accelerate almost any task. They gave us dual processors, a lot of PCIe slots with the ability to install multiple graphics cards so we could have lots of monitors and later on accelerate our computing needs. We gained 64-bit addressing and an operating system that could take advantage, which allowed for more than 4GB of system memory, which too opened up a new vista of computing for high-end and professional users.

It wasn't so long ago that you needed 8 hard disks in a type of RAID0 just to edit video and scrubbing your timeline while editing was essentially seeing a slide show of one frame here and there etc - This started the whole industry of proxy workflows, editing a much lower resolution and codec efficient proxy of your real footage just to be able to actually edit it in real-time.

As RAM and processors have gotten better and better things like that have gone away. Now we look at the Mac Pro today. No upgradable RAM, no upgradeable graphics, no upgradable CPU. All of these things could be upgraded on the previous model (yes even the CPU, though not supported by Apple of course).

Not having upgrades can be acceptable, we have all relented on that issue when it comes to the MacBook Pro. But that's because Apple has delivered an actually very compelling laptop with insane performance, battery life and specs when it pertains to storage and memory. 96GB of RAM and 8TB of SSD inside the laptop is class-leading for a notebook.

But 192GB of RAM and 8TB SSD is not class-leading in a workstation. I think 512GB of RAM and up to 32TB of SSD would have been enough to quell most people's concerns that is until we look at the GPU situation. No CUDA, no NVIDIA graphics options, no AMD graphics options for that matter. There is no possibility of buying the amount of computing you need today in this box when it comes to machine learning. Often when we talk about upgrades we're thinking about the longevity of a system but in this case we can't even get the specs we need today, at the point of purchase which has not been a thing with the G5 PowerMac, the 2006 Mac Pro or the 2019 Mac Pro.

What you've got here is a Mac Studio with severely gimped PCIe slots because the most useful card people want to put in there (graphics) doesn't work and the integrated GPU is not powerful enough for the use cases where people would want a dedicated graphics card or multiple cards.

In my next machine learning system, I'm aiming for four GPU's. That could be 4 x 4090's or 4 x A100's etc - The M2 Ultra isn't even the equivalent to one of those. But funnily enough, I can put those kinds of cards in the previous Mac Pro. Maybe not four of them but at least two.

If you think 192GB is enough and no one needs that much or the Mac architecture with M series chips works differently and doesn't need so much memory then why does the MacBook Pro 14" and 16" come with a 96GB RAM option? What laptop have you ever seen come with that much memory from the Intel / AMD world? - There's really no difference, RAM is RAM, your working set needs to be in memory to keep the CPU caches fed or instruction processing stalls and you lose performance.

This is why they are offering such high amounts to begin with, but their SoC design has a physical constraint on how much memory they can physically put so close to the CPU, and the further out you go, the lower the frequency has to be to compensate. This is why there are no physical modules and the RAM is millimetres from the die.

My final point: I think the Mac Studio and Mac Pro with M2 Ultra are great computers. For the tasks they specifically target like video editing. The Mac Pro is perhaps a little less great because the PCIe slots it offers are mostly useless at this point but still if you do Video Editing I'm sure it's incredible with those media encoders etc

Thanks for the thorough reply and the bit of a reality check. I know RAM is RAM, but for my use cases (software development and some graphics manipulation, mostly), when I compare even my Intel-based Mac Pro (a 2019 loaded up with, granted, more than I actually need) to an equivalent Windows machine, the amount of RAM used to complete the same tasks is noticeably less under macOS than in the Windows environment.

That is even more apparent when running on Apple's new ARM-based hardware and OS; things are much more optimized in terms of code, compilers, execution, etc.

That's more where I'm coming from, but when it comes to editing a large multi-GB video file or other very large computational tasks where you can't compress things any further etc. RAM is 'RAM' at that point and if you don't have enough, you're relying on swap.

The specs you pointed out about Apple's slightly misleading bandwidth numbers are interesting; I didn't know that, so thanks for sharing. I also agree with you: the previous Mac Pro (I've got the 12 core, 96GB of RAM and a Vega II Pro 32GB) was much more useful in that sense, with expansion and upgradability, especially when it came to cramming a ton of GPU power into the machine.

I was looking semi-seriously at upgrading from my current Mac Pro to a new one, or even the Mac Studio, for my workload, and for me at least it would be an upgrade in terms of power usage and overall speed versus my Intel Mac Pro ... but at the end of the day, this Intel Mac Pro is more 'valuable' in a sense and does everything perfectly fine. Again, what I've noticed is that the new Apple architecture seems to be more efficient in many areas, including the usage of RAM for the same tasks. But again you're right; if you have a workflow that requires massive amounts of RAM, there's no compiler-plus-hardware-architecture combination you can turn to that will make any real difference in that sense. Clearly short-sighted on my part there :)
 
  • Like
Reactions: Quu

macguru9999

macrumors 6502a
Aug 9, 2006
786
363
I think you should wait until someone tests the M2 Mac Pro for audio. You might find that you don't need to load all your plug-ins into RAM and the performance is really good. Just saying.
 
  • Like
Reactions: AlphaCentauri

MacPoulet

macrumors 6502a
Dec 11, 2012
549
378
Canada
This is a nice anecdotal experience. And I can share my anecdotal experience in a university setting, where we have computers and upgrade parts modularly all the time. The difference is that your use case wouldn't be affected with the 2019 Mac Pro, while my use case is now affected by the non-modularity of the 2023 Mac Pro.
Very interesting! We don’t do it that way as per our service contract with Dell. If we modify the systems for our labs, we lose support from them.

Office computers have a bit more leeway as do systems bought for specific research projects.
 

Yebubbleman

macrumors 603
May 20, 2010
5,844
2,437
Los Angeles, CA
It’s an older post, but I just want to point out that this idea the Mac has been hostile to GPU upgrades is a myth, and largely a rewriting of history.

With the exception of the window from 2013 Mac Pro to the introduction of TB3 & eGPU (and even during that time there was official support for 2012 machines with rx580s), Apple has always supported user-upgrades for graphics on Professional level desktops. That support has narrowed at times, expanded at others, but the company has never been “hostile” to it until AS, where the decision is for the Mac to be a UI skin for bigger iPads.

On any custom-built PC tower, let alone any workstation class system from Dell, HP, or similar, I can put in any GPU I want. I can put in more than one. I can mix and match NVIDIA and AMD, if I so choose. On an Intel Mac Pro tower, my options are drastically more limited. I can do NVIDIA through macOS Big Sur at the absolute latest and, even then, my options are way limited. So, we're left with AMD cards that are probably too new to be really appreciated for a 2012 Mac Pro using a 2010 era Xeon and system architecture and MPX modules specific to a 2019 Mac Pro that is likely not going to be able to support anything newer than the MPX modules that were available up until the time that the 2023 Mac Pro came out. Relative to what I can do on any other Intel based workstation, that's hostile to GPU upgrades in my book.


This isn't at all accurate. The M2 Ultra does not even match an RTX 2080 Ti from 2018 in graphics performance let alone a RTX 3090 or RTX 4090 in compute.

These are all single-GPU cards, not multi-GPU. Take a look at the OpenCL benchmark alone comparing the M2 Ultra and some contemporary graphics cards:

[Attachment: OpenCL benchmark chart comparing the M2 Ultra with contemporary GPUs]
Even the NVIDIA RTX 4060 Ti, which has been panned by reviewers as a waste of sand, beats it. And for anyone looking at professional use, an RTX 4090 on its own is 2.5x faster.

I was comparing what was available from Apple in the 2019 Mac Pro to the M2 Ultra in the 2023 Mac Pro. I don't doubt that NVIDIA has had better, but that's neither here nor there.

Again you are spinning in circles, I don't need hands on, nor to most people, no GPU expansion, it's DOA
Keep trying to downplay that again and again.

Spinning in circles? What does that even mean in this context?

This computer is a tool, not a toy. Unless you have data proving that anyone buying a Mac Pro needs aftermarket GPU expansion or else it's a no-sale, you are assuming that your opinion of what makes a Mac Pro an effective tool applies to everyone.

Yes. Actually. Contrary to popular belief on this site, GPUs are only ONE of SEVERAL things you can stick into a PCIe slot.
 
Last edited by a moderator:
  • Like
Reactions: AlphaCentauri

Basic75

macrumors 68000
May 17, 2011
1,996
2,342
Europe
What he's suggesting is similar to Intel's Sapphire Rapids HBM chips, which have HBM stacked memory on the package next to the die but also support DDR5 memory slots at the same time, so you use the on-package memory and the slotted memory simultaneously and the system moves data around to make the most use of the faster memory.
Intel even gives you three modes for the HBM memory. Apple should have done something similar. https://www.phoronix.com/review/xeon-max-9468-9480-hbm2e
 
  • Like
Reactions: Quu