I'm assuming that for each iteration of the M-chips, there is a limit to the amount of RAM each supports. Are the chips we have now hitting that limit already, or is there more in reserve? The original base M1 topped out at 16GB; the base M2 is available with up to 24GB. Are these chips actually capable of hitting, say, a 32GB ceiling, or are they already maxed out?
RAM density is getting harder to scale than 'compute logic' density. But a bigger issue is that there is a trade-off between aggregate bandwidth and capacity. If you want to get to a TB of memory, then it probably isn't going as fast, and vice versa.
What Apple is building is semi-custom RAM that lands somewhere in the middle. It is somewhat a "poor man's" HBM: less bandwidth than HBM, but higher capacity; higher bandwidth than DIMMs, but lower capacity.
The density issues are going to get bypassed a bit by just stacking the RAM dies higher (like HBM does). That cranks up costs, so there is another trade-off between "as cheap and commodity as possible" and better performance.
Apple goes 'extra wide': the M1 Max has twice as many memory controllers as the MP 2019's W-3200 does. Apple goes very wide like HBM does, and the semi-custom packaging stacks RAM dies as HBM does. But there are limits to that trade-off.
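A rough back-of-the-envelope sketch of where that middle ground sits, using peak theoretical bandwidth (bus width × transfer rate). The bus widths and data rates below are public spec-sheet figures; the exact M1 Max configuration is assumed from reported specs, not anything Apple has documented:

```python
# Peak theoretical bandwidth: bus width (bits) x transfer rate (MT/s).
# All figures are public spec-sheet numbers, not Apple design documents.

def peak_gb_s(bus_bits: int, mt_s: int) -> float:
    """Peak theoretical bandwidth in GB/s."""
    return bus_bits / 8 * mt_s / 1000

configs = {
    "6-ch DDR4-2933 DIMMs (W-3275, MP 2019)": (6 * 64, 2933),   # high capacity, low bandwidth
    "512-bit LPDDR5-6400 (M1 Max, reported)": (512, 6400),      # the 'extra wide' middle ground
    "4-stack HBM2e-3200 (GPU-class)":         (4 * 1024, 3200), # high bandwidth, low capacity
}

for name, (bits, rate) in configs.items():
    print(f"{name}: ~{peak_gb_s(bits, rate):.0f} GB/s")
```

That lands at roughly 141, 410, and 1638 GB/s respectively: the DIMM setup takes the capacity end, HBM takes the bandwidth end, and Apple's wide LPDDR sits in between.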
I don't see how an AS Pro can top out at 1.5TB RAM.
Does Apple even want to go there? The > 1TB capacity was mainly a side effect of repurposing an Intel server chip architecture as a workstation chip. It was a design decision pretty much unilaterally made by Intel. Apple piggybacked on top of it more for the extra 'tax' they could throw on top than out of any deep need to cover that capacity.
Without ECC abilities, > 1TB is at best dubious, if not a huge folly. No ECC in the 256GB-1TB range is a bigger gap than some missing capacity above 1TB.
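To make the ECC point concrete, here is a toy calculation. The FIT rate is purely a placeholder (published field studies of DRAM error rates vary by orders of magnitude); what matters is that expected error exposure grows linearly with capacity, so an un-ECC'd 1.5TB machine carries roughly 100x the exposure of a 16GB one:

```python
# Illustrative only: soft-error exposure scales linearly with capacity.
# FIT_PER_MBIT is a hypothetical placeholder, not a measured rate; only
# the ratios between configurations are meaningful here.

FIT_PER_MBIT = 1.0           # assumed failures per 10^9 device-hours per Mbit
HOURS_PER_YEAR = 24 * 365

def expected_errors_per_year(capacity_gb: float) -> float:
    mbits = capacity_gb * 1024 * 8   # GB -> Mbit
    return FIT_PER_MBIT * mbits * HOURS_PER_YEAR / 1e9

for gb in (16, 256, 1536):
    print(f"{gb:>5} GB: ~{expected_errors_per_year(gb):.1f} expected errors/year")
```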
I guess that an AS machine, compared to an Intel MP with the full 1.5TB squeezed into it, could get away with a lot less, but how much less, and can Apple achieve that limit? And more to the point, how much will that cost? Maybe Intel is the way...
As unified, uniform-access memory, > 1TB is likely out of reach for Apple using the "poor man's" HBM technique.
Apple isn't making only a CPU. The bandwidth drop required to get to > 1TB would negate the iGPU they have. It is extremely unlikely Apple is going to give up their lead in that area just for capacity.
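A quick feasibility check on the capacity side (the per-package figure is an assumption based on the shipping M1 Max, which reportedly reaches 64GB with four LPDDR5 packages):

```python
# On-package LPDDR keeps every die close to the SoC, so package count
# becomes the hard limit. Per-package capacity here is an assumption.

LPDDR_PKG_GB = 16      # assumed: M1 Max 64GB = 4 packages x 16GB
TARGET_TB = 1.5

pkgs_needed = TARGET_TB * 1024 / LPDDR_PKG_GB
print(f"{pkgs_needed:.0f} packages of {LPDDR_PKG_GB}GB to reach {TARGET_TB}TB")
# -> 96 packages: nowhere near fitting around an SoC, so >1TB capacity would
#    have to come from denser, slower, off-package memory instead.
```

Even if per-package density quadrupled, that is still two dozen packages crowded around the SoC, with the bandwidth-per-GB dropping the whole way.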
Apple probably has some pretty accurate demographics on just how many folks are actually using > 300GB, > 500GB, and > 1TB amounts of RAM. If the number of folks in any of those groups is really very, very small, then it is not worth doing triple backflips to undo all the trade-offs just to fit that group, or even to deliver systems to that subgroup. It just 'happened to be almost free' before (as an Intel 'hand me down' from server CPU packages), but now it would not be. In short, if there is no humongous group of server people to pay for server features... then you may not get server 'paid for' features.