
Schismz

macrumors 6502
Sep 4, 2010
343
394
I'm losing interest in the new Mac Pro as it's taking so long, so I just snapped up a 4090 for my PC and I'm keen to see what it's like for DaVinci Resolve.
For reference, my Mac Studio Ultra renders out quicker than my PC with a 3090 card, so I'm looking forward to comparing the two.
I don't understand. How is the new Mac Pro "taking so long" ...? They're still well within the standard MP upgrade cycle of kicking another one out the door every half decade to 7 years.
 

orionquest

Suspended
Mar 16, 2022
871
788
The Great White North
Maybe that’s why Apple have been so quiet so they can see what the 4090 can actually do
Apple couldn't care less what Nvidia is doing; I'm sure they are aware of it. If they cared, they would have allowed Nvidia cards all this time; instead they don't. Pretty sure they will make some new hardware which will be close enough, in marketing speak, to whatever Nvidia has.

The real problem here is whether Apple will still allow GPU upgrades for the Mac Pro. As we can see, the 4090 is a great update, and it's just a matter of swapping out GPUs. The way Apple is going, you won't be able to tap into this performance potential at all because everything is integrated, or you'll pay a hefty price to be allowed to do it, as with the current Mac Pro.
 

mattspace

macrumors 68040
Jun 5, 2013
3,202
2,883
Australia
Apologies for being off topic, but I'd love to hear more about your thoughts on PTGui. I use Microsoft ICE for this, which works great, but there is definitely some quality loss that shouldn't be there. It also hasn't been updated in over a decade...

So I have a masochistic streak with Pano shooting - I've always set myself full 360x180 spheres in the most difficult locations, figuring that if I can get it right with the hardest difficulty, I've got it figured. I tend to try shooting on narrow footbridges, for example, where you have to get all the alignment perfect, and more often than not, it doesn't quite work, but occasionally I get one dead-on.

As a stitcher, I've moved to PTGui after a number of years with AutoPano Giga, whose developer folded soon after the GoPro buyout (before that, RealViz / Autodesk Stitcher, and earlier still QTVRAS). Like APG, PTGui is (AFAIK) effectively a front end to Helmut Dersch's PanoTools, from the 1990s. It's software that exists to monetise an existing technology, and that's its biggest problem: the UI/UX follows the tech, not the other way around. This was APG's biggest failing - at its core it was an image-matching engine, and its masking tool was the thing they were effectively selling. People wanted vector shape masking, and that wasn't what they had to offer.

PTGui feels like "Pro" software - in other words, the UI is janky and just suffices to get the job done, because it has to serve a small user base. Capture One has a similar thing: everything feels a little unpolished, limited by the off-the-shelf cross-platform UI toolchain they had to use (Qt or whatever it is). So it works, and you tolerate it, because it gets results.

I think in around 2001 I paid AUD$700+ for RealViz Stitcher, and today PTGui Pro is $465, so it's pretty good value.
 
Last edited:
  • Like
Reactions: gammamonk

t0mat0

macrumors 603
Aug 29, 2006
5,473
284
Home
We have maybe a month or so to wait per their own 2 year transition timeline (Nov 2020-Nov 2022).
Pretty exciting considering what they've been able to achieve with other Apple Silicon.
Chiplet design, fast expensive memory, their work on a high-bandwidth interconnect - if they want to go cutting edge they will, whether it's process node, cost of design, or features: a means to an end of their goals for the product and experience.

Would Apple roll the Mac Pro into what is expected to be an October event, if it's ready? No idea, but we're two more years closer to finding out since Apple Silicon was first announced.
 

deconstruct60

macrumors G5
Mar 10, 2009
12,368
3,936
They could always write a driver for Nvidia cards.

It's just that they don't want to


There are two sides to that issue. One, Apple really can't do a very good job with zero Nvidia help or resources; Apple only wants a "Metal first" approach to drivers. Two, Nvidia only wants a "CUDA first" approach to drivers (and they can't really properly write GPU drivers without some help from Apple either).
Neither one really needs the other financially, so neither side is likely to give in.

The folks who label this "all Apple's fault" are in just about as much denial as the folks who label it "all Nvidia's fault". In the first case, Nvidia was a lousy partner, so Apple dumped them ('halt and catch fire with each new OS release' GPU drivers - that is lousy; public finger-pointing - again, lousy). But Nvidia's actions are a major blocker to a Mac solution as well. They spin it as though they are on "your side", but they are really on their own side first and foremost. Apple is the same, with a similar agenda (Apple solution first).

This overlapping Mac Pro area is really too small. Nvidia can sell a relatively few more data center GPUs and make up the revenue easily. Apple, similarly, on the other side of the spectrum.
 

deconstruct60

macrumors G5
Mar 10, 2009
12,368
3,936
Yes, at the moment we only have speculation. However, the RX 6900XT's performance is similar to the RTX 3090's (except for ray tracing with OptiX), and the RX 7900XT will have two next-generation GPUs on the card.

The RX 7900 having "two GPUs on the card" is likely a now-dated rumor. Recent leaks about the 7900 show only one GPU chip: a single graphics-cores chiplet (GCD) plus multiple memory/cache chiplets (MCDs).

[Image: AMD Navi 31 GPU package diagram]






The notion that the 7900 is going to outcompete the 4090 in total core die area is likely wrong. Nvidia threw the NVLink complex overboard to add more cores to the 4090; Nvidia was manically shooting for the benchmark crown here at any cost. I really doubt AMD is doing the same thing with the 7900. There's a pretty good chance AMD is going for performance/$ rather than performance at any cost. Not that it will be slow, but they probably won't be throwing stuff overboard just to hit some number far out on the diminishing-returns power curve.

AMD will be fully free later to do a "7950" with bigger caches at a higher price point to close some of the performance gap. More data on the new fab process will likely also let them clock it higher into that diminishing-returns power curve to close the gap further. Those aren't going to be high-volume sellers, though; those 7950s won't help them take share.

AMD reportedly also focused this generation on reimplementing the Infinity Cache so they could build monolithic chips with smaller caches and not take a performance hit (a different cache policy and implementation that works better across workloads). That was likely a higher priority than RT hardware.



The "two GPUs on card" is the AMD MI 210.

https://www.anandtech.com/show/17326/amd-releases-instinct-mi210-accelerator-cdna-2-on-a-pcie-card


The CDNA line's evolution probably fits better with the direction Apple is going than the x800/x900 series. There is no graphics output there; it is basically a "compute GPU", so it is not in direct competition with the Apple GPU.
Some folks saw that (it has been out for a while) and thought AMD was going to bring it to the consumer space (RDNA3). They are not. That two-die solution presents as two different GPUs to the OS and applications, and that isn't where AMD is going in the high-end consumer gaming space.

There is a pretty good chance that much of the chiplet die-edge space soaked up by the six memory-chiplet interconnects wipes out any space for Infinity Fabric connections. [i.e., in the picture above, the sides of the GCD die that are covered by MCD chiplets are not going to provision IF at all.] The main core die still has x16 PCIe v5 + CXL 1.x to provision; that is likely the "bottom" free GCD edge above. Then there are 4-6 DP 2.0 streams to provision out the bottom/top free edge as well.

Adding an IF connection on the GCD die would likely come at the cost of some specialized cores (video en/decode, etc.). Because AMD is squeezed for free edge space, there is a pretty good chance they made a similar call to Nvidia's (for different reasons, but tossing the IF link just like NVLink got tossed).

AMD GPU packages with Infinity Fabric may just shrink back to the CDNA die family (and disappear completely from the RDNA family going forward). AMD has enough money at this point not to have to use just one die to pursue both markets. And expect to see CXL 2.0 (over PCIe v5/6) rolled out to RDNA for more CPU+GPU coupling in potentially heterogeneous environments.




It can be expected that they will strongly improve the units responsible for raytracing (which Apple unfortunately does not use).

Some improvement should be expected, but "strongly improve" is likely stretching expectations. It's doubtful AMD is going to try to jump two generations of ray tracing in one leap, in some massive panic to push RT hardware as the absolute #1 objective.

AMD went to chiplets a generation before Nvidia; that is likely the #1 feature focus this generation. If the 2nd-gen RT happens to do better on a limited budget, that will be fine, but AMD is unlikely to throw tons of complexity at RT hardware this iteration (to get to parity with 3rd-gen Nvidia hardware). If the chiplet disaggregation doesn't work extremely well, then this whole Navi31 lineup is a bust, and RT performance by itself can't fix that as a holistic GPU value proposition.

So AMD's goals in RT are more likely in the 20-series/30-series range than the 4090 range.

As far as Apple (and AMD, since they are looped in at the lowest-level driver work) not using the 6000 series' ray tracing ... it remains to be seen whether that is permanently true.


So in theory, the RX 7900XT should be comparable to the RTX 4090, and Apple could use it if they feel that the desktop Apple Silicon GPU cannot compete with Nvidia's new cards.

Use it where? In a 2019 chassis that is two generations behind with x16 PCIe v3? The MP 2019 will likely be discontinued in 1-1.5 years. Are they going to make the R&D investment back on card sales in that time? Questionable.

Or use it with what Apple Silicon SoC? Apple hasn't even done an SoC with x8 PCIe v4, let alone anything with x16 PCIe v4/5 worth of bandwidth. There are zero third-party GPU drivers for the Apple Silicon version of macOS. Are they going to rocket from shipping nothing to super highly tuned, high-performance drivers in less than six months? Very probably not (go look at Intel's drama with its new Arc drivers). If Apple was going to let AMD in the door, they could have done it at WWDC 2022 with drivers for eGPUs via Thunderbolt. They did not. Apple's major "big news" for the driver space was macOS DriverKit drivers working on iPadOS.
 

deconstruct60

macrumors G5
Mar 10, 2009
12,368
3,936
File 1 (runtime 11 min, 14 secs)
An older SD mp4 (720x480) that was super scaled x3 within Resolve, with a 75% amount of temporal noise reduction and general sharpening and colour correction.

3090 did it at 53fps and render time of 5 mins, 5 secs
4090 did it at 92fps and render time of 3 mins, 2 secs
Mac Studio did it at 59fps and render time of 4 mins, 34 secs

studio/4090 = 64%


************

File 2 (runtime 21 min, 05 secs)
An older SD mp4 (720x480) that was super scaled x3 within Resolve, with max amount of temporal noise reduction and general sharpening and colour correction.

3090 did it at 57fps and render time of 9 mins, 7 secs
4090 did it at 107fps and render time of 5 mins, 0 secs
Mac Studio did it at 63fps and render time of 8 mins, 23 secs

studio/4090 = 59%


************

File 3 (runtime 29 min, 16 secs)
A 1080p HD mp4 (1920x1080) that was super scaled x2 within Resolve, with 75% amount of temporal noise reduction and general sharpening and colour correction.

3090 did it at 40fps and render time of 18 mins, 40 secs
4090 did it at 72fps and render time of 10 mins, 39 secs
Mac Studio did it at 44fps and render time of 16 mins, 31 secs

Studio/4090 = 61%


************

File 4 (runtime 31 min, 14 secs)
A 1080p HD mp4 (1920x1080) with a 10% amount of temporal noise reduction and no other correction.

3090 did it at 117fps and render time of 8 mins, 2 secs
4090 did it at 174fps and render time of 5 mins, 26 secs
Mac Studio did it at 80fps and render time of 11 mins, 43 secs

Studio/4090 = 46%



If Apple can get 1.8-1.9x performance out of a "quad" solution over a duo one, then each of those over 50% would be in the same "ballpark" as (or better than) the 4090. It would be seriously more expensive, but the W6800X (and up) MPX modules already are. (See the sketch at the end of this post.)


Nvidia tossed a substantial amount of die space in the 4090 at RT and AI/ML (and incrementally at some video codecs not being used here, e.g. AV1).
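For anyone wanting to reproduce the arithmetic: the ratios above are just Studio fps divided by 4090 fps, and the "quad" projection scales the Studio figure by the assumed 1.8-1.9x. A minimal sketch in Python (the scaling factors are the assumption stated above, not measurements; the ~25fps source rate is inferred from the runtimes, not reported):

```python
# Ratios from the Resolve fps figures quoted above, plus a projected
# "quad" SoC assuming 1.8-1.9x scaling over the duo-die M1 Ultra.
tests = {  # name: (Mac Studio Ultra fps, RTX 4090 fps)
    "File 1": (59, 92),
    "File 2": (63, 107),
    "File 3": (44, 72),
    "File 4": (80, 174),
}

for name, (studio, gpu) in tests.items():
    ratio = studio / gpu
    quad_lo, quad_hi = studio * 1.8, studio * 1.9  # assumed quad scaling
    print(f"{name}: studio/4090 = {ratio:.0%}, "
          f"projected quad = {quad_lo:.0f}-{quad_hi:.0f} fps vs {gpu} fps")

# Sanity check on the reported times: render_time ~= runtime * source_fps / render_fps
# File 1: 674 s of ~25 fps source at 92 fps -> ~183 s, close to the 3 min 2 s reported.
```

On those assumptions, Files 1-3 (all over 50%) land at or above the 4090, while File 4 (46%) still falls short, which is the "ballpark or better" point.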
 

cpnotebook80

macrumors 65816
Feb 4, 2007
1,210
524
Toronto
I rendered some projects on my 3090, then fitted my new (and very large!) 4090 and ran these non-scientific tests for comparison.
I'm not really bothered about other online benchmarks such as gaming and Blender etc., as they're irrelevant to me, but the tests I've done are what I do day in, day out, so any extra performance helps me with my work - and the results are pretty impressive.
For reference, I also did the same tests on my Mac Studio Ultra, which outperforms the 3090 in all but one Resolve test.


PC specs are:
i9-12900KS 3.40 GHz
64GB Ram
Latest PC version of Resolve (18.0.4 build 5)

Mac Studio Ultra
20 core CPU
64 core GPU
64GB RAM
Latest Mac version of Resolve (18.0.2 build 78)

All files were pulled from my NAS over 10Gb Ethernet and rendered out in h264/mp4 format to my PC's secondary SSD and to the Mac's internal SSD.

File 1 (runtime 11 min, 14 secs)
An older SD mp4 (720x480) that was super scaled x3 within Resolve, with a 75% amount of temporal noise reduction and general sharpening and colour correction.

3090 did it at 53fps and render time of 5 mins, 5 secs
4090 did it at 92fps and render time of 3 mins, 2 secs
Mac Studio did it at 59fps and render time of 4 mins, 34 secs

************

File 2 (runtime 21 min, 05 secs)
An older SD mp4 (720x480) that was super scaled x3 within Resolve, with max amount of temporal noise reduction and general sharpening and colour correction.

3090 did it at 57fps and render time of 9 mins, 7 secs
4090 did it at 107fps and render time of 5 mins, 0 secs
Mac Studio did it at 63fps and render time of 8 mins, 23 secs

************

File 3 (runtime 29 min, 16 secs)
A 1080p HD mp4 (1920x1080) that was super scaled x2 within Resolve, with 75% amount of temporal noise reduction and general sharpening and colour correction.

3090 did it at 40fps and render time of 18 mins, 40 secs
4090 did it at 72fps and render time of 10 mins, 39 secs
Mac Studio did it at 44fps and render time of 16 mins, 31 secs

************

File 4 (runtime 31 min, 14 secs)
A 1080p HD mp4 (1920x1080) with a 10% amount of temporal noise reduction and no other correction.

3090 did it at 117fps and render time of 8 mins, 2 secs
4090 did it at 174fps and render time of 5 mins, 26 secs
Mac Studio did it at 80fps and render time of 11 mins, 43 secs

************

I also did a file conversion/upscale of a 17 min, 29 sec clip using the latest version of Topaz Video Enhance, as it stresses the GPU.

3090 did it at 15 min, 22 sec
4090 did it at 13 min, 47 sec
Mac Studio did it at 20 min, 02 sec

************

Overall, I'm very happy with the 4090 *for my use*, considering Resolve doesn't appear to be fully optimized for it yet, the card having been released less than 48 hours ago.
I also expected to pay over £2000 for it, as prices were vague until 2pm on launch day, but I got one for £1699. I paid just £100 less for the 3090 back on its launch day, so with raging inflation, and expensive as these cards are, the price was OK given all the talk of a £2k+ price.
It also seems less noisy than the 3090 - I don't hear the fans spin up as much, and it's much quieter than I was expecting.
The Mac Studio Ultra is a great little machine though, using less power with a far smaller footprint.

Thanks for this - it's super helpful, especially since I used Topaz Video Enhance when I had the Ultra in the spring, and it did render quickly at the time for small projects. The lead with the 4090 has increased, but it will be interesting to see the AMD GPU for comparison one day. It's been two decades since I used a PC, and I wouldn't know where to start if I did make the switch, unless it's a small-form-factor ATX build. But the biggest issue is the power draw. I don't game, but as a content creator for work, time is of the essence.

Also, for interest: I purchased Topaz Video Enhance for personal use and converted an hour-and-a-half low-res 1080p family video to 4K on my 13" M1 MBP, and it took 2 days lol. The file size was like 50GB!
 

4wdwrx

macrumors regular
Jul 30, 2012
116
26
Looking to buy the 4090 FE when it restocks.

I heard current CPUs bottleneck the 4090. Since the Mac Pro's CPU is a few generations back, I'm thinking it would bottleneck worse than 12th-gen Intel and Ryzen 7000 do.
 
  • Like
Reactions: cpnotebook80

Matt2012

macrumors member
Aug 17, 2012
99
76
One thing to note about my tests is that they were done just over a day after the release of the 4090, so I doubt Resolve, and especially Topaz, have been optimised much for it yet.
 

ZombiePhysicist

macrumors 68030
May 22, 2014
2,807
2,707
studio/4090 = 64%




studio/4090 = 59%




Studio/4090 = 61%




Studio/4090 = 46%



If Apple can get 1.8-1.9x performance out of a "quad" solution over a duo one, then each of those over 50% would be in the same "ballpark" as (or better than) the 4090. It would be seriously more expensive, but the W6800X (and up) MPX modules already are.


Nvidia tossed a substantial amount of die space in the 4090 at RT and AI/ML (and incrementally at some video codecs not being used here, e.g. AV1).

All until the 5090 comes out and you're still stuck with the Apple crap and have to, BEST CASE SCENARIO, sell your entire rig for a new Apple one - if they even bother to try to stay competitive (odds of that are low too).

If there is no 3rd party display card support, the Mac Pro is DOA. Full stop.
 

richinaus

macrumors 68020
Oct 26, 2014
2,385
2,141
If the 3080 Ti sold 2-3M units per year, then perhaps. But it does not, so Apple probably is not in a panic. In the Mac Studio intro they explicitly said most Mac Pro buyers got a W5700X. The top, top, top end of the spectrum is not where the bulk of the users are.


If they sell 10-15K fewer Mac Pros per year, that really doesn't matter much to the overall Mac ecosystem if they make progress across the rest of the lineup. (e.g., swap in 30K new MBP 14"/16" users for 15K MP users walking away ... that is far from a net loss in user-base numbers. Or bump <$9K MP sales up 10% while >$11K sales go down 10% ... also not a net loss.)

The next Mac Pro will likely have a "low volume" tax attached to it, because it was always in the sub-100K/year run-rate zone.
it was a joke :) I know they aren't.
 

exoticSpice

Suspended
Jan 9, 2022
1,242
1,951
All until the 5090 comes out and youre still stuck with the apple crap and have to, BEST CASE SCENARIO, sell your entire rig for a new apple one if they bother to try to stay competitive (odds of that are low too).
Keep in mind the 4090 is bottlenecked by 12th-gen Intel and Ryzen 5000 CPUs. The Xeon in the 2019 Mac Pro is pathetic and would 1000% bottleneck the 4090.

The Mac is dead as a professional workstation platform because of it. I don't care if you can upgrade to the 4090 in the 7,1, because the CPU would make the experience crap.

For best results use the 13th gen Raptor Lake or Ryzen 7000 series CPUs when building a PC with the RTX 4090.

The CPU is just as important as the GPU. The best thing about the PC ecosystem is that you can upgrade every component, and that to me is excellent.

Apple is against standards that are pro-consumer, period.

(This is not an 'attack' on you, just pointing out the CPU bottlenecks. Using a 4090/5090 in a 2019 Mac Pro is insulting to the card itself, imo.)
 
Last edited:
  • Like
Reactions: cpnotebook80

ZombiePhysicist

macrumors 68030
May 22, 2014
2,807
2,707
Keep in mind the 4090 is bottlenecked by 12th-gen Intel and Ryzen 5000 CPUs. The Xeon in the 2019 Mac Pro is pathetic and would 1000% bottleneck the 4090.

The Mac is dead as a professional workstation platform because of it. I don't care if you can upgrade to the 4090 in the 7,1, because the CPU would make the experience crap.

For best results use the 13th gen Raptor Lake or Ryzen 7000 series CPUs when building a PC with the RTX 4090.

The CPU is just as important as the GPU. The best thing about the PC ecosystem is that you can upgrade every component, and that to me is excellent.

Apple is against standards that are pro-consumer, period.

(This is not an 'attack' on you, just pointing out the CPU bottlenecks. Using a 4090/5090 in a 2019 Mac Pro is insulting to the card itself, imo.)

I disagree to some degree. Some tasks are basically fully GPU-bound, and if that's what you do, it can be worthwhile. Someone put a 6800 XT in a 5,1 and got a much better frame rate playing games than they did on the Mac Studio. So as always, it depends on what you're using it for.

That said, I agree that you generally want some reasonable balance between the CPU and GPU for a heavy array of tasks.
 

Xenobius

macrumors regular
Dec 10, 2019
179
471
The RX 7900 having "two GPUs on the card" is likely a now dated rumor....

Are you suggesting that the RTX 4090 will be 70-80% faster than the 7900XT? To me that makes no sense at all. Perhaps these 'new' rumours are about the 7700XT rather than the 7900XT?
 

exoticSpice

Suspended
Jan 9, 2022
1,242
1,951
I disagree to some degree. Some tasks are basically fully GPU-bound, and if that's what you do, it can be worthwhile. Someone put a 6800 XT in a 5,1 and got a much better frame rate playing games than they did on the Mac Studio. So as always, it depends on what you're using it for.

That said, I agree that you generally want some reasonable balance between the CPU and GPU for a heavy array of tasks.
Yep, some tasks are fully GPU-bound, but with a 5,1's CPU the OS itself will 'feel' slow, you know, even with an SSD. CPU single-core speed is also important for a smooth experience - opening apps, web browsing - and for the CPU to keep up with the GPU.
 
Last edited:
  • Like
Reactions: ZombiePhysicist

exoticSpice

Suspended
Jan 9, 2022
1,242
1,951
Are you suggesting that the RTX 4090 will be 70-80% faster than 7900XT? For me it makes no sense at all. Perhaps these 'new' rumours are about the 7700XT rather than the 7900XT?
I would think the RTX 4090 would be 10-15% faster than a 7900XT in gaming.

In 3D, lol, do I even need to say? The 4090 will be king in 3D for a long time, until the RTX 4090 Ti or RTX 5090 comes.
RTX is just too powerful in 3D.

This time Nvidia is on a better node as well.
 

deconstruct60

macrumors G5
Mar 10, 2009
12,368
3,936
Are you suggesting that the RTX 4090 will be 70-80% faster than the 7900XT?

No, you are suggesting that. There is almost nothing in what I wrote tagging a specific performance gap between the 4090 and 7900. AMD doesn't desperately need a "4090 killer" right now; they have a greater need for market share.

In some narrow RT or ML niche that Nvidia (or Nvidia fanboys) would likely cherry-pick, that might be a decent figure. In overall aggregate mainstream performance, it highly likely is not. And to put a Mac focus on it: Apple isn't super pressed about Nvidia-hardware-specific RT or ML wins (otherwise Apple would have folded to Nvidia trying to play "tail wags the dog" games).


To me that makes no sense at all. Perhaps these 'new' rumours are about the 7700XT rather than the 7900XT?


The 7700XT is in the table in that first link I posted as a reply back in post 33. For Navi32, AMD drops the number of MCDs (from 8 to 6), lowering the cache and memory bandwidth while 'reusing' all the same MCD work (fewer chiplets to get it out the door), if using the same GCD. It could be a smaller one.

Putting out the 7700XT sooner doesn't make much sense in a context where there is a glut of AMD 6xxx and Nvidia 3xxx hitting the market. Again, that may have been the 'old' rollout order (e.g., before the crypto craze ended and Nvidia pulled the 4090 release forward). The 7900 may not be a 4090 'killer', but it is far, far more competitive with the 4090 than a 7700XT will be. The 7700XT is supposed to be around a 6900; that was great when 6900s were priced well over $1K. They aren't anymore. And some rumors point to AMD shrinking the PCIe lane width ... so a bigger impedance mismatch with an x16 PCIe v3 system.


Some other rumors (from that post) tag the 7700XT to the monolithic Navi33 implementation. (Presumably the 7800 would be the major consumer of the Navi32 package.) I'm not so sure a desktop dGPU makes much sense there either (unless AMD is chopping back the margins). The monolithic RDNA3 implementations are even less credible as "two die" candidates.

The x8 PCIe-lane Navi33 is better aligned with the new Ryzen 7000. Again, the Ryzen 7000s aren't getting huge uptake right now, ahead of the more affordable motherboards shipping in high volume. Navi33 in Q1-Q2 2023 would be far better aligned with the more mainstream AMD motherboards being available. Making it first out of the gate doesn't make much sense without a broader base to sell into. If the crypto craze and widespread high spending (not high inflation) were still going on, that might make sense - you could throw out anything in volume and folks would buy it. That isn't the market right now.


A 7950XT could easily be done with just 3D L3 cache stacking, giving 384MB of cache instead of the 7900's 192MB. Tweak some clocks up for a higher power profile and they have a different card using the same foundational parts. It is more of a gimmick product for a relatively low volume of buyers, and AMD doesn't really need that right now. They probably have a lot of dies ordered that they now need to sell (or start eating losses).

AMD could shift some Navi33 into the high-end laptop market if they do have a big perf/watt advantage. Nvidia's solution there is way down the schedule, so AMD has a long window; Intel has goofed, so they aren't a problem there either. And "used" and inventory-bloat dGPU add-in cards are not in the way in new laptops.




P.S. Perhaps your "two die" rumors were based on smashing two Navi32 dies together.

The 7950 there is "2 GCD + ? MCDs".



That is pretty doubtful. Picture from same article
[Image: Navi 31/32/33 die leak comparison]




As the Navi GCD die gets smaller, the available edge space also gets smaller. Some "seamless", "presents as a single GPU" die-to-die interconnect is going to take more space, not less. And there are still the PCIe x16 and multiple DP outputs to feed out.

I think folks are trying to apply the desktop Ryzen approach to the graphics one, and it is substantively different. AMD is not just going to throw multiple GCDs in there like CPU CCDs. While this is still generally a "hub and spoke" chiplet setup, they have largely reversed the roles on I/O: for Ryzen the I/O is in the hub and the cores are in the spokes; for these GPUs the memory I/O (not all the I/O, but the very-high-bandwidth I/O) is in the spokes and the cores are in the hub.

It isn't going to 'scale with chiplets' the same way. The GCDs change here just as the I/O hubs change between AMD's desktop and server packages, while the spokes stay common. GPU cores and CPU cores have different latency, caching, and bandwidth constraints, so pushing GPU cores off into another die is a somewhat different problem from pushing CPU cores off into another die. There have been multi-die, unified-memory CPU systems for decades; that has not been true of GPUs.


The far more affordable path for AMD is to go with the MCD chiplets without 3D cache for the 7900, then stack more cache and play with clocks for the bigger-cache card. For those with their underwear in a twist over hardware RT performance: if the 3D-stacked cache manages to hold a substantively higher percentage of the BVH data structures that spill out of the L2 caches, then RT performance will go substantially up. Not revolutionarily better, but enough to make a difference and command a high card price point, and it doesn't require any massive die-space expansion on the GCD at all.

AMD is actually 'shrinking' the L3 "Infinity Cache" with the 7000 series (because it is supposed to work more effectively with less capacity now). The 3D cache additions can probably be used to fill in the edge cases where that doesn't work quite as well and the limited RT updates don't cover; decent chance RT is one of those edge cases. But that will require some finely tuned drivers for specific applications, which they probably won't have on day one (or day 30). Shipping the 7900, then getting the bugs out, and only then tweaking for the "bigger cache" 7950 makes far, far more sense software-wise.
 
  • Like
Reactions: Xenobius

deconstruct60

macrumors G5
Mar 10, 2009
12,368
3,936
Perhaps these 'new' rumours are about the 7700XT rather than the 7900XT?


P.P.S. If the Navi33 is a reoptimized Navi21 die, then there is a better chance that not all the Infinity Fabric links got snipped off (i.e., one left). That might enable the legacy AMD top-end card setup of a dual-GPU package: AMD doing a "W6800X Duo" variation for the high-end creator market (two GPU packages in one 3.5-wide slot instead of two older one-package cards in two 2-wide slots), if they can more easily stuff two Navi33 dies onto a 400+W card. Part of the 4090's appeal to some folks will be being able to move from two-card to one-card setups. AMD can do the same thing in a different way, in addition to the 7900. And there would be an AMD "Pro" card variation as well.

That might interest Apple if the board redesign costs from the W6800X module they already had were relatively low and the driver updates didn't cost much either (package size about the same; memory layout about the same with only adjustments to wire speed, if required; no major overall power-consumption changes, so reuse the heat sink and most of the power management). If it is a refactored Navi21 die, then the driver changes should be far more limited than a switch to the MCD+GCD architecture. In short, if there is a very cheap path to a more perf/W-efficient "6800 GPU" update in basically the same MPX module, then Apple might take it for the MP 2019.

The substantively different new GPU architecture probably won't be a cheap upgrade for Apple.




P.S. If Navi33 has x8 PCIe v5, then a card with a switch that splits x16 PCIe v5 into two x8 PCIe v5 streams will feed both GPU packages just fine. For 2023-era CPUs in the Windows market, that won't be a problem. It is a bit of a mismatch with an x16 PCIe v3 MP 2019: not so hot if trying to feed the maximum number of Afterburner outputs into the card, but for tasks that are 90+% on-card computation, probably not much of a hit in most cases. They probably wouldn't completely stop selling the W6800s right away (and sharing a substantive number of subcomponents across both modules helps defray new-card costs if they sell both).
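To put rough numbers on that mismatch, here is a minimal back-of-envelope sketch (Python) using the standard per-lane PCIe rates; the switch-split card itself is the hypothetical discussed above:

```python
# Approximate usable one-way PCIe bandwidth per lane, in GB/s, after
# encoding overhead (v3: 8 GT/s, v4: 16 GT/s, v5: 32 GT/s, all 128b/130b).
PER_LANE_GBS = {3: 0.985, 4: 1.969, 5: 3.938}

def link_bw(gen: int, lanes: int) -> float:
    """Usable one-way bandwidth of a PCIe link."""
    return PER_LANE_GBS[gen] * lanes

host = link_bw(3, 16)    # MP 2019 slot: ~15.8 GB/s
per_gpu = link_bw(5, 8)  # each hypothetical x8 v5 downstream link: ~31.5 GB/s

print(f"x16 v3 host link: {host:.1f} GB/s")
print(f"each x8 v5 GPU link: {per_gpu:.1f} GB/s "
      f"({2 * per_gpu:.1f} GB/s aggregate downstream)")
# Behind a modern x16 v5 host (~63 GB/s) the split feeds both GPUs at full
# rate. Behind the MP 2019's x16 v3 slot, both GPUs share ~15.8 GB/s: a
# ceiling that matters for ingest-heavy work (the Afterburner case), not
# for jobs that stay 90+% on-card.
```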
 
Last edited:

Matty_TypeR

macrumors 6502a
Oct 1, 2016
638
548
UK
The 4090 at 4K doesn't take too much of a hit with PCIe v3 over v4 - even PCIe v1 can see improvements. Let's hope AMD's 7000 series can do the same, if Apple allows it. A 2% loss in performance with v3 over v4 - that's not bad.

[Images: 4K benchmark charts - Assassin's Creed Valhalla, Cyberpunk 2077, Doom Eternal, F1 2022, and relative performance]
 
Last edited:

4wdwrx

macrumors regular
Jul 30, 2012
116
26
I disagree to some degree. Some tasks are basically fully gpu bound, and if that’s what you do, it can be worthwhile. Someone put A 6800xt in A 5,1 and got a much better frame rate playing games than they did on the Mac studio. So as always, depends on what you’re using it for.

That said, I agree, that you generally want some reasonable balance between the cpu and gpu for a heavy array of asks.
The same points can be made about Apple Silicon.

Since the max-configuration Mac Pro is brought up in contrast to Apple Silicon, I think the CPU bottleneck is a valid point.

Even though the 7,1 can physically take the upgrade, unfortunately it cannot maximize the card. Unless Apple surprises us with a new motherboard and CPU upgrade kit :).

The 4090 will only work under Windows, hopefully with no issues. My 3090 FE works great, but other vendors' cards may cause stability issues.
 

LEOMODE

macrumors 6502a
Jun 14, 2009
557
56
Southern California
Well, I installed the 4090 FE today - just waiting for a PSU cable to turn up (post is slow at the moment), and I will post up some results. It is a mammoth card lol


Is it bigger than the 3090 FE? How did you hook up the power with the Belkin, and how is the power draw under load? I will also be installing it alongside a 6900XT.
 