
macman4789

macrumors 6502
Original poster
Jul 12, 2007
318
22
Hi,

I wondered if anyone could weigh in on this query. With the M2 Pro being a ‘pro’-class chip with an integrated GPU, I wondered: how ‘pro’ is it really? The Max chips are obviously extremely high end and are pitched at the most demanding tasks - sort of ‘you’ll know if you need a Max.’ The standard M2s are no slouch and can handle tasks such as 4K video editing quite competently, so my question really is: where do the Pro chips sit?

What sort of tasks/usage are they designed for? It’s hard to compare them directly to PC GPUs; I’ve tried to research this, but it’s tricky given the differences in power/wattage and integrated vs discrete designs.

Is there a level that could be identified? I know it’s not the equivalent of an RTX 4090, but equally it’s stronger than the old AMD discrete GPUs from the 2019 Intel MacBook Pros, I think?

I’m guessing it should fly through photo editing and video editing tasks up to 4K?

Thanks
 

crsh1976

macrumors 68000
Jun 13, 2011
1,575
1,779
Simple (stupid) answer: the base chip is able to do the same stuff as the Pro or the Max or the Ultra, it's just not the fastest at it.

As with any other CPU and GPU, you get a faster chip to get things done faster - if that's a requirement for you, your professional usage, or gaming - and it comes down to whether the extra money the faster chips cost makes sense for your needs.
 

padams35

macrumors 6502
Nov 10, 2016
468
302
Tech reports list the Apple M2 GPU as having a core boost up to 1.4 GHz with 8x TMUs and 4x ROPs per core. Apple Silicon is optimized for compute and 2D image/video processing, where it performs about like an AMD Navi (RX 5000 series) GPU with comparable specs. I.e., the M2 Pro (16-core GPU) is a bit better than an RX 5500 XT, while the 19-core is almost as good as an RX 5700 XT.

However, 3D rendering/gaming performance is slower than the compute scores would suggest, with the M2 Pro 16-core benchmarking about the same as the 2019 laptop Radeon Pro 5500M in games.
 

macman4789

macrumors 6502
Original poster
Jul 12, 2007
318
22
Simple (stupid) answer: the base chip is able to do the same stuff as the Pro or the Max or the Ultra, it's just not the fastest at it. …
That’s an interesting way of looking at it; I didn’t think of it that way. Would there not be certain tasks that the base M2 couldn’t do but the Max or Ultra could? Editing 8K, for example?
 

macman4789

macrumors 6502
Original poster
Jul 12, 2007
318
22
Tech reports list the Apple M2 GPU as having a core boost up to 1.4 GHz with 8x TMUs and 4x ROPs per core. …
Thanks for this. When you say it is optimized for compute and image/video processing, are you saying they perform as well as, or better than, top-end PC GPUs? The cards you quoted, e.g. the RX 5700 XT, were in the old Intel iMacs around 2018-19 if I recall correctly? What confuses me is that those cards are five-ish years old and didn’t really perform as fluidly as the M2 Pro does for things like Photoshop/Lightroom or video editing?

I know games are a bit of a niche case, as there’s not much else that taxes the GPU and CPU in the same way, plus there are all the layers of translation going on.
 

ozziegn

macrumors 65816
Aug 16, 2007
1,294
831
Central FL Area
This is what amazes me about the M2 Pro. I just bought one and decided to try one of my favorite games, Half-Life 2 (from the early 2000s). Back then I was using a computer I’d built with the highest-end video card of the day, and I used to run the game at 1600 resolution with 4x AA enabled; it ran "okay", with typical frame rates around 90fps. Fast forward to today with my M2 Pro Mini: I installed the game and it runs flawlessly at full 2500 resolution with 4x AA turned on in the game’s video options, at over 200fps. Simply amazing how much power this machine has.
 

picpicmac

macrumors 65816
Aug 10, 2023
1,034
1,448
With the M2 Pro being a ‘pro’-class chip with an integrated GPU, I wondered: how ‘pro’ is it really?
Here I was all set to write a long diatribe on the emergence of latter-day classism as it has manifested in the computer-sales world... but then I figured it would just bore people, so let's move on:

What sort of tasks/usage are they designed for?
Apple assigns the word "pro" in two different ways:
1) to upcharge an item (see my mini-diatribe above);
2) to identify a computer that is able to drive more than two displays.

Would there not be certain tasks that the base M2 couldn’t do but the Max or Ultra could?

  • The M2 can only drive two displays (without using tricks like DisplayLink).
  • The M2 Pro can drive three displays. This matters because when the M2 Pro is put inside a laptop, that laptop can still drive two external displays alongside the built-in one.
  • The Max and Ultra chips can drive even more displays.

The graphics processing units are added to handle the demands of parallel processing with multiple displays while running graphics-intensive applications.
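
(If you want to see what your own machine is driving right now, a quick Swift sketch like the one below lists the active displays. It reports what's currently connected, not the chip's maximum.)

    import AppKit

    // List the displays this Mac is currently driving.
    // Note: this shows what's attached now, not the SoC's maximum.
    for screen in NSScreen.screens {
        let size = screen.frame.size
        print("\(screen.localizedName): \(Int(size.width)) x \(Int(size.height)) points")
    }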
 

crsh1976

macrumors 68000
Jun 13, 2011
1,575
1,779
Would there not be certain tasks that the base M2 couldn’t do but the Max or Ultra could? Editing 8K, for example?
In terms of tasks, no - no difference at all. It's all about performance.

Otherwise, as picpicmac detailed, different classes of chips come with different features: the number of TB4 channels, the number of external screens supported and the maximum resolutions at which they can be driven, etc.
 

MediaGary

macrumors member
May 30, 2022
38
22
I wondered if anyone could weigh in on this query. … The standard M2s are no slouch and can handle tasks such as 4K video editing quite competently, so my question really is: where do the Pro chips sit?
There's a wide range of video tasks that a GPU is expected to do. So many things fall within the arena of '4K video editing' (paraphrasing) that it's really necessary to check task by task. I was pleasantly surprised to see how well the M2 Mini Pro (base 6P-4E/16-GPU model) managed the very demanding task of 4K video noise reduction in DaVinci Resolve.

I had accidentally allowed the maximum auto-ISO of a video shoot to float up to 25600, rather than a reasonable 6400. With that, I had 2.5 hours of footage from each of four 4K cameras to process through noise reduction. My Win10 machine, with a 12GB RTX 3080 Ti, managed the job at an average speed of ~16fps. A quick test of the same project task on the M2 Mini Pro ran at ~7fps.

While the RTX 3080 Ti was 2.3x faster than the M2 Mini Pro's GPU, the FP32 ratings of each would have led me to expect a 5.5x difference in performance between them, not 2.3x.
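
For anyone curious, the back-of-envelope math looks roughly like this (the TFLOPS figures are commonly cited estimates, not official specs, so treat the ratio as approximate):

    // Rough FP32 comparison; TFLOPS figures are commonly cited
    // estimates, not official numbers.
    let rtx3080Ti = 34.1                  // RTX 3080 Ti, FP32 TFLOPS
    let m2Pro16 = 0.36 * 16               // ~0.36 TFLOPS per M2 GPU core x 16 cores
    let paperRatio = rtx3080Ti / m2Pro16  // ~5.9x on paper
    let observed = 16.0 / 7.0             // ~2.3x measured in Resolve noise reduction
    print(paperRatio, observed)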

Normal color correction and grading adjustments feel pretty similar between the two machines, though while using the Mini I have had to disable some nodes during editing to keep responsiveness up when navigating through edits. On the other hand, the hardware decoding for 10-bit 4:2:2 H.264 in the M2 Mini Pro far, far outperforms the software decoding on the Win10 machine's 3950X, so the Win10 machine is mostly relegated to specific aspects of video media preparation, emergencies, and of course the comfort of having a ready-to-rock backup for meeting deadlines.
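
(If you want to check what a given Mac will hardware-decode, VideoToolbox can tell you per codec - a small sketch below. Note it reports per codec, not per profile, so it won't single out 10-bit 4:2:2 specifically.)

    import VideoToolbox

    // Ask VideoToolbox whether this machine can hardware-decode a codec.
    // This is per codec, not per profile, so it won't distinguish
    // 10-bit 4:2:2 H.264 from 8-bit 4:2:0.
    print("H.264:", VTIsHardwareDecodeSupported(kCMVideoCodecType_H264))
    print("HEVC: ", VTIsHardwareDecodeSupported(kCMVideoCodecType_HEVC))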
 

macman4789

macrumors 6502
Original poster
Jul 12, 2007
318
22
There's a wide range of video tasks that a GPU is expected to do. … A quick test of the same project task on the M2 Mini Pro ran at ~7fps. …
That’s interesting, so you use it as your main editing machine and have the PC as a backup? It would be interesting to see how performance compares between an M2 Max and the 3080.

Do you discern any difference between the M2 Pro and 3080 apart from noise reduction?
 

macman4789

macrumors 6502
Original poster
Jul 12, 2007
318
22
Apple assigns the word "pro" in two different ways: 1) to upcharge an item; 2) to identify a computer that is able to drive more than two displays. …
Thank you for this response. Apple does well to ‘up-sell’ people to the next-highest chip! The second point you make about the number of displays is a clear way to differentiate the chips, but my understanding is that’s not the only difference? I.e. greater graphical processing power etc.?
 

picpicmac

macrumors 65816
Aug 10, 2023
1,034
1,448
But my understanding is that’s not the only difference? I.e. greater graphical processing power etc.?

Whether software makes use of multiple processors is up to the developer. Surprisingly, many applications are not optimized to take full advantage of what these SoCs can do.

"GPU" is a marketing term, spawned long ago in the early days of game consoles, by Sony and then Nvidia.

In the old days multiprocessing units were just called out for specific tasks they did.

In the case of Apple's own M series, the SoCs have display processing units on them: the base M series has two, the "Pro" has three, and the "Max" has four. They are what drive the displays, either over TB or HDMI; they control the actual, physical display.

The M series also has a set of "GPU" cores that do the heavy lifting of creating the virtual display. Graphics processing involves moving large blocks of data (representing objects to be seen on a display), color calculations, sprites, etc., including the floating-point operations needed for such. Because these GPU designs include operations for manipulating large arrays of data, they are sometimes used for purposes other than graphics.

But unless the developer exploits these cores, the user may not enjoy all that the hardware can deliver. How well developers use Metal is always a question in my mind.
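
To make that concrete, here is a minimal sketch of what "exploiting" the GPU cores for non-graphics work looks like via Metal's compute API. The kernel and names are illustrative, not from any particular app:

    import Metal

    // Minimal Metal compute sketch: doubling an array on the GPU.
    // The kernel name "scaleKernel" is illustrative.
    let source = """
    #include <metal_stdlib>
    using namespace metal;
    kernel void scaleKernel(device float *data [[buffer(0)]],
                            uint id [[thread_position_in_grid]]) {
        data[id] *= 2.0;
    }
    """

    let device = MTLCreateSystemDefaultDevice()!   // the SoC's GPU
    let library = try! device.makeLibrary(source: source, options: nil)
    let pipeline = try! device.makeComputePipelineState(
        function: library.makeFunction(name: "scaleKernel")!)

    var values: [Float] = (0..<1024).map(Float.init)
    let buffer = device.makeBuffer(bytes: &values,
                                   length: values.count * MemoryLayout<Float>.stride,
                                   options: [])!

    let queue = device.makeCommandQueue()!
    let cmd = queue.makeCommandBuffer()!
    let enc = cmd.makeComputeCommandEncoder()!
    enc.setComputePipelineState(pipeline)
    enc.setBuffer(buffer, offset: 0, index: 0)
    enc.dispatchThreads(MTLSize(width: values.count, height: 1, depth: 1),
                        threadsPerThreadgroup: MTLSize(width: 64, height: 1, depth: 1))
    enc.endEncoding()
    cmd.commit()
    cmd.waitUntilCompleted()  // results are now in `buffer`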
 

MediaGary

macrumors member
May 30, 2022
38
22
… Do you discern any difference between the M2 Pro and 3080 apart from noise reduction?
Apart from noise reduction, most other color-related tasks are lightweight for both the 3080 Ti and the M2's GPU. I'll be experimenting with 'Speed Warp' and the AI-related 'Depth Map'; both of those functions are noted for their heavyweight requirements.

In general, if I were editing ProRes, BRAW, or All-I H.264 on either machine, the Win10 machine would have the performance edge because of its higher rated FP32 TFLOPS and VRAM bandwidth (~910 GB/sec on the 3080 Ti). However, since I'm generally working with 5-camera multicam, the smaller file sizes of H.264 (the Lumix GH5S doesn't do H.265) are necessary to keep overall project sizes "reasonable", and that's why the M2 is the winner for my workflow.

I doubt I have any way to do formal testing that would be meaningful for users who don't share my specific post-production effects requirements.
 