
Torty

macrumors 65816
Oct 16, 2013
1,128
855
People still use vintage M3 CPUs? Can't imagine anything productive can be done with those anymore, but I guess you can hold on to them for sentimental reasons.
Haha. But let's be honest: it isn't the latest and greatest anymore. If you get an M3 now, you get last gen's chip.
 
  • Like
Reactions: Dr_Charles_Forbin

Torty

macrumors 65816
Oct 16, 2013
1,128
855
It's like at a bakery. Sure, you can eat yesterday's bread and it will be fine, but you're not willing to pay the same price as for today's fresh bread.
 
  • Haha
Reactions: NetMage

Dr_Charles_Forbin

Contributor
May 11, 2016
411
177
Is anyone else sick of these damn benchmarks? You know which benchmark counts? Whether it meets your needs for the foreseeable future. What's the quality like? I don't care if the M4 is 8x as fast on some benchmark unless I'm doing video processing or something else that's CPU-intensive. IMHO, the reason they can combine the CPU and GPU on one chip is that they typically don't get used simultaneously. They've turned what was an objective measure into marketing. Sorry for ranting; I'm just sick of people making so much of it, and I can't help but think of the VW diesel scandal, where they changed the configuration for the test. When they stop pushing security updates for my 2015 MBP, I'm probably going with a 15" M1 Air. It's proven itself reliable and will suit my needs.
 

eno12

macrumors regular
Jun 6, 2005
123
126
Is anyone else sick of these damn benchmarks? You know which benchmark counts? Whether it meets your needs for the foreseeable future. What's the quality like? I don't care if the M4 is 8x as fast on some benchmark unless I'm doing video processing or something else that's CPU-intensive. IMHO, the reason they can combine the CPU and GPU on one chip is that they typically don't get used simultaneously. They've turned what was an objective measure into marketing. Sorry for ranting; I'm just sick of people making so much of it, and I can't help but think of the VW diesel scandal, where they changed the configuration for the test. When they stop pushing security updates for my 2015 MBP, I'm probably going with a 15" M1 Air. It's proven itself reliable and will suit my needs.
It is the best way to see how the processors compare generation to generation. A lot of us use the raw compute and care about the relative performance. If the M1 works for you, that's fine; just get that generation.
 

Dr_Charles_Forbin

Contributor
May 11, 2016
411
177
It is the best way to see how the processors compare generation to generation. A lot of us use the raw compute and care about the relative performance. If the M1 works for you, that's fine; just get that generation.
It is, but what is it really telling you? Are you using that capacity? If so, that's fine.
 

ApplesAreSweet&Sour

macrumors 68000
Sep 18, 2018
1,962
3,583
I've always wondered: does performance really increase from one generation of iPad Pro to the next if nobody runs a benchmark?

I mean, apart from maybe getting hamstrung by 8GB of RAM vs. 16GB in Final Cut Pro for iPad 2, how would one benefit from moving from an older model to something as OP as the M4 on an iPad?

An iPhone Pro Plus XXL of an iPad has no need for all of this power.

I have to assume this is just Apple flexing on competitors, as we're seeing more Windows ARM laptops dropping this year.

Such an odd move to put so much power in something so limited.

Are we now also getting the A18 Pro chip in the iPhone 16 Pro?

Makes no sense.
 

Dr_Charles_Forbin

Contributor
May 11, 2016
411
177
How else can Apple justify selling you an iPad for less than an MBA? They have to sell you on something. Don't get me wrong: if you're in the segment of the market that benefits from more horsepower, like a graphic designer, go for it and see if it makes a difference. I have machine envy too; I'm certainly not immune to it. My gen-9 iPad suits my needs, but I'd love to upgrade to a 12.9" display. I'd love a Mac Pro instead of my M2 Ultra Studio, but I can't cost-justify it.
 

chucker23n1

macrumors G3
Dec 7, 2014
8,608
11,420
But don’t we already know the speed of the Snapdragon X Elite?

We know benchmark results from preproduction devices, sure. We don't really know "if you put an SXE in a chassis similar to a 13-inch MacBook Air or 14-inch MacBook Pro, how will performance compare to the M3 or M3 Pro?"

Isn’t it in between the M2 and M3?

Early benchmarks seem to suggest: roughly equal to the M1 in single-threaded, roughly equal to the M4 or M3 Pro in multi-threaded. The latter sounds good, but it seems the SXE draws power more akin to the M3 Pro to achieve that, so if you were to put it in an Air-like chassis, it would probably show weaker performance.
 
  • Like
Reactions: Tagbert

chucker23n1

macrumors G3
Dec 7, 2014
8,608
11,420

I think Andy's take that "SME doesn't count because the M1 doesn't have it" doesn't make sense. SME is there for the taking as a new ISA feature, and it looks like Apple has implemented it (and perhaps some more ARMv9 stuff) in M4. That's what newer CPU designs are all about. Granted, at first, many apps won't take advantage of it, but such is the march of progress.
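For what it's worth, apps and benchmarks can probe for features like this at runtime. Here's a minimal sketch in C; I'm assuming Apple exposes SME the way it exposes its other optional ISA features, via a hw.optional.arm.FEAT_* sysctl, so the exact key name is my guess:

```c
/* Runtime check for an optional ISA feature on Apple silicon.
 * Assumption: SME is reported via a hw.optional.arm.FEAT_SME sysctl,
 * following Apple's convention for its other FEAT_* flags. */
#include <stdio.h>
#include <sys/sysctl.h>

static int has_feature(const char *name) {
    int value = 0;
    size_t size = sizeof(value);
    /* sysctlbyname fails if the key doesn't exist (older chips/OS versions) */
    if (sysctlbyname(name, &value, &size, NULL, 0) != 0)
        return 0;
    return value;
}

int main(void) {
    printf("SME support: %s\n",
           has_feature("hw.optional.arm.FEAT_SME") ? "yes" : "no");
    return 0;
}
```

On chips without the feature the key simply isn't there, which is exactly how apps stay backward compatible while newer designs move forward.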
 

Jumpthesnark

macrumors 65816
Apr 24, 2022
1,078
4,682
California
Does that mean the Mac mini will skip the M3 and go straight to the M4?
No one here knows the answer to that, as we don't work for Apple. And if someone does, they're not saying. ;)

But yes, it definitely seems like the Mac desktops that never had an M3 chip would go directly to M4 chips.

A shame for those of us who have been waiting for an M3 Pro or Max (for example) to upgrade our desktop Macs, but if the delay isn't bad and the machines are worth the wait (and our current hardware stays alive), then it might end up working in our favor.
 

seek3r

macrumors 68020
Aug 16, 2010
2,334
3,356
Crappy argument. There's nothing inherently wrong with increasing clock speed except when it starts to hit thermal walls, but Apple has continuously improved perf/watt, so a higher clock for the M4 at actually lower power consumption than the lower-clocked M1 is fine. And the frequency stepping is good enough that it doesn't clock up unless it needs to. IPC isn't the only way to increase performance without going down the thermal-limit route.
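To put rough numbers on the perf/watt point: CMOS dynamic power scales roughly with C·V²·f, so a process that lets you drop voltage can absorb a clock bump and still come out ahead. A toy calculation (the scaling factors are made up, not Apple's real figures):

```c
/* Toy CMOS dynamic-power estimate: P ~ C * V^2 * f.
 * The voltage/frequency factors below are hypothetical. */
#include <stdio.h>

int main(void) {
    double v_scale = 0.85;  /* hypothetical: 15% lower voltage on a newer node */
    double f_scale = 1.10;  /* hypothetical: 10% higher clock */
    double p_scale = v_scale * v_scale * f_scale;  /* capacitance held constant */
    printf("relative dynamic power: %.2fx baseline\n", p_scale);  /* prints 0.79x */
    return 0;
}
```

A 10% faster clock at roughly 20% less power, because voltage enters squared.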
 

Alameda

macrumors 65816
Jun 22, 2012
1,004
609


Benchmarks for the new M4 iPad Pro models have popped up on Geekbench, giving us an idea of how much faster Apple's second-generation 3-nanometer chips are compared to the M3, M2, and other prior-generation Apple silicon chips.


The 10-core variant of the M4 chip earned an average single-core score of 3,695 and an average multi-core score of 14,550 across 10 benchmarks. When it comes to single-core performance, the M4 is faster than the M3 Max MacBook Pro, and it's comparable to the M2 Max in multi-core performance.

For context, here are the single-core and multi-core scores of prior chips (all max CPU/GPU variants):
  • M4 - 3,695/14,550
  • A17 Pro - 2,908/7,234
  • M2 - 2,540/9,360
  • M2 Pro - 2,651/14,295
  • M2 Max - 2,802/14,800
  • M1 - 2,272/8,208
  • M3 - 3,087/11,702
  • M3 Pro - 3,112/15,286
  • M3 Max - 3,128/20,957
Compared to the M2 in the prior version of the iPad Pro, the M4 is 46 percent faster in single-core performance and 55 percent faster in multi-core performance. Apple didn't use the M3 in an iPad, but the M4 is up to 24 percent faster than the M3.
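Those percentages follow directly from the scores above, give or take rounding; a quick sanity check:

```c
/* Recompute the percent-faster claims from the Geekbench averages above. */
#include <stdio.h>

static double pct_faster(double a, double b) { return (a / b - 1.0) * 100.0; }

int main(void) {
    double m4_sc = 3695, m4_mc = 14550;
    double m3_sc = 3087, m3_mc = 11702;
    double m2_sc = 2540, m2_mc = 9360;
    printf("M4 vs M2: %.0f%% single-core, %.0f%% multi-core\n",
           pct_faster(m4_sc, m2_sc), pct_faster(m4_mc, m2_mc));  /* 45%, 55% */
    printf("M4 vs M3: %.0f%% single-core, %.0f%% multi-core\n",
           pct_faster(m4_sc, m3_sc), pct_faster(m4_mc, m3_mc));  /* 20%, 24% */
    return 0;
}
```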

Apple said that the M4 delivers up to 1.5x faster CPU performance than the M2 in the prior-generation iPad Pro, which is accurate based on the benchmarks we've seen so far.

Apple plans to bring the M4 chip family to all of its products across 2024 and 2025, with the first M4 Macs slated for later this year.

Article Link: iPad Pro's M4 Chip Outperforms M3 by Up to 25%
Is it 25% faster at actually doing anything useful, or just in displaying benchmark scores?
 
  • Like
Reactions: Chuckeee

Dr_Charles_Forbin

Contributor
May 11, 2016
411
177
No one here knows the answer to that, as we don't work for Apple. And if someone does, they're not saying. ;)

But yes, it definitely seems like the Mac desktops that never had an M3 chip would go directly to M4 chips.

A shame for those of us who have been waiting for an M3 Pro or Max (for example) to upgrade our desktop Macs, but if the delay isn't bad and the machines are worth the wait (and our current hardware stays alive), then that might end up being in our favor.
My M2 Ultra Studio more than suits my needs. Sometimes you just have to pull the trigger. Then again, I was coming from a 2014 mini.
 

Dr_Charles_Forbin

Contributor
May 11, 2016
411
177
Crappy argument. There's nothing inherently wrong with increasing clock speed except when it starts to hit thermal walls, but Apple has continuously improved perf/watt, so a higher clock for the M4 at actually lower power consumption than the lower-clocked M1 is fine. And the frequency stepping is good enough that it doesn't clock up unless it needs to. IPC isn't the only way to increase performance without going down the thermal-limit route.
OK, I'm clearly not an engineer; my last dealings with this were as an undergrad in a Computer Architecture class. But what I'm reading is that the processor doesn't clock up until needed. You're not at that level all the time, though, so who knows what the thermal output is if it's constantly clocked up. Which is again why benchmarks are like statistics: they can't be taken at face value without context.
 

MrGimper

macrumors G3
Sep 22, 2012
8,653
12,211
Andover, UK
I think Andy's take that "SME doesn't count because the M1 doesn't have it" doesn't make sense. SME is there for the taking as a new ISA feature, and it looks like Apple has implemented it (and perhaps some more ARMv9 stuff) in M4. That's what newer CPU designs are all about. Granted, at first, many apps won't take advantage of it, but such is the march of progress.
How useful is “object detection” really going to be, though? Genuine question.
 

chucker23n1

macrumors G3
Dec 7, 2014
8,608
11,420
How useful is “object detection” really going to be, though? Genuine question.

Geekbench has to cherry-pick a bunch of tasks that are 1) things people do, 2) things that are compute-intensive, 3) things that will realistically get faster over time.

Back in the 1990s, benchmarks would do things like “draw a hundred rectangles and other shapes”, “OK, now add a gradient”, “now apply Gaussian blur to the result”, and that was useful because it would take long enough that you could actually watch your computer do it. But today, an iPhone can apply a Gaussian blur to an entire photo in real time, thousands of times a second, to the point where the camera app just applies it live and calls it “portrait mode”. So that’s reason A why it’s no longer a useful benchmark. Reason B follows from that: CPU vendors have no incentive to optimize it further because it’s already absurdly fast.

So instead, they optimize for other compute-intensive tasks, and while object detection often won’t even run on the CPU anyway since we have a GPU and NPU now, that is still in principle an area that’s slightly slow and somewhat useful, so vendors have been making some strides, and benchmarks have been measuring it. Which gets us back to portrait mode: it’s not just a blur across the entire image, of course. It first detects your face and hair, or that of your pet, and then blurs everything else.

And that’s just one area where millions of people use object detection every day.
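If you've never seen what those old benchmark kernels looked like, here's a toy 1-D Gaussian blur in C, just to illustrate the kind of loop they timed (real image blurs are 2-D and usually separable):

```c
/* Toy 1-D Gaussian blur: build a normalized 5-tap kernel and convolve.
 * Illustrative only; real image blurs work in 2-D, often separably. */
#include <stdio.h>
#include <math.h>

#define N 16

int main(void) {
    double kernel[5], sum = 0.0, sigma = 1.0;
    double src[N], dst[N];

    /* normalized 5-tap Gaussian kernel */
    for (int i = -2; i <= 2; i++) {
        kernel[i + 2] = exp(-(double)(i * i) / (2.0 * sigma * sigma));
        sum += kernel[i + 2];
    }
    for (int i = 0; i < 5; i++)
        kernel[i] /= sum;

    /* an impulse "image": one bright pixel in the middle */
    for (int i = 0; i < N; i++)
        src[i] = (i == N / 2) ? 255.0 : 0.0;

    /* convolve, clamping indices at the edges */
    for (int i = 0; i < N; i++) {
        dst[i] = 0.0;
        for (int k = -2; k <= 2; k++) {
            int j = i + k;
            if (j < 0) j = 0;
            if (j >= N) j = N - 1;
            dst[i] += kernel[k + 2] * src[j];
        }
    }

    for (int i = 0; i < N; i++)
        printf("%.1f ", dst[i]);  /* the impulse spreads into a bell shape */
    printf("\n");
    return 0;
}
```

A 1990s CPU took visible time to run this over a full image; today it's a rounding error, which is why benchmark suites moved on to heavier tasks like object detection.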
 

MrGimper

macrumors G3
Sep 22, 2012
8,653
12,211
Andover, UK
Geekbench has to cherry-pick a bunch of tasks that are 1) things people do, 2) things that are compute-intensive, 3) things that will realistically get faster over time.

Back in the 1990s, benchmarks would do things like “draw a hundred rectangles and other shapes”, “OK, now add a gradient”, “now apply Gaussian blur to the result”, and that was useful because it would take long enough that you could actually watch your computer do it. But today, an iPhone can apply a Gaussian blur to an entire photo in real time, thousands of times a second, to the point where the camera app just applies it live and calls it “portrait mode”. So that’s reason A why it’s no longer a useful benchmark. Reason B follows from that: CPU vendors have no incentive to optimize it further because it’s already absurdly fast.

So instead, they optimize for other compute-intensive tasks, and while object detection often won’t even run on the CPU anyway since we have a GPU and NPU now, that is still in principle an area that’s slightly slow and somewhat useful, so vendors have been making some strides, and benchmarks have been measuring it. Which gets us back to portrait mode: it’s not just a blur across the entire image, of course. It first detects your face and hair, or that of your pet, and then blurs everything else.

And that’s just one area where millions of people use object detection every day.
Thank you for such a thorough reply! Makes total sense.
 
  • Like
Reactions: chucker23n1

DeepIn2U

macrumors G5
May 30, 2002
12,899
6,909
Toronto, Ontario, Canada
Cue all the "it's too much power for such a limited OS" comments.
Exactly! Beyond games and the specifics that programmers can use (via VS Basic, Xcode and the like), there's nothing that end users will have direct contact with or benefit from.

Pity there really isn't anything that takes advantage of all that power on the iPad "Pro" that you don't have to pay a subscription for.
For end users: games, where AI or similar adjusts the opponent's level of difficulty or makes tweaks that 'seem' random. Music producers and artists using specialized applications really won't get first-hand use of this chip's AI prowess.

The fact that Apple took only 30 minutes, with less than 10 minutes of that spent highlighting the AI/NPU/Neural Engine actually being used, really goes to show just how limiting iPadOS is. Those who deny this are oblivious to the value of the dollars they're spending.

Just because Apple 'claims' such and such doesn't mean it really is so, especially since they did NOT show such metrics in real-world use! That alone should be the consumer's eye-opener (or AI-opener, lol).
Just imagine M4 Pro, M4 Max, and M4 Ultra benchmarks.

Yeah, benchmarks that are meaningful in a desktop OS that can actually take advantage of such chips.
 
  • Haha
Reactions: NetMage

Confused-User

macrumors 6502a
Oct 14, 2014
584
625
This post is definitely a pile of crap. It's full of lies, half-truths, misunderstandings, and misdirections. I, leman, name99, and some others have written extensively in multiple threads here over the past week about this, so I'm not going to repeat it all here. I will just summarize: big clock boosts are impressive design changes by themselves, and they require what would otherwise be seen as significant IPC increases to offset other inefficiencies that come up as you boost clocks.

Apple consistently goes after low-hanging fruit, while also strategically working on longer-term projects. Clocks were the low-hanging fruit these last few years. They may still be for another generation or two, or perhaps we're at the end of the big jumps. Thinking that Apple can't do anything else is risible.
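The arithmetic behind that point is simple: delivered performance is roughly IPC × clock, so a big clock jump with even modest IPC gains is a real win, and just holding IPC flat while clocking up is itself hard design work. A back-of-envelope sketch with made-up numbers:

```c
/* Back-of-envelope: performance ~ IPC * clock.
 * All numbers here are hypothetical, just to show the tradeoff. */
#include <stdio.h>

int main(void) {
    double ipc_old = 1.00, clk_old = 3.2;  /* hypothetical baseline (GHz) */
    double ipc_new = 1.05, clk_new = 4.4;  /* modest IPC gain, big clock gain */
    double speedup = (ipc_new * clk_new) / (ipc_old * clk_old);
    printf("overall speedup: %.2fx\n", speedup);  /* prints 1.44x */
    return 0;
}
```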
 

Aries79

macrumors member
Jun 24, 2010
54
85
IMHO, the reason they can combine CPU and GPU on one chip is that they typically don’t get used simultaneously.

There are many situations in which they are both used simultaneously. The most performance-hungry app I use maxes out a single CPU core while also using the GPU, so both the GPU benchmarks and the single-core benchmarks are important to me.
 
  • Like
Reactions: NetMage