
transpo1

macrumors 65816
Jul 15, 2010
1,030
1,707
Most people don't like Macs to begin with, so how can they like the notch? Macs are barely outselling iPads; that's how badly Macs are doing.

And what rational human being would prefer less screen real estate with a big-ass notch? You know it's not true.

If the 16" M1 Max MacBook Pro is a "professional" machine, then more screen real estate > a crappy onboard 1080p camera, any day of the week. "Professionals" who need a good camera would use a dedicated, high-quality camera.
I'm a professional, and I never want to use a separate webcam for video calls again unless I'm filming something for broadcast or exhibition. And I'm glad Apple used the notch, because the 1080p quality of the new cam is pretty good. If you didn't have a notch, you'd have a whole bar along the top of the display, which would increase the size/weight…
 

ZipZilla

macrumors 6502
Dec 7, 2003
440
611
Will we get iMacs with big screens, computers that aren't gimped at 8GB, and full screens with no notch?
 

TechnoMonk

macrumors 68000
Oct 15, 2022
1,917
2,767
They'd better release the M4 Max with 256 GB support. I might upgrade from my M1 Max MBP16 a year early if the RAM goes up, and if it can match 50-70% of an Nvidia XX90 GPU.
 

transpo1

macrumors 65816
Jul 15, 2010
1,030
1,707
They should have just blacked out the status bar to match the notch and make it look seamless. Apple hardware has been killing it; it's just that the software has been garbage and uninspiring since they got rid of Forstall.
But then you’d have less screen real estate. They do black out the status bar when you’re browsing the web.
 

jdawgnoonan

macrumors 6502a
Apr 22, 2007
683
973
Jefferson, WI
Apple is so behind on AI. It's even looking like they are falling further behind, not catching up. People keep saying Apple is often late but better, as if that's some sort of vindication. But that's not even true: Apple does not always come up with something better. What I want is for Apple to be consistently earlier and better.
You are saying that they are behind based on Copilot? Hahaha. Copilot is mainly annoying and has pushed advertising even into places like the developer tools in Edge. It is just great to be reviewing the communication from the browser to the server and having to read the ads interjected in the middle of it. AI will be great once we get past the spin cycle.
 

Le0M

macrumors 6502a
Aug 13, 2020
870
1,211
Considering all Apple Silicon Macs and recent iPhones have a Neural Engine, it's kinda misleading to say that the M4 will be "AI focused", cuz they all are. It's just gonna be faster, but even the low-end M1 can certainly deal with on-device AI.
 
  • Like
Reactions: gusmula

Realityck

macrumors G4
Nov 9, 2015
10,409
15,677
Silicon Valley, CA
M4 will get exclusive AI functionality with the next macOS. M3 has a very slow Neural Engine; according to Wikipedia, it has ca. half the speed of the iPhone 15 Pro's.

of interest to the NPU discussion
see https://www.notebookcheck.net/Apple...-the-A17-Pro-A16-Bionic-designs.764280.0.html

One aspect of the provenance of the M3 design that is crystal clear is that it shares the same, or a very similar, GPU architecture with the A17 Pro. Apple calls the A17 Pro graphics "the biggest GPU redesign in Apple's history," while it says the M3 graphics "represents the biggest leap forward in graphics architecture ever for Apple silicon." With the A17 Pro launched on 13 September and the M3 chips launched on 31 October, it is safe to assume the architectures are very similar, especially given that the brand-new hardware-accelerated ray tracing feature is common to the two. The only exception may be the "Dynamic Caching" feature in the M3 GPU, which wasn't called out as a feature of the A17 Pro GPU. Otherwise, the A17 Pro and M3 GPUs share much more in common than do the A16 Bionic's GPU and the M3 GPU.

Things change, however, when we look at the M3 series NPU (Neural Processing Unit). As in previous M series chips, the M3 "Neural Engine" (as Apple calls it) has 16 cores and produces 18 TOPS (trillion operations per second). This is almost 2x slower than the NPU in Apple's latest smartphone chip, which is somewhat unexpected. The NPU in the Apple M1 is good for 11 TOPS (the same as the NPU in the A14 Bionic), while the NPU in the Apple M2 was 15.8 TOPS (the same as the NPU in the A15 Bionic). As you might have guessed, the NPU in the M3 is much closer to the A16 Bionic's 17 TOPS, indicating that this is the basis for the M3 series NPUs. The 1 TOPS differential can be explained by the performance gained through the switch to the 3nm node used for the M3 against the 5nm (N4P) node used for the A16 Bionic. Or it could simply be the result of a clock boost.

As for the CPU architecture, that is a little more difficult to discern, but there is a very good case to be made that it is more likely in line with the A16 Bionic. While the shared NPU is one clue that the CPU architecture is also the same, the next-generation GPU architecture that we have now seen debut in the Apple A17 Pro and M3 series was actually originally intended for the A16 Bionic. An exclusive report [sub.req] from The Information in December of 2022 said that Apple's engineers were "too ambitious" with this ray tracing-capable GPU. Prototype A16 Bionic chips running the new GPU architecture suffered from overheating and drew too much power. This "unprecedented" blunder came amidst an exodus of top chip talent from Apple, including former A series lead architect Gerard Williams, who left to found Nuvia and is now SVP of Engineering at Qualcomm following its acquisition of Nuvia. Williams was recently on stage at Qualcomm's Snapdragon Summit to launch the impressive M2 Max-beating Snapdragon X Elite chip.

This suggests that the M3 series chips were probably intended to be entirely based on the A16 Bionic, including its proposed next-generation GPU. Apple pivoted with the A16 Bionic by improving graphics memory bandwidth to boost performance over the A15 Bionic, but delayed the implementation of the new GPU architecture until the A17 Pro. Apple would undoubtedly have had all the technical documentation and schematics laid out for the M3 with A16 architecture - including the new GPU architecture. However, where they had to rush a fix for the A16 GPU, they would still have had time to get the GPU sorted out as planned for the M3 series, while also making sure it was good to go for the A17 Pro.

Given the M3's NPU connection with the A16 Bionic - and the fact that what was supposed to be the A16 Bionic GPU ended up in the M3 - it seems highly likely that both the A16 Bionic CPU and NPU architecture also form the basis of the M3. Thus, the Apple M3 series looks to be a hybrid of the A16 Bionic and A17 Pro, but an accidental hybrid. It received the CPU, NPU and GPU cores always intended for the chip - just by what looks to be a more circuitous route.
 

novagamer

macrumors regular
May 13, 2006
165
198
Exactly. The NPU needs to get the same treatment the GPU does with having more cores per model level, at least. There are going to be a lot of annoyed M3 Mac Studio / Mac Pro buyers (assuming those products get updated with M3) if they care about this type of performance because laptops coming in 6-8 months are going to wreck them on that front.

Apple did a great job with fast, large pools of unified memory aiding ML training, and the M1 was competitive enough at the time that they highlighted it as the best laptop for ML/AI development. That claim has been notably (and rightfully) absent from their marketing since, because it isn't competitive unless your model can't fit into a GPU's memory. It's a joke that the iPhone is faster than the M3 at some operations, but the M2 and M3 are weird products on a lot of levels. The DMP needs a slight revision to close the security hole around memory addressing, plus they could really use things like Thunderbolt 4 and AV1 encoding; they're behind the curve there.

The M4 will be pretty great assuming these things are addressed, and I think they will be.
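To put rough numbers on the "fit into memory" point, the back-of-the-envelope math looks like this; the parameter counts and precisions below are illustrative assumptions, not any particular model:

```python
def weights_gib(params_billions, bytes_per_param):
    # Weights-only footprint; activations and KV cache add more on top.
    return params_billions * 1e9 * bytes_per_param / 1024**3

for params_b, label, bpp in [(7, "fp16", 2), (70, "fp16", 2), (70, "int4", 0.5)]:
    print(f"{params_b}B @ {label}: ~{weights_gib(params_b, bpp):.0f} GiB")

# 7B  @ fp16: ~13 GiB  -> fits almost anywhere
# 70B @ fp16: ~130 GiB -> overflows any single consumer GPU,
#                         but fits in a 192 GB unified-memory Mac Studio
# 70B @ int4: ~33 GiB  -> comfortable on a 64 GB laptop
```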
 
Last edited:
  • Like
Reactions: atonaldenim

rukia

macrumors regular
Jul 18, 2021
208
684
You realize that it is only a 1080p webcam, right? Android phones have a very small dot in the screen for a 4K camera. So give me a break; there is no way we have a big-ass notch due to a 1080p camera.

Unlike sensor size (i.e., full frame, APS-C, 1", etc.), resolution has nothing to do with the size of the camera. Regardless of how many pixels you cram into a given sensor size, the required image circle, and hence lens size, remains the same. F-stop and focal length, on the other hand, do have an impact on the required size.
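A quick back-of-the-envelope sketch of why that is: the physical size of the optics falls out of focal length and f-number, and pixel count never enters into it. The focal lengths here are made-up, webcam-ish values:

```python
def entrance_pupil_mm(focal_length_mm, f_number):
    # Aperture (entrance pupil) diameter = focal length / f-number.
    # Sensor resolution never appears in this formula.
    return focal_length_mm / f_number

# Hypothetical tiny webcam lens: ~3 mm focal length
print(entrance_pupil_mm(3.0, 2.0))  # f/2.0 -> 1.5 mm, whether 1080p or 4K
print(entrance_pupil_mm(3.0, 1.6))  # f/1.6 -> 1.875 mm: faster glass is bigger glass
```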
 

impulse462

macrumors 68020
Jun 3, 2009
2,089
2,874
Glorified applied statistics with no contextual understanding isn’t as flashy as “AI” for branding.

I can’t wait for the AI bubble to burst, and I was actually proud of Apple last WWDC for sticking to the correct term of Machine Learning where the rest of the industry just rebranded it to “AI”. I’m dismayed that they’ve apparently decided to join in on the horrible misuse of terms now…
Transformers, the neural network architecture that underpins LLMs, do have contextual understanding. It comes from the multi-head attention module, which computes dot products between each token and every other token in a tokenized dataset (strings, images, etc.).

Now, if you want to argue that this particular method of contextual understanding is flawed, we can; but even older models that predate transformers (RNNs, seq2seq) feature contextual understanding. I suspect you mean that this "contextual understanding" is based on patterns in a dataset rather than actual comprehension, and I would agree with you on that. But these models are much more than "glorified" statistics.
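For the curious, "dot products between each token and every other token" looks roughly like this minimal single-head sketch; it's illustrative only, not how any production model is actually implemented:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # Score every token's query against every other token's key,
    # then use the softmaxed scores to mix the value vectors.
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # (seq, seq) token-vs-token dot products
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the context
    return weights @ V                              # context-aware token representations

rng = np.random.default_rng(0)
tokens = rng.standard_normal((4, 8))  # 4 toy tokens, 8-dim embeddings
out = scaled_dot_product_attention(tokens, tokens, tokens)
print(out.shape)  # (4, 8): each row now blends information from all tokens
```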
 

The Clark

macrumors 6502a
Dec 11, 2013
776
2,232
Canada
I've always been a bit fearful of AI and so have had my head in the sand a bit, but can anyone with chip knowledge explain how a chip enhances AI? I thought it was all done on request and return by supercomputers guzzling water? I also have no idea what the 'neural engines' on a chip do. Is enhanced AI different from that? Could someone help an old man out here?
Chips designed for AI, like neural processing units (NPUs) or neural engines, are specialized hardware optimized for running AI algorithms efficiently. They accelerate tasks like deep learning, which powers many AI applications today. These chips perform many calculations in parallel, which is crucial for handling the massive amounts of data involved in AI tasks.

Neural engines specifically are built to accelerate the core math of neural networks (large batches of matrix multiplications and convolutions), enabling faster and more efficient processing than a general-purpose CPU. Enhanced AI refers to the improvement in AI capabilities achieved through advancements in hardware, software, and algorithms working together; it's not just about processing requests but also about learning from data and making predictions or decisions. So while supercomputers are still used for training the biggest models, specialized chips bring AI capabilities to smaller devices like smartphones, making AI more accessible.
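As a concrete (hypothetical) example of AI running on-device rather than on a server: with Apple's coremltools you can convert a model and ask the runtime to prefer the Neural Engine. This is just a rough sketch; the stand-in model and file name are arbitrary:

```python
import torch
import torchvision
import coremltools as ct

# Stand-in model; any traced PyTorch model works the same way.
model = torchvision.models.mobilenet_v3_small(weights=None).eval()
example = torch.rand(1, 3, 224, 224)
traced = torch.jit.trace(model, example)

# Convert to Core ML and ask the runtime to prefer the Neural Engine,
# falling back to the CPU for any unsupported ops.
mlmodel = ct.convert(
    traced,
    inputs=[ct.TensorType(shape=tuple(example.shape))],
    compute_units=ct.ComputeUnit.CPU_AND_NE,
    convert_to="mlprogram",
)
mlmodel.save("MobileNetV3Small.mlpackage")
```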
 

Tagbert

macrumors 603
Jun 22, 2011
5,748
6,722
Seattle
People said that about M3. We're still up against process-node limitations, and transistors are barely shrinking. N3E is nearly equivalent in performance to N3B, but with better yields.
We are up against people's unrealistic expectations of how much improvement each chip generation can provide, and a tendency to think that a 15-20% improvement is lame. M1 was a huge jump because of where the pre-Apple Silicon laptops were with their Intel chips. Once we jumped up to the new plateau, each step from there was never going to be such a big increase.
 

Tagbert

macrumors 603
Jun 22, 2011
5,748
6,722
Seattle
We just got M3. This industry is going crazy. What can people not do with M3 that needs M4? Seriously. New tech is rolled out just for profit. Sell sell sell. New new new.
Relax; there have always been computer and processor updates, typically annual and sometimes more frequent. In most cases, the year-over-year difference isn't huge. As with phones, you need to wait 2, 3, or 4 years before an upgrade really does feel like an upgrade. The annual updates are about keeping up with the competition and providing upgrades for people who still have older machines.

It only feels overwhelming if you spend a lot of time on a rumor site like this, where there is a constant stream of new new new. For normal people who might be thinking of a new computer, the rumors don't mean much. They just go buy what they need when they need it.
 

Torty

macrumors 65816
Oct 16, 2013
1,127
853
M4 chips will not be significantly larger or have significantly more transistors. So if the M4 NPU is a significant enhancement, what is going to be sacrificed? Fewer GPU cores?

Or perhaps the significance of the M4 update is mostly just fancy marketing.
Transistors are actually a bit bigger with N3E. So they'll take the NPU cores from the A17 Pro, which, including the higher clock speed, gives roughly a 100% speed boost vs. the M3. They should reach ca. 80 TOPS with M4.
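For what it's worth, here's the arithmetic behind that guess; the core-count and clock scaling factors are pure speculation:

```python
m3_npu_tops = 18       # Apple's figure for the 16-core M3 Neural Engine
a17_pro_npu_tops = 35  # Apple's figure for the A17 Pro Neural Engine

print(a17_pro_npu_tops / m3_npu_tops)  # ~1.94x: the "100% speed boost" per core

# Getting to ~80 TOPS needs more than porting the A17 Pro cores over;
# these scaling factors are purely hypothetical:
assumed_extra_cores = 2.0   # e.g., doubling the NPU core count
assumed_clock_gain = 1.15   # e.g., a modest N3E clock bump
print(a17_pro_npu_tops * assumed_extra_cores * assumed_clock_gain)  # ~80.5 TOPS
```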
 
  • Like
Reactions: transpo1 and ric22

Rychiar

macrumors 68030
May 16, 2006
2,605
5,733
Waterbury, CT
What workload are you worried that the M2 Ultra can't handle right now that you think the M3 Ultra can?
Nothing, really. I just felt weird buying an M2 when the M3 was already out. I think Apple should rethink the way they release these things. If they wanted to skip M3 and throw an M4 in the next Mac Studio, I think that would be great.
 
  • Like
Reactions: krell100

Chuckeee

macrumors 68000
Aug 18, 2023
1,989
5,513
Southern California
Apple is said to be nearing production of the M4 processor, and it is expected to come in at least three main varieties. Chips are codenamed Donan for the low-end, Brava for the mid-tier, and Hidra for the top-end.
So if this is true, Apple is going from 4 processor tiers (e.g., Mx, MxPro, MxMax, MxUltra) to just 3. So the new mid-tier will take the place of both the MxPro and the MxMax?

While there is some overlap within the current 4 tiers (e.g., between a loaded MxPro and the lowest-level MxMax), and there is some logic to greater delineation between tiers, this is a big departure from how Apple has been marketing its hardware (just look at the range of iPads).
 
Last edited:

dieselm

macrumors regular
Jun 9, 2009
194
124
So if this is true, Apple is going from 4 processor tiers (e.g. Mx, MxPro, MxMax, MxUltra) to just 3. So the new mid-tier will be taking the place of both MxPro and MxMax?
The Bloomberg article said the low-end chip goes into the MacBook Air, the low-end MacBook Pro, and the Mac mini.
The middle chip powers the higher-end MacBook Pros.
The high-end chip goes into the desktops (Mac Studio and Mac Pro).

A couple ways this could go.

1. The MxPro and MxMax are somehow one chip, and the high-end M4 is the M4Ultra. This seems wasteful and unlikely.

2. There's an M4Ultra chip that is on a later path, just as there are M3, M3Pro, and M3Max today and no M3Ultra.
The M2Ultra was 2xM2Max with an interposer, so perhaps an M3Ultra is coming soon too (and it would be a monster).

In this universe, the article was a little off, and we still have the mid and high-end chips powering the high-end MacBook Pros. The desktops would then run the M4Max and 2xM4Max (M4Ultra).

Now there's still the matter of the reported maximum memory of 512GB for the desktop machines. Will the M4Max have its maximum memory moved up to 256GB? That would be exceptional for a laptop, and would yield an M4Ultra (2xM4Max) with 512GB. Or has Apple figured out a way to enable a 4xM4Max configuration?


Other ideas?
 
Last edited:
  • Wow
Reactions: gusmula

Chuckeee

macrumors 68000
Aug 18, 2023
1,989
5,513
Southern California
A couple ways this could go.

1. There's an M4Ultra chip that is on a later path, just as there are M3, M3Pro, and M3Max today and no M3Ultra.

2. The MxPro and MxMax are somehow one chip, and the high-end M4 is the M4Ultra. This seems unlikely.

3. We have the same M4, M4Pro, and M4Max, and Apple has a way to combine M4Max chips (or M3Max chips for this gen) with some interconnect to make a 2xM4Max, and the Mac Studio/Pro will be available with M4Max and 2xM4Max.

Other ideas?
1) Looking back at the source and questioning Gurman's accuracy?

2) Expanding the idea of chiplets and interconnects, where a tier-2.5 chip is based on interconnecting multiple mid- or low-tier chips.

3) Binning of higher-level chips (or a lower clock rate).
 

Bonte

macrumors 65816
Jul 1, 2002
1,165
506
Bruges, Belgium
Well, that's settled then: my next upgrade from my current M1 Mini is going to be the M4 Studio, if I can get my hands on one. I bet this upgrade cycle is going to be huge.

I still hope Apple is also working on efficient AI-dedicated server chips; they sure need them.
 