You say that like Apple doesn't do the same. But Apple doesn't just overcharge for Apple tech but also "off the shelf" tech like RAM and storage.

True, but Nvidia is currently charging a crazy premium.
They need the manufacturing capacity of the 3nm process for the new iPhones and iPads. And it's not a problem to run as many M2 Ultras in parallel as needed for whatever they do.

It seems crazy to use M2 chips to power AI when the M3 and M4, by Apple's own admission, come with a hugely upgraded Neural Engine. What on earth are they thinking?! Maybe there's just a huge surplus of unsold M2 Ultra chips that they need to do something with...
I might actually buy one if they released it.

Return of Xserve, anyone?
Kinda like Amazon's stores, where they claimed to use AI when instead it was just a bunch of cameras with low-paid Indian workers watching them.

So it won't be processed "on device" then…
Do you pay to use ChatGPT?

My experience is the opposite. I am a scientist and professor (I like to use my brain) and I use ChatGPT almost every day to help speed up my research and work. In fact, every person I know who uses ChatGPT or something similar regularly is someone who thinks and does a lot -- researchers, artists, professors, engineers, etc. I'm sure my sample is biased, but LLMs have been one of the best tools ever developed for much of what I do. They help me get things done much more efficiently.
If I ever tried writing "NVIDIA chips are power-consuming" somewhere on Reddit (even in r/macgaming!), they would have downvoted me into the void.

An Nvidia 4090 or whatever may have more power in certain situations, but if you are paying for all of that in green energy, then Apple's M chips start looking more attractive.
Actually, the right question is, why would Tim Cook agree to lock Apple into a CUDA subscription for all their AI needs? Not only are the premiums on NVIDIA servers way too bloated, but they also want Apple to build on top of their proprietary stack when Google, Intel, Qualcomm, ARM, etc. will release the open-source UXL stack for GPUs by year end. No way will Tim Cook allow Apple to be fleeced by NVIDIA. He stated in last week's earnings call that Apple is using hybrid solutions for now, such as AWS and Google Cloud. So these other back ends may use some NVIDIA servers, but they appear to be stopgap solutions until an Apple silicon AI server chip is ready. IMO there is zero chance of Apple wanting to lock themselves into an NVIDIA/CUDA stack for their own AI iCloud server farms.

They could do the same with the GPU.
The right question is, how many M2 Ultras does it take to reach a B200 in AI performance?
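A rough way to frame that question is raw throughput per chip. The numbers below are assumed, approximate public spec figures (not benchmarks), and the FP16-at-2x-FP32 assumption for the M2 Ultra GPU is mine, so treat this purely as an order-of-magnitude sketch:

```python
import math

# Back-of-envelope only: every figure here is a rough assumption, not an
# official benchmark. Raw FLOPS also ignore memory bandwidth, interconnect,
# and the software stack, which dominate real LLM serving.
M2_ULTRA_FP16_TFLOPS = 27.2 * 2   # assumes FP16 at ~2x the ~27.2 TFLOPS FP32 GPU rating
B200_FP16_TFLOPS = 2250.0         # assumed dense FP16 rating (~2.25 PFLOPS)

ratio = B200_FP16_TFLOPS / M2_ULTRA_FP16_TFLOPS
print(f"~{math.ceil(ratio)} M2 Ultras per B200 on raw FP16 FLOPS alone")
```

Under those assumptions the gap is on the order of tens of chips, before even considering how you would network that many Macs together.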
Well said. The purpose of shifting from Intel to Apple silicon was greater control over the evolution and lifecycle of Apple products.

Actually, the right question is, why would Tim Cook agree to lock Apple into a CUDA subscription for all their AI needs? Not only are the premiums on NVIDIA servers way too bloated, but they also want Apple to build on top of their proprietary stack when Google, Intel, Qualcomm, ARM, etc. will release the open-source UXL stack for GPUs by year end. No way will Tim Cook allow Apple to be fleeced by NVIDIA.
I hope you realise that AI is already powering lots of products and features that people know and love.

In my experience talking about it with people, the only people who support and want AI are people who tend not to like thinking. Those of us who like to use our brains are fine without AI. Just my experience. YMMV.
I think Apple will just bake the costs of the AI features into iCloud and increase the annual subscription cost by 1-3% every year.

"Introducing our magical, incredible iCloud AI Pro Max... starting at only $69/month. We think you'll love it!"
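For what a steady 1-3% annual bump actually compounds to, here is a quick sketch; the $9.99 starting price is a made-up example figure, not a real iCloud tier:

```python
# Illustrative sketch: what a steady 1-3% annual price increase compounds to.
# The $9.99 starting price is a made-up example, not a real iCloud tier.
def price_after(start: float, annual_pct: float, years: int) -> float:
    """Price after compounding an annual percentage increase for `years` years."""
    return start * (1 + annual_pct / 100) ** years

for pct in (1, 3):
    print(f"{pct}%/yr: $9.99 -> ${price_after(9.99, pct, 10):.2f} after 10 years")
```

Even at the top of that range, ten years of compounding adds only a few dollars a month, which is why burying the cost in the subscription is plausible.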
You are missing the point. Apple, according to this article, is not making an AI-specific chip but is trying to use their outdated M2 Ultra to compete with hardware that is much faster at AI and ultimately more efficient.

Actually, the right question is, why would Tim Cook agree to lock Apple into a CUDA subscription for all their AI needs? Not only are the premiums on NVIDIA servers way too bloated, but they also want Apple to build on top of their proprietary stack when Google, Intel, Qualcomm, ARM, etc. will release the open-source UXL stack for GPUs by year end. No way will Tim Cook allow Apple to be fleeced by NVIDIA. He stated in last week's earnings call that Apple is using hybrid solutions for now, such as AWS and Google Cloud. So these other back ends may use some NVIDIA servers, but they appear to be stopgap solutions until an Apple silicon AI server chip is ready. IMO there is zero chance of Apple wanting to lock themselves into an NVIDIA/CUDA stack for their own AI iCloud server farms.
As another poster commented above, they both have the same chip architecture, i.e., ARM. Thus far, Apple has focused on efficiency and thermal management. Now that they are turning their attention to AI server chips, Apple is more than capable of catching up to NVIDIA. Their future AI server platform already has a growing user base of 2 billion plus devices they can tap into as data sources for their LLM. Why do you suppose NVIDIA has had so much insider selling? They don't have a clue where they will be in one or two years as UXL for GPUs matures.
Well said. Nvidia are years ahead of their nearest rivals.

You are missing the point. Apple, according to this article, is not making an AI-specific chip but is trying to use their outdated M2 Ultra to compete with hardware that is much faster at AI and ultimately more efficient.
It's like trying to mine crypto with a GPU and compete with ASIC miners built specifically for that task.
If Apple makes its own hardware optimized for AI, something Google did successfully for Gemini's LLM, that's a whole other story.
Apple also thought they could compete with Qualcomm on modem chips because they were successful in making their own SoC, but they failed miserably. I wouldn't claim they can catch up to Nvidia in AI anytime soon unless it actually happens. But that's Tim Cook's Apple in a nutshell these days: trying to catch up with everyone else because you're always behind.
Those who believe Apple can compete with Nvidia in AI using the M2 Ultra are the same people who believed 8GB of RAM on a Mac is 16GB on Windows.
Let's face it, we all know exactly why Apple is not going to use Nvidia GPUs.
That makes sense ... one good thing about Copilot is creating a summary of a huge design document.

One feature I'm hoping for: summarize full ebooks or specific sections, and mark the most important things in the Books app. A cool addition would also be: read the book -- make every book an audiobook. This would be a great AI feature.
LLM context windows generally aren't large enough for full books (yet).

One feature I'm hoping for: summarize full ebooks or specific sections, and mark the most important things in the Books app.
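A quick way to see why full books strain context windows is the common heuristic of roughly 4 characters per token; the word count, character count, and the 128k-token window below are all illustrative assumptions:

```python
# Rough feasibility check: does a full-length book fit in a context window?
# Uses the common ~4 characters/token heuristic; all counts are illustrative.
def estimated_tokens(char_count: int, chars_per_token: float = 4.0) -> int:
    return int(char_count / chars_per_token)

book_chars = 100_000 * 6     # ~100k-word novel at ~6 chars per word incl. spaces
context_window = 128_000     # e.g. a model with a 128k-token window

tokens = estimated_tokens(book_chars)
print(f"~{tokens} tokens; fits in window: {tokens <= context_window}")
```

Under those assumptions a typical novel already overflows a 128k window, which is why book-length summarization usually needs chunking plus a summary-of-summaries pass.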
Apple could already do this today, since they have that feature for web pages in Safari (https://support.apple.com/guide/iphone/use-siri-to-listen-to-a-webpage-iph449fc616c/ios). They won't do this for ebooks, however, because that would conflict with audiobook rights, and publishers would pull their books from Apple. Besides, it would diminish Apple's profits from audiobook sales.

A cool addition would also be: read the book -- make every book an audiobook. This would be a great AI feature.
Yes. I could use Microsoft's version for free, or other options, but I also use some of the other plugin GPTs.

Do you pay to use ChatGPT?
I thought Apple could have released a Mac Pro that was upgradable by adding boards/blades: processing, storage, etc. These boards would be interconnected using something like InfiniBand or a proprietary bus based on a torus topology.

Apple needs to allow multiple Studios to have a cluster of GPUs and memory.
AI itself is worth it at any cost. We've been using generative AI for a few years now and are starting to take it for granted; Apple can't delay a few years to sort out a better deal for themselves.

Actually, the right question is, why would Tim Cook agree to lock Apple into a CUDA subscription for all their AI needs?
So Apple should risk their $2 trillion valuation by kissing Jensen's ring and becoming a captive customer?

AI itself is worth it at any cost. We've been using generative AI for a few years now and are starting to take it for granted; Apple can't delay a few years to sort out a better deal for themselves.