On-device is great; hopefully it will allow for automation scripting that is as fast as running local scripts and doesn't require uploading anything to a server for processing.
> Sadly, more than 90% of models are CUDA-based.

Well yeah... what else can run these models?
> Sadly, more than 90% of models are CUDA-based.

Great: the magical AI future everyone is allegedly clamoring for is technically locked in to one vendor. I guess the EU is going to step in? 😉
> Well yeah... what else can run these models?

Nvidia is dominating AI because everything runs on the Nvidia ecosystem: software, technology, and more. CUDA is a great example, and Nvidia has invested in AI for a decade. Yes, AI chips and NPUs are much more efficient, but they're not powerful enough to beat Nvidia GPUs for now, and even when they are powerful, they're limited to their own AI models.
> Yeah, I actually laughed when I read the outcome.

We are Borg.
ChatGPT could very well be one of us.
> The only reason they released this is they need help from the community.

Hey now, I paid for those scratches; I heard they used Steve Jobs' personal finger bones to scratch-test each screen.
They also need to stop using words like "empower" and "enrich people's lives."
1. Selling Macbook Pros with scratches in the screen is not empowering nor enriching.
2. Selling HomePods, then killing them off slowly, only to resurrect them with even further downgrades is not empowering nor enriching.
3. Still selling accessories with lightning is not empowering nor enriching.
4. Selling Goggles that pose a health risk causing blindness is not empowering nor enriching.
5. Releasing upgrades that cripple performance or introducing hardware glitches to force upgrades is not empowering nor enriching.
6. Needing to sum up this list and more is not empowering nor enriching.
It's clear they are behind and need help. OpenAI's GPT-4 has 1.76 trillion parameters. Apple's OpenELM model has 3 billion parameters.
> It's clear they are behind and need help. OpenAI's GPT-4 has 1.76 trillion parameters. Apple's OpenELM model has 3 billion parameters.

1.76 trillion parameters is a little like a turbo boost on a Hummer: wasteful in power, costly, and given to delusions.
> Nvidia is dominating AI because everything runs on the Nvidia ecosystem: software, technology, and more. CUDA is a great example, and Nvidia has invested in AI for a decade. Yes, AI chips and NPUs are much more efficient, but they're not powerful enough to beat Nvidia GPUs for now, and even when they are powerful, they're limited to their own AI models.

Apple seems to be staking out ground in the on-device AI model domain, as this article demonstrates. Despite the cynical comments and attempts to cast shade on OpenELM, advancements like MLX and Ferret demonstrate that Apple is focused on building infrastructure, not just models. Apple has an audience of billions of devices and privacy-oriented users that is by itself a huge market. If Apple can deliver on-device, ecosystem-optimized tools, the CUDA advantage could become less of an issue, perhaps even a non-issue. I'm really hoping to see some consequential advancement in Apple ecosystem AI development tools at WWDC. 🙏🏽
It's better than nothing, but there's a long way to go, especially since the Mac is limited to 2D-based software like video, music, photo, and more.
> It's clear they are behind and need help. OpenAI's GPT-4 has 1.76 trillion parameters. Apple's OpenELM model has 3 billion parameters.

Did you miss the part about this being an AI that can run locally on your device? Not everything is about how big your parameters are.
And does that 1.76 trillion parameter model run on a phone without cloud servers?
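For scale, a rough back-of-envelope sketch (my own arithmetic, using the parameter counts quoted in this thread, not official figures): the memory needed just to hold a model's weights is roughly parameters × bits per weight ÷ 8. A 3-billion-parameter model quantized to 4 bits fits comfortably in a phone's RAM; a 1.76-trillion-parameter model at 16 bits is server-rack territory.

```python
# Back-of-envelope: memory needed just to store model weights.
# Ignores activations and KV cache; parameter counts are the
# figures quoted in this thread, not official numbers.

def weight_gib(n_params: float, bits_per_weight: float) -> float:
    """GiB required to hold the weights alone."""
    return n_params * bits_per_weight / 8 / 1024**3

openelm = weight_gib(3e9, 4)      # 3B params, 4-bit quantized
gpt4 = weight_gib(1.76e12, 16)    # 1.76T params (reported), 16-bit

print(f"3B @ 4-bit:     {openelm:.1f} GiB")  # ~1.4 GiB: fits in an 8 GB phone
print(f"1.76T @ 16-bit: {gpt4:.0f} GiB")     # ~3278 GiB: nowhere near a phone
```

Which is exactly the point: parameter count alone tells you almost nothing about whether a model is usable on-device.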
Ffs, this is not an LLM...
[Hey ChatGPT, please generate a comment in the style of a typical MacRumors average user, with a touch of acid humor, regarding this piece of news.]
"Oh great, Apple's finally joining the open-source party—just a decade late and probably still with strings attached somewhere in those 'open' terms. They're throwing us a bone with OpenELM, but let's be real, they’re probably just doing it to lure in some AI hotshots tired of their corporate overlords. Now we just have to sit back and wait for iOS 18, where they'll inevitably limit these models to the latest hardware, forcing us all to upgrade. Because, you know, my current iPhone can't possibly handle a couple more AI tricks without combusting."
> Apple finally decides to join the open-source party, huh? Only a decade late, and I bet there are still some hidden strings attached to their so-called 'open' terms. They're throwing us a bone with OpenELM, but let's be real, they're probably just trying to lure in some AI hotshots tired of their corporate overlords. And now we have to wait for iOS 18, where they'll undoubtedly limit these models to the latest hardware, forcing us all to upgrade. Because, you know, my current iPhone surely can't handle a few more AI tricks without spontaneously combusting. Classic Apple move.

Que? They open-sourced WebKit, and that gave Google a chance to dethrone Internet Explorer with Chrome.
> I wonder if Macs with 8GB memory will be able to run whatever Apple bakes into macOS based on it?

My experience with my basic M1 Pro MacBook is quite good using Llama 3 or Mistral, so I would expect that an 8 GB iPhone with an optimised model could be quite OK.
I’d be surprised if even the 15 Pro will be able to run anything based on this.
New iPhones for everyone, hopes Tim Cook.
> Well, not a great example, as WebKit was originally the open-source KHTML component from KDE before Apple got their paws on it...

KHTML is discontinued...
> Does ChatGPT even attempt to run on device? You know, the whole point of this?

Food for thought... what makes you any different than these so-called "experts" you speak of? If Apple can't even get Siri to perform minuscule functions without screwing them up, what makes you think they'll get AI right?
The thing I’ve noticed about all these AI hype-people is that they certainly know who the day-to-day “leader” of the pack is, but somehow can’t imagine smaller models being a better solution for a given task. Instead of a one-size-fits-all approach, what’s wrong with invoking a trained AI that is smaller but specifically suited to the task at hand, *automatically, based on the context of what you’re currently doing*?
Just some food for thought…