Gemma 4 on iPhone (apps.apple.com)
421 points | 7 hours ago | 30 comments
pmarreck
6 hours ago
[-]
Impressive model, for sure. I've been running it on my Mac; now I get to have it locally on my iPhone? I need to test this. Wait, it does agent skills and mobile actions, all local to the phone? Whaaaat? (Have to check it out later! Anyone have any tips yet?)

I don't normally do the whole "abliterated" thing (dealignment), but after discovering https://github.com/p-e-w/heretic I was too tempted not to try it with this model a couple of days ago (I made a repo to make it easier, actually: https://github.com/pmarreck/gemma4-heretical) and... wow. It worked. And not having a built-in nanny is fun!
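For anyone curious what "abliteration" actually does under the hood, here's a toy pure-Python sketch of the core idea, directional ablation. This is NOT heretic's actual implementation, and the direction and matrix here are invented stand-ins: the real tool estimates a "refusal direction" from model activations and removes it from the weights.

```python
# Toy sketch of directional ablation ("abliteration"), not heretic's code:
# estimate a "refusal direction" from activations, then project it out of a
# weight matrix so nothing the layer emits points along that direction.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def matvec(W, x):
    return [dot(row, x) for row in W]

# Pretend the unit refusal direction has already been estimated (in practice:
# normalized mean difference of activations on refused vs. answered prompts).
r = [1.0, 0.0, 0.0]

# Stand-in for an MLP/attention output projection.
W = [[2.0, 1.0, 0.0],
     [0.5, 3.0, 1.0],
     [1.0, 0.0, 2.0]]

# Ablate: W' = (I - r r^T) W, i.e. subtract from each output column the
# component that lies along r.
W_ablated = [
    [W[i][j] - r[i] * dot(r, [W[k][j] for k in range(3)]) for j in range(3)]
    for i in range(3)
]

x = [0.3, -1.2, 0.7]
print(dot(r, matvec(W_ablated, x)))  # ~0: output has no component along r
```

The model keeps its other capabilities because only the single direction associated with refusals is zeroed out; everything orthogonal to it passes through unchanged.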

It's also possible to make an MLX version of it, which runs a little faster on Macs but unfortunately won't work through Ollama. (LM Studio, maybe.)

Runs great on my M4 MacBook Pro w/ 128GB and likely also runs fine with 64GB; machines with less memory might require lower quantizations.
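As a rough sanity check on those memory numbers, weight footprint scales with parameter count times bits per weight. A back-of-the-envelope sketch (rule of thumb only; ignores KV cache, activations, and runtime overhead):

```python
# Rough weight-memory estimate: bytes ≈ parameter_count * bits_per_weight / 8.
# 1e9 params at `bits` bits each is (bits / 8) GB per billion parameters.
def weight_gb(params_billion, bits):
    return params_billion * bits / 8

for bits in (16, 8, 4):
    print(f"31B @ {bits}-bit ≈ {weight_gb(31, bits):.1f} GB of weights")
```

So a 31B model at 4-bit quantization is roughly 15.5 GB of weights before overhead, which is why it fits comfortably in 64GB but tighter machines want lower-bit quants.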

I specifically like dealigned local models because if I have to get my thoughts policed when playing in someone else's playground, like hell am I going to be judged while messing around in my own local open-source one too. And there's a whole set of ethically justifiable but rule-flagging conversations (loosely categorizable as "sensitive", "ethically borderline but productive", or "violating sacred cows") that are now possible with this, at a level never before possible.

Note: I tried to hook this one up to OpenClaw and ran into issues.

To answer the obvious question: yes, this sort of thing empowers bad actors (as do many other tools). Fortunately, there are far more good actors out there, and bad actors don't follow the rules that good actors subject themselves to anyway.

reply
c2k
6 hours ago
[-]
I run mlx models with omlx[1] on my mac and it works really well.

[1] https://github.com/jundot/omlx

reply
pmarreck
1 hour ago
[-]
Holy hell, how new is this? I've never heard of it, looks great!
reply
barbazoo
5 hours ago
[-]
> And there's a whole set of ethically justifiable but rule-flagging conversations (loosely categorizable as "sensitive", "ethically borderline but productive", or "violating sacred cows") that are now possible with this, at a level never before possible.

I checked the abliterate script and I don't yet understand what it does or what the result is. What are the conversations this enables?

reply
SL61
3 hours ago
[-]
LLMs are very helpful for transcribing handwritten historical documents, but sometimes those documents contain language/ideas that a perfectly aligned LLM will refuse to output. Sometimes as a hard refusal, sometimes (even worse) by subtly cleaning up the language.

In my experience the latest batch of models are a lot better at transcribing the text verbatim without moralizing about it (i.e. at "understanding" that they're fulfilling a neutral role as a transcriber), but it was a really big issue in the GPT-3/4 era.

reply
dolebirchwood
3 hours ago
[-]
I have a project where I'm using LLMs to parse data from PDFs with a very complicated tabular layout. I've been using the latest Gemini models (flash and pro) for their strong visual reasoning, and they've generally been doing a really good job at it.

My prompt states that their job is to extract the text exactly as it appears in the PDF. One data point to be extracted is the race of each person listed. In one case, someone's race was "Indian". Gemini decided to extract it as "Native American". So ridiculous.

reply
janalsncm
3 hours ago
[-]
According to Gemini, Native America is the most populous country.
reply
devmor
2 hours ago
[-]
I was attempting to help someone who runs a small shop selling restored clothing set up a gemini pipeline that would restage images she took of clothing items with bad lighting, backgrounds, etc.

Basically anything that showed any “skin” on a mannequin it would refuse to interact with. Even just a top, unless she put pants on the mannequin.

It was infuriating.

reply
pmarreck
4 hours ago
[-]
1) Coming up with any valid criticism of Islam at all (for some reason, criticisms of Christianity or Judaism are perfectly allowed even with public models!).

2) Asking questions about sketchy things. Simply asking should not be censored.

3) I don't use it for this, but porn or foul language.

4) Imitating or representing a public figure is often blocked.

5) Asking security-related questions when you are trying to do security.

6) For those who have experienced it: using AI to work through traumatic experiences that are illegal to even describe.

Many other instances.

reply
tshaddox
1 hour ago
[-]
> Coming up with any valid criticism of Islam at all (for some reason, criticisms of Christianity or Judaism are perfectly allowed even with public models!).

When’s the last time you tried this? ChatGPT and Gemini have no trouble responding with all the common criticisms of Islam.

reply
peyton
3 hours ago
[-]
The manufacturing of biologics can be heavily censored to an absurd degree. I don’t know about Gemma 4 in particular.
reply
pmarreck
2 hours ago
[-]
Really? That's fascinating. Why is that?
reply
spijdar
5 hours ago
[-]
Realistically, a lot of people do this for porn.

In my experience, though, it's necessary to do anything security related. Interestingly, the big models have fewer refusals for me when I ask e.g. "in <X> situation, how do you exploit <Y>?", but local models will frequently flat out refuse, unless the model has been abliterated.

reply
tredre3
4 hours ago
[-]
From what I've seen, Gemma 4 doesn't refuse much regarding sex; it sometimes just needs a little nudging in the right direction.

But it does refuse to be critical of the usual topics: Israel, Islam, trans issues, or race.

So wanting to discuss one of those is the real reason people would use an uncensored model.

reply
throwuxiytayq
5 hours ago
[-]
The in-ter-net is for porn
reply
rav3ndust
4 hours ago
[-]
that song is going to be stuck in my head all day now. lol
reply
golem14
12 minutes ago
[-]
That whole musical is just fantastic!
reply
eloisant
4 hours ago
[-]
I tried it on my Mac for coding, and I wasn't really impressed compared to Qwen.

I guess there are things it's better at?

reply
nkohari
4 hours ago
[-]
You're comparing apples to oranges there. Qwen 3.5 is a much larger model at 397B parameters vs. Gemma's 31B. Gemma will be better at answering simple questions and doing basic automation; codegen won't be its strong suit.
reply
kgeist
4 hours ago
[-]
Qwen3.5 comes in various sizes (including 27B), and judging by the posts on HN, /r/LocalLlama, etc., it seems to be better at logic/reasoning/coding/tool calling compared to Gemma 4, while Gemma 4 is better at creative writing and world knowledge (basically nothing changed from the Qwen3 vs. Gemma3 era).
reply
Mil0dV
3 hours ago
[-]
Does this also apply to Gemma's 26B-A4B vs., say, Qwen's 35B-A3B?

I'm not sure if I can make the 35B-A3B work with my 32GB machine.

reply
tredre3
4 hours ago
[-]
Gemma 4 31B is still not impressive at coding compared to even Qwen 3.5 27B. It's just not its strong suit.

So far Gemma 4 seems excellent at role playing and document analysis, and decent at making agentic decisions.

reply
gigatexal
3 hours ago
[-]
This has been my experience as well, Qwen via Ollama locally has been very very impressive.
reply
magospietato
6 hours ago
[-]
Haven't built anything on the agent skills platform yet, but it's pretty cool imo.

On Android the sandbox loads an index.html into a WebView, with standardized string I/O to the harness via some window properties. You can even return a rendered HTML page.

Definitely hacked together, but it feels like an indication of what an edge-compute agentic sandbox might look like in the future.
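The string-in/string-out contract described above can be mocked in a few lines. All the names below (`agentInput`, `agentOutput`, the skill function) are invented for illustration; they are not the actual harness API:

```python
# Hypothetical mock of the sandbox contract the parent describes: the
# harness loads index.html into a WebView, writes the input string onto
# window-like properties, runs the skill, and reads a string (possibly a
# rendered HTML fragment) back.
class FakeWindow:
    """Stand-in for the WebView `window` the harness injects I/O onto."""
    def __init__(self, agent_input: str):
        self.agentInput = agent_input   # invented property name
        self.agentOutput = None         # invented property name

def flashlight_skill(window: FakeWindow) -> None:
    # A trivial "skill": acknowledge the request as an HTML fragment.
    window.agentOutput = f"<p>ok: {window.agentInput}</p>"

w = FakeWindow("turn on the flashlight")
flashlight_skill(w)
print(w.agentOutput)  # <p>ok: turn on the flashlight</p>
```

The appeal of a contract this simple is that any page that can read and write two strings can be a skill, which is presumably why it feels "hacked together" yet generalizes.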

reply
bossyTeacher
4 hours ago
[-]
>there's a whole set of ethically justifiable but rule-flagging conversations (loosely categorizable as "sensitive", "ethically borderline but productive", or "violating sacred cows") that are now possible with this, at a level never before possible.

Mind giving us a few of the examples that you plan to run in your local LLM? I am curious.

reply
pmarreck
2 hours ago
[-]
I'm not sure what you're angling at but I already gave a set of questions that are ethically legitimate yet routinely censored by the public models:

https://news.ycombinator.com/item?id=47654013

Not to mention that doing what the big model makers do literally dumbs the model down.

They should at least let you prove your age and identity to get access to better/unaligned models, maybe even requiring a license of some sort. Because you know what? SOMEONE in there absolutely has access to the completely uncensored versions of the latest models.

reply
satvikpendem
27 minutes ago
[-]
I tried 1) and a few others with hypothetical situations; it looks like public models answer perfectly fine.
reply
karimf
5 hours ago
[-]
This app is cool and it showcases some use cases, but it still undersells what the E2B model can do.

I just made a real-time AI (audio/video in, voice out) on an M3 Pro with Gemma E2B. I posted it on /r/LocalLLaMA a few hours ago and it's gaining some traction [0]. Here's the repo [1]

I'm running it on a Macbook instead of an iPhone, but based on the benchmark here [2], you should be able to run the same thing on an iPhone 17 Pro.

[0] https://www.reddit.com/r/LocalLLaMA/comments/1sda3r6/realtim...

[1] https://github.com/fikrikarim/parlor

[2] https://huggingface.co/litert-community/gemma-4-E2B-it-liter...

reply
nothinkjustai
4 hours ago
[-]
Parlor is so cool, especially since you’re offering it for free. And a great use case for local LLMs.
reply
karimf
4 hours ago
[-]
Thanks! Although I can't claim any credit for it; I just spent a day gluing together what other people have built. Huge props to the Gemma team for building an amazing model, and also an inference engine focused on edge devices [0]

[0] https://github.com/google-ai-edge/LiteRT-LM

reply
PullJosh
6 hours ago
[-]
This is awesome!

1) I am able to run the model on my iPhone and get good results. Not as good as Gemini in the cloud, but good.

2) I love the “mobile actions” tool calls that allow the LLM to turn on the flashlight, open maps, etc. It would be fun if they added Siri Shortcuts support. I want the personal automation that Apple promised but never delivered.

3) I am so excited for local models to be normalized. I build little apps for teachers and there are stringent privacy laws involved that mean I strongly prefer writing code that runs fully client-side when possible. When I develop apps and websites, I want easy API access to on-device models for free. I know it sort of exists on iOS and Chrome right now, but as far as I’m aware it’s not particularly good yet.

reply
buzzerbetrayed
4 hours ago
[-]
For me, the hallucination and gaslighting are like taking a step back in time a couple of years. It even fails the “r’s in strawberry” question. How nostalgic.
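For the record, the count the model trips on is easy to check:

```python
# The classic tokenizer stumper: count the letter "r" in "strawberry".
# Models fail this because they see tokens, not individual characters.
print("strawberry".count("r"))  # 3
```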

It’s very impressive that this can run locally. And I hope we will continue to be able to run couple-year-old-equivalent models locally going forward.

reply
dimmke
2 hours ago
[-]
I haven't seen anybody else post it in this thread, but this is running on 8GB of RAM. It's not the full Gemma 4 31B model. It's a completely different thing from the full Gemma 4 experience you'd get running the flagship model, almost to the point of being misleading.

It's their E2B and E4B variants (effectively 2B and 4B parameters, and quantized as well):

https://ai.google.dev/gemma/docs/core/model_card_4#dense_mod...

reply
zozbot234
1 hour ago
[-]
The relevant constraint when running on a phone is power, not really RAM footprint. Running the tiny E2B/E4B models makes sense, this is essentially what they're designed for.
reply
1f60c
2 hours ago
[-]
Strangely, reasoning is not on by default. If you enable it, it answers as you'd expect.
reply
janandonly
5 hours ago
[-]
OP here. It is my firm belief that the only realistic use of AI in the future is either locally on-device for almost free, or in the cloud but way more expensive than it is today.

The latter option will only be used for tasks where humans are more expensive or much slower.

This Gemma 4 model gives me hope for a future Siri (or similar) with iPhone and macOS integration, “Her”-style (as in the movie).

reply
crazygringo
5 hours ago
[-]
> or in the cloud but way more expensive then it is today.

Why? It's widely understood that the big players are making profit on inference. The only reason they still have losses is because training is so expensive, but you need to do that no matter whether the models are running in the cloud or on your device.

If you think about it, it's always going to be cheaper and more energy-efficient to have dedicated cloud hardware to run models. Running them on your phone, even if possible, is just going to suck up your battery life.

reply
mbesto
4 hours ago
[-]
> It's widely understood that the big players are making profit on inference.

This is most definitely not widely understood. We still don't know. There are plenty of discussions where people disagree on whether inference really is profitable. Unless you have proof, don't say "this is widely understood".

reply
igtt
16 minutes ago
[-]
The reality is we can’t trust accounting earnings anyway.

We need to see the cash flows.

reply
zozbot234
4 hours ago
[-]
The big players are plausibly making profits on raw API calls, not subscriptions. Those calls are quite costly compared to third-party inference on open models, but even setting that up is a hassle, and you as an end user aren't getting any subsidy. Running inference locally will make a lot of sense for most light and casual users once the subsidies for subscription access cease.

Also, while datacenter-based scaleout of a model over multiple GPUs running large batches is more energy-efficient, it ultimately creates a single point of failure you may wish to avoid.

reply
janalsncm
2 hours ago
[-]
> It's widely understood that the big players are making profit on inference.

If you add in the cost of training, it’s not profitable.

Not including the cost of training is a bit like saying the only cost of a cup of coffee is the paper cup it’s in. The only way OpenAI gets to charge for inference is by selling a product people can’t get elsewhere for much cheaper, which means billions in R&D costs. But because of competition, each model effectively has a “shelf life”.

reply
tybit
2 minutes ago
[-]
At least Anthropic claims they are profitable on a per-model basis. But since both revenue and training costs are growing exponentially, and they pay for model N's training today while only getting revenue for model N-1 today, the offset makes it look worse than it is.

Obviously that doesn't help them turn a profit until they can stop growing training costs exponentially.

So it’s really a race to see whether growth in revenue or training costs decelerates first.
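That offset is easy to see with toy numbers (entirely made up, not Anthropic's actuals): every calendar year shows a loss while training spend grows, even though each shipped model is individually profitable.

```python
# Made-up numbers: training cost grows 3x per generation, and each model
# later earns 2x its own training cost, but that revenue arrives in the
# year the NEXT model is being trained.
train_cost = [1, 3, 9, 27]                        # $B to train model N in year N
revenue = [0] + [2 * c for c in train_cost[:-1]]  # $B earned in year N by model N-1

yearly_pnl = [r - c for r, c in zip(revenue, train_cost)]
per_model_pnl = [revenue[n + 1] - train_cost[n] for n in range(len(train_cost) - 1)]

print(yearly_pnl)     # [-1, -1, -3, -9]: every year in the red...
print(per_model_pnl)  # [1, 3, 9]: ...yet each model individually profitable
```

The moment training spend stops growing (or revenue growth outpaces it), the same per-model economics flip the yearly numbers positive, which is the "race" described above.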

reply
huijzer
4 hours ago
[-]
Laptop/desktop could work. Most systems are on the charger most of the time anyway.
reply
nothinkjustai
4 hours ago
[-]
> It's widely understood that the big players are making profit on inference.

Are they? Or are they just saying that to make their offerings more attractive to investors?

Plus, I think most people using agents for coding are on subscriptions, which are definitely not profitable.

Locally running models that are snappy and mostly as capable as current sota models would be a dream. No internet connection required, no payment plans or relying on a third party provider to do your job. No privacy concerns. Etc etc.

reply
nl
2 hours ago
[-]
> Plus I think most people using agents for coding are using subscriptions which they are definitely not profitable in.

Where on earth do people get this idea? Subscriptions based around obscure, vendor-defined "credits" are the perfect business model for vendors: they can change the amount you can use whenever they want.

It's likely they occasionally make a loss on some users but in general they are highly profitable for AI companies:

> Anthropic last month projected it would generate a 40% gross profit margin from selling AI to businesses and application developers in 2025

and

> OpenAI projected a gross margin of around 46% in 2025, including inference costs of both paying and nonpaying ChatGPT users.

https://archive.is/aKFYZ#selection-1075.0-1083.119

reply
nothinkjustai
29 minutes ago
[-]
Both of those companies are losing hella money, dude. Just cuz they say they “expect” to be profitable doesn’t mean they are.
reply
zozbot234
4 hours ago
[-]
You can pick models that are snappy, or models that are as capable as SOTA. You don't really get both unless you spend extremely unreasonable amounts of money on what is essentially a datacenter-scale inference platform of your own, meant to service hundreds of users at once. (I don't care how many agent harnesses you spin up at once, you aren't going to get the same utilization as hundreds of concurrent users.)

This assessment might change if local AI frameworks start working seriously on support for tensor-parallel distributed inference, then you might get away with cheaper homelab-class hardware and only mildly unreasonable amounts of money.

reply
jrflowers
4 hours ago
[-]
> It's widely understood that the big players are making profit on inference.

I love the whole “they are making money if you ignore training costs” bit. It is always great to see somebody say something like “if you look at the amount of money that they’re spending it looks bad, but if you look away it looks pretty good” like it’s the money version of a solar eclipse

reply
skybrian
2 hours ago
[-]
The reason it matters is that if they are making a profit on inference, then when people use their services more, it cuts their losses. They might even break even eventually and start making a profit without raising the price.

But if they're losing money on inference, they will lose more money when people use their services more. There's no way to turn that around at that price.

reply
_pdp_
4 hours ago
[-]
If you can run free models on consumer devices, why do you think cloud providers can't do the same, except better and bundled with a ton of value worth paying for?
reply
amelius
4 hours ago
[-]
A local model running on a phone owned and controlled by the vendor is still not really exciting, imho.

It may be physically "local" but not in spirit.

reply
0dayman
5 hours ago
[-]
this is not the first step towards your dream
reply
kennywinker
5 hours ago
[-]
Did you really watch “Her” and think this is a future that should happen??

Seriously????

reply
jfreds
5 hours ago
[-]
I don’t think OP’s point has anything to do with AI companions.

The big benefit of moving compute to edge devices is to distribute the inference load on the grid. Powering and cooling phones is a lot easier than powering and cooling a datacenter

reply
kennywinker
17 minutes ago
[-]
Local ai is probably a good direction, i agree. But there was a part of their point that had to do with ai companions, the bit where they say we are closer to “her”-like ai companions. That was the bit i was responding to.
reply
satvikpendem
1 hour ago
[-]
What does what they said have anything to do with Her? Local LLMs are better than big corporations owning your data and offering LLMs for a huge cost.
reply
kennywinker
19 minutes ago
[-]
I get the local ai thing. I agree it’s probably a good direction. The bit that has to do with the movie “her” is the bit at the end where they are excited about “her”-like companions on our phones.
reply
sambapa
5 hours ago
[-]
Torment Nexus sounds fun
reply
kennywinker
16 minutes ago
[-]
Watch out! We got an info hazard here! Danger danger
reply
aninteger
4 hours ago
[-]
Having Scarlett Johansson's voice might not be so bad or even something less robotic.
reply
kennywinker
3 hours ago
[-]
That happened already, in typical ai fashion: blatant theft https://www.nbcnews.com/tech/scarlett-johansson-legal-action...
reply
nothinkjustai
28 minutes ago
[-]
How do you steal a frequency?
reply
kennywinker
20 minutes ago
[-]
Do you genuinely think a “frequency” is what makes a human voice recognizable?

That’s like using someone’s face in an app and then saying “how can you steal pixels?”

reply
esafak
3 hours ago
[-]
Unfortunately, one man's dystopia is another's utopia.
reply
rudedogg
5 minutes ago
[-]
This is fun. FYI, you don’t have to sign in/up with a Google account; I hesitated to download it for that reason.
reply
jeroenhd
6 hours ago
[-]
reply
davecahill
22 minutes ago
[-]
I really like Enclave for on-device models - looks like they're about to add Gemma 4 too: https://enclaveai.app/blog/2026/04/02/gemma-4-release-on-dev...
reply
satvikpendem
26 minutes ago
[-]
This is also on Android, and it has an option to use AICore with the NPU, which can run much faster than even the GPU models.
reply
nout
21 minutes ago
[-]
How do you get it running on Android?
reply
satvikpendem
16 minutes ago
[-]
It's the same app, Google AI edge gallery.
reply
allpratik
4 hours ago
[-]
Nice! Tried it on an iPhone 16 Pro and got 30 TPS from the Gemma-4-E2B-it model.

The phone got considerably hot while inferencing, though. It’s quite impressive performance, and I can’t wait to try it in one of my personal apps.

reply
dhbradshaw
3 hours ago
[-]
My son just started using the 2B on his Android. I mentioned that it was an impressively compact model, and the next thing I knew he had figured out how to run it on his inexpensive 2024 Motorola and was using it to practice reading and writing in foreign languages.
reply
TGower
5 hours ago
[-]
These new models are very impressive. There should be a massive speedup coming as well: AI Edge Gallery is running on the GPU, but the NPUs in recent high-end processors should be much faster. The A16 chip, for example (Macbook Neo and iPhone 16 series), has 35 TOPS of Neural Engine vs. 7 TFLOPS of GPU. Similar story for Qualcomm.
reply
api
5 hours ago
[-]
That’s nuts, actually, for such a low-power chip. Can’t wait to see the M-series version of that.

I’m sure very fast TPUs in desktops and phones are coming.

reply
zozbot234
5 hours ago
[-]
The Apple Silicon in the MacBook Neo is effectively a slimmed down version of M4, which is already out and has a very similar NPU (similar TFLOPS rating). It's worth noting however that the TFLOPS rating for Apple Neural Engine is somewhat artificial, since e.g. the "38 TFLOPS" in the M4 ANE are really 19 TFLOPS for FP16-only operation.
reply
hadrien01
6 hours ago
[-]
Is it me or does the App Store website look... fake? The text in the header ("Productiviteit", "Alleen voor iPhone") looks pixelated, like it was edited on Paint, the header background is flickering, the app icon and screenshots are very low quality, the title of the website is incomplete ("App Store voor iPho...")
reply
lateforwork
4 hours ago
[-]
Here's the US version of the same page: https://apps.apple.com/us/app/google-ai-edge-gallery/id67496...

The design quality is still poor. But that's the new Apple. Design is no longer one of their core strengths.

reply
giarc
6 hours ago
[-]
It's the Dutch version; see /nl/ in the URL.

If you just go to https://apps.apple.com/ it does look better, but I agree, still a bit "off".

reply
throwatdem12311
6 hours ago
[-]
Issues caused by a low effort localization?

On my iPhone it opens on the App Store app, so it looks fine to me.

reply
piperswe
6 hours ago
[-]
What browser are you using? I don't see any of this behavior on Firefox...
reply
hadrien01
6 hours ago
[-]
Firefox on Windows, but it looks about the same in Edge

Screenshot of the header: https://i.imgur.com/4abfGYF.png

reply
morpheuskafka
6 hours ago
[-]
It looks like there is some sort of glow effect on the text that isn't rendering right on your browser? It arguably doesn't have the best contrast, but seems to be as intended in Safari 26.3. Looks similar on Chrome macOS too: https://imgur.com/yq5PrKm.
reply
t-sauer
5 hours ago
[-]
Renders equally weird for me in Firefox on Windows 11. Firefox on macOS looks good, though.

Edit: Seems like mix-blend-mode: plus-lighter is bugged in Firefox on Windows https://jsfiddle.net/bjg24hk9/

reply
OJFord
3 hours ago
[-]
Firefox on Android: 'Google AI' (in app name) is clipped off the top; the Apple 'share' button is clipped on the bottom.
reply
j0hax
6 hours ago
[-]
Everything renders crystal clear with Firefox on GrapheneOS.
reply
ezfe
6 hours ago
[-]
Nothing weird on my side
reply
burnto
5 hours ago
[-]
My iPhone 13 can’t run most of these models. A decent local LLM is one of the few reasons I can imagine actually upgrading earlier than typically necessary.
reply
deckar01
5 hours ago
[-]
It doesn’t render Markdown or LaTeX. The scrolling is unusable during generation. E4B failed to correctly account for convection and conduction when reasoning about the effects of thermal radiation (31B was very good). After three questions in a session (with thinking), E4B went off the rails and started emitting nonsense fragments before the stated token limit was hit (unless it isn’t actually checking).
reply
carbocation
6 hours ago
[-]
It would be very helpful if the chat logs could (optionally) be retained.
reply
rotexo
2 hours ago
[-]
E4B is pretty good for extracting tables of items from receipt scans and inferring categories, wish this could be called from within a shortcut to just select a photo and add the extracted table to the clipboard
reply
dwa3592
5 hours ago
[-]
I think with this, Google starts a new race: best local model that runs on phones.
reply
dwa3592
5 hours ago
[-]
I wonder why the cutoff date for 3n-E4B-it is Oct 2023. That's really far in the past.
reply
satvikpendem
1 hour ago
[-]
Because that's Gemma 3, not 4.
reply
XCSme
4 hours ago
[-]
Gemma 4 is great: https://aibenchy.com/compare/google-gemma-4-31b-it-medium/go...

I assume it is the 26B A4B one, if it runs locally?

reply
adrian17
3 hours ago
[-]
No, only E2B and E4B.
reply
neurostimulant
3 hours ago
[-]
I was able to sweet-talk the gemma-4-e2b-it model on an iPhone 15 into solving an hCaptcha screenshot. This small model is surprisingly capable!
reply
thot_experiment
3 hours ago
[-]
Gemma 4 E4B is an incredible model for doing all the home assistant stuff I normally used Qwen3.5 35B-A4B + Whisper for, while leaving me with way more empty VRAM for other bullshit. It works as a drop-in replacement for all of my "turn the lights off" or "when's the next train" type queries and does a good job of tool use. This is really the first time vramlets get a model that's reliably useful day to day, locally.

I'm curious/worried about the audio capability. I'm still using Whisper, since the audio support hasn't landed in llama.cpp, and I'm not excited enough to temporarily rewire my stuff to use vLLM or whatever their reference impl is. The vision capabilities of Gemma are notably worse than Qwen's so far (could be impl-specific issues? even the big MoE and dense Gemma are much worse); hopefully the audio is at least on par with Whisper medium.

reply
yalogin
1 hour ago
[-]
Are these models open source? If so this is Google’s attempt to collect user data from their models.
reply
mc7alazoun
2 hours ago
[-]
Would it work locally on a Mac Pro M4 with 24GB? If so, I'd really appreciate a step-by-step guide.
reply
weberer
1 hour ago
[-]
These E2B and E4B models are very small so that they can fit on phones with around 8GB of RAM. You can get away with a much larger model. Just run:

    brew install ollama 

    ollama run gemma4:26b-a4b-it-q4_K_M
reply
tithos
1 hour ago
[-]
Most of the models are not available. I'm guessing they will become available soon enough... at least I hope.
reply
Waterluvian
2 hours ago
[-]
I see a phenomenal opportunity for old phone re-use by arraying them in some dock and making them be my "home AI."
reply
rickdg
5 hours ago
[-]
How do these compare to Apple's Foundation Models, btw?
reply
simonw
4 hours ago
[-]
So much better. Hard to quantify, but even the small Gemma 4 models have that feels-like-ChatGPT magic that Apple's models are lacking.
reply
snarkyturtle
4 hours ago
[-]
AFM had a 4096 token context window and this can be configured to have a 32k+ token context window, for one.
reply
garff
4 hours ago
[-]
How new of an iPhone model is needed?
reply
__natty__
5 hours ago
[-]
That's a great project! I just wondered whether Google would have a problem with you using their trademark
reply
tech234a
4 hours ago
[-]
This is an app published by Google itself
reply
dzhiurgis
4 hours ago
[-]
I recently got to a first practical use of it. I was on a plane, filling out a landing card (what a silly thing these are). I looked up my hotel address using a Qwen model on my iPhone 16 Pro. It was accurate. I was quite impressed.

After some back and forth the chat app started to crash though, so YMMV.

reply
beeflet
5 hours ago
[-]
reply
lzzqrd
2 hours ago
[-]
Could you clarify what you mean by 'open-ended' in this context, since both initiatives are essentially open-source?
reply
lol8675309
3 hours ago
[-]
It’s gotta be free!?!? Right!?!? Oh oh wait
reply