Google Gemma 4 Runs Natively on iPhone with Full Offline AI Inference
198 points
11 hours ago
| 24 comments
| gizmoweek.com
veunes
54 minutes ago
[-]
I noticed the inference is routed through the GPU rather than the Apple Neural Engine. Google's engineers likely gave up on trying to compile custom attention kernels for Apple's proprietary tensor blocks. While Metal is predictable and easy to port to, it drains the battery much faster than a dedicated NPU would. Until they rewrite the backend for the ANE, this is a flashy tech demo rather than a production-ready tool
reply
tjoff
32 minutes ago
[-]
I'm certainly fine with it drawing some power.

Running background processes might motivate the use of the NPU more, but that doesn't exactly feel like a pressing need. Actively listening to me 24/7 and analyzing the data isn't a use case I'm eager to explore, given the lack of control we have over our own devices.

reply
the_pwner224
20 minutes ago
[-]
> Google's engineers likely gave up on trying to compile custom attention kernels for Apple's proprietary tensor blocks.

The AI Edge Gallery app on Android (which is the officially recommended way to try out Gemma on phones) uses the GPU (it lacks NPU support) even on first-party Pixel phones. So it's less "they didn't want to interface with Apple's proprietary tensor blocks" and more that they just didn't give a f in general. A truly baffling decision.

reply
temp7000
4 hours ago
[-]
Is it me, or does the article sound like LLM output?

The pattern "It's not mere X — it's Y" occurs like 4 times in the text :v

reply
Andrex
2 hours ago
[-]
I can't believe you'd impugn the high moral standards of "gizmoweek dot com".
reply
odo1242
2 minutes ago
[-]
It does in fact sound like LLM output
reply
BeetleB
2 hours ago
[-]
I don't care if it's written by an LLM.

The problem with the article is the complete lack of details. No benchmarks on the iPhone-capable models. No details whatsoever.

Human or LLM - the article is a whole lot of nothing.

reply
doliveira
2 hours ago
[-]
Funnily enough, to me these aphorisms (?) sound almost like the replicant test in Blade Runner. Like they're the basic unit of "nudging"
reply
veunes
49 minutes ago
[-]
This article is all fluff because real benchmarks would be bad marketing. If they mentioned that a 4B model on an iPhone 16 drains 15% of the battery on a single long prompt and triggers hard thermal throttling after 20 seconds, nobody would be clicking on headlines about "commercial viability" fwiw
reply
Domenic_S
24 minutes ago
[-]
I ran several Gemma 4 quants on my 24GB Mac mini, and with proper context size tuning they're quick enough I guess, but I would really love to see them working well on an iPhone with 2-3GB of RAM...
reply
caminante
3 hours ago
[-]
Ran it through Claude, Grok, whatever... for me, they all flagged issues with these content farms (no sources, punchy phrases with repetition, ...).

My favorite: couldn't even prove the author is a real person. They all found no record!

reply
itissid
3 hours ago
[-]
As someone said, we live in a strange but amazing era: it has never been easier to be deceived, but it's _also_ never been easier to uncover said deception, especially on the internet.
reply
walthamstow
2 hours ago
[-]
It's much faster and simpler to assume everything on the internet is crooked
reply
ryandvm
2 hours ago
[-]
Or at least think you've uncovered deception. It's not clear to me yet that any of these "AI detectors" are reliable, and if they are, it's just an arms race.
reply
figmert
4 hours ago
[-]
> :v

I guess I found the millennial. I haven't seen that in so long!

reply
Den_VR
3 hours ago
[-]
:<
reply
neals
2 hours ago
[-]
:')
reply
Andrex
2 hours ago
[-]
>_>
reply
xiconfjs
34 minutes ago
[-]
\o/
reply
yangm97
2 hours ago
[-]
Analog emojis FTW
reply
Melatonic
35 minutes ago
[-]
¯\_(ツ)_/¯
reply
altruios
1 hour ago
[-]
It is like the AI is training us to avoid certain language patterns. I rebel at weak language being held hostage: strong language is next.
reply
Melatonic
37 minutes ago
[-]
The mighty semicolon prepares for its return!
reply
wtyvn
19 minutes ago
[-]
Smells like slop to me, looks like the site exists solely to garner search hits.
reply
mtremsal
4 hours ago
[-]
An AI slop pattern so widespread it’s now referred to as “it’s not pee pee it’s poo poo”.
reply
lynndotpy
40 minutes ago
[-]
It's not just a widespread pattern –––––––––––––––– it's a sign of things to come.
reply
Domenic_S
23 minutes ago
[-]
You didn't just nail it ------------ you cut to the core of the issue.
reply
Cider9986
2 hours ago
[-]
I haven't heard that—that's good.
reply
kbouw
4 hours ago
[-]
You would be correct. Ran the article through GPTZero, 100% AI.
reply
subscribed
3 hours ago
[-]
These detectors are a scam falsely flagging non-native English speakers: https://plagiarismcheckerai.app/ai-detector-false-positives-...

At this point relying on their judgement is beyond folly.

reply
cubefox
3 hours ago
[-]
It's both ironic and confusing that this website itself promotes an AI detector.
reply
71bw
4 hours ago
[-]
Would not trust any of these tools in the slightest.
reply
devmor
3 hours ago
[-]
AI detectors that use text as a basis are not real. It is fundamentally impossible for them to exist.
reply
HarHarVeryFunny
2 hours ago
[-]
Huh?

LLM output doesn't have the variety of human output, since they operate in a fixed fashion - statistical inference followed by formulaic sampling.

Additionally, the statistics used by LLMs are going to be similar across different LLMs, since at scale it's just "the statistics of the internet".

Human output has much more variety, partly because we're individuals with our own reading/writing histories (which we're drawing upon when writing), and partly because we're not so formulaic in the way we generate. Individuals have their own writing styles and vocabulary, and one can identify specific authors to a reasonable degree of accuracy based on this.

It's a bit like detecting cheating in a chess tournament. If an unusually high percentage of a player's moves are optimal computer moves, then there is a high likelihood that they were computer generated. Computers and humans don't pick moves in the same way, and humans don't have the computational power to always find "optimal" moves.

Similarly with the "AI detectors" used to detect if kids are using AI to write their homework essays, or to detect if blog posts are AI generated ... if an unusually high percentage of words are predictable from what came before (the way LLMs work), and if those statistics match those of an LLM, then there is an extremely high chance that it was written by an LLM.

Can you ever be 100% sure? Maybe not, but in reality human-written text is never going to have such statistical regularity, such a strong LLM signature, that an AI detector gives it more than 10-20% confidence of being AI. So when the detector says it's 80%+ confident something was AI generated, that effectively means 100%. There is of course also content that is part human, part AI (a human used an LLM to fix up their writing), which may score somewhere in the middle.
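
The core signal is easy to sketch, by the way. Here's a toy perplexity scorer using Hugging Face transformers - gpt2 is just a stand-in reference model; real detectors use their own models plus extra features like burstiness:

  import torch
  from transformers import AutoTokenizer, AutoModelForCausalLM

  # Low perplexity under a reference LM = the kind of statistical
  # regularity an "AI detector" is looking for.
  name = "gpt2"  # stand-in scorer, not what any real detector ships
  tok = AutoTokenizer.from_pretrained(name)
  model = AutoModelForCausalLM.from_pretrained(name).eval()

  def perplexity(text: str) -> float:
      ids = tok(text, return_tensors="pt").input_ids
      with torch.no_grad():
          # labels=ids makes the model return mean next-token cross-entropy
          loss = model(ids, labels=ids).loss
      return float(torch.exp(loss))

Score a handful of known-human and known-LLM samples and the gap is visible; the hard part is that the decision threshold sits exactly where the false positives live.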

reply
ben_w
1 hour ago
[-]
> LLM output doesn't have the variety of human output, since they operate in fixed fashion - statistical inference followed by formulaic sampling.

This is the wrong thing to look at; your chess analogy is much stronger, and the detection method is similar (if you can figure out a prompt that generates something close to the content, it almost certainly isn't of human origin).

But as to why the thing I'm quoting doesn't work: if you took, say, webcomic author Darren Gav Bleuel, put him through a sci-fi mass-duplication incident to make 950 million of him, and had them all talking and writing all over the internet, people would very quickly learn to recognise the style, which would have very little variety because they'd all be forks of the same person.

Indeed, LLMs are very good at presenting styles other than their defaults, better at this than most humans. What gives LLMs away is that (1) very few people bother to ask them to act other than their defaults, and (2) all the different models, being trained in similar ways on similar data with similar architectures, are inherently similar to each other.

reply
newsoftheday
26 minutes ago
[-]
What if the prompt includes, "Produce output that doesn't sound like an AI generated it."?
reply
blixt
2 hours ago
[-]
I made this offline pocket vibe coder using Gemma 4 on an iPhone (it works offline once the model is downloaded). It can technically run the 4B model, but it will default to 2B because of memory constraints.

https://github.com/blixt/pucky

It writes a single TypeScript file (I tried multiple files, but embedded Gemma 4 is just not smart enough) and compiles the code with oxc.

You need to build it yourself in Xcode because this probably wouldn't survive the App Store review process. Once you run it, there are two starting points included (React Native and Three.js). The UX is a bit obscure, but you can edge-swipe left/right to switch between views.

reply
mandeepj
1 hour ago
[-]
You might find this useful - https://news.ycombinator.com/item?id=45129160

I think React Native could be swapped out for Swift

reply
juancn
12 minutes ago
[-]
Gemma 4 is still power-hungry, since it tends to activate pretty much every weight.

qwen3-coder-next uses a lot less, since it seems to activate only ~3B parameters at a time.

My guess is that this is still close to a tech demo, and a lot of performance is left on the table.

reply
rich_sasha
1 hour ago
[-]
Offline or not, I'm sure Google uploads every keystroke, phone orientation, photo, WiFi endpoint, and your shoe size when you interact with it. To enhance your experience.
reply
adrian17
29 minutes ago
[-]
They released the source (well, currently only the Android version) at https://github.com/google-ai-edge/gallery .

At a glance, I see they do gather analytics about how much the app is used (model downloads, model invocations etc) without message content, pretty much just the model used.

reply
tsycho
1 hour ago
[-]
> ...your shoe size

The funny thing is that a lot of Google's internal training content uses an imaginary product "gShoe", and discusses the privacy implications of data that such a shoe might collect :D

reply
declan_roberts
54 minutes ago
[-]
Apple is paying Google $1 billion for an AI strategy that runs on device. We're seeing a preview of what that will look like.
reply
codybontecou
5 hours ago
[-]
Unfortunately Apple appears to be blocking the use of these LLMs within apps on their App Store. I've been trying to ship an app that contains local LLMs and have hit a brick wall with guideline 2.5.2
reply
Gareth321
4 hours ago
[-]
I think Apple will become increasingly draconian about LLMs. Very soon people won't need to buy many of their apps. They can just make them. This threatens Apple's entire business model.
reply
raw_anon_1111
3 hours ago
[-]
It came out in the Epic trial that 90% of App Store revenue comes from in-app purchases of loot boxes and other pay-to-win mechanics.

Apple doesn’t care about revenue from a random TODO app.

reply
mrkpdl
4 hours ago
[-]
But… why would I put the effort into getting an LLM to make me an app when there's an existing app that I don't have to maintain? I don't want to have to make every app I use.
reply
orrito
4 hours ago
[-]
There's a huge difference between local apps that cost a one-time $3-10 and apps that ask for a subscription of $5-20 per month. The first category will remain and might become more popular as quality increases; the second category will be obliterated, as the value isn't there even if all the buyers are rich. The second group takes up a much larger part of the pie than the first, though, so Apple's revenue will decrease.
reply
davidmurdoch
2 hours ago
[-]
All apps that don't have a tangible component, legal protection (like music, TV, movies), or a personality behind them will trend towards $0.
reply
StilesCrisis
4 hours ago
[-]
Apple's business model isn't really affected by 2% of its users choosing not to spend $100/yr on the App Store. That isn't even a blip on the radar.

A kid playing Roblox can spend more than that in a good weekend.

reply
borborigmus
4 hours ago
[-]
VibeOS. It’s just an LLM from which all other userspace is vibed.
reply
username223
26 minutes ago
[-]
vibe-ls(1) - often list directory contents, but maybe do something else.

Where can I get this amazing technology?

reply
Forgeties79
4 hours ago
[-]
I guess I am not seeing why I would want to abandon most (if any) of my simple, small, purpose-built apps that always do exactly what I want, in favor of a private company's ever-changing LLM that will approximate what I'm asking and approximate its response while using far more resources.

I’m sure there are things on my phone it could replace (though I struggle to think of them) but there are plenty it can’t. My black magic camera app, web browsers, local send, libby/hoopla…

I can't really think of any apps I use every day - or every week - that an LLM would replace. I'm not coding on my smartphone, and aside from that, an LLM is basically a more complex, somewhat inconsistent search engine experience right now for most people. Siri didn't replace any of my apps, for instance. Why would ChatGPT?

TL;DR: what apps would an LLM replace on my iPhone?

reply
CubsFan1060
4 hours ago
[-]
Though of course Apple's rules aren't always consistent, I have 2 separate apps currently on my phone that can/are running this (Google's Edge Gallery and Locally AI)
reply
codybontecou
40 minutes ago
[-]
They've been slowly cutting them off from updates and/or taking them off the App Store entirely.

See Anywhere and Replit. Anywhere was the #1 or #2 app and was taken off the App Store entirely before being put back on and then taken off again.

Last I checked, Replit hasn't received an update on the iOS App Store in over two months because reviews keep denying them.

reply
cyanydeez
4 hours ago
[-]
Can't be just a SaaSpocalypse. LLMs with the right harness could obliterate many of the TODO-style apps with a general assistant.

But it's more likely just walled garden + security theatre that'll keep them from allowing outside apps.

reply
varispeed
4 hours ago
[-]
Wouldn't trust AI to run a TODO app, especially weak models. They can hallucinate tasks, forget to remind you, etc.
reply
tapvt
3 hours ago
[-]
LLMs are stateless. But given an actual database of task-shaped items and some work, I could see the potential.

With a canonical source of truth, and set input/output expectations, the potential blast radius is quite small.
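
A toy sketch of that blast-radius idea, with the model stubbed out and SQLite as the canonical store (the JSON contract and function names here are made up for illustration):

  import json
  import sqlite3

  db = sqlite3.connect(":memory:")
  db.execute("CREATE TABLE tasks (id INTEGER PRIMARY KEY,"
             " title TEXT, done INTEGER DEFAULT 0)")

  ALLOWED = {"add_task", "complete_task"}

  def call_model(user_msg: str) -> str:
      # stub: swap in an on-device LLM prompted to answer only
      # with JSON like {"op": "add_task", "title": "buy milk"}
      return json.dumps({"op": "add_task", "title": user_msg})

  def handle(user_msg: str) -> None:
      op = json.loads(call_model(user_msg))
      if op.get("op") not in ALLOWED:
          raise ValueError("rejected op")  # model proposes, never executes
      if op["op"] == "add_task":
          db.execute("INSERT INTO tasks (title) VALUES (?)", (op["title"],))
      else:
          db.execute("UPDATE tasks SET done = 1 WHERE id = ?", (int(op["id"]),))
      db.commit()

  handle("buy milk")
  print(db.execute("SELECT * FROM tasks").fetchall())

The worst a hallucinated response can do here is add a junk row you can see and delete.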

reply
wpm
2 hours ago
[-]
And the end result is...? What? A todo app that takes 16 GB of RAM?
reply
bigyabai
2 hours ago
[-]
Nothing that Mac and Windows users aren't already used to.
reply
Forgeties79
37 minutes ago
[-]
It's tempting to be flippant about macOS/Windows, but in all seriousness, the resources required for an LLM to do the job of a typical lighter-weight app are a serious consideration. No amount of bloat matches what an LLM needs.
reply
pj_mukh
3 hours ago
[-]
Is this an issue with Cactus compute stuff as well?
reply
MillionOClock
4 hours ago
[-]
What is your app doing? Just LLM inference?
reply
codybontecou
39 minutes ago
[-]
It's a custom agent harness with on-device models and the ability to swap between models.

Basically, a "toy" app to showcase where we are with coding agents on-device.

reply
amelius
1 hour ago
[-]
Seriously, how do people put up with being nannied by Apple?

Come on folks, their IT hardware may be nice but supporting them is not worth it.

reply
saagarjha
4 hours ago
[-]
Use of the LLMs to do what?
reply
karimf
5 hours ago
[-]
Related: Gemma 4 on iPhone (254 comments) - https://news.ycombinator.com/item?id=47652561
reply
redbell
5 hours ago
[-]
Another related submission from 22 days ago: iPhone 17 Pro Demonstrated Running a 400B LLM (+700pts, +300cmts): https://news.ycombinator.com/item?id=47490070
reply
zozbot234
4 hours ago
[-]
That's very impressive, but it's streaming in weights from flash storage. That's not really viable in a mobile context; it will use way too much power. Smaller models are much more applicable to typical use, perhaps with mid-sized models (like the Gemma 4 26A4B) using weight offload from SSD for rare uses involving slower "pro" inference.
reply
mfro
3 hours ago
[-]
Strangely, it is super fast on my 16 Plus, but with longer messages it can slow down a LOT, and not because of thermal throttling. I wish I could see some diagnostic data.
reply
steve-atx-7600
2 hours ago
[-]
Inference from an LLM is O(tokens^2): each new token attends to every token before it, so generating n tokens costs on the order of 1 + 2 + ... + n = n(n+1)/2 attention steps, even with a KV cache.
reply
halJordan
28 minutes ago
[-]
Only in the naive implementations of attention
reply
conception
3 hours ago
[-]
I'm pretty excited about the Edge Gallery iOS app with Gemma 4 on it, but it seems like they hobbled it: no access to intents, and you have to write custom plugins for web search, etc. Does anyone have a favorite way to run these usefully? ChatMCP works pretty well but only supports models via API.
reply
Chrisszz
3 hours ago
[-]
I just installed Google AI Edge Gallery on my iPhone 16 Pro. Results of the first benchmark on GPU (prefill tokens = 256, decode tokens = 256, 3 runs): prefill speed 231 t/s, decode speed 16 t/s, time to first token 1.16 s, first init time 20 s.
reply
declan_roberts
55 minutes ago
[-]
I really hope this is a preview of the Siri replacement that Google is creating, because these models are fantastic for their size!
reply
halJordan
34 minutes ago
[-]
Google is not creating a replacement for anything.

Apple is getting a base Gemini model (not a Gemma), and it will run on Apple's Private Cloud Compute. Apple's foundation models will remain the on-device models.

reply
jimbokun
2 hours ago
[-]
I feel like the UX and API design here are very underexplored.

What are the possibilities of an Android or iOS device where the OS is centered around a locally running LLM with an API for accessing it from apps, along with tools the LLM can call to access data from locally running apps? What’s the equivalent of the original Mac OS?

Do apps disappear and there’s just a running dialog with the LLM generating graphical displays as needed on demand?

reply
usmanshaikh06
5 hours ago
[-]
ESET is blocking this site saying:

Threat found: This web page may contain dangerous content that can provide remote access to an infected device, leak sensitive data from the device, or harm the targeted device. Threat: JS/Agent.RDW trojan

reply
zache6
3 hours ago
[-]
Same on my device.
reply
mistic92
6 hours ago
[-]
It runs on Android too, with AI Core or even with llama.cpp
reply
srslyTrying2hlp
3 hours ago
[-]
It's more impressive when Apple does it because they are so far behind.

I remember being excited when Apple got widgets because then I could add my 'Next Alarm time' to my home screen. Made my company work phone usable on trips.

I wonder when they are going to get NVIDIA cards or CUDA? Then they could actually run LLMs and not just trick people into buying it under the 30-year-old idea of "unified memory".

reply
bigyabai
3 hours ago
[-]
It's kinda funny that macOS supported CUDA when it was a tech demo, but then ideologically objected to it once it became a $3 trillion business.

They've had to be dragged kicking and screaming away from the NPU model, only to admit that GPGPU tech was the right choice.

reply
srslyTrying2hlp
2 hours ago
[-]
Yeah, I remember that. Very Apple of them.

'Cool demo' -> doesn't convert to tangible things.

Won't attempt to compete with companies better than them, but go their own route. "Oh look, it consumes low power!" (Things no one cared about.)

They are the Nintendo of tech.

reply
deckar01
2 hours ago
[-]
They still don’t render the markdown (or LaTeX) it outputs.
reply
pabs3
5 hours ago
[-]
> edge AI deployment

Isn't the "edge" meant to be computing near the user, but not on their devices?

reply
stingraycharles
5 hours ago
[-]
No, it isn't. This is about as "edge" as AI gets.

In a general sense, edge just means moving the computation to the user rather than to a central cloud (although the two aren't mutually exclusive, e.g. Cloudflare Workers).

reply
davidmurdoch
2 hours ago
[-]
For sure. 1000%. Anyone disagreeing with this has lost their marbles.

For those that have lost their marbles: sure, people use words incorrectly, but that doesn't mean we all have to use those words incorrectly.

In compute vernacular, "edge" means it's distributed in a way that the compute is close to the user (the "user" here is the device, not a person); "on device" means the compute is on the device. They do not mean the same thing.

reply
pgt
5 hours ago
[-]
Your device is the ultimate edge. The next frontier would be running models on your wetware.
reply
elcritch
5 hours ago
[-]
Not just running it on your wetware, but charging you for it.

Can't wait until AI companies go from mimicking human thoughts to figuring out how to license those thoughts. ;)

reply
acters
5 hours ago
[-]
Man, can't wait for AI in my brain. And then intelligence will be pay-to-win.
reply
hhh
5 hours ago
[-]
It depends, because edge is a meaningless term and people make it mean whatever they want. In 2022, we set up a call with a vendor for 'edge' AI. Their edge meant something like 5 kW; our edge was, in the best case, a single Raspberry Pi.
reply
DoctorOetker
3 hours ago
[-]
Does anyone know of a decent but low-memory or low-parameter-count multilingual model (as many languages as possible) that can faithfully produce a detailed IPA transcription, given a word in a sentence in some language?

I want to test a hypothesis for "uploading" neural network knowledge to a user's brain, by a reaction-speed game.

reply
estimator7292
3 hours ago
[-]
Espeak-ng.

You don't need a neural network. Traditional NLP is far better at this task. The keyword you're looking for is "phonemizer".
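
A minimal sketch with the Python phonemizer package, which wraps espeak-ng (language codes and the level of IPA detail vary by backend):

  from phonemizer import phonemize

  # espeak-ng's backend covers 100+ languages and outputs IPA
  words = ["thought", "through", "tough"]
  ipa = phonemize(words, language="en-us", backend="espeak", strip=True)
  print(list(zip(words, ipa)))

Or straight from the shell: espeak-ng -q --ipa -v en-us "thought"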

reply
DoctorOetker
2 hours ago
[-]
Can espeak-ng provide the IPA transcription, or does it produce sound?

I'm surprised traditional NLP is better than ML models for this task; can you point me to a benchmark analysis showing that non-neural espeak-ng beats ML models?

Also, I asked for a neural model for another reason: I still want semantic knowledge present, not just pronunciation. But before I use myself as a test subject, I want to make sure I get the proper pronunciation in case the highly speculative "uploading game" works... I don't want to systematically mis-train myself on pronunciation early on...

reply
bearjaws
4 hours ago
[-]
Would love to see a showdown of performance on iPhone vs. Google's Tensor G5; in my experience the G5 is two full generations behind performance-wise.
reply
the_inspector
3 hours ago
[-]
You are referring to the edge models, right? E2B and E4B, not the bigger ones (26B, 31B)...
reply
logicallee
5 hours ago
[-]
For those who would like an example of its output: I'm currently creating a small, free (CC0, public domain) encyclopedia (just a couple of thousand entries) of core concepts in Biology and Health Sciences, Physical Sciences, and Technology. Each entry is written entirely by Gemma 4:e4b (the 10 GB model). I believe this may be slightly larger than the model that runs locally on phones, so perhaps this model is slightly better, but the output is similar. Here is an example entry:

https://pastebin.com/ZfSKmfWp

Seems pretty good to me!
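
If anyone wants to reproduce the setup, generation is a couple of lines with the ollama Python client. The model tag below is my guess at what a local install would call the e4b build, so treat it as a placeholder:

  import ollama

  # hypothetical tag; substitute whatever "ollama list" shows
  resp = ollama.generate(
      model="gemma4:e4b",
      prompt="Write a concise, factual encyclopedia entry on osmosis "
             "for a general audience, about 300 words.",
  )
  print(resp["response"])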

reply
everyday7732
4 hours ago
[-]
What's your goal? Do you have a project you want the encyclopedia for?
reply
grimmai143
3 hours ago
[-]
Do you know of a way of running these models on Android? Also, what does the thermal throttling look like?
reply
robmccoll
3 hours ago
[-]
Edge Gallery by Research at Google
reply
ValleZ
5 hours ago
[-]
There are many apps to run local LLMs on both iOS & Android
reply
srslyTrying2hlp
3 hours ago
[-]
Once you realize Apple and Nintendo have 'understandings' with media outlets, you'll see this is just marketing.
reply
andsoitis
10 hours ago
[-]
Is there a comparison of it running on iPhone vs. Android phones?
reply
jeroenhd
4 hours ago
[-]
Running Gemma-4-E2B-it on an iPhone 15 (can't go higher than that due to RAM limitations) versus a Pixel 9 Pro, I don't really notice much of a difference between the two. The Pixel is a bit faster, but also a year more recent.

The model itself works absolutely fine, though the iPhone thermal throttles at some point which really reduces the token generation speed. When I asked it to write me a business plan for a fish farm in the Nevada desert, it slowed down after a couple thousand tokens, whereas the Pixel seems to just keep going.

reply
veunes
33 minutes ago
[-]
It’s likely a llama.cpp backend issue. On the Pixel, inference hits QNN or a well-optimized Vulkan path that distributes the SoC load properly. On the iPhone, everything is shoved through Metal, which maxes out the GPU immediately and causes instant overheating. Until Apple opens up low-level NPU access to third-party models, iPhones will just keep melting on long-context prompts
reply
lrvick
6 hours ago
[-]
You can run Android on just about anything so it boils down to Linux GPU benchmarks.
reply
fsiefken
5 hours ago
[-]
That doesn't answer the question; I'm curious too. I think there's a speed and battery advantage for the A19 Pro chip compared to the Snapdragon 8 Elite Gen 5, but to know for sure one has to run the same model in the most efficient way on both machines (flagship iOS and Android).
reply
srslyTrying2hlp
3 hours ago
[-]
I don't think you should have been downvoted. Processing and memory are the only things that matter. (Unless we are being so nontechnical now that we just say things like "Pixel 9 is great"...)
reply
bossyTeacher
6 hours ago
[-]
Is the output coherent, though? I have yet to see a local model running on consumer-grade hardware that is actually useful.
reply
the_pwner224
4 hours ago
[-]
I have a 128 GB Strix Halo tablet (same as the other commenter here with the Framework Desktop). I'm using the larger Gemma 4 26B-A4B model (only 28 GB @ Q8) and it's been working great and runs very fast.

It's a 100% replacement for free ChatGPT/Gemini.

Compared to the paid pro/thinking models... Gemma does have reasoning, and I have used the reasoning mode for some tax & legal/accounting advice recently as well as other misc problems. It's worked well for that, but I haven't tried any real difficult tasks. From what I've heard re. agentic coding, the open weight models are ~18-24 months behind Anthropic & Google's SOTA.

Qwen 3.5 122B-A10B should just fit into 128 GB at Q4/5 and may be a bit smarter. There's apparently also a similar-sized Gemma 4 model, but they haven't released it yet; the 26B is the largest released so far.

reply
veunes
24 minutes ago
[-]
Sure, 26B models on beefy desktop silicon are finally nipping at the heels of commercial APIs, but this is a mobile thread. On a phone with 8GB of RAM and passive cooling, your tokens per second (t/s) are going to fall off a cliff after the first minute of sustained compute
reply
zozbot234
4 hours ago
[-]
There's a 31B dense model in the Gemma 4 series that's obviously going to be smarter (though a whole lot slower) than the MoE 26A4B.
reply
the_pwner224
4 hours ago
[-]
I tried it and it was unusably slow at ~5-6 TPS. 26A4B gets close to 40 TPS which is faster than you can read, and still pretty quick with reasoning enabled.
reply
RobMurray
1 hour ago
[-]
I'm working on a visual description app for the blind. Even Gemma 4 E2B can give very useful image descriptions while at the same time taking questions as audio. It's also much faster than most of the currently popular cloud-based apps like Be My Eyes.
reply
jeroenhd
5 hours ago
[-]
Google's models work quite well on my Android phone. I haven't found a use case beyond generating shitposts, but the model does its job pretty well. It's not exactly ChatGPT, but minor things like "alter the tone of this email to make it more professional" work like a charm.

You need a relatively beefy phone to run this stuff on large amounts of text, though, and you can't have every app run it because your battery wouldn't last more than an hour.

I think the real use case for apps is going to be something more like tiny, purpose-trained models, like the 270M models Google wants people to train and use: https://developers.googleblog.com/on-device-function-calling... With these, you can set up somewhat intelligent situational automation without having to work out logic trees and edge cases beforehand.

reply
lrvick
6 hours ago
[-]
I run qwen3.5 122b on a Framework Desktop at 35 t/s as a daily driver for security, OS/systems, and software engineering work.

Never paid an LLM provider and I have no reason to ever start.

reply
mixermachine
5 hours ago
[-]
What spec of Framework Desktop do you run this on?
reply
the_pwner224
4 hours ago
[-]
If you're looking to buy new hardware, also consider the Asus Rog Flow Z13. It has the same chip as the Framework desktop and is ~20% cheaper ($2,700) for the 128 GB spec while coming in a tablet/laptop form factor. It's capped at a slightly lower power but Strix Halo scales down very well in TDP - I never even use the max power mode on my Z13 because you don't really get any extra perf.

The only downside is that I suspect the Framework would be a fair bit quieter under load (not that this thing is abnormally loud). You're also limited to a single M.2 2230 internal SSD slot (I believe Micron recently launched a 4 TB model, but generally you'll max out at 2 TB without using an external enclosure).

I don't have anything against the Framework, I'm sure it's a great machine, but the Z13 is an incredible portable all-in-one device that can handle everything from general PC use to gaming to tablet/entertainment to LLMs & high perf.

reply
breisa
5 hours ago
[-]
There is only one chip option, and for this model you need the configuration with 128 GiB of RAM.
reply
fsiefken
5 hours ago
[-]
Qwen3.5-9b and Qwen3.5-27b are pretty coherent on my 24 GB Android phone
reply
dpacmittal
5 hours ago
[-]
Which Android phone has 24 GB?
reply
jfoster
6 hours ago
[-]
It can write (some) code that works. Just roughly guessing from my use, but I think of it as being a bit like ChatGPT circa-2024 in terms of capability & speed.

Disappointing if you compare it to anything else from 2026, but fairly impressive for something that can run locally at an OK speed.

reply
logicallee
5 hours ago
[-]
It's highly coherent (see my other comment for an example of its text output) and yes, it's useful. I am starting to use Gemma 4:e4b as my daily driver for simple commands it definitely knows - things that are too simple to use ChatGPT for. It is also able to work through moderately difficult coding tasks. If you want to see it in action, I posted a video about it here[1] (the 10 GB one is at the 2 minute mark, and the 20 GB one says hello at 5 minutes 45 seconds into the video). You can see its speed and output on simple consumer-grade hardware, in this case a Mac Mini M4 with 24 GB of RAM.

[1] https://youtube.com/live/G5OVcKO70ns

reply
a_paddy
6 hours ago
[-]
I can try it for you
reply