Social media can be intellectually stimulating and educational, but it's also easy to get sucked into ideological sniping and flamewars, even if you didn't go looking for it. The emotional and intellectual energy spent flaming strangers on the Internet is a complete waste of human capital.
With an API like this, I assume you could have a browser extension that could de-snarkify content before showing it to you. You could ask the LLM to preserve all factual content from the post, but to de-claw any aggressive or snarky language. If you really wanted to have fun, you could ask it to turn anything written in an aggressive tone into something that sounds absurd or incompetent, so that the more aggressive the post, the more it would make the author look silly.
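As a rough sketch of what such an extension's content script might do, assuming the Prompt API shape Chrome has been shipping (LanguageModel.create / session.prompt), which has shifted a bit between releases:

```ts
// Rough sketch of a de-snarkify pass over a post, assuming the proposed
// Prompt API shape (LanguageModel.create / session.prompt). Exact names
// have changed between Chrome releases, so treat this as illustrative.
async function deSnarkify(postText: string): Promise<string> {
  if (!("LanguageModel" in self)) return postText; // fall back to the original

  const session = await LanguageModel.create({
    initialPrompts: [{
      role: "system",
      content: "Rewrite the user's text. Keep every factual claim, " +
               "but remove snark, insults, and aggressive phrasing.",
    }],
  });
  const rewritten = await session.prompt(postText);
  session.destroy();
  return rewritten;
}
```

The extension would then swap the rewritten text into the page before the reader ever sees the original.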
This could have a double benefit. For the reader, it insulates them from the personal attacks of random strangers on the Internet. Don't get me wrong, there is a time and a place for real, charged arguments about important issues that affect us all. But there is little to be gained from having those fights with strangers; on the contrary, I think it poisons the body politic when strangers are screaming at each other.
For the writer, it takes away any incentive to be snarky or rude. If other people filter their content this way, there's no point in trying to be mean to them, and no "race to the bottom" for who can be more nasty.
I want the option to engage with the substance of new developments in the world, technology, etc. without the drama. I don't want to be drawn into the drama of strangers (who could, for all I know, just be bots or ragebaiting AIs).
If I want drama, there's plenty of it on TV, or I could talk to my friends about what is going on with people I actually know.
The anti-pattern, in my mind, is logging on to engage with substantive content and to be inadvertently drawn into flamewars with strangers.
Sure, you might say this sort of thing is boiling flavor out of your food, but... boiling the bacteria out of what you consume isn't a bad thing.
But... it's the type of idea that is unpredictable once it comes into contact with reality. If it works, it probably works very differently from how the initial idea imagines it working.
I see the merit in such a proposal. It's the linguistic equivalent to boiling the food you consume, instead of eating it raw with all the associated bad stuff.
The problem is, as you said, that this plan is unlikely to be as rosy as it's portrayed and probably has a lot of drawbacks in real life.
Interesting to think about and explore, though.
I mean... you would basically be taking a complex thing, transforming it, and reconstructing it. What we want out of social media isn't a simple, legible function. The positives? You'd have to discover them.
If someone starts building with the initial idea above, my guess is that they'd end up with some sort of custom feed that draws inspiration and inputs from social media... but isn't social media. It's something else that you can scroll, read and whatnot.
Also, what is toxic to one person is not toxic to another, depending on their subjective choices. How will you solve for this without everyone just seeing what they want to see, even if reality is not like that? I feel that will amplify the problems of social media rather than reduce them.
It kind of falls apart when you start to think of edge cases rather than the "hey, this tool will keep morons off my feed!" mentality.
- Gemini Nano-1: 46% MMLU, 1.8B
- Gemini Nano-2: 56% MMLU, 3.25B
- Gemma4 E2B: 60.0% MMLU, 2.3B
- Gemma4 E4B: 69.4% MMLU, 4.5B
Sources:
- https://huggingface.co/google/gemma-4-E2B-it
- https://android-developers.googleblog.com/2024/10/gemini-nan...
Note that the article here was last updated 2025-09-21, and as of that time it was already on Gemini Nano 3.
But keep in mind the actual experience for users is not great; the model download is orders of magnitude greater than downloading the browser itself, and something that needs to happen before you get your first token back. That's unfixable until operating systems start reliably shipping their own prebaked models that an API like this could plug into.
With MoE models, you could fetch expert layers from the network on demand by issuing HTTP range queries for the corresponding offset, similar to how bittorrent downloads file chunks from multiple hosts. You'd still have to download shared layers, but time to first token would now be proportional to active-size rather than total-size. Of course this wouldn't be totally "offline" inference anymore, but for a web browser feature that's not a key consideration.
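For illustration, fetching one expert's weight slice by byte range might look something like this; the shard URL and offsets are made up, and a real loader would need the model's tensor index to know where each expert actually lives:

```ts
// Illustrative only: pull a single expert's weights out of a large shard
// with an HTTP Range request. The URL and byte offsets are hypothetical;
// a real implementation would read them from the model's tensor manifest.
async function fetchExpertSlice(url: string, byteStart: number, byteEnd: number): Promise<Uint8Array> {
  const res = await fetch(url, {
    headers: { Range: `bytes=${byteStart}-${byteEnd}` },
  });
  if (res.status !== 206) throw new Error("server did not honor the Range request");
  return new Uint8Array(await res.arrayBuffer());
}

// e.g. grab one expert's slice once the router picks it:
// const weights = await fetchExpertSlice("https://example.com/model.shard", 123456789, 987654321);
```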
Maybe the next big thing will be premium software subscriptions that bundle a bunch of 5090s as an extra.
Here's to hoping that that dystopia will never happen.
fantastic!
> the model download is orders of magnitude greater than downloading the browser itself, and something that needs to happen before you get your first token back
sure, but does this mean the model is lazily downloaded? That is, if I used this and mine was the first call to the model, would the user be stuck waiting until the model finished downloading?
that sounds like a horrible user experience - maybe Chrome reduces the confusion by showing a download status dialog or similar?
also, any idea what the on disk impact is?
So it's once per browser, not once per site.
You can track the download state yourself and pop whatever UI you want.
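Roughly like this (a sketch; the exact event shape has shifted a bit between releases):

```ts
// Sketch: create a session and surface download progress in your own UI.
// Exact property names (e.g. e.loaded) have varied across Chrome releases.
const availability = await LanguageModel.availability();
if (availability === "unavailable") {
  // no supported hardware, or policy disallows the built-in model
} else {
  const session = await LanguageModel.create({
    monitor(m) {
      m.addEventListener("downloadprogress", (e) => {
        updateMyProgressBar(e.loaded); // fraction 0..1; your own UI function
      });
    },
  });
  console.log(await session.prompt("Say hi in five words."));
}
```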
Then it's possible the model you get will scale with the CPU/GPU/RAM available, so if you have a 12 GB GPU you probably get a better model, perhaps a 10-11 GB one? At 2x headroom, that's 22 GB.
Then consider that a machine is not static, GPUs/hardware come and go, VRAM allocation in integrated graphics changes, etc. You end up with just needing to pick a number and not confuse users.
This is part of it, and also we just didn't want to use up the last of the user's disk space! It's disrespectful to use up 3 GB if the user only has 4 GB left; it's sketchy if the user only has 10 GB. At 22 GB, we felt there was more room to breathe.
One could argue that users should have more agency and transparency into these decisions, and for power users I agree... some kind of neato model management UI in chrome://settings would have been cool. But 99% of users would never see that, so I don't think it ever got built.
> Built-in models should be significantly smaller. The exact size may vary slightly with updates.

Yes, I can read and comprehend English, and you should assume I read the page. Because of the "At least" wording, I was curious what someone who has actually used the feature has noticed, i.e., learning from people who have already done it.
If it turns out useful enough I'm sure browsers will just start including it as (perhaps optional?) part of installation.
And do you guys communicate with other browsers when doing something like this, to try to settle on something common? I don't mean the W3C but practically; it's a small world after all.
The target usage for the prompt API is anything that would benefit from the general capabilities of a language model, and can't be encompassed by the more-specific APIs for summarization/writing/rewriting. Realistic use cases currently are things like sentiment analysis, keyword extraction, etc. I have a number of ideas on how to integrate it into my current retirement project around Japanese flashcards, e.g. generating example sentences. If the small (~10 GiB) model class keeps getting smarter, the class of things possible on-device in this way gets larger and larger over time.
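For the flashcard case, a minimal sketch (the prompt wording and system instruction are just illustrative):

```ts
// Illustrative: generate an example sentence for a vocabulary card.
// Only the API shape matters here; the prompt text is made up.
const session = await LanguageModel.create({
  initialPrompts: [{
    role: "system",
    content: "You write short, natural Japanese example sentences for language learners.",
  }],
});

async function exampleSentenceFor(word: string): Promise<string> {
  return session.prompt(
    `Write one simple Japanese sentence using the word "${word}", then an English translation on the next line.`
  );
}
```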
We definitely communicated with other browsers. There were the standing WebML Community Group meetings at the W3C every few weeks. There were async discussions like https://github.com/mozilla/standards-positions/issues/1213 and https://github.com/WebKit/standards-positions/issues/495 . (Side note, I love the contrast between Mozilla's helpful in-depth feedback and WebKit's... less helpful feedback.) There was also a bit of a debacle where the W3C Technical Architecture Group tried to give "feedback" but the feedback ended up being AI-generated slop... https://github.com/w3ctag/design-reviews/issues/1093 .
But overall, yeah, the goal with the prompt API, as with all web APIs, is to put something out there for discussion as early as possible, and get input from the broad community, especially including other browsers, to see if it's something that they are interested in collaborating on. https://www.chromium.org/blink/guidelines/web-platform-chang... (which I also wrote) goes into how the Chromium project thinks about such collaboration in general.
While many AI integrations focus on text communication / chat-style interaction, a lot of software benefits from non-text interfaces.
I believe at some point OSes and browsers should provide an API to manage models, so you'll have access to on-device/remote ones with a simplified interface for the app. Making something standardized and cross-platform would be fantastic. It also needs to be on mobile devices, so the players that can easily make it happen are mostly Apple (1) and Google. (Meta will follow, or vice-versa, I guess.)
Key point: it shouldn't be exclusive to promoted models.
So the app would be able to query and get the right model(s).

(1) https://developer.apple.com/documentation/foundationmodels
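Purely hypothetical, but the surface I have in mind is something like this (none of these names exist today):

```ts
// Entirely hypothetical sketch of a cross-platform "model manager" surface.
// None of these interfaces exist; this is just the shape being argued for.
interface ModelDescriptor {
  id: string;                 // an on-device model or a remote alias
  location: "on-device" | "remote";
  capabilities: string[];     // "text", "summarize", "vision", ...
}

interface ModelManager {
  list(): Promise<ModelDescriptor[]>;
  // Let the OS/browser pick the best match for the requested capabilities,
  // regardless of which vendor ships the underlying weights.
  request(opts: { capabilities: string[]; preferOnDevice?: boolean }): Promise<ModelDescriptor>;
}
```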
It's a tiny script that fetches the RSS feed and uses the content to generate summaries; quite a nice fit with our static site. At some point I'd like to extend it to ask different questions about the content.
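Schematically, something like this (a simplified sketch, not the actual script; here using the proposed Summarizer API, though any model call would do, and the feed fields are placeholders):

```ts
// Simplified sketch: summarize the latest items from an RSS feed.
// Uses the proposed Summarizer API; option values may differ across releases.
async function summarizeFeed(feedUrl: string) {
  const xml = await (await fetch(feedUrl)).text();
  const doc = new DOMParser().parseFromString(xml, "application/xml");
  const summarizer = await Summarizer.create({ type: "tldr", length: "short" });

  const summaries = [];
  for (const item of doc.querySelectorAll("item")) {
    const text = item.querySelector("description")?.textContent ?? "";
    summaries.push({
      title: item.querySelector("title")?.textContent ?? "",
      summary: await summarizer.summarize(text),
    });
  }
  return summaries;
}
```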
It would actually be pretty interesting to see if it's possible to decentralize the compute: generate something useful from a larger prompt by breaking it down and sending the pieces to a bunch of browsers, using a subagent pattern or something like RLM, each working on a smaller part of the prompt.
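Even within a single browser you can sketch the "split and recombine" half of that idea with the same API; the decentralized, multi-browser part would need some transport (WebRTC, a relay server, ...) on top:

```ts
// Sketch of the "split a big prompt into subagent chunks" half, all within
// one browser. Chunk size and prompt wording are arbitrary choices here.
async function mapReducePrompt(bigText: string, question: string, chunkSize = 4000): Promise<string> {
  const chunks: string[] = [];
  for (let i = 0; i < bigText.length; i += chunkSize) {
    chunks.push(bigText.slice(i, i + chunkSize));
  }

  // "Map": each chunk gets its own session, answering over just that slice.
  const partials = await Promise.all(chunks.map(async (chunk) => {
    const s = await LanguageModel.create();
    const out = await s.prompt(`${question}\n\nUse only this excerpt:\n${chunk}`);
    s.destroy();
    return out;
  }));

  // "Reduce": combine the partial answers into one response.
  const combiner = await LanguageModel.create();
  return combiner.prompt(`Combine these partial answers into one:\n${partials.join("\n---\n")}`);
}
```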
Plus even if you really wanted to do that, WebGPU exists and has for a while right?
Edit: simple example is a spam bot
> This feels like a lot of work for low reward
Low per-device reward combined with a high user count - either from large legitimate players or from botnets - has been the monetisation strategy of most online enterprises.

There are a lot of ways this API could go, e.g. more powerful models eventually, or perhaps integration with cloud models. For example, I could see Google trying to make Gemini the default model for users signed into Chrome.
As for cloud models, that would be interesting, although I guess then the fraud would be easier in spoofing whatever parameters (ip address? domain name? some Chrome install identifier?) to get around whatever rate limiting they come up with, rather than actually using people’s computers.
Anyways I’m sure if it ends up being abused, they can throw a permissions dialog in front of it. Just need to figure out a way to make normal people understand.
> Just need to figure out a way to make normal people understand.
Has that strategy ever actually worked?

I haven't pushed out a full version[1] which uses ducklake-wasm + this to make a completely local SQL answering machine, but for now all it does is retype prompts in the browser.
If you want to do anything interesting you need transformers.js and a decent model. Qwen 0.9B is where things start working usefully.
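For reference, the transformers.js side looks roughly like this; the model id is just a placeholder, so swap in whatever small instruct model actually runs well for you:

```ts
// Rough transformers.js usage; the model id is a placeholder, not a recommendation.
import { pipeline } from "@huggingface/transformers";

const generator = await pipeline("text-generation", "onnx-community/Qwen2.5-0.5B-Instruct");

const messages = [
  { role: "user", content: "Give me three keywords for: 'Browsers are shipping on-device LLM APIs.'" },
];
const output = await generator(messages, { max_new_tokens: 64 });
console.log(output[0].generated_text.at(-1).content); // the assistant's reply
```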
Or a LocalNet API that integrates with trusted hardware devices on your local network. As a trial (Chrome beta programme — strictly limited but here’s 3x signup links to share with your friends) you can adjust your Google Next Mini underfloor heating directly from Chrome!
Or a DirectCast API that lets you stream <video> elements to a device of your choice even over a VPN. As a Chrome trial, you can use your Google Cloud account to stream directly from YouTube Premium to any linked Google Chromecast devices you own!