Ask HN: What's a standard way for apps to request text completion as a service?
50 points | 15 days ago | 10 comments
If I'm writing a new lightweight application that requires LLM-based text completion to power a feature, is there a standard way to request the user's operating system to provide a completion?

For instance, imagine I'm writing a small TUI for browsing JSONL files and want to add a natural language query feature. Is there an emerging, implementation-agnostic standard for something like "Translate this natural language query to jq: {natlang-query}"?

If we don't have this yet, what would it take to get this built and broadly available?
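To make the question concrete, here is a sketch of what calling such a service from a script could look like. This is purely hypothetical: `oscomplete` is an invented name, and no such standard binary exists today.

```shell
# Hypothetical sketch of an OS-provided completion service.
# "oscomplete" is an invented name; no such standard exists today.
query='names of users older than 30'
prompt="Translate this natural language query to a jq filter. Reply with only the filter: ${query}"

# jq_filter="$(oscomplete "$prompt")"   # the imagined standard call
printf '%s\n' "$prompt"
```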

lcian
11 days ago
[-]
When I'm writing a script that requires some kind of call to an LLM, I use this: https://github.com/simonw/llm.

It's cross-platform and works both with models accessible through an API and with local ones.

I'm afraid this might not solve your problem, though, as it isn't an out-of-the-box solution: it requires the user to either provide their own API key or install Ollama and wire it up on their own.
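A minimal sketch of how a tool might delegate completions to the `llm` CLI, assuming the user has already installed and configured it (API key or local-model plugin); the prompt wording and query here are just examples:

```shell
# Delegate a completion to the `llm` CLI (https://llm.datasette.io),
# assuming the user has installed and configured it themselves.
natlang='all rows where age is over 30'
prompt="Translate this natural language query to a jq filter, reply with the filter only: ${natlang}"

if command -v llm >/dev/null 2>&1; then
  jq_filter="$(llm "$prompt")"
  printf '%s\n' "$jq_filter"
else
  echo "llm not installed; see https://llm.datasette.io" >&2
fi
```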

reply
kristopolous
11 days ago
[-]
I've been working on a more Unix-y version of his tool that I call llcat. Composable, stateless, agnostic, and generic:

https://github.com/day50-dev/llcat

It might help things get closer.

It's under two days old and it's already fundamentally changing how I do things.

Also for edge running look into the LFM 2.5 class of models: https://huggingface.co/LiquidAI/LFM2.5-1.2B-Instruct

reply
mirror_neuron
11 days ago
[-]
I love this concept. Looks great, I will definitely check it out.
reply
kristopolous
10 days ago
[-]
Please use it and give me feedback. I'm going to give a lightning talk on it tonight at sfvlug.
reply
nvader
11 days ago
[-]
I think this is definitely a step in the right direction, and is exactly the kind of answer I was looking for. Thank you!

`llm` gives my tool a standard bin to call to invoke completions, and configuring and managing it is the user's responsibility.

If more tools started expecting something like this, it could become a de facto standard. Then maybe the OS would begin to provide it.

reply
netsharc
11 days ago
[-]
That's interesting. On Linux there's the $EDITOR variable for the terminal text editor (a quick search of three distros, Arch, Ubuntu, and Fedora, shows they respect it).

Maybe you can trailblaze and tell users your application will support an $LLM or $LLM_AUTOCOMPLETE variable (convene a naming committee for better names).
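A sketch of what honoring such a convention could look like, in the same spirit as $EDITOR. Both the $LLM variable and the fallback command name `llm` are assumptions here, not an existing standard:

```shell
# Honor a proposed $LLM variable the way editors honor $EDITOR.
# $LLM and the fallback "llm" are assumptions, not an existing standard.
llm_cmd="${LLM:-llm}"
prompt='Summarize this file in one line.'

# completion="$("$llm_cmd" "$prompt")"   # uncomment once a backend is configured
printf 'would invoke: %s\n' "$llm_cmd"
```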

reply
billylo
15 days ago
[-]
Windows and macOS do come with a small model for generating text completions. You can write a wrapper for your own TUI to access them platform-agnostically.

For consistent LLM behaviour, you can use the Ollama generate API with your model of choice: https://docs.ollama.com/api/generate

Chrome has a built-in Gemini Nano too, but there isn't an official way to use it outside Chrome yet.
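A minimal request against Ollama's generate endpoint, assuming a local Ollama server with the named model already pulled; "llama3.2" is just an example model name:

```shell
# Build a request for Ollama's generate endpoint
# (https://docs.ollama.com/api/generate). Assumes a local server
# with the model already pulled; "llama3.2" is just an example.
payload='{"model": "llama3.2", "prompt": "Say hello in one word.", "stream": false}'

# The completion comes back in the "response" field of the JSON reply:
# curl -s http://localhost:11434/api/generate -d "$payload"
printf '%s\n' "$payload"
```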

reply
vintagedave
11 days ago
[-]
Do you know what it’s called, at least on Windows? I’m struggling to find API docs.

When I asked an AI, it said no such built-in model exists (possibly a knowledge cutoff issue).

reply
billylo
10 days ago
[-]
reply
vintagedave
9 days ago
[-]
Thank you!
reply
bredren
11 days ago
[-]
Yes. I am not aware of a model shipping with Windows, nor of announced plans to do so. Microsoft has been focused on cloud-based LLM services.
reply
usefulposter
11 days ago
[-]
This thread is full of hallucinations ;)
reply
tony_cannistra
11 days ago
[-]
These are the on-device model APIs for apple: https://developer.apple.com/documentation/foundationmodels
reply
nvader
15 days ago
[-]
Is there a Linux-y standard brewing?
reply
billylo
14 days ago
[-]
Each distro is doing its own thing. If you are targeting Linux mainly, I would suggest coding it on top of Ollama or LiteLLM.
reply
1bpp
10 days ago
[-]
Windows doesn't?
reply
WilcoKruijer
11 days ago
[-]
MCP has a feature called sampling which does this, but it might not be too useful for your context. [0]

In a project I'm working on, I simply present some data and a prompt; the user can then pipe this into an LLM CLI such as Claude Code.

[0] https://modelcontextprotocol.io/specification/2025-06-18/cli...
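A sketch of that pipe-to-CLI pattern. `claude -p` is Claude Code's non-interactive print mode, but any LLM CLI that reads stdin would work the same way; the data and prompt here are made up:

```shell
# Present data plus a prompt, and let the user pipe both into an
# LLM CLI of their choice. The sample data and prompt are made up.
data='{"name":"ada","active":true}'
prompt='Given this JSONL sample, write a jq filter that selects active users.'

# printf '%s\n%s\n' "$prompt" "$data" | claude -p
payload="$(printf '%s\n%s' "$prompt" "$data")"
printf '%s\n' "$payload"
```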

reply
brumar
11 days ago
[-]
Sampling seemed so promising, but do we know if some MCPs managed to leverage this feature successfully?
reply
lurking_swe
10 days ago
[-]
If I recall, the issue is that most MCP-capable client apps (Cursor, Claude Code, etc.) don't yet support it; VS Code is an exception.

Example: https://github.com/anthropics/claude-code/issues/1785

reply
cjonas
11 days ago
[-]
I asked a similar question a while back and didn't get any response. Some type of service is needed for applications that want to be AI-enabled without dealing with the usage-based pricing that comes with it. Right now the only option is for the user to provide a token/endpoint from one of the services. This is fine for local apps, but less ideal for web apps.
reply
joshribakoff
11 days ago
[-]
I have been using an open source program, "handy": a cross-platform Rust/Tauri app that does speech recognition and handles inputting text into programs. It works by piggybacking off the OS's text input or copy-and-paste features.

You could fork this, and shell out to an LLM before finally pasting the response.

reply
jiehong
11 days ago
[-]
This might work through an LSP server?

It’s not exactly the intended use case, but it could be coerced to do that.

I’ve seen something else like that, though: voice transcription software that have access to the context the text is in, and can interact with it and modify it.

Like how some people use super whisper modes [0] to do some actions with their voice in any app.

It works because you can say "rewrite this text, and answer the questions it asks"; the dictation app first transcribes this to text, extracts the whole text from the focused app, sends both to an AI model, gets an answer back, and pastes the output.

[0]: https://superwhisper.com/docs/common-issues/context

reply
Sevii
11 days ago
[-]
Small models are getting good, but I don't think they are quite there yet for this use case. For OK results we are looking at 12–14 GB of VRAM committed to models. My MacBook with 24 GB of total RAM runs fine with a 14B model loaded, but I don't think most people have quite enough RAM yet. Still, I think it's something we are going to need.

We are also going to want the opposite. A way for an LLM to request tool calls so that it can drive an arbitrary application. MCP exists, but it expects you to preregister all your MCP servers. I am not sure how well preregistering would work at the scale of every application on your PC.

reply
TZubiri
11 days ago
[-]
Not natural language at all, but Linux has Readline for exact character matches; it's what powers tab completion in the command line.

Maybe it can be repurposed for natural language in a specific implementation.

reply
tpae
11 days ago
[-]
You can check out my project here: https://github.com/dinoki-ai/osaurus

I'm focused on building it for the macOS ecosystem.

reply