Ferret-UI Lite: Lessons from Building Small On-Device GUI Agents
21 points | 4 days ago | 3 comments | machinelearning.apple.com
bensyverson
1 hour ago
[-]
I recently experimented with Apple's Foundation Models framework, and I came away impressed by the speed and accuracy of the LLM. You can't ask it to build you a web app, but it can reliably translate a written instruction into tool use within your native app. I think there's a lot of merit to Apple's approach of using tiny specialist models like Ferret-UI Lite, though I don't think we'll see the full fruits of their labor for another year or two.

But it's a vision that I can get behind, where basic tasks like transcription, computer use, in-app tool use, image understanding, etc., are local, secure, and private.

reply
w10-1
27 minutes ago
[-]
I'm disappointed that they're taking the long way around, with screenshots and visual recognition.

Apple GUIs have underlying accessibility annotations that, if surfaced, would make UI manipulation easy for LLMs.

"Back in the day" - the 1990s - Apple had Virtual User, basically a Lisp derivative that reported UI state as S-expressions (like a web DOM) and allowed scripts to manipulate settings and perform UI actions.

With such a curated DOM/model and selective UI inputs, they could manage privacy and safety, opening up LLM control to users who would otherwise never trust a machine.

I hope they're working on that approach and training models for it. It's one way they could distinguish the Apple platform as being more controllable, with safety and permissions built into the subsystems instead of giving the LLM full control over UI input.
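To make the idea concrete, here is a toy Python sketch of that kind of curated S-expression UI model. Everything here is invented for illustration (the node names, the `:label` attribute, the tree shape) - it just shows how an agent could resolve "tap the Send button" against structured UI state instead of pixels:

```python
# Hypothetical sketch of an S-expression accessibility tree, in the
# spirit of Virtual User. Node names and attributes are made up.

def parse_sexpr(text):
    """Parse a minimal S-expression into nested lists of atoms."""
    tokens = text.replace("(", " ( ").replace(")", " ) ").split()

    def read(pos):
        if tokens[pos] == "(":
            node, pos = [], pos + 1
            while tokens[pos] != ")":
                child, pos = read(pos)
                node.append(child)
            return node, pos + 1  # skip closing paren
        return tokens[pos], pos + 1

    return read(0)[0]

def find_role(tree, role, label):
    """Depth-first search for a node like (button :label Send)."""
    if isinstance(tree, list) and tree and tree[0] == role:
        attrs = dict(zip(tree[1::2], tree[2::2]))
        if attrs.get(":label") == label:
            return tree
    if isinstance(tree, list):
        for child in tree:
            found = find_role(child, role, label)
            if found:
                return found
    return None

# An LLM asked to "tap Send" would emit a query against this tree,
# and the system - not the model - would perform the actual click.
ui = parse_sexpr("(window (textfield :label To) (button :label Send))")
target = find_role(ui, "button", "Send")
```

The point of the design is that the model only ever sees (and addresses) the curated tree; the subsystem that maps a matched node to a real UI event is where permissions and safety checks would live.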

reply