I just tested this with my internet connection disabled and it still worked. Since it's doing local processing, I suspect it uses traditional OCR algorithms rather than LLMs.
As the article concludes, LLMs aren't magic; they're just one useful tool to include in your toolbox.
Security concerns aside (...) that sounds pretty useful.
Similarly coding-focused LLMs can access backend engines that actually run the code and get feedback, either to show the user or to internally iterate.
Having a whole host of such backend processors would be great. Users still only ever have to interact using natural language, but they get the power of all these specialized tools in the backend. There are some tasks LLMs can do but that special-purpose algorithms may do better, faster, and/or with less energy usage.
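The routing idea described above can be sketched in a few lines: a dispatcher checks whether a cheap, deterministic backend matches the request and only falls back to the LLM when nothing else applies. Everything here (`run_llm`, the `TOOLS` registry, the keyword trigger) is illustrative and hypothetical, not any real product's API.

```python
def sum_numbers(text: str) -> str:
    """Special-purpose backend: exact arithmetic, no LLM needed."""
    return str(sum(int(tok) for tok in text.split() if tok.lstrip("-").isdigit()))

def reverse_text(text: str) -> str:
    """Another cheap, deterministic backend."""
    return text[::-1]

def run_llm(text: str) -> str:
    """Placeholder for an actual LLM call -- the expensive fallback."""
    return f"[LLM would answer: {text!r}]"

# Keyword-triggered registry of specialized tools.
TOOLS = {
    "sum": sum_numbers,
    "reverse": reverse_text,
}

def dispatch(request: str) -> str:
    """Route to the first matching specialized tool, else the LLM."""
    command, _, rest = request.partition(" ")
    handler = TOOLS.get(command.lower(), run_llm)
    # Specialized tools get just the payload; the LLM gets the full request.
    return handler(rest if handler is not run_llm else request)

print(dispatch("sum 3 4 5"))           # handled locally, exactly
print(dispatch("identify this font"))  # falls through to the LLM
```

In a real system the routing itself could be done by the model (tool/function calling), but the principle is the same: the natural-language frontend stays, while exact work goes to the cheapest backend that can do it.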
https://www.dafont.com/forum/read/522670/font-identification
I don’t think the proposed font is correct either; I’m not even sure the concept of a font applies to that example, though. Mainly, the arches on the m are wrong: too arch-like, whereas the example is more teardrop-shaped.
I agree that using the frontier models would be much more interesting.
E.g. https://www.dafont.com/forum/read/569491/taylor-swift-font-p...
Edit: here's the same effect made in Inkscape: https://i.postimg.cc/TYV6K6bt/taylorswift.png
This is actually quite important! Especially when you're not talking about a single word/name but a group of several words/names.
It’s quite likely LLMs don’t “know” the fonts in the dataset, but they could figure many of them out.
Every time I turn around these days I encounter someone ready to use an enormous amount of energy, paid for by other people, to 'simulate' some analog process by temporarily taking the reins of a data center burning megawatts. We are being given the reins for 0.5 seconds, but very soon the horse will gallop away unless we have a lot of money to spend.