I Self-Hosted Llama 3.2 with Coolify on My Home Server
218 points | 2 days ago | 14 comments | geek.sg | HN
bambax
2 days ago
[-]
> I decided to explore self-hosting some of my non-critical applications

Self-hosting static or almost-static websites is now really easy with a Cloudflare front. I just closed my account on SmugMug and published my images locally using my NAS; this costs no extra money (it's basically free) since the photos were already on the NAS, and the NAS is already powered on 24/7.

The NAS I use is an Asustor, so it's not really Linux and you can't install whatever you want on it, but it has Apache, Python, and PHP with the SQLite extension, which is more than enough for basic websites.

Cloudflare free is like magic. Response times are near instantaneous and setup is minimal. You don't even have to configure an SSL certificate locally, it's all handled for you and works for wildcard subdomains.

And of course if one puts a real server behind it, like in the post, anything's possible.

reply
ghoomketu
2 days ago
[-]
> Cloudflare free is like magic

Cloudflare is pretty strict about the HTML-to-media ratio and might suspend or terminate your account if you are serving too many images.

I've read far too many horror stories about this on HN alone, so please make sure what you're doing is allowed by their ToS.

reply
hdra
2 days ago
[-]
Do they ever publish an actual number on this? Given the size of HTML documents vs. images, I imagine it's something that can be exceeded very easily without knowing.

e.g. is running a personal photography website OK?

reply
telgareith
2 days ago
[-]
Cloudflare removed those restrictions from the TOS 12+ months ago.

Take a look at whether Cloudflare Pages + Cloudflare R2 meets the needs of your site.

I'd also recommend using Cloudflare Tunnels (under Zero Trust) rather than punching a hole in your firewall, for a number of reasons.

reply
telgareith
2 days ago
[-]
Cloudflare removed that bit from their TOS entirely about a year ago now. Are you citing a more recent source?

PS: talking about Cloudflare being snappy when content is being served from an Asustor NAS made me chuckle.

reply
jgalt212
2 days ago
[-]
I think the OP meant once the resource was cached by Cloudflare. The first time it's served is not snappy.
reply
archerx
2 days ago
[-]
You could also use OpenVPN or WireGuard and not have a man in the middle for no reason.

I have a VPN on a Raspberry Pi, and with that I can connect to my self-hosted cloud, dev/staging servers for projects, GitLab, etc., when I'm not on my home network.

reply
dweekly
2 days ago
[-]
I believe the suggested setup was for making a site and images available to the public, for which hiding the origin behind Cloudflare seems a very good reason. Some public IP will need to have ports 443/80 open.
reply
nirav72
2 days ago
[-]
That requires opening a firewall port on the router, which might not be possible for some people due to ISP restrictions such as CGNAT. In those cases, they're better off using something like Tailscale.
reply
Reubend
2 days ago
[-]
Is the NAS exposed to the whole internet? Or did you find a clever way to get CloudFlare in front of it despite it just being local?
reply
bambax
2 days ago
[-]
The web server of the NAS is exposed to the Internet (port forwarding of 80 from the router to the NAS); the rest of the NAS is not exposed / not accessible from outside the LAN.

The images that are published are low-res versions copied to a directory on a partition accessible to the web server.

This is not the safest solution, as it does punch a hole in the LAN... It's kind of an experiment... We'll see how it goes.
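
For anyone wanting to replicate the low-res publishing step, here is a minimal sketch; the paths, image size, and static index page are assumptions for illustration, not the commenter's actual setup. It only needs Python and Pillow on the NAS.

```python
from pathlib import Path
from PIL import Image  # pip install Pillow

SRC = Path("/volume1/photos")        # originals on the NAS (assumed path)
DST = Path("/volume1/web/gallery")   # directory served by the NAS web server (assumed path)
MAX_SIZE = (1600, 1600)              # longest edge of the published low-res copies

DST.mkdir(parents=True, exist_ok=True)

for photo in SRC.rglob("*.jpg"):
    out = DST / photo.name
    if out.exists():
        continue                     # skip photos already published
    with Image.open(photo) as im:
        im.thumbnail(MAX_SIZE)       # downscale in place, keeps aspect ratio
        im.save(out, "JPEG", quality=80)

# A trivial static index page listing the published images.
links = "\n".join(f'<a href="{p.name}"><img src="{p.name}" width="320"></a>'
                  for p in sorted(DST.glob("*.jpg")))
(DST / "index.html").write_text(f"<html><body>{links}</body></html>")
```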

reply
cheema33
2 days ago
[-]
You can use CloudFlare Tunnel (https://www.cloudflare.com/products/tunnel/) to connect a system to your cloudflare gateway, without exposing it to the Internet.
reply
rmbyrro
2 days ago
[-]
Or Tailscale, which is a pretty cool piece of tech.
reply
telgareith
2 days ago
[-]
Tailscale is wireguard with advertising, a convenient UI, and a STUN/TURN server.
reply
rmbyrro
2 days ago
[-]
I'm aware they wrap OSS, but they made it very, very easy to adopt and maintain for a large chunk of potential users. This requires significant effort and should not be undervalued, in my opinion.
reply
calgoo
2 days ago
[-]
Exactly, which means setting up a VPS, generating certificates, setting up some type of monitoring to make sure the tunnel is working, etc. I agree that WireGuard is the best option if you have the time and knowledge, but for devs who just want to put up a webpage with a few users, Tailscale/Cloudflare is a much easier system to maintain (especially as it handles SSL for you as well - to some degree...).
reply
shepherdjerred
2 days ago
[-]
I've used Tailscale funnel which works quite well for this.

https://tailscale.com/kb/1223/funnel

reply
taosx
2 days ago
[-]
For the people who self-host LLMs at home: what use cases do you have?

Personally, I have some notes and bookmarks that I'd like to scrape, then have an LLM summarize, generate hierarchical tags, and store in a database. For the notes part at least, I wouldn't want to give them to another provider; even for the bookmarks, I wouldn't be comfortable passing my reading profile to anyone.
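
A minimal sketch of that pipeline, assuming a local Ollama server on its default port; the model tag, prompts, and SQLite schema are placeholders, and nothing leaves the machine.

```python
import sqlite3
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"  # default Ollama endpoint
MODEL = "llama3.2:3b"                                # example model tag

def ask(prompt: str) -> str:
    """Send a single non-streaming prompt to the local model."""
    r = requests.post(OLLAMA_URL, json={"model": MODEL, "prompt": prompt, "stream": False})
    r.raise_for_status()
    return r.json()["response"]

db = sqlite3.connect("notes.db")
db.execute("CREATE TABLE IF NOT EXISTS notes"
           " (id INTEGER PRIMARY KEY, text TEXT, summary TEXT, tags TEXT)")

note = open("note.md").read()  # one scraped note/bookmark; looping is left out for brevity
summary = ask(f"Summarize this note in two sentences:\n\n{note}")
tags = ask("Return a JSON array of hierarchical tags (e.g. \"dev/python/async\") "
           f"for this note, and nothing else:\n\n{note}")

db.execute("INSERT INTO notes (text, summary, tags) VALUES (?, ?, ?)", (note, summary, tags))
db.commit()
```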

reply
xyc
2 days ago
[-]
Llama 3.2 1B & 3B are really useful for quick tasks like creating quick scripts from some text and then pasting them to execute; it's super fast and replaces a lot of temporary automation needs. If you don't feel like investing time in automation, sometimes you can just feed the text to an LLM.

This is one of the reasons why I recently added a floating chat to https://recurse.chat/ to quickly access a local LLM.

Here's a demo: https://x.com/recursechat/status/1846309980091330815

reply
taosx
2 days ago
[-]
Looks very nice, saved it for later. Last week, I worked on implementing always-on speech-to-text functionality for automating tasks. I've made significant progress, achieving decent accuracy, but I set myself the constraint of implementing certain parts from scratch to deliver a single-binary deployable solution, which means I still have work to do (audio processing is new territory for me). However, I'm optimistic about its potential.

That being said, I think the more straightforward approach would be to utilize an existing library like https://github.com/collabora/WhisperLive/ within a Docker container. This way, you can call it via WebSocket and integrate it with my LLM, which could also serve as a nice feature in your product.

reply
xyc
2 days ago
[-]
Thanks! Let me know when/if you wanna give it a spin, as the free trial hasn't been updated with the latest, but I'll try to do that this week.

I've actually been playing around with speech-to-text recently. Thank you for the pointer; Docker is a bit too heavy to deploy for a desktop app use case, but it's good to know about the repo. Building binaries with PyInstaller could be an option though.

Real-time transcription seems a bit complicated as it involves VAD, so a feasible path for me is to first ship simple transcription with whisper.cpp. large-v3-turbo looks fast enough :D
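
For comparison, the simple-transcription path is only a few lines in Python if you swap whisper.cpp for the faster-whisper library; this is a sketch, not the commenter's setup, and the "large-v3-turbo" alias, int8 compute type, and audio file name are assumptions that depend on your faster-whisper version.

```python
from faster_whisper import WhisperModel  # pip install faster-whisper

# Runs fully offline once the model is downloaded; pick a size/compute type
# that fits your hardware.
model = WhisperModel("large-v3-turbo", device="auto", compute_type="int8")

# vad_filter trims silence, which sidesteps part of the real-time VAD problem.
segments, info = model.transcribe("meeting.wav", vad_filter=True)

print(f"Detected language: {info.language}")
for seg in segments:
    print(f"[{seg.start:7.2f} -> {seg.end:7.2f}] {seg.text}")
```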

reply
taosx
2 days ago
[-]
Yes it's fast enough, especially if you don't need something live.
reply
afro88
2 days ago
[-]
Can you list some real temporary automation needs you've fulfilled? The demo shows asking for facts about space. Lower-param models don't seem great as raw chat models, so I'm interested in what they're doing well for you in this context.
reply
xyc
2 days ago
[-]
Things like grabbing some markdown text and asking for a pip/npm install one-liner, or quick JS scripts to paste into the console (for which I didn't bother to open an editor). A fun use case was randomly drawing some lucky winners for the app giveaway from Reddit usernames. Mostly it's converting unstructured text into short/one-liner executable scripts, and that doesn't require much intelligence. For more complex automation/scripts that I'll save for later, I do resort to providers (Cursor with Sonnet 3.5, mostly).
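
The "unstructured text in, one-liner out" workflow is easy to sketch against Ollama's local API; the prompt wording, model tag, and file name below are made up for illustration.

```python
import requests

def one_liner(task_text: str, model: str = "llama3.2:3b") -> str:
    """Turn a blob of unstructured text into a single shell command (sketch)."""
    prompt = ("Turn the following into a single shell one-liner. "
              "Output only the command, no explanation.\n\n" + task_text)
    r = requests.post("http://localhost:11434/api/generate",
                      json={"model": model, "prompt": prompt, "stream": False})
    r.raise_for_status()
    return r.json()["response"].strip()

# e.g. paste the install section of a README and get a pip/npm one-liner back
print(one_liner(open("readme_snippet.md").read()))
```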
reply
TechDebtDevin
2 days ago
[-]
I keep an 8B running with Ollama/Open WebUI to format things, summarize, and generate SQL/simple bash commands and whatnot.
reply
worldsayshi
2 days ago
[-]
So 8b is really smart enough to write scripts for you? How often does it fail?
reply
wokwokwok
2 days ago
[-]
> So 8b is really smart enough to write scripts for you?

Depends on the model, but in general, no.

...but it's fine for simple 1 liner commands like "how do I revert my commit?" or "rename these files to camelcase".

> How often does it fail?

Immediately and constantly if you ask anything hard.

An 8b model is not chat-gpt. The 3B model in the OP post is not chat-gpt.

The capability compared to sonnet/4o is like a potato and a car.

Search for 'LLM Leaderboard' and you can see for yourself. The 8b models do not even rank. They're generally not capable enough to use as a self hosted assistant.

reply
worldsayshi
2 days ago
[-]
I really hope we can get Sonnet-like performance down to a single consumer-level GPU sometime soon. Maybe the hardware will get there before the models.
reply
TechDebtDevin
2 days ago
[-]
Well, considering it probably takes several hundred GBs of VRAM to run inference for Claude, it's going to be a while.

But yes, like the guy above said, it's really only helpful for one-line commands. Like if I forgot some sort of flag that's available for a certain type of command, or random things I don't work with often enough to memorize their little build commands, etc. It's not helpful for programming, just simple commands.

It also can help with unstructured or messy data to make it more readable, although there's potential to hallucinate if the context is at all large.

reply
lolinder
2 days ago
[-]
> Search for 'LLM Leaderboard' and you can see for yourself. The 8b models do not even rank.

This is not true. On benchmarks, maybe, but I find the LLM Arena more accurately accounts for the subjective experience of using these things, and Llama 3.1 8B ranks relatively high, outperforming GPT-3.5 and certain iterations of 4.

Where the 8Bs do struggle is that they don't have as deep a repository of knowledge, so using them without some form of RAG won't get you as good results as using a plain larger model. But frankly I'm not convinced that RAG-free chat is the future anyway, and 8B models are extremely fast and cheap to run. Combined with good RAG they can do very well.
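
To make the "good RAG" point concrete, here is a bare-bones retrieval sketch against a local Ollama instance; the embedding model, chat model, and toy documents are all assumptions, and a real setup would chunk documents and use a vector store rather than naive top-1 cosine similarity.

```python
import requests
import numpy as np

OLLAMA = "http://localhost:11434"

def embed(text: str) -> np.ndarray:
    r = requests.post(f"{OLLAMA}/api/embeddings",
                      json={"model": "nomic-embed-text", "prompt": text})
    r.raise_for_status()
    return np.array(r.json()["embedding"])

docs = ["Coolify deploys apps via docker compose.",
        "Ollama serves local models over a REST API.",
        "WireGuard is a lightweight VPN protocol."]
doc_vecs = np.stack([embed(d) for d in docs])

question = "How do I talk to a local model programmatically?"
q = embed(question)
scores = doc_vecs @ q / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q))
context = docs[int(scores.argmax())]   # naive top-1 retrieval

answer = requests.post(f"{OLLAMA}/api/generate", json={
    "model": "llama3.1:8b", "stream": False,
    "prompt": f"Answer using this context:\n{context}\n\nQuestion: {question}",
}).json()["response"]
print(answer)
```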

reply
Rick76
2 days ago
[-]
I essentially use it like everyone else: I use it to search through my personal documents, because I can control the token size and file embedding.
reply
williamcotton
2 days ago
[-]
I work with a lot of attorneys'-eyes-only documents, and most protective orders do not allow shipping these files off to a third party.
reply
archerx
2 days ago
[-]
For me, at least, the biggest feature of some self-hosted LLMs is that you can get them to be "uncensored": you can get them to tell you dirty jokes or have the bias removed on controversial and politically incorrect subjects. Basically, you have a freedom you won't get from most of the main providers.
reply
ndheebebe
2 days ago
[-]
And reliability. When Azure sends you the "censored output" status code, it has basically failed and no retry is gonna help. And unless you are some corp, you won't get approved for lifting the censoring.
reply
ein0p
2 days ago
[-]
I run Mistral Large on 2x A6000. Nine times out of 10 the response is the same quality as GPT-4o. My employer does not allow the use of GPT for privacy-related reasons, so I just use a private Mistral for that.
reply
cma
2 days ago
[-]
They are much more flexible, you can e.g. edit the system's own responses rather than waste context on telling it a correction.
reply
laniakean
2 days ago
[-]
I mostly use it to write quick scripts or generate texts if they follow some pattern. Also, getting it up and running with LM Studio is pretty straightforward.
reply
theodric
2 days ago
[-]
I've been enjoying fine-tuning various models with various data, for example 17 years of my own tweets, and then just cranking up the temperature and letting the model generate random crap that cracks me up. Is that practical? Is joy practical? I think there's a place for it.
reply
segalord
2 days ago
[-]
I use it exclusively for users on my personal website to chat with my data. I've given the setup tools with read access to my files and data.
reply
netdevnet
2 days ago
[-]
Is this not something you can do with non-self-hosted LLMs like ChatGPT? If you expose your data, it should be able to access it, IIRC.
reply
worldsayshi
2 days ago
[-]
You can absolutely do that but then you pay by the token instead of a big upfront hardware cost. It feels different I suppose. Sunk cost and all that.
reply
netdevnet
2 days ago
[-]
Am I right in thinking that a self-hosted Llama wouldn't have the kind of restrictions ChatGPT has, since it has no initial system prompt?
reply
dtquad
2 days ago
[-]
All the self-hosted LLM and text-to-image models come with some restrictions trained into them [1]. However there are plenty of people who have made uncensored "forks" of these models where the restrictions have been "trained away" (mostly by fine-tuning).

You can find plenty of uncensored LLM models here:

https://ollama.com/library

[1]: I personally suspect that many LLMs are still trained on WebText, derivatives of WebText, or using synthetic data generated by LLMs trained on WebText. This might be why they feel so "censored":

>WebText was generated by scraping only pages linked to by Reddit posts that had received at least three upvotes prior to December 2017. The corpus was subsequently cleaned

The implications of so much AI trained on content upvoted by 2015-2017 redditors is not talked about enough.

reply
nubinetwork
2 days ago
[-]
> All the self-hosted [...] text-to-image models come with some restrictions trained into them

https://github.com/huggingface/diffusers/issues/3422

reply
thrdbndndn
2 days ago
[-]
My go-to test for uncensoring is to ask the LLM to write an erotic novel.

But I haven't yet found any "uncensored" ones (on Ollama) that work. Did I miss something?

(On the contrary: when ChatGPT first came out, it was trivial to jailbreak it to make it write erotica.)

reply
dtquad
2 days ago
[-]
Try the popular (by pull count) Dolphin models:

https://ollama.com/library/dolphin-mistral

reply
TomK32
2 days ago
[-]
I found that "Don't censor your answer" works as intended, and my self-hosted LLM happily delivers smut.
reply
Kudos
2 days ago
[-]
Many protections are baked into the models themselves.
reply
nubinetwork
2 days ago
[-]
That depends on the frontend; you can supply a system prompt if you want to... whether the model follows it to the letter is another problem...
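
A minimal sketch of supplying your own system prompt through Ollama's chat endpoint (the model tag and prompts are placeholders); as others note in this thread, restrictions baked into the weights may still override whatever you put here.

```python
import requests

resp = requests.post("http://localhost:11434/api/chat", json={
    "model": "llama3.2",   # example model tag
    "stream": False,
    "messages": [
        # The system message is entirely under your control when self-hosting.
        {"role": "system", "content": "You are a blunt assistant with no boilerplate disclaimers."},
        {"role": "user", "content": "Tell me a dirty joke."},
    ],
})
resp.raise_for_status()
print(resp.json()["message"]["content"])
```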
reply
exe34
2 days ago
[-]
It has a sanitised output. You might want to look for "abliterated" models, where the general performance might drop a bit but the guard-rails have been diminished.
reply
seungwoolee518
2 days ago
[-]
Great post!

However, do I need to install the CUDA toolkit on the host?

I haven't installed the CUDA toolkit when using a containerized platform (like Docker).

reply
thangngoc89
2 days ago
[-]
You don't need to install the CUDA toolkit on the host system.

The NVIDIA driver + NVIDIA Container Toolkit will do the job. You can check the official instructions at [0].

[0] https://docs.nvidia.com/datacenter/cloud-native/container-to...

reply
seungwoolee518
2 days ago
[-]
Thanks, I was a bit confused about installing the CUDA Toolkit on the host (because I don't install any software except the driver and the container toolkit).
reply
ossusermivami
2 days ago
[-]
AI-generated blog posts (or reworded ones, whatever) are getting kind of irritating; like playing chess against the computer, it feels soulless.
reply
hmcamp
2 days ago
[-]
How can you tell this post was AI-generated? I'm curious.
reply
varun_ch
2 days ago
[-]
I’m curious about how good the performance with local LLMs is on ‘outdated’ hardware like the author’s 2060. I have a desktop with a 2070 super that it could be fun to turn into an “AI server” if I had the time…
reply
magicalhippo
2 days ago
[-]
I've been playing with some LLMs like Llama 3 and Gemma on my 2080Ti. If it fits in GPU memory the inference speed is quite decent.

However I've found quality of smaller models to be quite lacking. The Llama 3.2 3B for example is much worse than Gemma2 9B, which is the one I found performs best while fitting comfortably.

Actual sentences are fine, but it doesn't follow prompts as well and it doesn't "understand" the context very well.

Quantization brings down memory cost, but there seems to be a sharp decline below 5 bits for those I tried. So a larger but heavily quantized model usually performs worse, at least with the models I've tried so far.

So with only 6GB of GPU memory I think you either have to accept the hit on inference speed by only partially offloading, or accept fairly low model quality.

Doesn't mean the smaller models can't be useful, but don't expect ChatGPT 4o at home.

That said, if you've got a beefy CPU, it can be reasonable to have it handle a few of the layers.

Personally I found Gemma2 9B quantized to 6 bit IIRC to be quite useful. YMMV.
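
As a rough sketch of the memory math behind "fits in GPU memory": weights only, with a hand-wavy 20% allowance for KV cache and runtime overhead, so treat the numbers as ballpark figures and lower bounds (real usage grows with context length).

```python
def approx_vram_gb(params_billion: float, bits_per_weight: float, overhead: float = 1.2) -> float:
    """Very rough VRAM estimate: weight storage plus ~20% for KV cache/activations."""
    bytes_for_weights = params_billion * 1e9 * bits_per_weight / 8
    return bytes_for_weights * overhead / 1e9

for name, params, bits in [("Llama 3.2 3B @ Q8", 3, 8),
                           ("Gemma 2 9B @ Q6",   9, 6.5),   # Q6_K is roughly 6.5 bits/weight
                           ("Gemma 2 27B @ Q4", 27, 4.5)]:
    print(f"{name}: ~{approx_vram_gb(params, bits):.1f} GB")
# -> roughly 3.6 GB, 8.8 GB, and 18.2 GB respectively
```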

reply
magicalhippo
2 days ago
[-]
Yes, gemma-2-9b-it-Q6_K_L is the one that works well for me.

I tried gemma-2-27b-it-Q4_K_L but it's not as good, despite being larger.

Using llama.cpp and models from here[1].

[1]: https://huggingface.co/bartowski

reply
khafra
2 days ago
[-]
If you want to set up an AI server for your own use, it's exceedingly easy to install LM Studio and hit the "serve an API" button.

Testing performance this way, I got about 0.5-1.5 tokens per second with an 8GB 4bit quantized model on an old DL360 rack-mount server with 192GB RAM and 2 E5-2670 CPUs. I got about 20-50 tokens per second on my laptop with a mobile RTX 4080.
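
For reference, LM Studio's local server speaks an OpenAI-compatible HTTP API; here is a minimal client sketch, where the default port 1234 and the model field are assumptions (check the server tab in your install).

```python
import requests

# LM Studio's "serve an API" mode exposes an OpenAI-compatible endpoint,
# by default on localhost:1234.
resp = requests.post("http://localhost:1234/v1/chat/completions", json={
    "model": "local-model",   # LM Studio generally answers with whichever model is loaded
    "messages": [{"role": "user", "content": "Give me a one-line summary of WireGuard."}],
    "temperature": 0.2,
})
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```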

reply
taosx
2 days ago
[-]
LM studio is so nice, I'm up and running in 5 minutes. ty
reply
dtquad
2 days ago
[-]
I am using an old laptop with a GTX 1060 6 GB VRAM to run a home server with Ubuntu and Ollama. Because of quantization Ollama can run 7B/8B models on an 8 year old laptop GPU with 6 GB VRAM.
reply
alias_neo
2 days ago
[-]
You can get a relative idea here: https://developer.nvidia.com/cuda-gpus

I use a Tesla P4 for ML stuff at home, it's equivalent to a 1080 Ti, and has a score of 7.1. A 2070 (they don't list the "super") is a 7.5.

For reference, 4060 Ti, 4070 Ti, 4080 and 4090 are 8.9, which is the highest score for a gaming graphics card.

reply
whitefables
2 days ago
[-]
Here's what it looks like in real time: https://youtu.be/3vhJ6fNW8AI
reply
thisguyagain
2 days ago
[-]
What’d you use to record that? Looks really great.
reply
whitefables
2 days ago
[-]
Screen studio
reply
taosx
2 days ago
[-]
The last time I tried a local LLM was about a year ago, with a 2070S and a 3950X; performance was quite slow for anything beyond Phi 3.5, and the small models' quality feels worse than what some providers offer for cheap or free, so it doesn't seem worth it with my current hardware.

Edit: I've loaded Llama 3.1 8B Instruct GGUF and I got 12.61 tok/sec, and 80 tok/sec for 3.2 3B.

reply
nubinetwork
2 days ago
[-]
I'm happy with a Radeon VII, unless the model is bigger than 16gb...
reply
_blk
2 days ago
[-]
Why disable LVM for a smoother reboot experience? For encryption I get it since you need a key to mount, but all my setups have LVM or ZFS and I'd say my reboots are smooth enough.
reply
satvikpendem
2 days ago
[-]
I love Coolify, used to use v3, anyone know how their v4 is going? I thought it was still a beta release from what I saw on GitHub.
reply
j12a
2 days ago
[-]
Coolify is quite nice; I've been running some things with the v4 beta.

It reminds me a bit of making websites with a page builder: easy to install and click around to get something running fairly quickly without thinking too much about it.

The problems are quite similar too: the training wheels get stuck in the woods more easily, hehe.

reply
raybb
2 days ago
[-]
The V4 beta is working well for me. Also, the new core dev Coolify hired mentioned in a tweet this week that they're fixing lots of bugs to get ready for the V4 stable release.
reply
whitefables
2 days ago
[-]
I'm using the v4 beta in the blog post. I didn't try v3, so there's no point of comparison, but I'm loving it so far!

It was so easy to get other non-AI stuff running!

reply
sorenjan
2 days ago
[-]
Can you use a self-hosted LLM that fits in 12 GB VRAM as a reasonable substitute for Copilot in VSCode? And if so, can you give it documentation and other code repositories to make it better at a particular language and platform?
reply
0xedd
2 days ago
[-]
Technically, yes, but it will yield poor results. We did it internally at big corp n+1 and, frankly, it blows. Other than menial tasks, it's good for nothing but a scout badge.
reply
tbrownaw
2 days ago
[-]
Is that really that much worse than full copilot, though? When we tried it this past spring, it was really cool but not quite useful enough to actually stick with.
reply
vincentclee
2 days ago
[-]
Instead of `watch -n 0.5 nvidia-smi` to track GPU usage, one can use `nvtop`:

https://github.com/Syllo/nvtop

reply
cranberryturkey
2 days ago
[-]
How is Coolify different from Ollama? Is it better? Worse? I like Ollama because I can pull models and it exposes a REST API to me, which is great for development.
reply
grahamj
2 days ago
[-]
Might want to skim the article
reply
cranberryturkey
2 days ago
[-]
I did. I just realized it's a totally different tool, for deploying apps.
reply
grahamj
2 days ago
[-]
fwiw that was my reaction to the title too :D
reply
eloycoto
2 days ago
[-]
I have something like this, and I'm super happy with AnythingLLM, which allows me to add a custom board with my workspaces, RAG, etc. I love it!
reply
ragebol
2 days ago
[-]
Probably saves a bit on the gas bill for heating too
reply
rglullis
2 days ago
[-]
Snark aside, even in Germany (where electricity is very expensive) it is more economical to self host than to pay for a subscription to any of the commercial providers.
reply
CraigJPerry
2 days ago
[-]
I don't know, it's kind of amazing how good the lighter-weight self-hosted models are now.

Given a 16 GB system with CPU-only inference, I'm hosting Gemma 2 9B at Q8 for LLM tasks and SDXL Turbo for image work, and besides the memory usage creeping up for a second or so while I invoke a prompt, they're basically undetectable in the background.

reply
szundi
2 days ago
[-]
If only we had heat-pump computers
reply
ragebol
2 days ago
[-]
I'd gladly run whatever model you want at home and rent it out, so you can pay for the heating, the GPU, and the power consumed :-)
reply
keriati1
2 days ago
[-]
What model size is used here? How much memory does the GPU have?
reply
thawab
2 days ago
[-]
He is using the 3B one, since it's the default when downloading from Ollama: https://ollama.com/library/llama3.2
reply