One negative, however, is that the author didn't mention Coolify in the article even though it's stated in the title :(
Another good article on the same topic that I have already bookmarked is: Setting up a Production-Ready VPS from Scratch (https://dreamsofcode.io/blog/setting-up-a-production-ready-v...)
To expand my knowledge on this topic, generally, after I finish reading this type of content, I copy the article link, put it in an LLM and prompt it:
"here's an article on 'topic name/article title': https://article.link. Grasp it, analyze it then expand each section mentioned from your own knowledge. Add additional sections relative to the subject"
Been running this setup for about a year now, and it's the first time I am actually self-hosting and feeling fairly confident about it.
Aside from that, which distro would you choose for Coolify? I’m debating between Ubuntu 24.04 and Debian 13.
OVH VPS - 24 vCPU (or threads), 96GB RAM, $53.40/month
Hetzner VPS [1] - 16 vCPU, 32GB RAM, $54.90/month
DO Droplet - General Compute, Regular CPU, 16 vCPU, 64GB RAM, $504/month
Linode - 20 vCPU, 96GB RAM, $576/month
Upcloud - 24 vCPU, 96GB RAM, $576/month
I don't know what CPU OVH is using; all the others are AMD EPYC or newer Intel Xeon. But the pricing difference is so great that even if they were Intel E-core CPUs it would still be a pretty damn good deal.
[1] There is a cheaper Intel vCPU option, but that hardware is older and only becomes available when other customers cancel their plans and free up slots. So only the newer AMD option is used for the comparison.
(no hourly pricing, cannot instantly deploy, etc.)
Not that their pricing isn't really, really good, but it depends on your use case. DO / Linode / Upcloud / EC2 / etc. do have insane pricing in comparison, yes.
[1] https://www.hetzner.com/cloud/
Plus, you have the option of scaling up/down very easily, as long as you do not grow your hard disk size beyond the minimum configuration.
8 vCore + 24 GB RAM + 200 GB SSD NVMe VPS @ OVH
Is it a honeypot? Also, did OVH change prices recently? I remember checking a couple of years ago and it was more expensive than Hetzner.
OVH has a similar setup but is way more diversified into other product lines. I'd personally never touch them after the fire that they never bothered to explain to those of us affected by it. With the amount of downtime they had there, it became very clear that their ability to recover from a situation - any situation - is crap.
Most of the SMEs in France are customers.
They are cheap because they do most things in-house, with a lot of recycling, because their DCs are mostly located in low-cost places (real estate, rents, salaries...), and because they go for low margins.
They may not be a honeypot but those requirements seem like a honeypot.
Support is super cold and dismissive too.
Didn't try OVH, but I can't imagine it is much worse than Hetzner.
Custom hardware, down to the DC design, racks, water cooling, and economies of scale. There are reasons why some datacenters are more expensive than others, and the fire at the old OVH DC shows why. Although I remember OVH did explain they don't use that design anywhere else. Doing the custom hardware part, like water cooling and racks, isn't the rocket-science part; doing it well while keeping it cost-efficient is the most difficult part.
Network quality. OVH owns its own network, laying cables between its own DCs and other exchanges. It used to be slower, but this has become less of an issue in 2025. In the old days, the difference between a premium network connection and the commodity partners a DC offered made a lot of difference. (It still does, but it's less of a concern.)
Minimal support - although that is not a concern anymore in 2025, because everyone got used to cloud computing, which has zero support most of the time.
Expectation of low margin. I think both Hetzner and OVH have accepted the fact that they are in the computing commodity business with low margins and aim for volume, while most US businesses will always try to improve their margin and venture into SaaS or other managed services. Which means both Hetzner and OVH are also experts at squeezing pennies out of everything. As someone who used to work in a commodity business, I have a lot of respect for these people, as what they do is harder than most people think.
Again, these are things off the top of my head from when I was keeping an eye on VPS offerings. I just checked - LowEndBox ( https://lowendbox.com ) is still alive and well after almost 20 years! Before cloud computing was a thing or went mainstream, there were plenty of low-cost, low-end VPS options like OVH and Hetzner. So this isn't exactly new; they just happen to have grown to their current size.
Even ECC - for 99% of applications (and especially on low-end VPS servers) it's less likely to be a problem.
The only thing I have found to be an issue with Hetzner is on dedicated servers, and specifically the hard drives. I've had new servers provisioned where they've given me decade-old drives that are on the verge of failure. It's less of an issue now, as most of their servers ship with new NVMe drives, but I dare say in 3-4 years' time it'll be a problem again when they reuse those, and parts of the hardware range will see instant, non-recoverable failures.
Although in 2025, instead of people using Ryzen for servers, AMD launched EPYC Grado, which is similar in price to, if not slightly cheaper than, Ryzen at 32 vCPUs and offers official ECC memory support.
It’s great for throwaway machines, e.g. CI. But don’t rely on them
These days I'd take their Ampere VPS servers over the dedicated ones though; the performance and reliability are way better (mostly just due to it being brand-new hardware).
If you're just looking for the name, AMD sells EPYC branded AM4/AM5 cpus that have remarkably similar specs to the Ryzen AM4/AM5 chips.
Depending on what you're doing, consumer hardware is often more than enough. And it's managed hosting... if the (whatever) dies, you just yell at the host and get new hardware, no big deal if you're doing reasonable backups.
And in my experience EC2 is not that reliable. I have Hetzner dedicated servers with more uptime than EC2 nodes.
I used to have no issues with OVH for hosting, but I moved to Hetzner because it was significantly cheaper at the time. I've not had one single scrap of trouble with Hetzner, but I'm always looking for something I can use for failover in an emergency.
Hetzner has two servers with the same number of cores, but one costs only half as much. They don't say this anywhere, but if you test the performance you indeed only get half as much on the cheaper server.
OVH support response times were atrocious: multiple days of waiting, and it was only escalated weeks later.
They never figured it out, just suggested spinning up a new server. By that point I had already migrated, but it was a bit scary since it was my first time managing infrastructure.
Just anecdata :) maybe buy a support plan if they have it.
Geekbench 6 single core score on these is about 900-1100.
pre { margin: 2rem 0 !important; padding: 1rem !important; }
Each code block has such giant padding and margins that you can only read 3 lines of text in a viewport.
Also, I would suggest installing Webmin/Virtualmin which takes care of a lot of issues like deploying new subdomains or new users.
This article is really about preparing a VPS for Coolify deployment, but stops short of Coolify setup AFAICT
For me this was fine and I understand why they do this but it wasn't clear to me at the start.
Guess how I found out... :(
It was only Hetzner which didn't, and instead they turned off networking to all of our stuff (dedicated servers, some VMs, etc) with no warning. Then their support team screwed us around for a while as well.
I'm about as unimpressed with them as it's possible to get. :(
I updated all of the places I remembered, but missed Hetzner and a few others. Only Hetzner didn't have their shit together enough to gracefully notify us. Or account support staff who were at all interested in assisting.
I'm still not exactly sure of the correct terminology for the situation. I'd noticed two suspicious transactions on the credit card, rang up my bank about it, and we agreed they'd better generate a new credit card and kill the existing one.
I then contacted all of the places that I knew of to update them with the new credit card details. I missed Hetzner and (from rough memory) two others. Only Hetzner wasn't able to handle it correctly.
Literally, no kind of notification, warnings, anything at all. Due to this, and their support team being incredibly unhelpful during the outage, they're now on my personal blacklist for literally everything.
So instead of strongly recommending them, which I used to do, we've migrated 95% of everything off Hetzner and I'm hanging out for it to be 100%. And I warn others away from them at every opportunity. Like here. :)
We will not be returning to Hetzner. Ever.
Zero actual mention of Coolify, and the manual steps to PREPARE for it seem far more complicated than, "Just base your VM on the Docker Compose base image, and then tweak a couple things".
I'll stick with what I have. Nice advantage is that I can migrate from host to host and 99% of it is just copying the Docker Compose YAML file.
Docker Compose and a bash script are all I need to run 2 VMs, with hourly backups to S3 + WAL streaming to S3 + PG and Redis streaming replication to another VM. That is the bare minimum for production.
If you haven't done so already, I'd highly recommend reading the postgres documentation about continuous backups before setting it up, as it teaches the fundamentals: https://www.postgresql.org/docs/current/continuous-archiving...
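For anyone skimming, the core of that chapter boils down to a couple of postgresql.conf settings; the archive_command below is the local-directory example from the docs, and in practice you'd swap in whatever pushes WAL to S3 for you (wal-g, pgBackRest, etc.):

    # postgresql.conf - continuous archiving, minimal sketch
    wal_level = replica
    archive_mode = on
    archive_command = 'test ! -f /mnt/server/archivedir/%f && cp %p /mnt/server/archivedir/%f'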
I think if you register each service separately in Coolify, it runs OK-ish.
But I've now switched to the same setup as you had and ironically it has been so much simpler to run than coolify.
I'm really happy people are working on projects like coolify, but currently it's far from ready for any serious use (imo).
So you can just ssh in and do the coolify install and then switch off root login I guess, if you're willing to just blow away the server and start over if you ever needed to ssh in again.
I tried a from-scratch Coolify deploy recently and it kept failing with SSH key errors. On the other server we have it working and deploying many projects; however, the "just give it a docker compose" method has never worked for us.
Ended up going with CapRover because I can more quickly spin up a Node.js app on there with git hooks (so it builds on each commit to a specific branch).
Both offer this functionality; there's just less friction with CapRover. But Coolify is probably more extensive.
If you want to get a bit more fancy than just using their panel for it, you can configure via API: https://docs.hetzner.cloud/reference/cloud#firewalls
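For example, creating a firewall that only allows SSH in looks roughly like this (the token, name, and rule set are placeholders; double-check the exact schema against the docs above):

    curl -X POST https://api.hetzner.cloud/v1/firewalls \
      -H "Authorization: Bearer $HCLOUD_TOKEN" \
      -H "Content-Type: application/json" \
      -d '{"name": "web-fw",
           "rules": [{"direction": "in", "protocol": "tcp",
                      "port": "22", "source_ips": ["0.0.0.0/0", "::/0"]}]}'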
Does anyone have objections against Hetzner's firewall solution that I'm not aware of?
Including node & PM2 update to the latest, running PM2 as a systemd service, or simply ditching PM2 altogether, as well as backups, performance settings, and monitoring, log rotation, and cleanup, etc.
Beware of using the same server for running the apps and for building them. Quite often, building the app needs a lot of RAM and CPU, so it is undesirable to do it on the same host.
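One common workaround (a sketch; the registry host and tag are made up, and the compose file on the server is assumed to reference that tag): build and push the image somewhere else, and only pull on the production box:

    # on your laptop / CI runner
    docker build -t registry.example.com/myapp:1.2.3 .
    docker push registry.example.com/myapp:1.2.3

    # on the production server
    docker pull registry.example.com/myapp:1.2.3
    docker compose up -d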
That's dangerous, because what if your IP changes? You'll be locked out?
Another layer on top is useful to remove the noise from the logs. And if you have anything aside from SSH on the server that doesn't need to be public, restricting it via a VPN or something like that is useful anyway. Most other software that listens on your server has likely much more attack surface than SSH.
You'll be surprised how many bots get thwarted by just changing the port.
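If you want to try it, it's a one-line change in /etc/ssh/sshd_config (2222 below is just an example port; keep your current session open and allow the new port in your firewall before reconnecting):

    # /etc/ssh/sshd_config
    Port 2222

    # then restart the SSH daemon (Debian/Ubuntu)
    sudo systemctl restart ssh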
[0] https://docs.docker.com/engine/network/packet-filtering-fire...
Anyone know some infrastructure-as-code framework that makes it easy to spin up and maintain production servers? Something declarative, perhaps, but not Kubernetes?
specs != performance
When I was looking for a hobby cloud provider, I did some benchmarking of similarly spec'd instances. Note that the degree of overcommitting CPU/RAM varies by cloud provider and instance type. I found Vultr to be consistently faster than DO. I had used OVH in the past and wasn't interested. I also didn't consider Hetzner because it seemed unlikely they could match performance at their prices. I later saw other benchmarking that showed Vultr as being one of the fastest. That was quite some time ago and I haven't checked lately, but I also have no reason to switch.

The last time I compared several VPS with similar pricing, Hetzner was by far the fastest - but I did not try Vultr back then.
I'm not very familiar with Traefik and ended up reading the CrowdSec and Traefik docs when setting up.
Managed to set up CrowdSec on a Coolify instance on DigitalOcean, but it required more fiddling (SSH and sudo nano) than I'd like. It was a good learning experience, but the resulting Droplet is not something I'm confident hosting anything important on.
I have also restricted unneeded ports on DO's firewall and configured Ubuntu Pro. I wonder what else I missed?
I might rebuild the server following this guide, as CrowdSec seems to be throwing errors/warnings during apt update / apt upgrade. Plus, this guide feels more complete.
Are you just hoping to gain more insight into the differing proposed technologies and waiting for someone to give you more information, or are you expressing frustration that people have their own opinions on which layers to use for their own setups?
If you’re simply asking for information on how to use docker, and how to adapt TFA to include it, you’re in luck. One can find many tutorials on how to dockerize a service (docker’s own website has quite a lot of excellent tutorials and documentation on this topic), and plenty of examples of how to harden it, use SSL, et cetera. This is a very well trodden path.
That said, I’m tempted to read your response with the latter interpretation and my response would be to observe that holding a different opinion on something isn’t inherently ungrateful, or rude, nor is it presumptuous to share that one would, say, recommend dockerizing the production app instead of deploying directly to the server.
That’s the nature of discourse, and the whole reason why hacker news has a comment section in the first place. A lovely article such as TFA is shared by someone, and then folks will want to talk about it and share their own insights and opinions on the contents. Disagreeing with a point in the article is a feature, not a bug.
You could have written “I’d love to learn more, do you have a tutorial or walkthrough that you found helpful?” or formulated the question in any other way that demonstrates a respect for the commenter’s time and any effort they may put into finding a tutorial they think you would enjoy.
“So, where is the walkthrough” implies (at least to me) that what you are really saying is “obviously you must have written a walkthrough, or else your comment has no value, so why haven’t you given it to me.” It reads like a challenge, and given the way you’re communicating now, I feel justified in this reading.
A simple question can still be rude, and yours definitely sounded rude. I tried to give you room to exercise the benefit of the doubt, but based on this and your other comment, you're just entitled. Have a nice day.
Cloud pricing no longer makes any sense.
The personnel matter is harder to quantify. But note that the need for infra skills didn't go away with cloud. Cloud is complicated; you still need people who understand it, and that still costs money, be it additional skills spread across developers or dedicated cloud experts, depending on organisation size. These aren't a far cry from sysadmins. It really depends on the skillset of your individual team. These days traditional hosting has become so much easier, with so much automation, that it's not as specialist a skill or as time-consuming or complicated as many people think it is.
Cloud _can_ be cheaper, but you need the correct mix of requirements and skills gap to make it actually cheaper.
We at SadServers moved from big cloud managed K8s to Hetzner + Edka and it's an order of magnitude cheaper (obv some perks are missing).
I set my clients up with Hetzner for the core and front it with Cloudflare. You can front KEDA-scaled services with Cloudflare containers and you're pretty much bulletproof; even if Hetzner shits the bed, you're still running.
It is much cheaper than Hetzner and still in Europe.
Hetzner: CX22 - 2 vCPU, 4GB RAM, 40GB disk, 20TB traffic - €3.79
Hostup: VPSXS - 2 vCPU, 4GB RAM, 50GB disk, 2TB traffic - €3.54
But the real issue is that the price is a bit of a red herring: the CX22 plan is not available everywhere (only in the old datacenters in Europe, I think), and if you need to scale up your machine you can't use the bigger Intel plans (CX32, CX42, etc.) because they have been unavailable for a long time, so you either have to move to AMD-based plans (CPX31, etc.), which cost almost double for the same amount of RAM, or to Arm64-based plans.
Hetzner: https://bgp.he.net/AS24940
Hostup: https://bgp.he.net/AS214640
Hetzner also has extra features like firewalls and whatnot that it doesn't seem Hostup has.
There are many variations you can do. I would recommend caddy instead of nginx for beginners these days.
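A whole reverse proxy with automatic HTTPS is basically just this in a Caddyfile (the domain and upstream port are placeholders):

    # Caddyfile
    example.com {
        reverse_proxy localhost:3000
    }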
I only know a little bit about what Google does to secure the VMs and hypervisors and that the attitude several years ago was that even hardened VMs weren't really living up to their premise yet.
When using one of these cost-focused providers do people typically just assume the provider has root in the VM? I sometimes see them mentioned in the context of privacy but I haven't seen much about the threat model.
If your data is sensitive, encrypt it locally and then send it. The reality is that most people are running something like a website, an API, or a SaaS, and basically just have to pick a provider they trust somewhat and take reasonable security precautions themselves. Beyond that, it's probably not as secure as it could be unless it's in a facility you own or control access to.
It's true you shouldn't put super sensitive data on a VPS because the host could access it. Regular sensitive is fine - your host will be in a world of trouble if they access your data without permission, so you can generally trust them not to read your emails or open your synced nudes. But if your data is so sensitive that the host would risk everything to read it, or would avoid getting in trouble at all (e.g. national security stuff) then absolutely don't use a VPS. For that level of paranoia you'd need at least a dedicated server which makes it unlikely the host has a live backdoor into the system, ideally your own server so you know they don't, and for super duper stuper paranoid situations, one with a chassis intrusion switch linked to a bag of thermite (that's a real thing).
It's pretty amazing how well it works and how much you learn in the process.
I love these blogs. Making infra wherever it is or however it's done seems to be a lost art.
Don't do this; just create a new user and give it sudo privileges.
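On Debian/Ubuntu that's roughly this ("deploy" is just an example username):

    adduser deploy
    usermod -aG sudo deploy
    # copy your SSH key over, then set "PermitRootLogin no" in /etc/ssh/sshd_config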
The utility of changing the SSH port is debatable, but it would lead to less noise in logs. Also, instead of limiting SSH connections to a source IP, you might consider putting the server behind Tailscale and only allowing incoming SSH connections over its interface: https://tailscale.com/kb/1077/secure-server-ubuntu (this also solves the logs problem)
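The linked guide essentially comes down to allowing SSH only on the Tailscale interface, e.g. with ufw (the interface name assumes a default Tailscale install, and the delete rule only applies if you had previously opened port 22 publicly):

    sudo ufw allow in on tailscale0 to any port 22 proto tcp
    sudo ufw delete allow 22/tcp
    sudo ufw reload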
Also, why do you think that it is better to not change the root password? It sounds like a very suspicious recommendation.
You don't need to open any ports to use Tailscale, and its job is to a) get nodes to connect directly or b) shuttle jibber-jabber encrypted with nodes' private keys from point A to point B and back again through Tailscale-owned distributed servers. Tailscale only sees the traffic it needs and nothing else. It's free because it's "cost-effective" to run and because it can rely on word-of-mouth marketing because it solves a really complex problem in an elegant way, which makes enterprise customers want to pay for it.
Not changing the root password is correct, because at least on Ubuntu, it has been locked, meaning the only way to use it is through sudo or SSH keys (common during initial server setup). Setting a password for root and using su has no benefits over using sudo and comes with significant downsides, because it is unauditable.
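You can check this yourself; passwd -S prints the account status ("L" means the password is locked), and passwd -l locks it explicitly if it isn't already:

    # show root's password status
    sudo passwd -S root
    # lock the root password (no-op if already locked)
    sudo passwd -l root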
Interesting. How does this work? Will the emails go to spam?
Is anyone else immediately turned off by articles like this written in "ChatGPT voice"? The information in the guide might be good, but I didn't make it past the introduction.
I've been burned too many times by LLM-slop. If an article is written in ChatGPT voice, it might still have good content but your readers don't know that. Editing for style and using your own voice helps credibly signal that you put effort into the content.
I have seen that they do this very frequently, to many people, for all kinds of convoluted reasons, and they often block accounts that have been running for years because the owners don't satisfy the requirements of such an out-of-the-blue demand (without clarifying how they failed to comply well enough).
For example, the Reddit page for Hetzner has no shortage of desperate clients who were suddenly blocked and are trying to read the corporate runes of this company's policies, and whatever means of appeal can be improvised, just so they can regain access to some service they'd come to depend on.
Imagine depending on that for your personally important backend infrastructure or data backup. No thanks, fuck them.
To anyone else, I truly do not recommend such a "service". Putting the backbone of a needed digital system into the hands of a company that can, and frequently does, essentially blacklist users at random - because maybe you're from the wrong country, or oopsy, you used a VPN the wrong way, or you just don't meet some other intrusive, opaque criteria - is not a safe way to run something important to you on the technology they sell.