Why Self-Host? (romanzipp.com)
343 points | 2 days ago | 53 comments
codegeek
2 days ago
[-]
"start self-hosting more of your personal services."

I would make the case that you should also self-host more as a small software/SaaS business; it's not quite the boogeyman that a lot of cloud vendors want you to think it is.

Here is why: most software projects/businesses don't require the scale and complexity for which you truly need the cloud vendors and their expertise. For example, you don't need Vercel or Netlify to deploy Next.js or whatever static website. You can set up Nginx or Caddy (my favorite) on a simple Ubuntu VPS and boom. For the majority of projects, that will do.

90%+ of projects can be self-hosted with the following (a minimal sketch of the whole setup is at the end of this comment):

- A well-hardened VPS with good security controls. There are plenty of good articles online on how to do the most important things (disable root login, key-based SSH only, etc.).

- Set up a reverse proxy like Caddy (my favorite) or Nginx. Boom: static files and static websites can now be served. No need for a CDN unless you are talking about millions of requests per day.

- Run your backend/API under something simple like supervisor or even native systemd.

- The same reverse proxy can also forward requests to the backend and other services as needed. Not that hard.

- Self-host a MySQL/Postgres database and set up the right security controls.

- Most importantly: set up backups for everything using a script and cron, and test them periodically.

- If you really want to feel safe against DoS/DDoS, add Cloudflare in front of everything.

So you end up with:

Cloudflare/DNS => Reverse Proxy (Caddy/Nginx) => Your App.

- You want to deploy? A git pull should do it for most projects (PHP etc.). If you have to rebuild a binary, that's another step, but still doable.

You don't need Docker or containers. They can help, but they aren't needed for small or even mid-sized projects.

Yes, you can claim that a lot of these things are hard, and I would say they are not that hard. The majority of projects don't need web scale or whatever.
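To make that concrete, here is a minimal sketch of the pieces above (domain, paths, ports and database names are placeholders, not a prescription):

    # /etc/ssh/sshd_config.d/hardening.conf -- key-only SSH, no root login
    PermitRootLogin no
    PasswordAuthentication no

    # /etc/caddy/Caddyfile -- automatic TLS, static files, proxy to the backend
    example.com {
        root * /var/www/example.com
        file_server
        reverse_proxy /api/* 127.0.0.1:3000
    }

    # /etc/systemd/system/myapp.service -- keep the backend running
    [Unit]
    Description=My app backend
    After=network.target

    [Service]
    User=app
    WorkingDirectory=/srv/myapp
    ExecStart=/srv/myapp/bin/server
    Restart=always

    [Install]
    WantedBy=multi-user.target

    # crontab -e -- nightly database dump plus an off-box copy; test restores now and then
    0 3 * * * pg_dump -U app appdb | gzip > /backups/appdb-$(date +\%F).sql.gz
    30 3 * * * rsync -a /backups/ backupuser@other-box:/backups/example/

From there a deploy is mostly `git pull` plus `systemctl restart myapp`, and `systemctl reload caddy` when the proxy config changes.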

reply
mikepurvis
2 days ago
[-]
The main thing that gives me anxiety about this is the security surface area associated with "managing" a whole OS— kernel, userland, all of it. Like did I get the firewall configured correctly, am I staying on top of the latest CVEs, etc.

For that reason alone I'd be tempted to do GHA workflow -> build container image and push to private registry -> trivial k8s config that deploys that container with the proper ports exposed.

Run that on someone else's managed k8s setup (or Talos if I'm self-hosting) and it's basically as easy as doing it on my own VM, but this way I'm only responsible for my application and its interface.
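For what it's worth, a rough sketch of the tail end of that pipeline (registry, image name and port are hypothetical):

    # deploy.yaml -- one Deployment plus a Service in front of it
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: myapp
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: myapp
      template:
        metadata:
          labels:
            app: myapp
        spec:
          containers:
          - name: myapp
            image: registry.example.com/myapp:latest   # pushed by the GHA workflow
            ports:
            - containerPort: 3000
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: myapp
    spec:
      selector:
        app: myapp
      ports:
      - port: 80
        targetPort: 3000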

reply
oxalorg
2 days ago
[-]
I left my VPS open to password logins for over 3 years, no security updates, no firewalls, no kernel updates, no apt upgrades; only fail2ban and I survived: https://oxal.org/blog/my-vps-security-mess/

Don't be me, but even if you royally mess up things won't be as bad as you think.

reply
Matumio
2 days ago
[-]
I've had password login enabled for decades on my home server, not even fail2ban. But I do have an "AllowUsers" list with three non-cryptic user names. (None of them are my domain name, but nice try.)

Last month I had 250k failed password attempts. If I had a "weak" password of 6 random letters (I don't), and all 250k had guessed a valid username (only 23 managed that), that would give... uh, one expected success every 70 years?

That sounds risky actually. So don't expose a "root" user with a 6-letter password. Add two more letters and it is 40k years. Or use a strong password and forget about those random attempts.
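Rough arithmetic behind those ballpark figures, assuming 26 lowercase letters and a steady 250k guesses per month:

    26^6 ≈ 3.1e8 passwords  ->  3.1e8 / 250,000 ≈ 1,200 months ≈ 100 years to exhaust
    26^8 ≈ 2.1e11 passwords ->  2.1e11 / 250,000 ≈ 835,000 months ≈ 70,000 years to exhaust

An expected hit lands somewhere around half of that, so the estimates above are the right order of magnitude.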

reply
m463
1 day ago
[-]
I wonder about:

- silently compromised systems, active but unknown

- VPS provider doing security behind your back

reply
mikepurvis
1 day ago
[-]
I'd be worried about this too. Like there must be AI bots that "try the doors" on known exploits all over the internet, and once inside just do nothing but take a look around and give themselves access for the future. Maybe they become a botnet someday, but maybe the agent never saw the server doing anything of value worth waking up its master for: running a crypto wallet, a shard of a database with a "payments" table, an instance of a secrets manager like Vault, or who knows what else might get flagged as interesting.
reply
jesterson
1 day ago
[-]
Security is way more nuanced than "hey look I left my door open and nothing happened!". You are suggesting, perhaps inadvertently, a very dangerous thing.
reply
interroboink
2 days ago
[-]
> Run that on someone else's managed k8s setup ... this way I'm only responsible for my application and its interface.

It's the eternal trade-off of security vs. convenience. The downside of this approach is that if there is a vulnerability, you will need to wait on someone else to get the fix out. Probably fine nearly always, but you are giving up some flexibility.

Another way to get a reasonable handle on the "managing a whole OS ..." complexity is to use some tools that make it easier for you, even if it's still "manually" done.

Personally, I like FreeBSD + ZFS-on-root, which gives "boot environments"[1], which lets you do OS upgrades worry-free, since you can always rollback to the old working BE.

But also I'm just an old fart who runs stuff on bare metal in my basement and hasn't gotten into k8s, so YMMV (:

[1] eg: https://vermaden.wordpress.com/2021/02/23/upgrade-freebsd-wi... (though I do note that BEs can be accomplished without ZFS, just not quite as featureful. See: https://forums.freebsd.org/threads/ufs-boot-environments.796...)

reply
mikepurvis
1 day ago
[-]
I think for a normal shlub like me who is unlikely to be on top of everything it’s really more of a cost / convenience tradeoff.

It might take Amazon or Google a few hours or a day to deploy a critical zero-day patch but that’s in all likelihood way better than I’d do if it drops while I’m on vacation or something.

reply
virtue3
2 days ago
[-]
I used digital ocean for hosting a wordpress blog.

It got attacked pretty regularly.

I would never host an open server from my own home network for sure.

This is the main value-add I see in cloud deployments: OS patching, security, trivial stuff I don't want to have to deal with on the regular but that's super important.

reply
tadfisher
2 days ago
[-]
Wordpress is just low-hanging fruit for attackers. Ideally the default behavior should be to expose /wp-admin on a completely separate network, behind a VPN, but no one does that, so you have to run fail2ban or similar to stop the flood of /wp-admin/admin.php requests in your logs, and deal with Wordpress CVEs and updates.
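If wp-admin does stay exposed, a minimal fail2ban jail targeting the login endpoint looks roughly like this (log path and thresholds are assumptions for a typical Nginx setup):

    # /etc/fail2ban/filter.d/wp-login.conf
    [Definition]
    failregex = ^<HOST> .* "POST /wp-login\.php

    # /etc/fail2ban/jail.d/wp-login.conf
    [wp-login]
    enabled  = true
    port     = http,https
    filter   = wp-login
    logpath  = /var/log/nginx/access.log
    maxretry = 5
    bantime  = 3600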

More ideal: don't run Wordpress. A static site doesn't execute code on your server and can't be used as an attack vector. They are also perfectly cacheable via your CDN of choice (Cloudflare, whatever).

reply
manmal
1 day ago
[-]
A static site does run on a web server.
reply
moehm
1 day ago
[-]
Yes, but the web server is just reading files from disk and not invoking an application server. So if you keep your web server up to date, you are at much lower risk than if you also had to keep your application and its programming environment secure.
reply
manmal
1 day ago
[-]
That really depends on the web server, and the web app you'd otherwise be writing. If it's a shitty static web server, then a JVM- or BEAM-based web app might actually be safer.
reply
moehm
1 day ago
[-]
Uh, yeah, I thought about Nginx or Apache and would expect them to be more secure than your average self-written application.
reply
rrix2
1 day ago
[-]
a static site is served by a webserver, but the software to generate it runs elsewhere.
reply
manmal
1 day ago
[-]
Yes. And a web server has an attack surface, no?
reply
mikepurvis
1 day ago
[-]
I think it’s reasonable to understand that nginx/caddy serving static files (or better yet a public s3 bucket doing so) is way, way less of a risk than a dynamic application.
reply
manmal
1 day ago
[-]
Of course, that’s true for those web servers. If kept up to date. If not, the attack surface is actually huge because exploits are well known.
reply
63stack
1 day ago
[-]
What are these huge attack surfaces that you are talking about? Any links?
reply
codegeek
2 days ago
[-]
The thing with WordPress is that it increases the attack surface through shitty plugins. If I have a WP site, I add this line to wp-config.php:

    define( 'DISALLOW_FILE_EDIT', true );
This one setting will save you a lot of headaches. It disables the built-in theme/plugin file editor in the admin dashboard, so no one can edit code files through wp-admin; changing them requires access to the actual server.
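If you also want to block plugin/theme installs and updates from the dashboard entirely, there is a related constant you can set alongside it (note it blocks core updates from wp-admin as well):

    define( 'DISALLOW_FILE_MODS', true ); // no plugin/theme installs or updates via the dashboard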
reply
cyberax
2 days ago
[-]
You can mitigate a lot of security issues by not exposing your self-hosted stack to the Internet directly. Instead you can use a VPN to your home network.

An alternative is a front-end proxy on a box with a managed OS, like OpenWRT.
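A minimal WireGuard sketch of that idea, where the phone/laptop only ever talks to private addresses (keys, addresses and the endpoint are placeholders):

    # /etc/wireguard/wg0.conf on the client
    [Interface]
    PrivateKey = <client-private-key>
    Address    = 10.8.0.2/24

    [Peer]
    PublicKey  = <home-server-public-key>
    Endpoint   = home.example.com:51820
    AllowedIPs = 10.8.0.0/24
    PersistentKeepalive = 25

`wg-quick up wg0` and the self-hosted services are reachable on 10.8.0.x without being exposed to the Internet at all.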

reply
pabs3
8 hours ago
[-]
There are distros that keep up to date with CVEs for you; many of them are set up to automatically update packages, restart processes, and reboot after Linux kernel upgrades. Once every few years you'll need to upgrade to the latest version of the distro, but that's usually pretty quick.
reply
lewiscollard
1 day ago
[-]
> The main thing that gives me anxiety about this is the security surface area associated with "managing" a whole OS— kernel, userland, all of it. Like did I get the firewall configured correctly, am I staying on top of the latest CVEs, etc.

I've had a VPS facing the Internet for over a decade. It's fine.

  $ ls -l /etc/protocols
  -rw-r--r-- 1 root root 2932 Dec 30  2013 /etc/protocols
I would worry more about security problems in whatever application you're running on the operating system than I would the operating system.
reply
waynesonfire
1 day ago
[-]
> gives me anxiety about this is the security surface

I hate how dev-ops has adopted and deploys fine-grained RBAC permissions in the cloud. Every little damn thing is a ticket for a permissions request. Many times it's not even clear which permission sets are needed. It takes many iterations to wade through the various arbitrary permission gates that clouds have invented.

The orgs are pretending like they're operating a bank, in staging.

This gives me anxiety.

reply
czhu12
2 days ago
[-]
This is why I built https://canine.sh -- to make installing all that stuff a single step. I was the cofounder of a small SaaS that was blowing >$500k / year on our cloud stack

Within the first few weeks, you'll realize you also need Sentry; otherwise, errors in production just become digging through logs. That's a +$40/m cloud service.

Then you'll want something like datadog because someone is reporting somewhere that a page is taking 10 seconds to load, but you can't replicate it. +$300 / m cloud service.

Then, if you ever want to aggregate data into a dashboard to present to customers -- Looker / Tableau / Omni +$20k / year.

Data warehouse + replication? +$150k / year

This goes on and on and on. The holy grail is to be able to run ALL of these external services in your own infrastructure on a common platform with some level of maintainability.

Cloud Sentry -> Self Hosted Sentry

Datadog -> Self Hosted Prometheus / Grafana

Looker -> Self Hosted Metabase

Snowflake -> Self Hosted Clickhouse

ETL -> Self Hosted Airbyte

Most companies realize this eventually, and that's why they eventually move to Kubernetes. I think it's also why indie hackers often can't quite understand why the "complexity" of Kubernetes is necessary, and why just having everything run on a single VPS isn't always enough.

reply
threetonesun
2 days ago
[-]
This assumes you're building a SAAS with customers though. When I started my career it was common for companies to build their own apps for themselves, not for all companies to be split between SAAS builders and SAAS users.
reply
jerf
2 days ago
[-]
I enjoy the flipside... working for a company that does provide SaaS, sometimes I find myself reminding people that we don't necessarily need a full multi-datacenter redundant deploy with logging and metrics and alerting and all this other modern stuff for a convenience service that is used strictly internally, infrequently (but enough to be worth having), with effectively zero immediate consequences if it goes down for a bit.

You can take that too far, of course, and if you've got the procedures all set up you often might as well follow them, but at the same time, you can blow thousands and thousands of dollars really quickly to save yourself a minor inconvenience or two over the course of five years.

reply
gregsadetsky
2 days ago
[-]
I'm also in this space - https://disco.cloud/ - similarly to you, we offer an open source alternative to Heroku.

As you well know, there are a lot of players and options (which is great!), including DHH's Kamal, Flightcontrol, SST, and others. Some are k8s based - Porter and Northflank, yours. Others, not.

Two discussion points: one, I think it's completely fair for an indie hacker, or a small startup (Heroku's and our main customers - presumably yours too), to go with some ~Docker-based, git-push-compatible deployment solution and be completely content. We used to run servers with nginx and apache on them without k8s. Not that much has changed.

Two, I also think that some of the needs you describe could be considered outside of the scope of "infra": a database + replication, etc. from Crunchy Bridge, AWS RDS, Neon, etc. - of course.

But tableau? And I'm not sure that I get what you mean by 150k/year - how much replication are we talking about? :-)

reply
czhu12
2 days ago
[-]
Yeah so happy to share how that happened to us.

If you want to host a Redshift instance, and get Google Analytics logs + Twilio logs + Stripe payments + your application database into a data warehouse, then graph all that in a centralized place (Tableau / Looker / etc.).

A common way to do that is:

- Fivetran for data streaming

- Redshift for data warehousing

- Looker for dashboarding

You're looking at $150k / year easily.

reply
authorfly
2 days ago
[-]
Yeah, you are very right.

If you start seeing some success, you realize that while your happy path may work for 70% of real cases, it's not really optimal for converting most of them. Sentry helps a lot: you see session replay, you get excited.

You realize you can A/B test... but you need a tool for that...

Problem: things like OpenReplay will just crash and not restart themselves. With multi-container setups, some random part going down will just stop your session collection without you noticing. Try to debug that? Good luck, it'll take at least half a day. And often you restore functionality, only to have another random error take it down a couple of months later, or you realize the default configuration only keeps 500 MB of logs/recordings (what), etc, etc...

You realize you are saving $40/month in exchange for a very big hassle and, worse, it may not work when you need it. You go back to Sentry etc.

Does Canine change that?

reply
czhu12
2 days ago
[-]
Canine just makes deploying Sentry / Grafana / Airbyte + 15k other open-source packages a one-click install, which then just gives you a URL you can use. Because it's running on top of Kubernetes, a well-built package should have healthchecks that will detect an error and auto-restart the instance.

Obviously if [name your tool] is built so that it can be bricked [1], even after a restart, then you'll have to figure it out. Hopefully most services are more robust than that. But otherwise, Kubernetes takes care of the uptime for you.

[1] This happened with a travis CI instance we were running back in the day that set a Redis lock, then crashed, and refused to restart so long as the lock was set. No amount of restarts fixed that, it required manual intervention

reply
thewebguyd
2 days ago
[-]
> Majority of projects don't need the web scale or whatever.

Truth. All the major cloud platform marketing is YAGNI but for infrastructure instead of libraries/code.

As someone who works in ops and has since starting as a sysadmin in the early 00s, it's been entertaining to say the least to watch everyone rediscover hosting your own stuff as if it's some new innovation and wasn't ever possible before. It's like that old MongoDB is web scale video (https://www.youtube.com/watch?v=b2F-DItXtZs)

Watching devs discover Docker was similarly entertaining back then, when those of us in ops had been using LXC and BSD jails, etc. to containerize code pre-DevOps.

Anyway, all that to say: go buy your gray-bearded sysadmins a coffee and let them help you. We would all be thrilled to bring stuff back on-prem or go back to self-hosting and running infra again, and we probably have a few tricks to teach you.

reply
isodev
2 days ago
[-]
And there is an extra perk: Unlike cloud services, system skills and knowledge are portable. Once you learn how systemd or ufw or ssh works, you can apply it to any other system.

I'd even go as far as to say that the time/cost required to, say, learn the quirks of Docker and containers and layered builds is higher than what is needed to learn how to administer a website on a Debian server.

reply
codegeek
2 days ago
[-]
Well said. For me, "how to administer a website on a Debian server" is a must if you work in Web Dev because hosting a web app should not require you to depend on anyone else.
reply
neoromantique
2 days ago
[-]
>I’d even go as far as to say that the time/cost required to say learn the quirks of Docker and containers and layering builds is higher than what is needed to learn how to administer a website on a Debian server.

But that is irrelevant, as Docker brings more to the table than a plain Debian server can by design. One could argue that LXD is sufficient for these cases, but that is even more hassle than Docker.

reply
tracker1
2 days ago
[-]
For my home or personal server stuff... I'm pretty much using Proxmox as a VM host, with Ubuntu Server as the main server, most things running in Docker, and Caddy installed on the host. Most apps are stacked in /apps/appname/docker-compose.yaml with data directories mounted underneath. This just tends to simplify most of my backup/restore/migrate etc.

I just don't have the need to do a lot of work on the barebones server beyond basic ufw and getting Caddy and Docker running... Caddy will reverse-proxy all the apps running in containers. It really simplifies my setup.
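Roughly what that per-app layout looks like (service name, image and port are just examples):

    # /apps/appname/docker-compose.yaml
    services:
      app:
        image: someorg/someapp:latest
        restart: unless-stopped
        ports:
          - "127.0.0.1:8080:8080"   # only the host can reach it; Caddy fronts it
        volumes:
          - ./data:/data            # everything worth backing up lives under /apps/appname

    # matching entry in the host Caddyfile
    app.example.com {
        reverse_proxy 127.0.0.1:8080
    }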

reply
neoromantique
2 days ago
[-]
That's essentially what I do (with a little extra step of having a dedicated server in Hetzner peered with my homelab with wireguard to use as internet facing proxy + offsite backup server).

Ah, also, Docker is managed with komo.do, but otherwise it's a simple GUI over docker-compose.

reply
tracker1
2 days ago
[-]
That's cool... I really should take a next step to bridge my home setup with my OVH server. It's a mixed bag, mostly in that the upgrade to "business" class from home is more than what I pay a month to rent the full server and IP block on OVH... But I've got a relatively big NAS at home I wouldn't mind leveraging more/better.

Aside: I really want to like Nextcloud, but as much as I like aspects of it, there's plenty I don't like as well.

reply
neoromantique
1 day ago
[-]
I don't mean proper L2 bridging, I do it with wireguard, as I am behind CGNAT at home.

It's mostly to unify VLANs and simplify management (i.e everything is routed through my main router, rather than loose tunnels on servers/vm's)

reply
isodev
2 days ago
[-]
I think the original vision, where containers "abstract" the platform to such an extent that you can basically deploy your dev environment, has been somewhat diminished. The complexity of the ecosystem has grown to such an extent that we need tools to manage the tools that help us manage our services.

And that's not even considering the "tied to a single corporation" problem. If us-east-1 wakes up tomorrow and decides to become gyliat-prime-1, we're all screwed because no more npm, no more docker, no more CloudFlare (because someone convinced everyone to deploy captchas etc).

reply
chrisweekly
2 days ago
[-]
Mostly agreed, and thanks for sharing your POV. One slight disagreement:

"No need for CDN etc unless you are talking about millions of requests per day."

A CDN isn't just for scale (offloading requests to origin), it's also for user-perceived latency. Speed is arguably the most important feature. Yes, beware premature optimization... but IMHO delivering static assets from the edge, close as possible to the user, is borderline table stakes and has been for at least a decade.

reply
eddieroger
2 days ago
[-]
You're right, including your warning of premature optimization, but if the premise of the thread is starting from a VPS, user-perceived latency shouldn't be as wild as self-hosting in a basement or something because odds are your VPS is on a beefy host with big links and good peering anyway. If anything, I'd use the CDN as one more layer between me and the world, but the premise also presupposed a well-hardened server. Personally, the db and web host being together gave me itches, but all things secure and equal, it's a small risk.
reply
assimpleaspossi
2 days ago
[-]
Oh you can go a lot simpler than that.

For 20 years I ran a web dev company that hosted bespoke web sites for theatre companies and restaurants. We ran FreeBSD, PostgreSQL, and Nginx or the H2O server, with sendmail.

Never an issue and had fun doing it.

reply
whatevaa
3 hours ago
[-]
How interactive are those sites?
reply
kijin
2 days ago
[-]
> No need for CDN etc unless you are talking about millions of requests per day.

Both caddy and nginx can handle 100s of millions of static requests per day on any off-the-shelf computer without breaking a sweat. You will run into network capacity issues long before you are bottlenecked by the web server software.

reply
whatevaa
3 hours ago
[-]
Network capacity and geo-proximity are the issues, not number of requests.
reply
bluGill
2 days ago
[-]
A small business should only self-host if they are a hosting company. Everyone else should pay their local small-business self-hosting company to host for them.

This is not a job for the big guys. You want someone local who will take care of you. They also come when a computer fails and make sure updates are applied. And by "come" I mean physically sending a human to you. This will cost some money, but you should be running your business, not trying to learn computers.

reply
throwzasdf
2 days ago
[-]
> A small business should onlyself host if they are a hosting company.

OK, that's your opinion. In my view, a business should self-host if they want to maintain data sovereignty.

> everyone else should pay their local small business self hosting company to host for them.

That assumes all small businesses have at least one "local small business self hosting company" to choose from.

reply
codegeek
2 days ago
[-]
I meant a small software/SAAS business. I would agree with you about a non software business. Edited my comment.
reply
BinaryIgor
2 days ago
[-]
Exactly; but I would rather say that you don't need a CDN unless you have tens of thousands of requests per second and your user base is global. A single powerful machine can easily handle thousands to tens of thousands of requests per second.
reply
dlisboa
2 days ago
[-]
Issue is network saturation. Most VPSs have limited bandwidth (1Gbps), even if their CPUs could serve tens of thousands of req/s.
reply
Sohcahtoa82
2 days ago
[-]
Even 1 Gbps is plenty to handle 1,000 connections unless you're serving up video.

That's 1 Mbps per user. If your web page can't render (ignoring image loading) within a couple seconds even on a connection that slow, you're doing something wrong. Maybe stop using 20 different trackers and shoving several megabytes of JavaScript to the user.

reply
dlisboa
2 days ago
[-]
The thread is about self-hosted CDN capabilities. Serving large images and video is what CDNs are for. I’m just talking technical limitations here, chill a little bit with the “your web page”.
reply
BinaryIgor
2 days ago
[-]
You can always host your stuff on a few machines and then create a few DNS A records to load balance it on the DNS level :)
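i.e. something like this in the zone, using documentation-range IPs as placeholders; most resolvers will rotate or pick among the answers:

    www.example.com.  300  IN  A  203.0.113.10
    www.example.com.  300  IN  A  203.0.113.11
    www.example.com.  300  IN  A  203.0.113.12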
reply
jcgl
2 days ago
[-]
This sometimes works, sometimes not. Because of how DNS resolution works, you're totally at the mercy of how your DNS resolver and/or application behave.
reply
codegeek
2 days ago
[-]
Agreed. I was being generous to the CDN lovers :). People don't know how powerful static file servers like Nginx and Caddy are. You don't need no CDN.
reply
indigodaddy
1 day ago
[-]
For me, CDN is more valuable for avoidance of huge data transfer bills from the origin host, vs the endpoint getting overwhelmed. Obviously those are related and both could happen without a CDN, but the big bills scare me more at the end of the day.
reply
noduerme
1 day ago
[-]
Are you talking about the VPS just serving as a reverse proxy and running the server on-prem or at home? Or are you having a reverse proxy on some VPS with a static IP send connections to other VPSs on a cloud service? I've self-hosted toy apps at home this way, with a static IP VPS as a reverse proxy in the middle, and it is indeed easy. With Tailscale you don't even need a static IP at home. A gigabit connection and a $100 box can easily handle plenty of load for a small app or a static site. To me, the reason I would never put something like that into production even for a very small client is, even in a fairly wealthy city in America, the downtime caused by electrical outages and local internet outages would be unacceptable.
reply
Sohcahtoa82
2 days ago
[-]
Everything you said 110%

I wish I understood why some engineers feel the need to over-engineer the crap out of everything.

Is it because of wishful thinking? They think their blog is gonna eventually become so popular that they're handling thousands of requests per second, and they want to scale up NOW?

I just think about what web servers looked like at the turn of the millennium. We didn't have all these levels of abstraction. No containers. Not even VMs. And we did it on hardware so weak that it would be considered utterly worthless by today's standards.

And yet...it worked. We did it.

Now we've got hardware that is literally over 1000 times faster. Faster clocks, more cache, higher IPC, and multiple cores. And I feel like half of the performance gains are being thrown away by adding needless abstractions and overhead.

FFS...how many websites were doing just fine with a simple LAMP stack?

reply
jstummbillig
2 days ago
[-]
I think this is a moderately good idea, if you are certain that you want to remain involved with the business operationally, forever.

It's still not ever a great idea (unless, maybe, this is what you do for a living for your customers), simply because it binds your time, which will absolutely be your scarcest asset if your business does anything.

I am speaking from my acute experience.

reply
R_Spaghetti
2 days ago
[-]
You write: "I'm fortunate enough to work at a company (enum.co) where digital sovereignty is not just a phrase."

info.addr.tools shows [1]:

    MX 1 smtp.google.com.
    TXT "mailcoach-verification=a873d3f3-0f4f-4a04-a085-d53f70708e84"
    TXT "v=spf1 include:_spf.google.com ~all"
    TXT "google-site-verification=TTrl7IWxuGQBEqbNAz17GKZzS-utrW7SCZbgdo5tkk0"

This is not just a phrase, it is a DNS entry. Using the most evil in phrases of digital sovereignty.

[1] https://info.addr.tools/enum.co

reply
gregsadetsky
2 days ago
[-]
To be fair to enum, the services they sell are around k8s, an S3 equivalent, and devops. If they sold/promised self-hosting/sovereign email services, and then were "caught" using Gmail, that might be a different story.

Your point stands - they're not fully completely independent. And maybe the language in the OP's article could have been different.. but the OP also specifically says "Oh no, I said the forbidden phrase: Self-hosted mail server. I was always told to never under any circumstances do that. But it's really not that deep."

They're aware of the issue, everyone is aware of the issue. It's an issue :-) But I get your point too.

reply
kachapopopow
2 days ago
[-]
I think it would be fair for them to use something like Proton or an enterprise MSFT relay service. Actually, this is only for inbound mail, which can be self-hosted without any issues; SPF, on the other hand (outbound verification), does need a relay at minimum.
reply
maxheyer
2 days ago
[-]
Hi R_Spaghetti,

Founder of enum here. That's a fair point, and a good catch.

Honestly, using Google Workspace for our internal email was a pragmatic choice early on to let us focus on building our core product. It's a classic startup trade-off, and one we're scheduled to fix in the coming weeks.

I want to be clear, though: our customer-facing platform and all its data are and always have been 100% sovereign. Our infrastructure is totally independent of Big Tech.

Thanks for holding us accountable!

reply
udev4096
2 days ago
[-]
> Our infrastructure is totally independent of Big Tech

That's wishful thinking. You cannot be truly independent from them, no one can. They control major BGP routes, major ASN, big fiber cables, etc. It's just impossible

reply
moehm
1 day ago
[-]
'If you wish to make an apple pie from scratch, you must first invent the universe.'

- Carl Sagan

They aren't going to cut the fiber cables if your Google accounts gets locked.

reply
crossroadsguy
2 days ago
[-]
That’s fine. But as R_Spaghetti has kindly pointed out maybe you could try and convince your colleague to change the post to rather accommodate “… digital sovereignty is still just a phrase …” and then possibly add something like “and we are working to change that” :) Just a thought. Of course we all are free to talk anything we want, do anything we want, and definitely write and post anything we want.
reply
auspiv
2 days ago
[-]
Email is the one notable exception for self hosting. I self host everything, but let email be handled by 3rd parties.
reply
mrsilencedogood
2 days ago
[-]
Yeah I really will give people a pass here. The state of email is one of the worst collective mistakes I think we've made.

You can literally be an expert in everything relevant - and your mail will still not get delivered just because you're not google/mailgun/etc.

I was trying to do a very simple email-to-self use-case. I was sending mail from my VPS (residential IP not even allowed at all), which was an IPv4 I'd had for literally 2+ years, to exactly only myself - my personal Gmail. I had it all set up - SPF, DKIM, TLS, etc. And I was STILL randomly getting emails sent directly to spam / showing up with the annoying ! icon (grates on my sensibilities). I ended up determining - after tremendous, tremendous pain in researching / debugging - that my DKIM sigs and SPF were all indeed perfect (I had been doubting myself until I realized I could just check what Gmail thought about SPF/DKIM/etc. It all passed). And my only sin was just not being in the in-crowd.

Incredibly frustrating. The only winning move is not to play. I ended up just switching from emails-to-self to using a discord webhook to @ myself in my private discord server, so I get a push notification.

And this was just me, sending to myself! Low volume (0-2 emails per WEEK). Literally not even trying to actually send emails to other people.

reply
benou
2 days ago
[-]
I've been self-hosting for 17 years and counting.

In my opinion, the pragmatic solution I use is:

1) use a specialized distribution (I use yunohost but there are others). This makes configuring SPF, DKIM, TLS and more a breeze

2) use a reputable relay to send your emails (I use OVH but again there are plenty of other choices)

Of course it means you are not "pure", because emails you send will go through a 3rd party (the relay), but it solved the delivery issue entirely for me, so I can continue to benefit from all the other benefits of self-hosting.
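For point 2, the relay part boils down to a few lines of Postfix config, or the equivalent switch in the distro's admin panel (relay host and credentials file here are placeholders):

    # /etc/postfix/main.cf (excerpt)
    relayhost = [smtp.relay-provider.example]:587
    smtp_sasl_auth_enable = yes
    smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd   # created with postmap
    smtp_sasl_security_options = noanonymous
    smtp_tls_security_level = encrypt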

reply
constantius
1 hour ago
[-]
Do you run Yunohost in production? Did you consider Cloudron/Coolify/etc.? I use Yunohost for personal services, it's extremely robust, but has a few lacking features that you'd expect to have in more professional setups.
reply
MaKey
1 day ago
[-]
I'm self-hosting my mail server without a relay. It is still possible, you just need to be persistent. In the beginning, Microsoft might just let your mails vanish, and while they won't confirm this when you contact them, doing so eventually resolved my delivery issues with their mail servers. With Google I didn't have any issues.
reply
immibis
2 hours ago
[-]
Self-host receiving email, even if you outsource sending it.
reply
chronci739
2 days ago
[-]
> This is not just a phrase, it is a DNS entry. Using the most evil in phrases of digital sovereignty.

damn, this guy don’t fuck around. respect

reply
radiator
1 day ago
[-]
> TXT "google-site-verification=TTrl7IWxuGQBEqbNAz17GKZzS-utrW7SCZbgdo5tkk0"

Just to clarify, this part is not evil, it is just a compromise one makes to prevent Gmail from classifying outgoing email as spam (I think).
reply
sksksk
2 days ago
[-]
With self-hosting email, if the digital sovereignty aspect is more important to you than the privacy aspect...

What I do is use Gmail with a custom domain, self-host an email server, and use mbsync to continuously download my emails from Gmail. Then I connect to that email server for reading my emails, but still use Gmail for sending.

It also means that Google can't lock me out of my emails, I still retain all my emails, and if I want to move providers, I simply change the DNS records of my domain. But I don't have any issues around mail delivery.
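A minimal mbsync sketch of that pull-only setup (account name, paths and the password command are placeholders; newer isync releases rename SSLType to TLSType):

    # ~/.mbsyncrc
    IMAPAccount gmail
    Host imap.gmail.com
    User me@example.com
    PassCmd "pass show gmail-app-password"
    SSLType IMAPS

    IMAPStore gmail-remote
    Account gmail

    MaildirStore gmail-local
    Path ~/mail/
    Inbox ~/mail/INBOX
    SubFolders Verbatim

    Channel gmail
    Far :gmail-remote:
    Near :gmail-local:
    Patterns *
    Create Near
    SyncState *

Run `mbsync gmail` from cron every few minutes and point your own IMAP server (Dovecot or similar) at the resulting Maildir.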

reply
npodbielski
2 days ago
[-]
I did all of those DNS shenanigans with SPF, DMARC and the other ones like 6 years ago.

I think I had problems with my emails like twice, with one Exchange server of some small recruitment company. I think it was misconfigured.

Ah, there was also some problem with Gmail at the beginning: they banned my domain because I was sending test emails to my own account there. I had to register my domain on their BS email Postmaster Tools website and configure my DNS with some key.

Overall I had many more problems with automatic backups, services going down for no reason, dynamic IPs, etc. The email server just works.

reply
carlosjobim
2 days ago
[-]
The custom domain is all you need for complete e-mail sovereignty. As long as you have it, you can select between hundreds (thousands?) of providers, and take your business elsewhere at any time.
reply
sksksk
2 days ago
[-]
You risk losing your historical emails if you don’t back them up somewhere
reply
carlosjobim
1 day ago
[-]
Of course you use e-mail clients, so that you have your mails on at least one device. And separate backup as well.
reply
jraph
2 days ago
[-]
Why not also do the sending? Deliverability concerns?
reply
singron
2 days ago
[-]
Not OP, but yes. For personal use, you don't have enough traffic to establish reputation, so you get constantly blocked regardless of DKIM/DMARC/SPF/rDNS. Receiving mail is reliable though, so you can do that yourself and outsource just sending to things like Amazon SES or SMTP relays.
reply
tracker1
2 days ago
[-]
Depending on your mail flow, there's SendGrid and other options at a pretty reasonable cost to handle delivery concerns. I have one server set up for SendGrid and another set up for direct delivery... the only issue I've had sending from my own is to Outlook.com servers (not O365 or Hotmail though). With DMARC/SPF, etc., Gmail has been okay as well.
reply
throwzasdf
2 days ago
[-]
> For personal use, you don't have enough traffic to establish reputation, so you get constantly blocked regardless of DKIM/DMARC/SPF/rDNS.

I've been self-hosting personal low-traffic email since the 1990s, and I don't have that problem.

reply
jraph
1 day ago
[-]
Me too, since 2016. I had issues with Microsoft, but it has been otherwise flawless.
reply
sksksk
2 days ago
[-]
Yep, exactly, it removes a whole class of potential problems.

Doing the sending myself wouldn't improve my digital sovereignty, which is my primary motivation.

reply
jdoe1337halo
2 days ago
[-]
Self hosting is awesome. I have been doing it for about a year since I quit my full time SWE job and pursued SaaS. I am using Coolify on a $20/month Hetzner server to host a wide variety of applications: Postgres, Minio (version before community neuter) for S3, Nuxt application, NextJs applications, Umami analytics, Open WebUI, and static sites. It was definitely a learning process, but now that I have everything set up, it really is just plug and play to get a new site/service up and running. I am not even using 1/4 of my server resources either (because I don't have many users xd). It is great.

https://coolify.io/docs/

reply
pengfeituan
1 day ago
[-]
Excellent topic, I can offer a perspective from my own experience. The biggest benefit of running a homelab isn't cost savings or even data privacy, though those are great side effects. The primary benefit is the deep, practical knowledge you gain. It's one thing to read about Docker, networking, and Linux administration; it's another thing entirely to be the sole sysadmin for services your family actually uses. When the DNS stops working or a Docker container fails to restart after a power outage, you're the one who has to fix it. That's where the real learning happens.

However, there's a flip side that many articles don't emphasize enough: the transition from a fun "project" to a "production" service. The moment you start hosting something critical (like a password manager or a file-syncing service), you've implicitly signed up for a 24/7 on-call shift. You become responsible for backups, security patching, and uptime. It stops being a casual tinker-toy and becomes a responsibility.

This is the core trade-off: self-hosting is an incredibly rewarding way to learn and maintain control over your data, but it's not a free lunch. You're trading the monetary cost of SaaS for the time and mental overhead of being your own IT department. For many on HN, that's a trade worth making.
reply
xnx
2 days ago
[-]
Defining "self-host" so narrowly as meaning that the software has to run on a server in your home closet ensures that it will always remain niche and insignificant. We should encourage anything that's not SaaS: open source non-subscription phone apps, plain old installable software that runs on Windows, cloud apps that can easily be run (and moved) between different hosts, etc.

Anything that prevents lock-in and gives control to the user is what we want.

reply
al_borland
2 days ago
[-]
We can’t have the word lose all meaning either. A cloud app that uses standard protocols and can be moved is still being run on a server you don’t own or control, by someone who could decide to change polices about data collection and privacy at any time. You can leave, but will you be able to migrate before the data is harvested? How would you ever know for sure?
reply
FinnKuhn
2 days ago
[-]
The general definition (although it can be pretty loose) is that you need to control the computer/server your software is running on. Whether that is a VPS or a server in your basement really doesn't matter all that much in the end when talking about whether something is self-hosted or not.
reply
ziml77
2 days ago
[-]
Why doesn't it matter? A VPS is still someone else's computer. They could be monitoring what you're doing on there because they run the hypervisor and they have physical access.
reply
FinnKuhn
1 day ago
[-]
As I said, it's a loose definition, but the same could also be said, if I place a second computer at my parents place for example so I can have an offsite backup. They technically could also be monitoring it and have physical access. I don't think anyone would argue that this isn't self-hosting though.

For me at least self-hosting is mostly about having control of a computer/server software wise, not physically. That is probably an important differentiator from homelabbing, which is more focused on controlling the hardware. You can combine the two, but for self-hosting you don't need to physically control the hardware.

reply
kijin
2 days ago
[-]
At the very least, it should include colocating your server with somebody else who has better power and connectivity. As long as you have root, it's your server.
reply
srcreigh
2 days ago
[-]
It does include windows installable software. People often start out by running stuff that way (maybe in Docker).
reply
turtlebits
2 days ago
[-]
Self hosting is great and I'm thankful for all the many ways to run apps on your own infra.

The problem is backup and upgrades. I self host a lot of resources, but none I would depend on for critical data or for others to rely on. If I don't have an easy path to restore/upgrade the app, I'm not going to depend on it.

For most of the apps out there, backup/restore steps are minimal or nonexistent (compared to the one-liner to get up and running).

FWIW, Tailscale and Pangolin are godsends to easily and safely self-host from your home.

reply
hamdingers
2 days ago
[-]
What kind of backup solution are you expecting?

Every self-hosted app runs in Docker, where the backup solution is to back up the folders you mounted and the docker-compose.yml. To restore, put the folders back and run docker compose up again. I don't need every app to implement its own thing; that would be a waste of developer time.
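In practice that's something like (paths and names are examples):

    # backup: the compose file plus the bind-mounted data dirs
    tar czf /backups/myapp-$(date +%F).tar.gz -C /apps myapp

    # restore on a fresh box
    tar xzf /backups/myapp-2025-01-01.tar.gz -C /apps
    cd /apps/myapp && docker compose up -d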

reply
tracker1
2 days ago
[-]
+1 for the above... all my apps are under /app/appname/ (compose and data)... my backup is an rsync script that runs regularly... when I've migrated, I'll compose down, rsync to the new server, then compose up... update DNS, etc.

It's been a pretty smooth process. No, it's not a multi-region k8s cluster with auto-everything... but you can go a long way with docker-compose files, and it's well worth it.

reply
turtlebits
2 days ago
[-]
That doesn't work for databases unless you stop the container. You'll likely end up with a corrupt backup.
reply
mamcx
1 day ago
[-]
?

Any decent RDBMS can be backed up live without issues. You only need to stop for a restore (well, without going into complex tricks).
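For example, a nightly live logical dump straight through the container, no downtime needed (service, user and database names are assumptions):

    # crontab entry; -T keeps docker compose exec happy without a TTY
    0 3 * * * cd /apps/myapp && docker compose exec -T db pg_dump -U app appdb | gzip > /backups/myapp-$(date +\%F).sql.gz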

reply
turtlebits
1 day ago
[-]
Yes, but not by "simply" snapshotting the volume/disk as the previous poster suggested.
reply
hamdingers
1 day ago
[-]
Then stop the container.

A self hosted app can have a few minutes of downtime at 3am while the backup script runs.

reply
esseph
1 day ago
[-]
Streaming replication to a read-only DB off-site.

Shut that one down and back it up from time to time.

Then copy that to a third site with rsync/etc

reply
bean469
2 days ago
[-]
> Tailscale and Pangolin are godsends to easily and safely self-host from your home.

Instead of Tailscale, I can highly recommend self-hosting netbird[1] - very active project, works great and the UI is awesome!

1. https://github.com/netbirdio/netbird

reply
udev4096
2 days ago
[-]
I would rather use Headscale than netbird. Headscale is well established and very stable. netbird has a lot of problems, and the fact that their issue list is hardly looked at by the devs is even more concerning.
reply
aborsy
1 day ago
[-]
Will tailnet lock be available in the near future?

Also, several ports need to be opened. How is its vulnerability history?

reply
wltr
2 days ago
[-]
What is Pangolin? I tried my search, but it returned the animal.
reply
turtlebits
2 days ago
[-]
reply
jimangel2001
2 days ago
[-]
My self-hosted stack includes:

1. immich
2. jellyfin
3. ghost
4. wallabag
5. freshrss
6. vaultwarden
7. nextcloud
8. overleaf/sharelatex
9. matrix server
10. pds for atproto
reply
jopsen
2 days ago
[-]
How do you upgrade to new versions?

How do you ship security patches?

How do you back up? And do you regularly test your backups?

I feel like upgrade instructions for some software can be extremely light, or require you to upgrade through each version, or worse.

reply
import
2 days ago
[-]
Not the OP.

I assume everything is running in Docker.

For containers: upgrading to new versions can be done unattended with Watchtower, or manually.

For the host: you can run package updates regularly or enable unattended upgrades.

Backups can easily be done with cron + rclone. It is not magic.

I personally run everything inside Docker. Fewer things to worry about.
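A sketch of the host side of that (the rclone remote name and paths are placeholders):

    # Debian/Ubuntu: let the host patch itself
    sudo apt-get install -y unattended-upgrades
    sudo dpkg-reconfigure -plow unattended-upgrades

    # crontab: nightly push of app data to whatever rclone remote you configured
    0 4 * * * rclone sync /apps remote:server-backups --log-file /var/log/rclone-backup.log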

reply
nicce
2 days ago
[-]
NixOS is great as a host. If updates break something, either the update doesn't go through or you just roll back to the previous version. And all configuration is in a single file.
reply
udev4096
2 days ago
[-]
I have been trying to move from Proxmox + Arch VMs to Incus + NixOS VMs. I really love the idea of functional programs as config; the upfront cost of getting familiar with it is quite high, but it seems to be worth it.
reply
tamimio
2 days ago
[-]
Self-hosting is great, but I am more interested in decentralized technology, whether as services or even radio. I think in the near future the world will experience major disruptions, technical, financial, or even political, in which centralized solutions are rendered useless, and the average person would rather have a local connection instead (local in both topology and physical medium, so you have your own wifi station serving the neighborhood, for example).

Self-hosting will be part of it, but there should be protocols that support it in either software or hardware. It would be great, for example, to host your own chat server instance and have the client side (phones, for example) offer a local mode in the app that connects to the local server. I know some applications have already implemented this, but it's not yet adopted and still too niche for the average person, let alone for services beyond chat.

Some have already caught on to the potential and started building ideas around it; bitchat is an example, but relying on Bluetooth won't really do it in my opinion. Instead, users would have their own 5G BTS managed and operated locally, with an option to connect to nearby 5G networks, or similar tech like WiMAX.
reply
npodbielski
2 days ago
[-]
You think civilisation will go down but you will still be able to chat with people via smartphone?
reply
tamimio
2 days ago
[-]
Decline usually happens gradually. I highly doubt a coronal mass ejection will fry up all the electronics on earth, or I hope not at least.
reply
abdullahkhalids
2 days ago
[-]
20 years ago grandpa could go to limewire.com, download setup.exe and click next->next->next to install a fully functional file hosting server+client. It was so easy that 1/3rd of world's computers had limewire installed in 2007 [1]. ONE FUCKING THIRD!

Today, to install even the simplest self-hosted software, one has to be effectively a professional software engineer: use SSH, use Docker, use Tailscale, understand TLS and generate certificates, perform maintenance updates, check backups, and a million other things that are automatable.

No idea why self-hosted software isn't `apt-get install` and forget. Just like Limewire. But that's the reason no one self-hosts.

[1] https://en.wikipedia.org/wiki/LimeWire

reply
Pooge
2 days ago
[-]
> No idea why self-hosted software isn't `apt-get install` and forget.

Some of it is. But as soon as you want your services to be accessible from the Internet, you need to have a domain name and HTTPS. To run Limewire or a BitTorrent client, you don't need a domain name yourself because you use a central server (in the case of BitTorrent, a tracker) to help you discover peers.

reply
abdullahkhalids
2 days ago
[-]
All the popular domain name services and certificate issuers have APIs. All grandpa has to do is go online and buy a domain - which is a very reasonable step that grandpa can do. Grandpa, after all buys stuff online. But after that the self-hosted app should be able to leverage the APIs to configure all the settings.
reply
MrZander
2 days ago
[-]
I don't think many people would consider limewire to be "self-hosting". That is just installing a program.
reply
rakoo
2 days ago
[-]
That's ok, because "self-hosting" is not a goal in itself, it's a means to an end
reply
thewebguyd
2 days ago
[-]
> No idea why self-hosted software isn't `apt-get install` and forget.

Ubuntu tried to fix this with snaps but the whole Linux community raged and pushed back at them. Yeah, snap has its faults but it was designed initially for server-side apps.

Snap install xyz-selfhosted-app was the initial goal. You can install nextcloud as a snap right now.

Instead the Linux community let perfect be the enemy of good and successfully convinced everyone else to dump and avoid snaps as a format at all costs.

reply
quesera
2 days ago
[-]
I don't recall any of that narrative being why people didn't like snaps.

One of the early sticking points was switching Firefox from deb to snap. That doesn't fit into your characterization.

reply
thewebguyd
2 days ago
[-]
Right, it got a bad rep on the desktop, which tarnished its reputation as a packaging format entirely.
reply
quesera
1 day ago
[-]
Isn't Ubuntu primarily a desktop distribution?

The numbers might favor server installs (no idea), but it seems like the decisions must be primarily desktop. (i.e. a server admin or business that installs a thousand Ubuntu instances is just a single decision).

Either way, if Canonical's goals for snaps included easing people into self-hosting their services, surely making the experience pleasant on desktop would be a priority?

I don't recall any positive changes brought by snaps. I was looking at it through a desktop lens at the time, but my general perspective is mostly server-side, so I might be biased in that direction.

I don't think the two perspectives are necessarily in conflict, but noted just for framing... :)

reply
aborsy
1 day ago
[-]
Nextcloud snap is really easy to install, and has been solid. Zero maintenance.
reply
floundy
2 days ago
[-]
>Today, to install even the simplest self-hosted software, one has to be effectively a professional software engineer.

I'm a regular engineer, non-software; my coding knowledge is very basic, and I could never be employed even as a junior dev unless I wanted to spend evenings grinding and learning.

Still I was able to set up a NAS and a personal server using Docker. I think a basic and broad intro to programming class like Harvard’s CS50 is all that would be required to learn enough to be able to figure out self-hosting.

reply
velocity3230
1 day ago
[-]
That's because Limewire is a client and not a server. If you wanted decent share ratios you needed to update your firewall to allow the correct inbound ports (or leave UPnP on (bad idea)).

A self-hosted server is an entirely different beast. You're right, it's not easy to setup and run -- but that's the world we live in. Malicious actors have ruined something that could have been relatively easy and automated to setup and run; even the most experienced of us wouldn't stand against professional penetration testers or nation states.

reply
gregsadetsky
2 days ago
[-]
Fully agreed that any rough edges/onboarding can be solved (with a lot of work, care, etc.).

I just have one main question: what would you like to self-host? Limewire was about file sharing, so the "value proposition" was clearly-ish defined. The "what does Limewire do" was clear.

Are you interested in hosting your own web site? Or email? Or google cal/drive/photos-equivalent? Some of it, all of it?

I'm genuinely curious, and also would love to know: is this a 80% of people want X (self-hosted file storage? web serving?), and then there's a very long tail of other services? Does everyone want a different thing? Or are needs power-law distributed? Cheers

reply
dizhn
2 days ago
[-]
Self-hosting involves 3 steps in my life.

1) Find the docker compose file. 2) Change the ports line to bind to a specific address, 10.0.10.1:9000, instead of the default 0.0.0.0:9000. 3) Connect via wireguard.

(Answers the "security" point a sister comment brought up too)
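For step 2, that means something like this in the compose file, so the service only listens on the WireGuard interface (address and port follow the example above):

    services:
      app:
        image: someorg/someapp:latest
        ports:
          - "10.0.10.1:9000:9000"   # reachable only over the wireguard network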

reply
udev4096
2 days ago
[-]
It's really frustrating how every single compose file publishes on 0.0.0.0 instead of 127.0.0.1
reply
dvdgsng
2 days ago
[-]
Not to mention the 100 steps you have to do to get there, of course...
reply
dizhn
2 days ago
[-]
Yes of course. The first time you try to get wireguard working you will not get it to ping the other side right away. It is a process. The next few times it'll be much quicker. Then it will keep running forever. Or maybe mine isn't working but I never noticed.

I had this wireguard setup in place long before I even ran my first docker container. It's all building on top of things already there.

reply
goodpoint
2 days ago
[-]
and goodbye security...
reply
dizhn
2 days ago
[-]
Yes. It's implicit. Goodbye security indeed.
reply
seec
1 day ago
[-]
I agree, software development has become insane. It's basically people creating problems to justify their existence, very much like the bureaucrats.

The reality is that the vast majority of people don't really need "self-hosting". What they would need is decent software that they can run on their own computer and let others access the data from time to time, mostly locally, because global availability is rarely worth the hassle.

But since there is not much money in that, and devs are enamored with insane layers of complexity and obtuse use cases that are irrelevant for the vast majority, we get server software that relies on web views and has a large disconnect with the data on the local machine. You just add another layer of stuff to manage on yet another computer, when most people already have one sitting idle the vast majority of the time. I think the laptop craze is also partly to blame.

Even good local Mac apps have dried up and now it's all cloud-based subscription software, and you are supposed to be thankful you can install open-source stuff with a docker image and god knows how many configuration steps and gotchas.

Many of those programs would be desirable and worth a bit of money if only you could have a simple installation and management process, but instead you have to pay with your time, which is not worth it for most. So, at this point, people just say fuck it and pay someone else, usually one of the big tech providers, to take care of the problem, and that's that.

As much as I like web technology for interactive documents, the software use case is still largely a pain in the ass.

reply
udev4096
2 days ago
[-]
Those things would hardly take an hour to set up; it's the cost of freedom and control. Don't want to put in any effort? Might as well be a cloud slave and complain about the lack of digital sovereignty while using gdrive like a fucking normie.
reply
lifestyleguru
2 days ago
[-]
Because of American copyright predators and jurisdictions friendly to their lobbying like e.g. Germany. If you're planning to get involved in this kind of software, better think beforehand about practices to ensure your anonymity.
reply
skhameneh
2 days ago
[-]
XAMPP seems to still be alive and maintained.

https://www.apachefriends.org/

I haven't used it in over a decade, but I'm glad to see it's still kicking.

reply
neoromantique
2 days ago
[-]
>No idea why self-hosted software isn't `apt-get install` and forget. Just like Limewire. But that's the reason no one self-hosts.

Security.

As an avid self-hoster with a rack next to my desk, I shudder as I read your comment, unfortunately.

reply
abdullahkhalids
2 days ago
[-]
It's in fact the opposite. If the user has to manually write/fix endless configuration files, they are likely to make a mistake and have gaps in their security. And they will not know because their settings are distinct from everyone else.

If they `apt-get install` on a standard debian computer, and the application's defaults are already configured for high-security, and those exact settings have been tested by everyone else with the same software, you have a much higher chance of being secure. And if a gap is found, update is pushed by the authors and downloaded by everyone in their automatic nightly update.

reply
arich
2 days ago
[-]
The core point is valid. As someone who self-hosts, it's become so complicated to get the most basic functionality set up that someone with little to no knowledge would really struggle, whereas years ago it was much simpler. Functionally we can now do much more, but practically, we've regressed.
reply
jimmaswell
2 days ago
[-]
What's so complicated? I'm currently on DigitalOcean but I've self-hosted before. My site is largely a basic LAMP setup with LetsEncrypt and a cron job to install security updates. Self-hosting that on one of my machines would only be a matter of buying a static IP and port forwarding.
reply
authorfly
2 days ago
[-]
LAMP with dynamic webpages (I assume that's your approach) works just like it ever did (besides SSL).

But are you really keen to build a dynamic PHP application where each page imports some database functions/credentials and uses them to render HTML?

Can you keep the behavior of a fluent user flow (e.g. the menu not re-rendering) that way? Only with minimal design.

Back in 2006, when most webpages had an iframe for the main content, an iframe for the menu, and maybe an iframe for some other element (e.g. a chat window), it was fine to refresh one of those or have a link in one load another dynamic page. Today that is not seen as very attractive, and to common people (consumers and businesses) unattractive means low trust, which means less income. Just my experience, unfortunately. I also loved that era in hindsight, even though the bugs were frustrating, let alone the JS binding and undefined errors if you added that...

reply
perilunar
1 day ago
[-]
You can make modern single-page web apps with a LAMP back-end if you want. PHP is perfectly capable of serving database query results as JSON, and Apache will happily serve your (now static) HTML and JS framework-based page.
reply
metalforever
1 day ago
[-]
I was doing web development in 2006 and that's not how it was. Websites were not all built from iframes and they were not all insecure. Setting up a dynamic PHP website with Apache does not have to be insecure, and it didn't have to be back then either.
reply
dlisboa
2 days ago
[-]
Putting something on the Internet by yourself has always been outside the reach of a non-tech person. Years ago regular people weren't deploying globally available complex software from their desktops either.
reply
neoromantique
2 days ago
[-]
The point, to an extent, is that there should be some friction.

If you don't care enough to figure it out, then you don't care enough to make it secure, and that leads to a very, very bad time in a modern, largely internet-centric world.

reply
goodpoint
2 days ago
[-]
100% wrong
reply
AlfeG
2 days ago
[-]
Not as click click click, but still awesome - copyparty
reply
j23n
2 days ago
[-]
I've stepped back from self-hosting after realizing that 90% of my use case was to keep calendar/contacts/files/photos/passwords in sync between my laptop and phone.

I'm now experimenting with a files-based approach, using syncthing for the p2p syncing, and it works really well.

No VPS or home server to set up and maintain, no security worries, no database migrations, no extra backups, no tinkering with Caddy configs.

reply
azemetre
5 hours ago
[-]
Could you explain how you handle P2P between mobile and web? That’s the one hurdle I can’t figure out.
reply
seec
1 day ago
[-]
Yes that's what I think as well.

The problem is that we have all been tricked into cloud syncing because big tech couldn't figure out proper local sync, and they actually have an incentive not to, because they would really like you to pay for their storage subscriptions, on which they have great margins.

Yet for the vast majority of people what would be needed is just very simple syncing between their phone and personal computer. It should work over a cable for speed but also wirelessly for convenience, and that's it.

All the stuff they add on top is mostly overengineered, sometimes doesn't even work, and creates interdependence/lock-in.

reply
TuxSH
2 days ago
[-]
For self-use the author has a point, but for public-facing sites not so much, because:

- infra work is thankless (see below)

- outages will last long because you're unlikely to have failovers (for disk failures, etc.), plus the time to react to these (no point in being paged for hobby work)

- more importantly, malicious LLM scrapers will put your infra under stress, and

- if you host large executables you'll likely want to do things like banning Microsoft's IP addresses because of irresponsible GH Actions users [1] [2] [3]

In the end it is just a lot less stress to pay someone else to deal with infra; for example, when hosting static sites on GH Pages or CF Pages, and when using CF caching solutions.

[1] https://www.theregister.com/2023/06/28/microsofts_github_gmp...

[2] https://news.ycombinator.com/item?id=36380325

[3] https://github.com/actions/runner-images/issues/7901

reply
rob_c
2 days ago
[-]
Only worth paying if you actually need it though.

And if it's a hobby, no you don't; that should be part of it. The fun is getting knocked out from orbit, figuring out how and why, and how to avoid it. Stand back up again and you've learned from that mistake :p

reply
trenchpilgrim
2 days ago
[-]
We used to host production websites this way as recently as 10-15 years ago just fine. These days you can do it with as few as two machines and a good router or two. The main risk is power outages due to non-redundant power outside of a colo (solvable with a battery backup) and non-redundant public internet links (potentially solvable with a cellular failover + a lot of caching at the CDN, depending on the application).

You generally still use a CDN and WAF to filter incoming traffic when you self host (even without abusive scrapers you should probably do this for client latency). You can also serve large files from a cloud storage bucket for external users where it makes sense.

reply
shadowgovt
2 days ago
[-]
This era has been a long time coming.

We've known for decades now that the philosophy underpinning Free Software ("it's my computer and I should be able to use it as I wish") breaks down when it's no longer my computer.

Attempts were made to come up with a similar philosophy for Cloud infrastructure, but those attempts are largely struggling; they run into logical contradictions or deep complexity that the Four Essential Freedoms don't have. Issues like

1. Since we don't own the machines, we don't actually know what is needed to maintain system health. We are just guessing. Every new piece of information collected about our data is an opportunity for an argument.

2. Even if we can make arguments about owning our data, the arguments about owning metadata on that data, or data on the machines processing our data, are much murkier... Yet that metadata can often be reversed to make guesses about our data, because manipulating our data is what creates the metadata.

3. With no physical control of the machines processing the data, we are de facto in a trust relationship with (usually) strangers, a relationship that generally doesn't exist when we own the hardware. Who cares what the contract says when every engineer at the hosting company has either physical access to the machine or a social relationship with someone who does, a relationship we lack? When your entire email account is out in the open or your PII has been compromised, because of either bad security practices or an employee deciding to do whatever they want on their last day, are you really confident that contract will make you whole?

If there can be, practically, no similar philosophical grounding to the Four Freedoms, the conclusion is that cloud hosting is incompatible with those goals and we have to re-own the hardware to maintain the freedoms, if the freedoms matter.

reply
underlines
1 day ago
[-]
Even though I work as an IT Professional, I was almost always the only person not self hosting anything at home and not having a NAS.

I jumped in and bought a Ugreen NAS with 4 bays. The first thing I did was install TrueNAS CE on it and then use ChatGPT with highly customized prompts and the right context (my current docker-compose files).

Without much previous knowledge of docker, networking etc. except what I remembered from my IT vocational education from 15 years ago, I now have:

- Dockerized Apps

- App-Stacks in their own App-Network

- Apps that expose their web UI not via ports, but via Traefik + Docker labels (see the sketch after this list)

- Only Traefik's port 443 reachable from the WAN, plus optional port forwarding for non-HTTP services

- Optional Cloudflare Tunnel

- Automatic Traefik TLS termination for LAN and WAN for my domain

- Split-DNS to get hostnames routed properly on LAN and WAN

- CrowdSec for all exposed containers

- Optional MFA via Cloudflare for exposed services

- Local DHCP/DNS via Technitium

- Automatic ZFS snapshots and remote backups

- Separation between ephemeral App data (DBs, Logs) on SSD and large files on HDD
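
For the Traefik + Docker labels item above, a minimal sketch of what one such service can look like (the network name, router name, domain and certificate resolver are placeholders; a real setup also defines the Traefik container, its entrypoints and the resolver):

    # Attach the app to the Docker network Traefik watches and expose it only via labels,
    # with no published ports on the host
    docker run -d --name whoami \
      --network proxy \
      --label 'traefik.enable=true' \
      --label 'traefik.http.routers.whoami.rule=Host(`whoami.example.com`)' \
      --label 'traefik.http.routers.whoami.entrypoints=websecure' \
      --label 'traefik.http.routers.whoami.tls.certresolver=letsencrypt' \
      traefik/whoami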

reply
jeppester
2 days ago
[-]
I built myself a Fedora CoreOS based Nextcloud instance with encrypted backup to S3: https://github.com/jeppester/coreos-nextcloud

In short you fill in the env-files, then run butane and ignition. (I should improve the README some time)

I love how it's all configuration. If it breaks I can set up another instance with the same secrets in minutes. It will then grab the latest backup and continue like nothing happened.

reply
FinnKuhn
2 days ago
[-]
While this is only one data point, looking at the stats for r/selfhosted, self-hosting seems to have exploded in popularity since last year. The subreddit now averages 2.2 million daily unique visitors, with 175 million total views over the last 12 months, which is up 132 million visitors compared to the 12 months before.
reply
fourseventy
2 days ago
[-]
I'm currently self hosting my notes/journal/knowledge base with Trilium, photos with Immich, and files with File Browser, very happy with that setup so far. I just like the feeling of knowing I own my important data and that it won't go away because some third party company goes out of business or sunsets an app.
reply
renegat0x0
2 days ago
[-]
I self host my Search Engine / RSS reader. I track every page I visit from nearly all devices.

Since my basic search engine is self hosted nobody actually sees what I visit, and what I watch.

I can tell because the social media algorithms are now totally lost about what I would like to watch next.

Also, I am in control of the UI and of any changes, which is both a good and a bad thing.

reply
BinaryIgor
2 days ago
[-]
"Many many years ago I was running an Android phone with Google services like Google Maps. One day I was looking for a feature in my Google account and saw that GMaps recorded my location history for years with detailed geocoordinates about every trip and every visit.

I was fascinated but also scared about that since I've never actually enabled it myself. I do like the fact that I could look up my location for every point in time but I want to be in control about that and know that only I have access to that data."

This made me think about whether there are any services (or ideas thereof) that would provide this kind of functionality but stored encrypted, in a similar way as Proton does for email; in theory, you can use this pattern (data stored encrypted on the server, decrypted only by the client) to rebuild many useful services while retaining full sovereignty over your data.
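
A very rough sketch of that pattern with plain CLI tools (gpg and scp are just stand-ins, and the file names and remote host are placeholders; a real service would do this transparently inside the client):

    # Encrypt the location history locally; only the passphrase holder can ever read it
    gpg --symmetric --cipher-algo AES256 --output locations.json.gpg locations.json

    # Ship only the ciphertext to the server or object storage
    scp locations.json.gpg user@backup.example.com:data/

    # Later, fetch it back and decrypt on a trusted client
    gpg --decrypt --output locations.json locations.json.gpg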

reply
ikawe
2 days ago
[-]
One tricky thing about maps, as they relate to privacy, is that the earth is large.

Compare that to encrypted email: if I’m sending you an encrypted message, the total data involved is minimal. To a first approximation, it’s just the message contents.

But if I want “Google Maps but private,” I first need access to an entire globe’s worth of data, on the order of terabytes. That’s a lot of storage for your (usually mobile) client, and a lot of bandwidth for whoever delivers it. And that data needs to be refreshed over time.

Typical mapping applications (like Google Maps) solve this with a combination of network services that answer your questions remotely (“Tell me what you’re searching for, and I’ll tell you where it is.”) and by tiling data so your client can request exactly what it needs, and no more, which is also a tell.

The privacy focused options I see are:

1. Pre-download all the map data, like OrganicMaps [1], to perform your calculations on the device. From a privacy perspective, you reveal only a coarse-grained notion of the area you’re interested in. As a "bonus", you get an offline maps app. You need to know a priori what areas you’ll need. For directions, that's usually fine, because I’m usually looking at local places, but sometimes I want to explore a random spot around the globe. Real-time transit and traffic-adaptive routing remain unaddressed.

2. Self-host your own mapping stack, as with Headway (I work on Headway). For the reasons above, it’s harder than hosting your own wiki, but I think it’s doable. It doesn’t currently support storing personal data (where you’ve been, favorite places, etc.), but adding that in a privacy conscious way isn’t unfathomable.

[1] https://organicmaps.app (though there are others)

[2] https://github.com/headwaymaps/headway (see a hosted demo at https://maps.earth)

reply
forever_frey
1 day ago
[-]
The entire planet's worth of reverse-geocoding data is ~120 GB. The map tiles file for the whole planet is also ~120 GB, and both are precompiled, so you don't need hundreds of GB of RAM to run your local planet. It's easier than you probably think nowadays. Not mobile-sized, but local-server-sized.
reply
ikawe
1 day ago
[-]
I'd love it if it were easier than I think, because I spend a lot of time thinking about it! I host maps.earth, which is a planet sized deployment of Headway mapping stack (which I also maintain).

To first order, you're right about the storage size of a vector tileset and a geocoding dataset based on OpenStreetMap. But Google Maps is a lot more than that!

Headway uses Valhalla for most routing. A planet-wide Valhalla graph is about 100 GB of storage. It doesn't produce reasonable transit directions. Transit is an even tougher cookie.

OpenTripPlanner gives good transit routing, but it doesn't scale to planet-wide coverage. We've settled on a cluster of OTP nodes for select metro areas - each one being on the order of 5-10GB of RAM.

https://about.maps.earth/posts/2023/03/adding-transit-direct...

So, I'd say we have some of the pieces of a general-purpose mapping tool that could replace Google Maps usage, and which you could host yourself.

But we don't have satellite imagery, real time traffic data, global transit coverage, rich POI data (like accurate opening hours, photographs, reviews).

Do all people want all these features? Probably not, but a lot of people seem to want at least some of them, and it's not obvious to me that those gaps will be closed quickly.

reply
zertrin
1 day ago
[-]
What you describe can be done with self-hosted dawarich instance + the owntracks app. It records location history and lets you visualize it in a web interface.

https://github.com/Freika/dawarich

reply
floundy
2 days ago
[-]
What you’re describing is essentially how Apple Maps works.

https://www.apple.com/legal/privacy/data/en/apple-maps/

reply
Steltek
2 days ago
[-]
What I think is missed in self-hosting is WHAT you're self-hosting. In priority order, you should self-host:

1. Your Data. It is the most irreplaceable digital asset. No one should see their photos, their email, their whatever, go poof because of external forces. Ensure everything on your devices is backed up to a NAS. Set a reminder for quarterly offline backups. Backups are an achievable goal for everyone, not just the tech elite.

2. Your Identity. By which I mean a domain name. Keep the domain pseudonymous. Use a trustworthy, respectable registrar. Maybe give some thought for geopolitics these days. Pay for email hosting and point your domain at them.

3. Lastly, your Apps. This is much harder work and only reasonably achievable by tech-savvy people.

reply
zwilliamson
21 hours ago
[-]
I think one major improvement in technology that allows self-hosting in the year 2025 is mesh VPNs like Tailscale.

Sure, you could run your own firewall and whatnot, but a mesh VPN, with its simple setup, makes it a whole lot easier to access your home services.
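
A sketch of how little setup that typically is (the install script URL is the one Tailscale documents; the subnet is a placeholder):

    # Install and bring up Tailscale on the home server
    curl -fsSL https://tailscale.com/install.sh | sh
    sudo tailscale up

    # Optionally advertise the whole LAN to your other devices
    # (route still needs to be approved in the admin console)
    sudo tailscale up --advertise-routes=192.168.1.0/24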

reply
thayne
2 days ago
[-]
I wish articles like this would include recommendations on how to choose hardware to run your self-hosted services on.
reply
therealfiona
2 days ago
[-]
Whatever you have lying around is a great starting point.

It all comes down to what you want to spend vs what you want to host and how you want to host it.

You could build a raspberry pi docker swarm cluster and get very far. Heck, a single Pi 5 with 4gb of memory will get you on your way. Or you could use an old computer and get just as far. Or you could use a full blown rack mount server with a real IPMI. Or you could use a VPS and accomplish the same thing in the cloud.

reply
troupo
2 days ago
[-]
> You could build a raspberry pi docker swarm cluster and get very far. Heck, a single Pi 5 with 4gb of memory will get you on your way.

No, you couldn't, and no, you wouldn't.

To build a swarm you need a lot of fiddling and tooling. Where are you keeping them? How are they all connected? What's the clustering software? How is this any better than an old PC with a few SSDs?

Raspberry Pi with any amount of RAM is an exercise in frustration: it's abysmally slow for any kind of work or experimentation.

Really, the only useful advice is to use an old PC or use a VPS or a dedicated server somewhere.

reply
thayne
2 days ago
[-]
And what if I don't have anything lying around?
reply
chasd00
2 days ago
[-]
A mid-range gaming build without the GPU is capable of running a full saas stack for a small company let alone an individual.

https://pcmasterrace.org/builds

reply
FinnKuhn
2 days ago
[-]
Depends on what you want to do with it. To start, any old free PC that you can find online is going to work for experimenting.
reply
nicce
2 days ago
[-]
If you plan to run 24/7, buy some proper fans and undervolt or set power limits on the CPU, and you will save on both noise and electricity.
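
A sketch of what the power-limit part can look like on a typical Intel Linux box (the governor choice and the 35 W cap are arbitrary examples; package names and RAPL paths vary by distro and CPU):

    # Prefer the power-saving CPU frequency governor
    sudo apt-get install --yes linux-cpupower   # package name differs per distro
    sudo cpupower frequency-set -g powersave

    # Cap the CPU package power via Intel RAPL (value in microwatts, here ~35 W)
    echo 35000000 | sudo tee /sys/class/powercap/intel-rapl:0/constraint_0_power_limit_uw
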
reply
nickthegreek
6 hours ago
[-]
n150
reply
troupo
2 days ago
[-]
And things like "this is a rack you can use, it will not cost you a kidney, and it will not blow your eardrums out with noise"
reply
npodbielski
2 days ago
[-]
I would not use a NUC like this guy. I had one and it was slow and had limited capacity.

Then I had my old PC and it was very good, but I wanted more NVMe disks and the motherboard supported only 7.

Now I am migrating to a Threadripper, which is a bit overkill, but I will have the ability to run one or two GPUs along with 23 NVMe disks, for example.

reply
LTL_FTC
2 days ago
[-]
I also have a Threadripper Pro with tons of PCIe lanes. I just wish there were an easier way to use newer datacenter flash and that it weren't so expensive. I'm hoping those servers that hold 20-something U.2/U.3 drives start getting decommissioned soon, as I hope my current batch of HDDs will be my last. Curious to know how you're using all those NVMe drives?
reply
npodbielski
1 day ago
[-]
Asus and Acer motherboards support bifurcation on PCIe slots. So, for example, you can enable this in the BIOS and use an Asus Hyper expansion card to put 4 NVMe disks into a PCIe slot: https://www.asus.com/support/faq/1037507/

There are other cards like that, e.g. https://global.icydock.com/product_323.html This one has better support for physically smaller disks and makes swapping a disk much easier, but costs about 4 times more.

I think I could put even more drives in my new case, e.g. using a PCIe-to-U.2 card and then an 8-drive bay. But that would probably cost me about 3 times more just for the bay with connectors, and I do not need that much space.

https://global.icydock.com/product_363.html

If you like U.2 drives, Icy Dock provides solutions for them too. Or if you want to go cheaper, there are other cards with SlimSAS or MCIO: https://www.microsatacables.com/4-port-pcie-3-0-x16-to-u-2-s...

But U.2 disks are at least 2 times more costly per GB; something like 40 TB costs $10k. That is too much IMO.

reply
NoiseBert69
2 days ago
[-]
I'm your opposite :-)

Intel N100 with 32 GB RAM and a single big SSD here (but with daily backups).

Eats roughly 10 Watts and does the job.

reply
npodbielski
2 days ago
[-]
If this does the job for you, sure. For me they were very pricey at the time compared to my old Intel Core i3 PC that I already had lying around. And power cost does not really matter in my case.
reply
import
2 days ago
[-]
I have two NUCs (Ryzen 7 and Intel i5); they're rock solid.
reply
npodbielski
2 days ago
[-]
Yes, if this works, sure, why not. A few years back a decent NUC cost at least $1k. And it is still quite small, so you cannot slam 8 SSDs in there.

I used my old PC and it worked very nicely with 4 SATA SSDs in RAID 10.

And as I already said in another comment, in my case power does not matter much. Neither does space.

reply
schmookeeg
2 days ago
[-]
"More RAM than you think you'll need" -- particularly if you virtualize. :)
reply
npodbielski
2 days ago
[-]
Why? I was running about 15 containers on hardware with 32 GB of RAM. You could probably safely use disk swap as additional memory for less frequently used applications, though I did not check.
reply
schmookeeg
2 days ago
[-]
For my case, and my workload, the answer has always been "RAM is cheap, and swapping sucks" -- but there are folks using Rpi as a NAS platform so really... my anecdote actually sucks upon reflection and I'd retract it if I could.

For every clown like me with massive RAM in their colo'd box, there is someone doing better and more amazing things with an ESP32 and a few molecules of RAM :D

reply
simonw
2 days ago
[-]
The existence of Tailscale has made me a lot less scared of self-hosting than I used to be, since it provides a method of securing access that's both robust and easy to set up.

... but I still worry about backups. Having encrypted off-site backups is essential for this to work, and they need to be frequently tested as well.

There are good tools for that too (I've had good experiences with restic to Backblaze B2) but assembling them is still a fair amount of overhead, and making sure they keep working needs discipline that I may want to reserve for other problems!
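
For reference, the restic side of that is pleasantly small. A sketch assuming restic's b2 backend (bucket name, paths and retention policy are placeholders; the credentials come from the Backblaze console):

    # Credentials for the B2 backend
    export B2_ACCOUNT_ID=<key-id>
    export B2_ACCOUNT_KEY=<application-key>

    # One-time: create the encrypted repository
    restic -r b2:my-backups:server1 init

    # Nightly: back up, then prune old snapshots
    restic -r b2:my-backups:server1 backup /srv /etc
    restic -r b2:my-backups:server1 forget --keep-daily 7 --keep-weekly 5 --prune

    # The part people skip: actually test a restore now and then
    restic -r b2:my-backups:server1 restore latest --target /tmp/restore-test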

reply
cls59
2 days ago
[-]
The control plane of Tailscale can even be self-hosted via the Headscale project:

https://github.com/juanfont/headscale

As for backups, I like both https://github.com/restic/restic and https://github.com/kopia/kopia/. Encryption is done client-side, so the only thing the offsite host receives is encrypted blobs.

reply
fundatus
2 days ago
[-]
For anyone looking for a convenient way to set restic up: Backrest[1] provides a docker container and a web interface to configure, monitor and restore your restic backups.

[1] https://github.com/garethgeorge/backrest

reply
Sanzig
2 days ago
[-]
I'm currently using Restic + Backblaze, but I'm building a new NAS with OpenZFS. My plan for it is to use ZFS send to back up whole datasets automatically. I was thinking of giving zfsbackup-go [1] a try, since it allows using ZFS send with any S3 object storage provider. No idea how well it'll work, but I'll give it a shot.

[1] https://github.com/someone1/zfsbackup-go
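
A rough sketch of the ZFS side, independent of which tool ends up shipping the stream to S3 (pool, dataset and snapshot names are placeholders):

    # Take a snapshot of the dataset
    zfs snapshot tank/photos@2024-06-01

    # Full send of that snapshot to a compressed file (or pipe it to a backup tool)
    zfs send tank/photos@2024-06-01 | gzip > /backup/photos-full.zfs.gz

    # Incremental send between two snapshots
    zfs send -i tank/photos@2024-05-01 tank/photos@2024-06-01 | gzip > /backup/photos-inc.zfs.gz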

reply
jasode
2 days ago
[-]
>... but I still worry about backups.

For me, it's not just off-site backups, it's also the operational risks if I'm not around which I wrote about previously: https://news.ycombinator.com/item?id=39526863

In addition to changing my mind about self-hosting email, my most recent adventure was self-hosting Bitwarden/Vaultwarden for password management. I got everything to work (SSL certificates, restartable container scripts to survive server reboots, etc.) ... but I didn't like the resulting complexity. There was also random unreliability, because a new iOS client would break Vaultwarden and you'd have to go to GitHub and download the latest bugfix. There's no way for my friend to manage that setup. She didn't want to pay for a 1Password subscription so we switched to KeePass.

I'm still personally ok with self-hosting some low-stakes software like a media server where outages don't really matter. But I'm now more risk-averse with self-hosting critical email and passwords.

EDIT to reply: >Bitwarden client works fine if server goes down, you just can't edit data

I wasn't talking about the scenario of a self-hosted Vaultwarden being temporarily down. (Although I also didn't like that the smartphone clients will only work for 30-days offline[1] which was another decision factor to not stay on it.)

Instead, the issue is Bitwarden will make some changes to both their iOS client and their own "official" Bitwarden servers which is incompatible with Vaultwarden. This happens because they have no reason to test it on an "unofficial" implementation such as Vaultwarden. That's when you go to the Vaultwarden Github "Issues" tab and look for a new git commit with whatever new Rust code makes it work with the latest iOS client again. It doesn't happen very frequently, but it happened often enough that it makes it only usable for a techie (like me) to babysit. I can't inflict that type of random broken setup on the rest of my family. Vaultwarden is not set-and-forget. (I'm also not complaining about Bitwarden or Vaultwarden and those projects are fine. I'm just being realistic about how the self-hosted setup can't work without my IT support.)

[1] Offline access in Bitwarden client only works for 30 days. : https://bitwarden.com/blog/configuring-bitwarden-clients-for...

reply
npodbielski
2 days ago
[-]
Bitwarden client works fine if server goes down, you just can't edit data. I have been self-hosting Bitwarden for several years and I'm not complaining.
reply
smiley1437
2 days ago
[-]
I value my time as well; that's why I have 2 Synology devices, one at my home and one at my sibling's home.

Both on Tailscale and we use Hyperbackup between them.

It was very easy to set up and provides offsite backups for both of us.

Synology very recently (a day ago) decided to allow 3rd party drives again with DSM 7.3.

reply
npodbielski
2 days ago
[-]
You can look at https://kopia.io/ Looks quite OK, with one downside: it manages only one backup target, so you can't e.g. back up to a local HDD and to the cloud. You need two instances.
reply
romanzipp
2 days ago
[-]
That's right. I also haven't solved the backup problem perfectly, but I'd love to dive deeper into it in the future. Well-tested is probably the most important aspect of this.
reply
move-on-by
2 days ago
[-]
I do as much self hosting as I can, but at the end of the day it requires buy-in by all users to be effective. It can create a lot of friction otherwise. I’ve accepted it’s just not going to happen.

The single most important item (IMO) is photos, which I frankly do not trust Apple's syncing logic not to screw up at some point. I've taken the approach that my self-hosting _is_ the backup. They lock me out or just wipe everything? No problem, I have it all backed up. If the house burns down, everything is still operational.

reply
Havoc
2 days ago
[-]
Also just life stability. If I figure out a FOSS thing once, I can functionally use it for life as personal infra.

With SaaS, they could change the price tomorrow, or change the terms, or do any number of things that could be an issue. It's a severely asymmetrical dynamic.

Don’t think I’ll ever do email though

reply
Ingon
2 days ago
[-]
I also started self-hosting more and more. But instead of making services available on the internet/intranet (e.g. VPS reverse proxy/tailscale), I'm binding them to localhost and using connet [1] (cloud or self-hosted [2]) to cast them locally onto my PC/phone when I need them. These include my NAS and the Syncthing instance running on my NAS, and I'm looking to add more.

[1] https://connet.dev

[2] https://github.com/connet-dev/connet

reply
prism56
2 days ago
[-]
I selfhost only things that aren't critical, I'm not hosting passwords or photos. I'd rather pay for the redundancy offered by big datacentres. I do however choose platforms that are privacy first, ente.io/Proton for example.

I do however selfhost FreshRSS, Audiobooks, Readeck, Linkding, YoutubetoRSS... Useful services that individually hosted platforms want £5 or so per month to use. The redundancy is significantly less important to me with these services compared to losing an extra £30+ a month.

reply
trenchpilgrim
2 days ago
[-]
I self host photos, but my backups are cloud hosted. A cold rarely accessed backup is way cheaper and more fungible across providers than an entire photos app.
reply
prism56
2 days ago
[-]
Yeah that's fair enough. Valid approach, I went away from this due to getting family on my ente plan. I didn't want to be responsible/trusted with their images. This way the images are pretty well protected in ente's infrastructure and we can share in the same platform.
reply
trenchpilgrim
2 days ago
[-]
True, I only host my own photos, I don't want to possess anyone else's selfies or family photos for sure
reply
fridder
2 days ago
[-]
I do wonder if there is a market for a preinstalled self hosting computer or setup where the service would be automated backups, e2e encrypted of course, and perhaps high availability
reply
jopsen
2 days ago
[-]
Security updates.

And fixing things when they eventually break.

Honestly, there is a reason I still use a dreamhost shared plan. It's dirt cheap, been running forever, and I've never had to do the boring stuff.

And if they break my app, I can ask them to fix it.

If you deploy your app on a PaaS you still have to update everything inside the container.

Old school php hosting on a shared server does have some upsides - namely affordable support. (Sure, if I'm an extreme edge case support will not do much for me).

The same kind of thing for "self-hosting" would be cool.

reply
EvanAnderson
2 days ago
[-]
Synology and likely other NAS vendors are basically doing this. A buddy of mine isn't any kind of Linux sysadmin but he's running his whole home media management setup as Docker containers on a Synology NAS. I assume they have off-site backup services available, too.
reply
aborsy
2 days ago
[-]
Self hosting is much more accessible today. The security issue has not been solved yet, though. How do you make your services available to other people?

People won't install VPNs. They are usually okay with authenticating to a web server, so you can put authentication, with something like Authentik, in front of your reverse proxy. But can you configure this front-end security correctly and patch it, and are you sure it doesn't have easy zero-days?

reply
elevation
2 days ago
[-]
Your employees/contractors will install your VPN if it's a condition of employment. If you don't need to serve the world, this step dramatically limits your attack surface, though you should still use Authentik and TLS.
reply
esseph
1 day ago
[-]
front it with a cloudflare tunnel

waits for the pitchforks and torches

reply
aborsy
1 day ago
[-]
CF terminates TLS and scans the traffic. It makes sense if you host your services on a VPS.

If I run my services at home, I don’t want to provide Cloudflare with access to my data.

reply
esseph
1 day ago
[-]
It also makes sense if you run public services at home
reply
aborsy
1 day ago
[-]
Public in the sense that the actual content is public (like a blog): sure, anyone can access it, and so does the reverse proxy. Since it's public, I would still take the trouble entirely out to a provider.

Public in the sense that the front page is public but the client still needs to authenticate to the service at home: in that case it does not make sense (the user authenticates to the reverse proxy, which authenticates to the service), for the reason I mentioned.

reply
esseph
1 day ago
[-]
> I Would still take the trouble entirely out to a provider.

Frankly, because you don't trust your own abilities in that area, or you're simply not interested in taking responsibility for that piece - and that's totally fine.

> Public in the sense that the front page is public, and the client still need to authenticate to the service at home

Maybe your authentication doesn't live at home, or on the home network. It could be on a VPS or a cloud RADIUS/LDAP/etc. auth service.

Some people have been writing code for 30+ years. I've been running internet facing systems for 30+. Different backgrounds, different levels of comfort and enjoyment out of different things!

reply
6ak74rfy
2 days ago
[-]
I too care a lot about privacy and data sovereignty, but those aren't sufficient arguments for self-hosting. For instance, my wife cares about both too, and so she uses most of the services that I host at my home, but she isn't going to start self-hosting herself anytime soon.

I think the missing piece is you need to enjoy the process itself - without that, it's not really tenable (at least today).

reply
rubatuga
1 day ago
[-]
I can testify Radicale works great on iOS devices:

https://www.naut.ca/blog/2019/11/16/self-hosting-series-part...

reply
rwendt1337
2 days ago
[-]
> Radicale (Python, basic web ui, only single user, does not work with apple devices from my experience)

it does work with apple devices from my experience

reply
sutoor
2 days ago
[-]
I use Radicale with iOS and iPadOS devices and multiple users.
reply
throwawaylaptop
1 day ago
[-]
I operate an entire SaaS with 34 paying SMB companies on Namecheap shared hosting. PHP/jQuery. While Namecheap's time to first byte is a little longer than some, my SaaS is still faster than 90% of the CRMs I've ever used, because that was my main goal when writing it.
reply
rob_c
2 days ago
[-]
Cost, experience and for the paranoid (right or not) control.

The biggest downside is initial cost in time, effort and cash compared to typing in a credit card.

OK, other downsides include a lack of power redundancy and decent networking, which are more common in data centers.

The other side of this is: why buy 8x A100s for that project, only to stick them on eBay to recoup the cost, when you can rent them?

reply
kdawkins
2 days ago
[-]
Agreed - effort/cost/time is what always kills my self-host projects out of the gate. I start working down the recursive thought experiment of everything I "need" to get an email server working (for example) and bail when I see the list.

Convincing the family to buy in is hard too, because (as you put it) I can't promise the same level of redundancy/service guarantees.

reply
1vuio0pswjnm7
1 day ago
[-]
A VPS provider that allows the customer to upload and boot their own custom kernels

These kernels could be for _any_ operating system that runs on the hardware, e.g., NetBSD

A. This already exists

B. This does not exist

reply
jcon321
2 days ago
[-]
We self host everything at our company as we're a data center - all the tools required for a modern development stack + modern environments.

It's great for learning and control - it's not so great for anxiety.

reply
avmich
1 day ago
[-]
> Big Tech and governments (like with chat control in the EU) want to shine light in every part of your personal life.

What would be a way to shine light in every part of their private life?

reply
thire
2 days ago
[-]
I have been so happy moving out of Google Photos and storing everything on my NAS + cloud backup. I don't have to worry about Google re-encoding my videos and not letting me get my originals back.
reply
octo888
2 days ago
[-]
[warning: old man rants at clouds]

Maybe I'm getting old, but I think at this stage I want the third, often-unspoken route: no data.

Let go of things

No need for infrastructure when you have nothing to host. And data that doesn't exist is the most secure in the world.

Is my home a home – or the premises of a small business? Racks, servers, cables, smart devices, the fan noise, etc!

It does feel like we are operating our lives more and more like a small business these days: managing data, managing logins, "B2B" with hundreds of companies (EULAs, contracts, invoices, subscriptions...), files, archives, backups, contacts, appointments, app after app after app...on and on.

I wish life were simpler. Maybe a lot is in our control, more than we realise

reply
tylerjl
2 days ago
[-]
Another sort-of-recent development in the space has made self-hosting dramatically more accessible: even though hardware costs were reasonable before, they're now _very_ reasonable and also resource-efficient.

Repurposing an old tower would offer you enough compute to self-host services back in the day, but now an Intel NUC has plenty of resources in a very small footprint and branching out into the Raspberry Pi-adjacent family of hardware also offers even smaller power draw on aarch64 SBCs.

One experiment in my own lab has been to deploy glusterfs across a fleet of ODroid HC4 devices to operate a distributed storage network. The devices sip small amounts of power, are easy to expand, and last week a disk died completely but I swapped the hardware out while never losing access to the data thanks to separate networked peers staying online while the downed host got new hardware.
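
For anyone curious what that storage layer can look like, a bare-bones Gluster replicated volume across three such boards is only a few commands (hostnames and brick paths are placeholders):

    # From one node, add the other peers to the trusted pool
    gluster peer probe odroid2
    gluster peer probe odroid3

    # Create a 3-way replicated volume, one brick per host, then start it
    gluster volume create gv0 replica 3 \
      odroid1:/data/brick1/gv0 odroid2:/data/brick1/gv0 odroid3:/data/brick1/gv0
    gluster volume start gv0

    # Mount it from any client
    mount -t glusterfs odroid1:/gv0 /mnt/gv0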

Relying on container deployments rather than fat VMs also helps to compress resource requirements when operating lots of self-hosted services. I've got about ~20 nomad-operated services spread across various small, cheap aarch64 hosts that can go down without worrying about it because nomad will just pick a new one.

reply
OkayPhysicist
2 days ago
[-]
Hardware hasn't been the issue (at least for the 15 or so years I've been doing server tinkering). The problem is ISPs. They don't want to give me a static IP address, and they don't want to give me even half-decent upload bandwidth.
reply
thenthenthen
1 day ago
[-]
I would be interested to read more on hardening your internet exposed home lab and ideas for (off site?) backups!
reply
oxalorg
2 days ago
[-]
I left my Hetzner VPS open to password logins for over 3 years, no security updates, no firewalls, no kernel updates, no apt upgrades; only fail2ban and I survived: https://oxal.org/blog/my-vps-security-mess/

Don't be me, but have some solace in the fact that even if you royally mess up things won't be as bad as you think.

I self host a lot of things on a VPS and have recently started self hosting on a raspberry pi 5, it's extremely liberating!

reply
breakingcups
1 day ago
[-]
You have no idea whether your server is currently actively compromised and participating in a botnet.
reply
kentbrew
1 day ago
[-]
Small typo: under Calendar and Contacts, "let's other" looks like it wants to be "let others."
reply
igor47
2 days ago
[-]
Thank you for writing this! I've been playing around with writing something similar and I keep getting lost going way too far up the concept chain. Like, ultimately, I self host because... Capitalism?

In my ideal world, one tech savvy person would run services for a group of their friends and family. This makes the concept more mainstream and accessible, while also creating social cohesion for that group. I think we've monetized too many of our relationships, and often have no real reason to be in community. This is a big change from most of human history, where you depended on community for survival. Building lower-stakes bonds now (I run your email, you help me fix my car) helps avoid the problem later when you really need help (old, sick) but have never practiced getting anything you need except by paying for it

reply
daitangio
1 day ago
[-]
Self-hosting is becoming a freedom factor, in my humble opinion. I have a hard time hosting my email server; it was not so difficult 10 years ago and was trivial 20 years ago.

The reason is the anti-spam rules and the fact that Google, Microsoft and so on are building an iron ring of trust with each other, and the little servers outside it are marked as spam by default.

Let's Encrypt saved HTTPS connections from a similar destiny, but the risk is always around the corner. I mean, HTTPS was becoming "pay us to publish a web server, or our browser will mark you as unsafe and not display it".

I think it is also time to self-host private free chats and possibly other services like DDoS protection.

reply
alexchantavy
2 days ago
[-]
In recent years I noticed RSS has gotten way less popular, even in hacker circles (or maybe that's just my perception).

I remember browsers used to have a native RSS button in the main interface and then you could curate your feed. Seems better than any news feed thing gamified to steal my attention. Sigh.

old-man-yells-at-cloud.gif

reply
neko_lover
2 days ago
[-]
interesting to find out there are self-hostable location tracking solutions as replacement for google location services and the like!
reply
ZebusJesus
1 day ago
[-]
This site had some great links in it, thanks for the share
reply
esseph
1 day ago
[-]
Some people here really are truly terrified of self hosting.

Huh.

reply
podgietaru
2 days ago
[-]
I worked on getting [Omnivore](https://github.com/omnivore-app/omnivore) from cloud to self hosting.

I never appreciated the value of Self-hosting until then. I was so sick of finding new services to do essentially the same thing. I just wanted some stability.

Now I can continue using the thing I was already using, and have developed my own custom RSS reader on top of Omnivore.

I don't need to care about things breaking my flow. I can update the parsing logic if websites break, or I want to bypass some paywalls. It really changed my view on Self-hosting.

reply
cowpig
2 days ago
[-]
selfhostyour.tech
reply
mikewarot
2 days ago
[-]
If I could host something on an actually secure OS, self hosting might make sense. Given the deliberately crippled choices we're all given, walled gardens with active management are the only somewhat sane options.

Self hosting remains untenable for most things because of the legacy of Unix and MS-DOS and the ambient authority model of computing.

reply
xoa
2 days ago
[-]
This list of "why self host" focuses almost entirely on privacy/sovereignty which, as the author admits, has come to be a pretty standard reason given. But I think there are plenty of purely practical ones as well, depending on your specific situation. There's a spectrum here from self-hosting to leaving it all to 3rd parties, and you can mix and match to get the most value out of it. But I'd add:

- Use case/cloud business model mismatch: ultimately much of the value of cloud services comes from flexibility and amortization across massive audiences. Sometimes that's exactly what one might be after. But sometimes that can leave a big enough mismatch between how it gets charged for vs what you want to do that you will flat out save money, a lot of money, very fast with your own metal you can adjust to yourself.

- Speed: Somewhat related to the above, but on the performance side instead of cost. 10G at this point is nothing on a LAN, and it's been regularly easy to pick up used 100G Chelsio NICs for <$200; I've got a bunch of them. Switches have been slowly coming down in price as well; Mikrotik's basic 4-port 100G switch is $200/port brand new. If you're OK with 25G or 40G you can do even less. Any of those is much, much faster (and of course lower latency) than the WAN links a lot of us have access to, and even at a lot of common data centers that'd be quite the cost add. And NVMe arrays have made it trivial to saturate that, even before getting into the computing side. Certainly not everyone has that kind of data and wants/needs to be able to access it fast offline, but it's not useless.

- Customization: catch all for beyond all-of-the-above, but just you really can tune directly to what you're interested in terms of cpu/memory/gpu/storage/whatever mix. You can find all sorts of interesting used stuff for cheap and toss it in if you want to play with it. Make it all fit you.

- Professional development: also not common, but on HN in particular probably a number of folks would derive some real benefit from kicking the tires on the various lower level moving parts that go into the infrastructure they work with at a higher level normally. Once in awhile you might even find it leads to entire new career paths, but I think even if one typically works with abstractions having a much better sense of what's behind them is occasionally quite valuable.

Not to diminish the value of privacy/sovereignty either, but there are hard dollar/euro/yen considerations as well. I also think self-hosting tends to build on itself, in that there can be a higher initial investment in infrastructure, but previously hard/expensive adaptations get easier and easier. Spinning up a whole isolated vm/jail/vlan/dynamic allocation becomes trivial.

Of course, it is an upfront investment, you are making some bets on tech, and it's also just plain more physical stuff, which takes up physical space of yours. I think a fair number of people might get value out of the super shallow end of the pool (starting with having your own domain), but there's nothing wrong with deliberately leaning on remote infra in general for most. But it's worth reevaluating from time to time, because the amount of high-value and/or open source stuff available now is just wonderful. And if we have a big crash there might be a lot of great deals to pick up!

reply
PhilipRoman
2 days ago
[-]
Performance is definitely a big factor. I used to think CI was inherently slow and nothing could be done there until I started to self host local runners.
reply
npodbielski
2 days ago
[-]
Seems like you like networking.
reply
throw-10-8
2 days ago
[-]
He mentions Nextcloud. Has anyone been self-hosting it for a small org with 100-200 users?
reply
pauleee
2 days ago
[-]
Kinda. I use managed Nextcloud by Hetzner (StorageShare) for ~20 people with their smallest instance (1 TB, 4.50 EUR/month) and connected it with a Collabora instance hosted on the smallest Hetzner VPS (this could use more cores).

If you want to self-host completely, look at https://github.com/nextcloud/all-in-one . I have this running on my NAS for other stuff, and it just works out of the box.

Edit: and it scales. Orgs with a lot more people use it for 10k users or more. And it doesn't need a 100 EUR/month setup, from what I've experienced.

reply
throw-10-8
2 days ago
[-]
Yeah I tested it out with the hetzner app on their smallest dedicated server and it ran fine.

Is storage share the managed service?

reply
aborsy
2 days ago
[-]
Look at AIO.

There are institutions with several thousands of employees that use Nextcloud, including mine.

I run an installation for our family, and it’s been problem free.

reply
throw-10-8
2 days ago
[-]
Great, do you use their video conferencing (Talk?) at that scale?
reply
dizhn
2 days ago
[-]
Yes it's fine. Do you have any particular questions?
reply
throw-10-8
2 days ago
[-]
What does your usage look like? My use would be about 30 heavy daily users, another hundred sporadic. Mostly doc editing and video calls.

What kind of hosting infra are you using? Hetzner seems popular.

Any major recent security concerns? It seems to have a large attack surface.

reply
dizhn
2 days ago
[-]
We use it primarily as a file sharing thing. We do not use it for video calls (and I wouldn't recommend it for that purpose). Last time I tried, integrating with an office suite server was also a pain in the ass. I do use its calendar and DAV address book because they work fairly well.

The only security thing we've done is disable a few paths in the web configuration and only allow SSO logins (Authentik). You can also put it behind Authentik's embedded proxy for more security. I didn't do that because of the use case with generic calendar/address book software.

Hetzner is good. Great even, in terms of what you get for the money. They provide a mostly professional service. You will not get one iota of extra service beyond what they promise. VERY German in that regard and very unapologetic about it. And don't talk about them in public with your real identity attached. They ban people for arbitrary reasons and have their uber-fans (children with a 4-dollar VPS) convince fellow users that if you got banned you must have been a Russian hacker trying to infiltrate the Hague.

reply