Eventually, the thing that kicked the bucket was actually the keyboard (and later the fan started making "my car won't start" noises occasionally). Even the horribly-slow HDD (that handled Ubuntu Server surprisingly well) hadn't died yet.
In my country occasional power loss is possible, so keeping the battery in good health is important. I'm planning to set up two unused laptops, and a mobile phone, as servers (for different purposes), and on the power-management side battery health has been an issue.
Here is a knowledgebase article that goes into the details:
https://knowledgebase.frame.work/en_us/framework-laptop-13-b...
echo 80 | doas tee /sys/class/power_supply/BAT0/charge_stop_threshold
One interesting note about the new functionality: charging will let the battery drain by some percentage before recharging.
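If the threshold resets after a reboot on your machine, one hedged way to reapply it automatically is a root cron entry (assuming the same BAT0 path as above; some kernels expose charge_control_end_threshold instead):

# /etc/crontab: reapply the 80% charge cap at every boot
@reboot root echo 80 > /sys/class/power_supply/BAT0/charge_stop_threshold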
Laptops generally don't have the best thermals, so there's a limit to how much chooch you can get from them.
I am thinking of writing a script that will trigger the desktop to shut down as soon as the LAN goes down. My desktop has a 1 kW PSU, so I am really not looking forward to replacing a 1.2 kW UPS every few years.
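A minimal sketch of such a script, assuming a pingable gateway at 192.168.1.1 and purely illustrative thresholds:

#!/bin/sh
# Shut the desktop down after roughly 5 minutes of the LAN gateway not answering.
GATEWAY=192.168.1.1
FAILS=0
while true; do
    if ping -c 1 -W 2 "$GATEWAY" > /dev/null 2>&1; then
        FAILS=0
    else
        FAILS=$((FAILS + 1))
    fi
    # 30 consecutive failures at 10-second intervals = ~5 minutes offline
    [ "$FAILS" -ge 30 ] && /sbin/shutdown -h now
    sleep 10
done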
These laptops will only be doing backups and small personal services:
One for photo backups, replacing OneDrive.
Another for sync, calendar service, etc.
None requires heavy processing, as far as I know, so thermals are hopefully not going to be an issue.
A secondary issue is cats. I don't have any idea how to stop them from stepping on the keyboards whenever they feel like it, or jumping on the router and cutting the Internet :|
Does your desktop actually draw 1 kW during normal operation? If so, yeah, it would probably benefit from a separate UPS. You might not need to replace the whole thing though, just the batteries -- admittedly they are the bulk of the cost.
WRT cats -- I got myself a network cabinet for precisely that reason. Also minimizes the amount of fur getting trapped in vents.
https://frame.work/ca/en/products/cooler-master-mainboard-ca...
I put my original mainboard in one of these when I upgraded. It's fantastic. I had it VESA-mounted to the back of a monitor for a while which made a great desktop PC. Now I use it as an HTPC.
I have 2 USB disks and want to make a cheapo NAS, but I always waver between making a ZFS mirror, making 2 independent pools and using one to back up the other, or going the alternate route with SnapRAID, which would let me mix in more older HDDs for maximum use of the hardware I already own.
You will gain protection against bit-rot and self-healing (via scrubs) with a mirror. Also faster reads.
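For example, with a hypothetical pool named tank, a periodic scrub walks every block and repairs anything whose checksum fails using the other side of the mirror:

zpool scrub tank
zpool status -v tank   # shows scrub progress and any repaired or damaged files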
> mix more older HDDs
You can do this with ZFS too! As long as you have two HDDs of the same size (or similar size, so as not to lose too much to unused space), you can also add them as a mirrored vdev to your existing pool (or make a new one for backups, as you wrote). Only the two disks in a mirror need to be of similar size, not all disks in a pool.
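As a rough sketch (pool and device names are placeholders), attaching two similar-sized disks as an extra mirror vdev looks like this:

# adds a second mirror vdev to the existing pool "tank";
# ZFS stripes new writes across all vdevs in the pool
zpool add tank mirror /dev/disk/by-id/ata-DISK_A /dev/disk/by-id/ata-DISK_B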
As for the ZFS setup, I kept it simple and did RAID5/raidz1. I'm no expert in that, and have been starting to think about it again as the pool approaches 33% full.
I saw this comment in another thread here that sounded interesting as well by magicalhippo: "I've been using ZFS for quite a while, and I had a realization some time ago that for a lot of data, I could tolerate a few hours worth of loss. So instead of a mirror, I've set up two separate one-disk pools, with automatic snapshots of the primary pool every few hours, which are then zfs send/recv to the other pool."
This caught my attention as it matches my use case well. My original idea was that RAID5 would be good in case an HDD fails, and that I would replicate the setup at another location, but the overall costs (~$1k USD) are enough that I haven't done that yet.
Edit: Realised you can't use Glacier, since the storage has to be mounted as a filesystem to the EC2 compute running the garage binary. So it doesn't really make sense as a media library backup over just scheduling a periodic borg / restic backup to Glacier directly.
So in that sense, I've loved it.
You can own a computer and not run any services at all. Most people do.
Deciding to run your own services, like email, means a lot of work that most people aren’t interested or capable of doing.
It’s the difference between using your computer to consume things or produce things.
Self-hosting implies those features without the cloud element and not just buying a computer.
It is though. People in tech need to stop pretending everything they are doing is super complicated.
Reminds me of the "Dropbox can be built in a weekend"
You can also buy a computer with this — not a laptop, and I don't know about budget desktops, but on Dell's site (for example) it's just a drop-down selection box.
Self-hosting 10TB in an enterprise context is trivial.
Self hosting 10TB at home is easy.
The thing is: once you learn enough ZFS, whether you’re hosting 10 or 200TB it doesn’t change much.
The real challenge is justifying to yourself the spending on all those disks. But if it's functional to your self-hosting hobby…
However, garage sounds nice :-) Thanks for posting.
So instead of a mirror, I've set up two separate one-disk pools, with automatic snapshots of the primary pool every few hours, which are then zfs send/recv to the other pool.
This gives me a lot more flexibility in terms of the disks involved, one could be SSD other spinning rust for example, at the cost of some read speed and potential uptime.
Depending on your needs, you could even have the other disk external, and only connect it every few days.
I also have another mirrored RAID pool for more precious data. However, almost all articles on ZFS focus on the RAID aspect, while few talk about the less hardware-demanding setup described above.
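A minimal sketch of that setup, assuming pools named tank and backup and some hand-rolled previous-snapshot bookkeeping (none of this is the commenter's actual script):

#!/bin/sh
# Snapshot the primary pool and replicate incrementally to the backup pool.
PREV=$(cat /var/lib/zfs-backup/last-snap)      # e.g. auto-20240101-0000
SNAP="auto-$(date +%Y%m%d-%H%M)"
zfs snapshot -r "tank@$SNAP"
zfs send -R -i "tank@$PREV" "tank@$SNAP" | zfs recv -Fdu backup
echo "$SNAP" > /var/lib/zfs-backup/last-snap
# Run from cron every few hours; syncoid or zrepl automate the same flow
# with proper error handling and retention.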
I have two setups.
1.) A mirror with an attached Tasmota power plug that I can turn on and off via curl to spin up a USB backup HD:
curl "$TASMOTA_HOST/cm?cmnd=POWER+ON"
# preparation and pool imports
# ...
# clone the active pool onto usb pool
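# note: --raw sends encrypted datasets as stored (without decrypting them),
# -R replicates the whole dataset tree, and -I includes every intermediate
# snapshot between the two snapshots given below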
zfs send --raw -RI "$BACKUP_FROM_SNAPSHOT" "$BACKUP_UNTIL_SNAPSHOT" | pv | zfs recv -Fdu "$DST_POOL"
2.) A backup server that pulls backups, so ransomware has no chance, via zsync (https://gitlab.bashclub.org/bashclub/zsync/). To prevent partial data loss I use zfs-auto-snapshot, zrepl or sanoid, which I configure to snapshot every 15 minutes and keep daily, weekly, monthly and yearly snapshots as long as possible.
To clean up my space when having too many snapshots, I wrote my own zfs-tool (https://github.com/sandreas/zfs-tool), where you can do something like this:
zfs-tool list-snapshots --contains='rpool/home@' --required-space=20G --keep-time="30d"
Your use case perfectly matches mine in that I wouldn't mind much about a few hours of data loss.
I guess the one issue is that it would require more disks, which at current prices is not cheap. I was surprised how expensive they were when I bought them 6 months ago, and was even more surprised when I looked recently and the same drives cost even more now.
Gives me the benefit of automatic fixes in the event of bit rot in any blocks more than an hour old, too.
zfs wait <poolname>
Could you talk a little more about your ZFS setup? I literally just want it to be a place to send snapshots, but I'm worried about the USB connection speed and accidentally unplugging it and losing data.
So if you want to use an app that needs S3 then you need to deploy S3 and not NFS.
I run a minio cluster (S3) for Veeam backups at work. I also run multiple NFS for Veeam and VMware datastores.
Tools for the job mate!
When things inevitably need attention it’s not about diy.
Some former colleagues still using GitLab CE tell me they also removed features from their self-hosted version, particularly from their runners.
It's really just a bait and switch to try to get free community engagement around a commercial product. It's fundamentally dishonest. I call it "open source cosplay". They're not real open source projects (in the sense that if you write a feature under a free software license that competes with their paid proprietary software, there's zero percent chance it will be upstreamed, even if all of the users of the project want it) so they shouldn't get the credit for being such just because they slapped a free software license on a fraction of their proprietary code.
Invariably they also want contributors to sign a rights-assignment CLA so they can reuse free software contributions (that they didn't pay for) in their for-profit proprietary project. Never sign a CLA that assigns rights.
Some open source projects flat-out illegally "relicensed" open source contributions as a proprietary license when they wanted to start selling software (CapRover). Some just start removing features or refuse to integrate features (Minio, Mattermost, etc). Many (such as Minio) use nonfree fake open source licenses like the AGPL[1].
It's all a scam by people who don't care about software freedoms. If you believe in software freedoms, you never release any software that isn't free software.
This statement in the linked article is incorrect. It overlooks the "through some standard or customary means of facilitating copying of software" clause in section 13.
The software does not have to provide the source code _itself_. It must provide users a reference to such. A link to the Github repository on the about page, for example, would fulfill the requirement.
I manage several Garage clusters and will keep maintaining the software to keep these clusters running. But concerning the "low level of activity in the git repo": we originally built Garage for some specific needs, and it fits these needs quite well in its current form. So I'd argue that "low activity" doesn't mean it's not reliable, in fact it's the contrary: low activity means that it works well for us and there isn't a need to change anything.
Of course implementing new features is another deal, I personally have only limited time to spend on implementing features that I don't need myself. But we would always welcome outside contributions of new features from people with specific needs.
You also want fast networking; I just use 10Gbps. My nodes have 6 rust drives and 1 NVMe drive each, 5 nodes in total. I colocate my MON and MDS daemons with my OSDs; each node has 64GB of RAM and I use around 40GB.
Usage is RBD for a three-node OpenStack cluster, plus CephFS. I have about 424TiB raw between rust and NVMe.
SMR drives are an absolutely shit-tier choice in terms of drives.
https://docs.oracle.com/cd/E19253-01/819-5461/gitgn/index.ht...
Also, ZFS is perfectly happy with USB connections. In fact it's the best type of FS to have if your storage is unreliable, due to its self-healing capabilities. Not that modern USB is unreliable nowadays, and there are plenty of DAS solutions that rely on USB 3.x.
What brand of HDD did you use?
I feel that is a bit of an unfair assessment.
AWS S3 was the first provider of the S3 API; nowadays most cloud providers and a bunch of self-hosted software support S3(-compatible) APIs. Call it Object Store (which is a bit unspecific) or call it S3-compatible.
EKS and EC2 on the other hand are a set of tools and services, operated by AWS for you - with some APIs surrounding them that are not replicated by any other party (at least for production use).
- Openstack Swift
- Ceph Object Gateway
- Riak CS
- OpenIO
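Any of these can be used with standard S3 tooling by just swapping the endpoint; a hedged example with the stock AWS CLI (endpoint and bucket are made-up names, credentials come from the usual AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY variables):

aws --endpoint-url https://s3.homelab.example s3 ls s3://my-bucket/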
SeaweedFS vs. JuiceFS: https://juicefs.com/docs/community/comparison/juicefs_vs_sea...

Is it eleven nines of durability? No. You didn't build S3. You built a cheapo NAS.
Yes, it's obvious, but it's a terrible title. I don't really get the point. It's trivial to have storage however you want and shove the S3 API on top.
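For instance, one hedged way to put the S3 API on top of an existing directory is a single MinIO container (paths and credentials below are placeholders, not anything from the article):

# serve /tank/s3 over the S3 API on port 9000 (web console on 9001)
docker run -d --name minio \
  -p 9000:9000 -p 9001:9001 \
  -v /tank/s3:/data \
  -e MINIO_ROOT_USER=admin \
  -e MINIO_ROOT_PASSWORD=change-me-please \
  quay.io/minio/minio server /data --console-address ":9001"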
"It's trivial to have storage" I'd argue this wasn't trivial for me. Buying $1k of drives+JBOD, acquiring the second hand laptop, getting ZFS working with USB took a couple tries and finally moving my projects to using the dual local network S3 object storage vs cloud S3 took a fair amount of time.
"however you want an shove the S3 API on top." You're right here too. I did find this part pleasantly trivial, which I didn't know before, and hence the article about how pleased I was that this part ended being trivial and has remained trivial once the other parts were setup.