> The "no-delete" permission disallows deleting objects as well as overwriting existing objects.
[0]: https://github.com/borgbackup/borg/pull/8798#issuecomment-29...
it was implemented for "file:" (which is also used for "ssh://" repos) and there are automated tests for how borg behaves on such restricted permissions repos.
after the last beta I also added cli flags to "borg serve", so it now also can be used via .ssh/authorized_keys more easily.
so it can now also be used for practical applications, not just for testing.
not for production yet though, borg2 is still in beta.
help with testing is very welcome though!
[0] https://github.com/borgbackup/borg/blob/3cf8d7cf2f36246ded75...
[1] https://github.com/borgbackup/borg/blob/3cf8d7cf2f36246ded75...
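For the authorized_keys route, the setup could look roughly like this (the permissions flag and its values follow the current borg2 beta and may still change; repo path and key are placeholders, so treat this as a sketch):

```shell
# ~/.ssh/authorized_keys on the backup server: force a restricted borg serve
# for this key, so the client can append but never delete or overwrite.
restrict,command="borg serve --permissions=no-delete --restrict-to-repository /srv/borg/client1" ssh-ed25519 AAAA... client1@backup
```

`--restrict-to-repository` is a long-standing `borg serve` option; the permissions flag is the new addition mentioned above.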
Making e.g. snapshots on the backing storage was always the better approach.
https://github.com/restic/restic
https://github.com/restic/rest-server
which has to be started with --append-only. I use this systemd unit:
[Unit]
After=network-online.target
Wants=network-online.target

[Service]
ExecStart=/usr/local/bin/rest-server --path /mnt/backups --append-only --private-repos
WorkingDirectory=/mnt/backups
User=restic
Restart=on-failure
ProtectSystem=strict
ReadWritePaths=/mnt/backups

[Install]
WantedBy=multi-user.target
I also use nginx with HTTPS + HTTP basic authentication in front of it, with a separate username/password combination for each server. This makes rest-server completely inaccessible to the rest of the internet, and you don't have to trust it to be properly hardened against being hammered by malicious traffic. I've been using this for about five years; it has saved my bacon a few times, with no problems so far.
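A minimal nginx front-end for this could look roughly like the following (hostnames, paths, and the htpasswd file are assumptions, not the poster's actual config; 8000 is rest-server's default port):

```nginx
server {
    listen 443 ssl;
    server_name backups.example.com;
    ssl_certificate     /etc/letsencrypt/live/backups.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/backups.example.com/privkey.pem;

    location / {
        auth_basic           "restic backups";
        auth_basic_user_file /etc/nginx/restic.htpasswd;  # one user per server
        proxy_pass http://127.0.0.1:8000;
    }
}
```

Per-server credentials can then be appended with e.g. `htpasswd -B /etc/nginx/restic.htpasswd server1`.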
rclone serve restic --stdio
You add something like this to ~/.ssh/authorized_keys: restrict,command="rclone serve restic --stdio --append-only backups/my-restic-repo" ssh-rsa ...
... and then run a command like this: ssh user@rsync.net rclone serve restic --stdio ...
We just started deploying this on rsync.net servers - which is to say, we maintain an arguments allowlist for every binary you can execute here and we never allowed 'rclone serve' ... but now we do, IFF it is accompanied by --stdio.
restic ... --option=rclone.program="ssh -i <identity> user@host" --repo=rclone:
which has it use the rclone backend over ssh. I've been doing this on rsync.net since at least February; works great!
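Putting the two halves together, a client invocation might look like this (key path, user/host, and backup directory are placeholders):

```shell
# The forced command in authorized_keys supplies the real
# "rclone serve restic --stdio --append-only ..." arguments,
# so the client side never controls them.
restic --repo rclone: \
  --option rclone.program="ssh -i ~/.ssh/backup_key user@rsync.net" \
  backup ~/documents
```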
Is this what append-only achieved for Borg?
But this was a good reminder I should probably figure out some good way to monitor my borg repo for unintended changes. Having snapshots to roll back to is only useful if a problem is detected in time.
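One low-tech way to watch for unintended changes (a sketch using borg1 syntax; borg2 renames some of these commands, and the paths are placeholders) is to periodically snapshot the archive list and diff it, on the theory that archives should only ever be added, never changed or removed:

```shell
# Verify repository consistency, then compare the archive list
# to the last known-good copy (create that file once before the first run).
borg check --repository-only /srv/borg/repo
borg list --short /srv/borg/repo | sort > /tmp/archives.now
# Any line present in the old listing but missing now is a red flag.
comm -23 /var/lib/backup-monitor/archives.last /tmp/archives.now
mv /tmp/archives.now /var/lib/backup-monitor/archives.last
```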
There used to be append-only, they've removed it and suggest using a credential that has no 'delete' permission. The question asked here is whether this would protect against data being overwritten instead of deleted.
Does anyone know when it will come out of beta?
For low-latency storage (like file: and maybe ssh:) it already works quite nicely, but there might be a lot to do still for high-latency storage (like cloud stuff).
It wasn't perfect, but it did protect against some scenarios in which a client device could be thoroughly compromised, yet the server was more resistant to losing the data.
For work, the backup schemes include separate additional protection of the data server or media, so append-only on top of that would be nice as redundant protection, but it isn't strictly necessary.
Borg has the issue that it is in limbo, i.e. all the new features (including object storage support) are in Borg2, but there's no clear date for when that will be stable. I also did not like that it was written in Python, because backups are not always IO-bound (we have some very large directories, etc.).
I really liked borgmatic on top of Borg, but we found resticprofile, which is pretty much the same thing (and underdiscussed). After some testing, one tip: it is important to set the GOGC and read-concurrency parameters. All the GUIs are very ugly, but we're fine with a CLI.
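As a concrete version of that tip (the values here are illustrative, tune for your machine):

```shell
# Lower GOGC trades CPU for a smaller Go heap; --read-concurrency
# controls how many files restic reads in parallel (restic >= 0.14).
GOGC=50 restic backup --read-concurrency 4 /data
```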
If rustic matures enough and is worth a switch we might consider it.
Single binary, well supported, dedup, compression, excellent snapshots, can mount a backup to restore a single file easily etc etc.
It's made my backups go from being a chore to being a joy.
This should be simpler still:
Git LFS is 50k loc, this is 891 loc. There are other differences, but that is the main one.
I don't want a sophisticated backup system. I want one so simple that it disappears into the background.
I want to never fear data loss or my ability to restore with broken tools and a new computer while floating on a raft down a river during a thunder storm. This is what we train for.
I don't see what value this provides that rsync, tar, and `aws s3 cp` (or the AWS SDK equivalent) don't already provide.
abridged example:
rsync --archive --link-dest 2025-06-06 backup_role@backup_host:backup_path/ 2025-06-07/
Actual invocation is this huge hairy furball of an rsync command that appears to use every single feature of rsync, as I worked on my backup script over the years:

rsync_cmd = [
    '/usr/bin/rsync',
    '--archive',
    '--numeric-ids',
    '--owner',
    '--delete',
    '--delete-excluded',
    '--no-specials',
    '--no-devices',
    '--filter=merge backup/{backup_host}/filter.composed'.format(**rsync_params),
    '--link-dest={cwd}/backup/{backup_host}/current/{backup_path}'.format(**rsync_params),
    '--rsh=ssh -i {ssh_ident}'.format(**rsync_params),
    '--rsync-path={rsync_path}'.format(**rsync_params),
    '--log-file={cwd}/log/{backup_id}'.format(**rsync_params),
    '{remote_role}@{backup_host}:/{backup_path}'.format(**rsync_params),
    'backup/{backup_host}/work/{backup_path}'.format(**rsync_params),
]
I think it works sort of like Apple's Time Machine, but I have never used that product so... (shrugs)
Note that it is not, in the strictest sense, a very good "backup" mainly because it is too "online", to solve that I have a set of removable drives that I rotate through, so with three drives, each ends up with every third day.
Google Cloud Storage's Archive tier is a tiny bit more.
My low value backups go into a cheap usb hdd from Best Buy.
I even wrote Python scripts to automatically clean up and unmount if something goes wrong (not enough space, etc.). On OpenBSD I can even double-encrypt with Blowfish (vnconfig -K) and then a different algorithm for bioctl.
Every once in a while things get sparsed out, so that, for example, I have daily backups for the recent past, but only monthly and eventually yearly backups further back.
Bonus: the backups are readable without any specific tools, you don't have to be able to reinstall a backup software to restore files, which may or may not be difficult in 10 years.
This is the tool I use: https://github.com/hcartiaux/bontmia
It's forked from an old project which is not online anymore, I've fixed a few bugs and cleaned the code over the years.
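The sparsing-out scheme described above (daily for the recent past, monthly further back) can be sketched in a few lines of shell; this is an illustration, not bontmia's actual algorithm, and the ISO-date directory names are an assumption:

```shell
#!/usr/bin/env bash
# Keep every backup from the last 7 days, and only the first backup of
# each month before that. ISO dates sort lexicographically, so plain
# string comparison works.
thin() {
  local today=$1; shift
  local cutoff seen_months="" d month
  cutoff=$(date -d "$today - 7 days" +%F)
  for d in "$@"; do             # callers pass dates in ascending order
    if [[ $d > $cutoff || $d == "$cutoff" ]]; then
      echo "keep $d"            # recent: keep daily
    else
      month=${d%-*}             # e.g. 2025-04
      if [[ " $seen_months " == *" $month "* ]]; then
        echo "drop $d"
      else
        seen_months+=" $month"
        echo "keep $d"          # first backup of an older month
      fi
    fi
  done
}

thin 2025-06-07 2025-04-01 2025-04-15 2025-05-01 2025-06-05 2025-06-07
```

A real script would then delete the "drop" directories; extending the same idea to a yearly tier is mechanical.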
Are you talking about using ZFS snapshots on the remote backup target? Trying to solve the same problem with local snapshots wouldn't work because the attack presumes that the device that's sending the backups is compromised.
Yes.
Would be interested to know what others have set up, as I'm not really happy with how I do it. I have ZFS on my NAS running locally. I back up to that from my PC via rsync, triggered daily by anacron. From my NAS I use rclone to send encrypted backups to Backblaze.
I'd be happier with something more frequent from PC to NAS. Syncthing maybe? Then just do zfs sync to some off site zfs server.
I guess some people might have been relying on this feature of borgbackup to implement that requirement.
They are so similar in features. How do they compare? Which to choose?
All three have a lot of commands for working with repositories. Each of them is much better than the closed-source proprietary backup software I have dealt with, like Synology's Hyper Backup nonsense.
If you want a better solution, the next level is ZFS.
> If you want a better solution, the next level is ZFS.
Not a backup. Not a bad choice for storage for backup server tho
With ZFS, the whole filesystem is replicated. The backup will be consistent, which is not the case with file-level backup; with the latter, you also have to worry about lock files, permissions, etc. Restores are also more natural and quick with ZFS.
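The replication itself is just a snapshot plus an (optionally incremental) send/receive; pool, dataset, and host names here are placeholders:

```shell
# Take a point-in-time snapshot; everything inside it is mutually consistent.
zfs snapshot tank/data@2025-06-07
# Ship only the delta since yesterday's snapshot to the backup host.
zfs send -i tank/data@2025-06-06 tank/data@2025-06-07 | \
  ssh backup@nas zfs receive backuppool/data
```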
When I tested Restic (eight years ago) it was super slow.
No opinion about Kopia, never heard of it.
The fact that Kopia has a UI is awesome for non-technical users.
I migrated off restic due to memory usage, to Kopia. I am currently debating switching back to restic purely because of how retention works.
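For reference, restic expresses retention per `forget` invocation with --keep-* flags (the repo path is a placeholder):

```shell
restic -r /srv/restic-repo forget \
  --keep-daily 7 --keep-weekly 4 --keep-monthly 12 \
  --prune   # actually delete the data the dropped snapshots referenced
```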
I was setting up PCs for unsophisticated users who needed to be able to do their own restores. Most OSS choices are only appropriate for technical users, and some like Borg are *nix-only.