dd if=/dev/urandom of=/home/myrandomfile bs=1 count=N

openssl enc -aes-256-ctr -pbkdf2 -pass pass:"$(date '+%s')" < /dev/zero | dd of=/home/myrandomfile bs=1M count=1024
Almost all CPUs have native AES instructions, so you'll be able to produce pseudorandom junk really fast. Even my old system will produce it at about 3 Gb/s. Much faster than urandom can go.

Most current desktops (smaller than your usual server) won't have any problem with the GP's command. Yours is still better, of course.
(Not saying you're wrong, just asking)
It also serves to leave some space unused to help out the wear-levelling on the SSDs underlying the RAID array that is the PV¹ for LVM. I'm not 100% sure this is needed any more,² but I've not looked into that sufficiently, so until I do I'll keep the habit.
--------
[1] If there are multiple PVs, from different drives/arrays, in the VG, then you might need to manually skip a bit on each one, because LVM will naturally fill one before using the next. Just allocate a small LV on each and don't use it. You can remove one or all of them and add the extents to a full LV if/when needed. Giving them useful names also reminds you why that bit of space is carved out.
[2] Drives under-allocate (i.e. reserve some capacity for over-provisioning) by default, IIRC.
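A minimal sketch of the placeholder-LV trick described above. The VG, PV and LV names are illustrative, not from the original comment:

```shell
# Carve out a small LV on each PV so those extents stay unallocated.
# Passing a PV at the end pins the allocation to that specific device.
lvcreate --size 10G --name reserved0 vg0 /dev/sda2
lvcreate --size 10G --name reserved1 vg0 /dev/sdb2

# If a real LV ever fills up, reclaim the space:
lvremove vg0/reserved0
lvextend --extents +100%FREE vg0/data
```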
I knew I hadn't invented the concept, as there are so many systems that cannot recover if the disk is totally full (in many systems, a write may be required in order to execute an instruction to remove things gracefully).
The latest thing I found with this issue is Unreal Engine's Horde build system. It's so tightly coupled with caches, object files and database references that a manual clean-up is extremely difficult and likely to create an unstable system. But you can configure it to keep fewer build artefacts around, and then it will clear itself out gracefully. But it needs to be able to write to the disk to do it.
Now that I think about it, I don’t do this for inodes, but you can run out of those too and end up in a weird “out of disk” situation despite having lots of usable capacity left.
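On Linux, `df -i` shows inode usage per filesystem; the same numbers are available programmatically via `statvfs`. A small POSIX-only sketch (not from the comment):

```python
import os

st = os.statvfs("/")
total, free = st.f_files, st.f_ffree  # inode counts, not bytes
if total:
    print(f"inodes used: {100 * (total - free) / total:.1f}%")
else:
    print("filesystem does not report inode counts")
```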
ZFS has a "reservation" mechanism that's handy:
> The minimum amount of space guaranteed to a dataset, not including its descendants. When the amount of space used is below this value, the dataset is treated as if it were taking up the amount of space specified by refreservation. The refreservation reservation is accounted for in the parent datasets' space used, and counts against the parent datasets' quotas and reservations.
* https://openzfs.github.io/openzfs-docs/man/master/7/zfsprops...
Quotas prevent users/groups/directories (ZFS datasets) from using too much space, but reservations ensure that particular areas always have a minimum amount set aside for them.
* https://openzfs.github.io/openzfs-docs/man/master/7/zfsprops...
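Setting these properties is a one-liner per dataset. The pool and dataset names below are made up:

```shell
# Guarantee 10 GiB to this dataset itself (descendants excluded):
zfs set refreservation=10G tank/important
# Or include descendant datasets and snapshots in the guarantee:
zfs set reservation=10G tank/important
# Cap total usage as well, if you want a quota:
zfs set quota=100G tank/users/alice
```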
Addendum: there's also the built-in compression functionality:
> When set to on (the default), indicates that the current default compression algorithm should be used. The default balances compression and decompression speed, with compression ratio and is expected to work well on a wide variety of workloads. Unlike all other settings for this property, on does not select a fixed compression type. As new compression algorithms are added to ZFS and enabled on a pool, the default compression algorithm may change. The current default compression algorithm is either lzjb or, if the lz4_compress feature is enabled, lz4.
* https://openzfs.github.io/openzfs-docs/man/master/7/zfsprops...
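Enabling compression is similarly simple; the dataset name here is hypothetical:

```shell
# Let ZFS pick its current default algorithm:
zfs set compression=on tank/logs
# Or pin a specific algorithm such as lz4:
zfs set compression=lz4 tank/logs
# Check the achieved ratio afterwards:
zfs get compressratio tank/logs
```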
Would it be more pragmatic to allocate a swap file instead? Something that provides a theoretical benefit in the short term, versus a static reservation.
Except that one time when .NET decides that the incoming POST is over some magic limit and it doesn't do the processing in-memory like before, but instead has to write it to disk, crashing the whole pod. Fun times.
Also my Unraid NAS has two drives in "WARNING! 98% USED" alert state. One has 200GB of free space, the other 330GB. Percentages in integers don't work when the starting number is too big :)
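The rounding problem is easy to demonstrate: on a large enough drive, hundreds of gigabytes of free space still rounds to 98% used. The sizes below are illustrative, roughly matching the comment:

```python
def used_percent(total_bytes: int, free_bytes: int) -> int:
    """Integer percentage used, as a naive monitor might compute it."""
    return round(100 * (total_bytes - free_bytes) / total_bytes)

TB, GB = 10**12, 10**9

# A hypothetical 20 TB drive with 330 GB free still reads as 98% used.
print(used_percent(20 * TB, 330 * GB))  # 98
```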
surely you don't need a fire extinguisher in your kitchen, if you have a smoke detector?
a "warning alarm" is a terrible concept, in general. it's a perfect way to lead to alert fatigue.
over time, you're likely to have someone silence the alarm because there's some host sitting at 57% disk usage for totally normal reasons and they're tired of getting spammed about it.
even well-tuned alert rules (ones that predict growth over time rather than only looking at the current value) tend to be targeted towards catching relatively "slow" leaks of disk usage.
there is always the possibility for a "fast" disk space consumer to fill up the disk more quickly than your alerting system can bring it to your attention and you can fix it. at the extreme end, for example, a standard EBS volume has a throughput of 125 MB/s. something that saturates that limit will fill up 10 GB of free space in 80 seconds.
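The arithmetic behind that example, as a quick sanity check:

```python
def seconds_to_fill(free_gb: float, throughput_mb_per_s: float) -> float:
    """Time for a writer saturating the given throughput to consume free_gb."""
    return free_gb * 1000 / throughput_mb_per_s

# 10 GB of free space at an EBS baseline of 125 MB/s:
print(seconds_to_fill(10, 125))  # 80.0
```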
And of course there's nothing to say that both of these things can't be done simultaneously.
Defence in depth is a good idea: proper alarms, and a secondary measure in case they don't have the intended effect.
The authorization can probably be done somehow in nginx as well.
https://nginx.org/en/docs/http/ngx_http_auth_request_module....
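A minimal sketch using the `auth_request` module, following the pattern in those docs. The upstream address and paths are placeholders:

```nginx
location /protected/ {
    # nginx makes a subrequest to /auth; 2xx allows, 401/403 denies.
    auth_request /auth;
}

location = /auth {
    internal;
    proxy_pass              http://127.0.0.1:8080/verify;
    proxy_pass_request_body off;
    proxy_set_header        Content-Length "";
    proxy_set_header        X-Original-URI $request_uri;
}
```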
I recently came across gdu(1) and have installed/used it on every machine since then.
Even more confusing can be cases where a file is opened, deleted or renamed without being closed, and then a different file is created under the original path. To quote the man page, "lsof reports only the path by which the file was opened, not its possibly different final path."
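The underlying behaviour (a deleted file staying alive while a descriptor holds it open) is easy to reproduce on POSIX systems; a small sketch:

```python
import os
import tempfile

# Create a file, then unlink its path while a descriptor stays open.
fd, path = tempfile.mkstemp()
os.close(fd)
with open(path, "w") as w:
    w.write("still here")

f = open(path)    # hold the inode open
os.unlink(path)   # directory entry gone; lsof would show it as deleted
data = f.read()   # the data remains readable via the open descriptor
f.close()

print(data)                  # still here
print(os.path.exists(path))  # False
```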
But maybe the European Hetzner servers still have really big limits even for small ones.
But still, if people keep downloading, that could add up.
> Note: this was written fully by me, human.
And this is why I tried Plausible once and never looked back.
To get basic but effective analytics, use GoAccess and point it at the Caddy or Nginx logs. It's written in C and thus barely uses memory. With a few hundred visits per day, the logs are currently 10 MB per day. Caddy will automatically truncate if logs go above 100 MB.
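An illustrative invocation, assuming a reasonably recent GoAccess with the CADDY predefined format; log and output paths are made up:

```shell
# Caddy's JSON access log, parsed with GoAccess's CADDY predefined format:
goaccess /var/log/caddy/access.log --log-format=CADDY -o /srv/www/report.html
# For Nginx's default combined log format instead:
goaccess /var/log/nginx/access.log --log-format=COMBINED -o /srv/www/report.html
```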
5. Implement infrastructure monitoring.
Assuming you're on something like Ubuntu, the monit program is brilliant.
It's open source and self hosted, configured using plain text files, and can run scripts when thresholds are met.
I personally have it configured to hit a Slack webhook for a monitoring channel. Instant notifications for free!
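A sketch of such a monit rule; the mount points, thresholds and script name are all hypothetical:

```
check filesystem rootfs with path /
  if space usage > 85% then alert
  if space usage > 95% then exec "/usr/local/bin/notify-slack.sh"

check filesystem data with path /srv
  if inode usage > 90% then alert
```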
It's always lupu... I mean NTP or disk space.