AI writing takes these to an extreme, but we have seen the same thing happening everywhere even before AI.
They are a false positive signal for identifying AI texts anyway.
At least in reasonably long sections of text. I find it can be hard to tell one way or the other in shorter texts (like comments).
I feel like a school friend of mine has been taken from me.
and the comments above it
...
Skipping past the investigation bit (minimising my daily slop intake): it's a wrong MTU value causing connections to fail when WireGuard is disabled:
> When we disabled WireGuard, we expected the configuration to change to use the full 1500 bytes. However, some nodes in the cluster hadn't been restarted [and were] using the old 1420-byte MTU.
> [paraphrased] This particularly affected Valkey connections because they were distributed across nodes with mismatched MTU settings. So your API pod might not connect. The fix was rerolling all the nodes to get a consistent MTU configuration
Great idea: rebuild a whole fleet of VMs instead of adding the MTU reset to your wg-down script.
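For the record, a minimal sketch of what such a script could look like, assuming the usual `wg-quick`/`ip` tooling; the interface names (`wg0`, `eth0`) and MTU values are assumptions, not taken from the article:

```shell
#!/bin/sh
# Hypothetical wg-down script: tear down the tunnel, then restore the
# physical interface's full 1500-byte MTU so nodes don't keep running
# with the WireGuard-era 1420-byte value after the tunnel is gone.
set -eu

wg-quick down wg0            # bring the WireGuard interface down
ip link set dev eth0 mtu 1500  # reset the underlying NIC's MTU
```

Equivalently, a `PostDown = ip link set dev eth0 mtu 1500` line in the `[Interface]` section of the wg-quick config would do the same thing without a separate script. Either way, no fleet rebuild required.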
Makes me think of how, pre-ChatGPT, I could barely handle most recipe blogs because of their well-known attempts at "filling space". And yeah, the problem is significantly magnified now everywhere else.
Anyway, my point is that whether or not someone uses AI is almost secondary (even though it can seem pretty obvious to most of us when it's being used). What matters is whether the writing cares more about throwing words at people than about actually conveying its points in a way that elicits understanding.