I am usually amused by the way really competent people judge others' context.
This post assumes understanding of:
- emacs (what it is, and terminology like buffers)
- strace
- linux directories and "everything is a file"
- environment variables
- grep and similar
- what git is
- the fact that 'git whatever' works to run a custom script if git-whatever exists in the path (this one was a TIL for me!)
- irc
- CVEs
- dynamic loaders
- file privileges
but then feels it important to explain to the audience that:
>A socket is a facility that enables interprocess communication
Juniors know how much they have learned, whereas a 10+ year senior (like the author) forgets that most people don't know all this stuff intuitively.
I still will say stuff like "yeah it's just a string" forgetting everyone else thinks a "string" is a bit of thread/cord.
As someone younger, ports and sockets appeared very early in my learning. I'd say they appeared in passing before programming even, as we had to deal with router issues to get some online games or p2p programs to work.
And conversely, some of the other topics are in the 'completely optional' category. Many of my colleagues work on IDEs from the start, and some may not even have used git in its command line form at all, though I think that extreme is more rare.
>The term socket dates to the publication of RFC 147 in 1971, when it was used in the ARPANET. Most modern implementations of sockets are based on Berkeley sockets (1983), and other stacks such as Winsock (1991).
Though one explanation is that for the other stuff the writer doesn't explain, one can just guess and be half right, and even a wrong guess isn't critical to the bug — whereas sockets and capabilities are the concepts required to understand the post.
It still is amusing and I wouldn't have even realized that until you pointed that out.
It’s not that I was unaware that’s how Unix worked here, just that I rarely think of sockets in that context.
The author is both an example of and an example for how we can get caught in "bubbles" of tools/things we know and use and don't, and blog posts like this are great for discovery (I didn't know about git invoking a binary in the path like his "git re-edit", for example, until today).
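The git trick mentioned here is easy to try yourself: any executable named `git-<word>` on your `PATH` shows up as a `git <word>` subcommand. A minimal sketch (the `git-hello` name and the demo directory are made up for illustration):

```shell
# Create a toy custom subcommand in a scratch directory.
mkdir -p /tmp/gitsub-demo/bin
cat > /tmp/gitsub-demo/bin/git-hello <<'EOF'
#!/bin/sh
echo "hello from a custom git subcommand"
EOF
chmod +x /tmp/gitsub-demo/bin/git-hello

# git finds git-hello on PATH and runs it as `git hello`.
output=$(PATH="/tmp/gitsub-demo/bin:$PATH" git hello)
echo "$output"
```

This is how many third-party tools (e.g. `git lfs`) plug themselves into git without patching it.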
I would expect a person with 10+ years of Unix sysadmin experience — but who has never programmed directly against any OS APIs, “merely” scripting together invocations of userland CLI tools — to have exactly this kind of lopsided knowledge.
(And that pattern is more common than you might think; if you remember installing early SuSE or Slackware on a random beige box, it probably applies to you!)
I agree that it's amusing.
Years ago I worked on contract for a large blue 3 letter company doing outsourced server management for the fancy credit card company. The incident in question happened before my time on the team but I heard about it first hand from the server admin (let's call him Ben) who had been at the center of it.
The data center in question was (IIRC) 160K sqft of raised floor spread across multiple floors in a major metropolitan downtown area. It isn't there anymore. Windows, Unix, Linux, mainframe, SAN, all the associated fun stuff.
Ben was working the day after thanksgiving decommissioning a system. Full software and physical decommission. Approved through all the proper change management procedures.
As part of the decommission Ben removed the network cables from under raised floor. Standard snip the connector off and pull it back. Easy. Little did he know that network cable was ever so slightly entangled with another cable. Not enough to give him pause when pulling it though. It wouldn't have been an issue if the other cable had been properly latched in its ports. It wasn't. That little pull ended up pulling the network connection out of a completely unrelated system. A system managed by a completely different group. A system responsible for credit card processing. On USA Black Friday.
Oops. CC processing went down. It took far too long to resolve. Amazingly, Ben didn't lose his job. After all, he followed all the processes and procedures. Kudos to the management team who kept him protected.
Change management and change freezes were far more stringent by the time I joined the team. There was also now a raised floor infrastructure group and no one pulled a tile without their involvement.
Be careful what you tug on!
Also, “direct” link: https://blog.plover.com/tech/tmpdir.html (This doesn't really matter, as the posted link is to https://blog.plover.com/2016/07/01/#tmpdir i.e. the blog post named “tmpdir” posted on 2016-07-01, and there is only one post on that date, so the content of the page is basically the same.)
I wonder what could be done to make this type of problem less hidden and easier to diagnose.
The one thing that comes to mind is to have the loader fail fast. For security reasons, the loader needs to ensure TMPDIR isn't set. Right now it accomplishes this by un-setting TMPDIR, which leads to silent failures. Instead, it could check if TMPDIR is set, and if so, give a fatal error.
This would force you to unset TMPDIR yourself before you run a privileged program, which would be tedious, but at least you'd know it was happening because you'd be the one doing it.
(To be clear, I'm not proposing actually doing this. It would break compatibility. It's just interesting to think about alternative designs.)
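The fail-fast idea can be sketched without touching the loader at all: a tiny wrapper that refuses to run a privileged program while `TMPDIR` is set, instead of silently scrubbing it. This is a hypothetical illustration of the design being discussed, not anything glibc actually does (its real behavior is to unset such variables for secure-execution binaries):

```shell
# Hypothetical "fail fast" wrapper: run-privileged <cmd> [args...]
# refuses to start if TMPDIR is set, so the caller must unset it
# explicitly and therefore knows the variable is being dropped.
run_privileged() {
    if [ -n "${TMPDIR+set}" ]; then
        echo "fatal: TMPDIR is set; unset it before running $1" >&2
        return 1
    fi
    "$@"
}

# With TMPDIR set, the wrapper bails out instead of silently ignoring it.
TMPDIR=/mnt/tmp
export TMPDIR
run_privileged echo "should not run"; first_rc=$?

# After an explicit unset, the command runs normally.
unset TMPDIR
second=$(run_privileged echo "ran fine"); second_rc=$?
```

The point of the sketch is the visibility: the failure happens where you can see it, rather than deep inside a child process that quietly got a different environment than you passed it.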
I know when this was necessary and used it myself quite a bit. But today, couldn't we just open up a mount namespace and bind-mount something else to /tmp, like systemd's private tempdirs? (Which broke a lot of assumptions about tmpdirs and caused a bit of a ruckus, but on the other hand, I see their point by now.)
I'm honestly starting to wonder about a lot of these really weird, prickly and fragile environment variables which cause security vulnerabilities, if low-overhead virtualization and namespacing/containers are available. This would also raise the security floor.
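For what it's worth, the namespace approach can be sketched with util-linux `unshare`: give a command a private tmpfs on /tmp instead of steering it with `TMPDIR`. Note this relies on unprivileged user namespaces being enabled, which (as pointed out below) many systems disable:

```shell
# Run a command with its own private /tmp, visible only inside the
# new mount namespace. --map-root-user makes us "root" inside the
# user namespace so the mount is permitted.
ns_out=$(unshare --user --map-root-user --mount sh -c '
  mount -t tmpfs tmpfs /tmp
  touch /tmp/private-file
  ls /tmp
')
echo "$ns_out"
```

Files created under the private /tmp vanish with the namespace, and the host's /tmp never sees them, which sidesteps the whole shared-/tmp symlink-attack class that TMPDIR scrubbing exists to mitigate.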
No, because unless you're already root (in which case you wouldn't have needed the binary with the capability in the first place), you can't make a mount namespace without also making a user namespace, and the counterproductive risk-averse craziness has led to removing unprivileged users' ability to make user namespaces.
Are we just shittier engineers, is it more complex, or is the culture such that we output lower quality? Does building a bridge require less cognitive load than a complex software project?
We're better at encapsulating lower-level complexities in e.g. bridge building than we are at software.
All the complexities of, say, martensite grain boundaries and what-not are implicit in how we use steel to reinforce concrete. But we've got enough of it in a given project that the statistical summaries are adequate. It's a member with such-and-such strength in tension, and such-and-such in compression, and we put a 200% safety factor in and soldier on.
And nobody can take over the ownership of leftpad and suddenly falsify all our assumptions about how steel is supposed to act when we next deploy ibeam.js ...
The most well understood and dependable components of our electronic infrastructure are the ones we cordially loathe because they're composed in *shudder* COBOL, or CICS transactions, or whatever.
Both IMO: first, anybody could buy a computer during the last three decades, dabble in programming without learning basic concepts of software construction and/or user-interface design and get a job.
And copying bad libraries was (and is) easy. I still get angry when software tells me "this isn't a valid phone number" when I cut/copy/paste a number with a blank or a hyphen between digits. Or worse, libraries which expect the local part of an email address to only consist of alphanumeric characters and maybe a hyphen.
Second, writing software definitely is more complex than building physical objects. Because there are "no laws" of physics which limit what can be done. In the sense that physics tell you that you need to follow certain rules to get a stable building or a bridge capable of withstanding rain, wind, etc.
Given the hardware available to an average modern Linux box, it is hardly surprising that these bells and whistles were added — someone will find them useful in some scenarios and the additional resource use is negligible. It does however make understanding the whole beast much, much harder...
Setting TMPDIR to /mnt/tmp seems also to come from that.
I would guess both were the result of someone who didn't really know what they were doing trying things until they found something that got what they needed to work, then pushed that out without understanding the broader implications.
https://www.youtube.com/watch?v=aWXuDNmO7j8
Peter Weller, playing Buckaroo Banzai, is late for his military-particle-physics-interdimensional-jet-car test because he's helping Jeff Goldblum's character with neurosurgery. Later that day he will go play lead guitar in an ensemble.
Scriptwriting gurus advise that your protagonist should have flaws and character progression. The writers of this movie disagree.
Also, computers in 2015 were not meaningfully less complex than today. Certainly not when the topic is weird emacs and perl interactions.