Just like any security control, if it's your only means of security, it will not offer much risk reduction. If you want real risk reduction, use multiple security controls together. And like all security controls, there is no way to eliminate risk, only to reduce it as much as possible while still being able to effectively achieve your mission.
Because of this I believe security through obscurity to be an important component of a healthy and mature risk posture.
It irks me when it's dismissed with "obscurity is not security". No single security control is security on its own.
Asking because of the Baader–Meinhof phenomenon :)
I recently learned about that and now I see it everywhere, weird.
To keep with the analogy: no one is going to stand in a field when people are shooting at them. So why do a small subset of vocal people online suggest that you just put on your bulletproof vest, and claim that hiding in the woods, regardless of the vest, is a bad idea?
Therefore, the safest assumption to make is that an adversary has already figured out all of your obscurity, because they can always do so given sufficient time and interest, at which point the only thing between them and you is your security.
That is why we design systems without obscurity and only care about security.
Security through obscurity merely means that your system is atypical. It's not hidden, it's not secret, it's not hard to find, it's not hard to examine, it's not less visible, etc - there is nothing inherently different about the systems at all other than that one is more common than the other. It's just less typical.
This notion was termed "security through obscurity", ie: "you use the less popular option, therefore that option is safer". It has nothing to do with "obscuring" in the sense of "hiding"; that's a linguistic quirk of a colloquial term. If you were actually taking action to reduce the ability to understand a system in a way that you could meaningfully defend, it would no longer be "security through obscurity".
The argument has persisted because there are two different questions that sound the same (X is less typical than Y):
1. Is "X" safer than "Y"?
2. Is a user of "X" safer than a user of "Y"?
When looking at (1) in isolation, you can say things like "X lacks security features, therefore Y is safer" and "X is less often used, therefore X is safer", etc. This is a question about the posture of the project itself, in isolation.
(2) is about the context for users. The reality is that X, which perhaps is fundamentally less well built software, may actually have users who are attacked far less frequently.
Both are likely to favor "rarity is a poor indicator of safety", as we generally reject mitigation approaches that rely on attackers behaving in specific ways. But what's important is that these are completely different questions, and neither has to do with being obscured but rather with being rare.
None of this is about what is "obscured" or not. If something is obscured or obfuscated, that is a technique that can be evaluated separately by its own merits (ie: how hard is deobfuscation, how easy is it to adapt to deobfuscation, etc). All of this is about whether you're evaluating (1) or (2) - and in the case of (1), which is what the criticism always has focused on, the answer is that "rarity" is not a mitigation.
Ideally we want a viable plan B, for when it’s leaked/figured out. (E.g. generate new passwords)
(For convenience, let's label air-gapping as a kind of physical security)
That's not what the expression means.
"Security through obscurity" has a very specific meaning — that your system's security depends on your adversary not understanding how it works. E.g. understanding RSA is a few wikipedia articles away, and that doesn't compromise its security, so RSA isn't security through obscurity.
I’ve used it for a long long time. Like in 1999 I’d have a knock on certain ports in a certain order to unlock the ssh port.
And lots of weird stuff to stop forum spam. Which could work for weeks or months or even a year.
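For the curious, the knock itself is trivial to script. Here's a minimal sketch of a knock client in Python; the host and port sequence are made up, and it assumes something like knockd on the server watching for the SYN sequence before opening the SSH port:

    import socket
    import time

    # Hypothetical knock sequence -- the real ports would be your secret.
    KNOCK_SEQUENCE = [7000, 8000, 9000]
    HOST = "example.com"

    def knock(host, ports, delay=0.3):
        # Send a TCP SYN to each port in order; the knock daemon on the
        # server watches for this exact sequence, then opens the SSH port.
        for port in ports:
            s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            s.settimeout(0.1)  # the ports are closed; we expect no reply
            try:
                s.connect((host, port))
            except OSError:
                pass  # refused/timed out is fine; the SYN already arrived
            finally:
                s.close()
            time.sleep(delay)

    knock(HOST, KNOCK_SEQUENCE)
    # For a short window afterwards, ssh to the host should connect.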
Lucketone's argument is essentially saying that the bad practice itself isn't actually a bad practice, by equivocating between the term of art and the plain-language definition.
your password (plain text) is secret because only you are supposed to have it. in the digital realm, sharing the contents of the password (plain text) is akin to making a copy of it — undesirable
now, the algorithm that hashes the plain text for comparison with the stored hash, that can be known by anyone, and typically is
so password ≠ hashing algorithm
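a rough sketch of that split in python: the algorithm and the salt below are public, and the password string is the only secret in the whole exchange (the example password and iteration count are placeholders)

    import hashlib
    import hmac
    import os

    def hash_password(password: str, salt: bytes) -> bytes:
        # PBKDF2-HMAC-SHA256: the algorithm is public knowledge; only
        # the password fed into it is secret.
        return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)

    salt = os.urandom(16)              # stored next to the hash; not secret
    stored = hash_password("hunter2", salt)

    # Verification re-runs the same public algorithm on the claimed
    # password and compares the results in constant time.
    attempt = hash_password("hunter2", salt)
    print(hmac.compare_digest(stored, attempt))  # True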
"The Integrated Survivability Onion"
https://cogecog.com/the-threat-onion/
1. Don't be seen.
2. Don't be acquired
3. Don't be hit
4. Don't be penetrated
5. Don't be killed
It's actually not a bad mental model training aid for teaching people who might find themselves in an active combat environment.
If you were hiding in cover during WW1, maybe you had a chance.
But if you were hiding from the Terminator, who is "Tireless, Fearless, Merciless", it might not last that long.
same might be said of exploits hiding from people... vs AI.
All security is security through obscurity. When it gets obscure enough we call it “public key cryptography”. Guess the 2048-bit prime number I'm thinking of and win a fabulous prize! (access to all of my data)
However "Not Having Stuff to Steal" works like a charm. It's thousands of years old, and has never gone out of style.
I know that it's considered blasphemy, hereabouts, but I've found that not collecting information that I don't absolutely need is pretty effective.
Even if someone knocks down all my gates and fences, they'll find the fox wasn't worth the chase.
It does make stuff like compiling metrics more of a pain, but that's my problem; not my users'.
> I don't think "obscurity" really buys you much (especially these days, with LLMs).
Actually I think it does so even more with LLMs. As has been posited before (particularly on the threads about open source projects going closed source) security comes down to who has paid more attention to the code, the attacker or the defender. And of course, these days attention is measured in tokens.
We know that LLMs are pretty capable of reverse engineering an application's logic, but I would bet it takes many more tokens than reading the code or other public information directly. As such, obscurity adds an important layer to security: it increases costs for the attacker.
Security has always been a numbers game, but now the numbers will overwhelmingly be tokens and scale. If the defenders can cheaply raise the costs on the attackers by adding simple layers of obscurity, it can act as a significant deterrent at scale. I wonder if we'll even see new obfuscation techniques that are cheap to implement but targeted specifically at LLMs...
This is the crux of the article.
(1) Kerckhoffs's Principle doesn’t say that. It says to design the system AS IF the adversary has all of the info about it except the secrets (encryption key, certificates, etc).
(2) this rule is okay if you are a solo maintainer of a WordPress installation. It’s a problem if you work at a large company and part of the company knows the full intent of this, while the rest of the company doesn’t know the other layers of security BECAUSE of the obscurity layer. In this way, it’s important to communicate that this is only a layer and shouldn’t replace any other security decisions.
More broadly, anything that raises the cost of an attack helps security. Whether it is worth investing your defensive effort in that vs on more actual security is a different matter.
For instance, with respect to URL parameters, I have seen people being told they have an Insecure Direct Object Reference, then apply base64 encoding to obscure what is going on. To QA it just looks like junk, so they don't notice; it is obscure, but base64-encoded parameters are catnip to hackers.
So in this case, the obscurity made the system worse over time.
Heck, I've heard the cringeworthy phrase "Base64 Encryption" many, many times.
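If it isn't obvious why it's catnip: the "protection" is one standard-library call in each direction (the parameter value here is made up):

    import base64

    token = base64.b64encode(b"user_id=1337").decode()
    print(token)                    # dXNlcl9pZD0xMzM3
    print(base64.b64decode(token))  # b'user_id=1337'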
Might give you enough time to change the locks. But not provably — which can matter to a lot of people.
Again, I'm not opposed to simple tricks like this to “buy some time” so long as they don’t PREVENT the deeper layers of security from being performed. But if a company has scarce resources and a choice between patching unpatched software or changing DB names from the defaults the former actually improves security and the latter should only be performed if the staff has solved all of the higher risk items.
In some ways this is not security through obscurity. If you don't have a way to enumerate tables, this is in effect another short password being added to the data. In the same way you could say that the "obscurity" of users' passwords is security through obscurity... except we still use passwords.
The idea of security through obscurity being bad stemmed from the idea that a cryptosystem should still be secure when you know how it works. That's all.
In that way, you know how WordPress works, and yet you don't have access. You know how passwords work, and yet don't have access.
Obfuscating code is interesting because it sort of sits between the two. You could execute the code, and you may know how the obfuscation scheme works, but you can't de-obfuscate easily and see the original intent, and that way it's useful. The fact that you can still execute the code does however limit the impact.
But it can add a bit of delay to someone breaking actual security, so maybe they'll hit the next target first as that is a touch easier. Though with the increasing automation of hole detection and exploitation, even that might stop being the case if it hasn't already.
The biggest problem with obscurity measures IMO is psychological: people tend to assume that the measures⁰ are far more effective than they actually are, so they might make less effort to verify that the proper security is done properly.
----
[0] like moving SSHd to a non-standard port¹
[1] a solution that can inconvenience your users more than attackers, and historically (in combination with exploiting a couple of bugs) actually made certain local non-root credential scanning attacks possible if you chose a high port
Now, in both instances, the obscurity provided does not necessarily cure your infrastructure's vulnerabilities, a dedicated attacker wouldn't have a single problem with either of these. But for someone who hammers the whole internet in a dim hope of finding another Wordpress server from 2017, or the latest flawed online security cam, your disguise is as good as perfect.
> that step didn't add any security.
It is a decision that’s part of the entire process. A branch of many in the decision tree. Other branches are deciding which characters to type for the password; ASCII characters can be as little as 1 bit apart. Deciding between left and right is also 1 bit apart.
I think it boils down to what people commonly understand to be publicly knowable information versus understood-to-be-secret information.
One example: I self-host my password manager at pw.example.com/some-secret-path/. That extra path adds as much to security as a randomly picked username in HTTP Basic Auth: arguably none. Yet, it is as impossible for attackers to enumerate and find that path as it is with passwords.
The difference is that the path leaks easier. It’s not generally understood to be a secret. Yet I argue it helps security. (Example: leaking the domain name through certificate transparency logs AND even, say, user credentials means an attack is still unsuccessful; a strictly necessary piece of the puzzle is missing).
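If you do this, generate the path segment the same way you'd generate a password; a sketch, treating the path as the credential it effectively is (hostname is illustrative):

    import secrets

    # 16 random bytes ~= 128 bits an attacker would have to guess to
    # enumerate the path -- the same arithmetic as a strong password.
    segment = secrets.token_urlsafe(16)
    print(f"https://pw.example.com/{segment}/")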
The delay can also be infinite in practice. If a really bad zero day is discovered, it might protect you from becoming a victim. No guarantees, but it can improve your chances.
So ASLR [1] is not a security control? I guess you are pretty alone with this opinion.
[1] https://en.wikipedia.org/wiki/Address_space_layout_randomiza...
I am pretty sure everyone who works in security agrees that obscurity is not security.
ASLR is a well understood system that exploit writers know to expect and thus ASLR is not security through obscurity.
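You can see the randomization itself from userspace. On a Linux box with ASLR enabled, this prints a different heap address on each run (a quick sketch, not a security test):

    import ctypes

    # Under ASLR the heap base is randomized per process, so this
    # address changes on every run -- an exploit can't hardcode it and
    # must first leak an address to defeat the randomization.
    buf = ctypes.create_string_buffer(16)
    print(hex(ctypes.addressof(buf)))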
One example I remember is Pidgin storing its passwords in plain text in $HOME. They could have encrypted them with some hardcoded string, and made a lot of people happy that they would no longer grep their $HOME and find their passwords right there. However, this would have had the side effect of people dropping the ball and sharing their config files with others. Or forgetting to set up proper permissions for their $HOME, etc.
In addition, these layers of obscurity are also not overhead free: they may complicate debugging, they may introduce dangerous dependencies, they may tie you to a vendor, they may reduce computing freedom (e.g. Secure Boot), etc.
The whole point of security in depth is that you use non-collinear layers of protection to raise the cost of an attack and reduce the blast radius of a successful attack.
(Note also most keychain implementations are not truly improving security in any way, but this is a separate topic)
Does that make it wrong?
I almost missed the twist at the end because I had no idea what the hell cockroach papers were. I still don't understand the reference, but at least it sounds mildly interesting. So, well done.
Now, as for this strawman argument of yours about justifying an infinite amount of crap, that's true of all manner of disingenuous arguments. Who cares about that in this case?
> Or forgetting to set up proper permissions for their $HOME, etc.
This is Pidgin's fault how?
Now, if you wanted to argue that Pidgin should have put the passwords into a separate file and chmod400'ed it that would make much more sense.
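Which is cheap to do right at creation time rather than chmod-ing after the fact. A sketch (the filename is illustrative; 0o600 at creation so the file can be written, dropping to 0o400 afterwards per the chmod400 idea):

    import os

    # Create the credentials file owner-only from the start, instead of
    # writing it with default permissions and fixing it up later.
    # O_EXCL fails rather than clobbering an existing file.
    fd = os.open("accounts.xml", os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o600)
    with os.fdopen(fd, "w") as f:
        f.write("<accounts>...</accounts>")
    os.chmod("accounts.xml", 0o400)  # now read-only for the owner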
> In addition, these layers of obscurity are also not overhead free: they may complicate debugging, they may introduce dangerous dependencies, they may tie you to a vendor, they may reduce computing freedom (e.g. Secure Boot), etc.
Not many good things have zero cost, do they... The point of TFA is that a little bit of well thought out obscurity pays huge dividends when applied in the real world. His example about the WP exploit ought to be all you need to read to get on board with that.
Security ONLY through obscurity is bad (Kerckhoffs's Principle).
Security through obscurity, as an additional layer, is good!
I've been saying this ever since that phrase was coined. A layer or two of obscurity keeps a lot of noise out of the logs, reduces alert fatigue, cuts down on storage costs (especially if you're using Splunk as your SIEM), and makes targeted attacks much easier to detect. I will keep it.
The argument is that it's much easier to secure proper key material rather than design and config information that can often be leaked accidentally because it's actually directly manipulated by humans (employee onboarding, employee churn etc)
If the focus is on the latter, obscurity buys you nothing and adds complexity/distraction, which is bad. The former can be important though.
You have been alive since the 1880s?
"Security including obscurity" is fine.
The way the human brain works, anything that gives you the slightest sense of "security" will make you leave things as they are without implementing an actual solution.
That security by obscurity is now itself a security issue.
Valve pivoted to server-side anti-cheat and toleration because someone probably did the math on max(profit) with lootboxes.
The fact that it's completely hidden from cheat developers gives them a huge advantage though. In the past, any client-side algorithm or detection method could be reverse engineered by cheat developers and patched before lunch time. Now they're working against Valve completely in the dark.
Depending on the setting and the adversary, obscurity measures can raise costs by a material or immaterial amount.
Obscurity measures usually also impose costs on defenders (and, transitively, on the intended users of the system). Those costs are different than they are for adversaries (usually: substantially lower). They might or might not be material.
Your general goal is to asymmetrically raise costs on the adversary.
Seen that way, it's usually pretty easy to reason about whether obscurity is worth pursuing or not. Don't do it if it doesn't materially raise costs for attackers, or, even if it does, if it doesn't raise costs way less for defenders and users.
What trips people up in forums like this is that we're used to dealing with security problems framed in settings where we can impose \infty costs on attackers: foreclosing all known avenues of attack (to something like a mathematical certainty, and stipulating that computer science discoveries may change the cost function tomorrow). In those settings, all obscurity measures have relatively immaterial attacker costs associated. But it's still the same underlying problem! And, in the real world, we're actually rarely operating in model situations where we really can impose \infty costs on attackers.
It's a simple probability calculation. If some automated scanning tools can't find your service, a lot of attackers will never know of its existence. So even if it has an unpatched vulnerability, they won't attack it.
If 1000 attackers find the vulnerable system, the probability is high that at least one is attacking it. If only one or two find it, they might just ignore your system, because they found thousands of others and randomly chose those first.
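The arithmetic behind that, with a made-up per-scanner discovery probability:

    # Chance that at least one of n independent scanners finds you:
    # 1 - (1 - p)**n, where p is each scanner's discovery probability.
    p = 0.5
    for n in (1, 2, 10, 1000):
        print(n, 1 - (1 - p) ** n)
    # 1 -> 0.5,  2 -> 0.75,  10 -> ~0.999,  1000 -> ~1.0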
Obscurity provides, effectively, no security. There may be other benefits to the obscurity, but considering the obscurity a layer of your security is bad. I hope we all agree that moving telnet to another port provides no security (it's easily sniffable, easily fingerprintable).
If it provides another benefit, use it, but don't think there's any security in it.
For ~30 years I've moved my ssh to a non-standard port. It quiets down the logs nicely, people aren't always knocking on the door. But it's not a component of my security: I still disable password auth, disable root login, and only use ssh keys for access. But considering it security is undeniably bad.
I disagree on this. It's right up there with "premature optimization is the root of all evil" on the list of phrases that get parroted by a certain type of engineer who is more interested in repeating sound bites than understanding the situation.
You can even see it throughout this comment section: half of the top-level comments were clearly written by people who didn't even read the first section of the article and are instead arguing with the headline or what they assumed the article says.
You may not see it as “security“, but any entity that is actively monitoring their logs benefits when the false positives decrease. If I am dealing with 800 failed login attempts per minute I cannot possibly investigate all of them. But if failed logins are rare in my environment, I may be able to investigate each one.
Obscurity that increases the signal to noise ratio is a force multiplier for active defense.
Q: Why would you "review the logs" by (human/agent) hand for a service exposed to the Internet? What are you actually looking for?
[I say this as someone who has tens of thousands of failed auth attempts against services I expose to the Internet. Per day.]
If I were you I would do that immediately. Then, once your logs become actually useful again, look at them.
"Hmmm. There sure seem to be a lot of failed login attempts for bobsmith@server. Maybe I should call him up and see if there's something going on."
Advice like this should be at the top of the chapter in the textbook that teaches young sysmonkeys how to admin a box securely. Well stated.
Q: If you've still done the right things - "disable[d] password auth, disable[d] root login, and only use ssh keys for access" - why do you care about how 'quiet' your logs are?
Like moving ssh to a different port. If you are the only one working on it, sure, fine, as long as you remember the port. If you're working with others, then everyone needs to know the new port, so it has to be documented somehow. It's a PITA.
I am the Modern Man (Secret, secret, I've got a secret)
Who hides behind a mask (Secret, secret, I've got a secret)
So no one else can see (Secret, secret, I've got a secret)
My true identity
I wrote a blog about this: https://tanyaverma.sh/2026/03/01/nowhere-to-hide.html
The industry should instead say: relying on an obscure process is bad when it comes to security. Better to rely on obscured data. As this is what is meant.
But technically speaking, all of information security is done through obscurity. It is all done by hiding something from being known. To state otherwise is a misuse of semantics.
Concealment will make specific targeting less than straightforward, but a scorched-earth obliteration will get you along with everything else.
Cover is a condition that is resistant to attack when you are visible.
You should have both: resistance to sequential action when you are specifically targeted, and obfuscation of presence to minimize the frequency of targeting.
Mom & Pop code shops might be high risk if nation-state level vulnerability-exploitation becomes economically viable to any disgruntled prick.
That's why forcing people to use E-mail addresses as user IDs is stupid.
By having obscurity you lose another layer of security: public scrutiny. It's harder for security issues to remain if people can see them and point them out; more eyes mean more chances to catch problems.
There is also a cultural component: having to lay out what you are doing publicly means you can't just think "no one will know", and let something slide, which pushes you towards better security practices.
Of course, this doesn't mean obscurity is always going to be the worse choice; there are times it will offer more than it costs. And it's particularly evident that in, for example, open source projects, the number of eyes on most code is often low enough that "many eyes" is a bit misleading. But I think presenting obscurity as a pure positive is wrong: obscurity has a cost, even if you think it's worth it in some cases.
I never called it a "free" layer of security, I said it was ONE layer of security. Emphasizing the one, because security comes in as many layers as one is able to manage.
As my comment made the case: it's not a simple addition, it's a trade-off, and I'm saying it should be thought about in those terms. I didn't find that was evident from what you said, I guess the "push back" framing was more negative than I intended.
Like, a lot of it comes down to 'high friction' vs 'low friction'. Obscurity means high friction. It means that the attacker needs to craft a specific solution for your site or system in particular rather than relying on an off-the-shelf solution to handle it all for them.
For example, the article's point about changing the WordPress database prefix fits into this category perfectly. Does it really make things that much more 'secure'? No, of course not. But it does mean that automated scripts that just assume tables like wp_posts exist will fail. It means that an attacker can't just run any old WordPress hacking toolkit and watch it do its thing, they have to figure out what database prefix you're using first.
Same with antispam solutions. The best solution to stop spam is to make your site unique in some way. To add some sort of challenge that a new user has to overcome to use the site, like a question related to the topic, a honeypot field they can't fill in, a script that detects how quickly they register, etc.
This won't stop a determined spammer, but it will stop or delay bots and automated scripts that rely on the target system having the same behaviour across the board. The spammer has to specifically target your site in particular, not just every forum script running the same software.
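As a concrete sketch of the honeypot-field idea, assuming a Flask app and an `<input name="website">` in the form that's hidden from humans via CSS (the field and route names are arbitrary):

    from flask import Flask, request, abort

    app = Flask(__name__)

    @app.route("/register", methods=["POST"])
    def register():
        # Humans never see the hidden "website" field, so anything that
        # fills it in is almost certainly a generic bot script.
        if request.form.get("website"):
            abort(400)
        # ... normal registration logic here ...
        return "ok"

A spammer can adapt to this in minutes once they look at your site specifically; the point is that the generic mass-market script won't.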
And much of society works this way to a degree. A federated or decentralised system (whether a social network or political movement) isn't technically harder to attack than a centralised one might be.
But it is more work to attack it. If a government or company wants to censor Reddit or Discord or YouTube, they have one target they can force to censor information across the board. If they want to target the Fediverse or some sort of torrent based system, then they have to track down dozens of people and deal with at least some of those people refusing or taking it to court or being in countries that aren't under their control or whatever else.
That's kinda what a good security through obscurity setup can be. You can't mass target everyone at once, you have to target different systems individually and spend more time and resources in the process.
However, you still need real security measures there. Security through obscurity is like hiding a safe behind a painting. It'll stop casual attackers from finding it, but it won't stop a targeted attack on its own. You need a strong lock, materials that are difficult to drill through and the safe itself being difficult to remove from the wall too.
Cryptography is just a collection of ‘obscure’ keys (and, arguably, algorithms) that someone nefarious has to guess or work out - or social engineer out of someone - to access data. They’re just really hard to guess or work out.
To me this is a major problem of everyone saying security through obscurity is bad. But then those same people reinforcing encryption as a gospel of security.
As far as I know, there are no secrets in the world. Encryption is not providing security to anything. It only gives you guarantees with respect to a certain interpretation/perspective.
Modern encryption is underpinned by the assumption that no common folk (not no one, and not even the people with the ability to do so, who are probably the ones you should be worried about) should be able to decrypt your contents _within your lifetime_ - which is in and of itself a pragmatic goal, but it does not ensure secrets remain secrets.
If your data is encrypted, what your adversary needs is some information about you - which they can gather by either buying it from someone or by investigating you - and a $10 wrench to go over there and get the keys out from you...
Most secrets are only secrets because the combination of obscurity and incentives raises the bar high enough so no one who wants to bother really bothers.
It's like hiding your key under the mat, vs hanging on a tree limb of a specific tree only you know the gps coordinate of. Both are "obscure". Huge difference in difficulty.
I recently used a variation of this type of security to prevent a malicious user misusing our services... But I made a note, to me and everyone else, that it was just a quick fix, not guaranteed to work long term.
That's not a serious argument, of course. But consider how the spooks operate in the field. They employ all manner of obscure practices in an attempt to improve their security. Their intentional obscurity (AFAIK) is never allowed to unnecessarily complicate operational practices, which would introduce risk. And they've probably got a lot more theory and no-BS field testing behind their practices than we do.
Maybe we should ask them for advice?
> There is a long-standing security recommendation to change WordPress's default database table prefix to a random one. For example, wp_users becomes wp_8df7b8_users. This is often dismissed as "worthless" because it is security through obscurity.
I found that just changing the default URL for the wordpress login from the usual wp-admin to anything reduces by several orders of magnitude the number of scripts that try your site for the most common vulnerabilities---something that happens constantly for any site on the web, once a minute or so.
There's a very simple method to reduce spam in OpenSSH server logs: whitelist IPs of those who require access (could be ranges, too), and centralize over a jumphost. And something like Shodan (and friends) would find your OpenSSH server running on a different port anyway. But it wouldn't find it if you were using whitelisting of IPs of those who require access. There is, for example, no valid reason that people in China or Russia need to connect to your OpenSSH server. Why allow them to? Don't. I don't allow traffic from any IPs allocated to China or Russia, among a couple of other countries, and I don't feel like I am missing out.
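The real enforcement belongs in the firewall (nftables, cloud security groups, etc.), but the membership check itself is trivial; sketched here in Python, with RFC 5737 documentation ranges standing in for your real office/VPN ranges:

    import ipaddress

    # Hypothetical allowlist: only these ranges may reach the jumphost.
    ALLOWED = [ipaddress.ip_network(n)
               for n in ("203.0.113.0/24", "198.51.100.17/32")]

    def is_allowed(addr: str) -> bool:
        ip = ipaddress.ip_address(addr)
        return any(ip in net for net in ALLOWED)

    print(is_allowed("203.0.113.9"))  # True
    print(is_allowed("192.0.2.1"))    # False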
Another one is port knocking. Anyone who has read access over the network between client and server can figure out the port knocking process, including a hostile actor who does a MITM (with for example a rogue WiFi AP).
So what happens is improper security (security through obscurity) means people don't apply real security measures (such as IP whitelisting). And that is why security through obscurity is bad.
As for Wordpress, the default settings and default Wordpress is quite secure these days (have been this way for at least 10 years). It is all the bells and whistles in the form of addons which are the culprit.
If your pentester can't find your sshd on a different port: 1) that is prima facie evidence that it works for a similar (low) skill level of attacker, and 2) you should fire that pentester. I'll leave the reasoning as an exercise for the reader.
> I don't allow traffic from any IPs allocated to China or Russia, among a couple of other countries, and I don't feel like I am missing out.
Now yer talkin'! As a blanket policy, if you have no valid users outside of your own nation and no expectation that will change, why not block everybody who isn't local?
(Of course, that just means any Russians and Chinese who do manage to attack you may be actual spooks, so if that happens you're pwned anyway. ;-) But you'll have cut down on your security logs considerably.)
> Another one is port knocking. Anyone who has read access over the network between client and server can figure out the port knocking process, including a hostile actor who does a MITM (with for example a rogue WiFi AP).
While I appreciate the fact that you're thinking outside of the typical box with regard to threat modelling, such an MITM attack is quite a few orders of magnitude more intentional than the rest of the crap the average systems/security admin has to deal with. In the case of a non-targeted attack (ie. not against a specific user or org), you're looking at a malicious network operator, which is far more sophisticated than 99.x% of the bulk scanning and attacks most admins see. In the case of a targeted attack, we're talking about funded and probably successful organized crime at the very least, and possibly even nation-state intel orgs. Only motivated, professional attackers tend to get off their butts and travel to a different location to conduct an operation like that.
Kudos for recognizing such a problem, but using that as an excuse not to employ a powerful security technology such as port knocking is rather throwing the baby out with the bathwater. If you're going to be that defeatist, just airgap the system and be done.
Now, if you are willing to go through the effort of whitelisting IPs (which, I suspect, you haven't done yet, or you'd already loathe doing it and not recommend it), the sane way of going about that is to set up a VPN and whitelist the IP of the gateway. Otherwise you've opened up an administrative can of worms that is bad indeed. Nobody wants to have to keep track of Joe Blow's home IP address, which changes weekly at least, for some whitelist.
On the benefit side, mitigating most of the computational load, log analysis load, how much are the baddies poking me while I sleep load, etc...all of these together make changing such defaults a slam dunk IMO.
Yes, echo chambers are annoying - I remember this from when I challenged them to explain to me why being superuser is problematic (hint: I countered their arguments easily, and then they got very angry about it; I did this on several IRC channels back in the day, just to prove a point. I managed to get banned on one too in the process.)
But ... obscurity is NOT a security technique. It just has a catchy slogan.
The primary reason why javascript is sometimes - or often - obfuscated is to make it harder to copy/paste and re-use stuff. That's it. Even with sanitizers, de-obfuscating it tends to increase the amount of time one has to spend to uncripple the code. This is the primary function; anything else is just decoy for the most part here.
> Security through obscurity is the practice of reducing exposure by keeping an application's inner workings or implementation details less visible to attackers
Very clearly, his attempt to explain it is already biased. Is obfuscating JavaScript security through obscurity? If we cannot agree on the terms, we can't agree or disagree on anything that follows.
Showing fancy images does not add any real argument to the discussion.
> For example, wp_users becomes wp_8df7b8_users. This is often dismissed as "worthless" because it is security through obscurity.
Note that this example does not even follow his own (!!!) definition.
This has nothing to do with obscurity. It is simply a different name than the default. What would he expect people to do? Retain the name? And if they change it, do ALL changes, in his opinion, count as "security through obscurity"? He picked wp_8df7b8_users here. Would the name "foobar" be better instead? Or is it "not obscure enough"?