This Greg Egan short story is a useful intuition pump about the possibilities. Not recommended for children. Or before trying to sleep. https://philosophy.williams.edu/files/Egan-Learning-to-Be-Me...
It would be great if instead of "a clinical trial to demonstrate that the Link is safe and useful" we could have a clinical trial to determine whether or not it is.
> My parents were machines. My parents were gods. It was nothing special. I hated them.
and how the story mixes adolescence and feeling special with philosophical ramifications (hinting that focusing on the philosophical ramifications is just an adolescent attempt at feeling special?)
The narrator falls into the same trap at the end, assuming that he is the 1-in-a-million exception. He doesn't realize that everyone has the same experience, they just process it in a healthier way. AI Catcher in the Rye.
There's a week where the jewel and the brain are still paired, but the jewel is in control. The hospitals monitor that the two are similar to within tolerance, but somehow this jewel slips through the net. What makes you think there's more to it than the 'one in a million' explanation?
If you are reasonable instead of rational, the slippage that occurs within that week is fine. It's expected, as he notes, because the jewel doesn't replicate neurons constantly dying, so it can't be a "perfect" copy.
Adolescence is typified by feeling like everything is happening to "me", for the first time ever. It fits the theme to have him solipsistically dramatize a normal experience. You can see this in the last line, where he wonders if the original him ever felt as "real" as he does.
If you upvote this comment, people will see this spoiler warning before the spoilers.
Probably that would be beneficial to a substantial fraction of the people reading the thread.
> He is now able to control a cursor with his thoughts to browse the internet, play games, and continue his educational journey with greater independence.
Once reliable and cheap, the tangible difference this tech is going to make to people's lives is pretty wild.
Curious to know how accurate the cursor movements and clicks are. For example, here he is playing Polytopia: https://www.youtube.com/watch?v=mgY70ZWCL1g
In Polytopia, a misclick can be about as frustrating/costly as a mouseslip in chess (when you move a piece to the wrong square by mistake).
[1] https://arstechnica.com/science/2024/05/neuralink-to-implant...
0: https://www.vox.com/future-perfect/2022/12/11/23500157/neura...
I also eat meat, it just seems a bit ironic to me.
Eating an animal at least ostensibly has positive value for the people doing so. However, there are plenty of forms of "animal testing" that confer zero positive value. For instance, testing the wrong compound or inserting the wrong implant confers zero benefit. Having improper controls, "testing" nonsensical theories, repeating stale results poorly, inadequate data collection, etc. are just a few ways a test procedure can be totally useless or even actively harmful.
This also ignores one of the other aspects of animal testing, which is as a dry run or rehearsal for actual application. You do it right in animals so you are practiced at doing it right for when you need to do it right in humans. "Oh yeah, we royally screwed up in every rehearsal, but we will nail it in production." is not an acceptable approach. You look at the care taken during their practiced procedures on less critical subjects to determine if their practiced procedure is adequate for more critical subjects. A process that kills far more test subjects than others, or achieves middling results relative to resource expenditure, or that treats subjects as disposable for "advancing science", is not a process fit for human subjects. Assuming ingrained cultural process deficiencies will magically disappear when changing subjects is foolish.
These are just some of the reasons why people eating a ridiculous number of animals does not and should not waive or invalidate concerns about animal testing procedure.
It is what comes before the eating that we should think about. We are breeding conscious beings (cattle, pigs, chickens) in harrowing conditions, with second order effects on the environment and plant and animal diversity (by clearing space for feed).
Should we stop eating animals? I don't know.
Should we stop testing on animals? If it meant that we cannot develop certain classes of therapies, then probably not.
Should we level up our compassion and care for animals and the environment even if it means humans have less luxury as long as it doesn't hold back increased life and health span? Probably.
I was responding to the argument being made that any animal testing process on a small number of animals is fine since much larger numbers of animals are raised to be eaten. That is emphatically not true for multiple reasons of which I highlighted two distinct, practical reasons why careful animal testing is not merely ethical, but can and does increase the rate of the scientific development of safe procedures fit for usage on humans. Demanding good animal testing process is important even if people still raise and eat animals; it is not trumped either ethically or practically.
I find it difficult to believe that companies do expensive surgery on expensive animals for no reason (other than sadism?). These companies think this testing does in fact have value (and if we don't trust companies to make that determination we probably should restrict animal testing to governments).
But regardless, there's no real way to justify eating meat (given the marginal benefit of taste over vegan food) other than saying the lives and suffering of animals are essentially worthless. There isn't a threshold you can set which will allow eating but prevent animal testing.
It's called learning. That's why they are doing it in the first place.
As for medicines, I’m not sure what to do about that. Where I draw the line in veganism is essentially where I’d die if I don’t eat the animal. If there’s a necessity, I think it makes some sense. Some medicines are a necessity for people. Yet I don’t like the idea of supporting companies which would likely be testing non-essential medicines on animals as well.
The world isn’t really configured for veganism
I mean I totally understand it but it's pretty much caprice.
This is reductive and lacking any form of nuance. If I eat chicken, should I automatically be okay with heavily industrialized chicken farms, or even setting chickens alight for entertainment? Just because one evolved to be an omnivore doesn't mean one is okay with all forms of killing animals.
(Also of course a lot of the critics don't eat meat, and it's also true that the rest of us should stop, starting from factory farmed meat)
(I currently eat meat.)
Our entire decision system relies on ends justifying means. I want a steady job that pays well, so I concede to going to a 4-year institution and paying a decent amount in order for that end to be so. The end justifies the sacrifice in time and finances, so the decision is justified in my mind. If the end were that I had only obtained unemployable skills or knowledge, then that particular end would not have justified the means for me.
So I suppose that when people say the ends don’t justify the means, they’re not really saying it categorically—just that the particular ends being argued don’t justify the particular means.
With the case of animal testing to improve human quality of life, it’s hard to say. Dogs were routinely experimented on and killed to first link diabetes to the pancreas, and later to discover insulin was a substance that could be transferred to preserve life. These medical results have saved hundreds of thousands of lives in the past hundred years. Whether the Neuralink experimentation is justified in its potential for quality-of-life improvements in paralysis victims years into the future really depends on where you weigh animal well-being and life in relation to future improvements to human life, as well as whether you believe their experiments are too gratuitous and could be carried out more safely/effectively on fewer animals.
I thought my argument was clear, but I can try to make it more clear:
- I eat pork. Unfortunately because of people like me there are many many suffering pigs.
- I believe that it is more justified to make a pig suffer for neuroscience research than to be made into a McRoyal. (Let's assume that the suffering is comparable. Please also assume that the suffering is necessary for the particular research and that research has actual potential for useful applications. If there is evidence of unnecessary abuse then I'm not defending such abuse.)
- Therefore it seems silly to me to attack neuroscience researchers instead of me, an omnivore who could be vegetarian/vegan.
I understand that one can argue for both positions at the same time -- argue against research on animals and argue against eating meat. But I think the latter one is much more important than the former. And yet you probably wouldn't attack me for my meat-eating habit. (Maybe because doing so would be impolite.)
- The ends justify the means - Meaning we could justify torture if it prevents terrorism for example. Some people would consider this fine, others not.
- Some moral principles or duties have intrinsic value independent of their outcomes - For example, telling the truth might be considered right not because of its consequences, but because honesty itself is inherently valuable.
- Both means and ends matter - Actions are justified when there's moral harmony between how we act and what we achieve. This suggests that good ends achieved through ethical means have a different moral quality than the same ends achieved through harmful means.
Probably I'd put myself in the latter camps, rather than the first two. But then I haven't thought about this too deeply myself, so happy to hear the opinions of others who might have thought about it more :)
However, wouldn't most people say that? It is kind of a cop-out because it lets you decide each moral dilemma on a case-by-case basis, which I think is actually necessary, since you can't say blankly that ends do or don't justify means.
Do you by any chance know Alex O'Connor? I listened to an ethics episode of his podcast and it was quite interesting and well-spoken in my opinion. (It is about veganism again, I suppose it is a useful theme for ethical arguments.)
https://overcast.fm/+AARh0bWaidM
https://www.youtube.com/watch?v=PAOzGNFamgQ (the same content but video)
This latter approach also extends very nicely to probabilistic methods: if I pass garbage on the beach, I can pick it up with probability Y%, and adjust Y so that if enough people make the same choice, then all the garbage will be picked up.
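A minimal sketch of that idea (the function name and parameter values are illustrative, not from the comment): each passerby independently picks up a given piece of litter with probability p, and a piece is cleared once any one of them does.

```python
import random

def pickup_simulation(num_items=1000, passersby_per_item=20, p=0.2, seed=0):
    """Simulate the probabilistic-pickup idea: each of n passersby
    independently picks up a given piece of litter with probability p.
    Returns the fraction of items cleared."""
    rng = random.Random(seed)
    cleared = 0
    for _ in range(num_items):
        # An item stays on the beach only if every passerby skips it.
        if any(rng.random() < p for _ in range(passersby_per_item)):
            cleared += 1
    return cleared / num_items

# With 20 passersby and p = 0.2, the chance a piece survives is
# (1 - 0.2)**20, about 1.2%, so nearly everything gets picked up
# even though no individual feels obliged to pick up everything.
print(pickup_simulation())
```

The analytic version of the same point: the probability an item is cleared is 1 - (1 - p)^n, so even a small personal p compounds quickly as n grows.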
To use an extreme example, say I have a theory that the brain is an unnecessary organ. Can I go around removing pig brains in the name of “neuroscience research” and get a free pass?
Okay, now suppose I want to test if my new brain implant that I intend to attach with known acutely neurotoxic binding agent is safe for long term use. I then observe that the acutely neurotoxic binding agent causes acute brain damage like it said it would and thus my implant is unsafe for long term use. Do I get a free pass for that even though I killed an animal to learn something the manual already told me?
Okay, now suppose I want to test if implant A is safe for long term use. But when I go to do the surgery I insert implant B, because I took the wrong implants out of the storehouse, because I did not follow standard practice and go through my checklist as any competent doctor should. I then repeat this, say, 24 more times before realizing that I have inserted the wrong implants into around half of the test subjects. I then kill the animals when I realize my mistake, because no useful data can be drawn due to my mistake. Do I get a free pass for “experiments” that even I acknowledge are worthless, because I made a mistake, because I ignored standard practice, which includes checks explicitly designed to cheaply and easily avoid exactly this class of mistake?
Killing a pig for high-quality neuroscience research can be worth more than eating it. However, there are plenty of forms of “neuroscience research” that are objectively useless that confer less benefit than eating it or are even actively harmful and thus confer only harm. These forms of “neuroscience research” can still be unethical even if we, as a society, continue to eat meat.
Of course there are proposal review processes for research involving animals, which consider the potential benefits versus the harm done.
> However, there are plenty of forms of “neuroscience research” [involving animals] that are objectively useless
Says who?
You may disagree with the standards and decisions of review processes, but they are ubiquitous today.
No, but Neuralink has proven results and proven useful applications. If you believe that they should publish more data or that there has been a specific misconduct then that is a different argument.
Do we actually need to evaluate the “neuroscience research” and processes to determine if it constitutes one of these classes?
If no, please explain how my first example is clearly morally superior to eating an animal.
If yes, then please answer the other two concrete hypotheticals I proposed and evaluate their practical and moral content.
I contend that such practices would be unethical and practically worthless, with the benefits being either practically zero or actively negative from engaging in such research practices. So, eating an animal would be morally superior to such bad research practices. Such practices would, furthermore, be strongly dominated by well-known, standard practices which are more ethical, practically useful, and cheaper; thus harm minimization and utility maximization both support the use of standard, known practices in preference.
I also contend that such deviation from standard practices would only be morally justified if you were intentionally attempting to evaluate the standard practices themselves, but that would require a specific, nuanced argument and would preclude such experiments from testing new innovations, to avoid disqualifying confounding variables. As such, the proposed hypotheticals do not fit this criterion, as they are attempts to “research” some other non-process factor. So you can only argue this point if you wish to argue that intentionally confounding process and research variables is good science.
Animal testing has existed for centuries and will continue to do so until we can fully simulate a human being.
Why would you cease developing BCIs? It’s not ethical to force another sentient being into biological R&D on their own body. OTOH there’s no problem enrolling someone in a dangerous mission if they volunteer freely and get a benefit from it.
“telepathy” gtfo, they’re trying to give their brainchips marketing hype synonyms like how Altman calls ChatGPT AI when really, it’s not artificial intelligence, it’s just ML. But ML sounds a whole lot less exciting in the marketing pitch.
but it's not the AI that Altman is selling, ditto for ML
I mean, we all know helping the disabled is not the end-game objective of Neuralink. And right now, from a very cynical point of view, disabled people constitute a large reservoir of guinea pigs and free marketing for Neuralink.
I don’t know how much has been invested in R&D on Neuralink, but I doubt we have ever invested that much money in any other technology to provide autonomy to the disabled.
And it is not perfectly clear to me that, for the sole prospect of helping paralysed people, Neuralink is the best way to go. It sure is the one that looks the coolest, but it’s going to be very expensive, hard to fix when something goes wrong, and it is also hard to trust. Those issues do not seem to be avoidable
Don’t get me wrong, I admire the huge QoL gain for the three patients. As individuals, they sure benefited from this. Idk if the same is true of the disabled as a social group
Can you tell us more what you surmise we all think is the end-game objective?
Musk's original stated end-game objective is to give humans a chance against ASI by removing the biggest impediment that humans have to communicate digitally, the keyboard.
This is hard to believe as the truth, as it is extremely short-sighted. If ASI can think 1000x faster than a human brain, and with much more intelligence, then what does giving humans even a 100x improvement in I/O achieve? Also, if ASI is achieved, then it will continue to self-improve. The meat brain is stuck at our current speed.
Please see my HN profile for a privacy rant about the downsides, which only assumes a read capability. Once a write capability is introduced, I mean you gotta be kidding me. Who should you trust with that power? The answer is no one.
The anime movie Vexille (2007) has you covered: https://www.youtube.com/watch?v=p9Ti8mjRXsc&t=34
Technically the worms ("jags") are unintended junk rather than tools of apotheosis, but the overlap is striking.
Generally speaking, the demo is always about finding the green ball on top of a red cube, or the person who went missing in a land slide, but what sells it is detecting and aiming at the dissident hiding under a truck.
And isn't it weird how "think of the children" is always ridiculed but "think of the paralyzed etc." is just fine? I've seen it countless times in the last decades. Just recently when I said on here I want "AI" art to be marked as "AI" made, someone claimed I don't care about the people who have Parkinson's and can't hold a brush, but wouldn't answer why we can't mark it anyway. It's not the people with Parkinson's that want to pass off their creations as hand-made. They're just getting used.
Sure, paralyzed people would love to be able to control a cursor with their mind etc., but even more than that they don't want cuts to social programs that afford them a dignified life beyond "making them as functional as a healthy person", cuts made to allow tax breaks for the super rich. They want friends to have time for them instead of working 3 jobs, that sort of stuff. But Musk and his spiritual brethren are gleefully moving in the opposite direction, as fast and ruthlessly as they can.
So I say this particular doctor is three butchers in a trench coat. I can't prove it, because I can't read minds, but nobody else can either, and this is the "bet" I'm going with. Vulnerable and sick people can only have things that would a.) help super rich people with the same conditions and b.) enable more persecution and exploitation, and an easier discard of undesirable, unproductive or rebellious members of society.
Isn't the difference that "think of the children" is used to ban stuff and "think of the paralyzed" is used to enable stuff?
"think of the children" can and is also be used as a fig leaf, to just ban things or get control, but that fact in turn is then used as a fig leaf for dismissing any concern for children. While "think of the disabled people whose welfare the broligarchy wants to see cut" somehow is just taken without second thought.
I have occasionally wondered if, in some kind of time-travel scenario, I could convince the local royalty that subsidizing healthcare for the masses would ultimately benefit them years down the line when they need an experienced doctor who knows how to do some kind of surgery.
> Someone who has genuine concern for helping people doesn't cut medical programs in fly-by-night operations to leave people with medical devices in their body and whatnot.
Some folks might miss the political reference: https://www.citizen.org/news/egregious-abandonment-of-ongoin...
> but what sells it is detecting and aiming at the dissident hiding under a truck
Mildly relevant: https://xkcd.com/2128/
Your intuition that subsidies can increase outcomes for even the super wealthy is correct, but it should be noted that this already happens today.
Subsidies for healthcare, including for highly specialized and technical procedures that are expensive, yield:
- Increased Access to Cutting-Edge Treatments
- More Skilled & Experienced Surgeons
- Lower Costs Through Economies of Scale
- Encouragement of Medical Research & Innovation
For instance, for heart surgery in particular, in the US, there is Government Subsidies and Assistance (Medicare, Medicaid, CHIP, VA, and ACA) as well as Private and Non-Profit Aid (HealthWell and PAN Foundation, American Heart Association & Mended Hearts, Hospital Financial Assistance).
Then there are major healthcare foundations funded by billionaires, focusing on medical research, global health, and disease prevention. Some of the more notable and impactful ones are:
- Bill & Melinda Gates Foundation
- Chan Zuckerberg Initiative
- Howard Hughes Medical Institute
- Michael & Susan Dell Foundation
- Helmsley Charitable Trust
- Open Society Foundations
- Bloomberg Philanthropies
- The Wellcome Trust
Corporate cyborg parts are an already-predicted nightmare, already taking place, unfolding in slow motion, and soon it will breach the sanctity of human thought.
Edit: And definitely don't suggest adding a government SQL database or you will trigger him. The government doesn't use SQL.
So, what do we do first? Propose a political solution.
We all know how this is going to end, there have been more than enough cyberpunk videogames and novels for us all to read.
That's not what is happening here. These tools (Neuralink and others) enable people who are disabled to participate more in society.
I'm struggling to understand your point. You seem to be saying that Musk is trying to sell a product to people while at the same time taking away their ability to pay for it. Logically, that means nobody would be buying the product, which leads me to conclude the thinking you express above is flawed.
But. It also doesn't take a lot of imagination to see what other beneficial uses such a device promises, as a general device. Imagine having a computer plugged in permanently to your brain. Both in reading (and reacting by providing a stimulus, whatever it is, however you may do so directly or indirectly), and perhaps even, some day, in writing.
When you see what you can achieve with an individual, customised touch-screen computer in the pocket, something that didn't even exist a quarter of a century ago: the potential, the horizon. How would you not invest in that vision if you had the money for it?
What a striking coincidence that the man behind this project now has access to the resources of a huge country, whose administration happens to deport "illegal" immigrants here and there, without due judicial process (that is, publicly documented), into territories outside of judicial oversight (like Guantanamo).
The same guy who felt brazen enough to make a nazi salute, twice, in front of television cameras.
Far fetched scenario? Yes, obviously. Improbable? Also yes. Impossible? No.
That's an uncharitable take that focuses on the wrong issue, in my opinion.
Noland's life was pretty dire: "Since dislocating his C4-5 vertebrae in a 2016 swimming accident, Arbaugh had dropped out of Texas A&M and returned to live with his family in Yuma, Arizona. Due to the combination of Yuma’s scorching heat — from May to September the average high temperature is 99 degrees or more — and the intense spasms he experienced when sitting in his power chair, Arbaugh spent most of his time in bed, watching TV. With no sensation or function below his shoulders and having limited caregiving hours provided by the state, he relied heavily on his parents and brother and often felt like a burden." [1]
After Neuralink, the abilities that Noland gained are best represented by his own words: “Before, I would wake up and just [watch] my TV,” he says. “Now, I wake up and [work] on my computer. It’s very similar, but at the same time, my daily routine has changed from just watching stuff to being more active and interactive with the world.”
[1] https://newmobility.com/noland-arbaughs-life-as-the-first-ne...
You cannot fight racism/sexism with racism/sexism.
What would your solution be?
* Everyone hates Elon
* For most this is enough to hate neuralink.
* 15ish+% think that embedding stuff in your brain from any company is a bad idea(TM)
* 5-ish% think this is not worth working on at all, or not worth the animal / human research costs
* In-the-know folks point out that tech like this has been around for roughly 10 years, but research hasn’t progressed to the point where brain injury isn’t a major risk -> this is too early
I don’t read anything here about human autonomy; each of the guys written about have my utmost respect for not just committing suicide — they must be incredibly tough, persistent and positive humans, full stop. The idea that they can’t or shouldn’t be able to weigh the risks and benefits of tech like this feels infantilizing, in the worst way - infantilizing from people who have full mobility.
At any rate, I applaud a company trying to help people like this, EVEN IF their long term goal is an ad-supported BCI (although TBH Elon’s always had significantly better revenue ideas than ads), and I applaud the first few folks willing to risk their health to get access to a better life, and help people down the line from them.
For example, this article discusses medical implants. Safety of those is very important. When the owner of the company is actively dismantling oversight that ensures safety, this directly impacts whether we can trust this product.
I agree that HN should be mostly politically neutral, and for the most part it is. For topics involving Musk, however, one simply cannot ignore their problematic attitude towards anything that might inconvenience them.
This is a piece of marketing from a private company. It is a good thing that people raise criticism missing from it.
until we have a solution to the problem that is Elon Musk, and potential future Elon Musks, this type of technology can only be a net negative to society.
I think basically any of the leaders who brought us the technology we are using today are cults of personality like this, we just forget about the ones that aren't contemporary. I have yet to see us grow without them.
This kind of behavior is not befitting of a company that will need to cultivate an incredible amount of trust from customers before they buy into the idea of a brain implant.
Elon is so effective as a leader he seems to break people’s brains. No other person could have started this company and had even half this success. There’s a reason all the most talented flock to his companies, despite “conventional wisdom” saying they shouldn’t. It takes a lot of self deception to ignore the reality that he obviously must be doing something right.
Well, you have hit the nail on the head. My misanthropic view is that most people are a deluded lot.
I'm sure there are. There may also be people with The Com background (https://cyberscoop.com/the-com-764-cybercrime-violent-crime-...) working on it too: https://krebsonsecurity.com/2025/02/teen-on-musks-doge-team-...
Yeah, better known as the DOGE guy, but he worked at Neuralink before that. Imagine the potential for abuse.
One of the great examples of this is the infamous "Pedo guy" incident in which he showed himself as very unempathetic and petty the moment people dismissed him as he attempted to hastily insert himself into a tragic moment.
He's also regularly sued people exercising their free speech to comment on or criticise his financial interests, knowingly attempting to drown influential people he doesn't like in legal fees and frivolous lawsuits.
In the past he has participated in doxing governmental employees who might cause him financial damage, often encouraging his followers to harass bureaucrats and lawyers who are just doing their legal jobs.
There are plenty of examples of Elon regularly engaging in bullying of others who may not have access to the resources he does; it's not limited just to these few examples.
In my eyes, any measure of success or wealth will never excuse how a person conducts themselves in public. And I think Elon no longer thinks that the rules apply to him as so many are willing to overlook his behavior due to worshiping his money and influence. Elon's nazi salute is the perfect example of this.
So my original statement still holds. Neuralink has a very large mountain to climb when it comes to consumer trust. Products in the healthcare industry can massively impact people's lives, especially when they don't work as intended. Any company that participates in this space is morally and ethically required to be empathetic to the lives that it impacts. And this level of empathy is not something that I see coming from the man behind Neuralink, which I think should disqualify it as a company with the potential to impact a lot of people.
Is this legal in the US?
This makes it rather galling that Elmu is seeking to shield DOGE employees from such accountability, but understandable when people on Reddit are openly advocating their assassination.
Is this standard of shielding government employees from accountability applicable only to DOGE employees, or could we also have applied it to the many employees receiving death threats from Elon's fan base? Consistency on this would be welcome.
Briefly, in the firearm age, respecting the popular vote was a Nash equilibrium, because if you lost the vote, you probably wouldn't be able to field enough riflemen to win on the battlefield either, so your best option was to lick your wounds and make do under the opposition party until the next election. Despite the resounding defeats of the US by masses of riflemen in Vietnam and Afghanistan, and of the USSR in Afghanistan in between, that equilibrium seems increasingly unstable in the drone age. The first warning signs of this were the staggeringly unequal death tolls in the US's first Iraq invasion, reminiscent of the Scramble for Africa. Recent examples of this instability might include the US's successful initial invasion of Afghanistan, the US's successful eventual defeat of Daesh in western Iraq (despite the relative hostility of current Iraqi leadership to the US, which counts as a sort of defeat), Israel's utter dismemberment of Hizbullah, Israel successfully stymieing Iran's nuclear weapons program, and Ukraine's surprisingly successful resistance to the invasion by Russia's much larger army. Also Hamas doesn't seem to be doing very well at defending Gaza.
Unfortunately the literature I could recommend to you on this topic has mostly been flagged as wrongthink, so I won't recommend that, but Slaughterbots is probably still safe to watch. It contains the memorable line "nuclear is obsolete", a riff on Putin's remarks at Valdai in Sochi 11 years ago: https://www.youtube.com/watch?v=9CO6M2HsoIA
It's fiction, of course, but thought-provoking fiction, scripted by leading AI researchers to be as realistic as possible, and it may have more truth in it than we would like.
As written about by Sam Harris recently and discussed in the (now dead) thread: https://news.ycombinator.com/item?id=42716926
Musk is not just a terrible person, but also a deeply dishonest and selfish one.
In the grander scheme of things, the US voters do indeed have the right to vote for not providing foreign aid. Which is sad of course, but is a valid political position.
Those are just two of very many recent examples.
https://x.com/ADL/status/1881474892022919403
There also doesn’t seem to be any corroborating data to suggest he’s a nazi. I’m all for calling a spade a spade if he is, but it seems that people are working backwards from the “I hate Musk” position rather than forwards from the facts.
"Sorry officer, I did not flip you off twice, that were both just very awkward gestures"
There is tons of data on how Musk became a far-right supporter and sympathizer, like his support of UK racists and the German far right. You seem to still use X; you could just scroll through his posts there and try not to ignore the evidence you see with your own eyes.
I struggle to see how doubling down on the hatred is going to convince anyone other than _already hateful_ people of the righteousness of your cause?
But you (and everyone else in this thread, with perhaps a few exceptions) hated him long before the salute, so to try to blame it on that is pretty disingenuous.
I think trying to justify your hatred of someone based on something they did after you started hating them is pretty "not okay", too, fwiw.
As for his other views, Wikipedia can speak to it better than I can: https://en.wikipedia.org/wiki/Views_of_Elon_Musk#Race_and_wh...
So which is it? It seems wildly inconsistent to me to intentionally make a nazi salute but verbally deny being antisemitic. I don’t think someone like him would need to rely at all on plausible deniability, given that everyone already seems to hate him and he’s been granted immense power without having been elected to any position.
You are absolutely correct that modern Nazis are not really bothered by the Jews, but you have to remember that fascists just look for easy targets to hate.
Essentially, yeah, fine, he's not a card carrying member of the NSDAP. But he's a hateful individual pursuing a hateful agenda all the same, and he did the salute to signal to edgelords that he sees them.
That was a Hitlergruß, no ifs, no buts. Moreover, it wasn't just the once.
Does it make him a nazi? No.
But one has to question why the fuck he thought it was a good idea.
He's terminally online, he knows exactly what it is. He's seen the same memes as us, and knows exactly what that gesture means. So why do it?
That is the far more concerning question.
But that's irrelevant, as he appears to be gaining absolute control over the executive.
As far as I understand, most of the leaders of the historical NSDAP (the Nazi party) / the Nazi regime were not Nazis themselves, insofar as they did not believe in whatever was written in Mein Kampf. Nazism was just a means to grab and hold power. The true believers were basically victims of a con.
So... I don't care whether Elon Musk is actually a Nazi. I do believe that he is willing to use Nazism as a lever, which makes him much more dangerous.
Another example is Albert Speer (whose biography I've also read), who was initially an opportunist but eventually became an active participant in furthering the goals and believing the "mission" of the Nazis, even though initially (and afterwards) he wasn't as convinced (edit: by his own account; many historians disagree with this today).
Characterizing the leadership/inner-circle/leaders as merely power-seekers who didn't believe their own ideology minimizes their moral culpability and misrepresents the historical record.
Probably not a good idea publicly. I'd say he slipped if he did do it. I do find NS to be very funny because it annoys/offends some people; most comedians will similarly find it funny.
I mean yeah, but he did it more than once, and it wasn't like it was an odd sort of wave; it was a full-on, parade-standard Hitlergruß (thump on the chest, hand out at the prescribed Hitler angle). Which he then repeated to the audience in the front and the people behind him.
Monty Python used to do it all the time, as did a number of other comedies. But the important distinction is that comedians aren't in power.
Musk arguably has more power than the president. So him thinking that it can't harm to try the old nazi salute, with unprecedented power isn't a healthy thing for democracy, regardless of who you think should be in power. Do you think he's going to give up that power willingly?
Now coming to the democracy thing: I'm not sure it's the best form of governance as commonly understood, so I personally don't value it. I don't imply that the opposite of democracy is tyranny either. I suspect that groups exist outside of typical governments, and I'd personally rather be part of such a group than participate in a 'democracy' which caters to a relatively low IQ, the stuff that Monty Python highlights.
I would think that EM has already reached that stage of not wanting the approval of those easily offended people.
He craves approval; the people he craves it from are just as easily offended as anyone else.
Can you imagine the death threats if he pulled out a pride flag, or said he loved his transgendered kid?
> He craves approval; the people he craves it from are just as easily offended as anyone else.
I would not make that assumption unless I knew him personally.
Hell, he tried to get the left's approval for years. However nothing he did was ever good enough to satisfy the loudest, most critical voices, which I think has contributed to his abandonment of the left as a whole.
It probably didn't help that he was outright snubbed by the previous administration numerous times. I think that blatant disapproval helped shape who he is today, too.
I'm mostly basing this on pragmatism, really. He wants to succeed, and if the left isn't enabling that, of course he'll try the right. They seem much more welcoming (ironic!), and much more supportive of his goals (also ironic, given Tesla's position opposing fossil fuels and climate change!)
In this scenario, it is the people offended by the Nazi salute, and not the person doing it for outrage bait who are petty?
Brother, I have been accused of a lack of self-awareness, but man, this is next level! In your own scenario you have all the power in the world; you don't ever have to weed out anyone, by definition.
The ADL these days mainly exist to shout "antisemite" at anyone criticising Israel. Their giving him a pass was sickening, and undoubtedly related to Republican support of ethnic cleansing in Gaza.
Other Jewish organisations have called it as the world has seen it. One organisation does not speak for a very diverse people.
https://www.theguardian.com/technology/2025/jan/26/elon-musk...
https://www.jta.org/2025/01/21/politics/how-did-the-adl-conc...
https://en.wikipedia.org/wiki/Elon_Musk_salute_controversy#J...
> There also doesn’t seem to be any corroborating data to suggest he’s a nazi.
https://www.wsws.org/en/articles/2023/11/22/ilnh-n22.html
> On November 15, a Zionist account posted a tweet attacking Nazis for being "cowards" and posting "Hitler was right." In response, a fascist account replied that "Jewish communities have been pushing... dialectical hatred against whites" through "hordes of minorities... flooding their country."
> Musk responded to the latter post with the statement, “You have said the actual truth.”
Those, and so much more, are the facts. You cherry-pick the ADL, say "there doesn't seem to be anything else", and conclude everyone, including Auschwitz survivors who seriously have better things to do, just "hate Musk".
https://en.wikipedia.org/wiki/Psychopathy_in_the_workplace
> Hare further claims that the prevalence of psychopaths is higher in the business world than in the general population.
And when people point out that they made this gesture their answer is usually not to crack jokes about how they made a Nazi salute in response.
We can maybe disagree about WHY he did it, there's room to discuss if it was genuinely a white nationalism thing or if he was just being an edgelord.
But it WAS a nazi salute and denying this is disingenuous.
Can we admit to ourselves that most edgelords are actually people infatuated with the exact same movements and value systems? Literally all edgelords get angry at and strongly dislike anyone left-leaning. For example, you do not see them harassing the right wing, but you do see them harassing perceived SJWs.
Somehow, there is no such thing as a left-wing edgelord. And that is because when left-wing people act badly, they are blamed as bad left-wing people. "Edgelord" is just a way to not blame right-wing people, attributing to them a benefit of the doubt never given to the center or to the left.
Your comment is absolutely correct; Musk and his ilk are just people who "joked around" on 4chan and essentially radicalised themselves doing it.
HOWEVER left wing edgelords absolutely exist. Look up (at your peril) the phenomenon of "tankies". They're not your average communist, but rather apologists for Stalin's genocides.
Maybe your comment is more about the terminology at play, but it's interesting to see that the left wing mirror image of the average 4chan poster absolutely exists.
But that is my point: a tankie is someone who is a communist, hardcore pro-Stalin. There is no assumption that a tankie is someone apolitical who is just joking around. There is no "he is just a tankie, doing things for fun, stop accusing him of being a communist".
Meanwhile, an edgelord is someone who is supposedly just joking. A fine guy who just happens to draw a swastika to get a reaction. Someone you should let say and do movement things, because "deep down they do not mean it".
> there's room to discuss if it was genuinely a white nationalism thing or if he was just being an edgelord
This is what I reacted to. You can't replace "edgelord" with "tankie" and "white nationalism" with "Stalinism" in that sentence. It won't work. "There's room to discuss if it was genuinely a Stalinist thing or if he was just being a tankie" does not work, because a tankie is literally a Stalinist.
> Musk and his ilk are just people who "joked around" on 4chan and essentially radicalised themselves doing it.
Or rather, they were attracted to 4chan because they had the same opinions and values as those people. They were not just joking around; it was what they believed and who they were. And while both the center and the left pretended they were just playing, they meant it, and they managed to radicalize other people too.
I mean, I'm a pretty staunch socialist/commie and there are plenty of hard-left edgelords out there "joking" about gulags and executing academics and so on.
You have put "joking" into quotes. Even in this comment, you are not trying to convince me that they are actually fine, that they are something less than Stalinists.
But — despite all the things that should've (but didn't) set alarm bells ringing in my head at the time — until just after he bought Twitter and immediately started making harmful decisions with its new rules, the output of his companies looked kinda like it was helping improve the world.
With SpaceX, humanity was finally unlocking that cheap spaceflight the Space Shuttle promised but didn't deliver ever since Rockwell started building the Enterprise-née-Constitution in 1974, which is one of the few areas where their work is still going great.
(Buuuut even then, for Mars missions to be viable they must have a working Sabatier plant that fits in the payload bay and can produce 330 tons of methane every 2 years from a Martian atmosphere and irradiance level, and I've not seen any sign of this actually getting worked on by any Musk-group company; such machines would be really useful for Earth's environment, and it's a requirement for his Mars plans as otherwise the Starship vehicles can't return to Earth).
With Hyperloop we were finally getting high speed transit to compete with polluting flights, but TBC has completely failed to do anything noteworthy, not even when it is news-worthy.
With Tesla, we were finally getting non-polluting cars, when the competition was hydrogen vapourware, milk-floats, an excuse for ongoing corn subsidies, and the occasional slow news day when some back-yard inventor made a car that was propelled by springs and/or hamsters.
This sort of blindness is a major reason liberals can't properly respond to the rise of MAGA or Trumpism. They refuse to understand it. Understanding something doesn't mean you agree. You can't properly criticize something you don't understand, nor can you provide an alternative that answers it.
Go back in time to the 1990s and 2000s.
The shuttle program was winding down. The only way to get humans into space currently on the market was the Russian Soyuz program, which is ancient Soviet technology. The only human habitation in space was the ISS, which everyone knows is a good engineering experimental platform but otherwise a dead end. The DC-X (first vertical landing rocket) was cancelled. The Venturestar was cancelled, and it may not have been a good design anyway for several reasons.
A lot of people are writing about this as the end of the space age, that the whole thing wasn't a good idea to begin with and there is no future there.
Then along comes SpaceX and within a few years they go from small orbital rocket to functional first stages that land themselves and now they almost have a fully reusable super-heavy capable of refueling in orbit.
Now look at cars. Common wisdom in the 1990s and 2000s is that affordable long-range cars are impossible without fossil fuels. There's a popular site called The Oil Drum that pushes the narrative that all motorized transport will end if fossil fuels are depleted. There are hybrids, but they still run on gas, and nothing much has happened to ICE technology since fuel injection in the early 1980s.
There are some EV efforts but they're early and half-assed.
Then along comes Tesla with the roadster and shows that EVs can be not just viable but cool and actually faster with better torque and acceleration than conventional cars. Since then many other car companies have caught up, but I still believe the whole industry would not have moved without Tesla kicking them in the arse.
If you really hate Musk, the question you should be asking is: why does the human race seem to need people like this to advance?
We had the technology to build the Falcon 9 and Starship in the 1990s, maybe even the 1980s. The problem wasn't money. The total cost of Falcon 9 development was comparable to two space shuttle launches.
The situation wasn't as absurd with EVs, but we definitely could have built a commuter EV at least a decade before we did. Look into the GM EV1 from the 1990s, which pre-dated the Nissan LEAF -- the first mass market EV, which did beat Tesla on that front -- and it had similar range and performance. The EV1 was killed in spite of demand because the conventional auto industry hated EVs. Some still do, like Toyota.
It really does seem like nothing big happens in human history without some manic unhinged asshole pushing it. We have everything -- ability, intelligence, technology, money -- but we don't do it without one of these people. Why?
Maybe we'd need "visionary" CEOs less if we had an over the counter amphetamine-like drug but with less addictiveness or other side effects.
> The situation wasn't as absurd with EVs, but we definitely could have built a commuter EV at least a decade before we did. Look into the GM EV1 from the 1990s, which pre-dated the Nissan LEAF -- the first mass market EV, which did beat Tesla on that front -- and it had similar range and performance. The EV1 was killed in spite of demand because the conventional auto industry hated EVs. Some still do, like Toyota.
Could we have actually built an affordable commuter EV a decade earlier?
Eyeballing this graph, batteries were about 6x more expensive a decade before Tesla actually started delivering the Roadster: https://ourworldindata.org/battery-price-decline
OTOH, perhaps the extra demand would just have made prices fall sooner, given the other graph in the link shows the relationship between market size and price, rather than year of price…
In my view, Elon has spent most of his goodwill and reputation capital. Of course, we still do have the super-fans who are willing to look past his petulant behavior and give him a pass for his boneheaded business moves.
The other take is that he's a genius and a hostile takeover of Twitter was just a checkpoint on the way to making the US government his puppet state. Congress is twiddling their thumbs while Musk is apparently preparing to siphon off taxpayer dollars into SpaceX, Tesla, or other ventures.
Either way, it's bad. I loathe the man and fear what could happen.
It's sad, but damn the man could play... once... I guess I can listen to the old albums.
It's like that.
Unfortunately rock stars on the spiral don't generally destroy democracy.
Musk fans keep insisting he should get more and more control of my life as an individual who has no interest in buying his products or using his businesses because they aren't good products for me.
They keep insisting that I AM WRONG for being upset about an outright asshole forcing himself into my life.
Why do we need this to advance?
We had everything we needed to build the Falcon 9 in 1985.
We will keep suffering Hitlers until we can build the Autobahn without him.
Many yes, but certainly nowhere near all. So that would seem to invalidate your hypothesis.
If the guy has demonstrated anything, it's that he wants total control.
He has access to a lot of money, so maybe these people working on it should continue to work for him. Maybe he wants to charge an outrageous fee for it, but ultimately, at some point down the road, if he can do it, others will too, and it will be commonplace for those who need it, and probably commonplace for those who don't need it but want it.
I'm sure he wants to sell it to those who need it, but I don't think that this means he cares that much whether it's successful as a medical device. He generally cares whether some device appears to work well enough that he can sell it, especially to investors, and far less about whether it actually solves a problem/doesn't introduce worse problems.
Tesla FSD is the best example of something he's been selling for at least 7 years now without it actually working as advertised. Cybertruck was sold long before it came out, and now they're producing only a trickle. The Roadster has been sold by the tens of thousands and it's not even in a design phase yet. The Solar Roof was presented to investors as a working product when it was a plastic mockup. There are probably others.
https://www.news-medical.net/news/20230824/Brain-computer-in...
https://news.brown.edu/articles/2012/05/braingate2
https://www.ahajournals.org/doi/10.1161/STROKEAHA.123.037719
If you google "BCI brain computer interface paralyzed" you will find a wealth of researchers and organizations working on it which are not Neuralink.
Shaking President Obama's hand with "touch feedback" in 2016: https://www.youtube.com/watch?v=itkgmMLi7l4
Eating a taco in 2018: https://www.youtube.com/watch?v=fUjfA78FuZM
Robot arm in 2018: https://www.youtube.com/watch?v=MjFr0rnbT24
Playing Final Fantasy 14 with a BCI in 2019: https://www.youtube.com/watch?v=WjNHkRH0Dus
Non-invasive robot arm control in 2011: https://www.youtube.com/watch?v=8eOSlzDdOpg
Non-invasive robot arm control in 2020: https://www.youtube.com/watch?v=asDwupMbE2I
Speech/voice generation in 2024: https://www.youtube.com/watch?v=v8frSsvwPp4
The technology to do these sorts of things as proofs of concept is fairly old. You do not see widespread deployment because brain-surgery betas are not a very good idea. There is insufficient evidence that the technology is mature or safe enough to support full-scale deployment. A common class of problem is brain scarring around the invasive insertions, which reduces the efficacy of the implant and requires further damaging brain surgery to remove it within a few years.
When you have insufficiently mature technology for deployment, you optimize for research. For that, you only need enough units to saturate your researchers with data and well-designed tests, which is usually achieved with only a small number. This is similar to the reason why you only need a few prototype cars even when you are going to make millions of them. If you are not deploying, then you do not need a lot to saturate your design/development process, and making a bunch of each half-baked version prior to the final release candidate is a waste of time.
When the technology is minimally adequate, then you scale up. In contrast, deploying middling quantities of proof-of-concept versions as if that "tests" anything is a recipe for a slow-burning disaster. Nobody else is "trying to compete" on who can deploy more because competing on who can deploy more half-baked brain implants would be unethical.
The hard parts of BCI are: electrode sensing, which is a much less difficult problem nowadays; implant longevity, probably an unsolvable problem without massive advancements in understanding the body; and brain surgery, which will never not be a huge deal, because piercing the barriers that protect the brain is inherently risky.
I'm pretty sure Neuralink is the only one mass killing monkeys though.
Note that Elon has also helped push for the killing of US science funding, like funding used to further study BCIs. How convenient for him that all his competition is suddenly going to struggle.
That seems pretty benign compared to what a neural implant could be made to do to someone.
Well of course the device doesn't have to be programmed to be controlled by the host, does it? Torture entirely by manipulating the compute substrate your mind runs on would be effective† and yet very easy to do... so this is in fact just another torture device.
† Effective in the sense that it would inflict needless misery on people, that's what torture is actually for, it's not an effective interrogation strategy and never has been.
Black Mirror: https://en.wikipedia.org/wiki/Men_Against_Fire
just as an example
I don't like Musk and I find Neuralink spooky in terms of their overall goals, but it's hard to deny how much this invention helps people.
I see the promise, but I've got too many real life examples of security issues to draw on to trust it would even keep working very long — let alone working appropriately and under my control — to allow one to control my body, which an implant would necessarily need to do.
And that's even with 100% of the biological compatibility issues being solved (I'm told those take several years to show up in all the other research examples from everyone else) and assuming that there was no trust deficit with Musk's companies selling products on the promise of what they aspire to do "this year" and don't/them having misleading demos — this is a fundamental issue of digital security being hard.
If an accident like Christopher Reeve's were to happen, I'd wait for something that repaired or regenerated tissue over a chip.
No.
Not abstractions.
I have experience of software, I know how bad the entire industry is.
https://xkcd.com/2030/ applies to everything.
Even without malice, my degree used as case studies the failures of the Therac-25 and the digitalisation of the 1992 failure of the London Ambulance Service computerised dispatch system.
Hospitals and devices do get attacked. Bitcoin ransomware does affect hospitals. These are not abstractions, they are things that actually happen: https://en.wikipedia.org/wiki/Medical_device_hijack
I wasn't being "abstract" when I said the frequency with which attacks are attempted can be measured in hertz; that's an actual anecdote from someone I knew a decade ago.
Not being able to move your hands is not an abstract concept. It can be directly experienced.
"Disability in general" is exactly as much an abstraction as "software safety in general".
It's all hypothetical anyway, what's your deal? Are you saying the totally hypothetical life changing cure is something to be impressed by so much, that the real suffering caused by those pursuing it is to be ignored? Ignoring the real suffering for some hypothetical deus ex machina is very cowardly, and if my hypothetical sacrifice reminded you of that, that's fine.
"If Elon Musk gave a shit about anything other than profit, and knew his ass from his elbow, and this tech was feasible, and you were paralyzed, would you do it?"
He doesn't, he doesn't, it may not be, and I'm not, so the question is moot. But it's very scientific to ask, and a great way to navigate such society impacting questions, thanks!
This is Reddit-level delusion right here. Please don't bring that here.
I think Musk is driven by both.
Even now, despite the flaws I see in him, I still assume Musk thinks he's improving humanity.
But he needs, and knows he needs, a lot of money for Mars. There's unambiguously a lot of profit motive.
Unlike @computerthings, my objection is to the tech, not the person. The person doesn't help; he doesn't seem to get the mindset needed for quality software security, but he also doesn't make it much worse, given how bad this is everywhere.
How much do you think Neuralink is going to cost? How will people who can't get around on their own pay that? How are people who can't work going to pay that?
I don't know why supporters of all these things are so unable to view the whole situation. Musk doesn't want to pay taxes to a government that will support these disabled people. Musk doesn't want to support these disabled people. They are literally pawns for PR to him.
Musk doesn't want to advance the HUMAN RACE. Musk wants to advance CERTAIN PEOPLE.
> Musk doesn't want to advance the HUMAN RACE. Musk wants to advance CERTAIN PEOPLE.
I think he can't tell the difference between those certain people and the human race as a whole. Trans people in particular would be the obvious example of his failure here — ironically, given how much inspiration he's taken from a fictional universe where people can change physical gender by thinking about it a bit and waiting a few months.
It's… not intended as a compliment when I say he "seems sincere" about wanting to advance the human race when it comes with this caveat. Quite the opposite.
Likewise given what else he's "seemed sincere" about in the past and hasn't manifested.
The future of non-elites is unknown. But hopefully either the elites will be magnanimous, or non-elites will create new occupations that will at once be able to create wealth and not be able to be performed by bots. Not sure what those new occupations will be, but human ingenuity is an incredible thing, especially if the system remains market-capitalism based. Because that will mean your rent and food will depend on you coming up with something to do. I think people will think of something.
If not? Well, let's just say the future might not hold societies as pleasant for non-elites as the societies of today.
A Year of Rick Astley (hey it almost rhymes)
It's tragic in a way. If he had stuck to the same playbook practiced by many other early tech billionaires, spending his life on investing, philanthropy, himself, and his family, the world would probably not have things like common reusable rockets, widespread EV adoption, or massive satellite constellations.
His willingness to pour money, and his ability to get others to pour their money, into various extremely risky ventures is what made all of that possible. Eventually it would have happened anyway, but probably much later.
But I suspect that the very same personality traits that enabled him to do this are responsible for his current state. Over the years he has lost his self-control, to the point that he looks almost childish. A handful of years ago, he opposed the people he now works with.
He's now undermining his own companies with his actions. Even people like Murdoch or Thiel look better in comparison; not because of what they do, but because they are less visible.
Everything he has ever done, will now be viewed in much worse light. His reputation, sabotaged by the only person who could accomplish that feat. Himself.
Viewed by whom? By you and a bunch of other neurotics that consumed too much CNN?