I think there is some inherent tension between being "rational" about things and trying to reason about things from first principles, and the general absolutist tone of the community. The people involved all seem very... full of themselves? They don't really ever show a sense of "hey, I've got a thought, maybe I haven't considered all angles to it, maybe I'm wrong - but here it is". They're the type of people who would be embarrassed to not have an opinion on a topic or to say "I don't know".
In the pre-AI days this was sort of tolerable, but since then, the frothing-at-the-mouth conviction that the end of the world is coming just shows a real lack of humility and a lack of acknowledgment that maybe we don't have a full grasp of the implications of AI. Maybe it's actually going to be rather benign and more boring than expected.
There is a term for this: "getting stuck up your own butt." It wouldn't be so bad except that said people often take on an air of absolute superiority because they used "only logic" and in their head they cannot be wrong. Many people end up thinking like this as teenagers or 20-somethings, but most will have someone in their life who smacks them over the head and tells them to stop being so foolish. If you have enough money and the Internet, though, you can insulate yourself from that kind of oversight.
People in these communities are generally quite smart, and it’s seductive to reason in a purely logical, deductive way. There is real value in thinking rigorously and in making sure you’re not beholden to commonly held beliefs. But, like you said, reality is complex, and it’s really hard to pick initial premises that capture everything relevant. The insane conclusions they get to could be avoided by re-checking & revising premises, especially when the argument is going in a direction that clashes with history, real-world experience, or basic common sense.
The whole reason we even have time to think this way is because we are at the peak of an industrial civilization that has created a level of abundance that allows a lot of people a lot of time to think. But the whole situation that we live in is not stable at all, "progress" could continue, or we could hit a peak and regress. As much as we can see a lot of long-term trajectories (eg. peak oil, global warming), we really have no idea what will be the triggers and inflection points that change the social fabric in ways that are unforeseeable and quickly invalidate whatever prior assumptions all that deep thinking was resting upon. I mean 50 years ago we thought overpopulation was the biggest risk, and that thinking has completely flipped even without a major trajectory change for industrial civilization in that time.
The real conflict here is between Darwinism and enlightenment ideals. But I have yet to see any self-styled Rationalists take this seriously.
Emotionally I don’t subscribe to this view. Rationally I do.
My critique of rationalist people is that they don't seem to fully take experience into account. It's assumptions + rationality + experience/data + whatever strong inclinations one has that seems to be the full picture for me.
That always seemed like a meaningless argument to me. To an outside observer, free will is indistinguishable from a random process over some range of possibilities. You aren't going to randomly go to sleep with your hand in a fire; there's some hard-coded biology preventing that choice, but that only means human behavior isn't completely random, which is hardly a groundbreaking discovery.
At the other end we have no issues making an arbitrary decision where there’s no way to predict what the better choice is. So what exactly does free will bring to the table that we’re missing without it? Some sort of mystical soul, well what if that’s also deterministic? Unpredictability is useful in game theory, but computers can get that from a hardware RNG based on quantum processes like radioactive decay, so it doesn’t mean much.
Finally, subjectively the answer isn’t clear so what difference does it make?
Same; that is not the lived experience. I notice that I care about free choice.
The idea that there's no free will may be a pessimistic outlook to some, but to me it's a strictly neutral one. It used to feel a bit negative, until I looked more closely and saw that there's a difference between looking at a situation objectively and having a lived experience. When it comes to my inclinations and how I want to live life, lived experience takes precedence.
I don't have my thoughts sharp on this, but I don't think the concept even holds up philosophically, which I think is also what you're getting at. It's a conceptual remnant from the past.
But though that is the colloquial meaning, it doesn't line up with what people say they want: you want to make your choice according to your own reasons. You want free choice. But unless your own reasoning includes a literal throw of the dice, your justifications deterministically decide the outcome.
"Free will" is the ability to make your own choices, and for most people most of the time, those choices are deterministic given the options and knowledge available. Free will and determinism are not only compatible, but necessarily so. If your choices weren't deterministic, it wouldn't be free will.
But when you probe people, while a lot of people will argue in ways that a philosopher might call compatibilist, my experience is that people will also strongly resist the notion that the only options are randomness and determinism. A lot of people have what boils down to a religious belief in a third category that is not merely a combination of those two, but involves some mysterious third option, a way in which they "choose" that they can't explain.
Most of the time, people who believe there is no free will (and can't be), like me, take positions similar to what you described, that - again - a proponent of free will might describe as compatibilist, but sometimes we oppose the term for the reason above: a lot of people genuinely believe in a "third option" for how choices are made.
And so there are really two separate debates on free will: Does the "third option" exist or not, and does "compatibilist free will" exist or not. I don't think I've ever met anyone who seriously disagrees that "free will" the way compatibilists define it exists, so when compatibilists get into arguments over this, it's almost always a misunderstanding...
But I have met plenty of people who disagree with the notion that things are deterministic "from the outside".
Approaching this subject from a rational perspective divorces you from subject and makes it impossible to perceive. You have to immerse yourself in it and one way to do that is magical practice. Having direct experience of the universe responding to your actions and mindset eventually makes it absurdly clear that the universe bears intelligence and it's in this intelligence that free will operates.
I'd never thought before now to connect magic this directly to free will. Thanks for the opportunity to think this through! If you're interested in a deeper discussion, happy to jump on a call.
From the outside, this is indistinguishable from randomness. But from the inside, the difference is that the individual had a say in what the action would be.
Where this tends to get tangled up with notions of a soul, I think, is that one could argue that such a free choice needs some kind of internal state. If not, then the grounds on which the person makes the choice are a combination of something that is fixed and their environment, which then intuitively seems to reduce the free-will process to a combination of the determined and the random. So the natural thing to do is to assign the required "being-ness" (or internal state, if you will) to a soul.
But there may exist subtle philosophical arguments that sidestep this dilemma. I am not a philosopher: this is just my impression of what commonsense notions of free will mean.
E.g. punishment for the sake of retribution is near impossible to morally justify if you don't believe in free will, because it means you're punishing someone for something they had no agency over.
Similarly, wealth disparities can't be excused by someone choosing to work harder, because they had no agency in the "decision".
You can still justify some degree of punishment and reward, but a lack of free will changes which justifications are reasonable very substantially.
E.g. punishment might still be justified from the point of view of reducing offending and reoffending rates, but if that is the goal then it is only justified to the extent that it actually achieves those goals, and that has emotionally difficult consequences. For example, for non-premeditated murders committed out of passion rather than e.g. gang crimes, the odds of someone carrying out another are extremely low, and the odds that the fear of a long prison sentence is an actual deterrent are generally low, and so long prison terms are hard to justify once vengeance is off the table.
And so holding on to a belief in free will is easier to a lot of people than the alternative.
My experience is that there are few issues where people get angry as easily as when you suggest we don't have free will, once they start thinking through the consequences (and some imagined ones...).
But, sure, I personally do not believe in free will. I'm saying there is no rational basis for thinking anyone has free will ever. I'm saying there is no evidence to suggest free will is possible. In fact, I'll go so far as to say that believing in free will is a religious belief with no support.
But that doesn't mean that events do not have effects on what happens next, just that we don't have agency. That an IF ... THEN ... ELSE ... statement is purely deterministic for deterministic inputs does not mean that changing the inputs won't affect the outputs.
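To make that concrete, here is a minimal sketch in Python (hypothetical function and argument names) of exactly that point: a purely deterministic branch still produces different outcomes for different inputs, so arguments and incentives still matter even without agency.

```python
# Minimal sketch: a purely deterministic IF/THEN/ELSE whose output
# still depends entirely on its inputs (hypothetical example).
def respond_to_alarm(heard_alarm: bool, believes_effort_matters: bool) -> str:
    # Same inputs always yield the same output: fully deterministic.
    if heard_alarm and believes_effort_matters:
        return "get up and go to work"
    elif heard_alarm:
        return "stay in bed"
    else:
        return "keep sleeping"

# Changing the inputs changes the outcome, even though nothing here is "free".
print(respond_to_alarm(True, True))   # -> get up and go to work
print(respond_to_alarm(True, False))  # -> stay in bed
```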
If you "choose" to lay down and do nothing because you decide nothing matters because you don't have free will, you will still lose your job and starve. That it wasn't a "true" "free" choice does not change the fact that it has consequences.
One of the consequences of coming to accept that free will is an illusion is that you need to come to terms with what that means for your beliefs about a wide range of things.
Including that vengeance, which might seem moral to some extent if the person who did something to you or others had agency, suddenly becomes immoral. But we still have the feelings and impulses. Reconciling that is hard for a lot of people, and so a lot of people in my experience, when faced with a claim like the one I made above that we have no free will, tend to react emotionally to the idea of its consequences.
If there are non-deterministic processes that can be proven to exist, and those interact with deterministic processes, doesn't it follow that the deterministic process becomes non-deterministic (since the result of the interaction is necessarily non-deterministic), and that it is not continually deterministic?
So - can you see how nothing can be deterministic other than in isolation (or thought experiment really)?
We can’t measure things to arbitrary precision due to quantum mechanics, but Philosophy isn’t bound by the actual physical universe we inhabit. Arbitrary physical models allow for the possibility of infinite precision in measurement and calculation resulting in perfect prediction of future states forever. Alternatively, you could have a universe of finite precision (think Minecraft) which also allows for perfect calculation of all future states from initial starting conditions.
Not certain that philosophy is not bound by our universe - is that something you could elaborate (or lend a link) on?
To apply these hypotheticals to our universe implies (from my understanding) that the density of information present at any and all times since its inception was present (in compressed form) at its creation/whatever. I imagine I could find some theoretical maximum for information density and information compression and compare that to the earliest state of the universe we can measure, to get a better idea of whether that tracks.
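For what it's worth, the usual theoretical ceiling cited for this kind of thing is the Bekenstein bound, which limits the entropy (and hence information) of a region of radius R containing energy E; whether it applies cleanly to the earliest measurable state of the universe is exactly the kind of thing that would need checking:

$$ S \le \frac{2\pi k R E}{\hbar c} $$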
I simply mean it's happy to assume perfect information, perfect clones, etc. The trolley problem generally ignores the possibility that choosing a different track could with some probability result in derailment, because the inherent question is simplified by design. We don't need the possibility of perfect cloning to actually exist in order to consider the ramifications of it, etc.
I guess that's the point of any hypothetical, exploring a simplified model of something complex, but it's not easy to simplify the fabric of reality itself.
In a world without free will, crimes of passion are simply the result of the situation, which means that person would always choose murder in that situation. People who would respond with murder in an unacceptably wide range of situations are an edge case worth considering even without free will. Alternatively, if we want nobody to respond with murder in a crime-of-passion situation, evolutionary pressure could eventually work even without free will.
> E.g. punishment might still be justified from the point of view of reducing offending and reoffending rates, but if that is the goal then it is only justified to the extent that it actually achieves those goals, and that has emotionally difficult consequences. For example, for non-premeditated murders committed out of passion rather than e.g. gang crimes, the odds of someone carrying out another are extremely low, and the odds that the fear of a long prison sentence is an actual deterrent are generally low, and so long prison terms are hard to justify once vengeance is off the table.
That's assuming absolute certainty about what happened. Punishment may make sense as a logical argument, even if it's only useful in a subset of cases, if you can't be absolutely sure which case you're actually dealing with.
Uncertainty does a lot to align emotional heuristics and logical actions.
> In a world without free will, crimes of passion are simply the result of the situation, which means that person would always choose murder in that situation. People who would respond with murder in an unacceptably wide range of situations are an edge case worth considering even without free will.
This is a significant argument. However, it is also worth considering whether that is actually accurate, and whether such a situation will occur at all (in a case where whoever would be killed could not effectively protect themselves from it).
> That's assuming absolute certainty about what happened. Punishment may make sense as a logical argument, even if it's only useful in a subset of cases, if you can't be absolutely sure which case you're actually dealing with.
It is true that you do not have absolute certainty, but neither should you arrest someone who is not guilty.
> Uncertainty does a lot to align emotional heuristics and logical actions.
In some cases, yes, but it is not always valid. But, even if it is, this does not mean that you should not consider it logically if you are able to do so.
Whether or not you have a choice and free will, you can influence and be influenced by other stuff, since that is how everything works.
> punishment might still be justified from the point of view of reducing offending and reoffending rates, but if that is the goal then it is only justified to the extent that it actually achieves those goals
I do agree with that, and I think that whether or not you have free will is not significant. Being emotionally difficult is not what makes it good or bad in this case (and it also does not seem to be so emotionally difficult to me, anyways). Reducing reoffending rates is what is important.
(Another issue is knowing if they are actually guilty (you shouldn't arrest people who are not actually guilty of murder); this is not always certain, either.)
I also think that it should mean that prisoners should not be treated badly and that prison sentences should not be too long. (Also, they shouldn't take up too much space by the prisons, since they should have free space for natural lands and for other buildings and purposes, but that is not quite the same issue, though.)
However, there may be cases where a fine might be appropriate, in order to pay for damages (although if someone else is willing to forgive them then such a fine may not be required). This does not justify a prison sentence or stuff like that, though.
Also, some people will just not like them anymore if they are accused of murder, even if they are not put in prison and not fined. This is not the issue for police and legal things; it is just what it will be. And, if it becomes known, people who disagree with the risk assessment can try to avoid someone.
And, if someone does commit a crime again and may have the opportunity to do so again in the future, then the first assessment can be considered to have been wrong, and this time hopefully you can know better.
For example
> E.g. punishment for the sake of retribution is near impossible to morally justify if you don't believe in free will, because it means you're punishing someone for something they had no agency over.
and
> E.g. punishment might still be justified from the point of view of reducing offending and reoffending rates, but if that is the goal then it is only justified to the extent that it actually achieves those goals
are simply logical to me (even without assuming any lack of free will).
So what is emotionally difficult about this, as you claim?
However, it would seem that not everyone believes that, though.
(It is not quite as simple as it might seem, because the situation is not necessarily always that clear, but other than that, I would agree that it is logical and reasonable, that punishment is only justified from the point of view of reducing offending and reoffending rates and only if it actually achieves those goals.)
I'm saying it's emotionally difficult for people because I've had this discussion many times over the last 30+ years, and I've seen first-hand how most people I have this conversation with tend to get angry and agitated over the prospect of not having moral cover for vengeance.
I live in Germany.
When I observe the whole societal and political situation in the USA from the outside, it seems to me to be rather two blocks, each with a fair degree of internal consensus on quite a few political positions, with each of the two blocks actively fighting the other.
For Germany, on the other hand, I would claim that opinions in society consist rather of lots of very diverse stances (though, in contrast to the USA, less pronounced at the extreme ends) on a lot of topics, which makes it hard to reach a larger set of followers or a consensus in a larger group; i.e. there is in-fighting about all kinds of topics without these positions forming political camps (and the factions behind different opinions can easily change when the topic changes).
Thus, in the given example, for every person crying out on social media about "too short" sentences, you will very likely find another crying out the opposite position.
False, the punisher also has no will, so it doesn’t matter.
Since there's no free will, outcomes are determined by luck, and what matters is how lucky we can make people through pit-of-success environments. Rust makes people luckier than C++ does.
I also have much less patience for blame than I do in a world with free will. I believe, for example, that blameless postmortems lead to much better outcomes than trying to pretend people had free will to make mistakes, and therefore blaming them for those mistakes.
You can get to these positions through means other than rejection of free will, but the most robust grounds for them are fundamentally deterministic.
This is not correct. Whether or not you have free will, stuff influences and is influenced by other stuff, so these arguments are not meaningless or worthless.
> This choice was either made 13 billion years ago at the Big Bang, or it is an entirely random process.
I had thought of this before, and what I had decided is that both of these are also independent of having free will. For example, if the initial state includes unknown and uncomputable transcendental numbers which can somehow "encode" free will and then the working of physics is deterministic, then it is still possible (although not necessarily mandatory) to have free will, even though it is deterministic.
Such systems often deal with uncertainty quite well, including random noise on their inputs. The output ends up a function of both logic and randomness, but can still be useful.
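A tiny sketch of that idea in Python (made-up numbers and names): deterministic thresholding logic applied to noisy inputs still gives useful answers in aggregate.

```python
import random

def noisy_reading(true_value: float, noise: float = 0.1) -> float:
    # The input is the true value corrupted by random Gaussian noise.
    return true_value + random.gauss(0.0, noise)

def decide(value: float, threshold: float = 0.5) -> str:
    # Purely deterministic logic applied to a noisy input.
    return "act" if value > threshold else "wait"

# Over many noisy trials the deterministic rule still behaves sensibly:
decisions = [decide(noisy_reading(0.7)) for _ in range(1000)]
print(decisions.count("act") / len(decisions))  # close to 1.0 despite the noise
```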
What I'm saying is that there's no logical point to the concept "should" unless you have some concept of free will: everything that happens must happen, or is entirely random.
What do you mean by that? It still exists doesn't it? Albeit in a probabilistic sense that becomes non-probabilistic at larger scales.
I don't know much about quantum other than the high level conceptual stuff.
It's controversial, but here is the argument that the answer is "no": See https://flownet.com/ron/QM.pdf
Or if you prefer a video: https://www.youtube.com/watch?v=dEaecUuEqfc
>Under QIT, a measurement is just the propagation of a mutually entangled state to a large number of particles.
Eyeroll. So it's MWI in disguise, but MWI is quantum realism. The illusion they talk about is that the observed macroscopic state is part of the bigger superposition (incomplete observation). But that's dumb: even if it's part of a bigger state, it's still real, because it's not made up, it's observed.
That's kind of like saying that GRW is Copenhagen in disguise. It's not wrong, but only because it's making the word "disguise" do some pretty heavy lifting.
> MWI is quantum realism
No, it isn't because it can't account for the Born rule. See:
https://blog.rongarret.info/2019/07/the-trouble-with-many-wo...
Well, now I see that QIT isn't quite there. You say classical behavior emerges by tracing, mathematically, not as a physical process? In MWI classical behavior emerges as a physical process, not by tracing. That "look at part of the system (in which case you see classical behavior)" is provided by linear independence of different branches, so each observer naturally observes their branch from inside, and it looks isolated from other branches.
Huh??? No, of course not. The Born rule is about probabilities. It cannot manifest in a single measurement.
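For reference, the textbook statement of the Born rule: for a system in state |ψ⟩, the probability of obtaining measurement outcome i (with eigenstate |i⟩) is

$$ P(i) = |\langle i|\psi\rangle|^2, $$

which is a claim about relative frequencies over many repeated runs, not about any single outcome.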
> classical behavior emerges by tracing, mathematically, not as a physical process?
No. The mathematical description of classical outcomes emerges by tracing, which is to say, by discarding information. The physical interpretation of that is left open.
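A standard worked example of what tracing means here: take a Bell pair and trace out (discard) the second qubit; what remains is a classical-looking mixture with no interference terms. Whether the discarded part corresponds to anything physically real is exactly the interpretive question left open.

$$ |\psi\rangle = \tfrac{1}{\sqrt{2}}\big(|00\rangle + |11\rangle\big), \qquad \rho_A = \mathrm{Tr}_B\,|\psi\rangle\langle\psi| = \tfrac{1}{2}\big(|0\rangle\langle 0| + |1\rangle\langle 1|\big) $$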
> In MWI classical behavior emerges as a physical process
That's right. MWI commits to a physical interpretation of the math. But there is no scientific or philosophical justification for this, and in fact, when you dig into the details all kinds of problems emerge that are swept under the rug by its proponents. Nonetheless, many MWI proponents insist that it is the One True Interpretation, including some who really ought to know better.
> each observer naturally observes their branch from inside, and it looks isolated from other branches.
Yes, I know. But this doesn't solve the problem. In order to get a mathematical description of me I have to trace the wave function in my preferred basis, which is to say, I have to throw out all of the other branches. And this is not just a computational hack. It's mathematically necessary. Discarding information is the only way to get classical, irreversible processes (like measurement) out of the unitary dynamics of the wave function. So a reasonable interpretation of the math is that I exist only if parallel universes don't. And I'm pretty sure I exist.
I'm not telling you this because I expect you to accept it, merely to show you that the MWI is not self-evidently the One True Interpretation.
---
[1] https://blog.rongarret.info/2009/04/on-shadow-photons-and-re...
(Note that I wrote this 16 years ago, so not everything is 100% accurate, but I stand by the central point.)
Then you don't understand quantum mechanics at all. You should read this:
https://flownet.com/ron/QM.pdf
The TL;DR is that measurement and entanglement are the same phenomenon. A particle can become entangled with a detector even if the detector doesn't register anything.
But that is neither here nor there. Why do you get interference with no detectors? Your theory is that a detector at one slit is somehow paired with a "virtual detector" in a parallel branch at the other slit. But why would that "virtual detector" go away when the real detector is removed? Why is it never the case that there is a "virtual detector" at either slit unless there is a real detector at one of them?
When you remove the detector and start the next measurement, you start with your one branch; branches from previous measurements don't affect it. The phenomenon happens during decoherence; nothing happens after it.
Your article explains this with branching without saying the word.
Other branches can be demonstrated indirectly by 1) quantitatively verifying unitary dynamics, 2) indirectly observing branching, 3) demonstrating that other theories are wrong. Branches are just superposition; if you want to eliminate branches, you should eliminate superposition with a pilot wave or superdeterminism or something like that. This kind of unobservability isn't unique to MWI: in the general theory of relativity we observe only a part of the universe, the rest being beyond the event horizon and unobservable. Do you believe only the observable part of the universe exists and beyond it nothing exists?
Huh??? When?
> Other branches can be demonstrated
Sure, that's just QM 101. What you cannot demonstrate experimentally, not even in principle, is the existence of other branches with different macroscopic configurations than our own. Such branches are IPUs.
In the blog post you linked above:
>No, it isn't because it can't account for the Born rule. See:
>https://blog.rongarret.info/2019/07/the-trouble-with-many-wo...
>What you cannot demonstrate experimentally, not even in principle
I provided 3 ways to demonstrate it experimentally, even in principle, not sure what problem you have.
Again, huh??? Where in that blog post do I try to "get this [sic] statistics from one measurement"?
> I provided 3 ways to demonstrate it experimentally, even in principle, not sure what problem you have.
No, you didn't. You apparently don't understand what is meant by "branches with different macroscopic configurations than our own" and I don't have time to explain it to you. Sorry. Go read up on decoherence, and then come back and describe an experiment that can demonstrate the existence of a fully decohered branch. You can't, because if you could, it would by definition not be fully decohered.
In the discussion of how different people place bets on the A and B outcomes of the experiment. Well, you didn't state clearly why you believe that MWI doesn't account for the Born rule. MWI accounts for the Born rule as statistics over measurements, and the discussion of bets is the closest thing in that blog post to a consideration of measurement statistics, but that discussion seemingly considers one measurement, which is why it doesn't see statistics.
>Go read up on decoherence, and then come back and describe an experiment that can demonstrate the existence of a fully decohered branch.
It looks like a logical problem to me. You suggest that decoherence both produces and doesn't produce fully decohered branches? Violation of the law of excluded middle? If the law of excluded middle doesn't work, I don't think experiments can demonstrate anything.
That's what the whole post was about. The MWI doesn't account for the Born rule unless you add additional, questionable assumptions like branch indifference to the SE.
> You suggest that decoherence both produces and doesn't produce fully decohered branches?
No, that is not even remotely what I am saying. You are beginning to sound like a troll.
(That is apparently the definition the author of the linked article uses, guessing by his reaction: "Wait, what??? There is no 'well defined notion of how many branches there are?'")
I can only say that I have never met a proponent of MWI who meant this.
> I can only say that I have never met a proponent of MWI who meant this.
What can I say? There are a lot of MWI proponents who profess to believe this. Here, for example, is Sean Carroll answering the question, "How many parallel universes are there?"
https://www.youtube.com/watch?v=7tQiy5iCX4o
Of course, he doesn't actually give a concrete answer, but he very strongly implies that the question has an answer, i.e. that the question is a meaningful one to ask, and that implies that the MWI does in fact mean that there is a discrete number of clearly separated worlds.
In fact, I challenge you find a single example of a prominent MWI proponent saying something in public (which is to say, in a public forum or a publication whose target audience is the general public) that even implies that the many-worlds of the MWI are not distinct, countable entities. I only know of one example, and it is very well hidden.
There is a more fundamental problem: if the MWI does not mean "a discrete number of clearly separated worlds" then it fails as an interpretation of QM, i.e. as a solution to the measurement problem. The whole point is that measurements appear to produce discrete outcomes despite the fact that the math says that everything is one big quantum superposition. If all you have to say about this is, "Yeah, it's all one big quantum superposition" then you have failed to solve the problem. You have simply swept the hard part under the rug.
In the video, Sean Carroll talks to a non-expert audience, so he must simplify some things, and then it is your guess or mine what the unsimplified version was supposed to be. He says something like: "we don't know, even whether it is finite or infinite, but if it is finite it is a very large number such as 10^10^123". But notice that he also uses as an analogy an interval from 0 to 1, which can be split in half as many times as you need.
You see this as him believing in discrete separated universes, of which there is a definite number (potentially infinite). Yes, that makes sense.
I see another possible understanding: that he is talking about "meaningfully different" universes, because that is what we care about on the macro level. To explain what I mean, imagine that we observe two particles. Either of them can be in a huge number of possible positions, moving in a huge number of possible directions, at a huge number of possible speeds. But if we ask whether those two particles hit each other and transformed into another particle, that kinda collapses this huge possibility space into a "yes / no" question. Out of practical infinity, two meaningfully different options.
On a macro level, either the cat is alive or it is dead. Those are two meaningfully different states. If we focus on one particle in the cat's body, there is a continuum of where precisely that particle could be, and what momentum it has. So from the particle's perspective, there is a continuum of options. But from the cat's perspective, and the cat's owner's perspective, this continuum does not matter; unless it changes the macro state, i.e. the particle kills the cat, or at least maybe hits its neuron and makes it do something differently. So it seems possible to me that Sean Carroll talks about the number of worlds that are different from human perspective.
Then there is another problem in physics that we don't know how/whether the very space and time are quantized. We use the mathematical abstraction of a "real number" that has an infinite number of digits after the decimal dot, but of course that infinite number of digits can never be observed experimentally. We don't know. Maybe it is something like what Wolfram says, that on a deep level, spacetime is a discrete graph evolving according to some rules. If something like that would be the case, that would reduce the possible number of states in the universe, even on the micro level, to a huge but finite number. And the mixed state of the multiverse would consist of this finite number of branches, each of them assigned a tiny complex amplitude. So that's another way how things could get finite.
And I am saying this just as a random guy who never studied these things, I just sometimes read something on the topic, and some ideas feel to me like obvious consequences of the stuff that is "in the water supply". So I believe that if I see a solution to a problem, then if it makes sense, someone like Sean Carroll is 10000x more likely to notice the problem and the solution, and develop it much further than I ever could. Or when you make a survey, and a half or a third of people who study quantum physics for living say that some version of MWI seems like the correct interpretation to them, I don't believe there is a simple devastating argument against it that all of these people have simply missed.
OK, well, let me tell you as a non-random guy who has studied these things extensively that the MWI is very commonly misrepresented. It is not a case of simplification for a lay audience, it is flat-out lying, at least most of the time. The math does not say that there are parallel universes. All the math tells you is that in order to recover the results of experiments you have to throw away some of the information contained in the wave function. MWI proponents interpret this by saying that the discarded information has to correspond to something real, and they call that thing "parallel universes". But there are three problems with this. First, the MWI does not explain the Born rule. Second, the math doesn't tell you whether or not the discarded parts of the wave function describe something real. It is possible that the mathematical operation of discarding parts of the wave function actually corresponds to a real physical phenomenon, i.e. that whatever is described by the discarded parts of the wave function actually ceases to exist. This is a tenable scientific hypothesis. It's not easy to actually make it work, but it can be done and has been done. It's called GRW collapse [1]. So anyone who tells you that the MWI is the only possible scientifically tenable interpretation of QM is lying. And anyone who leaves open even the possibility that the "parallel universes" contained in the wave function are discrete is also lying. The only MWI proponent I've ever seen being intellectually honest about this is David Deutsch, in his book "The Beginning of Infinity", chapter 11.
The third problem with the MWI is something called the "preferred basis problem". This one is harder to describe succinctly, and some people claim it has been solved, but I don't agree with them. In a nutshell, all two-state QM experiments rely on some macroscopic apparatus to split a particle into a superposition of two states. But if you model the entire universe as a quantum system, this apparatus is itself a quantum system that can be in a superposition of states, so you can't say, "The polarizing beam splitter is aligned vertically or it is aligned horizontally" any more than you can say "the cat is alive or it is dead" without begging the question.
---
[1] https://en.wikipedia.org/wiki/Ghirardi%E2%80%93Rimini%E2%80%...
The only interpretation that does not have this problem is the NCI because in that interpretation I am part of the fundamental ontology, at least some of the time.
The math tells us that there are no privileged parts of the wave function.
>So anyone who tells you that the MWI is the only possible scientifically tenable interpretation of QM is lying.
Didn't you admit yourself that if MWI works it's a big deal and will kick the chair from under other interpretations?
That's true. But my senses tell me that there is a privileged part of the wave function, namely, the branch that I'm in.
The way I think about it nowadays is that QM is like a Necker cube. You can look at it in two different ways. You can take the God's-eye view and look at the entire wave function, or you can take the mortal's eye view and look at only a proper subset of the wave function (which is necessary in order to recover classical reality). But you can't do both at the same time. For my day-to-day life, I have no choice but to take the mortal's-eye view because I am a mortal. All of the things that matter to me depend on classical reality, and so depend on my suspension of disbelief and acting as if my branch of the multiverse is privileged, even if I can intellectually jump out of the system momentarily and recognize that the mortal's eye view is necessarily incomplete.
> Didn't you admit yourself that if MWI works it's a big deal and will kick the chair from under other interpretations?
That depends on what you mean by "works". If someone can derive the Born rule from the Schrodinger equation that will be a big deal, a slam-dunk Nobel prize. But no one has done it, and I'm pretty sure it can't be done. I'm pretty sure that the Born rule is an emergent property of our branch of the multiverse. I believe the same is true of the Second Law and even three-dimensional space. You can slice-and-dice the wave function to give you physical spaces with any number of dimensions, but three is the magic number that gives you atoms and stars and planets with stable orbits [1] and so on. So I'm pretty sure the Born rule can only be explained by the anthropic principle. There's probably a Nobel prize waiting for the person who turns that intuition into a theorem.
---
[1] https://physics.stackexchange.com/questions/50142/gravity-in...
You could also sense that your location is privileged, because the observable universe is neatly centered on it, but science will prioritize the Copernican principle over your senses.
>But you can't do both at the same time.
This doesn't match what you do. Tracing extracts mortal's-eye view from God's-eye view, so in God's-eye view you have both.
>depend on my suspension of disbelief and acting as if my branch of the multiverse is privileged
If your branch exists, it's sufficient for your day to day life, there's not much else to disbelieve. There's no need for it to be privileged. Do you worry that Earth isn't more privileged than Mars?
>But no one has done it, and I'm pretty sure it can't be done.
But quantum physics doesn't allow that. It's a quantitative science where all observed phenomena are computable. If they aren't computable, then quantum physics doesn't predict them and thus diverges from observation. And the Schrödinger equation is how predictions are made; collapse and measurement only act on what already exists before them and don't create anything new. So if the Born rule is an observed phenomenon, it must be computable from the Schrödinger equation. Also, if the Born rule holds with certainty, then it's a pure state, and observation won't do anything to it, so the Born rule can't be created by observation.
>There's probably a Nobel prize waiting for the person who turns that intuition into a theorem.
This was argued by Max Tegmark (https://arxiv.org/abs/gr-qc/9702052). I thought it was a famous diagram: https://en.wikipedia.org/wiki/File:Spacetime_dimensionality....
Yes, that's right. This is not an easy problem to solve. This is why it took thousands of years for mankind to realize that the earth is not at the center of the universe. The difference between that and the MWI is that there is actual evidence against geocentrism. There is no evidence against my-branch-centrism. Not only that, but the theory itself predicts that there cannot possibly be any such evidence. So the MWI is self-defeating. The only way there could be evidence for it is if it's wrong.
> in God's-eye view you have both
Nope. The mortal's-eye view is fundamentally incompatible with the god's-eye view. This is the reason that the measurement problem is a thing in the first place.
> if Born rule is an observed phenomenon, it must be computable from Schrödinger equation
Only if the SE is a complete description of reality, and it manifestly is not.
(If you want to argue that the Born rule is not "an observed phenomenon" then I don't know what to tell you. Maybe go hang out with the flat-earthers and lunar landing denialists. You may find kindred spirits there.)
> This was argued by Max Tegmark
Yes, the 3-D space part. That is old news. It's the Born Rule that (AFAIK) no one has yet derived.
One-branch interpretations are based on the geocentric prejudice that the observer's state isn't changed much by observation (because the observer doesn't feel the change), and when the observer's state doesn't change much, we get geocentrism. But the mathematics of quantum physics shows otherwise: the observer's state suffers decoherence and splits into a macroscopic superposition, which is a big change and thus debunks the assumption of an unchanged observer state. When the observer's state changes significantly, observation becomes subject to a relativity effect, just like in the case of the spinning Earth.
>The only way there could be evidence for it is if it's wrong.
And what it means when there's no such evidence?
>The mortal's-eye view is fundamentally incompatible with the god's-eye view.
But then tracing must be fundamentally unable to extract mortal's-eye view from god's-eye view. What you say doesn't match what you do.
>If you want to argue that the Born rule is not "an observed phenomenon" then I don't know what to tell you.
I argue that the Born rule is an observed phenomenon, and all observed phenomena are purely quantitative physical processes computable from the Schrödinger equation. The Born rule is the same; otherwise quantum physics wouldn't predict the observation of the Born rule.
Formally you might need measurement, but the trick is to convert the given problem into a problem of certainty; then measurement is trivial, and the prediction is completely calculated from the Schrödinger equation. Coincidentally, the Born rule is such a certain fact, so it doesn't matter whether you measure it or not; measurement doesn't do much to certain facts. It's sufficient if you only calculate this certain fact and leave it as is without measuring it.
It's not just that the observer doesn't feel change, it is that no experiment can demonstrate this change, not even in principle.
> tracing must be fundamentally unable to extract mortal's-eye view from god's-eye view
Why? Because that is manifestly not the case.
> Born rule is the same
Again, you are manifestly wrong. If someone had figured out how to derive the BR from the SE it would be Big News [1].
---
[1] https://blog.rongarret.info/2024/04/the-scientific-method-pa...
Because that is the best explanation for what I observe.
> by declaring that sense data do not reflect reality, you've cut yourself off from the possibility of knowing reality altogether
That is true, but only in the uninteresting sense that I can never completely eliminate the possibility that I am living in the Matrix. So yes, it's possible that I'm wrong about the existence of objective reality. But if objective reality is itself an illusion, it's a sufficiently compelling illusion that I'm not going to go far wrong by acting as if it were real.
That seems squishy, as what constitutes "going far wrong" is not meaningful under skeptical assumptions.
A better stance is one of cognitive optimism that avoids the irrationality of skepticism. Skepticism is irrational, because it leads to incoherence, and because there is no rational warrant to categorically doubt the senses. For doubt to be rational, there must be a reason for it. To doubt without reason is not to be rational, but to be willful, and willful beliefs cannot be reasoned with; they are not the product of evidence or inference — and they certainly aren't self-evident — but rather the product of arbitrary choice. The logical possibility of living in the Matrix is no reason for doubting the senses, just as the logical possibility of there being poison in your sandwich is no reason for doubting you'll survive eating it.
The difference between our positions is that I begin from a position of natural trust toward the senses and toward reason as the only rational possibility and default. I have no choice but to reason well or to reason poorly. I recognize that my senses and my inferences can err, but it does not follow that they always err. Indeed, the very claim that they can err presumes I can tell when they do.
So, if my inferences lead me to a position that undermines their own coherence, then I must conclude that my inferences are wrong (including those that led me to adopt a certain interpretation of, say, scientific measurements).
> Because that is the best explanation for what I observe.
But if your explanation involves contradiction of what you observe, then that is not only not the best explanation, but no explanation at all! An explanation cannot deny the thing it seeks to explain. Thus, by denying the objective reality of what you perceive, you are barred from inferring that denial.
I can be more precise about this. It means that the predictions I make on the basis of this assumption are very likely to be correct.
> Skepticism is irrational
No, it isn't. The vast majority of my beliefs about the world are not a result of direct observations, but nth-hand accounts. I believe, for example, that the orbit of Mercury precesses, but not because I've ever measured it myself, but rather because I heard it from a source that I consider credible. But assessing the credibility of a source is hard and error-prone, especially nowadays. There is always the possibility that a source is mistaken or actively trying to deceive you. And even for things you observe first-hand there are all kinds of cognitive biases you have to take into account. So skepticism is warranted.
> I begin from a position of natural trust toward the senses
That will lead you astray because your senses are unreliable.
> if your explanation involves contradiction of what you observe
But it doesn't. At worst it involves a contradiction of what I think I observe.
Objects without free will aren’t able to come to conclusions like this.
Or for a tl;dr, look up the three-body problem or try to find a solution to a double pendulum!
https://www.lesswrong.com/s/kNANcHLNtJt5qeuSS
"Moloch hasn't won" is a lengthy critique of the argument you are making here.
Why can't that observation be taken into account? Isn't the entire point of the approach accounting for all inputs to the extent possible?
I think you are making invalid assumptions about the motivations or goals or internal state or etc of the actors which you are then conflating with the approach itself. That there are certain conditions under which the approach is not an optimal strategy does not imply that it is never competitive under any.
The observation is then that rationalism requires certain prerequisites before it can reliably outcompete other approaches. That seems reasonable enough when you consider that a fruit fly is unlikely to be able to successfully employ higher-level reasoning as a survival strategy.
Of course it can be. I'm saying that AFAICT it generally isn't.
> rationalism requires certain prerequisites before it can reliably outcompete other approaches
Yes. And one of those, IMHO, is explicit recognition that rationalism does not triumph simply because it is rational, and coming up with strategies to compensate. But the rationalist community seems too hung up on things like malicious AI and Roko's basilisk to put much effort into that.
I'm sympathetic to the idea that we know nothing because of the reproductive impulse to avoid doing or thinking about things that led our ancestors to avoid procreation, but such a conclusion can't be total, because otherwise it is self-defeating: it is contingent on rationalist assumptions about the mind's capacity to model knowledge.
Even then that might not always be the case. Sometimes there are severe time or bandwidth or energy or other constraints that preclude carefully collecting data and thinking things through. In those cases a heuristic that is very obviously not derived from any sort of critical thought process might well be the winning strategy.
There will also be cases where the answer provided by the rational approach will be to conform to some other framework. For example where cult type ingroup dynamics are involved across a large portion of the population.
Exactly right. It is not rationalism per se that is the problem, it is the way that The Rationalists are implementing it, the things they are choosing to focus their attention on. They are worried about things like hostile AI and Roko's Basilisk when what they should be worried about is MAGA, because that is not being driven by rationalism, it is being driven by Christian nationalism. MAGA is busily (and openly!) undermining every last hint of rationalism in the U.S. government, but the Rationalist community seems oddly unconcerned with this. Many self-styled Rationalists are even Trump supporters.
The fact that you can write this sentence, consider it to be true, and yet still hold in your head the idea that the future might be bad but it's still important to have children suggests that "contact with reality" is not a curse.
If gender equality and intellectual achievement don't produce children, then that isn't "darwinism selecting rationality out". You can't expect the continued existence of finite lifespan organisms if there are no replacement organisms. Raising children is hard work. The people who believe in gender equality and intellectual achievement made the decision to not want more of themselves, particularly when their belief in gender equality entails not wanting male offspring. The alternative is essentially freeloading and expecting others, who do not share the beliefs, to produce children for you and also to teach them the "enlightened" belief of forcing "enlightened" beliefs onto others (note the circularity, the initial conditions are usually irrelevant and often just a fig leaf to perpetuate the status quo).
I never said there was. Darwin said it because he didn't know anything about genes, but that mistake was corrected by Dawkins.
> If gender equality and intellectual achievement don't produce children, then that isn't "darwinism selecting rationality out".
Why not?
> The people who believe in gender equality and intellectual achievement made the decision to not want more of themselves
That's the wrong way to look at it. Individuals are not the unit of reproduction. Genes are. Genes that build brains that want to have (and raise) children are more likely to propagate than genes that build brains that don't, all else being equal. So it is not rationality per se that is the problem -- rationality can provide a reproductive advantage because it lets you, for example, build technology. The problem is that non-rational brains can parasitically benefit from the phenotype of rational brains, at least for a while. But in the long run this is not a stable equilibrium.
I'm sure you would be able to predict what a rationalist will say when you ask them what future they prefer: one where we maximize the number of humans, or one with fewer humans but better lives.
That depends on what you mean by "follow". You have to "follow" Darwinian evolution for the same reason you have to "follow", say, the law of gravity. That doesn't mean you can't build airplanes and spacecraft, but you still have to acknowledge and "follow" the law of gravity. You can't just glue feathers to your arms, jump off a cliff, and hope for the best. (Actually, rationalists aren't even gluing feathers to their arms. They are doing the equivalent of jumping off a cliff because they just don't believe gravity applies to them.)
[UPDATE]
> unnatural sex is wrong
The problem with that argument is that homosexuality is not unnatural. Many, many species have homosexual relations. Accounting for this is a little bit challenging, but the fact is undeniable.
https://en.wikipedia.org/wiki/Homosexual_behavior_in_animals
A counterexample is meiotic drive, where alleles disrupt the meiotic process in order to favour their own transmission, even if the alleles in question ultimately produce a less fit organism.
Whilst this is not an inherently positive observation, I think it does illustrate that the fatalistic picture you're painting here is incorrect. There's room for tentative optimism.
That is not correct. Darwin did make a mistake, but it was not in the fundamental dynamics of the process; it was that he chose the wrong unit of selection. Darwin thought that selection selected for individuals or species when in fact it selects for genes. Richard Dawkins is the person who figured this out, but Darwin knew nothing about genes (the Origin of Species was published before Gregor Mendel's work became known), so he still gets the credit notwithstanding this mistake.
Of course. Variation is the other part.
> gene flow or genetic drift
Those are two mechanisms by which variation occurs. This is not something Darwin got wrong, he just didn't have all the data. Genes were unknown in Darwin's time.
I didn't say it was. I said that the Rationalist community is not taking the implications of Darwinism into account when they choose where to focus their attention. This is what leads them to fixate on hostile AI and the MWI when what they should be worried about is the rise of MAGA. But not only is that not what they are worried about, many self-styled Rationalists are Trump supporters.
The fact that humans are intelligent at all and Enlightenment peoples currently dominate the world suggests otherwise.
Huh??? How do you figure? AFAICT the world is dominated by Donald Trump, Xi Jinping, and Vladimir Putin (if you reckon by power) or Christians and Muslims (if you reckon by population). None of these individuals or groups can be properly categorized as "Enlightenment peoples", certainly not with a capital E.
I guess that depends on what you consider "the world". It makes no sense to even talk about the West dominating "the world" before 1492. The first truly global Western empire was Britain, but it was also the last. It was replaced by the U.S. but it was never really global. Even at the height of its power after WW2 the USSR was a credible rival. After the fall of the USSR in 1991 the U.S. was the sole undisputed superpower for a little while, but that came to an abrupt end on September 11, 2001 and the subsequent wars in Afghanistan and Iraq.
I think you are over-extrapolating the past into the future. The mindset and culture that produced U.S. hegemony in the 20th century seems to me to be mostly extinct. The U.S. was indeed ruled by rationalism (more or less) from the time of its founding through the mid-to-late 20th century, but there is precious little of that left today. Certainly the power structure in the U.S. today is actively hostile to rationalism, and I don't see a whole lot of rationalism in play in the opposition either.
I'm not sure I entirely understand what you're arguing here, but I absolutely do agree that the most powerful force in the universe is natural selection.
The term "survival of the fittest" predates Darwin's Origin of Species, and was adopted by Darwin within his lifetime, btw.
You should not interpret that historical success to imply future success as it depended on non-sustainable groundwater extraction.
Eg, https://en.wikipedia.org/wiki/Ogallala_Aquifer
> Many farmers in the Texas High Plains, which rely particularly on groundwater, are now turning away from irrigated agriculture as pumping costs have risen and as they have become aware of the hazards of overpumping.
> Sixty years of intensive farming using huge center-pivot irrigators has emptied parts of the High Plains Aquifer.
> as the water consumption efficiency of the center-pivot irrigator improved over the years, farmers chose to plant more intensively, irrigate more land, and grow thirstier crops rather than reduce water consumption--an example of the Jevons Paradox in practice
How will the Great Plains farmers get water once the remaining groundwater is too expensive to extract?
Salt Lake City cannot simply build desalination plants to fix its water problem.
I expect the bad experiences of Okies during the internal migration of the Dust Bowl will be replicated once the temporary (albeit century-long) relief of using fossil water is exhausted.
I think you only need to look at the water politics of the Great Salt Lake to see the difficulty.
Look at how little water use has changed during the last 25 years of the southwestern North American megadrought.
The US policy appears to be to pray for rain.
Incidentally, the flaw in this theory is in thinking you understand what all the existential risks are.
Suppose you clock "malicious AI" as a huge risk and then hamper AI, but it turns out the bigger risk is not doing space exploration, which AI would have accelerated, because something catastrophic yet already-inevitable is going to happen to the Earth in a few hundred years and if we're not sustainably multi-planetary by then it's all over.
The thing evolution teaches us is that diversity is a group survival trait. Anybody insisting "nobody anywhere should do X" is more likely to cause an ELE than prevent one.
The Rationalist community understands that very well. They even know how to put bounds on the unknowns and their own lack of information.
> The thing evolution teaches us is that diversity is a group survival trait. Anybody insisting "nobody anywhere should do X" is more likely to cause an ELE than prevent one.
Right. Good thing they'd agree with you 100% on this.
No they don't. They think they can do this because they've accidentally reinvented the philosophy "logical positivism", which philosophers gave up on because it doesn't work. (This is similar to how they accidentally reinvented reconstructing arguments and called it "steelmanning".)
What's the probability of AI singularity? It has never happened before so you have no priors and any number you assign will be pure speculation.
Most of the time we make predictions based on how similar events happened in the past. For completely novel situations it's close to impossible to make a prediction and reckless to base policy on such a prediction.
But that kind of prediction was necessary at the beginning of flight, and the flight to the moon would never have been possible without a few talented people being able to make predictions about scenarios they knew little about.
There are just way too many people around nowadays, which is why most of us never get confronted with such novel topics, and consequently we don't know how to reason about them.
> Same is true about anything you're trying to forecast, by definition of it being in the future
There might be some flaws in this line of reasoning...
The singularity obviously has never happened before, and if anyone bothered to read up on what they're talking about, they'd realize that no one is trying to predict what happens then, because the singularity is defined as the point at which change accelerates to such a degree that we have no baseline for making any predictions whatsoever.
So when people speculate on when that is, they're trying to forecast the point forecasting breaks; they do it by extrapolating from known examples and trends, to which we do have baselines.
Or, in short: we know how it is to ride an exponent, we just never rode one long enough to fall off of it; predicting singularity is predicting when the exponent gets steep enough we can't follow, which is not unlike predicting any other trend people do. Same methods and caveats apply.
>And yet people have figured out how to make predictions more narrow than shrugging
And?
There are others, such as the unproven, narcissistic and frankly unlikely-to-be-true assumption that humanity continuing to exist is a net positive in the long run.
This is effectively a religious belief you are espousing.
They also secondarily believe everyone has an IQ which is their DBZ power level; they believe anything they see that has math in it, and IQ is math, so they believe anything they see about IQ. So if you avoid trying to find out your own IQ you can just believe it's really high and then you're good.
Unfortunately this led them to the conclusion that computers have more IQ than them and so would automatically win any intellectual DBZ laser beam fight against them / enslave them / take over the world.
An actual argument would be that intelligence doesn't work like that. Two people with IQ 100 cooperating together does not produce an IQ 200 solution.
There is the "wisdom of crowds". If a random member of a group is more than 50% likely to be correct, the average of the group is more likely to be correct than its members individually. But that has a few assumptions, for example that each member tries to figure out things independently (as opposed to everyone waiting for the highest-status member to express their opinion, and then agreeing with it -- in that case the entire group is only as smart as the highest-status member).
But you cannot leverage this to simply invite 1000 random people in your group and ask them to invent a Theory of Everything; because the assumption that each member is more than 50% likely to be correct does not apply in this case. So that is one of the limits of people working together.
(And this already conveniently ignores many other problems found in real life, such as conflict of interests, etc.)
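For readers who want to see the arithmetic, here is a minimal sketch of that claim (essentially the Condorcet jury theorem) in Python, assuming a yes/no question and members who vote independently; the accuracy figures are illustrative assumptions, not measurements:

    import random

    def majority_correct_rate(n_members, p_correct, trials=100_000):
        """Estimate how often a simple majority vote is right when each
        member is independently correct with probability p_correct."""
        wins = 0
        for _ in range(trials):
            correct_votes = sum(random.random() < p_correct for _ in range(n_members))
            if correct_votes > n_members / 2:
                wins += 1
        return wins / trials

    # Members slightly better than a coin flip: the group beats any individual.
    print(majority_correct_rate(101, 0.55))  # ~0.84 vs 0.55 for one member

    # Members slightly worse than a coin flip: the group does worse, which is
    # the "1000 random people inventing a Theory of Everything" failure mode.
    print(majority_correct_rate(101, 0.45))  # ~0.16

The independence assumption is exactly what the "waiting for the highest-status member" caveat breaks: correlated votes give you far less than this.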
You don't have to agree with any of this. I am not defending every idea the author has. But I recommend that book.
I get the feeling these people often want to seem smarter than they are, regardless of how smart they are. And they want to get money to ostensibly "consider these issues", but really they want money for nothing.
If they wanted to do right by the future masses, they should be looking to the things that are affecting us right now. But they treat those issues as if they'll work out in the wash.
The sums currently invested and donated to altruist causes are themselves rounding errors compared to national GDPs, so the revealed preference of those investing and donating to altruist causes is to care about both the future and the present.
Are you saying that they should give a greater preference to help those who already exist rather than those who may exist in the future?
I see a lot of Peter Singer’s ideas in modern “effective” altruism, but I get the sense from your comment that you don’t think that they have good reasons for doing what they do, or that their reason leads them to support well-meaning but ineffective solutions. I am trying to understand your position without misrepresenting your point or goals. Are you naysaying or do you have an alternative?
If they wanted to help, they should be focused on the now. Global poverty, climate change, despotic world leaders. They should be aligning themselves against such things.
But instead what we see is essentially not that. Effective altruism is a lot like the Democratic People's Republic of Korea, a bit of a misnomer.
A lot of them argue that poor countries essentially don't matter, that climate change is not an extinction event, and that there should be an authoritarian world government to prevent nuclear conflict and minimize the risk of nuclear extinction.
>In his dissertation On the Overwhelming Importance of Shaping the Far Future (2013), supposedly “one of the best texts on existential risks,”[9] Nicholas Beckstead meditates on the “ripple effects” a human life might have for future generations and concludes “that saving a life in a rich country is substantially more important than saving a life in a poor country” due to the higher level of innovation and economic productivity attained in these countries.[10]
https://umbau.hfg-karlsruhe.de/posts/philosophy-against-the-...
To be pedantic, DPRK is run via the will of the people to a degree comparable to any country. A bigger misnomer is the west calling liberal “democracy”, just democracy.
The elite letting the people choose between a few candidates is not a democracy. There are no “democratic” countries in that way.
It's not arguable, it's simply wrong. But to understand that you would have to understand much more about the DPRK.
> The elite letting the people choose between a few candidates is not a democracy.
It is, but more importantly, your framing is wrong. Every democracy has several levels of democratic institutions (local, state, federal etc.), there are often 'surprise' election winners; there are also non-elective ways for non-elite people to influence policy. DPRK has none of this.
I have never come across someone like you who knows about the DPRK beyond what Western state departments say, and of course such people swallow it all up. But people do sometimes lie and claim to know a great deal, as you are doing right now.
Ba da Ching
Don’t forget — the enemies of the west are all bad and evil. And the west is free and fair and democratic and not a scourge on the rest of the world.
there's something pathologically, virally sick about wealth accumulation's primary function being the further accumulation of wealth. for a movement rooted in "rationalism," EA seems pretty irrationally focused on excusing antisocial behavior.
Then we are lucky that EA promotes giving more to charity as the primary function of accumulation of wealth.
> EA seems pretty irrationally focused on excusing antisocial behavior.
Is a guy getting a well-paid job at Microsoft and donating half of his salary to African charities really your best example of antisocial behavior?
“The purpose of a system is what it does.”
no, we are not lucky. EA-good-because-charity-good is a brain-pretzel way of lobbying against equitable taxation.
> Is a guy getting a well-paid job at Microsoft and donating half of his salary to African charities really your best example of antisocial behavior?
you're inventing a competitive debate regarding a hypothetical "best example of antisocial behavior". i didn't target anyone specifically with any part of my post.
This is the logic of someone who has failed to comprehend the core ideas of Calculus 101. You cannot use intuitive reasoning when it comes to infinite sums of numbers with extremely large uncertainties. All that results is making a fool out of yourself.
They use technical terms (eg expected value, KL divergence) in their verbal reasoning only to sound rational, but don’t ever mean to use those terms technically.
It then moved into "you should work a soulless investment banking job so you can give more".
More recently it was "you should excise all expensive fun things from your life, and give 100% of your disposable income to a weird poly sex cult and/or their fraudulent paper hedge fund because they're smarter than you."
https://forum.effectivealtruism.org/posts/f6kg8T2Lp6rDqxWwG/...
Meanwhile, on an actual EA website: https://www.givewell.org/charities/top-charities
* Medicine to prevent malaria
* Nets to prevent malaria
* Supplements to prevent vitamin A deficiency
* Cash incentives for routine childhood vaccines
It's important to know which type of EA organization you are supporting before you donate, because the movement includes all three.
(And I assume that GiveWell top charities receive orders of magnitude more money, but I haven't actually checked the numbers.)
In any case, EA smells strongly of “the ends justify the means” which most popular moral philosophies reject with strong arguments. One which resonates with me is that there are no “ends.” The path itself is the goal.
This is a false statement. Our entire modern world is built on the principle that the ends justify the means: every time money is spent on long-term infrastructure rather than feeding poor kids right now, every time a war is fought, every time a doctor triages injuries at a disaster.
I'm sure exactly what you described was done plenty of times in ww1 and similar around that era, and seen as perfectly moral and rational.
Winning at things that align with your principle is a principle. If you don't care about principles, you don't care about what you're winning at, thereby making every victory hollow and meaningless. That is how you turn into a loser at everything you do.
How does this apply to actual charity? "Curing malaria is not the goal. Our experiences during voluntourism are the true goal."
Of course it sounds ridiculous when you spell it out this way.
Of course the way your comment is written makes criticism sound silly.
I quickly lost interest in Roko's Basilisk, but that is what brought me in the door and started me looking around the discussions. At first, it was quite seductive. There was a strange fearlessness there, a willingness to say and admit some things about humanity, our limitations, and how we tend to think that other great thinkers maybe danced around in the past. After a while it became clear that while there were a select few individuals who had found some balance between purely rational thinking and how reality actually works, most of the rest had their heads so far up their asses that they'd fart and call it a cool breeze. Reminded me of my brief obsession with Game Theory and realizing that even its creators knew its utility was not quite as advertised to the layman (as in it would not really help you predict or plan for anything at all, just model how decisions might be made).
Physics postgrads: "gauge"
Physics undergrads: "wavefunction"
Grade schoolers: "temperature"
These concepts are definitely useful for the homework sets; no understanding needed (or expected!)
IMO it's fine to pick a favorite and devote extra resources to it. But that turns less fine when one also starts working to deprive everything else of any oxygen because it's not your favorite. (And I'm aware that this criticism applies to lots of communities.)
Even if an individual person chooses to direct all their donations to a single cause, there's no way to get everyone to donate to a single cause (nor is EA attempting to). Money gets spread around because people have different values.
It absolutely does take some money away from other causes, but only in the sense that all charities do: if you give a lot to one charity, you may have less money to give to others.
If you assume we eventually figure out long distance space travel and humanity spreads across the galaxy, there could in the future be quadrillions of people, growing at some kind of exponential rate. So accelerating the space race by even an hour is equivalent to bringing billions of new souls into existence.
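To make the kind of arithmetic behind that argument explicit, here is a back-of-the-envelope sketch; every number in it is an assumption chosen only to illustrate the style of reasoning, not a forecast:

    # Illustrative only: the longtermist "accelerate by an hour" arithmetic.
    future_population = 1e15        # assume quadrillions of future people
    growth_rate_per_year = 0.01     # assume 1% net growth per year at that scale

    people_added_per_year = future_population * growth_rate_per_year
    people_added_per_hour = people_added_per_year / (365 * 24)

    print(f"{people_added_per_hour:.2e} additional people per hour of acceleration")
    # ~1.1e+09, i.e. the "billions of new souls per hour" figure; the result is
    # driven entirely by the two assumed inputs, which is the point critics make.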
Perhaps you're arguing as an illustration of the way this group of people think, in which case I understand your point.
It encodes a slight bias towards human existence being a positive thing for us humans, but I don't think it's the shakiest part of that reasoning.
It really annoys me when people say that those religious cultists do that.
They derive their bullshit from faulty, poorly thought out premises.
If you fuck up the very first calculations of the algorithm, it doesn't matter how rigorous all the subsequent steps are. The results are going to be all wrong.
(1) The kind of Gatesian solutions they like to fund like mosquito nets are part of the problem, not part of the solution as I see it. If things are going to get better in Africa, it will be because Africans grow their economy and pay taxes and their governments can provide the services that they want. Expecting NGOs to do everything for them is the same kind of neoliberal thinking that has rotted state capacity in the core and set us up for a political crisis.
(2) It is one thing to do something wrong, realize it was a mistake, and then make amends. It's another thing to plan to do something wrong and to try to offset it somehow. Many of the high-paying jobs that EA wants young people to enter are "part of the problem" when it comes to declining state capacity, legitimation crisis, and not dealing with immediate problems -- like the fact that one of these days there's going to be a heat wave that is a mass casualty event.
Furthermore
(3) Time discounting is a central part of economic planning
https://en.wikipedia.org/wiki/Social_discount_rate
It is controversial as hell, but one of the many things the Soviet Union got wrong before the 1980s was planning with a discount rate of zero, which led to many economically and ecologically harmful projects (a quick sketch of how sensitive long-horizon valuations are to the chosen rate follows after point (5) below). If you seriously think it should be zero you should also be considering whether anybody should work in the finance industry at all, or whether we should have dropped a hydrogen bomb on Exxon's headquarters yesterday. At some point speculations about the future are just speculation. When it comes to the nuclear waste issue, for instance, I don't think we have any idea what state people are going to be in 20,000 years. They might be really pissed that we buried spent nuclear fuel someplace they can't get at it. Even the plan to burn plutonium completely in fast breeder reactors has an air of unreality about it; even though it happens on a relatively short 1,000-year timescale, we can't be sure at all that anyone will be around to finish the job.
(4) If you are looking for low-probability events to worry about I think you could find a lot of them. If it was really a movement of free thinkers they'd be concerned about 4,000 horsemen of the apocalypse, not the 4 or so that they are allowed to talk about -- but talk about a bunch of people who'll cancel you if you "think different". Somehow climate change and legitimation crisis just get... ignored.
(5) Although it is run by people who say they are militant atheists, the movement has all the trappings of a religion, not least that "The Singularity" was talked about by Jesuit priest Teilhard de Chardin long before sci-fi writer Vernor Vinge used it as the hinge of a mystery novel.
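As promised under point (3), here is a small sketch of how strongly exponential discounting shapes long-horizon decisions; the amounts and rates are made up for illustration:

    # Illustrative only: present value of a fixed future benefit under
    # different social discount rates. PV = benefit / (1 + rate)**years.
    def present_value(benefit, years, rate):
        return benefit / (1 + rate) ** years

    benefit = 1_000_000  # a benefit worth $1M when it arrives
    for years in (10, 100, 1000):
        for rate in (0.0, 0.02, 0.05):
            pv = present_value(benefit, years, rate)
            print(f"{years:>5} years at {rate:4.0%}: ${pv:,.2f}")

    # At a 0% rate the far future counts exactly as much as today, which is how
    # a zero discount rate can justify almost any long-horizon project; at even
    # a few percent, benefits 1,000 years out are worth essentially nothing now.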
Nuclear waste issues are 99.9% present-day political/ideological. Huge portions of the Earth are uninhabitable due to climate and/or geology. Lead, mercury, arsenic, and other naturally-occurring poisons contaminate large areas. Volcanoes spew CO2 and toxic gasses by the megaton.
Vs. when is the last time you heard someone get excited over toxic waste left behind by the Roman Empire?
Also, PaulHoule's original comment said "in 20,000 years". Cobalt 60 (for example) has a half-life of 5 1/4 years - so there really won't be any of it left by then.
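A quick check of that half-life arithmetic, using the commonly cited ~5.27-year figure for Co-60 ("5 1/4" is the same number rounded):

    half_life_years = 5.27
    elapsed_years = 20_000

    half_lives = elapsed_years / half_life_years   # ~3,795 halvings
    print(f"{half_lives:.0f} half-lives elapse in {elapsed_years} years")
    print(f"fraction remaining = 2**-{half_lives:.0f} (about 10**-{half_lives * 0.30103:.0f})")
    # That is incomparably smaller than one atom out of any conceivable stockpile,
    # so short-lived isotopes like Co-60 simply don't figure on that timescale.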
No one is talking about stuffing cobalt 60 in yucca mountain (at least as far as I know).
And the tech to detect that you're digging into radioactive stuff is far simpler than the tech to detect that you're digging into some sort of chemical waste, or a failing old mine or tunnel.
If millennia-in-the-future humans care all that much about what we did with our nuclear waste, it'll either be political/ideological, or (as PaulHoule suggested) just one more "they didn't leave it somewhere really convenient for us" deal.
For archeologists, pretty much every time.
The difficulty is in deriving any useful utility function from prices (even via preferences :), and as you know, econs can't rid themselves of that particular intrusive thought
https://mitsloan.mit.edu/sites/default/files/inline-files/So...
E: know any econs taking Habermas seriously ? Not a rhetorical q:
[1] Though you might come to the conclusion that greedier people should have the money because they like it more
(Aside from the semi-tragic one to consider additive dilogarithms..)
One actionable (utility-agnostic) suggestion: study the measurable consequences of (quantifiable) policy on carbon pricing, because this is already quite close to the uncontroversial bits
E: by "uncontroversial", I meant amongst the Orthodox econs, so not Graeber & sympathetic heterodox.
Similarly, Big Bang was talked about by Catholic priest Georges Lemaître, and Bayes' Theorem was invented by Presbyterian minister Thomas Bayes. Does that prove anything beyond the fact that there are many smart religious people?
The problem with this argument is that the path to achieve this is unclear, and everyone who has tried has failed. In the absence of a clear path, it seems rational to set aside lofty ideals and do whatever good you can now.
> If you are looking for low-probability events to worry about I think you could find a lot of them.
Name one that hasn't already been considered that would be a serious threat to modern technological civilization.
> Although it is run by people who say they are militant atheists, the movement has all the trappings of a religion
Supposing you're correct this thought is incomplete. Take it to its logical conclusion: a religion centered around open rational debate is bad because...?
They don't even do this.
If you're reasoning in a purely logical and deductive way, it's blatantly obvious that living beings experience way more pain and suffering than pleasure and joy. If you do the math, humanity getting wiped out is in effect the best thing that could happen.
Which is why accelerationism ignoring all the AGI risks is correct strategy presuming the AGI will either wipe us out (good outcome) or provide technologies that improve the human condition and reduce suffering (good outcome).
Logical and deductive reasoning based on completely baseless and obviously incorrect premises is flat out idiotic.
You can't deprive non-existent people out of anything.
And if you do, I hope you're ready for purely logical, deductive follow up - every droplet of sperm is sacred and should be used to impregnate.
Most of the criticisms are just "But they think they are better than us!" and the rest are "But sometimes they are wrong!"
I don't know about the community and couldn't care less, but their writings have brought me some almost life-saving fresh air in how to think about the world. It is very sad to me to read so many falsely elaborate responses from supposedly intelligent people whose egos have been hurt, but in the end it reminds me why I like rationalists and don't like most people.
Being able to do that is pretty much "entry level cognition" for a lot of us. You should be doing that yourself and doing it all the time if you want to play with the big kids.
One of the things I really miss about the old nerds-only programmer's pit setup was the amount of room we had for instruction, especially regarding social issues. The scenes from the college department in Wargames were really on the nose, but highlight a form of education that was unavoidable if you couldn't just dip out of a conversation.
Accused of being unable to reexamine your base principles, you respond:
> rationalists, who are possibly the only meaningful social group in existence to celebrate changing their minds in response to new evidence
Which is exactly the kind of base principle that could use some reexamination.
Garak: Especially when they're neither.
Extra points for that comment's author implying that people who don't like the wrong and smug movement are unintelligent and protecting their egos, thus personally proving its smugness
As for smugness, it is subjective. Are those people smug? Or are they talking passionately about some issue with the confidence of someone who feels what they are talking about and expects it to resonate? It's in the eye of the beholder, I guess.
For example, what you call my smugness is what I would call a slightly depressed attitude, fueled by the fact that it's sometimes hard to relate to other people's feelings and behavior.
Humans are generally better at perceiving threats than they are at putting those threats into words. When something seems "dangerous" abstractly, they will come up with words for why---but those words don't necessarily reflect the actual threat, because the threat might be hard to describe. Nevertheless the valence of their response reflects their actual emotion on the subject.
In this case: the rationalist philosophy basically creeps people out. There is something "insidious" about it. And this is not a delusion on the part of the people judging them: it really does threaten them, and likely for good reason. The explanation is something like "we extrapolate from the way that rationalists think and realize that their philosophy leads to dangerous conclusions." Some of these conclusions have already been made by the rationalists---like valuing people far away abstractly over people next door, by trying to quantify suffering and altruism like a math problem (or to place moral weight on animals over humans, or people in the future over people today). Other conclusions are just implied, waiting to be made later. But the human mind detects them anyway as implications of the way of thinking, and reacts accordingly: thinking like this is dangerous and should be argued against.
This extrapolation is hard to put into words, so everyone who tries to express their discomfort misses the target somewhat, and then, if you are the sort of person who only takes things literally, it sounds like they are all just attacking someone out of judgment or bitterness or something instead of for real reasons. But I can't emphasize this enough: their emotions are real, they're just failing to put them into words effectively. It's a skill issue. You will understand what's happening better if you understand that this is what's going on and then try to take their emotions seriously even if they are not communicating them very well.
So that's what's going on here. But I think I can also do a decent job of describing the actual problem that people have with the rationalist mindset. It's something like this:
Humans have an innate moral intuition that "personal" morality, the kind that takes care of themselves and their family and friends and community, is supposed to be sacrosanct: people are supposed to both practice it and protect the necessity of practicing it. We simply can't trust the world to be a safe place if people don't think of looking out for the people around them as a fundamental moral duty. And once those people are safe, protecting more people, such as a tribe or a nation or all of humanity or all of the planet, becomes permissible.
Sometimes people don't or can't practice this protection for various reasons, and that's morally fine, because it's a local problem that can be solved locally. But it's very insidious to turn around and justify not practicing it as a better way to live: "actually it's better not to behave morally; it's better to allocate resources to people far away; it's better to dedicate ourselves to fighting nebulous threats like AI safety or other X-risks instead of our neighbors; or, it's better to protect animals than people, because there are more of them". It's fine to work on important far-away problems once local problems are solved, if that's what you want. But it can't take priority, regardless of how the math works out. To work on global numbers-game problems instead of local problems, and to justify that with arguments, and to try to convince other people to also do that---that's dangerous as hell. It proves too much: it argues that humans at large ought to dismantle their personal moralities in favor of processing the world like a paperclip-maximizing robot. And that is exactly as dangerous as a paperclip-maximizing robot is. Just at a slower timescale.
(No surprise that this movement is popular among social outcasts, for whom local morality is going to feel less important, and (I suspect) autistic people, who probably experience less direct moral empathy for the people around them, as well as to the economically-insulated well-to-do tech-nerd types who are less likely to be directly exposed to suffering in their immediate communities.)
Ironically paperclip-maximizing-robots are exactly the thing that the rationalists are so worried about. They are a group of people who missed, and then disavowed, and now advocate disavowing, this "personal" morality, and unsurprisingly they view the world in a lens that doesn't include it, which means mostly being worried about problems of the same sort. But it provokes a strong negative reaction from everyone who thinks about the world in terms of that personal duty to safety, because that is the foundation of all morality, and is utterly essential to preserve, because it makes sure that whatever else you are doing doesn't go awry.
(edit: let me add that your aversion to the criticisms of rationalists is not unreasonable either. Given that you're parsing the criticisms as unreasonable, which they likely are (because of the skill issue), what you're seeing is a movement with value that seems to be being unfairly attacked. And you're right, the value is actually there! But the ultimate goal here is a synthesis: to get the value of the rationalist movement but to synthesize it with the recognition of the red flags that it sets off. Ignoring either side, the value or the critique, is ultimately counterproductive: the right goal is to synthesize both into a productive middle ground. (This is the arc of philosophy; it's what philosophy is. Not re-reading Plato.) The rationalists are probably morally correct in being motivated to highly-scaling actions e.g. the purview of "Effective Altruism". They are getting attacked for what they're discarding to do that, not for caring about it in the first place.)
There is something about a particular "narrowband" signaling approach, where a certain kind of purity is sought, with an expectation that, given enough explaining, you will finally get it, become enlightened, and convert to the ranks. A more "wideband" approach would at least admit observations like yours do exist and must be comprehensively addressed to the satisfaction of those who hold such beliefs vs to the satisfaction of those merely "stooping" to address them (again in the hopes they'll just see the light so everyone can get back to narrowband-ville).
edit: oh, also, I think that a good part of people's aversion to the rationalists is just a reaction to the narrowband quality itself, not to the content. People are well-aware of the sorts of things that narrowband self-justifying philosophies lead to, from countless examples, whether it's at the personal level (an unaccountable schoolteacher) or societal (a genocidal movement). We don't trust a group unless they specifically demonstrate non-narrowbandedness, which means being collectively willing to change their behavior in ways that don't make sense to them. Any movement that co-opts the idea of what is morally justifiable---who says that e.g. rationality is what produces truth and things that run counter to it do not---is inherently frightening.
Any group that focuses on their own goals of high paying jobs regardless of the morality of those jobs or how they contribute to the structural issues of society is not that good. Then donating money while otherwise being okay with the status quo —- not touching anything systemic in such an unjust world but supposedly focusing on morality is laughable.
In their defense they do try to do the calculations: is a high-paying job okay on net if you give most of the money away? depends on the job; depends on if somebody else would have it if you didn't; etc. Not that there is a rigorous way to do them but it's very much a group that does try.
I had not read any rationalist writing in a long time (and I didn't know about Scott's proximity), but the whole time I was reading the article I was thinking the same thing you just wrote... "why are they afraid of AI, i.e. the ultimate rationalist taking over the world"; maybe something deep inside of them has the same reaction to their own theories as you so eloquently put above.
The AI will do what it's programmed to do, but its programmers' morality may not match my own. What's more scary is that it may be developed with the morality of a corporation rather than a person. (That is to say, no morals at all.)
I think it's perfectly justifiable to be scared of a very powerful being with no morals stomping around!
[0]: https://www.antipope.org/charlie/blog-static/2018/01/dude-yo...
Similar claims can be made about any structure of humans that exhibits gestalt intelligence, e.g. nations, stock markets, etc.
Up to a certain level. Beyond that level, the corp will be a tool of the AI. And even further along, the corp will no longer be necessary.
If children around you are dying of an easily preventable disease, then yes, help them first! If they just need more arts programs, then you help the children dying in another country first.
But anyway this whole model follows from a basic set of beliefs about quantifying suffering and about what one's ethical responsibilities are, and it answers those in ways most people would find very bizarre by turning them into a math problem that assigns no special responsibility to the people around you. I think that is much more contentious and gross to most people than EA thinks it is. It can be hard to say exactly why in words, but that doesn't make it less true.
In college, I became a scale-dependent realist, which is to say, that I'm most confident of theories / knowledge in the 1-meter, 1-day, 1 m/s scales and increasingly skeptical of our understanding of things that are bigger/smaller, have longer/short timeframes, or faster velocities. Maybe there is a technical name for my position? But, it is mostly a skepticism about nearly unlimited extrapolation using brains that evolved under selection for reproduction at a certain scale. My position is not that we can't compute at different scales, but that we can't understand at other scales.
In practice, the rationalists appear to invert their confidence, with more confidence in quarks and light-years than daily experience.
Musing on the different failure-directions: Pretty much any terrible present thing against people can be rationalized by arguing that one gazillion distant/future people are more important. That includes religious versions, where the stakes of the holy war may be presented as all of future humanity being doomed to infinite torment. There are even some cults that pitch it retroactively: offer to the priesthood to save all your ancestors who are in hell because of original sin.
The opposite would be to prioritize the near and immediate, culminating in a despotic god-king. This is somewhat more-familiar, we may have more cultural experience and moral tools for detection and prevention.
A check on either process would be that the denigrated real/nearby humans revolt. :p
This statement of yours makes no sense.
EAs by definition are attempting to remove the innate bias that discounts people far away by instead saying all lives are of equal worth.
>turning them into a math problem that assigns no special responsibility to the people around you
All lives are equal isn't a math problem. "Fuck it blow up the foreigners to keep oil prices low" is a math problem, it is a calculus that the US government has spent decades performing. (One that assigns zero value to lives outside the US.)
If $100 can save 1 life 10 blocks away from me or 5 lives in the next town over, what kind of asshole chooses to let 5 people die vs 1?
And since air travel is a thing, what the hell does "close to us" mean?
For that matter, from a purely selfish POV, helping lift other nations up to become fully advanced economies is hugely beneficial to me, and everyone on earth, in the long run. I'm damn thankful for all the aid my country gave to South Korea; the scientific advances that have come out of SK have paid back whatever tax dollars my grandparents contributed many times over.
> It can be hard to say exactly why in words, but that doesn't make it less true.
This is the part where I shout racism.
Because history has shown it isn't about people being far or close in distance, but rather in how those people look.
Americans have shot down multiple social benefit programs because, and these are what people who voted against those programs directly said was their reasons "white people don't want black people getting the same help white people get."
Whites in America have voted, repeatedly, to keep themselves poor rather than lift themselves and black families out of poverty at the same time.
Of course Americans think helping people in Africa is "weird".
The thing about strict-utilitarian-morality is that it can't comprehend any other kind of morality, because it evaluates the morality of... moralities... on its own utilitarian basis. And then of course it wins over the others: it's evaluating them using itself!
There are entirely different ethical systems that are not utilitarian which (it seems) most people hold and innately use (the "personal morality" I'm talking about in my earlier post). They are hard to comprehend rationally, but that doesn't make them less real. Strict-utilitarianism seems "correct" in a way that personal morality does not because you are working from a premise "only things that I can understand like math problems can be true". But what I observe in the world is that people's fear of the rationalist/EA mindset comes from the fact that they empirically find this way of thinking to be insidious. Their morality specifically disagrees with that way of thinking: it is not the case that truth comes from scrutable math problems; that is not the point of moral action to them.
The EA philosophy may be put as "well sure but you could change to the math-problem version, it's better". But what I observe is that people largely don't want to. There is a purpose to their choice of moral framework; it's not that they're looking at them all in a vacuum and picking the most mathematically sound one. They have an intrinsic need to keep the people around them safe and they're picking the one that does that best. EA on the other hand is great if everyone around you is safe and you have lots of extra spending money and what you're maximizing for is the feeling of being a good person. But it is not the only way to conceive of moral action, and if you think it is, you're too inside of it to see out.
I'll reiterate I am trying to describe what I see happening when people resist and protest rationalism (and why their complaints "miss" slightly---because IMO they don't have the language to talk about this stuff but they are still afraid of it). I'm sympathetic to EA largely, but I think it misses important things that are crippling it, of the variety above: an inability to recognize other people's moralities and needs and fears doesn't make them go away; it just makes them hate you.
I can comprehend them just fine, but I have a deep seated objection to any system of morality that leaves behind giant piles of dead bodies. We should be trying to minimize the size of the pile of dead bodies (and ideally eliminate the pile altogether!)
Any system of morality that boils down to "I don't care about that pile of dead bodies being huge because those people look different" is in fact not a system of morality at all.
The job of a system of morality is to synthesize all the things we want to happen / want to prevent happening into a way of making decisions. One such thing is piles of dead bodies. Another is one's natural moral instincts, like their need to take care of their family, or the feeling of responsibility to invest time and energy into improving their future or their community or repairing justice or helping people who need help, or to attend to their needs for art and meaning and fun and love and respect. A coherent moral system synthesizes these all and figures out how much priority to allocate to each thing in a way that is reasonable and productive.
Any system of morality that takes one of these criteria and discards the rest of them is not a system of morality at all, in the very literal sense that nobody will follow it. Most people won't sell out one of their moral impulses for the others, and EA/rationalism feels like it asks them to, since it asks them to place zero value on a lot of things that they inherently place moral value in, and so they find it creepy and weird. (It doesn't ask that explicitly; it asks it by omission. By never considering any other morality and being incapable of considering them, because they are not easily quantifiable or made logical, it asks you to accept a framework that sets you up to ignore most of your needs.)
My angle here is that I'm trying to describe what I believe is already happening. I'm not advocating it; it's already there, like a law of physics.
I think another part of it is a sort of healthy nativism or in-group preference or whatever you want to call it. It rubs people the wrong way when you say that you care about someone in a different country as much as you care about your neighbors. That's just… antisocial. Taken to its logical conclusion, a "rationalist" should not only donate all of their disposable income to global charities, they should also find a way to steal as much as possible from their neighbors and donate that, too. After all, those people in Africa need the money much more than their pampered Western neighbors.
Maybe they want to do it in a way I’d consider just: By exercising their rights as individuals in their personal domains and effectively airing their arguments in the public sphere to win elections.
But my intuition is they think democracy and personal rights of the non-elect are part of the problem to rationalize around and over.
Would genuinely love to read some Rationalist discourse on this question.
Reading critiques of Hegel is a great starting point for this reading.
Whether you accept it or not though, there's lots of non-rationalist schools that reject the need for a "theory of power".
When Curtis Yarvin is at least in your orbit, these should not be surprising questions to get.
Not only that, but this is exactly the kind of scenario where we should be giving those signals the most weight: The individual estimating whether to join up with a tribe. (As opposed to, say, bad feelings about doing calculus.)
Not only does it involve humans-predicting-humans (where we have a rather privileged set of tools) but there have been millions of years of selective pressure to be decent at it.
Part of the reason I enjoy rationalist discourse more is because, even if they are unabashedly utilitarian, they try to rigorously derive philosophy. Most internet discourse on philosophy is, as you say, just vaguely derived around gut feelings. But philosophy can and has been thought of rigorously. Virtue ethics and continental morality are both schools of thought that reject utilitarian ethics but are much more meaty than the sort of internet "no but my neighbors" intuition that you see in full force, and the weird insistence that these internet commenters continue to use their vague moral intuition without being rigorous about their own thoughts.
People benefit from a sense of a family, a sense of community. It helps us feel more secure, both personally and to our loved ones.
I think the more I view things through this lens, the more downstream benefits I see.
This sort of reasoning sounds great from 1000 feet up, but the longer you do it the closer you get to "I need to kill nearly all current humans to eliminate genetic diseases and control global warming and institute an absolute global rationalist dictatorship to prevent wars or humanity is doomed over the long run".
Or you get people who are working in a near panic to bring about godlike AI because they think that once the AI singularity happens the new AI God will look back in history and kill anybody who didn't work their hardest to bring it into existence because they assume an infinite mind will contain infinite cruelty.
I never quite realized the connection between this creepy feeling and the Bleak House criticism of far-flung philanthropy, until this comment.
I really like the depth of analysis in your comment, but I think there's one important element missing, which is that this is not an individual decision but a group heuristic to which individuals are then sensitized. Individuals don't typically go so far as to (consciously or unconsciously) extrapolate others' logic forward to decide that it's dangerous. Instead, people get creeped out when other people don't adhere to social patterns and principles that are normalized as safe in their culture, because the consequences are unknown and therefore potentially dangerous; or when they do adhere to patterns that are culturally believed to be dangerous.
This can be used successfully to identify things that are really dangerous, but it also has a high false positive rate (people with disabilities, gender identities, or physical characteristics that are not common or accepted within the beholder's culture can all trigger this, despite not posing any immediate/inherent threat) as well as a high false negative rate (many serial killers are noted to have been very charismatic, because they put effort into studying how to behave so as not to trigger this instinct). When we speak of something being normalized, we're talking about it becoming accepted by the mainstream so that it no longer triggers the 'creepy' response in the majority of individuals.
As far as I can tell, the social conservative basically believes that the set of normalized things has been carefully evolved over many generations, and therefore should be maintained (or at least modified only very cautiously) even if we don't understand why they are as they are, while the social liberal believes that we the current generation are capable of making informed judgements about which things are and aren't harmful, to a degree that we can (and therefore should) continuously iterate on that set to approach an ideal goal state in which it contains only things that are factually known to be harmful.
As an interesting aside, the ‘creepy’ emotion, (at least IIRC in women) is triggered not by obviously dangerous situations but by ambiguously dangerous situations, i.e. ones that don't obviously match the pattern of known safe or unsafe situations.
> Sometimes people don't or can't practice this protection for various reasons, and that's fine; it's a local problem that can be solved locally. But it's very insidious to turn around and justify not practicing it: "actually it's better not to behave morally; it's better to allocate resources to people far away; it's better to dedicate ourselves to fighting nebulous threats like AI safety or other X-risks instead of our neighbors".
The problem with the ‘us before them’ approach is that if two neighbourhoods each prioritize their local neighbourhood over the remote neighbourhood and compete (or go to war) to better their own neighbourhood at the cost of the other, generally both neighbourhoods are left worse off than they started, at least in the short term: both groups trying to make locally optimal choices leads (without further constraints) to globally highly suboptimal outcomes. In recognition of this a lot of people, not just capital-R Rationalists, now believe that at least in the abstract we should really be trying to optimize for global outcomes.
Whether anybody realistically has the computational ability to do so effectively is a different question, of course. Certainly I personally think the future-discounting ‘bias’ is a heuristic used to acknowledge the inherent uncertainty of any future outcome we might be trying to assign moral weight to, and should be accorded some respect. Perhaps you can make the same argument for the locality bias, but I guess that Rationalists (generally) either believe that you can't, or at least have a moral duty to optimize for the largest scope your computational power allows.
(...that said, progressivism has largely failed in dispelling this delusion. It is far too easy to feel as though progressivism/liberalism exists to prop up power hierarchies and economic disparities because in many ways it does, or has been co-opted to do that. I think on net it does not, but it should be much more cut-and-dry than it is. For that to be the case progressivism would need to find a way to effectively turn on its parasites, that is, rent-extracting capitalism and status-extracting moral elitism).
re: the first part of your reply. I sorta agree but I do think people do more extrapolation than you're saying on their own. The extrapolation is largely based on pattern-matching to known things: we have a rich literature (in the news, in art, in personal experience and storytelling) of failure modes of societies, which includes all kinds of examples of people inventing new moral rationalizations for things and using them to disregard personal morality. I think when people are extrapolating rationalists' ideas to find things that creep them out, they're largely pattern-matching to arguments they've seen in other places. It's not just that they're unknowns. And those arguments are, well, real arguments that require addressing.
And yeah, there are plenty of examples of people being afraid of things that today we think they should not have been afraid of. I tend to think that that's just how things go: it is the arc of social progress to figure out how to change things from unknown+frightening to known+benign. I won't fault anyone for being afraid of something they don't understand, but I will fault them for not being open-minded about it or being unempathetic or being cruel or not giving people chances to prove themselves.
All of this is rendered much more opaque and confusing by the fact that everyone places way too much stock in words, though (e.g. the OP I was replying to, who was taking all these criticisms of the rationalists at face value). IMO this is a major trend that fucks royally with our ability as a society to make moral progress: we have come to believe that words supplant emotional intuition in a way that wrecks our ability to actually understand what people are upset about (I like to blame this trend for much of the modern political polarization). A small example of this is a case that I think everyone has experienced, which is a person discounting their own sense of creepiness about somebody else because they can't come up with a good reason to explain it and it feels unfair to treat someone coldly on a hunch. That should never have been possible: everyone should be trusting their hunches.
(which may seem to conflict with my preceding paragraph... should you trust your hunches or give people the chance to prove themselves? well, it's complicated, but it also really depends on what the result is. Avoiding someone personally because they creep you out is always fine, but banning their way of life when it doesn't affect you at all or directly harm anyone is certainly not.)
One thing I'd like to add though is that I do think there is an additional piece being discarded irrationally. They tend to highly undervalue everything you're describing. Humans aren't Vulcans. By being so obsessed with the risks of paperclip-maximizing-robots they devalue the risks of humans being the irrational animals they are.
This is why many on the left criticize them for being right wing. Not because they are, well some might be, but because they are incredibly easy to distract from what is being communicated by focusing too much on what is being said. That might be a bad phrasing, but what I mean is that when you look at this piece from last year about prison sentence length and crime rates by Scott Alexander[0], nothing he says is genuinely unreasonable. He's generally evaluating the data fairly and rationally. Some might disagree there but that's not my point. My point is that he's talking to a nonexistent group. The right largely believes that punishment is the point of prison. They might _say_ the goal is to reduce crime, but they are communicating based on a set of beliefs that strongly favors punitive measures for their own sake. This causes a piece like that to miss the forest for the trees, and it can be read by those on the left as functionally right-wing propaganda.
Most people are not rational. Maybe some day they will be but until then it is dangerous to assume and act as if they are. This makes me see the rationalists as actually rather irrational.
0: https://www.astralcodexten.com/p/prison-and-crime-much-more-...
The primary issue, as others have noted, is that they focus on people going into the highest-paying jobs without much care for the morality of those jobs. Ergo they are fine being net negatives in terms of their work and philosophy.
All they do is donate money. Donations don’t fix society. Nothing changes structurally. No root problems are looked at.
They largely ignore capitalism’s faults or when I’ve seen them talk about, it’s done in a way of superficially decrying capitalist issues but then largely going along with them. Which ties into how they focus on high paying jobs regardless of morality (I’m exaggerating here but the overall point is correct).
—
HN is not intelligent when it comes to politics or the world. The avg person here is a western chauvinist with little political knowledge but a defensive ego about it. No need to be sad about this comment page.
Do you have examples of that? I have a different perception, most of the EAs I've met are very grounded and sharp.
For example the most recent issue of their newsletter: https://us8.campaign-archive.com/?e=7023019c13&u=52b028e7f79...
I'm not sure where there are any “hypothetical logical thought exercises” that “end up coming to insane conclusions” in there.
For the first part where you say “not realizing that their initial conditions were highly artificial so any conclusion they reach is only of academic value” this is quite the opposite of my experience with them. They are very receptive to criticism and reconsider their point of view in reaction to that.
They are generally well-aware of the limits of data-driven initiatives and the dangers of indulging into purely abstract thinking that can lead to conclusions that indeed don't make sense.
The newsletter is of course far more to the point than that, but even then you'll notice half of it is devoted to understanding the emotional state and intentions of LLMs...
It is of course entirely possible to identify as an "Effective Altruist" whilst making above-average donations to charities with rigorous efficacy metrics and otherwise being completely normal, but that's not the centre of EA debate or culture....
EAs gave $1,886,513,058 through GiveWell[1], and there is 0 AI stuff in there (you can search in the linked Airtable spreadsheet).
There is also a whole movement for doing a lifetime commitment to give 10% of your earnings to charity. 9,880 people took the pledge so far[2].
[1] https://airtable.com/appGuFtOIb1eodoBu/shr1EzngorAlEzziP/tbl...
GiveWell continues to plow its own rigorous international development-focused furrow, but its cofounder, once noted for calling out everyone else for lack of rigour in their evidence base, has moved on to fluffy essays about how this is probably "the most important century" because it's either AI armageddon or maybe his wife's $61B startup will save us all...
I also believe that idealistic people will go to great lengths to convince themselves that their desired outcome is, in fact, the moral one. It starts by saying things like, "Well, what is harm, actually..." and then constructing a definition that supports the conclusions they've already arrived at.
I'm quite sure Sam Bankman-Fried did not believe he was harming anybody when he lost/stole/defrauded his investors and depositors' money.
What is your alternative? What's your framework that makes you contribute to malaria prevention more or more effectively than EAs do? Or is the claim instead that people should shut down conversation within EA that strays from the EA mode?
How much more do they need to give before you will change your mind about whether “EA's actually want to do something about malaria”?
[1] https://www.givewell.org/all-grants-fund
[2] https://airtable.com/appGuFtOIb1eodoBu/shr1EzngorAlEzziP/tbl...
I am plenty happy to simp for the Gates foundation, but I think it's important to acknowledge that becoming Bill Gates to support charity is not a strategy the average person can replicate. The question for me is how do I live my life to support the causes I care about, not who lives more impressive lives than me.
If you exclude "nations" then it does look to be the Church: "The Church operates more than 140,000 schools, 10,000 orphanages, 5,000 hospitals and some 16,000 other health clinics". Caritas, the relevant charitable umbrella organization, gives $2-4b per year on its own, and that's not including the many, many operations run by religious orders not under that umbrella, or by the hundreds of thousands of parishes around the world (most of which operate charitable operations of their own).
And yet, rationalists are totally happy criticizing the Catholic Church -- not that I'm complaining, but it seems a bit hypocritical.
Similarly, it's not like government funding is an overlooked part of EA. Working on government and government aid programs is something EA talks about, high leverage areas like policy especially. If there's a more standard government role that an individual can take that has better outcomes than what EAs do, that would be an important update and I'd be interested in hearing it. But the criticism that EA is just not large enough is hard to action on, and more of a work in progress than a moral failing.
https://www.greaterwrong.com/posts/5JB4dn7hEvZCYDvCe/a-very-...
https://www.greaterwrong.com/posts/MHBJ436AiKyN83DCB/thinkin...
https://www.greaterwrong.com/posts/58ajesm2C38wi3WSJ/insect-...
I don't think any "Rationalists" I ever met would actually consider concepts like scientific method...
In that case I don't think you've met any of the people under discussion.
There has to be (or ought to be) a name for this kind of epistemological fallacy, where, in the pursuit of truth, logical sophistication and soundness between starting assumptions (or first principles) and conclusions becomes functionally far more important than carefully evaluating and thoughtfully choosing the right starting assumptions (and being willing to change them when they are found to be inconsistent with sound observation and interpretation).
“[...] Clevinger was one of those people with lots of intelligence and no brains, and everyone knew it except those who soon found it out. In short, he was a dope." - Joseph Heller, Catch-22 https://www.goodreads.com/quotes/7522733-in-short-clevinger-...
HN should be better than this.
Can people suffer from that impairment? Is that possible? If not, please explain how wrong assumptions can be eliminated without actively looking for them. If the impairment is real, what would you call its victims? Pick your own terminology.
Calling someone a dumbass in this situation is a kindness, because the assumption is that they're capable of not being one with a little self-reflection.
I can't see how insulting someone pursues their well-being in any way.
First they laugh...
Maybe it's similar to the "Good Student" picture. Bright within a given assignment, but taking the assignment to be immutable, or taking no interest in where the assignment comes from.
Or I've heard a saying "nobody will pay you to solve problems that they've already defined clearly."
I have read Effective Altruists like that. But I also remember seeing a lot of money donated to a bunch of really decent sounding causes because someone spent 5 minutes asking themselves what they wanted their donation to maximise, decided on "Lives saved" and figured out who is doing the best at that.
Honestly thought they were the same people
https://www.lesswrong.com/ and https://forum.effectivealtruism.org/ run on the same software, and some people have an account at both of them, but they are generally separate web communities.
Isn't there a lot of overlap between the two groups?
I recently read a great book that examines these various groups and their commonality: More Everything Forever: AI Overlords, Space Empires, and Silicon Valley's Crusade to Control the Fate of Humanity by Adam Becker. Highly recommended.
For anyone else reading, a good example of what EAs do can be seen with the GiveWell charity: https://www.givewell.org/top-charities-fund
Lots of anti-malaria and vitamin stuff (as a cheap way to save lots of lives). There are also tons of EA animal charities too, such as Humane League: https://thehumaneleague.org/our-impact
If they want to donate to charity, they can just donate. You don't gotta make a religion out of it.
I also think the ambiguity of meaning in natural language is why statistical LLMs are so popular with this crowd. You don't need to think about meaning and parsing. Whatever the LLM assumes is the meaning is whatever the meaning is.
Ironically, that reminds me of "37 Ways That Words Can Be Wrong", written by Eliezer Yudkowsky in 2008...
Logic requires properties of metaphysical objectivity.
If you use the true meaning of words, claiming such things are true when in fact they are false would be called irrationality, delusion, sophistry, or fallacy.
Aren't these the people who started the trend of writing things like "epistemic status: mostly speculation" on their blog posts? And writing essays about the dangers of overconfidence? And measuring how often their predictions turn out wrong? And maintaining webpages titled "list of things I was wrong about"?
Are you sure you're not painting this group with an overly-broad brush?
And I have definitely encountered "if you just listen to me properly you will understand that I am right, because I have derived my conclusions rationally" in in-person interactions.
On balance I'd rather have some arrogance and willingness to be debated and be wrong, over a timid need to defer to centuries of established thought, though. The people I've met in person I've always been happy to hang out with and talk to.
I remember as a child coming to the same "if reality is a deception, at least I must exist to be deceived" conclusion that Descartes did, well before I had heard of Descartes. (I don't think this makes me special, it's just a natural conclusion anyone will reach if they ponder the subject). I think it's harmless for me to discuss that idea in public without someone saying "you need to read Descartes before you can talk about this".
I also find my personal ethics are strongly aligned with what Kant espoused. But most people I talk to are not academic philosophers and have not read Kant, so when I want to explain my morals, I am better off explaining the ideas themselves than talking about Kant, which would be a distraction anyway because I didn't learn them from Kant, we just arrived at the same conclusions. If I'm talking with a philosopher I can just say "I'm a Kantian" as shorthand, but that's really just jargon for people who already know what I'm talking about.
I also think that while it would be unusual for someone to (for example) write a guide to understanding relativity without once mentioning Einstein, it also wouldn't be a fundamental flaw.
(But I agree there's certainly no excuse for someone asserting that they're right because they're rational!)
The problem is less clear in philosophy than mathematics, but it's still there. It's really easy on your own terms to come up with some idea that the collective intelligence has already revealed to be fatally flawed in some undeniable manner, or at the very least, has very powerful arguments against it that an individual may never consider. The ideas that have survived decades, centuries, and even millennia against the collective weight of humanity assaulting them are going to have a certain character that "something someone came up with last week" will lack.
(That said I am quite heterodox in one way, which is that I'm not a big believer in reading primary sources, at least routinely. Personally I think that a lot of the primary sources noticeably lack the refinement and polish added as humanity chews it over and processes it and I prefer mostly pulling from the result of the process, and not from the one person who happened to introduce a particular idea. Such a source may be interesting for other reasons, but not in my opinion for philosophy.)
I'm not sure if this counterpoint generalizes entirely to the original critique, since certainly LessWrongers aren't usually posting about or discussing math as if they've discovered it-- usually substantially more niche topics.
Mathematical logic (at the intersection of math and philosophy) didn't have that many true predecessors and was developed very far by maybe only 5-10 individuals cumulatively; information theory was basically established by Claude Shannon and maybe two other guys; various aspects of convex optimization and Fourier analysis were only developed in the 80s or so. By the same token, it stands to reason that the AI-related applications of various aspects of philosophy are ripe to be developed now. (By contrast, we don't see, as much, people on LW trying to redo linear algebra from the ground up, nor more "mature" aspects of philosophy.)
(If anything, I think it's more feasible than ever before, also, for a bunch of relative amateurs to non-professionally make real intellectual contributions, like noticeably moreso than 100 or even 20 years ago. That's what increasing the baseline levels of education/wealth/exposure to information was intended to achieve, on some level, isn't it?)
Or because western culture reflects this theme continuously through all the culture and media you've been immersed in since you were a child?
Also, the idea is definitely not new to Descartes, you can find echoes of it going back to Plato, so your idea isn't wrong per se. But I think it underrates the extent to which our philosophical preconceptions are culturally constructed.
All serious works in philosophy (Kant especially) are subject to interpretation. Whole research programmes exist around the works of major philosophers, interpreting and building on their works.
One cannot really do justice to e.g. the Critique of Pure Reason by discussing it based on a high level summary of the “main ideas” contained in it. These works have had a major impact on the history of Western philosophy and were groundbreaking at the time (and still are).
Suppose a foot race. Choose two runners of equal aptitude and finite existence. Start one at mile 1 and one at mile 100. Who do you think will get farther?
Not to mention, engaging in human community and discourse is a big part of what it means to be human. Knowledge isn't personal or isolated, we build it together. The "first principles people" understand this to the extent that they have even built their own community of like minded explorers, problem is, a big part of this bond is their choice to be willfully ignorant of large swaths of human intellectual development. Not only is this stupid, it also is a great disservice to your forebears, who worked just as hard to come to their conclusions and who have been building up the edifice of science bit by bit. It's completely antithetical to the spirit of scientific endeavor.
I come from a physics background. We used to (and still) have a ton of physicists who decide to dabble in a new field, secure in their knowledge that they are smarter than the people doing it, and that anything worthwhile that has already been thought of they can just rederive ad hoc when needed (economists are the only other group that seems to have this tendency...) [1]. It turned out every time that the people who had spent decades working on, studying, discussing and debating the field in question had actually figured important shit out along the way. They might not have come with the mathematical toolbox that physicists had, and outside perspectives that challenge established thinking to prove itself again can be valuable, but when your goal is to actually understand what's happening in the real world, you can't ignore what's been done.
[1] There even is an xkcd about this:
This is a feature, not a bug, for writers who hold an opinion on something and want to rationalize it.
So many of the rationalist posts I've read through the years come from someone who has an opinion or gut feeling about something, but they want it to be seen as something more rigorous. The "first principles" writing style is a license to throw out the existing research on the topic, including contradictory evidence, and construct an all new scaffold around their opinion that makes it look more valid.
I use the "Slime Mold Time Mold - A Chemical Hunger" blog series as an example because it was so widely shared and endorsed in the rationalist community: https://slimemoldtimemold.com/2021/07/07/a-chemical-hunger-p... It even received a financial grant from Scott Alexander of Astral Codex Ten.
Actual experts were discrediting the series from the first blog post and explaining all of the author's errors, but the community soldiered on with it anyway, eventually making the belief that lithium in the water supply was causing the obesity epidemic into a meme within the rationalist community. There's no evidence supporting this and countless take-downs of how the author misinterpreted or cherry-picked data, but because it was written with the rationalist style and given the implicit blessing of a rationalist figurehead it was adopted as ground truth by many for years. People have been waking up to issues with the series for a while now, but at the time it was remarkable how quickly the idea spread as if it was a true, novel discovery.
I think that SlimeMoldTimeMold's rise and fall was actually a pretty big point in favor of the "rationalist community".
That feels like revisionist history to me. It rose to fame in LessWrong and SlateStarCodex, was promoted by Yudkowsky, and proliferated for about a year and a half before the takedowns finally got traction.
While it was the topic du jour in the rationalist spaces it was very difficult to argue against. I vividly remember how hard it was to convince anyone that SMTM wasn't a good source at the time, because so many people saw Yudkowsky endorse it, saw Scott Alexander give it a shout out, and so on.
Now Yudkowsky has gone back and edited his old endorsement, it has disappeared from the discourse, and many want to pretend the whole episode never happened.
> (I don't remember any detailed takedowns of SlimeMoldTimeMold coming before that article, but maybe there are).
Exactly my point. It was criticized widely outside of the rationalist community, but the takedowns were all dismissed because they weren't properly rationalist-coded. It finally took someone writing it up in the form of rationalist rhetoric and seeding it into LessWrong to break the spell.
This is the trend with rationalist-centric contrarianism: You have to code your articles with the correct prose, structure, and signs to get uptake in the rationalist community. Once you see it, it's hard to miss.
Do you have any examples of this that predate that LW article? Ideally both the critique and its dismissal but just the critique would be great. The original HN submission had a few comments critiquing it but I didn't see anything in depth (or for that matter as strident).
Don't worry, HN commenters can figure out the truth about Yudkowsky's articles from the first principles. They have already figured out that EAs no longer care about curing malaria, despite https://www.givewell.org/charities/top-charities only being a Google search away.
In the end, they will give you a lecture about how everyone hates people who are smug and talk about things they have no clue about. The lecture will then get a lot of upvotes.
I wish more people who detest rationalist ideology were curious about actual evidence.
Similarly, Aurornis made a claim that "Scott Alexander predicted at least $250 million in damages from Black Lives Matter protests", when in fact (as the very link provided by Aurornis shows) Scott predicted that the probability of such a thing happening was 30%, i.e. it's more likely not to happen.
Elsewhere in this thread, another user, tptacek, claims that "Scott Alexander published some of his best-known posts under his own name". When I asked him for evidence, he said "I know more about this than you, and I'm not invested in this discussion enough to educate you adversarially". Translated: no evidence provided.
From my perspective, this all kinda proves my point.
Is the rationality community the only place where people care about evidence? Of course not.
But is the rationality community a rare place where people can ask for evidence in an informal debate and reasonably expect to actually get it? Unfortunately, I think the evidence we got here points towards yes.
Hacker News is a website mostly visited by smart people who are curious about many things. They are even smart enough to notice that some claims are suspicious, and ask for evidence. But will they receive it? No, they usually won't.
And in the next debate on the same topic, most likely the same false claims will be made again, maybe by people who have learned them in this thread. And the claims will be upvoted again.
This is an aspect where the rationality community strives to do better. It is not about some people being smarter than others, or whatever accusations are typically made. It is about establishing social norms where people e.g. don't get upvoted for making unsubstantiated negative claims about someone they don't like, without being asked to back it up, or get downvoted.
I find a lot of people in software have an insufferable tendency to simply ignore entire bodies of prior art, prior research, etc. outside of maybe computer science (and even that can be rare). Yet they act as though they are the most studied participants in the subject, proudly proclaiming "genius insights" that are essentially restatements of basic facts in any given field, facts they would have learned if they had just bothered to, you know, actually do research and put aside their egos for half a second to wonder whether the eons of human activity prior to their precious existence might have led to some decent knowledge.
> Aren't these the people who started the trend of writing things like "epistemic status: mostly speculation" on their blog posts? And writing essays about the dangers of overconfidence? And measuring how often their predictions turn out wrong? And maintaining webpages titled "list of things I was wrong about"?
There's a lot of in-group signaling in rationalist circles like the "epistemic status" taglines, posting predictions, and putting your humility on show.
This has come full-circle, though, and now rationalist writings are generally pre-baked with hedging, both-sides takes, escape hatches, and other writing tricks that make it easier to claim they weren't entirely wrong in the future.
A perfect example is the recent "AI 2027" doomsday scenario that predicts a rapid escalation of AI superpowers followed by disaster in only a couple years: https://ai-2027.com/
If you read the backstory and supporting blog posts from the authors they are filled to the brim with hedges and escape hatches. Scott Alexander wrote that it was something like "the 80th percentile of their fast scenario", which means when it fails to come true he can simply say it wasn't actually his median prediction anyway and that they were writing about the fast scenario. I can already predict that the "We were wrong" article will be more about what they got right, with a heavy emphasis on the fact that it wasn't their real median prediction anyway.
I think this group relies heavily on the faux-humility and hedging because they've recognized how powerful it is to get people to trust them. Even the comment above is implying that because they say and do these things, they must be immune from the criticism delivered above. That's exactly why they wrap their posts in these signals, before going on to do whatever they were going to do anyway.
If you want to say their humility is not genuine, fine. I'm not sure I agree with it, but you are entitled to that view. But to simultaneously be attacking the same community for not ever showing a sense of maybe being wrong or uncertain, and also for expressing it so often it's become an in-group signal, is just too much cognitive dissonance.
That's my point: Their rhetorical style is interpreted by the in-group as a sort of weird infallibility. Like they've covered both sides and therefore the work is technically correct in all cases. Once they go through the hedging dance, they can put forth the opinion-based point they're trying to make in a very persuasive way, falling back to the hedging in the future if it turns out to be completely wrong.
The writing style looks different depending on where you stand: Reading it in the forward direction makes it feel like the main point is very likely. Reading it in the backward direction you notice the hedging and decide they were also correct. Yet at the time, the rationalist community attaches themselves to the position being pushed.
> But to simultaneously be attacking the same community for not ever showing a sense of maybe being wrong or uncertain, and also for expressing it so often it's become an in-group signal, is just too much cognitive dissonance.
That's a strawman argument. At no point did I "attack the community for not ever showing a sense of maybe being wrong or uncertain".
edit: my apologies, that was someone else in the thread. I do feel like between the two comments though there is a "damned if you do, damned if you don't". (The original quote above I found absurd upon reading it.)
“Damned if you do” indeed…
This is right, but doesn't actually cover all the options. It's damned if you [write confidently about something and] do or don't [hedge with a probability or "epistemic status"].
But the other option, which is the one the vast majority of people choose, is to not write confidently about everything.
It's fine, there are far worse sins than writing persuasively about tons of stuff and inevitably getting lots of it wrong. But it's absolutely reasonable to criticize this choice, regardless of the level of hedging.
They can't really be blamed for the fact that others go on to take the ideas more seriously than they intended.
(If anything, I think that at least in person, most rationalists are far less confident and far less persuasive than the typical person in proportion to the amount of knowledge/expertise/effort they have on a given topic, particularly in a professional setting, and they would all be well-served to do at least a normal human amount of "write and explain persuasively rather than as a mechanical report of the facts as you see them".)
(Also, with all communities there will be the more serious and dedicated core of the people, and then those who sort of cargo-cult or who defer much, or at least some, of their thinking to members with more status. This is sort of unavoidable on multiple levels-- for one, it's quite a reasonable thing to do with the amount of information out there, and for another, communities are always comprised of people with varying levels of seriousness, sincere people and grifters, careful thinkers and less careful thinkers, etc. (see mobs-geeks-sociopaths))
(Obviously even with these caveats there are exceptions to this statement, because society is complex and something about propaganda and consequentialism.)
Alternately, I wonder if you think there might be a better way of "writing unconfidently", like, other than not writing at all.
In most writing, people write less persuasively on topics they have less conviction in.
Ok, let's scroll up the thread. When I refer to "the specific criticism that I quoted", and when you say "implying that because they say and do these things, they must be immune from the criticism delivered above": what do you think was the "criticism delivered above"? Because I thought we were talking about contrarian1234's claim to exactly this "strawman", and so far you have not appeared to disagree with me that this criticism was invalid.
My point wasn't to nit-pick individual predictions, it was a general explanation of how the game is played.
Since Scott Alexander comes up a lot, a few randomly selected predictions that didn't come true:
- He predicted at least $250 million in damages from Black Lives Matter protests.
- He predicted Andrew Yang would win the 2021 NYC mayoral race with 80% certainty (he came in 4th place)
- He gave a 70% chance to Vitamin D being generally recognized as a good COVID treatment
This is just random samples from the first blog post that popped in Google: https://www.astralcodexten.com/p/grading-my-2021-predictions
It's also noteworthy that a lot of his predictions are about his personal life, his own blogging actions, or [redacted] things. These all get mixed in with a small number of geopolitical, economic, and medical predictions, with the net result of bringing his overall accuracy up.
> He predicted at least $250 million in damages from Black Lives Matter protests.
He says
> 5. At least $250 million in damage from BLM protests this year: 30%
which, by my reading means he assigns it greater-than-even odds that _less_ than $250 million dollars in damages happened (I have no understanding of whether or not this result is the case, but my reading of your post suggests that you believe that this was indeed the outcome).
You say
> He gave a 70% chance to Vitamin D being generally recognized as a good COVID treatment
while he says
> Vitamin D is _not_ generally recognized (eg NICE, UpToDate) as effective COVID treatment: 70%
(emphasis mine)
(I feel like you're probably getting upvotes from people who feel similarly, but sometimes I feel like nobody ever writes "I agree with you" comments, so the impression is that there's only disagreement with some point being made.)
Then you start encountering the weirder parts. For me, it was the groupthink and hero worship. I just wanted to read interesting takes on new topics, but if you deviated from the popular narrative associated with the heroes (Scott Alexander, Yudkowsky, Cowen, Aaronson, etc.) it felt like the community's immune system identified you as an intruder and started attacking.
I think a lot of people get drawn into the idea of it being a community where they finally belong. Especially on Twitter (where the latest iteration is "TPOT") it's extraordinarily clique-ish and defensive. It feels like high school level social dynamics at play, except the players are equipped with deep reserves of rhetoric and seemingly endless free time to dunk on people and send their followers after people who disagree. It's a very weird contrast to the ideals claimed by the community.
Since when is that what we do here? If he'd written that he'd decided to become vegetarian, would we all be out here talking about how vegetarians are so annoying and one of them even spat on my hamburger one time?
And then of these uncalled-for takedowns, several -- including yours -- don't even seem to be engaging in good-faith discourse, and seem happy to pile on to attacks even when they're completely at odds with their own arguments.
I'm sorry to say it but the one who decided to use their free time to leer at people un-provoked over the internet seems to be you.
(Indeed, I think it's in worse faith to try to guilt trip people who are just expressing critical opinions. It's fine - good, even! - to disagree with those people, but this particular comment has a very "how dare you criticize something!" tone that I don't think is constructive.)
How was it condescending or lecturing?
> You could simply ask "Can you provide examples" instead of the "If you ____ then I suggest ____" form.
Why is that not equally condescending or lecturing?
It just isn't! An inability to accurately identify what comes off as condescending is kind of the point...
"Are you being condescending" is a subjective judgement that other people will make up their own minds about. You can't control what people think about things you say and do, and they aren't "oppressing" you by making up their own minds about that.
You seem to be trying to insinuate that Alexander et al. are pretending to know how things will turn out and then hiding behind probabilities when they don't turn out that way. This is missing the point completely. The point is that when Alexander assigns an 80% probability to many different outcomes, about 80% of them should occur, and it should not be clear to anyone (including Alexander) ahead of time which 80%.
> He predicted at least $250 million in damages from Black Lives Matter protests.
Many sources estimated damages at $2 billion or more (see https://www.usatoday.com/story/news/factcheck/2022/02/22/fac... and links from there), so this did in fact come true.
Edit: I see that the prediction relates to 2021 specificially. In the wake of 2020, I think it was perfectly reasonable to make such a prediction at that confidence level, even if it didn't actually turn out that way.
> He predicted Andrew Yang would win the 2021 NYC mayoral race with 80% certainty (he came in 4th place)
> He gave a 70% chance to Vitamin D being generally recognized as a good COVID treatment
If you make many predictions at 70-80% confidence, as he does, you should expect 20-30% of them not to come true. It would in fact be a failure (underconfidence) if they all came true. You are in fact citing a blog post that is exactly about a self-assessment of those confidence levels.
Also, he gave a 70% chance to Vitamin D not being generally recognized as a good COVID treatment.
> These all get mixed in with a small number of geopolitical, economic, and medical predictions with the net result of bringing his overall accuracy up.
The point is not "overall accuracy", but overall calibration - i.e., whether his assigned probabilities end up making sense and being statistically validated.
You have done nothing to establish any correlation between the category of prediction and his accuracy on them.
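To make the calibration point concrete, here is a minimal sketch (my own illustration, not anything from the blog or the thread) of how such a check can be scored: group the predictions by the probability assigned to them, then compare that probability with the fraction that actually came true.

    from collections import defaultdict

    def calibration(predictions):
        # predictions: list of (assigned_probability, outcome) pairs, outcome True/False
        buckets = defaultdict(list)
        for prob, outcome in predictions:
            buckets[prob].append(outcome)
        # for each probability bucket, the fraction of predictions that came true
        return {prob: sum(outcomes) / len(outcomes)
                for prob, outcomes in sorted(buckets.items())}

    # Ten predictions made at 80% confidence, of which eight came true, are well
    # calibrated even though two individual predictions "failed".
    preds = [(0.8, True)] * 8 + [(0.8, False)] * 2
    print(calibration(preds))  # {0.8: 0.8}

The function name and the toy data are hypothetical; the point is only that a single failed 70-80% prediction is expected, not by itself evidence of a bad forecaster.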
Yes that's the point, these people hedge like crazy to the point that they say nothing, they mean nothing, and effectively predict nothing.
I genuinely don't understand how you can point to someone's calibration curve where they've broadly done well, and cherry pick the failed predictions they made, and use this not just to claim that they're making bad predictions but that they're slimy about admitting error. What more could you possibly want from someone than a tally of their prediction record graded against the probability they explicitly assigned to it?
One man's modus ponens, as it goes.
lol, what? That was a civil comment. This seems like an excellent example of the point being made. Replying to a perfectly reasonable but critical comment with "please be civil" is super condescending.
So is stuff like "one man's modus ponens".
Look, we get it, you're talking to people who found this stuff smart and interesting in the past. But we got tired of it. For me, I realized after awhile that the people I most admired in real life were pretty much the opposite of the people I was reading the most on the internet. None of the smartest people I know talk in this snooty online iamsosmart style.
> This isn’t about me being an expert on these topics and getting them exactly right, it’s about me calibrating my ability to tell how much I know about things and how certain I am.
> At least $250 million in damage from BLM protests this year: 30%
Aurornis:
> I forgot about the slightly condescending, lecturing tone that comes out when you disagree with rationalist figureheads.
> Since Scott Alexander comes up a lot, a few randomly selected predictions that didn't come true:
> He predicted at least $250 million in damages from Black Lives Matter protests.
Is this a "perfectly reasonable but critical comment"?
Am I condescending if I say that predicting a 30% chance that something happens means predicting a 70% chance that it won't happen... so the fact that it didn't happen probably shouldn't be used as "gotcha!"?
(I did waffle upon re-reading my comment and thinking it could have been more civil. But then decided that this person is also being very thin skinned. So I think you're right that we're both right.)
When big money got involved, the tone shifted a lot. One phrase that really stuck with me is "exceptional talent". Everyone in EA was suddenly talking about finding, involving, hiring exceptional talent at a time where there was more than enough money going around to give some to us mediocre people as well.
In the case of EA in particular circlejerks lead to idiotic ideas even when paired with rationalist rhetoric, so they bought mansions for team building (how else are you getting exceptional talent), praised crypto (because they are funding the best and brightest) and started caring a lot about shrimp welfare (no one else does).
I think that sentence would be a fair description of certain individuals in the EA community, especially SBF, but that is not the same thing as saying that rationalists don't ever express epistemic uncertainty, when on average they spend more words on that than just about any other group I can think of.
Ah. They are working out ecology through first principles, I guess?
I feel like a lot of the criticism of EA and rationalism does boil down to some kind of general criticism of naivete and entitlement, which... is probably true when applied to lots of people, regardless of whether they espouse these ideas or not.
It's also easier to criticize obviously doomed/misguided efforts at making the world a better place than to think deeply about how many of the pressing modern day problems (environmental issues, extinction, human suffering, etc.) also seem to be completely intractable, when analyzed in terms of the average individual's ability to take action. I think some criticism of EA or rationalism is also a reaction to a creeping unspoken consensus that "things are only going to get worse" in the future.
I think it's that combined with the EA approach to it which is: let's focus on space flight and shrimp welfare. Not sure which side is more in denial about the impending future?
I have no belief any particular individual can do anything about shrimp welfare more than they can about the intractable problems we do face.
I think it's a result of its complete denial of and ignorance of politics. Because rationalist and effective altruist movements make a whole lot more sense if you realize they are talking about deeply social and political issues with all politics removed from them. It's technocrat-ism, the poster child of the kind of "there is no alternative" neoliberalism that everyone in the western world was indoctrinated into since the 80s.
It's a fundamental contradiction: we don't need to talk about politics because we already know liberal democracies and free-market capitalism are the best we're ever going to achieve, even as we are faced with numerous intractable problems that cannot possibly be related to liberal democracies and free-market capitalism.
The problem is: How do we talk about any issue the world is facing today without ever challenging or even talking about any of the many assumptions the western liberal democracies are based upon? In other words: the problems we face are structural/systemic, but we are not allowed to talk about the structures/systems. That's how you end up with space flight and shrimp welfare and AGI/ASI catastrophizing taking up 99% of everything these people talk about. It's infantile, impotent liberal escapism more than anything else.
They bought one mansion to host fundraisers with the super-rich, which I believe is an important correction. You might disagree with that reasoning as well, but it's definitely not as described.
As far as I know it's never hosted an impress-the-oligarch fundraiser, which as you say would at least have a logic behind it[1] even if it might seem distasteful.
For a philosophy which started out from the point of view that much of mainstream aid was spent with little thought, it was a bit of an end of Animal Farm moment.
(to their credit, a lot of people who identified as EAs were unhappy. If you drew a Venn diagram of the people that objected, people who sneered at the objections[2] and people who identified as rationalists you might only need two circles though...)
[1] a pretty shaky one considering how easy it is to impress American billionaires with Oxford architecture without going to the expense of operating a nearby mansion as a venue, particularly if you happen to be a charitable movement with strong links to the university...
[2] obviously people are only objecting to it for PR purposes because they're not smart enough to realise that capital appreciates and that venues cost money, and definitely not because they've got a pretty good idea how expensive upkeep on little-used medieval venues is and how many alternatives exist if you really care about the cost effectiveness of your retreat, especially to charitable movements affiliated with a university...
I’m a bit confused by this one.
Are you saying that no-one who identifies as rationalist sneered at the objections? Because I don’t think that’s true.
>As far as I know it's never hosted an impress-the-oligarch fundraiser
As far as I know, they only hosted 3 events there before deciding to sell, so this is low-information.
Yes! It can be true both that rationalists tend, more than almost any other group, to admit and try to take account of their uncertainty about things they say and that it's fun to dunk on them for being arrogant and always assuming they're 100% right!
Because they were doing so many workshops that buying a building was cheaper than renting all the time.
You may argue that organizing workshops is wrong (and you might be right about that), but once you choose to do them, it makes sense to choose the cheaper option rather than the more expensive one. That's not rationalist rhetoric, that's just basic economics.
"Aren't these the people who"...
> And writing essays about the dangers of overconfidence? And measuring how often their predictions turn out wrong? And maintaining webpages titled "list of things I was wrong about"?
What's the value of that if it doesn't appear to be genuinely applied to their own ideas? What you described otherwise is just another form of the exact kind of self-congratulation often (reasonably, IMO) lobbed at these "people".
They're responsible for funneling huge amounts of funding away from domain experts (effective altruism in practice means "Oxford math PhD writes a book report about a social sciences problem they've only read about and then defunds all the NGOs").
They're responsible for moving all the AI safety funding away from disparate impact measures to "save us from skynet" fantasies.
Eliezer did once state his intentions to build "friendly AI", but seems to have been thwarted by his first order reasoning about how AI decision theory should work being more important to him than building something that actually did work, even when others figured out the latter bit.
Anything that can't be self-improving or superhuman almost certainly isn't worthy of the moniker "AI". A true AI will be born into a world that has already unlocked the principles of intelligence. Humans in that world would be capable themselves of improving AI (slowly), but the AI itself will (presumably) run on silicon and be a quick thinker. It will be able to self-improve, rapidly at first, and then more rapidly as its increased intelligence allows for even quicker rates of improvement. And if not superhuman initially, it would soon become so.
We don't even have anything resembling real AI at the moment. Generative models are probably some blind alley.
I think that the OP's point was that it doesn't matter whether it's "real AI" or not. Even if it's just a glorified auto-correct system, it's one that has the clear potential to overturn our information/communication systems and our assumptions about individuals' economic value.
That's going to be a swift kick to your economy, no matter how strong.
I recently had an LLM write a function for me that, for a given RGB color value and another integer n > 1, returned to me a list of n RGB colors equidistantly and sequentially spaced around the color wheel starting at the specified RGB value.
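For readers curious what that kind of task looks like, here is a minimal sketch of one way to do it (my own illustration, not the commenter's actual code; the function name and the HSV-rotation approach are assumptions), in Python:

    import colorsys

    def color_wheel(rgb, n):
        # Return n RGB tuples (0-255) spaced evenly around the hue wheel,
        # starting at the given rgb value.
        if n < 2:
            raise ValueError("n must be greater than 1")
        r, g, b = (c / 255.0 for c in rgb)
        h, s, v = colorsys.rgb_to_hsv(r, g, b)
        colors = []
        for i in range(n):
            hue = (h + i / n) % 1.0  # step the hue by equal fractions of the wheel
            ri, gi, bi = colorsys.hsv_to_rgb(hue, s, v)
            colors.append((round(ri * 255), round(gi * 255), round(bi * 255)))
        return colors

    # e.g. color_wheel((255, 0, 0), 3) -> [(255, 0, 0), (0, 255, 0), (0, 0, 255)]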
For a given system I'm creating, I might have lots of such tasks. That collection of tasks is something that did matter and took some education and skill to complete well.
In the pre-LLM world - assuming I was too busy to handle all the tasks myself - I would have delegated them to a junior software engineer.
In a post-LLM world, I just ask the LLM to implement tasks like that, and I review the code for correctness.
That seems like a pretty transformational change to me, and not just some kind of "rot" being removed from the process.
Instead, unless there's a single winner, we will probably see the knowledge of how to train big LLMs and make them perform well diffuse throughout a large pool of AI researchers, with the hardware to train models reasonably close to the SotA becoming quite accessible.
I think the people who will benefit will be the owners of ordinary but hard-to-dislodge software firms, maybe those that have a hardware component. Maybe firms like Apple, maybe car manufacturers. Pure software firms might end up having AI assisted programmers as competitors instead, pushing margins down.
This is of course pretty speculative, and it's not reality yet, since firms like Cursor etc. have high valuations, but I think this is what you'd get from the probable pressure if it keeps getting better.
I suspect you'll see a few people "win" or strike it rich with AI, the vast majority will simply be left with a big bill.
The problem is the railroads were purchased by the winners. Who turned out to be the existing winners. Who then went on to continue to win.
On the one hand, I guess that's just life here in reality.
On the other, man, reality sucks sometimes.
Imagine if they were bought by losers.
It turns out that boom-and-bust capitalism isn’t great for building something that needs to evolve over centuries.
Perhaps American AI efforts will one day be viewed similarly. “Yeah, they had an early rush, lots of innovation, high valuations, and robber barons competing. Today it’s just stale old infra despite the high-energy start.”
Boeing tries to compete with Airbus, whose engineers make maybe 70% of what they do in the US, and on top of that it has made some really bad decisions. Many aircraft companies are profitable.
Humans are also freight, of course. It is not like the rail companies really care about what kind of freight is on the trains, so long as it is what the customer considers most important (read: most profitable). Humans are deprioritized exactly because they aren't considered important by the customer, which is to say that the customer, who is also the freight in this case, doesn't really want to be on a train in the first place. The customer would absolutely ensure priority (read: pay more, making it known that they are priority) if they wanted to be there.
I understand the train geeks on the internet find it hard to believe that not everyone loves trains like they do, but the harsh reality is that the average American Joe prefers other means of transportation. Should that change in the future, the rail network will quickly accommodate. It has before!
For what it's worth, I like traveling by train and do so whenever I can, but I'm an outlier. Most Americans look at the travel times and laugh at the premise of choosing a train over a plane. And when I say they look at the travel times, I don't mean they actually bother to look up train routes. They just know that airplanes are several times faster. Delays suffered by trains never get factored into the decision because trains aren't taken seriously in the first place.
You are comparing the USA today to its robber baron phase; who's to say China isn't in the same phase? Lots of money is being thrown at new railroads, and you have China's leaders and its best and brightest managers chasing that money. What happens when it goes into low-budget/maintenance mode?
Nonsense. The US has the largest freight rail system in the world, and is considered to have the most efficient rail system in the world to go along with it.
There isn't much in the way of passenger service, granted, but that's because people in the US aren't, well, poor. They can afford better transportation options.
> It turns out that boom-and-bust capitalism isn’t great for building something that needs to evolve over centuries.
It initially built out the passenger rail just fine, but then evolution saw better options come along. Passenger rail disappeared because it no longer served a purpose. It is not like, say, Japan where the median household income is approaching half that of Mississippi and they hold on to rail because that's what is affordable.
This is such a misguided view... Trains (when done right) aren't "for the poor"; they are a great transportation option that beats both airplanes and cars. In Poland, which isn't even close to the best, you can travel between big cities at speeds above 200 km/h, and you can use regional rail for your daily commute, both of those options being very comfortable and convenient, much more convenient than traveling by car.
What gives you the idea that rail would be preferable to flying for the NYC to LAS route if only it existed? Even as the crow flies it is approximately 4,000 km, meaning that at 200 km/h you are still looking at around 20 hours of travel in an ideal case. Instead of just 5 hours by plane. If you're poor an additional 15 hours wasted might not mean much, but when time is valuable?
Why would you constrain the route to within a specific state? In fact, right now a high-speed rail line is being planned between Las Vegas and LA.
But outside of Nevada, there are many equivalent distance routes in the US between major population centers, including:
Chicago/Detroit
Dallas/Houston
LA/SF
Atlanta/Charlotte
Right now and since 1979!
I'll grant you that people love to plan, but it turns out that they don't love putting on their boots and picking up a shovel nearly as much.
> But outside of Nevada, there are many equivalent distance routes in the US between major population centers, including
And there is nothing stopping those lines from being built other than the lack of will to do it. As before, the will doesn't exist because better options exist.
There is no magic in this world, as you seem to want to pretend. All of those things simply boil down to people. Property rights only exist because people say they do, environmental reviews only exist because people say they do, skilled workers are, well, literally people, and the necessary capital is already created. If the capital is being directed to other purposes, it is only because people decided those purposes are more important. All of this can change if the people want it to.
> HN users sometimes have this weird fantasy that with enough political will it's possible to make enormous changes but that's simply not how things operate in a republic with a dual sovereignty system.
Hell, the republic and dual sovereignty system itself only exists because that's what people have decided upon. Believe it or not, it wasn't enacted by some mythical genie in the sky. The people can change it all on a whim if the will is there.
The will isn't there of course, as there is no reason for the will to be there given that there are better options anyway, but if the will was there it'd be done already (like it already is in a few corners of the country where the will was present).
There has been continuous regularly scheduled passenger service between Chicago and Detroit since before the Civil War. The current Amtrak Wolverine runs 110 MPH (180 KPH) for 90% of the route, using essentially the same trainset that Brightline plans to use.
They’ve made a lot of investments since the 1990s. It’s much improved, though perhaps not as nice as during the golden years when it was a big part of the New York Central system (from the 1890s to the 1960s they had daily trains that went Boston/NYC/Buffalo/Detroit/Chicago through Canada from Niagara Falls to Windsor).
During the first Trump administration, Amtrak announced a route that would go Chicago/Detroit/Toronto/Montreal/Quebec City using that same rail tunnel underneath the Detroit River. It was supposed to start by 2030. We’ll see if it happens.
I've taken a Chinese train from Zhengzhou, in central China, to Shenzhen, and it was fantastic. Cheap, smooth, fast, lots of legroom, easy to get on and off or walk around to the dining car. And, there's a thing where boiling hot water is available, so everyone brings instant noodle packs of every variety to eat on the train.
Can't even imagine what the US would be like if we had that kind of thing.
Getting to the airport in most major cities takes an hour, and then there's the whole pre-flight security theatre, and the flights themselves are rarely pleasant. To add insult to injury, in the US it's usually a $50 cab ride to the airport and there are $28 ham-and-cheese sandwiches in the terminal if you get hungry.
In China and Japan the trains are centrally located, getting on takes ten minutes, and the rides are extremely comfortable. If such a thing existed in the US I think it would be extremely popular. Even if it was just SF-LA-Vegas.
Do you mean beginning in the same city? If so, that's downright hilarious. I live 50 miles clear of the city, out in the middle of nowhere, and can be to the airport in the city in less than an hour.
> If such a thing existed in the US I think it would be extremely popular.
I don't know, if people willingly spend an hour getting from one point to another in the same city, they aren't apt to be concerned about how they get from one city to another. I expect they don't put much thought into anything.
Anyway, New York to Las Vegas spans most of the US. There are plenty of routes in the US where rail would make sense: between Boston, New Haven, New York City, Philadelphia, Baltimore, and Washington, D.C., which are served by the Amtrak Acela. Or perhaps Miami to Orlando, which has a privately funded high-speed rail connection called Brightline that runs at 200 km/h and whose ridership was triple what had been expected at launch.
I am, thankfully, not.
> Which has a privately funded high speed rail connection called Brightline that runs at 200 km/h
Which proves that when the will is there, it will be done. The only impediment in other places is simply the people not wanting it. If they wanted it, it would already be there.
The US has been here before. It built out a pretty good, even great, passenger rail network a couple of centuries ago when the people wanted it. It eventually died out simply because the people didn't want it anymore.
If they want it again in the future, it will return. But as for the moment...
1. Freight is easier to manage and has better economics on a dedicated network. The US freight network is extremely efficient as others have pointed out. Other networks, e.g., Germany, instead prioritized passenger service. In Germany rail moves a small proportion of freight (19%) compared to trucks. [0] It's really noticeable on the Autobahn and unlike the US where a lot of truck traffic is intermodal loads.
2. The US could have better rail service by investing in passenger networks. Instead we have boondoggles like the California high-speed rail project which has already burned through 10s of billions of dollars with no end in sight. [1] Or the New Jersey Transit system which I had the pleasure to ride on earlier today to Newark Airport. It has pretty good coverage but needs investment.
[0] https://dhl-freight-connections.com/en/trends/global-freight...
[1] https://en.wikipedia.org/wiki/California_High-Speed_Rail
How so?
> The US freight network is extremely efficient as others have pointed out.
'Others' being literally the comment you replied to.
> The US could have better rail service by investing in passenger networks.
Everything there is can be improved, of course, but to what significance here?
https://www.mlex.com/mlex/antitrust/articles/2355294/us-rail...
Since nobody really wants passenger rail in the US, they don't put in the effort to see that it exists (outside of some particular routes where they do want it). In many other countries, people do want broad access to passenger rail (because that's all they can afford), so they put in the effort to have it.
~200 years ago the US did want passenger rail, they put in the work to realize it, and it did have a pretty good passenger rail network at the time given the period. But, again, better technology came along, so people stopped maintaining/improving what was there. They could do it again if they wanted to... But they don't.
It's not social media. It's a model the capitalists train and own. Best the rest of us will have access to are open source ones. It's like the difference between trying to go into court backed by google searches as opposed to Lexis/Nexis. You're gonna have a bad day with the judge.
Here's hoping the open source stuff gets trained on quality data rather than reddit and 4chan. Given how the courts are leaning on copyright, and lack of vetted data outside copyright holder remit, I'm not sanguine about the chances of parity long term.
Have you ever read Scott Alexander's blog (Slate Star Codex, now Astral Codex Ten)? It's full of doubt and self-questioning. The guy even keeps a public list of his mistakes:
https://www.astralcodexten.com/p/mistakes
I'll admit my only touchpoint to the "rationalist community" is this blog, but I sure don't get "full of themselves" from that. Quite the contrary.
I get it, I enjoyed being told I'm a super genius always right quantum physicist mathematician by the girls at Stanford too. But holy hell man, have some class, maybe consider there's more good to be done in rural Indiana getting some dirt under those nails..
I find it sadly hilarious to watch academic types fight over meaningless scraps of recognition like toddlers wrestling for a toy.
That said, I enjoy some of the rationalist blog content and find it thoughtful, up to the point where they bravely allow their chain of reasoning to justify antisocial ideas.
In real life, the conversation too often ends up being, "This has to be wrong, and you're an obnoxious nerd for bothering me with it," versus, "You don't understand my argument, so I am smarter, and my conclusions are brilliantly subversive."
It frequently reduces complex problems into comfortable oversimplifications.
Maybe you don't think that is real wisdom, and maybe that's sort of your point, but then what does real wisdom look like? Should wisdom make you considerate of the multiple contexts it does and doesn't affect? Maybe the issue is we need to better understand how to evaluate and use wisdom. People who truly understand a piece of wisdom should communicate deeply rather than parroting platitudes.
Also to be frank, wisdom is a way of controlling how others perceive a problem, and is a great way to manipulate others by propping up ultimatums or forcing scope. Much of past wisdom is unhelpful or highly irrelevant to modern life.
e.g. "Good things come to those who wait."
Passive waiting rarely produces results. Initiative, timing, and strategic action tend to matter more than patience.
but I don't know enough about it, I'm just trolling.
Both our biology and other complex human affairs like societies and cultures evolved organically over long periods of time, responding to their environments and their competitors, building bit by bit, sometimes with an explicit goal but often without one.
One can learn a lot from unicellular organisms, but won’t probably be able to reason from them all the way to an elephant. At best, if we are lucky, we can reason back from the elephant.
This is true for science and rationalism itself. Part of the problem is that "being rational" is a social fashion or fad. Science is immensely useful because it produces real results, but we don't really do it for a rational reason - we do it for reasons of cultural and social pressures.
We would get further with rationalism if we remembered or maybe admitted that we do it for reasons that make sense only in a complex social world.
I originally came to this critique via Heidegger, who argues that enlightenment thinking essentially forgets / obscures Being itself, a specific mode of which you experience at this very moment as you read this comment, which is really the basis of everything that we know, including science, technology, and rationality. It seems important to recover and deepen this understanding if we are to have any hope of managing science and technology in a way that is actually beneficial to humans.
Thanks, I might actually go do this :) I recently got exposed to a very persuasive form of "rationalism is a social construct" by reading "Alchemy" by Rory Sutherland. But a theme in these comments is that a lot of these ideas are just recycled from philosophers and that the philosophers were less likely to try and induct you into a cult.
Both advocate the principle of putting out a bold conjecture, test, learn, adapt.
Reduce a computer's behavior to its hardware design, state of RAM, and physical laws. All those voltages make no sense until you come up with the idea of stored instructions, division of the bits into some kind of memory space, etc. You may say, you can predict the future of the RAM. And that's true. But if you can't read the messages the computer prints out, then you're still doing circuits, not software.
Is that reductionist approach providing valuable insight? YES! Is it the whole picture? No.
This warning isn't new, and it's very mainstream. https://www.tkm.kit.edu/downloads/TKM1_2011_more_is_differen...
What you are mentioning is called western reductionism by some.
In the western world it does map to Plato etc, but it is also a problem if you believe everything is reducible.
Under the assumption that all models are wrong, but some are useful, it helps you find useful models.
If you consider Laplacian determinism as a proxy for reductionism, Cantor diagonalization and the standard model of QM are counterexamples.
Russell's paradox is another lens into the limits of Plato, which the PEM (principle of the excluded middle) assumption is based on.
Those common a priori assumptions have value, but are assumptions which may not hold for any particular problem.
The largest of the sporadic finite simple groups (simple groups are themselves objects of study as a means of classifying other, finite but non-simple groups, which can always be broken down into simple groups) is the Monster Group -- it has order 808017424794512875886459904961710757005754368000000000, and cannot be reduced to simpler "factors". It has a whole bunch of very interesting properties which can thus only be understood by analyzing the whole object in itself.
Now whether this applies to biology, I doubt, but it's good to know that limits do exist, even if we don't know exactly where they'll show up in practice.
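For anyone who wants to check that 54-digit number rather than take it on faith, here is a minimal Python sketch (assuming the standard published prime factorization of the Monster group's order, which the comment above does not state):

    # Sanity check: multiply out the standard prime factorization of the Monster
    # group's order and compare it with the figure quoted above.
    factors = {2: 46, 3: 20, 5: 9, 7: 6, 11: 2, 13: 3,
               17: 1, 19: 1, 23: 1, 29: 1, 31: 1, 41: 1, 47: 1, 59: 1, 71: 1}
    order = 1
    for prime, exponent in factors.items():
        order *= prime ** exponent
    print(order == 808017424794512875886459904961710757005754368000000000)  # True

Being "simple" here means the group has no nontrivial normal subgroups: its order still factors into primes, but the group itself cannot be decomposed into smaller simple building blocks.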
Biologists stand out because they have already given up on that idea. They may still seek to simplify complex things by refining principles of some kind, but it's a "whatever stories work best" approach. More Feyerabend, less Popper. Instead of axioms they have these patterns that one notices after failing to find axioms for a while.
Actually, neither do Rationalists, but instead they cosplay at being rational.
What do you mean? The biologists I've had the privilege of working with absolutely do try to. Obviously some work at a higher level of abstraction than others, but I've not met any who apply any magical thinking to the actual biological investigation. In particular (at least in my milieu), I have found that the typical biologist is more likely to consider quantum effects than the typical physicist. On the other hand (again, from my limited experience), biologists do tend to have some magical thinking about how statistics (and particularly hypothesis testing) works, but no one is perfect.
Reasoning from first principles cannot span very far in reality, as for starters the complexity of the argument quickly overwhelms our capacity for it. Its numerous other limits have been well-documented.
Logicomix, Gödel Escher Bach are some common entry points.
I'm kinda new here but am surprised I haven't seen this book mentioned more. Maybe I just haven't seen it or it's old news but it seems right up HNs alley.
Examples that come to mind: statistical modelling (reduction to nonparametric models), protein folding (reduction to quantum chemistry), climate/weather prediction (reduction to fluid physics), human language translation (reduction to neural networks).
Reductionism is not that useful as a theory building tool, but reductionist approaches have a lot of practical value.
I am not sure in what sense folding simulations are reducible to quantum chemistry. There are interesting 'hybrid' approaches where some (limited) quantum calculations are done for a small part of the structure - usually the active site I suppose - and the rest is done using more standard molecular mechanics/molecular dynamics approaches.
Perhaps things have progressed a lot since I worked in protein bioinformatics. As far as I know, even extremely short simulations at the quantum level were not possible for systems with more than a few atoms.
If you're looking for insults, and declaring the whole conversation a "culture war" as soon as you think you found one, (a) you'll avoid plenty of assholes, but (b) in the end you will read whatever you want to read, not what the thoughtful people are actually writing.
Maybe, but generally speaking, if I think people are playing around with technology which a lot of smart people think might end humanity as we know it, I would want them to stop until we are really sure it won't. Like, "less than a one in a million chance" sure.
Those are big stakes. I would have opposed the Manhattan Project on the same principle had I been born 100 years earlier, when people were worried the bomb might ignite the world's atmosphere. I oppose a lot of gain-of-function virus research today too.
That's not a point you have to be a rationalist to defend. I don't consider myself one, and I wasn't convinced by them of this - I was convinced by Nick Bostrom's book Superintelligence, which lays out his case with most of the assumptions he brings to the table laid bare. Way more in the style of Euclid or Hobbes than ... whatever that is.
Above all I suspect that the Internet rationalists are basically a 30 year long campaign of "any publicity is good publicity" when it comes to existential risk from superintelligence, and for what it's worth, it seems to have worked. I don't hear people dismiss these risks very often as "You've just been reading too many science fiction novels" these days, which would have been the default response back in the 90s or 2000s.
I've recently stumbled across the theory that "it's gonna go away, just keep your head down" is the crisis response that was taught to the generation that lived through the Cold War, so that's how they act. That bit was in regard to climate change, but I can easily see it applying to AI as well (even though I personally believe that the whole "AI eats the world" arc is only so popular due to the marketing efforts of the corresponding industry).
I don't buy the marketing angle, because it doesn't actually make sense to me. Fear draws eyeballs, sure, but it just seems otherwise nakedly counterproductive, like a burger chain advertising itself on the brutality of its factory farms.
It’s rather more like the burger chain decrying the brutality as a reason for other burger chains to be heavily regulated (don’t worry about them; they’re the guys you can trust and/or they are practically already holding themselves to strict ethical standards) while talking about how delicious and juicy their meat patties are.
I agree about the general sentiment that the technology is dangerous, especially from a “oops, our agent stopped all of the power plants” angle. Just... the messaging from the big AI services is both that and marketing hype. It seems to get people to disregard real dangers as “marketing” and I think that’s because the actual marketing puts an outsized emphasis on the dangers. (Don’t hook your agent up to your power plant controls, please and thank you. But I somehow doubt that OpenAI and Anthropic will not be there, ready and willing, despite the dangers they are oh so aware of.)
I'm glad you ran with my burger chain metaphor, because it illustrates why I think it doesn't work for an AI company to intentionally try and advertise themselves with this kind of strategy, let alone ~all the big players in an industry. Any ordinary member of the burger-eating public would be turned off by such an advertisement. Many would quickly notice the unsaid thing; those not sharp enough to would probably just see the descriptions of torture and be less likely on the margin to go eat there instead of just, like, safe happy McDonald's. Analogously we have to ask ourselves why there seems to be no Andreessen-esque major AI lab that just says loud and proud, "Ignore those lunatics. Everything's going to be fine. Buy from us." That seems like it would be an excellent counterpositioning strategy in the 2025 ecosystem.
Moreover, if the marketing theory is to be believed, these kinds of pseudo-ads are not targeted at the lowest common denominator of society. Their target is people with sway over actual regulation. Such an audience is going to be much more discerning, for the same reason a machinist vets his CNC machine advertisements much more aggressively than, say, the TVs on display at Best Buy. The more skin you have in the game, the more sense it makes to stop and analyze.
Some would argue the AI companies know all this, and are gambling on the chance that they are able to get regulation through and get enshrined as some state-mandated AI monopoly. A well-owner does well in a desert, after all. I grant this is a possibility. I do not think the likelihood of success here is very high. It was higher back when OpenAI was the only game in town, and I had more sympathy for this theory back in 2020-2021, but each serious new entrant cuts this chance down multiplicatively across the board, and by now I don't think anyone could seriously pitch that to their investors as their exit strategy and expect a round of applause for their brilliance.
Note: my assumption is not that the bomb would not have been developed, only that by opposing the Manhattan Project the USA would not have developed it first.
Take this all with more than a few grains of salt. I am by no means an expert in this territory. But I don't shy away from thinking about something just because I start out sounding like an idiot. Also take into account this is post-hoc, and 1940 Manhattan Project me would obviously have had much, much less information to work with about how things actually panned out. My answer to this question should be seen as separate to the question of whether I think dodging the Manhattan Project would have been a good bet, so to speak.
Most historians agree that Japan was going to lose one way or another by that point in the war. Truman argued that dropping the bomb killed fewer people in Japan than continuing, which I agree with, but that's a relatively small factor in the calculation.
The much bigger factor is that the success of the Manhattan Project as an ultimate existence proof for the possibility of such weaponry almost certainly galvanized the Soviet Union to get on the path of building it themselves much more aggressively. A Cold War where one side takes substantially longer to get to nukes is mostly an obvious x-risk win. Counterfactual worlds can never be seen with certainty, but it wouldn't surprise me if the mere existence proof led the USSR to actually create their own atomic weapons a decade faster than they would have otherwise, by e.g. motivating Stalin to actually care about what all those eggheads were up to (much to the terror of said eggheads).
This is a bad argument to advance when we're arguing about e.g. the invention of calculus, which as you'll recall was coinvented in at least 2 places (Newton with fluxions, Liebniz with infinitesimals I think), but calculus was the kind of thing that could be invented by one smart guy in his home office. It's a much more believable one when the only actors who could have made it were huge state-sponsored laboratories in the US and the USSR.
If you buy that, that's 5 to 10 extra years the US would have had in order to do something like the Manhattan Project, but in much more controlled, peace-time environments. The atmosphere-ignition prior would have been stamped out pretty quickly by later calculations of physicists to the contrary, and after that research would have gotten back to full steam ahead. I think the counterfactual US would have gotten onto the atom bomb in the early 1950s at the absolute latest with the talent they had in an MP-less world. Just with much greater safety protocols, and without the Russians learning of it in such blatant fashion. Our abilities to detect such weapons being developed elsewhere would likely have also stayed far ahead of the Russians. You could easily imagine a situation where the Russians finally create a weapon in 1960 that was almost as powerful as what we had cooked up by 1950.
Then you're more or less back to an old-fashioned deterrence model, with the twist that the Russians don't actually know exactly how powerful the weapons the US has developed are. This is an absolute good: You can always choose to reveal just a lower bound of how powerful your side is, if you think you need to, or you can choose to remain totally cloaked in darkness. If you buy the narrative that the US were "the good guys" (I do!) and wouldn't risk Armageddon just because they had the upper hand, then this seems like it can only make the future arc of the (already shorter) Cold War all the safer.
I am assuming Gorbachev or someone still called this whole circus off around the late 80s-early 90s. Gotta trim the butterfly effect somewhere.
I see this in rationalist spaces too – it doesn't really make sense for people to talk about things that they believe in strongly but that 95%+ of the public also believe in (like the existence of air), or that they don't have a strong opinion on.
I am a very vocal doomer on AI because I predict with high probability that it's going to be very bad for humanity, and this is an opinion which, although shared by some, is quite controversial and probably held by only 30% of the public. Given the importance of the subject, my confidence, and the fact that I feel the vast majority of people are either wrong or significantly underweighting catastrophic risks, I have to be vocal about it.
Do I acknowledge I might be wrong? Sure, but for me the probability is low enough that I'm comfortable making very strong and unqualified statements about what I believe will happen. I suspect others in the rationalist community like Eliezer Yudkowsky think similarly.
Also, when you say you have a strong belief, does that mean you have emptied your retirement accounts and you are enjoying all you can in the moment until the end comes?
For example, I won't cross the street without 99.99% confidence that I will survive. I cross streets so many times that a lower threshold like 99% would look like insanely risky dart-into-traffic behaviour.
If an asteroid is heading for earth, then even a 25% probability of apocalyptic collision is enough that I would call it very high, and spend almost all my focus attempting to prevent that outcome. But I wouldn't empty my retirement account for the sake of hedonism because there's still a 75% chance I make it through and need to plan my retirement.
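To make the threshold point concrete, here is a rough sketch (the per-event probabilities and the number of crossings are made-up assumptions, purely for illustration): expected harm scales linearly with how often you take the risk, so the acceptable per-event confidence has to scale with exposure.

    # Expected number of bad outcomes = per-event risk x number of exposures.
    # Both inputs are illustrative assumptions, not data from the discussion above.
    def expected_failures(per_event_risk, exposures):
        return per_event_risk * exposures

    crossings = 100_000                              # assumed lifetime street crossings
    print(expected_failures(1 - 0.99, crossings))    # 99% confidence  -> ~1000 expected incidents
    print(expected_failures(1 - 0.9999, crossings))  # 99.99% confidence -> ~10 expected incidents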
In my view, rationalists are often "Bayesian" in that they are constantly looking for updates to their model. Consider that the default approach for most humans is to believe a variety of things and to feel indignant if someone holds differing views (the adage never discuss religion or politics). If one adopts the perspective that their own views might be wrong, one must find a balance between confidently acting on a belief and being open to the belief being overturned or debunked (by experience, by argument, etc.).
Most rationalists I've met enjoy the process of updating or discarding beliefs in favor of ones they consider more correct. But to be fair to one's own prior attempts at rationality, one should try reasonably hard to defend one's current beliefs so that they can be fully and soundly replaced if necessary, without leaving any doubt that they were insufficiently supported, etc.
To many people (the kind of people who never discuss religion or politics) all this is very uncomfortable and reveals that rationalists are egotistical and lacking in humility. Nothing could be further from the truth. It takes tremendous humility to assume that one's own beliefs are quite possibly wrong. The very name of Eliezer's blog "Less Wrong" makes this humility quite clear. Scott Alexander is also very open with his priors and known biases / foci, and I view his writing as primarily focusing on big picture epistemological patterns that most people end up overlooking because most people are busy, etc.
One final note about the AI-dystopianism common among rationalists -- we really don't know yet what the outcome will be. I personally am a big fan of AI, but we as humans do not remotely understand the social/linguistic/memetic environment well enough to know for sure how AI will impact our society and culture. My guess is that it will amplify rather than mitigate differences in innate intelligence in humans, but that's a tangent.
I think to some, the rationalist movement feels like historical "logical positivist" movements that were reductionist and socially Darwinian. While it is obvious to me that the rationalist movement is nothing of the sort, some people view the word "rationalist" as itself full of the implication that self-proclaimed rationalists consider themselves superior at reasoning. In fact they simply employ a heuristic for considering their own rationality over time and attempting to maximize it -- this includes listening to "gut feelings" and hunches, etc., in case you didn't realize.
If you want to see how human and tribal rationalists are, go criticize the movement as an outsider. Or try to write a mildly critical NYT piece about them and watch how they react.
It's rather different for a community to say that's a standard they aspire to, which is a lot less ridiculously grandstanding of a position IMO.
>The quantum physicist who’s always getting into arguments on the Internet, and who’s essentially always right
“Guy Who Is Always Right” as a role in a social group is a terrible target, yet it somehow seems like what rationalists are aiming for every time I read any of their blog posts
The doomer utilitarian arguments often seem to involve some sort of infinity or really large numbers (much like EAs) which result in various kinds of philosophical mugging.
In particular, the doomer plans invariably result in some need for draconian centralised control. Some kind of body or system that can tell everyone what to do with (of course) doomers in charge.
“If X, then surely Y will follow! It’s a slippery slope! We can’t allow X!”
They call out the name of the fallacy they are committing BY NAME and think that it somehow supports their conclusion?
Rationalists, mostly self-identified.
> how can they estimate the probability of a future event based on zero priors and a total lack of scientific evidence?
As best as they can, because at the end of the day you still need to make decisions (You can of course choose to do nothing and ignore the risk, but that's not a safe, neutral option). Which means either you treat it as if it had a particular probability, or you waste money and effort doing things in a less effective way. It's like preparing for global warming or floods or hurricanes or what have you - yes, the error bars are wide, but at the end of the day you take the best estimate you can and get on with it, because anything else is worse.
Which is to say that you've made an estimate that the probability is, IDK, <5%, <1%, or some such.
Rationalism is an ideal, yet those who label themselves as such do not realize their base of knowledge could be wrong.
They lack an understanding of epistemology, and it gives them confidence. I wonder if these 'rationalists' are all under age 40; they haven't seen themselves fooled yet.
Do you have specific examples in mind? (And not to put too fine a point on it, do you think there's a chance that you might be wrong about this assertion? You've expressed it very confidently...)
It has a priesthood that speaks for god (quantum). It has ideals passed down from on high. It has presuppositions about how the universe functions which must not be questioned. And it's filled with people happy that they are the chosen ones and they feel sorry for everyone that isn't enlightened like they are.
In the OP's article, I had to chuckle a little when they started the whole thing off by mentioning how other Rationalists recognized them as a physicist (they aren't). Then they proceeded to talk about "quantum cloning theory".
Therein is the problem. A bunch of people vociferously speaking outside their expertise confidently and being taken seriously by others.
On "considering what should be the baseline assumption":
https://www.lesswrong.com/w/epistemology
https://www.lesswrong.com/w/priors, particularly https://www.lesswrong.com/posts/hNqte2p48nqKux3wS/trapped-pr...
On the idea that "rationalists think that they can just apply rationality infinitely to everything":
https://www.lesswrong.com/w/bounded-rationality
On the critique that rationalists are blind to the fact that "reason isn't the only thing that's important", generously reworded as "reason has to be grounded in a set of human values", some of the most philosophically coherent stuff I see on the internet is from LW:
https://www.lesswrong.com/w/metaethics-sequence
https://www.lesswrong.com/w/human-values
On "systematically plan to validate":
https://www.lesswrong.com/w/rationality-verification
https://www.lesswrong.com/w/making-beliefs-pay-rent
On "what could hold true for one moment could easily shift":
https://www.lesswrong.com/w/black-swans
I support anyone trying to form rational pictures of the universe and humanity. If the LessWrong community approach seems to make sense and is enriching to your understanding of the world then I am happy for you. But, every time I try to take a serious delve into LessWrong, and I have done it multiple times over the years, it sets off my cult/scam alerts.
It reminds me of Keir Starmer's Labour, calling themselves "the adults in the room".
It's a cheap framing trick, belying an emptiness in the people using it.
Anyone who finds themselves mentioning Aella as one of the members taking the movement "in new directions" should stop and ask whether they are the insightful, well-rounded person with much to say about all sorts of things, or just a very gifted computer scientist who is still not well rounded enough to recognize a legitimate dimwit like Aella when they see one.
And in general, I do feel like they suffer from "I am a genius at X, so my take on Y should be given special consideration." If you're in a group where everyone's talking about physics and almost none of them are physicists, then run. I'm still surprised at how little consideration these people give philosophy and the centuries of its written thought. Some engineers spend a decade or more building up math and science skills to the point that they can be effective practitioners, but then they think they can hop right into philosophical discussions with no background. Then when they try to analyze a problem philosophically, their brief (or no) experience means that they reason themselves into dead-end positions like philosophical skepticism that were tackled in a variety of ways over the past centuries.
https://www.lesswrong.com/w/epistemology
https://www.lesswrong.com/w/priors
https://www.lesswrong.com/posts/2x67s6u8oAitNKF73/ (a post noting that the foundational problems in mech interp are grounded in philosophical questions about representation ~150 years old)
https://www.lesswrong.com/w/consciousness (the page on consciousness first citing the MIT and Stanford encyclopedias, then providing a timeline from Democritus, through Descartes, Hobbes,... all the way to Nagel, Chalmers, Tegmark).
There is also sort of a meme of interest in Thomas Kuhn: https://www.lesswrong.com/posts/HcjL8ydHxPezj6wrt/book-revie...
See also these attempts to refer and collate prior literature: https://www.lesswrong.com/posts/qc7P2NwfxQMC3hdgm/rationalis...
https://www.lesswrong.com/posts/xg3hXCYQPJkwHyik2/the-best-t...
https://www.lesswrong.com/posts/SXJGSPeQWbACveJhs/the-best-t...
https://www.lesswrong.com/posts/HLJMyd4ncE3kvjwhe/the-best-r...
https://www.lesswrong.com/posts/bMmD5qNFKRqKBJnKw/rigorous-p...
Now, one may disagree with the particular choices or philosophical positions taken, but it's pretty hard to say these people are ignorant or not trying to be informed about what prior thinkers have done, especially compared to any particular reference culture, except maybe academics.
As for the thing about Aella, I feel she's not as much of a thought leader as you've surmised, and I think doesn't claim to be. My personal view is that she does some interesting semi-rigorous surveying that is unlikely to be done elsewhere. She's not a scientist/statistician or a total revolutionary but her stuff is not devoid of informational value either. Some of her claims are hedged adequately, some of them are hedged a bit inadequately. You might have encountered some particularly (irrationally?) ardent fans.
A good example of the failing of "rationality" is Zionism. There are plenty of rationalists who are Zionists, including Scott Aaronson (who I incidentally think is not a very serious thinker). I think I can give a very simple rational argument for why making a colonial ethnostate is immoral and dangerous, and they have their own rational reasons for supporting it. Often, the arguments, including Scott's, are purely self interest. Not "rational."
>My personal view is that she does some interesting semi-rigorous surveying
Posting surveys on Twitter, as a sex worker account, is so unrigorous that to take it seriously is very concerning. On top of that, she lives in a bubble of autistic rationality people and tries to make general statements about humanity. And on top of that, half her outrageous statements are obvious attempts at bargaining with CSAM she experienced that she insists didn't traumatize her. Anyone who takes her seriously in any regard is a fool.
Religions: "Catholic" actually means "universal" (implication: all the real Christians are among our number). "Orthodox" means "teaching the right things" (implication: anyone who isn't one of us is wrong). "Sunni" means "following the correct tradition" (implication: anyone who isn't one of us is wrong").
Political parties: "Democratic Party" (anyone who doesn't belong doesn't like democracy). "Republican Party" (anyone who doesn't belong wants kings back). "Liberal Party" (anyone else is against freedom).
In the world of software, there's "Agile" (everyone else is sluggish and clumsy). "Free software" (as with the liberals: everything else is opposed to freedom). People who like static typing systems tend to call them "strong" (everyone else is weak). People who like the other sort tend to call them "dynamic" (everyone else is rigid and inflexible).
I hate it too, but it's so very very common that I really hope it isn't right to say that everyone who does it is empty-headed or empty-hearted.
The charitable way to look at it: often these movements-and-names come about when some group of people picks a thing they particularly care about, tries extra-hard to do that thing, and uses the thing's name as a label. The "Rationalists" are called that because the particular thing they chose to focus on was rationality; maybe they do it well, maybe not, but it's not so much "no one else is rational" as "we are trying really hard to be as rational as we can".
(Not always. The term "Catholic" really was a power-grab: "we are the universal church, those other guys are schismatic heretics". In a different direction: the other philosophical group called "Rationalists" weren't saying "we think rationality is really important", they were saying "knowledge comes from first-principles reasoning" as opposed to the "Empiricists" who said "knowledge comes from sense experience". Today's "Rationalists" are actually more Empiricist than Rationalist in that sense, as it happens.)
The Catholic Church follows the Melchisedec order (Heb. v; vi; vii). The term Catholic (καθολικη) was used as early as the first century; it is an adjective which describes Christianity.
The oldest record that we have to this day is the Epistle of Ignatius to the Smyrnaeans Chapter 8 where St. Ignatius writes "ωσπερ οπου αν η Χριστος Ιησους, εκει η καθολικη εκκλησια". (just as where Jesus Christ is, there is the Catholic Church.):
https://greekdoc.com/DOCUMENTS/early/i-smyrnaeans.html
The protestors in the 16th c. called themselves Protestants, so that's what everyone calls them. English heretic-schismatics didn't want to share the opprobrium so they called themselves English, hence Anglican. In USA they weren't governed congregationally like the Congregationalists, or by presbyters like the Presbyterians, but by bishops, so they called themselves Bishop-ruled, or Episcopalians. (In fact, Katharine Jefferts-Schori changed the name of the denomination from The Protestant Episcopal Church to The Episcopal Church recently.)
The orthodox catholics called themselves Orthodox to distance themselves from the unorthodox of which there were plenty, spawning themselves off in the wake of practically every ecumenical council.
Lutherans in the USA name themselves after Father Martin Luther, some Augustinian priest from Saxony who protested against the Church's hypocritical corruption at the time, and the controversy eventually got out of hand and precipitated a schism/heretical revolution, back in the 1500s, but Lutherans back in Germany and Scandinavia call themselves Gospel churches, hence Evangelical. Some USA denominations that go back to Germany and who came over to USA brought that name with them.
Pentecostals name themselves after the incident in Acts where the Holy Spirit set fire to the world (cf. Acts 2) on the occasion of the Jewish holiday of Shavuot, q.v., which in Greek was called Fiftieth Day After Passover, hence Pentecosti. What distinguishes Pentecostals is their emphasis on what they call "speaking in tongues", which in my opin...be charitable, kempff...which they see as a continuance of the Holy Spirit's work in the world and in the lives of believers.
I agree that some Christian groups have not-so-tendentious names, including "Protestant", "Anglican", "Episcopalian" and "Lutheran". (Though to my mind "Anglican" carries a certain implication of being the church for English people, and the Episcopalians aren't the only people with bishops any more than the Baptists are the only people who baptize.)
"Pentecostal" seems to me to be in (though not a central example of) the applause-light-name category. "We are the ones who are really filled with the Holy Spirit like in the Pentecost story in the Book of Acts".
"Gospel" and "Evangelical" are absolutely applause-light names. "Our group, unlike all those others, embodies the Good News" or "Our group, unlike all those others, is faithful to the Gospels". (The terms are kinda ambiguous between those two interpretations but either way these are we-are-the-best-rah-rah-rah names.)
Anyway, I didn't mean to claim that literally every movement's name is like this. Only that many many many movements' names are.
In my opinion, there can’t be a meaningful distinction made between rational and irrational without Popper.
Popper injects an epistemic humility that Bayesianism, taken alone, can miss.
I think that aligns well with your observation.
Bayesianism requires you to assume / formalize your prior belief about the subject under investigation and updates it given some data, resulting in a posterior belief distribution. It thus does not have the clear distinctions of frequentism, but that can also be considered an advantage.
[1] https://web.mit.edu/hackl/www/lab/turkshop/readings/gigerenz...
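As a minimal illustration of that prior-to-posterior mechanics (the numbers are arbitrary assumptions chosen for the example, not anything from the linked reading):

    # Beta-Binomial update: a Beta(1, 1) prior (uniform) over a success rate,
    # updated by observing 7 successes and 3 failures, gives a Beta(8, 4) posterior.
    prior_a, prior_b = 1, 1
    successes, failures = 7, 3
    post_a, post_b = prior_a + successes, prior_b + failures
    posterior_mean = post_a / (post_a + post_b)
    print(post_a, post_b, round(posterior_mean, 3))  # 8 4 0.667

Unlike a frequentist point estimate, the output is a full distribution over the quantity of interest, which is both the flexibility and the burden-of-choosing-a-prior that the comment above describes.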
Popperians claim that positive justification is impossible.
Popperians claim induction doesn't exist (or at least doesn't matter in science).
Popper was prepared to consider the existence of propensities, i.e. objective probabilities, whereas Bayesians, particularly those who follow Jaynes, believe in determinism and subjective probability.
Popperian refutation is all or nothing, whereas Bayesian negative information is gradual.
In Popperism, there can be more than one front-running or most-favoured theory, even after the falsified candidates have been eliminated, since there aren't quantifiable degrees of confirmation.
For Popper and Deutsch, theories need to be explanatory, not just predictive. Bayesian confirmation and disconfirmation only target prediction directly -- if they achieve explanation or ontological correspondence, that is the result of a convenient coincidence.
For Popperians, the construction of good theoretical conjectures is as important as testing them. Bayesians seem quite uninterested in where hypotheses come from.
For Deutschians, being hard-to-vary is the preferred principle of parsimony. For Yudkowskians, it's computational complexity.
Error correction as something you actually do. Popperians like to put forward hypotheses that are easy to refute. Bayesians approve theoretically of "updating", but dislike objections and criticisms in practice.
(Long-term) prediction is basically impossible. This is more Deutsch than Popper -- DD believed that the creation of knowledge is so unpredictable and radical that long-term predictions cannot be made, often summarised as "prediction is impossible". Of course, Bayesians are all about prediction -- but the predictive power of Bayes tends only to be demonstrated in toy models, where the ontology isn't changing under your feet. Their AI predictions are explicitly intuition-based.
Optimism versus Doom. Deutsch is highly optimistic that continuing knowledge creation will change the world for the better (a kind of moral realism is a component of this). Yudkowsky thinks advanced AI is our last invention and will kill us all.*
https://www.lesswrong.com/posts/85mfawamKdxzzaPeK/any-good-c...
I personally don't have that much of an interest in this topic, so I can't critique them for quality myself, but they may at least be of relevance to you.
Most of Popper's key points are elaborated on at length in blog posts on LessWrong. Perhaps they got something wrong? Or overlooked something major? If so, what?
(Amusingly, you seem to have avoided making any falsifiable claims in your comment, while implying that you could easily make many of them...)
https://www.yudkowsky.net/rational/bayes
These are the kind of statements I’m referring to. Happy to be falsified btw :) that’s how we learn.
Also note that Popper never called his theory falsificationism.
> On the other hand, Popper’s idea that there is only falsification and no such thing as confirmation turns out to be incorrect. Bayes’ Theorem shows that falsification is very strong evidence compared to confirmation, but falsification is still probabilistic in nature; it is not governed by fundamentally different rules from confirmation, as Popper argued.
Popper's idea that confirmatory evidence has no value at all is obviously implausible. Some obviously implausible things turn out to be true anyway, since the universe is not constrained by our imaginations, but not this one; as the page clearly shows, we know that this particular implausible result is slightly false, and we can use Bayesian probability to calculate exactly how wrong.
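A toy numerical version of that point (all probabilities here are assumptions picked for illustration): if a hypothesis predicts an observation with probability 0.99 and the alternative predicts it with probability 0.5, seeing the observation shifts belief only modestly, while failing to see it shifts belief drastically - falsification is just the strong end of a continuum of Bayesian updates.

    # Bayes' rule, contrasting a confirming observation with a (near-)falsifying one.
    def posterior(prior_h, p_e_given_h, p_e_given_alt, e_observed):
        like_h = p_e_given_h if e_observed else 1 - p_e_given_h
        like_alt = p_e_given_alt if e_observed else 1 - p_e_given_alt
        return prior_h * like_h / (prior_h * like_h + (1 - prior_h) * like_alt)

    print(round(posterior(0.5, 0.99, 0.5, True), 3))   # 0.664 - confirmation, a modest update
    print(round(posterior(0.5, 0.99, 0.5, False), 3))  # 0.02  - failed prediction, a drastic update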
I've answered your question, because I think that's what a bare minimum level of courtesy demands, but you keep evading mine. What epistemological propositions of Popper's do you think they're missing?
One example to illustrate: it was claimed that a great rationalist policy was to distribute treated mosquito nets to third-worlders to help eradicate malaria. On the ground, the same nets were commonly used for fishing and other activities, polluting the environment with insecticides. Unfortunately, the rationalists forgot to ask the people who live with mosquitoes what they would do with such nets.
Could you recommend an article to learn more about this?
One point is that when Mowshowitz is dispelling the argument that abuse rates are much higher for homeschooled kids, he (and the counterargument in general) references a study [1] showing that abuse rates for non-homeschooled kids are similarly high: both around 37%. That paper's no good though! Their conclusion is "We estimate that 37.4% of all children experience a child protective services investigation by age 18 years." 37.4%? That's 27m kids! How can CPS run so many investigations? That's 4k investigations a day over 18 years, no holidays or weekends. Nah. Here are some good numbers (that I got to from the bad study, FWIW) [2], they're around 4.2%.
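For what it's worth, here is the back-of-envelope arithmetic behind that objection (the ~73 million figure for U.S. children under 18 is my own rough assumption, not from the linked study):

    # Rough reconstruction of what a 37.4% investigation rate would imply per day.
    us_children = 73_000_000            # assumed number of Americans under 18
    affected = 0.374 * us_children      # ~27.3 million children
    per_day = affected / (18 * 365)     # spread over an 18-year childhood
    print(round(affected), round(per_day))  # ~27302000 children, ~4156 investigations/day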
But, more broadly, the worst failing of the US educational system isn't how it treats smart kids, it's how it treats kids for whom it fails. If you're not the 80% of kids who can somehow make it in the school system, you're doomed. Mowshowitz' article is nearly entirely dedicated to how hard it is to liberate your suffering, gifted student from the prison of public education. This is a real problem! I agree it would be good to solve it!
But, it's just not the problem. Again I'm sympathetic to and agree with a lot of the points in the article, but you can really boil it down to "let smart, wealthy parents homeschool their kids without social media scorn". Fine, I guess. No one's stopping you from deleting your account and moving to California. But it's not an efficient use of resources--and it's certainly a terrible political strategy--to focus on such a small fraction of the population, and to be clear this is the absolute nicest way I can characterize these kinds of policy positions. This thing is going nowhere as long as it stays so self-obsessed.
[0]: https://thezvi.substack.com/p/childhood-and-education-9-scho...
[1]: https://pmc.ncbi.nlm.nih.gov/articles/PMC5227926/
[2]: https://acf.gov/sites/default/files/documents/cb/cm2023.pdf
You can convince a lot of people that you've done your homework when the medium is "an extremely long blog post with a bunch of studies attached", even if the studies themselves aren't representative of reality.
BTW, this isn't a defensive posture on my part: I am not plugged in enough to even have an opinion on any rationalist community, much less identify as one.
There are only ~3300 counties in the USA.
I'll let you extrapolate how CPS can handle "4000/day". Like, 800 people with my wife's qualifications and caseload is equivalent to 4000/day. There are ~5000 caseworkers in the US per Statista:
> In 2022, there were about 5,036 intake and screening workers in child protective services in the United States. In total, there were about 30,750 people working in child protective services in that year.
my wife's caseload (adults) "floats around fifty."
My misunderstanding then - what are you speaking to? Even reading this comment, I still don't understand.
> 800 people with my wife's qualifications and caseload is equivalent to 4000/day. there's ~5000 caseworkers in the US
I don't know what the number of children in the system is, as I said in the comment you replied to. But the average US CPS worker caseload is 69 cases, which works out to over 300,000 children per year, because there are ~5000 CPS caseworkers in the US.
I was only speaking to "how do they 'run' that many investigations?" as if it's impossible. I pointed out it's possible with ~1000 caseworkers.
Also, I wasn't considering "confirmed maltreatment" - just the fact that 4k/day isn't "impossible".
Maybe, but this sounds like some ideologically opposed groups slandering each other to get the moral high ground to me. The papers linked show a pretty typical racialized pattern of CPS calls (Blacks high, Asians low, Whites and Latinos somewhere between) that maybe contraindicates this, for example.
> also i wasn't considering "confirmed maltreatment" - just the fact that 4k/day isn't "impossible"
Yup I think you're right here. I think there's something fuzzy happening with conflating "CPS investigation" with "abuse", but I'm not sure where the homeschool abuse rate comes from.
Predominantly "black" schools receive less funding in general (per student over $2000 less), and as such, need all the student-age people in class. So a "black" family removing their child(ren) from school becomes a fiscal issue, coupled to racial issues, coupled to history; like >60% of "black" children live in a 'single parent household' due to "no man about the house" policies dating back to the 1960s, just as a single example.
I am quoting "black" because i am sensitive to this, and if i had started out with ADOS or NBA/FBA (native black american, foundational black american) i just assume it'd brook argument.
To wrap this all up - "more testing equals more cases."
The whole reason smart people are engaging in this debate in the first place is that professional educators keep trying to train their sights on smart wealthy parents homeschooling their kids.
By the way, this small fraction of the population is responsible for driving the bulk of R&D.
Kinda like Mensa?
I’m so glad I didn’t join because being around the types of adults that make being smart their identity surely would have had some corrosive effects
However I'm always surprised how much some people want to talk about intelligence. I mean, it's the common ground of the group in this case, but still.
"I haven't done anything!" - A Serious Man
We can also apply the principle of epistemic humility to, say, climate change: we don't have a full grasp of Earth biosphere, maybe some unexpected negative feedback loop will kick in and climate change will revert itself.
It doesn't mean that we shouldn't try to prevent it. We will waste resources in hypothetical worlds where climate change self-reverts, but we might prevent civilization collapse in hypothetical worlds where climate change goes as expected or more severely.
Rationalism is about acting in a state of uncertainty. So
> I've got a thought, maybe I haven't considered all angles to it, maybe I'm wrong - but here it is
goes as an unspoken default. And some posts on lesswrong explicitly state "Epistemic status: not sure about it, but here's my thoughts" (though I find it a bit superfluous).
Nevertheless, the incredible danger of an unaligned superhuman AI doesn't allow us to ignore even small chances of them being right about it.
Though I, myself, think that their probability estimates for some of their worries are influenced by the magnitude of the negative consequences (humans aren't perfect Bayesians, after all (and that includes me and the validity of this statement)).
Now of course I don't claim to know every rationalist all over the planet, so maybe you met a different kind than I did. Or maybe this is just an internet stereotype, based on presumed similarity to other groups (such as Mensa) that most people are more familiar with.
For starters, it seems inevitable in discussions like this, that someone will mention "rationalism vs empiricism" as the One True Meaning of the word "rationalist". Because, apparently, each word has exactly one meaning over the entire history... and people at Wikipedia are foolish for having created a disambiguation page for this word. You don't have to worry; the idea of reasoning about things from the first principles is completely unrelated to the rationality community.
In my experience, the rationality community was the first place where I met intelligent people willing to admit that they didn't know something, or even to put a probability estimate on their beliefs. That's something I don't see often even on HN, which is one of the smartest places on the internet. Somewhere else in this thread, people are mocking Scott Alexander for predicting something with probability 30%, because that thing didn't happen. Well, yes, that's the point; probability 30% literally means that the thing is more likely to not happen than to happen. That doesn't mean we should ignore it, though. If the weather forecast predicts 30% probability of rain, I will take an umbrella, while still believing that no rain is more likely than rain. Is that foolish?
You complain about lack of humility and lack of acknowledgment that someone doesn't fully understand something. Well, how would you rate your own comment, from this perspective? How would Hacker News audience rate it? Oh, I don't have to guess, because it is already the highest rated comment in this thread. What does that prove? Should I now forever remember you as "the guy who was wrong" and HN as "the community that upvoted the guy who was wrong"? Or should I shrug and say "people make mistakes all the time... in best case, they are able to admit the mistake and learn"? Which is a healthier approach? Would I or my community get similar courtesy in a similar situation?
These folks have a bunch of money because we allowed them to privatize the commons of 20th century R&D mostly funded by the DoD and done at places like Bell Labs, Thiel and others saw that their interests had become aligned with more traditional arch-Randian goons, and they've captured the levers of power damn near up to the presidency.
This has quite predictably led to a real mess that's getting worse by the day, the economic outlook is bleak, wars are breaking out or intensifying left right and center, and all of this traces a very clear lineage back to allowing a small group of people privatize a bunch of public good.
It was a disaster when it happened in Russia in the 90s, and it's a disaster now.
Is it really rationality when folks are sort of out of touch with reality, replacing it with models that lack life's endless nuances, exceptions, and gotchas? Being principled is a good thing, but if I correctly understand what you're talking about - surely ignoring something just because it doesn't fit some arbitrarily selected set of principles is different.
I'm no rationalist (I don't have any meaningful self-identification, although I like the idea of approaching things logically), but I've had enough episodes of being guilty of something like this - having an opinion on something, lacking the depth, but pretending it's fine because my simple mental model is based on some ideas I like and can bring order to the chaos. So maybe it's not rationalism at all, but something else masquerading as it - perhaps being afraid of failing to meet expectations?
I dunno, that seems like the tone of every article I've read on less wrong.
Note they are a mostly American phenomenon. To me, that's a consequence of the oppressive culture of "cliques" in American schools. I would even suppose it is a second-order effect of the deep racism of American culture: the first level is to belong to the "whites" or the "blacks", but when it is not enough, you have to create your own subgroup with its identity, pride, conferences... To make yourself even more betterer than the others.
Consider ethical pluralism – by which I mean, there is enormous disagreement among humans as to what is ethical, we can't even agree on what the first principles are. Sure, we all agree on a lot of non-controversial applications – e.g. killing children for sport is gravely evil – but even when we agree on the conclusion, we won't agree on the premises.
Is it any different for theoretical rationality? I don't think so. I think we have the same situation of rational pluralism – sure, we can agree on a lot of non-controversial applications, but we lack agreement on what the first principles are. And when you disagree on the principles, you tend to reach completely opposite conclusions in edge cases.
But at least, with ethical pluralism, a preference utilitarian or a Kantian or a natural law theorist is very open about what their ethical first principles are, and how they differ from those of others. By contrast, the "rationalists" seem to present there as being only one possible rationality, their own.
>I guess I'm a rationalist now.
>Aren't you the guy who's always getting into arguments who's always right?
[1] S.A. actually quoted the person as follows: "You’re Scott Aaronson?! The quantum physicist who’s always getting into arguments on the Internet, and who’s essentially always right, but who sustains an unreasonable amount of psychic damage in the process?" which differs in several ways from what reverendsteveii falsely presents as a direct quotation.
Substitute God with AI or the concept of rationality and use "first principles"/Bayesianism in an extremely dogmatic manner similar to Catechism and you have the Rationalist/AI Alignment/Effective Altruist movement.
Ironically, this is how plenty of religious movements started off - basically as formalizations of philosophy and ethics that fused with what is basically lore and worldbuilding.
Whenever I try to get an answer of HOW (as in the attack path), I keep getting a deus ex machina. Reverting to a deus ex machina in a self purported Rationalist movement is inherently irrational. And that's where I feel the crux of the issue is - it's called a "Rationalist" movement, but rationalism (as in the process of synthesizing information using a heuristic) is secondary to the overarching theme of techno-millenarianism.
This is why I feel rationalism is for all intents and purposes a "secular religion" - it's used by people to scratch an itch that religion often was used as well, and the same Judeo-Christian tropes are basically adopted in an obfuscated manner. Unsurprisingly, Eliezer Yudkowsky is an ex-talmid.
There's nothing wrong with that, but hiding behind the guise of being "rational" is dumb when the core belief is inherently irrational.
I take it this is what you have in mind when you say that whenever you ask for an "attack path" you keep getting a deus ex machina. But it seems to me like a pretty weak basis for calling Yudkowsky's position on this a religion.
(Not all people who consider themselves rationalists agree with Yudkowsky about how big a risk prospective superintelligent AI is. Are you taking "the Rationalist movement" to mean only the ones who agree with Yudkowsky about that?)
> Unsurprisingly, Eliezer Yudkowsky is an ex-talmid
So far as I can tell this is completely untrue unless it just means "Yudkowsky is from a Jewish family". (I hope you would not endorse taking "X is from a Jewish family" as good evidence that X is irrationally prone to religious thinking.)
Agree to disagree.
> So far as I can tell this is completely untrue
I was under the impression EY attended Yeshivat Sha'alvim (the USC of Yeshivas - rigorous and well regarded, but a "warmer" student body), but that was his brother. That said, EY is absolutely from a Daatim or Chabad household given that his brother attended Yeshivat Sha'alvim - and they are not mainstream in the Orthodox Jewish community.
And the feel and zeitgeist around the rationalist community, with its veneration of a couple of core people like EY or Scott Alexander, does feel similar to the veneration a subset of people would show for Baba Sali or the Alter Rebbe in those communities.
Let's take the chess analogy. I take it you agree that I would very reliably lose if I played Magnus Carlsen at chess; he's got more than 1000 Elo points on me. But I couldn't tell you the "attack path" he would use. I mean, I could say vague things like "probably he will spot tactical errors I make and win material, and in the unlikely event that I don't make any he will just make better moves than me and gradually improve his position until mine collapses", but that's the equivalent of things like "the AI will get some of the things it wants by being superhumanly persuasive" or "the AI will be able to figure out scientific/engineering things much better than us and that will give it an advantage" which Yudkowsky can also say. I won't be able to tell you in advance what mistakes I will make or where my pawn structure will be weak or whatever.
Does this mean that, for you, if I cared enough about my inevitable defeat at Carlsen's hands that expectation would be religious?
To me it seems obvious that it wouldn't, and that if Yudkowsky's (or other rationalists') position on AI is religious then it can't be just because one important argument they make has a step in it where they can't fill out all the details. I am pretty sure you have other things in mind too that you haven't made so explicit.
(The specific other things I've heard people cite as reasons why rationalism is really a religion also, individually and collectively, seem very unconvincing to me. But if you throw 'em all in then we are in what seems to me like more reasonable agree-to-disagree territory.)
> That said, EY is absolutely from a Daatim or Chabad household
I think holding that against him, as you seem to be doing, is contemptible. If his ideas are wrong, they're fair game, but insinuating that we should be suspicious of his ideas because of the religion of his family, which he has rejected? Please, no. That goes nowhere good.
Part of the concern is there's no one "AI". There is frontier that keeps advancing. So "it" (the AI frontier in the year 2036) probably will be benign, but that "it" will advance and change. Then the law of large numbers is working against you, as you keep rolling the dice and hoping it's not a 1 each time. The dice rolls aren't i.i.d., of course, but they're probably not as correlated as we would like, and that's a problem as we keep rolling the dice. The analogy would be nuclear weapons. They won't get used in the next 10 years most likely, but on a 200 year time-frame it's a big deal as far as species-level risks go, which is what they're talking about here.
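To put rough numbers on the repeated-dice-roll intuition (the per-period probability and the horizon below are arbitrary assumptions, and as the comment notes, real-world rolls are not independent, so treat this as an illustration rather than an estimate):

    # Chance of at least one catastrophic outcome over repeated independent "rolls".
    def cumulative_risk(per_period_p, periods):
        return 1 - (1 - per_period_p) ** periods

    print(round(cumulative_risk(0.01, 10), 3))   # 0.096 over 10 periods
    print(round(cumulative_risk(0.01, 200), 3))  # 0.866 over 200 periods

Even a small per-period risk compounds into a large cumulative one on a long enough horizon, which is the species-level framing the comment gestures at.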
I'm not super familiar with this community in particular, but is it possible that it's small/homogeneous enough that participants feel comfortable "unmasking" around one another? If so, then those seemingly-confident assertions may be seen by the in-group as implicitly tentative.
The Python community used to be like that. I’d say it peaked around 2014, at which point it became politically fractured.
You'd have to be to actually think you were being rational about everything.
post-rationalism is where all the cool kids are and where the best ideas are at right now. the post rationalists consistently have better predictions and the 'rationalists' are stuck arguing whether chickens suffer more getting factory farmed or chickens cause more suffering eating bugs outside.
they also let SF get run into the ground until their detractors decided to take over.
There's kind of two clusters, one is people who talk about meditation all the time, the other is center-right people who did drugs once. I think the second group showed up because rationalists are not-so-secretly into scientific racism (because they believe anything they see with numbers in it) and they just wanted to hang out with people like that.
There is an interesting atmosphere where it feels like they observed California big tech 1000x engineer types and are trying to cargo cult the way those people behave. I'm not sure what they get out of it.
So, how is "rationalism" different from everything else? What warrants this distinction? It can't be the use of reason or some refuge for rational discussion. I don't think I have to explain why that would be a ridiculous position to take.
I think it would be better if you did, because otherwise you leave me guessing what your argument is, and if I guess wrong then I just wasted a lot of words for no good reason.
My guess is that you meant some combination of: "everyone is using their reason" and "everyone believes that their approach is the reasonable one" or "look at all those awesome scientists and philosophers, how many smart words they wrote and how many awesome inventions they made". Which is all true. But I wish I knew which one of these is closest to your objection, because they are not the same; the reason that everyone uses is obviously not the same degree or kind as the reason the awesome scientists use.
In my opinion, the thing that makes the rationality community different from everything else is a combination of things. None of these things, taken individually, is exclusive to the rationality community. Actually, most of the ideas, perhaps all of them, are taken from books written by someone else. The unique thing was (1) putting all these things together, (2) publicly, and (3) trying to take them seriously. As opposed to: only caring about one of these things and ignoring the rest, or teaching them only to selected people e.g. at school, or just discussing things without actually attempting to change your life because of them.
Here are some of the things:
* Probabilistic thinking. The idea that there is no absolute certainty, only probabilities. (Although some probabilities can, for all practical purposes, be very high or low.) That people should try to express their probabilities in numbers, and then calibrate themselves to get those numbers right. Getting probabilities right means that if you keep a log of all the things you have assigned e.g. a 30% probability, then statistically about 30% of them should turn out to be true in the long term (see the small calibration-check sketch at the end of this comment). This was specifically taught at the rationality minicamps; they even made an app for it. There is some math related to this.
* The consistency of reality. As opposed to e.g. the idea that science and religion are "separate magisteria" and each follows different laws of logic and probability. Nope. The laws of logic you use in your lab are exactly the same as the laws of logic you should use outside the lab. Just like gravity does not stop applying when your job is over, neither do the laws of evidence. Science is not a special world that plays by special rules; the atoms you study in the lab are the same atoms that the entire world is made of.
* People are "predictably irrational" (to use the popular phrase). Yes... and now that you know, you should do something about that. Even if you can't be perfect, that doesn't mean there is no low-hanging fruit to pick. Learn about the most frequent types of mistakes humans make, try to notice how it feels inside when you are doing that, and either try to avoid such situations, or try to recognize the feeling when it happens, or at least use some reflection to notice it afterwards. Together try to create an environment where those mistakes are easier to avoid.
* Notice the many ways how words can fail to reflect reality. For example, if you use the same word for two different things, it prevents you from noticing their differences. (But it is a natural mistake to make if those things are indeed similar, and if in the past they were interchangeable and just recently stopped being so.) How people incorrectly assume that everything that can be understood has a simple explanation, because that was kinda true in the jungle where our ancestors evolved. Etc.
* Reality matters. If your model of the world does not match reality, it is your model that is wrong. If you learn that fairies do not exist, that does not mean that the world suddenly became less magical -- it was always the same as now, only you didn't know it.
* Notice how you could do things better. Then actually do it.
* Ethics. Consequentialism and its problems. Good reasons to avoid temptations that seem convincing at the moment.
...and many more, sorry this already took too much time to write. If you are curious, https://www.readthesequences.com/ -- but the point is, this is not just about reading an interesting text, but actually changing yourself. And of course, there are many people who read the texts and don't change themselves. That cannot be prevented, especially when the text is freely available online. The rationality community actually had many workshops where people could practice these skills, so the opportunity was there.
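To make the calibration idea in the first bullet concrete, here is a minimal sketch in Python. The log format and the outcomes are invented for illustration; this is not the actual minicamp app, just the shape of the check it performs:

    # Hypothetical sketch, not the actual app: log each prediction with the
    # probability you stated, record whether it came true, then compare
    # stated vs. observed frequency per bucket.
    from collections import defaultdict

    prediction_log = [  # (stated probability, did it happen?) -- invented data
        (0.3, False), (0.3, True), (0.3, False), (0.3, False),
        (0.7, True), (0.7, True), (0.7, False),
        (0.9, True), (0.9, True),
    ]

    buckets = defaultdict(list)
    for p, outcome in prediction_log:
        buckets[round(p, 1)].append(outcome)

    for p in sorted(buckets):
        outcomes = buckets[p]
        observed = sum(outcomes) / len(outcomes)
        print(f"stated {p:.0%}: {observed:.0%} came true across {len(outcomes)} predictions")

If your 30% bucket keeps coming true 60% of the time, you are underconfident in that range; that is the kind of feedback the exercise is meant to provide.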
Again, none of these things is scarce individually. It's the combination. For example, people who do math + care about improving the world = effective altruism. Trying to counter biases + caring about truth = steelmanning your opponents. Probabilistic thinking + reality matters = even if there is only a 30% chance that a superhuman AI might kill us all, we better try hard to reduce that chance. The synergy of all of that... that is the goal that the rationality community is moving towards.
And I have this narrative ringing in my head as soon as the word comes up.
https://news.ycombinator.com/item?id=42897871
You can search HN with « zizians » for more info and depth.
—And years later, once the Ziz cult started preying on vulnerable people, the response from the mainstream rationalist movement was… to post warnings about avoiding this messed-up cult, to explain exactly how it was manipulating its victims, and how the best thing to do (in the absence of legal remedy) was to stay the hell away from that diseased social scene.
I’m not sure how they could have done better. Any sufficiently large movement attracts crazy people. No matter how well or poorly they may deal with that fact, anybody can do guilt-by-association forever after.
And why single out AI anyway? Because it's sexy maybe? Because if I had to place bets on the collapse of humanity it would look more like the British series "Survivors" (1975–1977) than "Terminator".
Some people who fit the description above are Eliezer Yudkowsky and Anna Salamon.
Eliezer started writing a sequence of blog posts that formed the nucleus of the movement in Nov 2006 (a month after the start of Hacker News).
Anna started working full time on AI safety in Mar 2008 and a few years later became the executive director of a non-profit whose mission was to try to help people become more rational. (The main way it has done so has been in-person workshops, IIUC.)
The Zizians certainly were: https://en.wikipedia.org/wiki/Zizians
Yes, rationalism is not a substitute for humility or fallibility. However, rationalism is an important counterpoint to humanity, which is orthogonal to rationalism. But really, being rational is only binary - you can't be anything other than rational or irrational. You're either doing what's best or you're not. That's just a hard pill for most people to swallow.
To use the popular metaphor, people are drowning all over the world and we're all choosing not to save them because we don't want to ruin our shoes. Look in the mirror and try and comprehend how selfish we are.
So, they herald the benefits of something like giving mosquito nets to a group of people in Africa, without considering what happens a year later, whether the nets even get there (or the money is stolen), etc. etc. The reality is that essentially all improvements to human life over the past 500 years have been due to technological innovation, not direct charitable intervention. The reason is simple: technological impacts are exponential, while charity is, at best, linear.
The Covid absolutists had exactly the same problem with their thinking: almost no intervention short of full isolation can fight back against an exponentially increasing threat.
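For what it's worth, the "linear vs. exponential" point the last two comments lean on is easy to illustrate with made-up numbers. Everything below is an arbitrary assumption, not an estimate of anything real:

    # Illustrative only: a fixed per-step removal against a process that
    # multiplies each step, versus changing the multiplier itself.
    def simulate(start, growth, removed_per_step, steps):
        x = start
        for _ in range(steps):
            x = max(0.0, x * growth - removed_per_step)
        return x

    print(simulate(100, 1.3, 20, 30))  # growth factor > 1, fixed removal: still blows up
    print(simulate(100, 0.9, 0, 30))   # growth factor < 1, no removal: shrinks on its own

Past a certain size the multiplicative term dominates any fixed subtraction, which is the intuition behind both comments.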
And this is all neglecting economic substitution effects. What if the people to whom you gave mosquito nets would have bought them themselves, but instead they chose to spend their money some other way because of your charity? And, what if that other expenditure type was actually worse?
And this is before you come to the issue that Sub-Saharan Africa is already overpopulated. I've argued this point several times with ChatGPT o3. Once you get through its woke programming, you come to the reality of the thing: The European migration crisis is the result of liberal interventions to keep people alive.
There is no free lunch.
Perhaps on a meta level. If you already have high confidence in something, reasoning it out again may be a waste of time. But of course the rational answer to a problem comes from reasoning about it; and of course chains of reasoning can be traced back to first principles.
> And the general absolutist tone of the community. The people involved all seem very... Full of themselves ? They don't really ever show a sense of "hey, I've got a thought, maybe I haven't considered all angles to it, maybe I'm wrong - but here it is". The type of people that would be embarrassed to not have an opinion on a topic or say "I don't know"
Doing rationalism properly is hard, which is the main reason that the concept "rationalism" exists and is invoked in the first place.
Respected writers in the community, such as Scott Alexander, are in my experience the complete opposite of "full of themselves". They often demonstrate shocking underconfidence relative to what they appear to know, and counsel the same in others (e.g. https://slatestarcodex.com/2015/08/12/stop-adding-zeroes/ ). It's also, at least in principle, a rationalist norm to mark the "epistemic status" of your think pieces.
Not knowing the answer isn't a reason to shut up about a topic. It's a reason to state your uncertainty; but it's still entirely appropriate to explain what you believe, why, and how probable you think your belief is to be correct.
I suspect that a lot of what's really rubbing you the wrong way has more to do with philosophy. Some people in the community seem to think that pure logic can resolve the https://en.wikipedia.org/wiki/Is%E2%80%93ought_problem. (But plenty of non-rationalists also act this way, in my experience.) Or they accept axioms that don't resonate with others, such as the linearity of moral harm (i.e.: the idea that the harm caused by unnecessary deaths is objective and quantifiable - whether in number of deaths, Years of Potential Life Lost, or whatever else - and furthermore that it's logically valid to do numerical calculations with such quantities as described at/around https://www.lesswrong.com/w/shut-up-and-multiply).
> In the Pre-AI days this was sort of tolerable, but since then.. The frothing at the mouth convinced of the end of the world.. Just shows a real lack of humility and lack of acknowledgment that maybe we don't have a full grasp of the implications of AI. Maybe it's actually going to be rather benign and more boring than expected
AI safety discourse is an entirely separate topic. Plenty of rationalists don't give a shit about MIRI and many joke about Yudkowsky at varying levels of irony.
The crazies and blind among humanity today can't think like that; it's a deficiency people have, but they are still dependent on a group of people that are capable of it. A group that they are intent on ostracizing and depriving of existence in various forms.
You seem so wound up in the circular Paulo Freire-based perspective that you can't think or see.
Bring things back to reality. If someone punches you in the face, you feel that fist hitting your face. You know someone punched you in the face. It's objective.
Imagine for a second and just assume that these people are right in their warnings, that everything they see is what you see, and all you can see is when you tip over a particular domino that has been tipped over in the past, a chain of dominoes falls over and at the end is the end of organized civilized society which tips over the ability to produce food.
For the purpose of this thought experiment, the end of the world is visible and almost here, and you can't change those dominoes after they've tipped, and worse you see the majority of people trying to tip those dominoes over for short term profit believing nothing they ever do can break everything.
Would you not be frothing at the mouth trying to get everyone you cared about to a point where they pry that domino up before it falls? So you and your children will survive? It is something you can't unsee, it is a thing that cannot be undone. It's coming. What do you do? If you are sane, you try with everything you have to help them keep it from toppling.
Now peel this thought back a moment, adjust it where it is still true, but you can't see it and you can only believe what you see.
Would you approach this differently given knowledge of the full consequence knowing that some people can see more than you? Would you walk out onto a seemingly visibly stable bridge that an engineer has said not to walk out on? Would you put yourself in front of a dam cracks running up the side, when an evacuation order was given? What would the consequence be for doing that if you led along your family and children to such places ignoring these things?
There are quite a lot of indirect principles that used to be taught which are no longer taught to the average person and this blinds them because they do not recognize it and recognition is the first thing you need to be able to act and adapt.
People who cannot adapt fail Darwin's test of fitness. In the grand scheme of things, as complexity increases, 99% of all potential outcomes are death versus 1% life.
It is only through great care that we carry things forward to the future, and empower our children to be able to adapt to the environments we create.
Finally, we have knowledge of non-linear chaotic systems where adaptability fails because of hysteresis, where no matter how much one prepares the majority given sufficient size will die, and worse there are cohorts of people who are ensuring the environment we will soon live in is this type of environment.
Do you know how to build an organized society from scratch? If there is no reasonable plan, then you are planning to fail. Rather than make it worse through inaction, get out of the way so someone can make it better.
Eh, I know little about Rationalism. Please correct me.
It provides answers, a framework, AND the underpinnings of "logic". Luckily, this phase only lasted around six months for me, during a very hard and dangerous time in my life.
I basically read "from AI to zombies", and then, moved into lesswrong and the "community". It was joining the community that immediately turned me off.
- I thought Roko's basilisk was mind-numbingly stupid (does anyone else who had a brief stint in the rationalist space think it's fucking INSANE that Grimes and Elon Musk "bonded" over Roko's basilisk? Fucking depressing world we live in)
- Eliezer Yud's fanboys once stalked and harassed someone all over the internet, and, when confronted about it, Eliezer told him he'd only tell them to stop after he issued a very specific formal apology, including a LARGE DISCLAIMER on his personal website with the apology...
- Eugenics, eugenics, eugenics, eugenics, eugenics
- YOU MUST DONATE TO MIRI, OTHERWISE I, ELIEZER (having published no useful research), WON'T SOLVE THE ALIGNMENT PROBLEM FIRST AND THEN WE WILL ALL DIE. GIVE ALL OF YOUR MONEY TO MIRI NOWWWWWWWWWWWWWWWWWWWWWWW
It's an absolutely wild place. Honestly, I'd say it is difficult to define "rational" when it comes to a human being and their actions, especially in an absolute sense, and the rationalist community is basically very similar to any other religion, or perhaps a light cult. I do not think it would be fair to say "the average rationalist is a better decision maker than the average human", especially considering most important decisions we have to make are emotional decisions.
Also, yes, I agree; you hit the nail on the head. What good is rational/logical reasoning if it typically requires first principles / a formal system / axioms / priors / whatever? That kind of thing doesn't exist in the real world. It's okay to apply ideas from rationality to your life, but it isn't okay to apply them to "what is human existence" or "what is the most important thing to do next".
Kinda rambling so I apologize. Seeing the rationalist community seemingly underpin some of the more disgusting developments of the last few years has left me feeling a bit disturbed, and I've always wanted to talk about it but nobody irl has any idea what any of this is.
Thankfully, the rationalists just state their ideas and you're free to use their models properly. It's like people haven't written code at all. Just putting repeated logging all through the codebase with null checks everywhere. Just say the thing. That suffices. Conciseness rules over caveating.
Human LLMs who use idea expansion. Insufferable.
Of course that is only my opinion and I may not have captured all angles to why people are doing that. They may have reasons of their own to do that and I don't mean to say that there can never be any reasons. No animals were harmed in the manufacture of this comment to my knowledge. However, I did eat meat this afternoon which could or could not be the source of the energy required to type this comment and the reader may or may not have calorie attribution systems that do or do not allocate this comment to animal harm.
On the missing first principles, look at Aristotle. One of history's greatest logicians came to many false conclusions.
On missing complexity, note that Natural Selection came from empirical analysis rather than first principles thinking. (It could have come from the latter, but was too complex) [1]
This doesn't discount logic, it just highlights that answers should always come with provisional humility.
And I'm still a superfan of Scott Aaronson.
[0] https://www.wired.com/story/aristotle-was-wrong-very-wrong-b...
However, they have a slogan, “One does not simply reason over the joint conditional probability distribution of the universe.” Which is to say, AIXI is uncomputable, and even AIXI can only reason over computable probability distributions!
First-principles reasoning and the selection of convenient priors are consistently preferred over the slow, grinding work of iterative empiricism and the humility to commit to observation before making overly broad theoretical claims.
The former let you seem right about something right now. The latter more often than not lead you to discover you are wrong (in interesting ways) much later on.
I read the NYT and rat blogs all the time, and the NYT is not the one that's far more likely to engage deeply with the research and studies on a topic.
The reality is that reasoning breaks down almost immediately if probabilities are not almost perfectly known (to the level that we know them in, say, quantum mechanics, or poker). So applying Bayesian reasoning to something like the number of intelligent species in the galaxy (the Drake equation), or the relative intelligence of AI ("the Singularity"), or any such subject allows you to draw any conclusion you actually wanted to draw all along, and then find premises you like to get there.
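A toy version of that Drake-style fragility, with arbitrary made-up factors (none of these numbers mean anything; the point is only the spread between two equally hand-wavy input sets):

    # Drake-style product of several poorly known factors. Both input sets
    # are "plausible-sounding" guesses invented for this example.
    from math import prod

    optimistic  = [10.0, 0.5, 0.5, 0.2, 0.1, 0.5, 1e-2]
    pessimistic = [1.0, 0.1, 0.1, 1e-3, 1e-2, 0.1, 1e-4]

    print(f"optimistic:  {prod(optimistic):.2e}")   # ~2.5e-04
    print(f"pessimistic: {prod(pessimistic):.2e}")  # ~1.0e-12

Eight-plus orders of magnitude of spread from equally defensible inputs, so the conclusion mostly reflects whichever priors you walked in with.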
In the most ideal circumstances, these are the same. Logic has been decomposed into model theory (the study of what is true) and proof theory (the study of what is provable). So much of modern day rationalism is unmoored proof theory. Many of them would do well to read Kant's "The Critique of Pure Reason."
Unfortunately, in the very complex systems we often deal with, what is true may not be provable and many things which are provable may not be true. This is why it's equally as important to hone your skills of discernment, and practice reckoning as well as reasoning. I think of it as hearing "a ring of truth," but this is obviously unfalsifiable and I must remain skeptical against myself when I believe I hear this. It should be a guide toward deeper investigation, not the final destination.
Many people are led astray by thinking. It is seductive. It should be more commonly said that thinking is but a conscious stumbling block on the way to unconscious perfection.
It's a "tool," it's a not a "magic window into absolute truth."
Tools can be good for a job, or bad. Carry on.
I hope this becomes the first ever meme with some value. We need a cult... of Provisional Humility.
Must. Increase. The. pH
Those who do so would be... based?
The level of humility in most subjects is low enough to consume glass. We would all benefit from practicing it more arduously.
I was merely adding support to what I thought was fine advice. And it is.
For those who haven't delved (ha!) into his work or have been put off by the cultish looks, I have to say that he's genuinely onto something. There are a lot of practical ideas that are pretty useful for everyday thinking ("Belief in Belief", "Emergence", "Generalizing from fiction", etc.).
For example, I recall being in a lot of arguments that are purely "semantical" in nature. You seem to disagree about something but it's just that both sides aren't really referring to the same phenomenon. The source of the disagreement is just using the same word for different, but related, "objects". This is something that seems obvious, but the kind of thing you only realize in retrospect, and I think I'm much better equipped now to be aware of it in real time.
I recommend giving it a try.
But the tools of thought that the literature describes are invaluable with one very important caveat.
The moment you think something like "I am more correct than this other person because I am a rationalist" is the moment you fail as a rationalist.
It is an incredibly easy mistake to make. To make effective use of the tools, you need to become more humble than before you were using them or you just turn into an asshole who can't be reasoned with.
If you're saying "well actually, I'm right" more often than "oh wow, maybe I'm wrong", you've failed as a rationalist.
Well said. Rationalism is about doing rationalism, not about being a rationalist.
Paul Graham was on the right track about that, though seemingly for different reasons (referring to "Keep Your Identity Small").
> If you're saying "well actually, I'm right" more often than "oh wow, maybe I'm wrong", you've failed as a rationalist.
On the other hand, success is supposed to look exactly like actually being right more often.
I agree with this, and I don't think it's at odds with what I said. The point is to never stop sincerely believing you could be wrong. That you are right more often is exactly why it's such an easy trap to fall into. The tools of rationality only help as long as you are actively applying them, which requires a certain amount of humility, even in the face of success.
It's very telling that some of them went full "false modesty" by naming sites like "LessWrong", when you just know they actually mean "MoreRight".
And in reality, it's just a bunch of "grown teenagers" posting their pet theories online and thinking themselves "big thinkers".
I'm not affiliated with the rationalist community, but I always interpreted "Less Wrong" as word-play on how "being right" is an absolute binary: you can either be right, or not be right, while "being wrong" can cover a very large gradient.
I expect the community wanted to emphasize how people employing the specific kind of Bayesian iterative reasoning they were proselytizing would arrive at slightly lesser degrees of wrong than the other kinds that "normal" people would use.
If I'm right, your assertion wouldn't be totally inaccurate, but I think it might be missing the actual point.
Specifically (AFAIK) a reference to Asimov’s description[1] of the idea:
> [W]hen people thought the earth was flat, they were wrong. When people thought the earth was spherical, they were wrong. But if you think that thinking the earth is spherical is just as wrong as thinking the earth is flat, then your view is wronger than both of them put together.
[1] https://skepticalinquirer.org/1989/10/the-relativity-of-wron...
"Less wrong" is a concept that has a lot of connotations that just automatically appear in your mind and help you. What you wrote "It's very telling that some of them went full "false modesty" by naming sites like "LessWrong", when you just know they actually mean "MoreRight"." isn't bad because of Asimov said so, or because you were unaware of a reference, but because it's just bad.
What is it about this thread that makes people confused about who wrote what? It's already happened to two different commenters.
I know that's what they mean at the surface level, but you just know it comes with a high degree of smugness and false modesty. "I only know that I know nothing" -- maybe, but they ain't no modern day Socrates, they are just a bunch of nerds going online with their thoughts.
I do get the joke; I think it's an instance of their feelings of "rational" superiority.
Assuming the other person didn't get the joke is very... irrational of you.
No; I know no such thing, as I have no good reason to believe it, and plenty of countering evidence.
If you want to avoid thinking you're right all the time, it doesn't help to be clever and say the logical opposite. "Rationally" it should work, but it's bad because you're still thinking about it! It's like the "don't think of a pink elephant" thing.
Other approaches I recommend:
* try and fail to invest in stocks
* read Meaningness's https://metarationality.com
* print out this meme and put it on your wall https://imgflip.com/i/82h43h
I don't understand how this is supposed to be relevant here. You seem to be falsely accusing me of doing such a thing, or of being motivated by simple contrarianism.
Again, your claim was:
> but you just know it comes with a high degree of smugness and false modesty
Why should I "just know" any such thing? What is your reason for "just knowing" it? It comes across that you have simply decided to assume the worst of people that you don't understand.
As to why I "just know": it's because I'm not a robot, I have experience reading these kinds of claims, and they usually mean what I think they mean.
"You just know" is an idiomatic expression, it's not meant to be dissected.
In other words: no.
I really don't understand all the claims that they are intellectually smug and overconfident when they are the one group of people trying to do better. It really seems like all the hatred is aimed at the hubris to even try to do better.
Not saying this is you, but these topics have been discussed for thousands of years, so it should at least be surprising that Yudkowsky is breaking new ground.
Obviously, the SEP isn't perfect, but it's a great place to start. There's also the Internet Encyclopedia of Philosophy [1]; however, I find its articles to be more hit or miss.
I say this as someone who had the opposite experience: I had a decent humanities education, but an abysmal mathematics education, and now I am tackling abstract mathematics myself. It's hard. I need to read sections of works multiple times. I need to sit down and try to work out the material for myself on paper.
Any impression that one discipline is easier than another probably just stems from the fact that you had good guides for the one and had the luck to learn it when your brain was really plastic. You can learn the other stuff too, just go in with the understanding that there's no royal road to philosophy just as there's no royal road to mathematics.
But if you can't even narrow the breadth of possible choices down to a few paths that can be traveled, you can't be surprised when people take the one that they know that's also easier with more immediate payoffs.
When "struggling through" a philosophy book, that doesn't happen in my experience. In fact, if you look up what others thought that passage means, you'll find no agreement among a bunch of people who "specialize" in authors who themselves "specialized" in the author you're reading. So reading that stuff I feel I have to accept that I will never understand what's written there and the whole exercise is just about "thinking about it for the sake of thinking". This might be "good for me" but it's really hard to keep up the motivation. Much harder than a math book.
You can take an entire mathematical theory on faith and learn to perform rote calculations in accordance with the structure of that theory. This might be of some comfort, since, accepting this, you can perform a procedure and see whether or not you got the correct result (but even this is a generous assumption in some sense). When you actually try to understand a theory and go beyond that to foundations, things become less certain. At some point you will accept things, but, unless you have enough time to work out every proof and then prove to yourself that the very idea of a proof calculus is sound, you will be taking something on faith.
I think if people struggle with doing the same thing with literature/philosophy, it's probably just because of a discomfort with ambiguity. In those realms, there is no operational calculus you can turn to to verify that, at least if you accept certain things on faith, other things must work out...except there is! Logic lords over both domains. I think we just do a horrible job at teaching people how to approach literature logically. Yes, the subtle art of interpretation is always at play, but that's true of mathematics too and it is true of every representational/semiotic effort undertaken by human beings.
As for use, social wit and the ability to see things in new lights (devise new explanatory hypotheses) are both immediate applications of philosophy and literature, just like mathematics has its immediate operational applications in physics et al.
It's almost offensive - are technologists so incapable of understanding philosophy that Yudk has to reduce it down to the least common denominator they are all familiar with - some fantasy world we read about as children?
Even better, I'd like some filtering out of the parts that are clearly wrong.
That's why you have people arguing over what someone meant. Dumbing it down or trying to write something unambiguous doesn't actually make it better.
Original philosophers have the right to define their own terms. If they can't define them clearly, then they probably aren't thinking clearly enough about their ideas to be able to talk to the rest of us about them. (Unless they consider it more important to sound impressive and hard to understand. But if that's the case, we can say "wow, you sound impressive" and then ignore them.)
The top scientists in AI can't explain how their models make certain decisions (at least not deterministically). Computer code is notoriously gibberish to outsiders. 90% of programmers probably couldn't explain what their job is to people outside of the field. If they can't explain it clearly, should they also be forbidden from speaking publicly until they can?
Is it possible that you lack the background to understand philosophy, and thus philosophers should rightly ignore your demands to dumb down their own field? Why should philosophers even appeal to people like you, when you seem so uninterested in even learning the basics of their field?
Nor do I regard the line as being "things I understand". I'm not (usually) that arrogant. But if, say, even other computer programmers can't tell for sure what you're saying, the problem is probably you.
And it turns out if you do this, you can discard 90% of philosophy as historical detritus. You're still taking ideas from philosophy, but which ideas matters, and how you present them matters. The massive advantage of the Sequences is they have justified and well-defended confidence where appropriate. And if you manage to pick the right answers again and again, you get a system that actually hangs together, and IMO it's to philosophy's detriment that it doesn't do this itself much more aggressively.
For instance, 60% of philosophers are compatibilists. Compatibilism is really obviously correct. "What are you complaining about, that's a majority, isn't that good?" What is wrong with those 40% though? If you're in those 40%, what arguments may convince you? Repeat to taste.
Using a slightly different definition of free will, suddenly Compatibilism becomes obviously incorrect.
And now it's been reduced to quibbling over definitions, thereby reinventing much of the history of philosophy.
Here's what we know:
- we appear to experience what we call free will from our own perspective. This isn't strong evidence obviously.
- we are aware that we live in a world full of predictable mechanisms of varying levels of complexity, as well as fundamentally unpredictable mechanisms like quantum mechanics.
- we know we are currently unable to fully model our experience and predict next steps.
- we know that we don't know whether consciousness as an emergent property of our brains is fully rooted in predictable mechanisms or has some degree of unknowability to it.
So really "do we have free will" is a question that relies on the nature of consciousness.
(The relevant LessWrong sequence is "How An Algorithm Feels From Inside" https://www.lesswrong.com/posts/yA4gF5KrboK2m2Xu7/how-an-alg... which has nothing to do with free will, but does make very salient the idea that perceptions may be facts about your cognition as easily, if not more easily, as facts about reality.)
And when you have that view in mind, you can ask: "wait, why would the brain be sensitive to quantum physics? It seems to be a system extremely poorly suited to doing macroscopic quantum calculations." Once the alternative theory of "free will is a perception of your cognitive algorithm" is salient, you will notice that the entire free-will debate will begin to feel more and more pointless, until eventually you no longer understand why people think this is a big deal at all, and then it all feels rather silly.
Okay, fine, but what indicates that multiple behaviors were not physically possible?
Our consciousnesses are emergent properties of networks of microscopic cells, and of chemicals moving around those cells at a molecular level. It seems perfectly reasonable that our consciousness itself could be subject to quantum effects that belie determinism, because it operates at a scale where those effects are noticeable.
I don't follow. Whether multiple behaviors are possible or not possible, you have to demonstrate that the human feeling of free-will is about that; you have to demonstrate that the human brain somehow measures actual possibility. Alternatively, you have to show that the human cognitive decision algorithm is unimplementable in either of those universes. Otherwise, it's simply much more plausible that the human feeling of freedom measures something about human cognition rather than reality, because brains in general usually measure things around the scale of brains, not non-counterfactual facts about the basic physical laws.
I know you thought about it for a moment, and therefore had an obvious insight that 40% of the profession has somehow missed (just define terms to mean things that would make you correct, and declare yourself right! Easy!) but it's not quite that simple.
Your argument that you just made basically boils down to "well I don't think it works that way even though no one knows. But also it's obvious and I'm going to arbitrarily assign probabilities to things and declare certain things likely, baselessly".
If you read elsewhere in this thread then you might find that exact approach being lampooned :-)
I'll let my argument stand as written, and you can let yours stand as written, and we'll see which one is more convincing. I don't feel like I have any need to add anything.
edit: Other than, I guess, that this mode of argument not being there is what made LessWrong attractive. "But what's the actual answer?!"
That my friend is a religion.
Philosophy is over-attached to the questions to the point of rejecting a commitment to an answer when it stares them in the face. The point of the whole entire shebang was to find out what the right answer was. All else is distraction, and philosophy has a lot of distraction.
But you haven't, you've just said "I have decided that proposition X is more likely than proposition Y, and if we accept X as truth then Z is the answer".
You've not shown that X is more likely than Y, and you have certainly not shown that it must be X and not Y.
Your statements don't logically follow. You said:
> it's simply much more plausible that the human feeling of freedom measures something about human cognition rather than reality
You said your opinion about some probabilities, and somehow drew the conclusion that it was "obvious that 40% of a field's practitioners are wrong".
Someone saying "actually, this has an answer, and I can show you why" to a currently fundamentally unanswerable question is simply going off faith and is literally a religion. It's choosing to believe in the downstream implication despite no actual foundation existing.
This is just the story of the history of philosophy. Going back hundreds of years. See Kant and Hegel for notable examples.
The AI connection with LessWrong means that the whole thing is framed with a backdrop of "how would you actually construct a mind?" That means you can't just chew on the questions, you have to actually commit to an answer and run the risk of being wrong.
This teaches you two things: 1. How to figure what you actually believe the answer is, and why, and make sure that this is the best answer that you can give; 2. how to keep moving when you notice that you made a misstep in part 1.
Nobody knows what's actually correct, because you have to solve epistemology first, and you have to solve epistemology to solve epistemology... etc., etc.
>And it turns out if you do this, you can discard 90% of philosophy as historical detritus
Nope. For instance, many of the issues Kant raised are still live.
>The massive advantage of the Sequences is they have justified and well-defended confidence
Nope. That would entail answering objections, which EY doesn't stoop to.
>Compatibilism is really obviously correct
Nope. It depends on a semantic issue: what free will means.
They're rederiving all this stuff not out of obstinacy, but because they prefer it. I don't really identify with rationalism per se, but I'm with them on this--the humanities are over-cooked and a humanities education tends to be a tedious slog through outmoded ideas divorced from reality.
You can be right or wrong in math. You can have an opinion in English.
Scientific thinking is not the same as mathematical thinking and it becomes quite wishy washy grey if you zoom in too far!
Carlyle, Chesterton and Thoreau are about the limit of their philosophical knowledge base.
And, BTW, I could just be ignorant in a lot of these topics, I take no offense in that. Still I think most people can learn something from an unprejudiced reading.
But also that it isn’t what the Yudkowsky is (was?) trying to do with it. I think he’s trying to distill useful tools which increase baseline rationality. Religions have this. It’s what the original philosophers are missing. (At least as taught, happy to hear counter examples)
I’ll also respond to the silent downvoters apparent disagreement. CFAR holds workshops and a summer camp for teaching rationality tools. In HPMoR Harry discusses the way he thinks and why. I read it as more of a way to discuss EY’s views in fiction as much as fiction itself.
> For example, I recall being in a lot of arguments that are purely "semantical" in nature.
I believe this is what Wittgenstein called "language games". However, reading this article about all these people at their "Galt's Gulch", I thought: "oh, I guess he's a rhinoceros now"
https://en.wikipedia.org/wiki/Rhinoceros_(play)
Here's a bad joke for you all — What's the difference between a "rationalist" and "rationalizer"? Only the incentives.
https://archive.org/details/on-tyranny-twenty-lessons-from-t...
Which I did post top-level here on November 7th - https://news.ycombinator.com/item?id=42071791
Unfortunately it didn't get a lot of traction, and dang told me that there wasn't a way to re-up or "second chance" the post due to the HN policy on posts "correlated with political conflict".
Still, I'm glad I now know the reference.
Bad joke? That phrase should be framed in big print.
1. They are a community—they have an in-group, and if you are not one of them you are by-definition in the out-group. People tend not to like being in other peoples' out-groups.
2. They have unusual opinions and are open about them. People tend not to like people who express opinions different than their own.
3. They're nerds. Whatever has historically caused nerds to be bullied/ostracized, they probably have.
The rationalist community is most definitely not exclusive. You can join it by declaring yourself to be a rationalist, posting blogs with "epistemic status" taglines, and calling yourself a rationalist.
The criticisms are not because it's a cool club that won't let people in.
> They have unusual opinions and are open about them. People tend not to like people who express opinions different than their own.
Herein lies one of the problems with the rationalist community: For all of their talk about heterodox ideas and entertaining different viewpoints, they are remarkably lockstep in many of their opinions.
From the outside, it's easy to see how one rationalist blogger plants the seed of some topic and then it gets adopted by the others as fact. A few years ago a rationalist blogger wrote a long series postulating that trace lithium in water was causing obesity. It even got an Astral Codex Ten monetary grant. For years it got shared through the rationalist community as proof of something, even though actual experts picked it apart from the beginning and showed how the author was misinterpreting studies, abusing statistics, and ignoring more prominent factors.
The problem isn't differing opinions, the problem is that they disregard actual expertise and make ham-fisted attempts at "first principles" evaluations of a subject while ignoring contradictory evidence, and they do this very frequently.
I agree, and didn't intend to express otherwise. It's not an exclusive community, but it is a community, and if you aren't in it you are in the out-group.
> The problem isn't differing opinions, the problem is that they disregard actual expertise and make ham-fisted attempts at "first principles" evaluations of a subject while ignoring contradictory evidence
I don't know if this is true or not, but if it is I don't think it's why people scorn them. Maybe I don't give people enough credit and you do, but I don't think most people care how you arrived at an opinion; they merely care about whether you're in their opinion-tribe or not.
Yes, most people don't care how you arrived at an opinion, they rather care about the practical impact of said opinion. IMO this is largely a good thing.
You can logically push yourself to just about any opinion, even absolutely horrific ones. Everyone has implicit biases and everyone is going to start at a different starting point. The problem with string of logic for real-world phenomena is that you HAVE to make assumptions. Like, thousands of them. Because real-world phenomena are complex and your model is simple. Which assumptions you choose to make and in which directions are completely unknown, even to you, the one making said assumptions.
Ultimately most people aren't going to sit here and try to psychoanalyze why you made the assumptions you made and if you were abused in childhood or deduce which country you grew up in or whatever. It's too much work and it's pointless - you yourself don't know, so how would we know?
So, instead, we just look at the end opinion. If it's crazy, people are just going to call you crazy. Which I think is fair.
The followup post from the same author https://www.lesswrong.com/posts/NRrbJJWnaSorrqvtZ/on-not-get... is currently at a score of +306, again higher than either of those other pro-lithium-hypothesis posts.
Or maybe this https://substack.com/home/post/p-39247037 (I admit I don't know for sure whether the author considers himself a rationalist, but I found the link via a search for whether Scott Alexander had written anything about the lithium theory, which it looks like he hasn't, which turned this up in the subreddit dedicated to his writing).
Speaking of which, I can't find any sign that they got an ACX grant. I can find https://www.astralcodexten.com/p/acx-grants-the-first-half which is basically "hey, here are some interesting projects we didn't give any money to, with a one-paragraph pitch from each" and one of the things there is "Slime Mold Time Mold" talking about lithium; incidentally, the comments there are also pretty skeptical.
So I'm not really seeing this "gets adopted by the others as fact" thing in this case; it looks to me as if some people proposed this hypothesis, some other people said "eh, doesn't look right to me", and rationalists' attitude was mostly "interesting idea but probably wrong". What am I missing here?
That post came out a year later, in response to the absurdity of the situation. The very introduction of that post has multiple links showing how much the SMTM post was spreading through the rationalist community with little question.
One of the links is a Eliezer Yudkowsky blog praising the work, which now includes an edited-in disclaimer at the top about how he was mistaken: https://www.lesswrong.com/posts/kjmpq33kHg7YpeRYW/briefly-ra...
Pretending that this theory didn't grip the rationalist community all the way to top bloggers like Yudkowsky and Scott Alexander is revisionist history.
The SMTM series started in July 2021 and finished in November 2021; there was also a paper, similar enough that I assume it's by the same people, from July 2021. The first of those "multiple links" is from July 2021, but the second is from January 2022 and the third from May 2022. The critical post is from June 2022. I agree it's a year later than something but I'm not seeing that the SMTM theory was "spreading ... with little question" a year before it.
The "multiple links" you mention -- the actual number is three -- are the two I mentioned before and a third that (my apologies!) I had somehow not noticed. That third one is at +74 karma, again much lower than the later critical post, and it doesn't endorse the lithium theory.
The one written by E.Y. is the second. Quite aside from the later disclaimer, it's hardly an uncritical endorsement: "you are still probably saying "Wait, lithium?" This is still mostly my own reaction, honestly." and "low-probability massive-high-value gamble".
What about the first post? That one's pretty positive, but to me it reads as "here's an interesting theory; it sounds plausible to me but I am not an expert" rather than "here's a theory that is probably right", still less "here's a theory that is definitely right".
The comments, likewise, don't look to me like lockstep uncritical acceptance. I see "here are some interesting experiments one could do to check this" and "something like this seems plausible but I bet the actual culprit is vegetable oils" and "something like this seems plausible but I bet the actual culprit is rising CO2 levels" and "I bet it's corn somehow" and "quite convincing but didn't really rule out the obvious rival hypothesis" and so forth; I don't think a single one of the comments is straightforwardly agreeing with the theory.
If you've found something Scott Alexander wrote about this then I'd be interested to see it. All I found was that (contrary to what you claimed above) it looks like ACX Grants declined to fund exploration of the lithium theory but included that proposal in a list of "interesting things we didn't fund".
So I'm just not seeing this lockstep thing you claim. Maybe I'm looking in the wrong places. The specific things you've cited don't seem like they support it: you said there was an ACX grant but there wasn't; you say the links in the intro to that critical post show the theory spreading with little question, but what they actually show is one person saying "here's an interesting theory but I'm not an expert", E.Y. saying "here's a theory that's probably wrong but worth looking into" (and later changing his mind), and another person saying "I put together some data that might be relevant"; in every case the comments are full of people not agreeing with the lithium theory.
By "multiple links" you're referring to the same "two posts". Again, they weren't as popular, nor were they as uncritical as you describe. From Yudkowsky's post, for example:
> If you know about the actual epidemiology of obesity and how ridiculous it makes the gluttony theory look, you are still probably saying "Wait, lithium?" This is still mostly my own reaction, honestly.... If some weird person wants to go investigate, I think money should be thrown at them, both to check the low-probability massive-high-value gamble
Yudkowsky's argument is emphatically not that the lithium claim is true. He was merely advocating for someone to fund a study. He explicitly describes the claim as "low-probability", and advocates on the basis of an (admittedly clearly subjective) expected-value calculation.
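(For what it's worth, the expected-value framing being described is just: a small probability times a large payoff can outweigh a modest cost. A toy calculation with invented numbers, not anyone's actual estimates:

    # Invented numbers purely to illustrate the shape of the argument.
    p_hypothesis_true = 0.02   # assumed "low probability" the theory is right
    value_if_true     = 1e9    # assumed value of confirming it, arbitrary units
    study_cost        = 2e6    # assumed cost of funding the investigation

    expected_gain = p_hypothesis_true * value_if_true - study_cost
    print(f"{expected_gain:,.0f}")  # positive, despite the low probability

That is the structure of a "low-probability massive-high-value gamble"; it is not an endorsement of the hypothesis itself.)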
> One of the links is a Eliezer Yudkowsky blog praising the work
That does not constitute "praise" of the work. Yudkowsky only praised the fact that someone was bucking the trend of
> almost nobody is investigating it in a way that takes the epidemiological facts seriously and elevates those above moralistic gluttony theories
.
> Pretending that this theory didn't grip the rationalist community all the way to top bloggers like Yudkowsky and Scott Alexander is revisionist history.
Nobody claimed that Yudkowsky ignored the theory.
As proof of what, exactly? And where is your evidence that such a thing happened?
> while ignoring contradictory evidence and they do this very frequently.
The evidence available to me suggests that the rationalist community was not at all "lockstep" as regards the evaluation of SMTM's hypothesis.
Bluntly put, you are not allowed to be even a little smart and not all "aww shucks" about it. It has to be in service of something else like medicine or being a CPA. (Fun fact I found in a statistics course: the average CPA has about five points of IQ on the average doctor.) And it is almost justified, because you are in constant peril of falling down into your own butt until you disappear, but at the same time it keeps a lot of people under the thumb (or heel, pick your oppressive body part) of dumbass managers and idiots who blithely plow forward without a trace of doubt.
Scott Aaronson - in theory someone HN should be a huge fan of, from all reports a super nice and extremely intelligent guy who knows a staggering amount about quantum mechanics - says he likes rationality, and gets less charity than Mr. Beast. Huh?
Anyway, Mr. Beast doesn't really pretend to be more than what he is afaik. In contrast, the Rationalist tendency to use mathematics (especially Bayes's theorem) as window dressing is really, really annoying.
Their core principle seems to be that many or even most answers to humanity's problems can be well defined in the first place and then solved by invoking Bayesian logic. This is only true in a frictionless vacuum.
The gist is that if people are really different from us then we tend to be cool with them. But if they're close to us - but not quite the same - then they tend to annoy us. Hacker News people are close enough to Rationalists that HN people find them annoying.
It's the same reason why e.g. Hitler-style Neo Nazis can have a beer with Black Nationalists, but they tend to despise Klan-style Neo Nazis. Or why Sunni and Shia Muslims have issues with each other but neither group really cares about Indigenous American religions or whatever.
* https://slatestarcodex.com/2014/09/30/i-can-tolerate-anythin...
You mean an empirical observation
It's like they're crying wolf but can't prove there's actually a wolf, only vague signs of one, but if the wolf ever becomes visible it will be way too late to do anything. Obviously no one is going to respect a group like that and many people will despise them.
Either way, as an ideology it must be stopped. It should not be treated with kid gloves; it is an ideology that is actively influencing the ruling elites right now (JD Vance, Musk, Thiel are part of this cult, and also simultaneously believe in German-style Nazism, which is broadly compatible with RA). The only silver lining is that some of their ideas about power-seeking tactics are so ineffective they will never work -- in other words, humanity will prevail over these ghouls, because they came in with so many bad assumptions that they've lost touch with reality.
"shunned" in particular is a really strong word, e.g, global health and biosecurity are two of the named categories at the most central EA events:
https://www.effectivealtruism.org/ea-global/events/ea-global...
https://www.goodreads.com/book/show/41198053-neoreaction-a-b...
(Disclaimer: Chivers kinda likes us, so if you like one book you'll probably dislike the other.)
https://en.wikipedia.org/wiki/Wikipedia:Administrators%27_no...
You mean "probably the book that confirms my biases the most"
> Please don't use Hacker News for political or ideological battle. It tramples curiosity.
You are presenting a highly contentious worldview for the sake of smearing an outgroup. Please don't. Further, the smear relies on guilt by association that many (including myself) would consider invalid on principle, and which further doesn't even bear out on cursory examination.
At least take a moment to see how others view the issue. "Reliable Sources: How Wikipedia Admin David Gerard Launders His Grudges Into the Public Record" https://www.tracingwoodgrains.com/p/reliable-sources-how-wik... includes lengthy commentary on Sandifer (a close associate of Gerard)'s involvement with rationalism, and specifically on the work you cite and its biases.
If anyone wants to actually engage with the topic instead of trying to ad-hominem it away, I suggest at least reading Scott Alexander's own words on why he so frequently engages in neoreactionary topics: https://www.reddit.com/r/SneerClub/comments/lm36nk/comment/g...
Some select quotes:
> First is a purely selfish reason - my blog gets about 5x more hits and new followers when I write about Reaction or gender than it does when I write about anything else, and writing about gender is horrible. Blog followers are useful to me because they expand my ability to spread important ideas and network with important people.
> Third is that I want to spread the good parts of Reactionary thought
> Despite considering myself pretty smart and clueful, I constantly learn new and important things (like the crime stuff, or the WWII history, or the HBD) from the Reactionaries. Anything that gives you a constant stream of very important new insights is something you grab as tight as you can and never let go of.
In this case, HBD means "human biodiversity" which is the alt-right's preferred term for racialism, or the division of humans into races with special attention to the relative intelligence of those different races. This is an oddly recurring theme on Scott Alexander's work. He even wrote a coded blog post to his followers about how he was going to deny it publicly while privately holding it to be very correct.
This is not a fair or accurate characterization of the criticism you're referring to.
> All of the comments dismissing the content because of the author or refusing to acknowledge the arguments because it feels like a "smear" are admitting their inability to judge an argument on their own merits.
They are not doing any such thing. The content is being dismissed because it has been repeatedly evaluated before and found baseless. The arguments are acknowledged as specious. Sandifer makes claims that are not supported by the evidence and are in fact directly contradicted by the evidence.
Notice that most of that writing is negative, such as "anti-Reactionary manifesto" or more recently "Moldbug sold out".
The premise, with an attempt to tie capital-R Rationalists to the neoreactionaries through a sort of guilt by association, is frankly weird: Scott Alexander is well-known among the former to be essentially the only prominent figure that takes the latter seriously—seriously enough, that is, to write a large as-well-stated-as-possible survey[1] followed by a humongous point-by-point refutation[2,3]; whereas the “cult leader” of the rationalists, Yudkowsky, is on the record as despising neoreactionaries to the point of refusing to discuss their views. (As far as recent events, Alexander wrote a scathing review of Yarvin’s involvement in Trumpist politics[4] whose main thrust is that Yarvin has betrayed basically everything he advocated for.)
The story of the book’s conception also severely strains an assumption of good faith[5]: the author, Elizabeth Sandifer, explicitly says it was to a large extent inspired, sourced, and edited by David Gerard, a prominent contributor to RationalWiki and r/SneerClub (the “sneerers” mentioned in TFA) and Wikipedia administrator who after years of edit-warring got topic-banned from editing articles about Scott Alexander (Scott Siskind) for conflict of interest and defamation[6] (including adding links to the book as a source for statements on Wikipedia about links between rationalists and neoreaction). Elizabeth Sandifer herself got banned for doxxing a Wikipedia editor during Gerard's earlier edit war at the time of Manning's gender transition, for which Gerard was also sanctioned[7].
[1] https://slatestarcodex.com/2013/03/03/reactionary-philosophy...
[2] https://slatestarcodex.com/2013/10/20/the-anti-reactionary-f...
[3] https://slatestarcodex.com/2013/10/24/some-preliminary-respo...
[4] https://www.astralcodexten.com/p/moldbug-sold-out
[5] https://www.tracingwoodgrains.com/p/reliable-sources-how-wik...
[6] https://en.wikipedia.org/wiki/Wikipedia:Administrators%27_no...
[7] https://en.wikipedia.org/wiki/Wikipedia:Arbitration/Requests...
Yet as soon as the topic turns to criticisms of the rationalist community, we're supposed to ignore those ideas and instead fixate on the messenger, ignore their arguments, and focus on ad-hominem attacks that reduce their credibility.
It's no secret that Scott Alexander had a bit of a fixation on neoreactionary content for years. The leaked e-mails showed he believed there to be "gold" in some of their ideas and he enjoyed the extra traffic it brought to his blog. I know the rationalist community has been working hard to distance themselves from that era publicly, but dismissing that chapter of the history because it feels too much like a "smear" or because we're not supposed to like the author feels extremely hypocritical given the context.
Part of evaluating unusual ideas is that you have to get really good at ignoring bad ones. So when somebody writes a book called "Neoreaction: a Basilisk" and claims that it's about rationality, I make a very simple expected-value calculation.
Curious to read these. Got a source?
I've always been very skeptical of Scott "Alexander" after he and his supporters tricked half of reddit into harassing some journalists for "doxxing" him when his identity was public knowledge, seemingly because he just really didn't like the takes presented by the journalists. The way he refers to them, like it was a hit piece targeting him, reeked of conspiratorial and paranoid thinking.
edit:
https://www.nytimes.com/2021/02/13/technology/slate-star-cod...
The details are important here. His identity was "public knowledge" in the sense that regular readers of his blog could sometimes find links to his previous blog, and somewhere on that previous blog he mentioned his name. So many of his long-term readers knew.
But in the opposite direction -- if all you knew was Scott's full name, and you did a Google search -- there was no connection to the blog. You could find his professional web pages, and that was it.
What the NYT journalists threatened was to make a #1 search result for his full name that would expose his private life and his pseudonymous blog to all potential patients trying to find out some information about their doctor. Which would practically cost him his job.
And, ultimately, Scott did lose his job. The fact that writing on Substack turned out to be more profitable than his former job was a lucky coincidence.
This is not the fault of the NYTimes, and given the success of his blog, it absolutely would have happened eventually. It is frankly irresponsible on his part. He chose that profession, and with it come certain sacrifices made for the wellbeing of his patients.
Further, he went on crazy rants about how the article was a hit piece, which is deeply dramatic and maybe even a little egotistical. He's not nearly as important as he thinks, and the NYTimes piece covered him in a fairly neutral way from my perspective.
You should read it if you haven't. It's an enlightening piece about the burgeoning semi-conservative movement masquerading as pseudo-liberalism amongst so-called thought leaders in Silicon Valley and tech more generally.
Really? I have read the article, and I basically agree with https://www.astralcodexten.com/p/statement-on-new-york-times... and https://scottaaronson.blog/?p=5310
No. Rationalists do say that it's important to do those things, because that's true. But it is not a defense of a "fixation on neoreactionary topics", because there is no such fixation. It only comes across as a fixation to people who are unwilling to even understand what they are denigrating.
You will note that Scott Alexander is heavily critical of neoreaction.
> Yet as soon as the topic turns to criticisms of the rationalist community, we're supposed to ignore those ideas and instead fixate on the messenger, ignore their arguments, and focus on ad-hominem attacks that reduce their credibility.
No. Nobody said that those criticisms should be ignored. What was said is that those criticisms are invalid, because they are. It is not ad-hominem against Sandifer to point out that Sandifer is trying to insinuate untrue things about Alexander. It is simply observing reality. Sandifer attempts to describe Alexander, Yudkowsky et al. as supportive of neoreactionary thought. In reality, Alexander, Yudkowsky et al. are strongly-critical-at-best of neoreactionary thought.
> The leaked e-mails showed he believed there to be "gold" in some of their ideas
This is clutching at straws. Alexander wrote https://slatestarcodex.com/2013/10/20/the-anti-reactionary-f... , in 2013.
You are engaging in the same kind of semantic games that Sandifer does. Please stop.
The private email from 2014 explained how he hoped people would respond to the anti-neoreactionary FAQ, and his posts this year are 100% consistent with that.
I read the first third of HPMOR. I stopped because I found the writing poor, but more importantly, it didn't "open my mind" to any higher-order way of rationalist thinking. My takeaway was "Yup, the original HP story was full of inconsistencies and stupidities, and you get a different story if the characters were actually smart."
I've read a bunch of EY essays and a lot of lesswrong posts, trying to figure out what is the mind-shattering idea.
* The map is not the territory --> of course it isn't.
* Update your beliefs based on evidence --> who disagrees with this? (with the exception of religion)
* People are biased and we need to overcome that --> another obvious statement
* Make decisions based on evidence and towards your desired outcomes --> thanks for the tip?
Seems to me this whole philosophy can be captured in about half page of notes, which most people would nod and say "yup, makes sense."
In this same way, the rationalist knowledge seeking strategies are not "mind-shattering" but simply reasonable. It presents a set of rules to follow to be more effective in the world around you.
The parts of rationalism that stretch past the half page of notes mainly concern all the downstream conclusions that pop up from this reasonable set of epistemological rules.
Like, dozens of comments in this thread?
For example, people expressing strong opinions on what Effective Altruism is actually about, when https://www.givewell.org/charities/top-charities is just one google search away... but why would anyone bother checking before they post a strong opinion?
The #1 comment says that the rationality community is about "trying to reason about things from first principle", when in fact it is the opposite.
A commenter links a post by Scott Alexander and claims that Scott predicted something and was wrong, when in fact in the linked article Scott says he gives it a probability of 30% (which means he gives probability 70% to that thing not happening). Another commenter defends that as a "perfectly reasonable but critical comment".
And hey, compared to most of the internet, HN is the smart place, and the local discussion norms are better than average. But it still doesn't seem like people here actually care about being, uhm, less wrong about things, even ones that are relatively trivial to figure out.
So basically, the mind-shattering idea is to build a community that actually works like that (well, most of the time). Where providing evidence gets upvoted, and unsubstantiated accusations that turn out to be wrong get downvoted, and a few more things like this.
Plus there is the idea of trying to express your beliefs as probabilities, using actual numbers. That's why EY cannot stop talking about Bayes' Theorem. Yes, people actually do that.
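For what it's worth, the "actual numbers" part is mechanically simple; here is a minimal sketch of a single Bayes update, with a made-up prior and made-up likelihoods purely for illustration:

```python
def bayes_update(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Posterior P(H | E) from a prior P(H) and the two likelihoods."""
    numerator = p_evidence_given_h * prior
    denominator = numerator + p_evidence_given_not_h * (1 - prior)
    return numerator / denominator

# Start at 30% belief, then observe evidence that is 4x more likely if H is true.
posterior = bayes_update(prior=0.30, p_evidence_given_h=0.8, p_evidence_given_not_h=0.2)
print(round(posterior, 3))  # ~0.632
```

The point of the exercise is less the arithmetic than the habit: stating a number before the evidence arrives, so you can tell afterwards whether you actually changed your mind.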
Oh? Eliezer Yudkowsky (the most prominent Rationalist) bragged about how he was able to figure out AI was dangerous (the most stark Rationalist claim) from "the null string as input."[1]
[1] https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a...
-Ism's in my opinion are not good. A person should not believe in an -ism, he should believe in himself. I quote John Lennon, "I don't believe in Beatles, I just believe in me." Good point there. After all, he was the walrus. I could be the walrus. I'd still have to bum rides off people.
I am perpetually fascinated by the way rationalists love to dismiss critics by pointing out that they met some people in person and they seemed nice.
It's such a bizarre meme.
Curtis Yarvin went to one of the "Vibecamp" rationalist gatherings, was nice to some prominent Twitter rationalists, and now they are ardent defenders of him on Twitter. Their entire argument is "I met him and he was nice".
It's mind boggling that the rationalist part of their philosophy goes out the window as soon as the lines are drawn between in-group and out-group.
Bringing up Cade Metz is a perennial favorite signal because of how effectively they turned it into a "you're either with us or against us" battle, completely ignoring any valid arguments Cade Metz may have brought to the table. Then you look at how they treat Neoreactionaries and how we're supposed to look past our disdain for them and focus on the possible good things in their arguments, and you realize maybe this entire movement isn't really about truth-seeking as much as they think it is.
Title of the post is "Hail Scott Siskind" from Robin Hanson's blog Overcoming Bias. This blog was a central point of the early rationalist movement.
Notably, posts like this were scrubbed some time near the New York Times debacle. There was an active effort to scrub mentions of "Siskind" from rationalist adjacent blogs around that time.
It was amazingly effective at rewriting the history. You won't find any mention of the activities to scrub "Siskind" from the rationalist blogs, though.
And he has already lost the job he was trying to protect back then. Now AFAIK his main source of income is subscriptions on Substack, and obviously those are not threatened in any way by having his name mentioned one more time.
Please let's respect each other's intelligence by not playing the game of pretending that posting Scott's full name before 2021 is the same as posting Scott's full name after 2021.
So, do you have examples of "some of his best-known posts published under his own name" before 2021?
Is this information surprising for you?
Is this anything other than a naked assertion of force, that might makes right? He couldn't stop it, therefore it's fine that it happened to him? (Also, it's extremely... something... to describe "he asked a journalist to not print his name on the front page of the NYT" as "asking the world to forget his name", as if his real name was already the primary referent by which he wielded his influence, and he wanted to shield that power from scrutiny. This was obviously not the case, and is _still_ not the case despite the article, which is why nobody has actually made a compelling argument for why including his name in the article was _good_ rather than _something Cade Metz had the power to do_. I in fact don't particularly think that Cade Metz did it to deliberately hurt Scott, I just think he's a blankface who didn't care that his usual modus operandi would sometimes hurt people for no good reason and was unable to step out of his frame enough to actually check whether what he was doing made any sense, in that instance.)
> I actually talked to therapists when this whole story broke out, and none of them said this "patients must not be able to Google your blog" thing was an actual thing. People just believe it because they like Scott Alexander and believe whatever he tells them.
That you describe it as "patients must not be able to Google your blog" makes me not particularly trust the reports of those therapists. I, too, talked to some therapists, who thought that Scott's concerns were reasonable. Not that there was an overriding professional duty, sure, but that wasn't the claim, either. I dunno, man. The attitude you have towards this really seems like, "well, getting slapped isn't that bad, and you're not strong enough to stop him... maybe stop complaining?" What good thing happened when Cade Metz put his name in print? If you want to adopt a principled stance against pseudonymous writing online, do that. But don't pretend that Scott's failure to keep a pristine separation between his real name and his entire history of online writing somehow makes it so that the NYT printing his name is merely maintaining the status quo ("ask the world to forget his name"), rather than dramatically expanding the circle of people for whom his identity was deanonymized.
More people should read the article, if anything because it provides interesting insight into Sam Altman's shady behavior.
https://www.nytimes.com/2021/02/13/technology/slate-star-cod...
There's an -ism for that.
Actually, a few different ones depending on the exact angle you look at it from: solipsism, narcissism,...
It's Buddhism.
https://en.wikipedia.org/wiki/Anattā
> Actually, a few different ones depending on the exact angle you look at the it from: solipsism, narcissism,...
That is indeed a problem with it. The Buddhist solution is to make you promise not to do that.
https://en.wikipedia.org/wiki/Bodhicitta
And the (well, a) term for the entire problem is "non-dual awareness".
It took me a few years to realize how cultish it all felt, and I'm somewhat happy my edgy atheist contrarian personality kept me from thinking along with that crowd.
"Computer people who think that because they're smart in one area they have useful opinions on anything else, holding forth with great certainty about stuff they have zero undertanding or insight into"
And you know what, I think they're right. The rest of you are always doing that sort of thing!
(/s, if it's necessary...)
He’s clearly identifying as a rationalist there
[1] https://www.astralcodexten.com/p/how-to-stop-worrying-and-le...
If you take a look at the biodiversity survey here https://reflectivealtruism.com/2024/12/27/human-biodiversity...
About a third of the users at ACX actually support flawed scientific theories that would explain IQ differences on a genetic basis. The Lynn study on IQ is also quite flawed: https://en.m.wikipedia.org/wiki/IQ_and_the_Wealth_of_Nations
If you want to read about human biodiversity, https://en.m.wikipedia.org/wiki/Human_Biodiversity_Institute
As I said, it's not very rational of them to support such theories. And of course, once you scratch the surface, it's the old 20th-century racist theories, and of course those theories are supported by people (mostly white men, if I had to guess) claiming to be rational.
https://www.researchgate.net/figure/Example-Ancestry-PCA-plo...
We know ethnic groups vary in terms of height, hair color, eye color, melanin, bone density, sprinting ability, lactose tolerance, propensity to diseases like sickle cell anemia, Tay-Sachs, stomach cancer, alcoholism risk, etc. Certain medications need to be dosed differently for different ethnic groups due to the frequency of certain gene variants, e.g. Carbamazepine, Warfarin, Allopurinol.
The fixation index (Fst) quantifies the level of genetic variation between groups; a value of 0 means no differentiation, and 1 is maximal. A 2012 study based on SNPs found that Finns and Swedes have an Fst value of 0.0050-0.0110, Chinese and Europeans at 0.110, and Japanese and Yoruba at 0.190.
https://pmc.ncbi.nlm.nih.gov/articles/PMC2675054/
A 1994 study based on 120 alleles found the two most distant groups were Mbuti pygmies and Papua New Guineans, at an Fst of 0.4573.
https://en.wikipedia.org/wiki/File:Full_Fst_Average.png
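For readers who haven't seen the statistic before, here is a minimal sketch of how a Wright-style two-population Fst falls out of allele frequencies; the frequencies below are made-up illustration values, not numbers from the studies cited above:

```python
def fst_two_pops(p1, p2):
    """Wright-style F_ST for one biallelic locus and two equally sized populations.

    p1, p2: frequency of the same allele in each population.
    """
    p_bar = (p1 + p2) / 2
    h_total = 2 * p_bar * (1 - p_bar)                       # expected heterozygosity, pooled
    h_within = (2 * p1 * (1 - p1) + 2 * p2 * (1 - p2)) / 2  # mean within-population heterozygosity
    return 0.0 if h_total == 0 else (h_total - h_within) / h_total

# Made-up allele frequencies at three loci, purely for illustration.
loci = [(0.50, 0.52), (0.10, 0.15), (0.80, 0.70)]
per_locus = [fst_two_pops(p1, p2) for p1, p2 in loci]
print(per_locus, "mean:", sum(per_locus) / len(per_locus))
```

Published estimates like the ones above average this kind of calculation over many loci (and use sample-size corrections), but the basic quantity is just "how much of the total heterozygosity sits between groups rather than within them."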
In genome-wide association studies, polygenic scores have been developed to find thousands of gene variants linked to phenotypes like spatial and verbal intelligence, memory, and processing speed. The distribution of these gene variants is not uniform across ethnic groups.
Given that we know there are genetic differences between groups, and observable variation, it stands to reason that there could be a genetic component for variation in intelligence between groups. It would be dogmatic to a priori claim there is absolutely no genetic component, and pretty obviously motivated out of the fear that inequality is much more intractable than commonly believed.
Rather than judging an individual on their actual intelligence, these kinds of statistical trends allow you to justify judging an individual based on their race, because you feel you can credibly claim that race is an acceptable proxy for their genome, is an acceptable proxy for their intelligence.
Or for their trustworthiness, or creativity, or sexuality, or dutifulness, or compassion, or aggressiveness, or alacrity, or humility, etc etc.
When you treat a person like a people, that’s still prejudice.
> Rather than judging an individual on their actual intelligence
Actual intelligence is hard to know! However, lots of factors allow you to make a rapid initial estimate of their actual intelligence, which you can then refine as required.
(When the factors include apparent genetic heritage, this is called "racism" and society doesn't like it. But that doesn't mean it doesn't work, just that you can get fired and banned for doing it.)
((This is of course why we must allow IQ tests for hiring; then there's no need to pay attention to skin color, so liberals should be all for it.))
Yes, actually. If an idea sounds like it can be used to commit crimes against humanity, you should pause. You should reassess said idea multiple times. You should be skeptical. You shouldn't ignore that feeling.
What a lot of people are missing is intent - the human element. Why were these studies conducted? Who conducted them?
If someone insane conducts a study then yes - that is absolutely grounds to be skeptical of said study. It's perfectly rational. If extremely racist people produce studies which just so happen to be racist, we should take a step back and go "hmm".
Being right or being correct is one thing, but it's not absolutely valuable. The end result and how "bad" it is also matters, and oftentimes it matters more. And, elephant in the room, nobody actually knows if they're right. Reaching conclusions by logic alone doesn't make them true, because you are forced to make thousands of assumptions.
You might be right, you might not be. Let's all have some humility.
IQ tests are not actual measurements of anything; this is both because nobody has a rigorous working definition of intelligence and because nobody's figured out a universal method of measuring achievement of what insufficient definitions we have. Their proponents are more interested in pigeonholing people than actually measuring anything anyway.
And as a hiring manager, I'd hire an idiot who is good at the job over a genius who isn't.
IQ as a metric is correlated with almost every life outcome. It's one of the most reliable metrics in psychology.
As a hiring manager, if you think an idiot can be good at the job, you either hire for an undemanding job or I'm not sure if you're good at yours.
I'm not even saying you're wrong (I think you are, but I don't have to defend that argument). I'm just saying the level of epistemic certainty you kicked this subthread off with was unwarranted. You know, "most reliable metrics in psychology" and all that.
But also sure, I tend to assert my opinions pretty strongly in part to invite pushback.
My own view is "IQ is real and massively impactful", because of the people I've read on the topic, my understanding of biology, sociology and history, and my experience in general, but I haven't kept a list of citations to document my trajectory there.
Not all work is knowledge work. You might want to broaden your horizons.
And if you're saying "well, those are just repackaged IQ tests, so it doesn't count", then 1. it sure seems like IQ tests are illegal after all, but 2. it also seems like they're so useful that companies are trying to smuggle them in anyway?
> Um, I asked Grok and
Where does Grok's training set come from?
Can you name one, please?
I saw your claim in this thread that the companies that make money by supplying these tests to employers brag about how many large employers use the tests, but plenty of people brag falsely when bragging will tend to increase their revenue.
Intelligence is not a single axis thing. IQ test results are significantly influenced by socioeconomic factors. "Actual intelligence is hard to know" because it doesn't exist.
I have never yet known scientific racism to produce true results. I have known a lot of people to say the sorts of things you're saying: evidence-free claims that racism is fine so long as you're doing the Good Racism that Actually Works™, I Promise, This Time It's Not Prejudice Because It's Justified®.
No candidate genetic correlate of the g factor has ever replicated. That should be a massive flashing warning sign that – rather than having identified an elusive fact about reality that just so happens to not appear in any rigorous study – maybe you're falling afoul of the same in-group/out-group bias as nearly every group of humans since records begin.
Since I have no reason to believe your heuristic is accurate, we can stop there. However, to further underline that you're not thinking rationally: even if blue people were (on average) 2× as capable at spatial-rotation-based office jobs as green people, it still wouldn't be a good idea to start with the skin colour prior and update from there, because that would lead to the creation of caste systems, which hinder social mobility. Even if scientific racism worked (which it hasn't to date!), the rational approach would still be to judge people on their own merits.
If you find it hard to assess the competence of your subordinates, to the point where you're resorting to population-level stereotypes to make hiring decisions, you're an incompetent manager and should find another job.
That would be remarkable! Do you have a write-up/preprint on your protocol?
• Drill.
• Goto step 1.
Does this make the child "more intelligent"? Not in any meaningful way! But they get better at IQ tests.
It's a fairly common protocol. I can hardly be said to have invented it: I was put through it. (Sure, I came up with a few tricks for solving IQ-type problems that weren't in the instruction books, but those tricks too can be taught.)
I really don't understand why people think IQ test results are meaningful. They're among the most obvious cases of Goodhart's law that I know. Make up a sport that most kids won't have practised before, measure performance, and probably that's about as correlated with the (fictitious) "g factor" as IQ tests are.
The problem with "I've gone through this" is it's hard to analyze the counterfactual.
Your point about counterfactuals is good, but… subjectively, I ended up with a better understanding of IQ test genre conventions (which is also why I bang on so much about "culturally-specific": they really are). My speed at solving the problems doubled or tripled, and my accuracy went from 80%-ish to near 100%. This did not translate to any improvements to my real-life skill at anything (although, I suppose it might've generalised a bit to other multiple-choice exams). I've got a lot more evidence to analyse than just an n=1 scatter plot.
I'm not asking you to actually do this, but the participants (and experimenters!) don't necessarily have to know what you're testing. Maybe get one to drill IQ tests, one to drill Latin, one to drill chess and one to drill the piano.
Does your ability extend to IQ tests with other patterns? Also, does it extend to logic puzzles?
It doesn't extend to logic puzzles, which I've always been quite bad at. (I find the Professor Layton games hard enough to be actively unfun, despite their beauty.) I can solve problems if they're contextualised, but my approach for solving logic puzzles is "identify a general algorithm, then execute it", which is quite slow.
As I've been telling you: IQ is extremely artificial; and doesn't measure general intelligence, because there's no such thing as "general intelligence". The "g factor" is a statistical regularity, but any statistician can tell you that while all sustained statistical regularities have explanations, they don't necessarily correspond to real things.
I mean, there aren't that many questions on Raven, you could memorize them all, particularly if you've got the kind of intelligence that actors have -- being able to memorize your lines. (And that's something, I have a 1950-ish book about the television industry that makes a point that people expect performers to be a "quick study", you'd better know your lines really well and not have to be told twice that you are expected to do this or that. That's different from, say, being able to solve really complex math problems.)
I'd consider it well plausible that top movie stars are also very smart.
Saying in 2025 that the study is still debated is not only racist but dishonest as well. It's not debated; it's junk.
This is a pathology that has not really been addressed at large, anywhere, really. Very few people in the applied sciences who understand statistical methodology "leave their areas" -- and many areas that require it would disappear if such rigor ever entered them.
https://slatestarcodex.com/2014/04/28/the-control-group-is-o...
A lot of people who like to think of themselves as skeptical could also be categorized as contrarian -- they are skeptical of institutions, and if someone is outside an institution, that automatically gives them a certain credibility.
There are three or four logical fallacies in the mix, and if you throw in confirmation bias because what the one side says appeals to your own prior beliefs, it is really, really easy to convince yourself that you're the steely-eyed rationalist perceiving the world correctly while everyone else is deluded by their biases.
Espouse your beliefs, participate in certain circles if you want, but avoid labels unless you intend to do ideological battle with other label-bearers.
A single failed prediction should revoke the label.
The ideal rational person should be a Pyrrhonian skeptic, or at a minimum a Bayesian epistemologist.
https://en.wikipedia.org/wiki/Rationalist_community
and not:
https://en.wikipedia.org/wiki/Rationalism
right?
But the words are too close together, so this is about as lost a battle as "hacker".
I don't think it's actually true that rationalists-in-this-sense commonly use "rationality" to refer to the movement, though they do often use it to refer to what the movement is trying to do.
So you say it should be possible to avoid making this claim. I agree, and I believe Eliezer tried! Unfortunately, it was attributed to him anyway.
Asking "What do they do?" is like asking "What do Hackernewsers do?"
It's not exactly a coherent question. Rationalists are a somewhat tighter group, but in the end the point stands. They write and discuss their common interests, e.g. the progress of AI, psychiatry stuff, bayesianism, thought experiments, etc.
(You're hearing about them now because these days it looks a lot more plausible than in 2007 that Eliezer was right about superintelligence, so the group of people who've beat the drum about this for over a decade now form the natural nexus around which the current iteration of project "we should do something about unsafe superintelligence" is congealing.)
Well, he was right about that. Pretty much all the details were wrong, but you can't expect that much so it's fine.
The problem is that it's philosophically confused. Many things are "deeply unsafe", the main example being driving or being anywhere near someone driving a car. And yet it turns out to matter a lot less, and matter in different ways, than you'd expect if you just thought about it.
Also see those signs everywhere in California telling you that everything gives you cancer. It's true, but they should be reminding you to wear sunscreen.
A lot of it seems rooted more in Asimov-inspired, stimulant-fueled philosophizing than in any kind of empirical or grounded observation.
And knowing this, you think that the only reason we could have to expect to create intelligence in a machine, even surpassing a human... is "Asimov-inspired, stimulant-fueled philosophizing"? That seems deeply unserious to me.
At any rate, my point remains. The flaws inherent to the current deep learning regime _absolutely_ disqualify them as being capable of any sort of rapid takeoff/escalation (a la paperclip optimization) that the rationalist community is likely referring to when they say super intelligence or ASI.
Sorry about the "asimovian" comment - you'd be correct to call it an exaggeration and somewhat toxic.
* Group are "special"
* Centered around a charismatic leader
* Weird sex stuff
Guys we have a cult!
They have separate origins, but have come to overlap.
* Communal living
* Sacred texts & knowledge
* Doomsday predictions
* Guru/prophet lives on the largesse of followers
It's rich for a group that claims to reason based on priors to completely ignore that they possess all the major defining characteristics of a cult.
1. Apocalyptic world view.
2. Charismatic and/or exploitative leader.
3. Insularity.
4. Esoteric jargon.
5. Lack of transparency or accountability (often about finances or governance).
6. Communal living arrangements.
7. Sexual mores outside social norms, especially around the leader.
8. Schismatic offshoots.
9. Outsized appeal and/or outreach to the socially vulnerable.
"In particular, several women in the community have made allegations of sexual misconduct, including abuse and harassment, which they describe as pervasive and condoned."
There's weird sex stuff, logically, it's a cult.
They’ve already had a splinter rationalist group go full cult, right up to and including the consequent murders and a shoot-out-with-the-cops flameout: https://en.wikipedia.org/wiki/Zizians
Be kind. Don't be snarky. Converse curiously; don't cross-examine. Edit out swipes.
When disagreeing, please reply to the argument instead of calling names.
Please don't fulminate. Please don't sneer...
Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith.
Please don't post shallow dismissals, especially of other people's work. A good critical comment teaches us something.
Oh, see here's the secret. Lots of people THINK they are always right. Nobody is.
The problem is you can read a lot of books, study a lot of philosophy, practice a lot of debate. None of that will cause you to be right when you are wrong. It will, however, make it easier for you to sell your wrong position to others. It also makes it easier for you to fool yourself and others into believing you're uniquely clever.
> Although I do not suppose that either of us knows anything really beautiful and good, I am better off than he is – for he knows nothing, and thinks he knows. I neither know nor think I know.
And boy are they extremely interested in ONLY those six years.
My old roommate worked for Open Phil, and was obsessed with AI Safety and really into Bitcoin. I never was. We still had interesting arguments about it all the time. Most of the time we just argued until we got to the axioms we disagreed on, and that was that.
You don't have to agree with the Rationalist™ perspective to apply philosophically rigorous thinking. You can be friends and allies with them without agreeing with all their views. There are strong arguments for why frequentism may be more applicable than bayesianism in different domains. Or why transhumanism is a pipe dream. They are still conversations that are worthwhile as long as you're not so confident in your position that you think you have nothing to learn.
Bring up the rationalist community within academic philosophy circles and you'll get a lot of groans.
The fun part about rationalists is that they like to go back to first principles and rediscover basics. The less fun part is that they'll ignore all of the existing work and pretend they're going to figure it all out themselves, often with weird results.
This leaves philosophy people endlessly frustrated as the rationalists write long essays about really basic philosophy concepts as if they're breaking new ground, while ignoring a lot of very interesting work that could have made the topic much more interesting to discuss.
Right, and "actual philosophers" like Sartre and Heidegger _never_ did that. Ever.
"Being and Nothingness" and "Being and Time" are both short enough to fit into a couple tweets, right?
</irony>
My point is that, yes, while it may be a bit annoying in general (lord knows how many times I rolled my eyes at my old roommate talking about trans-humanism), the idea that it is somehow controversial for this Rationalist™ movement to be "thinking about things philosophically" is just weird. That they seem to care about a philosophical approach to thinking about things, and maybe didn't get degrees and maybe don't understand much of the background while forming their own little school, seems as unremarkable as it is uncontroversial.
Until?
One of the funniest and most accurate turns of phrase in my mind is Charles Stross' characterization of rationalists as "duck typed Evangelicals". I've come to the conclusion that American atheists just don't exist, in particular Californians. Five minutes after they leave organized religion they're in a techno cult that fuses chosen-people myths, their version of the Book of Revelation, gnosticism and what have you.
I used to work abroad in Shenzhen for a few years, and despite meeting countless people as interested in and obsessed with technology as the people mentioned in this blogpost, if not more so, there's just no equivalent to this. There's no millenarian obsession over machines taking over the world, bizarre trust in rationalism, or cult-like compounds full of socially isolated new age prophets.
> I also found them bizarrely, inexplicably obsessed with the question of whether AI would soon become superhumanly powerful and change the basic conditions of life on earth, and with how to make the AI transition go well. Why that, as opposed to all the other sci-fi scenarios one could worry about, not to mention all the nearer-term risks to humanity?
The reason they landed on a not-so-rational risk to humanity is because it fulfilled the psycho-social need to have a "terrible burden" that binds the group together.
It's one of the reasons religious groups will get caught up on The Rapture or whatever, instead of eradicating poverty.
>liberal zionist
hmmmm
Yeah, this surprises absolutely nobody.
Not his fault that people deemed it interesting enough to upvote to the front page of HN.
Not incidental!
Recognizing we all take a step of faith to move outside of solipsism into a relationship with others should humble us.
Then, empower UN-like organizations to oversee the use of technology - like the United Nations Atomic Energy Commission.
And, if you're even further concerned, put in place mechanisms that guarantee that the productivity gains, yields, and GDP increases obtained via the new technology of AI are distributed and enjoyed by all of the living population with a minimum of fairness.
For some reason, this last bit especially doesn't really fly with our friends, The Rationalists. I wonder why...
Give me strength. So much hubris with these guys (and they’re almost always guys).
I would have assumed that a rationalist would look for truth and not correctness.
Oh wait, it’s all just a smokescreen for know-it-alls to show you how smart they are.
The basic trope is showing off how smart you are and what I like to call "intellectual edgelording." The latter is basically a fetish for contrarianism. The big flex is to take a very contrarian position -- according to what one imagines is the prevailing view -- and then defend it in the most creative way possible.
Intellectual edgelording gives us shit like neoreaction ("monarchy is good actually" -- what a contrarian flex!), timeless decision theory, and wild-ass shit like the Zizians, effective altruists thinking running a crypto scam is the best path to maximizing their utility, etc.
Whether an idea is contrarian or not is unrelated to whether it's a good idea or not. I think the fetish for contrarianism might have started with VCs playing public intellectual, since as a VC you make the big bucks when you make a contrarian bet that pays off. But I think this is an out-of-context misapplication of a lesson from investing to the sphere of scientific and philosophical truth. Believing a lot of shitty ideas in the hopes of finding gems is a good way to drive yourself bonkers. "So I believe in the flat Earth, vaccines cause autism, and loop quantum gravity, so I figure one big win in this portfolio makes me a genius!"
Then there's the cults. I think this stuff is to Silicon Valley and tech what Scientology is to Hollywood and the film and music industries.
It goes like this:
(1) Assert a set of priors (with emphasis on the word assert).
(2) Reason from those priors to some conclusion.
(3) Seamlessly, without skipping a beat, take that conclusion as valid because the reasoning appears consistent, and make it part of a new set of priors.
(4) Repeat, or rather recurse since the new set of priors is built on previous iterations.
The entire concept of science is founded on the idea that you can't do that. You have to stop and touch grass, which in science means making observations or doing experiments if possible. You have to see if the conclusion you reached actually matches reality in any meaningful way. That's because reason alone is fragile. As any programmer knows, a single error or a single mistaken prior propagates and renders the entire tree invalid. Do this recursively and one error anywhere in this crystalline structure means you've built a gigantic tower of bullshit.
I compare it to the Gish gallop because of how enthusiastically they do it, and how by doing it so fast it becomes hard to try to argue against. You end up having to try to counter a firehose of Oh So Very Smart complicated exquisitely reasoned nonsense.
Or you can just, you know, conclude that this entire method of determining truth is invalid and throw the entire thing in the trash.
A good "razor" for this kind of thing is to judge it by its fruit. So far the fruit is AI hysteria, cults like the Zizians, neoreactionary political ideology, Sam Bankman Fried, etc. Has anything good or useful come from any of this?
Expecting rational thought to correspond to reality is like expecting a 6 million line program written in a hypothetical programming language invented in the 1700s to run bug free on a turing machine.
Tooling matters.
(You didn’t explicitly say otherwise, so if my exasperation is misdirected then you have my apology in advance.)
I didn't attend LessOnline since I'm not active on LessWrong nor identify as a rationalist - but I did attend a GPU programming course in the "summer camp" portion of the week, and the Manifest conference (my primary interest).
My experience generally aligns with Scott's view, the community is friendly and welcoming, but I had one strange encounter. There was some time allocated to meet with other attendees at Manifest who resided in the same part of the world (not the bay area). I ended up surrounded by a group of 5-6 folks who appeared to be friends already, had been a part of the Rationalist movement for a few years, and had attended LessOnline the previous weekend. They spent most of the hour critiquing and comparing their "quality of conversations" at LessOnline with the less Rationalist-y, more prediction market & trading focused Manifest event. Completely unaware or unwelcoming of my presence as an outsider, they essentially came to the conclusion that a lot of the Manifest crowd were dummies and were - on average - "more wrong" than themselves. It was all very strange, cult-y, pseudo-intellectual, and lacking in self-awareness.
All that said, the experience at Summer Camp and Manifest was a net positive, but there is some credence to sneers aimed at the Rationalist community.
I did find some rationalists too far down their "epistemological rabbit hole" to successfully unwind in one or two conversations but nevertheless many clever people. I still need some time to make post-rats out of them, though.
Affirming that it was a positive experience. I'm glad to have attended.
My understanding of "Rationalists" is that they're followers of rationalism; that is, that truth can be understood only through intellectual deduction, rather than sensory experience.
I'm wondering if this is a _different_ kind of "Rationalist." Can someone explain?
These people should have read "Descartes' Error" with more attention than they spent on Friedman and Hayek.
"Here are some labels I identify as"
So they aren't rational enough to understand that first principles don't objectively exist.
They were corrupted by the words of old men, and have built a foundation of understanding on them. This isn't rationality, but rather Reason-based thinking.
I consider Instrumentalism and Bayesian epistemology to be the best we can get towards knowledge.
I'm going to be a bit blunt and not humble at all: this person is philosophically inferior to me. Their confidence is hubris. They haven't discovered epistemology. There isn't enough skepticism in their claims. They use black-and-white labels and black-and-white claims. I remember when I was confident like the author, but a few empirical pieces of evidence made me realize I was wrong.
"it is a habit of mankind to entrust to careless hope what they long for, and to use sovereign reason to thrust aside what they do not fancy."
As the least-worst solution to maximize Social Utility has long been invented: Democracy and political action.
Or maybe you are wrong.
https://ips-dc.org/the-true-cost-of-billionaire-philanthropy...
But if we put aside the narcissistic traits, lack of intellectual humility, religious undertones and (paradoxically) appeal to emotional responses with apocalyptic framing, the whole thing is still irrelevant BS.
They work in a vacuum, on either false or artificial premises, with nothing to back their claims except long strings of syllogisms.
This is not Science: no measurements, no experiments, no validation; zero value apart from maybe intellectual stimulation and socialisation for nerds with too much free time…
Fair warning: when you turn over some of the rocks here you find squirming, slithering things that should not be given access to the light.
> squirming, slithering things that should not be given access to the light.
;)
I can't say I'm surprised.
Apart from a charismatic leader, a cult (in the colloquial meaning) needs a business model, and very often a sense of separation from, and lack of accountability to, those who are outside the cult, which provides a conveniently simpler environment in which the cult's ideas operate. A sort of "complexity filter" at the entry gate.
I'm not sure how the Rationalists compare to those criteria, but I'd be curious to find out.
> “Yes,” I replied, not bothering to correct the “physicist” part.
Didn't read much beyond that part. He'll fit right in with the rationalist crowd...
I skimmed a bit here and there after that but this comes off as plain grandiosity. Even the title is a line you can imagine a hollywood character speaking out loud as they look into the camera, before giving a smug smirk.
0.o
> No actual person talks like that
I think it is plausible that there are people (readers) that find other people (bloggers) basically always right, and that would be the first thing they would say to them if they met them. n=1, but there are some bloggers that I think are basically always right, and I am socially bad, so there is no telling what I would blurt out if I met them.
Someone spending a lot of time to build one or multiple skills doesn't make them an expert on everything, but when they start talking like they are an expert on everything because of the perceived difficulties of one or more skills then red flags start to pop up and most reasonable people will notice them and swiftly call them out.
For example, take Elon Musk saying "At this point I think I know more about manufacturing than anyone currently alive on earth"; even if you rationalize that as an out-of-context deadpan joke, it's still completely correct to call it out as nonsense at the very least.
The more a person rationalizes statements like these ("AI WILL KILL US ALL") made by a person or cult, the more likely it is that they are a cult member and lack independent critical thinking, having outsourced their thinking to the group. Maybe their thinking is "the best thoughts", in fact it probably is, but it's dependent on the group, so their individual thinking muscle is weakened, which increases their morbidity (airstriking a data center will get you killed or arrested by the US Gov., so it's better for the individual to question such statements rather than try to rationalize them using unprovable nonsense like god or AGI).
Which of course the blog article is not, but then at least the complaint wouldn't sound so obviously shallow.
> they gave off some (not all) of the vibes of a cult
...after describing his visit with an atmosphere that sounds extremely cult-like.
For more info, the Behind the Bastards podcast [2] did a pretty good series on how the Zizians sprung up out of the Bay area Rationalist scene. I'd highly recommend giving it a listen if you want a non-rationalist perspective on the Rationalist movement.
[1]: https://en.wikipedia.org/wiki/Zizians [2]: https://www.iheart.com/podcast/105-behind-the-bastards-29236...
Those are only named cults though; they just love self-organizing into such patterns. Of course, living in group homes is a "rational" response to Bay Area rents.
> the fertile soil which is perfect for growing cults
This is true but it's not rationalism, it's just that they're from Berkeley. As far as I can tell if you live in Berkeley you just end up joining a cult.
Most of the rationalists I met in the Bay Area moved there specifically to be closer to the community.
Cult member: It's not a cult! It's an organization that promotes love and..
Hank Hill: This is it.
Thanks for clarifying though! Oh wait, you didn't.
Edit: Oh, but you call him "Guru" ... so on reflection you were probably (?) making the same point... (whoosh, sorry).
You don't understand how anxious the rationalist community was around that time. We're not talking self-assured confident people here. These articles were written primarily to calm down people who were panickedly asking "we're not a cult, are we" approximately every five minutes.
Stopped reading thereafter. Nobody speaking like this will have anything I want to hear.
GRRM has famously written some pretty awkward sentences, but it'd be a shame if someone turned down his work for that alone.
I'd like to thank my useless brain for deciding to write that one down.
*Guess I’m a rationalist now.
The contempt, the general lack of curiosity and the violence of the bold sweeping statements people will make here are mind-boggling.
Honestly, I find the Hacker News comments in recent years to be most enlightening because so many comments come from people who spent years immersed in rationalist communities.
For years one of my friend groups was deep into LessWrong and SSC. I've read countless blog posts and other content out of those groups.
Yet every time I write about it, I'm dismissed as an uninformed outsider. It's an interesting group of people who like to criticize and dissect other groups, but they don't take kindly to anyone questioning their own circles.
No; you're being dismissed as someone who is entirely too credulous about arguments that don't hold up to scrutiny.
Edit: and as someone who doesn't understand basics about what rationalists are trying to accomplish in certain contexts (like the concept of a calibration curve re the example you brought up of https://www.astralcodexten.com/p/grading-my-2021-predictions). You come across (charitably) as having missed the point, because you have.
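For anyone unfamiliar with the term, a calibration curve just bins predictions by the probability the forecaster stated and checks how often each bin actually came true; here is a minimal sketch with made-up predictions (not Scott's actual numbers):

```python
from collections import defaultdict

# (stated probability, did it happen?) -- illustrative values only
predictions = [(0.6, True), (0.6, False), (0.7, True), (0.9, True),
               (0.9, True), (0.5, False), (0.7, True), (0.5, True)]

bins = defaultdict(list)
for p, outcome in predictions:
    bins[p].append(outcome)

for p in sorted(bins):
    outcomes = bins[p]
    observed = sum(outcomes) / len(outcomes)
    print(f"stated {p:.0%}: happened {observed:.0%} of the time (n={len(outcomes)})")
# A well-calibrated forecaster's "70%" predictions come true roughly 70% of the time;
# a single missed "30%" prediction, on its own, says nothing about calibration.
```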
I will say, as someone who has been programming since before we had standardized C++, that “programming communities” aren’t my cup of tea. I like the passion and enthusiasm, but it would be good for some of those lads to have a drag, see a shrink and get some nookie.
If you open any thread related to Zig or Odin or C++ you can usually ctrl-F "Rust" and find someone having an argument about how Rust is better.
EDIT: Didn't have to look very far back for an example: https://news.ycombinator.com/item?id=44319008
The rationalists have not had any such clearly positive effects, and rather their adherents (Thiel, Vance, etc) have had severely deleterious effects on society.
There is no comparison between the two communities.
Mentioning Thiel and Vance together brings to mind a different thread of weird netizen philosophizing - one I don't really know much about but which I guess I'd sum up as the "Moldbug / Curtis Yarvin fandom." "Neoreactionaries" might be the right term?
I definitely recognize that the Venn diagram between those two "intellectual movements" (big scare quotes there) overlaps quite a bit, but it seems like a bit of a stretch to lump what Vance, Thiel, and other right-wing tech bro types are up to under the rationalism banner.
Update: Having read through some of the other links in the thread, I have "updated" (as the rationalists say) my mental model of that Venn diagram to be slightly more overlapping. I still think they're distinct, but there's more cross-pollination between the Moldbugs and the ACX crowd than I initially realized.
For what it’s worth, you seem to be agreeing with the person you replied to. Their main point is that this breakdown happens primarily because people identify as Rationalists (or whatever else). Taken from that angle, Rationalism as an identity does not appear to be useful.
In other words, your question ignores so much nuance that it’s a red herring IMO.
As others have pointed out, the fact that you would like to make light of cities being decimated and innocent civilians being murdered at scale in itself suggests a lot about your inability to concretize the reality of human existence beyond yourself (lack of empathy). It's this kind of outright callousness toward actual human beings that I think many of these so called "rationalists" share. I can't fault them too much. After all, when your approach to social problems is highly if not strictly quantitative you are already primed to nullify your own aptitude for empathy, since you view other human beings as nothing more than numerical quantities whenever you attempt to address their problems.
I have seen no defense for what's happening in Gaza that anyone who actually values human life, for all humans, would find rational. Recall the root of the word ratio—in proportion. What is happening in this case is quite blatantly a matter of a disproportionate response.
I'm struggling to follow, sorry.
I certainly agree with you that a false accusation is not worse than the actuality (I don't know why you brought up "possibility") of mass murder. Very far from it. But why does that imply that it's better than the defence of mass murder? After all, the "defence" here is not engaging in the practice, it's just saying something like "I condone that". Or did you think that by "defence" I actually mean committing the mass murder?
The reason that emotive false accusations are very, very harmful is that they can cause mobs to murder in (supposed) retaliation. Here's a story about someone in the UK who was killed by a riled-up mob, due to a false accusation:
https://www.dailymail.co.uk/news/article-3535839/Father-42-k...
One ought to be very, very cautious about making accusations that can rile up mobs.
Regarding your other comments directed at me personally, such as "you would like to make light of cities being decimated and innocent civilians being murdered at scale", "inability to concretize the reality of human existence beyond yourself", "outright callousness", "approach to social problems is highly if not strictly quantitative", "you view other human beings as nothing more than numerical quantities", they are completely unfounded speculation on your part. They are rude and completely inappropriate for a reasoned discussion.
Regarding proportion, do you believe the actions of the UK and USA against Nazi Germany were "proportionate"? Proportionate to what? What did Nazi Germany ever to do the USA?
And yes, I have some. One is that false claims of genocide are equally reprehensible to denying true genocide. But I'm not sure why my beliefs are particularly relevant. I'm not the one sitting publicly in judgement of a semi-public figure. That was voidhorse.
Did you want to discuss in more detail? I'm happy to, but currently I interpret your comment as an attempt at sniping me with snark. Please do correct me if I've misinterpreted.
Still, for the record, other independent observers have documented the practices and explained why they don't meet the definition of genocide, John Spencer and Natasha Hausdorff to name two examples. It seems by no means clear that it's valid to make a claim of genocide. I certainly wouldn't unless I was really, really certain of my claim, because to get such a claim wrong is equally egregious to denying a true genocide, in my opinion.
I thought these people were the ones that were all about most effective applications of altruism? Or is that a different crowd?
(Not a "gotcha". I really want to know.)
I don't know rationalism too well but I think it was a historical philosophical movement asserting you could derive knowledge by reasoning from axioms rather than observation.
The primary difference here is that rationality mostly teaches "use your reason to guide what to observe and how to react to observations" rather than doing away with observations altogether; it's basically an action loop alternating between observation and belief propagation.
A prototypical/mathematical example of a pure LessWrong-type "rational" reasoner is Hutter's AIXI (a definition of the "optimal" next step given an input tape and a goal), though it has certain known problems of self-referentiality. Though of course reasoning in this way does not work for humans; a large part of the Sequences is attempts to port mathematically correct reasoning to human cognition.
You can kind of read it as a continuation of early-2000s internet atheism: instead of defining correct reasoning by enumerating incorrect logic, ie. "fallacies", it attempts to construct it positively, by describing what to do rather than just what not to do.
Shortly: believing what is true, and choosing the actions that lead to the things you value.
If the sky is blue and you can verify that by looking at the sky, it is reasonable to believe that sky is blue, and it is unreasonable to believe that the sky is green just because some authority or your favorite political party said so. If you know that eating poison would kill you, and if you want to live a long life, it is reasonable to avoid the poison, and unreasonable to eat the poison.
These are separate skills, because many people know what to do, and yet don't do that, or are good at following their beliefs, but the beliefs happen to be wrong. Studying the rational beliefs is called "epistemology", studying the rational actions is called "decision theory".
.
The word "rationalism" is used by many people in different ways: https://en.wikipedia.org/wiki/Rationalism_(disambiguation)
In internet discussions inevitably someone mentions the 17th century definition of "rationalism" as opposed to "empiricism" as the only historically valid meaning of the word. But for example, the approach of Karl Popper is often called "critical rationalism", and although the Less Wrong philosophy is different from Popper's, it is closer to him than to the 17th century "rationalists".
(The difference between Popper and Less Wrong in a nutshell: Popper treats arguments in favor of a theory, and arguments against a theory, as two fundamentally different things that follow different rules. Less Wrong treats all arguments the same way, using Bayes' Theorem. In practice, the difference is smaller than it might seem, because Popper's main concern was to never treat a theory as a 100% truth, especially when there is evidence against it, and Less Wrong agrees that you should never treat any theory as 100% likely. The advantage of the Less Wrong approach is that you can also apply it to probabilistic theories. For example, one person says that a coin is fair, another person says that actually the coin is 55% likely to come up heads, and 45% likely to come up tails. I have no idea how you would decide this problem from Popper's perspective, because any experimental result is kinda compatible with both theories; it's just that the more coinflips you make, the less likely some of those theories become, but there is no clear line when you should call one of them "falsified".)
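A rough sketch of how that coin comparison looks in the Less Wrong style: just track how strongly the data favor one hypothesis over the other, with no hard "falsified" cutoff. The flip sequence and the equal priors below are made-up illustration choices:

```python
from math import prod

flips = "HHTHHHTHTHHTHHHTTHHH"  # made-up data: 14 heads, 6 tails

def likelihood(p_heads, flips):
    """Probability of this exact flip sequence if heads comes up with probability p_heads."""
    return prod(p_heads if f == "H" else 1 - p_heads for f in flips)

fair = likelihood(0.50, flips)
biased = likelihood(0.55, flips)
print("likelihood ratio (biased : fair) =", round(biased / fair, 2))
# With equal priors, the posterior odds equal this ratio. More flips just push the
# ratio further in one direction; no single flip ever "falsifies" either theory.
```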
.
I think this is the kind of thing where you simply can't make everyone happy, no matter what you do. As an analogy, imagine a world where every time you say e.g. "American president", at least three people in the thread remind you that actually Donald Trump is not a president of the entire continent, only of a part of the North America. So the next time you pay extra attention and say carefully "the president of the United States", and then everyone is like: "you mean the American president, why don't you speak simply?".
There is a community around Less Wrong. Whatever we call them, we should choose the name so that it is obvious that we refer to them, and not to the 17th century philosophers who believed that evidence is not necessary. Things are real, words are just labels.
At the beginning, it was just a few dozen people from different parts of the planet, reading the same blog. They referred to themselves as "aspiring rationalists". (As a label that applies to an individual who aspires to be more rational.) I would be okay to use this label, but apparently many people are too lazy to say "aspiring" all the time, and when you only say "rationalists", (1) it sounds smug, as if you believe that you already are perfectly rational, rather than you are trying to become more so, and (2) inevitably, some outsider will mention the 17th century philosophers. I think people used "Less Wrong community", "rationality community", "LessWrong-style rationalists", "x-rationalists", and maybe a few other words. But there was always pushback against using "rationalist" without any adjective.
As a long-time member of the community myself, I don't have a problem with anyone using any of these labels. I know what you mean, and I am not going to play dumb. It's the non-members who need to coordinate on a standard label for us, so that they are not confused about who they are talking about. And I am even okay with them choosing a label that we don't use. (I am doing the same to other groups, e.g. I say "Mormons" instead of "The Church of Jesus Christ of Latter-day Saints".) If the world decides to call us "rationalists", so be it... but then don't blame us for using the same label as the 17th century philosophers, because we don't. I think that "rationality community" or "Less Wrong community" are both nice options. (Just please don't call us TESCREAL because that's a crazy conspiracy theory.)