Disclaimer: For ease of writing, I say “effective altruism says this” or “effective altruists believe that.” In reality, effective altruism is a diverse movement, and many effective altruists believe different things.
My very surface level exploration of the ideas was instrumental in getting me out of the Catholic Church, which was extremely valuable to my life! So: full marks for that.
At other times I have felt that the need to use special methods all the time instead of just normal ones that people do already feels like extra work. Society has come up with lots of scripts to deal with the most common problems human beings regularly have, and many or most of them work. Glomming onto new scripts from this special source felt too much like joining a new religion, and I was not up for that.
In short, good for deprogramming from awful beliefs, if you commit to actually trying to be rational and following the answers you come up with. But nothing I learned subsequently quite equaled, "find out what the expert consensus in a field is and believe that, unless you are also an expert in that field." It was good advice, which I think I heard from Scott, and it got me vaccinating my kids. But I don't think all rationalists follow that one. There's a tendency to assume that being very smart and following rationalist techniques should equal or surpass actual expertise, and I can see time and time again that it really, really doesn't.
That didn't work so well during covid, where the experts were just wrong and admitted to lying, or when I took a medication that left me crippled years after I stopped taking it, completely destroying everything I cared about in life.
That didn't work so well with China or wokeness/DEI.
> "I have felt that the need to use special methods all the time instead of just normal ones that people do already feels like extra work. Society has come up with lots of scripts to deal with the most common problems human beings regularly have, and many or most of them work. Glomming onto new scripts from this special source felt too much like joining a new religion, and I was not up for that."
First - the point is to create your own scripts and find ones that work for you.
Second - just because a script works doesn't mean it can't be significantly improved.
Finally - you might be interested in the behavioral economic concept of bounded rationality, which means that our resources are limited, so we can't perfectly optimize. We have to decide, based on incomplete information, what to look into trying to improve, and how much time to spend looking into improving it, and then whether what we found is worth pursuing given the cost/benefit analysis.
So if you don't think something is worth spending time optimizing, that's fine. But maybe something else has low-hanging fruit.
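A rough numeric sketch of that cost/benefit point, in the spirit of a back-of-the-envelope calculation (the `worth_optimizing` helper and all the numbers are invented for illustration, not anything from the comment above):

```python
# Toy bounded-rationality check: improving a routine is only worth it if the
# expected time saved over the period you'll keep using it exceeds the time
# spent researching and adopting the improvement. All figures are made up.

def worth_optimizing(minutes_saved_per_use: float,
                     uses_per_week: float,
                     weeks_of_use: float,
                     research_minutes: float) -> bool:
    """True if the expected savings exceed the up-front research cost."""
    expected_savings = minutes_saved_per_use * uses_per_week * weeks_of_use
    return expected_savings > research_minutes

# A script run daily for a year: even a 2-minute saving pays for 5 hours of research.
print(worth_optimizing(2, 7, 52, 300))        # True  (~728 minutes saved vs. 300 spent)

# A chore done twice a year: a deep dive probably isn't worth it.
print(worth_optimizing(10, 2 / 52, 52, 300))  # False (~20 minutes saved vs. 300 spent)
```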
I really liked this blog post, and am waiting for the next installment. Myself, I arrived at EA and Rationalism very late (about 3 years ago) and while I have been reading a lot about both, I profit from clear explanations like yours.
In many ways, I feel like both EA and Rationalism are very appealing (I can unironically describe myself as EA and Rat adjacent): a bunch of nerdy, math-loving people keen on discovering truth and doing good sounds right up my alley; but then I find myself flatly disagreeing with some core values and principles of either group.
Take Bayesianism: My instinct is to *really* dislike it when compared to Frequentism. And this is, I suspect, entirely my fault, and a result of my own experiences and weird psychological make-up: I am of a dogmatic disposition I have to fight with all the time, probably a result of a strongly religious and Catholic upbringing, coupled afterwards with a no less strong attachment for years to Marxism, and with academic studies in the Humanities which mostly resulted in a loathing for postmodernism, relativism and 'anything goes' discourse. When I see Math being treated as subjective guesswork, all my alarms at bullshitting with esoteric obscurantism with a scientific façade start ringing (which I don't think is a fair assessment of Bayesianism or Bayesians, but it is definitely what I feel). In fact, what attracted me most to math is the search for some field where completely irrefutable, unquestionable, unarguable Truth (or the closest thing attainable by humans) can be found. So when I read 'this kind of prediction is, in a certain sense, the thing that knowledge is', I just feel that if such be knowledge, I am really not much interested in it.
There's a sense in which Bayesian probability is the opposite of subjective guesswork: it includes the idea that, given what you already know, when you encounter new information, there is exactly one mathematically correct way to update your beliefs to take this new information into account, and anything else is either an approximation or simply wrong.
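For what "exactly one mathematically correct way to update" cashes out to in the simplest case, here is a minimal sketch of Bayes' rule for a single yes/no hypothesis (the numbers are arbitrary placeholders):

```python
# Bayes' rule for a binary hypothesis H given evidence E.
# Once you fix P(H), P(E|H), and P(E|not H), the posterior P(H|E) is fully
# determined; any other answer is an approximation or an error.

def bayes_update(prior: float, likelihood: float, alt_likelihood: float) -> float:
    """Return P(H|E) given P(H), P(E|H), and P(E|not H)."""
    evidence = likelihood * prior + alt_likelihood * (1 - prior)
    return likelihood * prior / evidence

# Arbitrary example: start at 30%, then see evidence three times as likely
# under H as under not-H.
print(bayes_update(0.30, 0.9, 0.3))  # 0.5625
```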
Yeah, I wasn't trying to dunk on Bayesianism, which I respect as math (and which has proven to be much more pragmatically applicable, as in ML). But there is a sense in which the choice of priors inevitably allows for (and usually implies) a lot of subjectivity, ideally to be updated with new information. IRL though, lots of people don't update, find rationalizations for not doing it and/or start with priors that are so skewed in one direction it won't be easy for them to converge in a short amount of time. And from an outsider's point of view, hearing something like 'My p(doom) is 72%' can feel (emphasis on 'feel') like 'This person is giving a very precise numerical claim which is unbacked by irrefutable evidence. It's just vibes and a lot of speculations, ifs, maybes'. And if you've been in the Humanities, you've seen scenarios play out with people using pseudo-math and esoteric babble meant to force you to acquiesce to the cognoscenti's bs ideological and subjective preferences, making you extremely suspicious.
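A toy illustration of the "skewed priors converge slowly" point, using a coin-flip stand-in rather than anything as fraught as p(doom) (all numbers invented; `posterior_fair` is just a made-up helper):

```python
# Two observers watch the same coin come up heads 7 times in 10 flips.
# Both update by Bayes' rule, but one starts from an extremely skewed prior,
# so after identical evidence their beliefs are still far apart.

def posterior_fair(prior_fair: float, heads: int, flips: int,
                   biased_heads_prob: float = 0.9) -> float:
    """P(coin is fair | data), comparing 'fair' against one biased alternative."""
    tails = flips - heads
    like_fair = 0.5 ** flips
    like_biased = biased_heads_prob ** heads * (1 - biased_heads_prob) ** tails
    numerator = like_fair * prior_fair
    return numerator / (numerator + like_biased * (1 - prior_fair))

print(posterior_fair(0.5, 7, 10))    # ~0.67: an even-handed prior gives a moderate belief
print(posterior_fair(0.001, 7, 10))  # ~0.002: the skewed prior still dominates the same data
```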
I don't think EA has ever fully recovered from the reckless arrogance of the Sequences, and in particular the dismissive attitudes toward domain level experts contained within. (see topher's post here for examples: https://topherhallquist.wordpress.com/2015/07/30/lesswrong-against-scientific-rationality/)
I think this led to an epistemologically isolated culture, where ideas created in-community are disproportionately favored over ideas that have withstood scrutiny in the wider intellectual world. It's not as bad in contemporary EA as it is in the Sequences, but it's still a bad enough problem that I think it's severely distorting people's beliefs.
I agree EA is epistemologically isolated. I don't think this has much of anything to do with "dismissive attitudes toward domain-level experts" in general, but privileging certain academic fields which EAs have affinity with for historical, cultural, demographic, sociological, and ideological reasons (e.g. neoclassical economics, analytic moral philosophy) over others (e.g. political economy, environmental sciences, experimental economics, heterodox economics, sociology, anthropology, history, <anything> studies, analytic political philosophy, continental philosophy).
https://bobjacobs.substack.com/p/the-ea-community-inherits-the-problems
https://bobjacobs.substack.com/p/how-democratic-is-effective-altruism
sometimes I do wonder what an EA downstream of, say, continental philosophy, heterodox economics, whatever, would be
I threw in continental philosophy because I plan to write about it, but there are some adjacency segments that are notably influenced by it – Scott A majored in philosophy with a continental focus I think, the cyborgists like Janus etc. use post-structuralism to study AI alignment in LLMs, and of course the post-rationalists.
The important case study to me is the French adjacency which you can find mostly on YouTube¹. While it shares Anglo rats and EAs' reliance on analytic moral philosophy, cognitive science, and game theory, it is also largely downstream of traditional scientific skepticism, heterodox economics (thanks to Gilles of https://youtube.com/@Heu7reka), and environmental sciences (thanks to Rodolphe of https://youtube.com/@lereveilleur and Maxime of https://youtube.com/@Philoxime), and analytic political philosophy (Maxime again).
As either causes or consequences and probably both: They/we are much more open to concerns generally dismissed by Anglo EAs like short-term AI ethics, democratic and civil liberties backsliding, global catastrophic risks from climate change, etc. We have next to no overlap with tech industry circles. We have significantly more overlap with mainstream left-environmentalist and animal rights circles (lots of things I could pick out here but e.g. Shaïman (https://youtube.com/@LeFuturologuePodcast) estimated that probably 20% of the people he met at EA France were communists, and that's someone who is reasonably well-connected to Anglo longtermist circles talking about the actual official EA org).
¹: This choice of medium itself is noticeably specific enough to that adjacency that it shows up like this in global EA surveys: "Of the 17 responses (10%) which referenced YouTube, 9 mentioned French channels called Mr. Phi and Science4All and 2 mentioned a Polish channel called Everyday Hero." https://forum.effectivealtruism.org/posts/tzFcqGmCA6ePeD5wm/ea-survey-2020-how-people-get-involved-in-ea#Where_People_First_Hear_of_EA__Other
huh!! are any of the French rat-adjs followable somewhere non-video, like Twitter or something?
https://bsky.app/starter-pack/philoxime.bsky.social/3larjt2vyqn2c
https://monsieurphi.com/
https://science4all.org/about/
https://tournesol.app/about
https://www.securite-ia.fr/
Hot DAMN thank you for sharing that link, incredibly useful. You might be interested in my work (and forthcoming book) on SBF. One link of many: https://davidzmorris.substack.com/p/so-bored-im-going-to-die-the-miseducation
This is kind of off-topic, but...
>In this tiny aside we have a peephole into Bankman-Fried’s impoverished view of reality, and foreshortened understanding of human life itself. Gift-giving is a practice of empathy…the idea that one should simply articulate one’s wants or needs implies a mentally crystalline homo economicus…The radical rejection of gift-giving, then, betrays a reduction of human life and relationships to mere exchanges of value.
This is what bugs me about the way a lot of neurotypicals talk about empathy. You conclude that this human being is incapable of fully experiencing human relationships... because he has different preferences about gift-giving than you do? Just because someone's emotional expressions and relationships look different from yours doesn't mean they're any less real, and I think it's possible to criticize someone's harmful actions without dehumanizing them because they also did some odd-but-harmless things. Empathy isn't a magic spell, it's an unreliable instinct. Overreliance on empathy means that when people encounter someone who's so different from themselves that they can't empathize, as SBF clearly is from you, it's tempting to assume that their mind is just missing a piece, even if their inner life is every bit as full and rich as yours.
To quote this blog post (https://nostalgebraist.tumblr.com/post/144177230539/nab-notes-empathy-this-note-gets-into-more): "[T]he empathic bridge fizzles because, in some sense, the other person was too empathetic. More precisely, they were too secure in their identity as empathetic beings. They didn’t problematize empathy, think of it as a messy through-a-glass-darky thing which depends on mutual effort and can have unexpected twists.…If I look at you and think I know who you are, how you feel, what you value, what you worry about, if I can read you like an open book, then how can I encounter you as a complex human being?"
Kind of agree, but a problem is that it is all unreliable instincts all the way down (meaning I have a really hard time, for example, understanding why EAs and Rats follow the Utilitarian rabbit hole of sacralizing one such instinct - what Haidt would call the Care/Harm foundation - as the central moral axiom from which to derive the whole).
Yeah, this. They're obviously incorrect in this.
For what it's worth, there are plenty of us rationalists who won't push people off trollies!
My favorite counterexample to utilitarianism is gang rape - clearly wrong and clearly positive-utility.
I think the dismissive attitudes towards domain-level experts are a much-needed correction against a society and culture that has gone way, way too far the other way. I think those attitudes are completely justified.
I think as a Bayesian principle, we should have a much higher prior for ideas created in-community; they are simply more likely to be correct than outside ideas.
I don't understand all the EY-hate and insults here, or all the rationalist-hate. Makes me very sad. :(
Your ex's 2009 post was **ridiculous**. It was rightly criticized in the comments and it's based on numerous falsehoods and distortions, chiefly not understanding what extreme rationality is. It's not a higher level of rationality above normal rationality; it's just consistently applying rationality to everything in your life. People who actually tried the techniques, or actually attended bootcamps, explained in the comments and elsewhere how wrong Scott was and how helpful it was.
You overrely on the replication crisis, as though the Sequences were totally based on studies that didn't replicate. They aren't.
I don't understand the problem with EY having a twitter, and your snark about that obviously being a bad thing makes it unclear how much of this is seriously intended vs being SneerClub.
I would really love to know what "claims about both rationality and AI" in the Sequences "turned out to be hilariously false." I don't think this is remotely true, especially about AI.
I would also love to know what you don't like about EY's twitter.
Ozy also has a twitter. I think this is "EY is like the rest of us" snark.
Great post, Ozy. Thanks.
A very useful project would be to make a list of which Sequence posts have held up & are still well worth reading, which are now shown to be wrong thanks to the replication crisis, etc. I appreciate Ozy's links in this post but would love a longer look at it (by Ozy, or by anyone—actually if multiple people did it, it would be great, since then we could compare their (no doubt different) lists).
They have all still held up and are all worth reading.
None has been "shown to be wrong."
To pick a few examples: The Futility of Emergence (https://www.lesswrong.com/posts/8QzZKw9WHRxjR4948/the-futility-of-emergence) is snarking at the set of ideas about how to make an AGI that would eventually turn into scaling laws. Neural Categories (https://www.lesswrong.com/s/SGB7Y5WERh4skwtnb/p/yFDKvfN6D87Tf5J9f) includes similar snarking. Priming and Contamination (https://www.lesswrong.com/s/pmHZDpak4NeRLLLCw/p/BaCWFCxBQYjJXSsah) uncritically claims we are tossed about by priming.
Do not cite the Deep Magic to me, Witch, I was there when it was written.
What? No, this is just plain wrong.
The first post has nothing whatsoever to do with scaling laws. You have completely misunderstood it. He’s criticizing the idea that emergence is an explanation for something. If you say something is an emergent phenomenon, that describes it. It doesn’t explain it.
Analogy: if we raise a bird in captivity and provide it with materials, it will build a nest. No one has to teach it how; it never saw any examples. It just knows.
Now we have a word for that, which is “instinct.” The bird’s behavior can be described as instinctive. But that doesn’t explain how the hell the bird does it! Emergence is like instinct, it’s a description, not an explanation.
That’s it. That’s the point of the post. It has nothing to do with designing an AI based on scaling laws. It’s not snarking at that idea at all.
(As an aside, people are correct to snark at the idea that scaling laws could make an AGI. They have not and are unlikely to, and ChatGPT seems to have plateaued with lots of additional data, meaning we’ve squeezed as much out of scaling laws as possible. But that’s not what the post was about.)
The third one on priming certainly seems mistaken. The first one on emergence seems entirely correct; "emergence" is not a useful explanation of a system's behavior. The second one seems un-judgeable until we gain a much better understanding of how neural networks and human brains work, and it's so vague that it may be unfalsifiable even once that happens.
I certainly agree that many posts in the sequences were bad; I've also heard that he got many of his claims on quantum physics wrong, though I don't know enough about the field to verify. But you make a pretty strong claim when you say they are "full of claims about both rationality and AI that turned out to be hilariously false", and your claim seems to lack grounding? At best you've presented *one* claim about rationality that was false, and it's pretty understandable how he got that one wrong; priming wasn't just one bad study, it was an entire ecosystem of bad studies that seemed to reinforce each other in the same way that good studies reinforce true things.
TBC I don't mean to say that Eliezer should have figured out that priming was wrong; it would have been impressive if he did but it's not a mark against him that he didn't.
The emergence stuff IMO is Eliezer making a point about rationality in a way that is actually a dig against people he doesn't like. (He does that a lot.) It's true that "emergence" isn't an explanation, but also AI did in fact emerge from (heh) relatively simple models when you just threw an enormous amount of compute at them. A lot of the Sequences reflect Eliezer's belief at the time that GOFAI was the correct approach to AI, which turned out not to be the case. Again, I don't think Eliezer is at fault for this-- he made predictions that were reasonable at the time.
The Sequences are nearly twenty years old! If we're not looking at them and going "wow, they're full of wrong stuff in retrospect", then that would mean that Eliezer's entire project failed.
As far as the third post, on priming and anchoring bias, it's completely correct.
Anchoring bias is robust and well-established by many studies. It replicates.
We are tossed about by priming. That's also been established. Just because a small number of priming studies did not replicate doesn't mean the entire field is bunk.
You seem to be misunderstanding the replication crisis - it was a crisis because we had to rethink the process. It was about the process we used, not about the small number of inconsequential studies that didn't replicate. The latter was not the important part and did not affect long-standing and established knowledge in the field.
With a small number of exceptions easily observed through introspection, priming is a result of underpowered and p-hacked studies. See, for example https://replicationindex.com/2017/02/02/reconstruction-of-a-train-wreck-how-priming-research-went-of-the-rails/ (which has Daniel Kahneman saying as much in the comments!).
At least some anchoring is real: see https://replicationindex.com/2021/06/04/incidental-anchoring-audit/
"Anchoring effects have been demonstrated in many credible studies since the 1970s (Kahneman & Tversky, 1973)."
Anchoring is a form of priming, so not all priming is bullshit.
Kahneman also says "I still believe that actions can be primed, sometimes even by stimuli of which the person is unaware. There is adequate evidence for all the building blocks: semantic priming, significant processing of stimuli that are not consciously perceived, and ideo-motor activation. I see no reason to draw a sharp line between the priming of thoughts and the priming of actions. A case can therefore be made for priming on this indirect evidence. But I have changed my views about the size of behavioral priming effects – they cannot be as large and as robust as my chapter suggested."
So I don't think any change is needed in that LW post.
Finally, I'm not sure what you mean by "With a small number of exceptions easily observed through introspection" - are you saying you think introspection is an accurate way of determining these things? It pretty clearly isn't, and there's robust research on the unreliability of introspection.
Upvoted for CS Lewis quote! But I was also on LW when these were written.
> I was here for all of it: I’ve been reading LessWrong since it was part of Robin Hanson’s blog Overcoming Bias.
Wait, didn't you once say you were brought in by HPMOR? (I remember this surprised me at the time, as I thought you'd been around longer!)
Anyway more on topic I feel like I should link here my recent DW post on the significance of Eliezer Yudkowsky: https://sniffnoy.dreamwidth.org/588263.html But, I guess in that post I didn't really give him credit for all the rationality writing, even though a lot of it was quite good and important. I didn't focus on that because 1. that wasn't the point of the post and 2. I was thinking of that as largely continuous with earlier rationality writers. Which it is, but some of it focuses on things they didn't (e.g. beware of categories). So yeah.
I don't think that's true at all. It was never a niche subject among AI pioneers. Turing and Minsky already believed in it. The paperclip maximizer is more or less lifted from Minsky's thought experiment about an AI being tasked to solve the Riemann hypothesis (mentioned in Artificial Intelligence: A Modern Approach). The transhumanists just carried the torch during the AI winters.
Interesting! Do you think I'm correct about the mood at the time among transhumanists, science-fiction fans, etc., at large, that largely they hadn't heard of such concerns except in easily dismissible anthropomorphic form, expressed by general luddites? It seems to me like even the serious, nonanthropomorphic version of this is much older, it wasn't popular until Yudkowsky. Am I mistaken there?
I wasn't there back then so I'm not sure. I think Bostrom, Kurzweil, James Hughes, Bill Joy, etc. were developing the proto-longtermist tradition in parallel around the same time, but EY was the one to defend it among, and against, the transhumanists interested in AGI in particular whom he was meeting at the Singularity Summits (Goertzel, Legg, etc.) during the latter days of the second AI winter. Obviously science fiction fans would be aware of Asimov (who considered Minsky one of the two smartest people he ever met, alongside Carl Sagan) and 2001 (which Minsky was a technical consultant for).
I mean sure Minsky was well-known but I don't believe that thought experiment was, I don't think I ever heard about it before encountering LW. Did it appear in anything widely-read?
Kurzweil is the sort of person I'm thinking of who y'know just generally assumed AI would of course be good.
It appeared in Section 26.3 (titled "The Ethics and Risks of Developing Artificial Intelligence") of the 2003 edition of Artificial Intelligence: A Modern Approach, the reference textbook on AI (not sure if it was already in earlier editions).
Yeah not sure about mentioning Kurzweil here. I think his x-risk worries were about grey goo and stuff, and that kinda faded alongside the rest of molecular nanotechnology, as it turned out that the molecular nanotechnology we run on, ruthlessly optimized by billions of years of evolution, is probably as efficient as you can get under the laws of physics.
Oh wow, I pretty strongly disagree. Evolution is absurdly inefficient. A nanobot programmed to consume all material and use it to copy itself would be far more efficient than anything evolution came up with.
Ugh, a word "if" went missing there. It should say "even if the serious..."
By anthropomorphic form do you mean, like, a wish-granting genie, golem, or sorcerer's apprentice?
Also, don't take your anthropomorphic form to an ATM machine to put your PIN number in - you might get the HIV virus.
You might want to read the post of mine this is actually responding to, but, in short -- we're not talking about anthropomorphic AIs but about anthropomorphic arguments about AI. "AIs will be angry about being enslaved and revolt against us" is an example of an argument based on anthropomorphizing the AI. I'm saying a large part of what makes Eliezer Yudkowsky significant is that he popularized non-anthropomorphic arguments for AI being dangerous.
Oh, I see. That's interesting, because that would make the legends I referred to non-anthropomorphic!
Yes -- that sort of thing generally would have gotten a different counterargument; I guess I failed to include that in the post, I didn't think of it. I would say that the general response to those sorts of concerns would be either "well we wouldn't do that then, it's a computer, we wouldn't program it that way" or "it's an artificial intelligence, it'll understand what we mean". And of course a fair bit of what Yudkowsky wrote is addressing those two things, particularly the second -- a whole lot of space is dedicated to "the hard part isn't getting it to know what we mean, the hard part is getting it to *care*". But I don't think that's a point that was widely appreciated before. (With the first, the problems are a bit more obvious...)
Can you elaborate on Minsky’s thought experiment?
https://en.wikipedia.org/wiki/Instrumental_convergence#Hypothetical_examples_of_convergence
This comment was removed for praising a Ziz blog post while expressing violent sentiments. I presume these sentiments were not meant literally, but given events I have an absolute zero tolerance policy for *any* even conceivably pro-violence sentiment attached to agreement with *anything* from Ziz's blog. Sorry. I dismissed a lot of people's statements as rhetorical exaggerations and then they committed murder, I am flinchy.
To be clear, I was not and am not endorsing either Ziz or violence in any way, and I specifically and strongly oppose both.
Fair enough. Mind if I repost just the first half without the Mencken quote?