Disclaimer: For ease of writing, I say “effective altruism says this” or “effective altruists believe that.” In reality, effective altruism is a diverse movement, and many effective altruists believe different things. While I’m trying my best to describe the beliefs that are distinctive to the movement, no effective altruist (including me) believes everything I’m saying. While I do read widely within the effective altruism sphere, I have an inherently limited perspective on the movement, and may well be describing my own biases and the idiosyncratic traits of my own friends. I welcome corrections.
Yesterday, we talked about the Sequences’ influence on the effective altruism community. Today, we’re going to talk about the influence of the post-Sequences rationality community, particularly what I call “social epistemology.”
Social Epistemology
When last I left off, the rationality community was reeling from the replication crisis.
The replication crisis put much of the Sequences—and far more of the stuff that people were writing inspired by the Sequences—into question. But it also doesn’t, like, fill you with confidence about other people’s thinking abilities? Indeed, one lesson you could draw from the replication crisis is that a smart layperson can outperform tenured psychology professors using three heuristics about study design and a good dose of common sense.
As the replication crisis wore on, I saw less and less breathless reporting about priming studies and more and more explanations of how to critically read studies. The mark of a rationalist became familiarity with p-hacking, underpowered studies, sampling bias, lack of ecological validity, inappropriate generalizations, and of course that old favorite “doesn’t smell right.” It’s a fool’s game to try to be more rational than normal people, but it’s super easy to be more rational than social scientists.
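To make the p-hacking point concrete, here is a minimal simulation sketch with made-up parameters (twenty outcome measures, thirty subjects per group, a z-test approximation standing in for a t-test). Both groups are drawn from the same distribution, so any “significant” result is a false positive; a researcher who measures enough outcomes and reports whichever clears p < 0.05 will “find” something roughly 64% of the time.

```python
import random
import statistics

random.seed(0)

def fake_study(n_outcomes=20, n_per_group=30):
    """One hypothetical null study: every outcome is pure noise, so any
    'significant' group difference is a false positive."""
    hits = 0
    for _ in range(n_outcomes):
        a = [random.gauss(0, 1) for _ in range(n_per_group)]
        b = [random.gauss(0, 1) for _ in range(n_per_group)]
        # Two-sample z statistic (a rough stand-in for a t-test at n=30).
        se = (statistics.variance(a) / n_per_group +
              statistics.variance(b) / n_per_group) ** 0.5
        z = (statistics.mean(a) - statistics.mean(b)) / se
        if abs(z) > 1.96:  # two-sided p < 0.05
            hits += 1
    return hits

# Fraction of null studies that can report at least one "finding";
# analytically about 1 - 0.95**20 ≈ 0.64.
studies = [fake_study() for _ in range(1000)]
print(sum(h > 0 for h in studies) / len(studies))
```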
Over time, “let’s be critical of studies!” evolved into a different understanding of what rationality is. The shift parallels the transfer of the rationality community Mandate of Heaven from CFAR to Lightcone Infrastructure.1 CFAR primarily ran workshops; it taught individual people thinking techniques for them to use on their own, to make their individual beliefs more accurate and their individual plans more effective. Lightcone Infrastructure runs the website LessWrong and the venue Lighthaven; it sees itself as building a community of people who share ideas, criticize each other, and find the truth together.
In short, the rationalist community came to see truthseeking as a social project.
“I read this study and it’s bad, here’s why” isn’t primarily improving your own beliefs. It’s improving everyone’s beliefs. You are participating in the great, centuries-old Republic of Letters, where ideas stand and fall based only on their truth, where anyone can speak and anyone can criticize and we move together, slowly but surely, towards the truth.
Think about those cognitive biases I listed yesterday. You yourself are unlikely to notice whether you’re being unfair to people you dislike, getting in a bad mood because you’re hungry, or believing things only because they make you feel good about yourself. Other people, however, definitely notice those things. A dozen hours of introspection are nowhere near as good as a friend saying “I think you’re bitch-eating-crackers about Alice.”
If rationality is a social project, then the central rationalist virtues are social too:
Accurately representing the evidence for and against what you believe.
Proactively pointing out the weak points of your beliefs.
Accurately saying what you believe and why.
Saying what kind of evidence it would take to change your mind (and there’s always evidence that can change a good Bayesian’s mind; see the worked example after this list).
Changing your mind when such evidence is provided.
Accurately representing other people’s viewpoints.
Only criticizing the arguments and evidence other people are actually providing, not the arguments and evidence that would be easier for you to counter.
Taking criticism well.
Being willing to offer criticisms that other people won’t like, or of popular ideas and the ideas of powerful people.
Giving criticism clearly and unambiguously, yet kindly.
Attacking the argument, not the person.
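A minimal worked example of the Bayesian point above, with made-up numbers: suppose you believe a hypothesis $H$ with probability 0.9, and you then observe evidence $E$ that is four times likelier if $H$ is false than if it is true, say $P(E \mid H) = 0.2$ versus $P(E \mid \neg H) = 0.8$. Bayes’ theorem gives:

$$
P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E \mid H)\,P(H) + P(E \mid \neg H)\,P(\neg H)} = \frac{0.2 \cdot 0.9}{0.2 \cdot 0.9 + 0.8 \cdot 0.1} = \frac{0.18}{0.26} \approx 0.69.
$$

As long as you assign your beliefs probabilities strictly between 0 and 1, some observable evidence always moves them; only a probability of exactly 0 or 1 is immune to evidence, which is precisely what the virtue forbids.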
The traditional twelve virtues of rationality still have a place, of course. But when Eliezer wrote the original essay, “argument” was only one of them. I feel that if The Rationality Community As A Whole were writing it today, argument would make up at least eight of the twelve.
The Free Marketplace of Ideas
Social epistemology relies on the following premise: if everyone gets together and argues about what they believe, then people will wind up more likely to believe true things than false ones.
Of course, other traits might make beliefs more popular: people might be more likely to believe things that are easy to understand, or that cool people believe, or that a charismatic person argued for, or that flatter them, or that mean they don’t have to do any work. But, all else being equal, arguing will tend to favor true beliefs over false ones.
Indeed—social epistemologists say—argument is the only way of choosing your beliefs that reliably favors true ones over false ones. The truths people feel deep in their hearts regularly contradict each other; the Pope can say whatever he wants; Nazi bullets and Stalinist bullets shoot as well as liberal democratic bullets. But true arguments win.
Not everyone believes that argument tends to lead to truth. Neoreactionary writer Mencius Moldbug said that “Cthulhu may swim slowly. But he only swims left”: that is, the process of argument and debate inevitably favors left-wing ideas, even though (Moldbug believes) left-wing ideas are false. On the left, a similar idea often goes by the name of “the paradox of tolerance”: if we’re tolerant of the intolerant, then the intolerant people will take advantage and spread their ideas and destroy our tolerant society.2 Similarly, some leftists believe that beliefs are determined by a person’s class position and material interests, such that argument is a distraction from organizing the powerless to demand a just society.
If argument is the best way to find truth, then anything that shuts people up other than an argument makes us wronger. This includes the obvious uses of power to silence people: social ostracism, deplatforming, firing, government censorship, violence. But it also includes a bunch of behaviors that most people find completely unobjectionable—at least when they’re the ones doing it.
Effective altruists are opposed to shutting people up to a degree that people outside the community find hard to wrap their minds around.
Socially, in my experience, effective altruists try very hard to maintain friendships with people they disagree with. Of course, it’s impossible to make the community perfectly welcoming regardless of your views: for example, if you express unpopular viewpoints, you sometimes get eight people disagreeing with you at once, which is pretty unpleasant. But, as someone who had very heterodox views on AI for a long time, I don’t think I’ve ever had someone dislike me or think less of me for my views about AI. When I talk to other effective altruists with heterodox views, they express the same sentiment. You are significantly less likely to lose friendships in the effective altruist community for trying to drive humanity extinct than you are in, for example, danmei fandom for having unapproved opinions about the gender identities of secondary characters.
In my experience, effective altruists try to be equally tolerant about who receives jobs and grant funding. Unavoidably, people with certain viewpoints are less likely to get certain jobs: if you think that only humans are moral patients, you’re unlikely to get a job at Mercy For Animals. Effective altruists want to embrace a diversity of viewpoints, but they also want to hire people who are “value aligned” (i.e. also effective altruists). But even so I’ve noticed a remarkably low level of viewpoint discrimination: for example, despite my heterodox views, I have multiple times gotten to the final round for jobs where I was supposed to write about AI.
Even so, effective altruists know that people will hesitate to disagree with the consensus, so they try their best to reward criticism. Effective altruist groups have offered five-figure prizes for the best criticism of their work—twice. The Effective Altruism Forum maintains a tag for criticism of effective altruism, not to be confused with the separate tags for criticism of effective altruist culture, criticism of effective altruist causes, criticism of effective altruist organizations, or criticism of effective altruist work. It’s true that some top-voted pieces in the criticism tags are actually criticisms of other critics. But they also include harsh criticism like:3
Effective altruists across causes are making biased arguments in favor of their pet issues.
The largest funder in effective altruism is badly misallocating its funding.
Effective altruist organizations have overly exclusive admissions policies for conferences. (If this doesn’t sound harsh to you, you don’t realize how intense effective altruists are about conference admissions.)
The entire effective altruist approach to fighting global poverty is completely misguided.
The entire effective altruist approach to aligning artificial intelligence is completely misguided.
The entire effective altruist approach to everything is completely misguided. Stop maximizing shit.
Effective altruists are very intolerant of critics who criticize the core ideology.
Of course, effective altruists aren’t perfect at accepting criticism. Many of these articles come from prominent effective altruists like Scott Alexander or Holden Karnofsky and would likely get a colder reception from someone less famous. And effective altruists are far more likely to take criticism well if it comes from someone who understands the shibboleths.
But again, compare effective altruism to any other movement. Do you think any major trans rights organization has ever run a contest for the strongest criticisms of trans rights, and if they did would the trans community be happy about it? If there were a central forum for climate change activists, would it maintain tags in which people could post criticism of the scientific consensus on climate change? Do prominent feminist writers write articles saying “I think gender equality was mistaken, actually” and remain feminists in good standing, and if they did would people calmly and civilly discuss them? Do the Democratic Socialists of America let people post anonymous criticism of democratic socialism on their website, or would they immediately delete the posts and call the posters capitalist trolls?
In principle, effective altruists can say “we are welcoming of many different viewpoints about the world, as long as you share our moral values and behave according to our norms.” But in practice people’s beliefs do say something about who they are as people. You can be pleasant to interact with and be trying to destroy the world. But if someone thinks women like to be forced into sex, it’s a pretty safe bet whether they’re a rapist. And people who come to certain viewpoints about human cross-population genetic variation have this weird tendency to call people the N word.4
Worse, the expression of some viewpoints silences others. If the first speaker says that women are naturally less capable of abstract thought than men, or that Christians cling to a sky daddy because they don’t know how to deal with the real world, or that people with strong accents shouldn’t talk and make people bear the burden of trying to understand them, then the second speaker is unlikely to be a woman or a Christian or someone who isn’t a native speaker of English. To the extent that women, Christians, and non-native speakers have different views, their views are silenced—which potentially distorts the argument.
How to handle this subset of beliefs is controversial among effective altruists. The Overton window ranges from “innocent until proven to say slurs” to “if someone so much as mentions human cross-population genetic variation, ban them.” Nevertheless, the community is surprisingly adjacent to vocal “race science” advocates—not because effective altruists agree (they mostly have perfectly normie liberal views on race), but because there exist any effective altruists who won’t expel race science advocates from their parties and Discord servers on sight. This phenomenon is frustrating to the many effective altruists who despise race science and don’t want to be on edge waiting to see whether this guy will use the N word.
I do want to remark that—while these hot-button beliefs are less censored than in the mainstream left and center—effective altruists are far more willing to censor them than they are to censor criticisms of effective altruism. These beliefs aren’t very important to the main effective altruist project, and they tend to silence people. I sometimes see people assume that effective altruists are specifically tolerant of race science people for some reason, and I don’t think that at all reflects conditions on the ground.
In Which I Explain Why The Shrimp Welfare People Are Still Invited To Conferences
Sometimes people ask “why are all the effective altruist causes attached to each other? Why do the chicken welfare people share a movement with people who want to prevent a bioengineered pandemic from destroying the world, and what do either of them have to do with insecticide-treated bednets? Why can’t you just kick the shrimp welfare people out? People are turned off by shrimp welfare and that makes them not want to donate to bednets, so if you think about it, including the shrimp welfare people is actuarial murder.”
I think the answer is that effective altruists are the people who take this rationalist idea—of social epistemology, of truthseeking as a community project—and apply it to the idea of improving the world. If effective altruism were primarily a trying-to-get-people-to-donate-to-GiveWell-Top-Charities community, including the shrimp welfare people would be both a confusing decision and potentially actuarially murderous. But it’s not! It’s a truthseeking community, trying to answer the question of how you do good, within a particular framework of what it means to do good.5
Kicking out the shrimp welfare people is as shortsighted as saying “no one should research snail sex, people find that off-putting and we need them to think biology is good so they’ll take their vaccines.” The purpose of biology is to find out truths about life (including how snails fuck), not to eradicate vaccine hesitancy; the purpose of effective altruism is to find out truths about improving the world, not (just) to get people to donate to bednets.
I am sympathetic to people who want an activist community around global health, similar to the activist communities that already exist around environmentalism, feminism, LGBT rights, social conservatism, libertarianism, and so on. I support John Green’s nascent attempts to kickstart such a community and consider myself a proud #TBFighter ally. But the effective altruism movement isn’t that kind of community, because that kind of community depends on already knowing more-or-less what you’re supposed to be doing.
Effective altruism is the claim that we don’t know what we’re doing—but we can figure it out together.
1. You can also contrast the Sequences with Inadequate Equilibria, Eliezer Yudkowsky’s theory of why societies are irrational.
2. Karl Popper’s original point is actually different. He wants to censor people who reject the project of rational argument and want to resolve disputes through violence, i.e., a large percentage of the people who cite the paradox of tolerance as a reason to punch Nazis.
3. I’m looking at Top Voted (Inflation Adjusted).
4. I have long proposed the compromise that only people of color are allowed to have opinions about human cross-population genetic variation. This would improve the conversation immensely and, more importantly, anger everyone.
5. The Progress Studies community seems to be a similar rationalist-inspired truthseeking do-gooder community, with a different idea of what ‘good’ is. They do in fact get along with effective altruists, even though they keep serving us chicken when we go to their conferences.
Comments

> I support John Green’s nascent attempts to kickstart such a community and consider myself a proud #TBFighter ally.
Very happy to hear this. I'm also a #TBFighter supporter and was disappointed by how negatively EAs initially responded when I talked about it (e.g., https://forum.effectivealtruism.org/posts/SCYrASfriLCoFaqCZ/henry-john-green-video?commentId=3gHmTWPybK9pAfFLw )
I think part of the hesitation comes from EA’s intellectual culture. There’s often a deep skepticism towards collective action, something I’d attribute, once again, to EA inheriting some of the blind spots of mainstream economics: https://bobjacobs.substack.com/p/the-ea-community-inherits-the-problems
A more well-rounded sociological perspective shows that many of the most impactful efforts in global health have come from public movements, not just individual optimization.
EA doesn’t need to give up its epistemic humility to recognize that sometimes, collective action is how real progress happens. Maybe the success of the TBFighter campaign, combined with the broader political moment, will help EAs take the messy, hard-to-quantify tool of 'collective action' a bit more seriously.
> How to handle this subset of beliefs is controversial among effective altruists. The Overton window ranges from “innocent until proven to say slurs” to “if someone so much as mentions human cross-population genetic variation, ban them.” Nevertheless, the community is surprisingly adjacent to vocal “race science” advocates—not because effective altruists agree (they mostly have perfectly normie liberal views on race), but because there exist any effective altruists who won’t expel race science advocates from their parties and Discord servers on sight. This phenomenon is frustrating to the many effective altruists who despise race science and don’t want to be on edge to see whether this guy will use the N word.
> I do want to remark that—while these hot-button beliefs are less censored than in the mainstream left and center—effective altruists are far more willing to censor them than they are to censor criticisms of effective altruism. They aren’t very important to the main effective altruist project, and they tend to silence people. I sometimes see people assume that effective altruists are specifically tolerant of race science people for some reason, and I don’t think that at all reflects conditions on the ground.
No, I don't think that's true at all. As Thorstad noted at length in his sequence on the subject (https://reflectivealtruism.com/category/my-papers/human-biodiversity/), the discussion of the Manifest controversy on the Effective Altruism Forum clearly showed that large swathes of even the effective altruists who weren't actively involved in promoting race science held race science to be true, and treated the question of whether to invite race scientists to EA-adjacent conferences as a matter of truth-seeking v. reputation-seeking. It is *not normal* for an ostensibly academic-minded secular cosmopolitan philanthropic movement to be so reliably involved in controversies about ties to advocates for a single fringe extreme position held by virtually no academic biologist or geneticist, while simultaneously being closed to entire academic fields of study, and even to majority positions in the few academic fields it actively engages with. If this were a simple question of open-mindedness and free speech norms, then EAs would get into a similar number of controversies about creationists, tankies¹, and Flat Earthers. This is very obviously not the case.
¹: kudos to certain French and Belgian skeptic orgs still platforming Jean Bricmont btw