“Milk before meat” is a phrase commonly used in Mormon and ex-Mormon communities. It refers to 1 Corinthians 3:1-2:
Brothers and sisters, I could not address you as people who live by the Spirit but as people who are still worldly—mere infants in Christ. I gave you milk, not solid food, for you were not yet ready for it. Indeed, you are still not ready.
In practice, “milk before meat” means telling new converts about the more palatable doctrines of the LDS church but leaving out the weirder and less popular ones until the member is already involved.[1] You can tell people that the Latter-day Saints have a living prophet and that members are going to live with their families for all eternity, but you leave out the weird underwear and the thing where men get their own planet.
“Milk before meat” comes from an understandable impulse. You don’t want people to reject your ideas out of hand because they sound weird or implausible. You want to meet people where they are. You want to show them those aspects of your beliefs that they already agree with or that are logical implications of things they believe.
But it’s still really, really creepy. You’re deliberately lying to people by omission because you think they can’t handle knowing everything you believe. You’re misinforming them about the community they’re joining: they don’t get a chance to assess all Mormon doctrines before they decide whether to be baptized. You’re not trusting people to reason through the situation for themselves.
While the absurdity heuristic can lead people to dismiss correct ideas, “milk before meat” doesn’t actually make people less biased. It just biases them in the direction of believing you. By the time they’re far enough into the community to hear about the weird stuff, it might feel awkward to bring up their questions. Maybe they have a bunch of new Mormon friends whose good opinion they want to keep. Maybe they admire some Mormons and therefore find the things those Mormons believe more plausible, whether or not they actually are. Maybe they identify as Mormon and it’d be embarrassing to admit to being wrong.
It’s also just… not very good at convincing the right people? I think this matters less to Mormons because their target audience is “everyone.” But if you’re trying to recruit people who want to have their own planet, it isn’t going to work very well to aim at people who are really into the idea of spending eternity with their kids. These are different groups. Lots of people who want their own planet don’t even like kids.
Some effective altruists do “milk before meat” and they should stop.
To their credit, major effective altruist organizations basically don’t do “milk before meat”. The Centre for Effective Altruism’s Introduction to Effective Altruism covers all cause areas fairly, and its introductory program has no fewer than three weeks on “weird effective altruist” issues. 80,000 Hours clearly communicates that it prioritizes risks from artificial intelligence, biorisk, effective altruist movement building, and global priorities research. I don’t know whether they’re consciously aware of this problem or are just trying to tell the truth about their beliefs, but either way it’s praiseworthy and a positive sign for the effective altruist movement.
Nevertheless, personal contacts are the most common way for people to hear about effective altruism, and in my experience individual people are much more likely to do “milk before meat.” In particular, I think a lot of effective altruists who focus on “weird effective altruist” causes—especially, but not exclusively, artificial intelligence—are likely to pitch effective altruism to other people as something that focuses on global health and development. I think that’s dishonest, manipulative, and ultimately harmful to effective altruism.
To be clear, I don’t think anyone is behaving maliciously. Like I said above, it’s very natural to want to meet people where they’re at. If you prioritize artificial intelligence, you’re usually not, like, against people donating money to GiveWell top charities. If you’re not going to get them to worry about the AI apocalypse and you are going to get them to give money to the Against Malaria Foundation, it’s very logical to try to convince them of the goodness of malaria nets.
However, if lots of people think like this, newcomers are going to be systematically wrong about what the effective altruism movement is doing. They’re attracted to the Malaria Is Bad community, and then we bait-and-switch them into the Human Extinction Is Bad community.[2] This is bad because:
1. The whole point of effective altruism is that people should believe true things about how to do good things in the world, and it is bad if people instead believe false things.
2. The people who are attracted to the Malaria Is Bad community are not necessarily the people you need in the Human Extinction Is Bad community, and in fact are probably different in a whole bunch of ways.
3. The whole reason that “milk before meat” works is that once a person is in a community they tend, for various social reasons, to find things that the community believes to be more plausible. That is not a truth-seeking dynamic: it works as well for false things as for true things.
I think, for reasons of good epistemic hygiene, it is bad to deliberately cause people to be biased in favor of things you believe. That’s still true if you’re doing it to counteract some other bias.
So what are the implications? First, if you’re talking to someone about effective altruism, be extremely conscientious about representing all four primary cause areas: global health and development, animal welfare, existential risk, and meta-level work such as movement building. You don’t have to say you agree with them: you might say “some effective altruists think that we should work on eliminating factory farming” or “some effective altruists are worried about things that would kill off all of humanity, especially artificial intelligence and pandemics.” If you’re giving examples, try to give examples from many different cause areas. Even a caveat like “I’m going to talk about global poverty, because I think you’re most interested in it, but a lot of effective altruists prioritize other issues such as…” is a big help, I think.
Second, practice discussing existential risk and other “weird effective altruist” issues in a way that sounds less absurd. I’ve had excellent luck talking about biorisk in the context of the covid-19 pandemic. You might say something like “we were very lucky that covid wasn’t much deadlier—if it had been, many, many more people could have died. Maybe it could have wiped out all of humanity![3] And yet very little money is put into preventing future pandemics.”
Similarly, I often talk about risk from artificial intelligence by connecting it to an experience everyone has had: misbehaving computers. “You know how your computer does exactly what you tell it to, even when you want it to do something else? These days, computers are able to do more and more things. If a very powerful computer does what we tell it to do instead of what we want it to do, it could be catastrophic—especially if it knows to try to keep us from turning it off!”
Of course, adjust these to your own personal style, and I’m not making the claim that these are the best ways to make “weird effective altruist” issues sound normal. But if you have a couple of these in your back pocket, it will be less tempting to resort to “milk before meat.”
Third, at all times accurately represent your own viewpoints. If you think that the most important global problem is artificial intelligence, and you’re in a conversation about how to improve the world, say that you think the most important global problem is artificial intelligence. If you’re trying to convince someone to donate to GiveWell top charities but you prioritize something else, say “this isn’t actually the cause I support myself—I work on/donate to [different thing], and I’m happy to answer questions about it—but I think it’s the one that you’ll find most appealing.” I don’t care what you do in your personal life, but if you’re speaking about effective altruism you need to tell the complete truth.
Thanks to callmesalticidae for fact-checking my claims about Mormonism. All remaining mistakes are my own.
[1] Sometimes the LDS church even encourages people not to talk about certain doctrines with other long-term members.
[2] Which effective altruism kind of is right now, especially among the most involved and elite members.
[3] I know, I know, this is a massive oversimplification.