Disclaimer: For ease of writing, I say “effective altruism says this” or “effective altruists believe that.” In reality, effective altruism is a diverse movement, and many effective altruists believe different things.
I think of myself as an EA (or at least EA-ish), and I don't really agree with these points:
* Every problem has a solution that leaves people better off on average.
* Politics shouldn’t have winners and losers; ideally, everyone should be happy with the outcome.
I wish this were the case, but I think realistically, most solutions will leave some people worse off somehow.
For example, I think we need to entirely eliminate human use of animals. This will clearly eliminate a bunch of existing jobs (e.g., factory farm workers). We could ameliorate that by compensating those workers with cash payments. But some of them will still be unhappy, because they derived a sense of self or satisfaction from the work, and will feel worse off even with the compensation.
So I think we can find solutions that have as few economic "losers" as possible, and compensate them financially, but we probably can't fix the _emotional_ pain that people feel because of a policy change.
* If you disagree with someone about policy, it’s because you believe different things about the world, not because you value different things.
I also disagree with this. It's quite clear that people have values beyond the very basic shared ones like being healthy, having enough food to eat, shelter, etc. For example, a lot of writing about conservatives has argued that they value tradition, authority, and sanctity/purity more than liberals do.
Given that difference, it seems like there's no policy that will satisfy everyone.
I mean, it's a goal, nothing's perfect. The greatest good for the greatest number, and the least harm for the remaining--wasn't that Bentham? I don't think you can craft a policy that has no negative effects on *anyone*.
I think Ozy's discussing mistake theory as a description of what EAs believe, not as unambiguously true in all cases. I tend to lean towards conflict theory myself--people just have different interests.
#5, "Trade Can Make Everyone Better Off," is false. The truth is that total gains are larger than total losses, but some people -- the guy who was a well-liked foreman at the local factory but is too old to retrain for a new career -- end up worse off.
This also isn't taking into account the possibility of using the gains from trade to compensate the losers from trade, although in practice this can be pretty difficult to do.
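A toy numeric sketch of that compensation idea (all numbers invented for illustration): if a trade raises two winners' welfare by 10 each and lowers one loser's by 15, total gains exceed total losses, so the winners can in principle make the loser whole and still come out ahead (the textbook Kaldor-Hicks argument).

```python
# Toy Kaldor-Hicks illustration with invented numbers:
# a policy's per-person welfare changes, before and after compensation.

gains = {"consumer_a": 10, "consumer_b": 10, "foreman": -15}

total = sum(gains.values())
assert total > 0                 # total gains exceed total losses...
assert min(gains.values()) < 0   # ...yet someone is worse off (not Pareto)

# Winners transfer just enough to make the loser whole,
# splitting the cost in proportion to their gains.
loss = -gains["foreman"]
winners = {k: v for k, v in gains.items() if v > 0}
share = {k: loss * v / sum(winners.values()) for k, v in winners.items()}

after = {k: v - share.get(k, 0) for k, v in gains.items()}
after["foreman"] += loss

print(after)  # everyone now weakly better off: a Pareto improvement
```

Of course, as the comment notes, actually arranging the transfer is the hard part in practice, and no cash transfer fixes the non-monetary losses discussed above.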
"Trade Can Make Everyone Better Off" allows that not every trade results in universal improvement. Thus, naming one specific trade that creates a problem doesn't disprove the claim. You'd have to show that there are no trades in which everyone is better off.
I am not your average EA-adjacent person (much older, completely different background and ethnicity, low inclination towards utilitarianism), but EA politics is something I find off-putting, and it would be even more so if EAs didn't mostly just ignore politics. Even from a cynical point of view, there's a talk by Tyler Cowen from a few years ago in which he very presciently underscores how EA's excessively strong liberal bent alienates many people on the right who could otherwise become attracted to it.
The most redeeming aspect, though, is that they are mostly liberal in the good sense of the word, i.e., classical liberalism as you mentioned in the post (markets are good, capitalism is good even if not perfect, freedom and freedom of thought are good), as opposed to the SJW variety. I also feel they have a slightly inconsistent view of conservatism: very critical in theory, for example, but I got to Joe Henrich's books through an EA book club, and one can make the case that Henrich is the best and most updated version of Chesterton's Fence intellectually available right now.
I feel like EAs identify affectively with the left because they’re highly educated coastal young Americans (at median, of course) and that’s just the norm in that environment while not really agreeing with the actually existing American left on that much more than the actually existing American right. It’s true they like UBI which is arguably far left, but so did Milton Friedman…
This relies on the premise that they're mostly Americans. Is that true? I feel like the US has the plurality but not the majority of members, source: vibes.
Eh, expanding circles of Americans, Anglophones, and generally cosmopolitan Westerners, where I think to only slightly decreasing extents all the same points apply
Most of this seems right to me, but I'm not convinced it's at all true that Effective Altruists are particularly incrementalist. "Not demanding an immediate end to capitalism" is like, the default view in American politics, including among many self-styled socialists, so if that makes you an incrementalist, almost everyone except the revcoms is an incrementalist. But being big on economic theory and technocratic policymaking more generally has a tendency to make people look at the Rube Goldberg-esque set of political compromises that make up most real-world policymaking and go, "holy shit! this makes no sense! we need to completely re-do this," and gravitate to ideas like carbon taxes and basic income guarantees that are well outside the current Overton Window of American politics. The thing that stops these from being major EA cause areas is they mostly don't seem very tractable, but I think EAs are still probably more open to them than the average American.
I think the Georgist-Pigovian-UBI cluster, when they attack all more politically realistic ways of addressing those issues, either kid themselves about political realizability or are in bad faith (in the case of Georgism and UBI there are indeed libertarian advocates that openly advocate it as a pragmatic move toward ending all taxation and redistribution) https://thingofthings.substack.com/p/on-concern-about-technological-unemployment/comment/101209291
The elitism is by far the most emotionally off-putting feature for me. But then -- if certain politics make it plain that a movement/group is EXPLICITLY not interested in people like me, then the loss of people like me from the movement is unimportant. And there's something refreshing in a complete lack of populism or even interest in popular appeal, i.e., honestly stating that you (generic/movement) are only interested in high-IQ people in a position to generate wealth or influence, and ideally both. It might hurt egos, but it also kinda absolves one of not doing anything: instead of pretending small donations matter, you can admit that one millionaire donor is worth tens or hundreds of thousands of smaller ones. So I can keep my 10% because it's OBJECTIVELY worth very little to the causes but a lot to me :)
Yeah, I think it's genuinely difficult to figure out the right balance about elitism. On the one hand, the vast majority of EA direct work jobs only hire unusually smart, hardworking people; if you're trying to convince people to try EA direct work jobs, you want to aim for unusually smart, hardworking people, and it can be reasonable to use various (admittedly lossy) proxies like going to an elite college. On the other hand, there is something genuinely valuable about the Giving What We Can "anyone can save a life" framework, and I worry about losing it. And, of course, "someone else can save a thousand lives" is not a counterargument to "I can save one life"-- it's equally good either way! I have tended to adopt a less elitist position because I think EA is currently on the side of too much elitism, but I can see arguments in both directions.
This is one reason I'm a fan of 80,000 Hours's recent pivot into AI. As a practical fact, 99.9% of people, even in the developed world, have no meaningful way to improve the AI trajectory. When 80,000 Hours was balanced awkwardly between being an AI-focused org and a general EA careers org, this made their messaging come off as very elitist. I now recommend that people check out Probably Good. Probably Good covers all standard EA cause areas, which makes them significantly less elitist: different causes need different skillsets, and global development and *especially* animal advocacy have more of a need for normal basically competent people. (TBC, I don't disrecommend 80,000 Hours: it is still IMO the best introductory resource to AI out there.)
From a purely consequentialist pov, though, >>"someone else can save a thousand lives" is not a counterargument to "I can save one life"<< ignores the "opportunity for increasing own happiness" cost. One of my children donates a regular amount to GiveWell -- at a rate that would take him five years to "save a life". He's a student who receives full "poor people" aid and works full time in the summer at about 20% above minimum wage. I'm not suggesting he SHOULDN'T do that, but perhaps he'd be objectively (????) better off saving that (objectively meager) amount towards, idk, his imaginary/hypothetical future pension? On the other hand, if we don't look in terms of full lives, then his monthly donation covers more than one DALY. And that seems EXTREMELY effective at a cost of a few pints of beer. So optics are important here.
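For what it's worth, the arithmetic in that comment can be sketched with made-up round numbers (the donation amount, cost per life, and DALYs per life below are illustrative assumptions, not GiveWell's actual estimates):

```python
# Back-of-envelope donation arithmetic. All figures are illustrative
# assumptions chosen to match the comment's two claims, not real estimates.

monthly_donation = 90     # assumed $/month
cost_per_life = 5400      # assumed $ per life saved
dalys_per_life = 65       # assumed DALYs averted per (young) life saved

years_per_life = cost_per_life / monthly_donation / 12
dalys_per_month = monthly_donation / (cost_per_life / dalys_per_life)

print(f"~{years_per_life:.1f} years of donations per life saved")
print(f"~{dalys_per_month:.2f} DALYs averted per month")
```

Under those assumptions, both of the comment's figures hold at once: about five years of donations per "life saved," yet more than one DALY averted per month.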
This was really good. I assume you have more firsthand experience with the movement than I do. You cover a lot of the high points (mistake theory, economics, positive-sum), and even many of the weaknesses (bad at mass politics, elitism).
I think I'd try to put it in five words (cheating with the acronym) by saying 'EAs are liberals, not leftists', and that seems to be a big reason a lot of leftists don't like it. You're fine with capitalism, and increasingly the left organizes itself against that. (It always did outside of the USA.)
Not everyone is going to be a leftist, so if it gets a few people with money to do more good than they otherwise would have I think it counts as a net plus.
I agree largely with this list of differences between EA and "normie" left-liberalism, but I think the biggest one has been omitted - agency.
EAs actually do stuff! Whereas I think quite often left narratives tend to shred people's agency by blaming problems on society/power structures, in a way that convinces ordinary people that they can't do anything about the status quo except attend endless, largely ineffectual protest marches.
Left-liberals do stuff too. They run for office, lobby congresspeople, donate to candidates, raise money for battered women's shelters, and do lots of other things besides protest.
I agree the structural emphasis from intersectional leftists weakens solidarity and protest marches aren't as useful as they used to be, but I would not agree EAs have a monopoly on agency.
Honestly for all the reasons Ozy outlined I think EAs are better off influencing (or starting) things like charities run by ex-techies rather than running for office.
I appreciated the attempt to get at attitudes people hold which aren't necessarily conscious but do probably inform the more conscious things (aversion to conflict theory -> technocracy).
And some criticisms of specific aspects of administration policies aside, you'll seldom find EA-aligned political pundits who think democracy and civil liberties are worth defending; they have spent most of their energy since the election clamouring for Musk to amass as much power as possible, posting graphic fantasies of murdering trans people, and lamenting that Democrats are too mean to accelerationist tech billionaires.
There are a *large* number of people with accounts on the EA Forum. You've made 74 comments and 32 posts, while Cole only seems to have made one comment ever; it would be far more reasonable to conclude from you that EA is leftist than from Cole that EA is pro-DOGE.
The most prominent EA-aligned pundit is probably Matt Yglesias and he is vocally anti-Trump and anti-murdering-trans-people. If you look at more directly EA publications like Asterisk Magazine or Future Perfect, they have also run a bunch of articles about the USAID cuts, democratic backsliding, and other things we hate about Trump. For that matter, the most prominent EA-aligned *right-wing* pundit is Richard Hanania, who despises Trump and MAGA. Samuel Hammond wrote that execrable "the EA Case For Trump" post but he doesn't even identify as an EA!
I understand that you hate EA politics. That's fair! EA is definitely not leftist. But EAs are simply not pro-Trump, pro-DOGE, or into murdering trans people.
1/ Cole's presence is only illustrative of a broader point. It's hard to dispute that Musk himself (and other pro-Trump SV billionaires like a16z, Thiel, etc.) is a pure product of Bay Area techno-libertarianism, and the 'EA view of politics' is Bay Area techno-libertarianism.
I think that's a bad post but it is obviously not a graphic fantasy of murdering trans people, unless Civil Liberties and Gun Control are also people whom you think can be murdered. What he's saying is "Democrats should adopt less pro-trans positions in order to defeat Trump." It's a bad post because it comes off as triumphalist and sneering and doesn't acknowledge that the tradeoffs here are tradeoffs.
I don't know if Kamala Harris would have won over some swing voters if she'd said "trans women shouldn't play in women's sports" or "children shouldn't transition", but if she would have, she should do that, because Trump wants to become an authoritarian dictator and that is actually more important than any trans rights short of us not being put in concentration camps.
Would you have said the same if it was a card-carrying alt-right militant posting a meme of himself over a pile of bodies labeled "LGBTQ++", just because there were other labels on other piles of bodies? (Because Wokeness and Cultural Marxism aren't also people who can be murdered.)
Because authoritarian takeovers always rely on the tacit support of complacent moderates who just think they can be more successful incorporating far-right demands in their program and dismissing bigotry toward minorities as "not so violent or genuine as it sounded". May I recommend Johann Chapoutot's work? (There's a reason the influence of liberal parties collapsed in Europe after WWII and it's not just the Keynesian revolution.)
The people being run over are not "LGBTQ++" any more than they are "Gun Control" or "Civil Liberties". They are metaphorical representations of abstract general issues in a political cartoon, not representations of individuals or groups of individuals.
Do not provide one isolated fact, then, when someone points out it's an isolated fact, say "well, it's illustrative of a broader point" (even though it really isn't) while not providing sufficient evidence for that broader point.
1. I don't think they're exactly the same thing. You can have multiple divergent ideologies that grow from the same root; socialism grew out of Enlightenment liberalism but is definitely its own thing now. The EAs seem much more interested in the common good than, say, Musk.
2. Ozy's done a better refutation than I can, but I'd agree that isn't really a graphic fantasy of murdering trans people (though I agree the language is off-putting) or a lack of interest in democratic backsliding; it's the same sort of dissembling political strategy Ozy specifically says EAs are bad at. (And which is necessary, IMHO, for real politics.)
Answered elsewhere on 1. On 2, I'm not taking concerns about strategy from someone who was very influential on the administration that turned the 2020 blue wave into the Dems losing the popular vote for the first time in two decades, to a felon who attempted a coup. Dems seizing on the Garcia case was an unequivocal success that turned popular approval of Trump's immigration policy negative; meanwhile, Yglesias is now talking about how the US constitution should be amended to end birthright citizenship. And it's bizarre to say "this isn't really [...] lack of interest in democratic backsliding" when Ozy's own 'refutation' of it being a 4chan-tier bigoted meme is that "civil liberties" is also under the bus. At some point one should just adopt the principle of revealed preference and note that if Yglesias always calls for the farthest-right position on every subject imaginable, it's probably because he sincerely believes in it.
All right, I'll certainly agree Yglesias isn't as smart as he thinks he is (though the Dems made a bunch of other unforced errors too, most prominently failing to get Biden to resign). I just don't think he wants to kill trans people or have a fascist government; he's a center-righty who doesn't like the left and wants a centrist Democrat government.
I'm sure Franz von Papen "didn't want to kill Jews or have a fascist government" and was just "a center-righty who doesn't like the left and wants a Zentrum government", but I don't think it's relevant when the thing at stake here isn't "will the Democrats get 10% more chance to win a given election a given year" but "will there still be civil liberties, rule of law, and fair elections regardless of which party is in power in the coming decades".
When it comes to pundits, most EAs are pretty pro-PEPFAR from what I can tell; Scott Alexander devoted a whole post to attacking the DOGE arguments, and as Ozy says, Yglesias is not pro-DOGE.
I tend to think people engage in motivated reasoning to find reasons for things they want to do anyway, and the right's been trying to destroy the (already weak) American welfare state since...I don't know, the 1930s when it was started? Usually the excuse is some sort of efficiency, but at least in part businesses want to impoverish their workers so they can have more power over them.
From what I can tell of EA they're even more pro-trans than anyone short of active LGBT activists (and I'm sure the groups overlap). You might have a couple rogue EAs but they mostly seem to be worried about all the science getting canceled by DOGE.
Since when is it not? It's heavily based in GMU-strand libertarian economics and Silicon Valley interests, with people from the Bay Area and the tech industry being heavily overrepresented in EA. This seems very hard to deny per Ozy's own articles.
I mean, it's based in it, but they seem to have taken it in a different direction. If anything it represents a more benign turn of technolibertarianism toward trying to improve the world through nonprofits rather than something like DOGE.
A lot of this seems to boil down to 'People from the Bay Area and techies are involved, so it's bad'. I agree they're kind of smug, but as long as there's charity, might as well be efficient about it.
I think you'll need a strong argument to claim that the ideological distance between the median EA and the median Bay Area techno-libertarian is as high as the ideological distance between Lenin and Hayek.
> Former EA here, I think EA is worse than you think. Elon Musk used to be on the "EA people page" before I took him down with a comment explaining why, which got downvoted so much I had to ask friends to upvote it. Musk got praised/promoted so much by EA, and in fact it was MacAskill that was the liaison between SBF and Musk when it came to trying to set them up to jointly buy twitter. Which is a platform that Musk now uses to ban and suppress journalists and leftwing thinkers, while also promoting far-right conspiracy theories that directly aided Trump's election (and promoting MacAskill's book). Not to mention people in the EA-community giving a platform to Curtis Yarvin, whose "philosophy" formed the basis for project 2025.
> I don't think EA has been a net negative in the world, but you can buy malaria nets without supporting a movement with such a... let's call it 'mixed track record'. Despite how it's sometimes presented, EA didn't invent developmental economics, nor does it belong to them. Rejecting EA doesn't mean rejecting developmental economics or analyzing interventions, and I think you can be much more effective as an altruist by not being part of the "Effective Altruism" movement.
Thiel and Musk both spoke at EA events in the past. Musk has approvingly tweeted about both WWOTF (though MacAskill kinda-sorta disavowed the approval in one of his Future Perfect interviews) and Abundance. Noah Smith has been calling since the election for Musk to amass power within the administration (even before the election, he lambasted Democrats for criticizing Trump over his Musk ties), most recently calling for him to be Secretary of Defense. Yglesias has called to bring back Musk into the Democratic fold as soon as he had the slightest public dispute with the Don (which is, like, hilariously short-sighted on even the most unprincipled pragmatic grounds for obvious reasons).
Bay Area techno-libertarian ideology is in all likelihood the main reason why EAs 1/ are so out of sync with institutional development economists' views on global health and development (see https://bobjacobs.substack.com/p/is-effective-altruism-neocolonial, and also the fact that there is a "charter cities" tag on the EA Forum (a Thiel-backed NRx-adjacent idea) but no "land reform" or "industrial policy" tag) 2/ spend basically nothing on combating democratic backsliding, defending civil liberties, environmental causes, and short-term AI ethics 3/ kickstarted an AI arms race in the name of avoiding it.
I do NOT believe the ideological distance between the median EA and the median Bay Area technolib is anywhere *near* that between Lenin and Hayek; it's obviously much smaller! I was thinking more like the distance between Lenin and Kerensky, or Hayek and Ezra Klein. They're related, but it's not quite the same thing.
I'm not *that* familiar with these pundits, so I went to Noah Smith's substack, and from a brief skim he seems to be very upset about Trump on immigration and tariffs. Yglesias seems to want to take back the Senate and seems to argue DOGE is FOS (https://www.slowboring.com/p/yes-doge-failed-and-it-matters), though the guy paywalls everything so it's hard to dig too deep. It seems like he made a dumb argument about Musk, but his major thing seems to be getting the Democrats back into power.
Are Thiel and Musk still well-liked within the EA movement, though, now that DOGE killed PEPFAR? I guess that would be the thing to me.
Still, you do seem to have inside knowledge of the movement I wasn't aware of; I've been to the Bay Area once in my life. I had the impression they had started to move away from that now, but maybe not.
I should clarify while I may be defending them here, I'm not EA; I'm a disgruntled ex-liberal who's much, much more cynical. (Big conflict theory person, for instance.) I just think their ideas are good for charity and think they got a bad rap after the SBF thing; I'm very skeptical about a lot of the AI paperclip stuff and find shrimp welfare risible. (Full disclosure: I'm also very skeptical of leftist ideas and the whole 'if you disagree with me on this one issue you're the enemy' framing.)
So, I'm slightly confused. Wouldn't authoritarianism be more responsible for DOGE and its particular destructive methodologies than technolibertarianism?
Because the people involved mostly identify as libertarians and seek to achieve traditional right-libertarian goals like slashing government spending and the public sector and increasing the influence of tech elites? Democracy has never been an important goal in the libertarian tradition, and people like Thiel explicitly oppose it on libertarian grounds.
As Yoram Bauman once put it, "trade can make everyone better off" implies that "trade can make everyone worse off."
https://youtu.be/7u6Os0TDR6o?si=PMa7F69ag6daJESR&t=208
Normally that would require someone to agree to a trade that is not in their interest or plain old bad luck (such as a shipwreck destroying goods).
He was making a joke, but the example he cited in the video involves negative externalities (pollution) that outweigh the benefits of the trades.
Yeah. Presumably the people making the trade are better off, though; it's everyone else who gets screwed.
That makes sense.
From a purely consequentialist pov, though, >>"someone else can save a thousand lives" is not a counterargument to "I can save one life"<< ignores the "opportunity for increasing own happiness" cost. One of my children donates a regular amount to GiveWell -- at a rate that would take him five years to "save a life". He's a student who receives full "poor people" assistance and works full time in the summer at 20% above minimum wage. I'm not suggesting he SHOULDN'T do that, but perhaps he'd be objectively (????) better off saving that (objectively meager) amount towards, idk, his imaginary/hypothetical future pension? On the other hand, if we don't look in terms of full lives, then his monthly donation covers more than one DALY. And that seems EXTREMELY effective at a cost of a few pints of beer. So optics are important here.
One life per five years is pretty good numbers, no?
This was really good. I assume you have more firsthand experience with the movement than I do. You cover a lot of the high points (mistake theory, economics, positive-sum), and even many of the weaknesses (bad at mass politics, elitism).
I think I'd try to put it in five words (cheating with the acronym) by saying 'EAs are liberals, not leftists', and that seems to be a big reason a lot of leftists don't like it. You're fine with capitalism, and increasingly the left organizes itself against that. (It always did outside of the USA.)
Not everyone is going to be a leftist, so if it gets a few people with money to do more good than they otherwise would have I think it counts as a net plus.
I agree largely with this list of differences between EA and "normie" left-liberalism, but I think the biggest one has been omitted - agency.
EAs actually do stuff! Whereas I think quite often left narratives tend to shred people's agency by blaming problems on society/power structures, in a way that convinces ordinary people that they can't do anything about the status quo except attend endless, largely ineffectual protest marches.
Left-liberals do stuff too. They run for office, lobby congresspeople, donate to candidates, raise money for battered women's shelters, and do lots of other things besides protest.
I agree the structural emphasis from intersectional leftists weakens solidarity and protest marches aren't as useful as they used to be, but I would not agree EAs have a monopoly on agency.
Honestly for all the reasons Ozy outlined I think EAs are better off influencing (or starting) things like charities run by ex-techies rather than running for office.
I appreciated the attempt to get at attitudes people hold which aren't necessarily conscious but do probably inform the more conscious things (aversion to conflict theory -> technocracy)
According to Dustin Moskovitz (https://forum.effectivealtruism.org/posts/mopsmd3JELJRyTTty/ozzie-gooen-s-shortform#duosAooDFq5xviyZz) at least one (and maybe two) EAs have joined DOGE. That would put Bay Area techno-libertarianism (or "the EA view of politics") as one of the most destructive ideologies of the millennium so far, with a predicted death toll of several million in the coming decade.
And some criticisms of specific aspects of administration policies aside, you'll seldom find EA-aligned political pundits who think democracy and civil liberties are worth defending; they have spent most of their energy since the election clamouring for Musk to amass as much power as possible, posting graphic fantasies of murdering trans people, and lamenting that Democrats are too mean to accelerationist tech billionaires.
There are a *large* number of people with accounts on the EA Forum. You've made 74 comments and 32 posts, while Cole only seems to have made one comment ever; it would be far more reasonable to conclude from you that EA is leftist than from Cole that EA is pro-DOGE.
The most prominent EA-aligned pundit is probably Matt Yglesias and he is vocally anti-Trump and anti-murdering-trans-people. If you look at more directly EA publications like Asterisk Magazine or Future Perfect, they have also run a bunch of articles about the USAID cuts, democratic backsliding, and other things we hate about Trump. For that matter, the most prominent EA-aligned *right-wing* pundit is Richard Hanania, who despises Trump and MAGA. Samuel Hammond wrote that execrable "the EA Case For Trump" post but he doesn't even identify as an EA!
I understand that you hate EA politics. That's fair! EA is definitely not leftist. But EAs are simply not pro-Trump, pro-DOGE, or into murdering trans people.
1/ Cole's presence is only illustrative of a broader point. It's hard to dispute that Musk himself (and other pro-Trump SV billionaires like a16z, Thiel, etc.) is a pure product of Bay Area techno-libertarianism, and the 'EA view of politics' is Bay Area techno-libertarianism.
2/ Yglesias is literally the one I had in mind when I talked about EA pundits posting graphic fantasies of murdering trans people. https://archive.is/oO81u https://pbs.twimg.com/media/GZNZwF-XoAAEMXQ?format=jpg&name=orig He also criticized Democrats for caring too much about democratic backsliding and civil liberties at WelcomeFest. https://x.com/daveweigel/status/1930320926270275883
I think that's a bad post but it is obviously not a graphic fantasy of murdering trans people, unless Civil Liberties and Gun Control are also people whom you think can be murdered. What he's saying is "Democrats should adopt less pro-trans positions in order to defeat Trump." It's a bad post because it comes off as triumphalist and sneering and doesn't acknowledge that the tradeoffs here are tradeoffs.
I don't know if Kamala Harris would have won over some swing voters if she'd said "trans women shouldn't play in women's sports" or "children shouldn't transition", but if she would have, she should do that, because Trump wants to become an authoritarian dictator and that is actually more important than any trans rights short of us not being put in concentration camps.
Would you have said the same if it was a card-carrying alt-right militant posting a meme of himself over a pile of bodies labeled "LGBTQ++" if there were other labels on other piles of bodies? (because Wokeness and Cultural Marxism aren't also people who can be murdered)
Yes. That is also not a fantasy about murdering trans people. Come on, aren't there enough people who want us dead? Why do you have to make some up?
Because authoritarian takeovers always rely on the tacit support of complacent moderates who just think they can be more successful incorporating far-right demands in their program and dismissing bigotry toward minorities as "not so violent or genuine as it sounded". May I recommend Johann Chapoutot's work? (There's a reason the influence of liberal parties collapsed in Europe after WWII and it's not just the Keynesian revolution.)
The people being run over are not "LGBTQ++" any more than they are "Gun Control" or "Civil Liberties". They are metaphorical representations of abstract general issues in a political cartoon, not representations of individuals or groups of individuals.
Do not provide one isolated fact and when someone points out it's an isolated fact say "well it's illustrative of a broader point" (even though it really isn't) and then not provide sufficient evidence for that broader point.
It's Bay Area techno-libertarianism and not EA which I called destructive?
1. I don't think they're exactly the same thing. You can have multiple divergent ideologies that grow from the same root; socialism grew out of Enlightenment liberalism but is definitely its own thing now. The EAs seem much more interested in the common good than, say, Musk.
2. Ozy's done a better refutation than I can, but I'd agree that isn't really a graphic fantasy of murdering trans people (though I agree the language is off-putting) or a lack of interest in democratic backsliding; it's the same sort of dissembling political strategy Ozy specifically says EAs are bad at. (And is necessary IMHO for real politics.)
Answered elsewhere on 1. On 2, I'm not taking concerns about strategy from someone who was very influential on the administration that turned the 2020 blue wave into the Dems losing the popular vote for the first time in two decades, to a felon who attempted a coup. Dems seizing on the Garcia case was an unequivocal success that turned popular approval of Trump's immigration policy negative; meanwhile, Yglesias is now talking about how the US constitution should be amended to end birthright citizenship. And it's bizarre to say "this isn't really [...] lack of interest in democratic backsliding" when Ozy's own 'refutation' of it being a 4chan-tier bigoted meme is that "civil liberties" is also under the bus. At some point one should just adopt the principle of revealed preference and note that if Yglesias always calls for the farthest-right position on every subject imaginable, it's probably because he sincerely believes in it.
All right, I'll certainly agree Yglesias isn't as smart as he thinks he is (though the Dems made a bunch of other unforced errors too, most prominently failing to get Biden to resign). I just don't think he wants to kill trans people or have a fascist government; he's a center-righty who doesn't like the left and wants a centrist Democrat government.
I'm sure Franz von Papen "didn't want to kill Jews or have a fascist government" and was just "a center-righty who doesn't like the left and wants a Zentrum government", but I don't think it's relevant when the thing at stake here isn't "will the Democrats get 10% more chance to win a given election a given year" but "will there still be civil liberties, rule of law, and fair elections regardless of which party is in power in the coming decades".
When it comes to pundits, most EAs are pretty pro-PEPFAR from what I can tell; Scott Alexander devoted a whole post to attacking the DOGE arguments, and as Ozy says, Yglesias is not pro-Trump.
I tend to think people engage in motivated reasoning to find reasons for things they want to do anyway, and the right's been trying to destroy the (already weak) American welfare state since...I don't know, the 1930s when it was started? Usually the excuse is some sort of efficiency, but at least in part businesses want to impoverish their workers so they can have more power over them.
From what I can tell of EA they're even more pro-trans than anyone short of active LGBT activists (and I'm sure the groups overlap). You might have a couple rogue EAs but they mostly seem to be worried about all the science getting canceled by DOGE.
Since when was EA politics Bay Area techno-libertarianism?
Since when is it not? It's heavily based in GMU-strand libertarian economics and Silicon Valley interests, with people from the Bay Area and the tech industry heavily overrepresented in EA. This seems very hard to deny per Ozy's own articles.
I mean, it's based in it, but they seem to have taken it in a different direction. If anything it represents a more benign turn of technolibertarianism toward trying to improve the world through nonprofits rather than something like DOGE.
A lot of this seems to boil down to 'People from the Bay Area and techies are involved, so it's bad'. I agree they're kind of smug, but as long as there's charity, might as well be efficient about it.
I think you'll need a strong argument to claim that the ideological distance between the median EA and the median Bay Area techno-libertarian is as high as the ideological distance between Lenin and Hayek.
Quoting @Bob Jacobs' comment on another article here (https://thingofthings.substack.com/p/you-can-buy-a-malaria-net/comment/75880301)
> Former EA here, I think EA is worse than you think. Elon Musk used to be on the "EA people page" before I took him down with a comment explaining why, which got downvoted so much I had to ask friends to upvote it. Musk got praised/promoted so much by EA, and in fact it was MacAskill that was the liaison between SBF and Musk when it came to trying to set them up to jointly buy twitter. Which is a platform that Musk now uses to ban and suppress journalists and leftwing thinkers, while also promoting far-right conspiracy theories that directly aided Trump's election (and promoting MacAskill's book). Not to mention people in the EA-community giving a platform to Curtis Yarvin, whose "philosophy" formed the basis for project 2025.
> I don't think EA has been a net negative in the world, but you can buy malaria nets without supporting a movement with such a... let's call it 'mixed track record'. Despite how it's sometimes presented, EA didn't invent developmental economics, nor does it belong to them. Rejecting EA doesn't mean rejecting developmental economics or analyzing interventions, and I think you can be much more effective as an altruist by not being part of the "Effective Altruism" movement.
Thiel and Musk both spoke at EA events in the past. Musk has approvingly tweeted about both WWOTF (though MacAskill kinda-sorta disavowed the approval in one of his Future Perfect interviews) and Abundance. Noah Smith has been calling since the election for Musk to amass power within the administration (even before the election, he lambasted Democrats for criticizing Trump over his Musk ties), most recently calling for him to be Secretary of Defense. Yglesias called to bring Musk back into the Democratic fold as soon as he had the slightest public dispute with the Don (which is, like, hilariously short-sighted on even the most unprincipled pragmatic grounds, for obvious reasons).
Bay Area techno-libertarian ideology is in all likelihood the main reason why EAs 1/ are so out of sync with institutional development economists' views on global health and development (see https://bobjacobs.substack.com/p/is-effective-altruism-neocolonial, and also the fact that there is a "charter cities" tag on the EA Forum (a Thiel-backed NRx-adjacent idea) but no "land reform" or "industrial policy" tag) 2/ spend basically nothing on combating democratic backsliding, defending civil liberties, environmental causes, and short-term AI ethics 3/ kickstarted an AI arms race in the name of avoiding it.
I do NOT believe the ideological distance between the median EA and the median Bay Area technolib is anywhere *near* that between Lenin and Hayek; it's obviously much smaller! I was thinking more like the distance between Lenin and Kerensky, or Hayek and Ezra Klein. They're related, but it's not quite the same thing.
I'm not *that* familiar with these pundits, so I went to Noah Smith's substack, and from a brief skim he seems to be very upset about Trump on immigration and tariffs. Yglesias seems to want to take back the Senate and argues that DOGE is FOS (https://www.slowboring.com/p/yes-doge-failed-and-it-matters), though the guy paywalls everything so it's hard to dig too deep. It seems like he made a dumb argument about Musk, but his major thing seems to be getting the Democrats back into power.
Are Thiel and Musk still well-liked within the EA movement, though, now that DOGE killed PEPFAR? I guess that would be the thing to me.
Still, you do seem to have inside knowledge of the movement I wasn't aware of; I've been to the Bay Area once in my life. I had the impression they had started to move away from that now, but maybe not.
I should clarify while I may be defending them here, I'm not EA; I'm a disgruntled ex-liberal who's much, much more cynical. (Big conflict theory person, for instance.) I just think their ideas are good for charity and think they got a bad rap after the SBF thing; I'm very skeptical about a lot of the AI paperclip stuff and find shrimp welfare risible. (Full disclosure: I'm also very skeptical of leftist ideas and the whole 'if you disagree with me on this one issue you're the enemy' framing.)
So, I'm slightly confused. Wouldn't authoritarianism be more responsible for DOGE and its particular destructive methodologies than technolibertarianism?
Because the people involved mostly identify as libertarians and seek to achieve traditional right-libertarian goals like slashing government spending and the public sector and increasing the influence of tech elites? Democracy has never been an important goal in the libertarian tradition, and people like Thiel explicitly oppose it on libertarian grounds.