5 Comments
John Quiggin

I get the first two. I totally don't understand why effective altruists should regard AI (rather than catastrophic climate change or nuclear holocaust) as the biggest threat to humanity. This linkage has been a disaster, certainly in PR terms.

Greg

The usual EA view is that climate change will be really bad, but probably won't be a civilization ender.

As for AI over nuclear war, I think this is a neglectedness thing. Governments already seem to care a lot about nuclear war, and any solution there is likely to be political. On the other hand, at least theoretically, someone could solve the AI problem tomorrow if they came up with the right algorithm.

John Quiggin

I understand the arguments, but this reasoning has nothing to do with Effective Altruism as a philosophical position. I could advocate or reject it on the basis of almost any philosophical viewpoint.

The result of making it part of the EA political program is to cost EA support among people who don't agree with the view on AI, while highlighting the influence of proponents of this view (Bankman-Fried, MacAskill, etc.), who are also widely disliked for other reasons.

I would not be surprised to learn most EA people prefer science fiction to Shakespeare (I do). But it would be stupid to make this a test of EA orthodoxy or to anoint SF writers as representatives.

Greg

I don't see how the reasoning has nothing to do with EA philosophy. EA is, at least in part, about taking weird ideas seriously. Being worried about AI is weirder than worrying about nuclear war or climate change. That's why it's a neglected problem that isn't already being taken care of by governments the way nuclear weapons and climate change are.

I get that you are worried about PR. But there is no real EA orthodoxy. You can believe AI might have bad effects on civilization or not, and put money toward world hunger instead. There is often discussion of whether EA should split into animal EA, global poverty EA, catastrophic risk EA, etc. The problem is that we would expect to end up with 3 or 4 different conferences with 70% to 90% of the same people at each.

EA is, fundamentally, the view that reality doesn't care whether your cause is weird. So it's hard to separate the global catastrophic risk crowd from the solving-poverty crowd, because they are often the same people. If we expected there might soon be a physics breakthrough that would stop nuclear weapons, you would expect a lot of EA money to go to that.

Not-Toby

Appreciate this post.

Honestly, I’ve come to think of EA as a more than anything else.
