Discussion about this post

Sheila

People who ignore regular ethics in favor of Pascal's-wager-style ethics (e.g., "it doesn't matter how I treat people alive today, because someday much larger numbers of people have a chance to be much happier") annoy me more than almost anything else. If that reasoning works, it's a complete disproof of utilitarianism, because I can intuit that it's bad morals no matter how you make the numbers come out. But I don't think it does work, for the reasons you mention. Nobody actually has that much certainty about the future. You don't even know that future is possible at all!

I don't think there is much chance AI will destroy us all, possibly because I don't live in the Bay Area surrounded by people who do think that. Or possibly because my brain is so constructed that it prefers predictions grounded in what has happened before, the kind we can actually make from past data. I feel like long before it could kill the last human, there would no longer be sufficient infrastructure to support it. (Have you read Service Model? It's so good. First plausible-sounding AI apocalypse I have ever read.)

However, I do think it could make life a lot worse, and for some of us it already is. That's a less exciting kind of prediction, but still one that calls for some regulation. "Infrastructure breaks down so much we can't run the data centers" is already a nightmare scenario. Though perhaps not nightmarish enough to motivate some people, trained as they are on either "total human extinction" or "sextillions of people experiencing megabliss."

Jake Mendel

I work in AI safety, so I'm clearly sympathetic to the gist of your argument, but I think there is a steelman of the position "choosing to work in AI safety is a Pascal's mugging": even if x-risk is not vanishingly small, the amount that a typical person can influence x-risk is vanishingly small.

