16 Comments

I strongly agree, obviously (https://ealifestyles.substack.com/p/i-dont-want-to-talk-about-ai)

As an added incentive, if people were less annoying about it, I'd be more open to reading about AI!

This... does not seem epistemically virtuous. I can't see a good justification for choosing to be uninformed out of spite. Sometimes annoying people are nonetheless correct.

Did you follow the link? I laid out my reasons - you might not agree with them, but I can guarantee you spite isn't on the list.

I was referring specifically to your last sentence, not the article.

I think EALifestyles was being objective about the likelihood of their future self finding the spoons to read about AI conditional on how annoying its advocates are, not making a conscious precommitment to not read about AI in order to punish the advocates for being annoying.

May 30, 2023 · Liked by Ozy Brennan

The phrase I've heard has been "current rate no Singularity" rather than "present".

I know that this is not the main point of the article but

> our community and everyone you care about thinks your life work is pointless!

I know basically nothing about Wild Animal Suffering, but whenever I am sad about plans to cull animals in my city because of ways they have inconvenienced humans, I feel comforted that there are some people who are doing work to learn how we can improve the lives of wild animals.

If you're tired of AI discussion now, these next 5 years are going to really suck for you.

Or, we could just be uncertain.

This isn't relevant to the core theme of the article, but I figured I'd mention that I don't think a singularity is necessary for AI to have a large impact on the job market. Even if we imagine that superintelligence is impossible, and perhaps even that a single sudden "AGI is here" moment is impossible, it could still be the case that all existing jobs are methodically automated out of existence over the course of the next 10-20 years.

So I hear the frustration. But also, isn't it relevant what's true? Like maybe the norm should be 50% it's weird, 50% it isn't.

I get the desire not to talk about death or be pushed on AI, but those feel different from wanting to have inaccurate views about the future. Like if it all does go weird, I'll be glad that everyone talked about it so much.

It depends on whether everyone is on the same page. If you're living in the US during the height of the Cold War, where there's a significant chance you could die within the next week, you'd still want the ability to talk about things further out than that with your neighbors. It's a waste of time to append "assuming we're still alive" to the beginning of every conversation when people can just take it as an assumption.

Now, if some of your neighbors are in denial about the risk and believe there's zero chance of nuclear war, then it would probably be worth trying to have a serious talk with them about that. But it wouldn't be productive for you and everyone else at the neighborhood parties to just bulldoze over their objections every time they're brought up. That's not going to make them listen to reason; it's going to either change their mind through social pressure or cause them to double down and resent you.

It seems like the crux is that people with long timelines don't in fact think that short timelines are true, and don't want to spend every possible conversation on relitigating this question. Time and place. It's not like they're going "Yes, obviously we're all going to die soon, but it's rude to bring it up!"

Personally, if it all does go weird I don't actually think I will have been glad to spend additional time talking about it so much. 'It all goes weird' doesn't give me a relevant action space.

Tangentially relevant: The original Sequences make the case for a unified/expected utility maximizer approach to rationality, but recently on LessWrong there's been some opposition to that:

https://www.lesswrong.com/posts/3xF66BNSC5caZuKyC/why-subagents

https://www.lesswrong.com/s/4hmf7rdfuXDJkxhfg
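
(For anyone who hasn't run into the term: a toy sketch of what an "expected utility maximizer" looks like in code, under the unified-agent picture the Sequences argue for. The action names, probabilities, and utilities below are invented for illustration; the linked posts are arguing against treating agents this simply, not endorsing this snippet.)

```python
# Toy illustration only: a "unified" agent scores each action by its
# expected utility and picks whichever action scores highest.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one action."""
    return sum(p * u for p, u in outcomes)

# Hypothetical actions with made-up outcome distributions.
actions = {
    "option_a": [(0.6, 10), (0.4, -5)],  # 0.6*10 + 0.4*(-5) = 4.0
    "option_b": [(0.9, 4), (0.1, 0)],    # 0.9*4  + 0.1*0    = 3.6
}

best = max(actions, key=lambda a: expected_utility(actions[a]))
print(best, expected_utility(actions[best]))  # option_a 4.0
```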

There's a pretty good novel (name in rot13 to avoid accidental spoilers: Gbepufuvc gevybtl) where the AI apocalypse is in fact prevented by Bay Area NIMBYs who regulate the AI gods into not interfering. (Until the last one dies of natural causes and they foom across ten worlds.)
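
(If you'd rather not decode it by hand, a minimal Python snippet will do it; the title stays hidden unless you actually run it.)

```python
# Decode the rot13'd title using Python's built-in rot_13 codec.
import codecs

print(codecs.decode("Gbepufuvc gevybtl", "rot13"))
```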

I have short timelines, but agree that this is a good idea. Should be quick and easy to talk about other things happening >5 years out.
