16 Comments

I strongly agree, obviously (https://ealifestyles.substack.com/p/i-dont-want-to-talk-about-ai)

As an added incentive, if people were less annoying about it, I'd be more open to reading about AI!

The phrase I've heard has been "current rate no Singularity" rather than "present".

I know that this is not the main point of the article, but:

> our community and everyone you care about thinks your life work is pointless!

I know basically nothing about Wild Animal Suffering, but whenever I am sad about plans to cull animals in my city because of ways they have inconvenienced humans, I feel comforted that there are some people doing work to learn how we can improve the lives of wild animals.

If you're tired of AI discussion now, these next 5 years are going to really suck for you.

Or, we could just be uncertain.

This isn't relevant to the core theme of the article, but I figured I'd mention that I don't think a singularity is necessary for AI to have a large impact on the job market. Even if we imagine that superintelligence is impossible, and perhaps even that a single sudden "AGI is here" moment is impossible, it could still be the case that all existing jobs are methodically automated out of existence over the course of the next 10-20 years.

So I hear the frustration. But also, isn't it relevant what's true? Like maybe the norm should be 50% "it's weird", 50% "it isn't".

I get the desire not to talk about death or be pushed on AI, but those feel different from wanting to have inaccurate views about the future. Like if it all does go weird, I'll be glad that everyone talked about it so much.

Tangentially relevant: The original Sequences make the case for a unified/expected utility maximizer approach to rationality, but recently on LessWrong there's been some opposition to that:

https://www.lesswrong.com/posts/3xF66BNSC5caZuKyC/why-subagents

https://www.lesswrong.com/s/4hmf7rdfuXDJkxhfg

There's a pretty good novel (name in rot13 to avoid accidental spoilers: Gbepufuvc gevybtl) where the AI apocalypse is in fact prevented by Bay Area NIMBYs who regulate the AI gods into not interfering. (Until the last one dies of natural causes and they foom across ten worlds.)
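
For anyone who wants to decode that title without hunting down a rot13 tool: rot13 shifts each letter 13 places around the 26-letter alphabet, so applying the same shift twice is a round trip, and one function both encodes and decodes. A minimal sketch in Python (the function name is my own):

```python
def rot13(s: str) -> str:
    """Shift each letter 13 places; rot13 is its own inverse."""
    out = []
    for ch in s:
        if ch.isalpha():
            base = ord("a") if ch.islower() else ord("A")
            out.append(chr((ord(ch) - base + 13) % 26 + base))
        else:
            out.append(ch)  # leave spaces and punctuation untouched
    return "".join(out)
```

Running it on the encoded title above decodes it; running it again re-encodes.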

I have short timelines, but agree that this is a good idea. It should be quick and easy to talk about other things happening >5 years out.
