Neither of those is the objection I would make. Saying that we'll all be rich assumes some sort of economic equilibrium, but this is hardly reassuring when the time it takes to reach equilibrium easily exceeds the human lifespan. This is precisely the attitude that Keynes was criticizing when he said that in the long run we're all dead.
I would gesture at how technology replacing jobs is an obvious source of inequality. It hurts people whose income comes from labor, while benefiting those whose income comes from capital. AI advocates are generally in favor of correcting for this with UBI or some similar policy. But I think they underrate just how much of an uphill battle that will be, how much more difficult it will be than creating the technology itself. One of the ways to fight for UBI is to complain very loudly that AI is going to replace jobs.
UBI supporters remind me of Georgists in that they are supposed "technocratic centrists" who hold up a policy proposal which is theoretically perfect but which would effectively require communist takeover to enact, and then only ever talk about it to shut down any discussion of less mathematically perfect but more politically realistic proposals for wealth redistribution and economic regulation. So they are either completely disconnected from political reality (while smugly portraying themselves as technocratic centrists who understand it better than anyone else), or completely insincere and really just fanatical believers in laissez-faire capitalism and opponents of any tax or regulation.
It's a weird bizarro universe version of "non-reformist reforms".
Yeah like. You can't uphold a radical position theoretically, make no effort to achieve it, and then say "we don't need to worry about it because it'll happen." It won't happen! It especially won't happen if the only time you bring it up is to shut down conversation about the extremely serious reasons we need it!
The Georgist-sympathetic people I know are mostly incrementalists. For that matter, the plurality of UBI-sympathetic people I know also want an incrementalist approach to it.
Incrementally reaching a policy which would effectively require communist takeover to enact (because you're effectively talking about expropriating 2/3 of national wealth) still requires at least incrementally reaching communist takeover¹! And the whole Georgist-UBI-/r/neoliberal-EA cluster is generally not aligned with those political factions that seek to restrain rich people's political influence, to say the least.
¹: I'm saying "at least" because "incrementally reaching communist takeover" is, er, probably not possible. It was (in)famously tried in Sweden (see https://www.peoplespolicyproject.org/2017/11/16/a-plan-to-win-the-socialism-sweden-nearly-achieved/, https://matricejacobine.tumblr.com/post/632250423331405824/in-1943-there-was-a-piece-published-by-a-polish), and failed because rational expectations is probably truer for political economy than it ever was for macroeconomics.
really? at what % taxes do we hit communist takeover?
All we need to do is redirect tax funds to UBI, then gradually increase taxes and/or get the government started on making money.
> All we need to do is redirect tax funds to UBI, then gradually increase taxes and/or get the government started on making money.
So... all we need to do is defund the current state machinery, gradually increase taxes until all capital income goes into the hands of the state, and/or abolish central bank independence, adopt radical post-Keynesian policies, and be expelled from all international free trade treaties for all of the above. Doesn't sound very compatible with incrementalist technocratic centrism to me.
Yeah, no, UBI would not require communist takeover, any more than the EITC does.
I don’t really follow the claim that you’d need a revolution to implement UBI. It seems like a pretty natural increment to the existing social democratic welfare state to me:
https://www.wolframalpha.com/input?i=%241000+per+month+per+person+*+US+adult+population+%2F+US+government+spending
so... 20% less than what we're spending now?
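The back-of-the-envelope arithmetic behind that Wolfram Alpha query can be sketched directly. Note the specific figures here (US adult population, annual federal spending) are assumed round numbers for illustration, not values taken from the thread:

```python
# Rough sketch of the "$1000/month per US adult vs. government spending" comparison.
# Both constants below are assumptions chosen to match the era of the comment.
adult_population = 258e6      # assumed US adult population
ubi_per_month = 1_000         # dollars per adult per month
federal_spending = 4.0e12     # assumed annual US federal spending (pre-2020 scale)

annual_ubi_cost = ubi_per_month * 12 * adult_population
ratio = annual_ubi_cost / federal_spending

print(f"Annual UBI cost: ${annual_ubi_cost / 1e12:.2f} trillion")
print(f"Share of federal spending: {ratio:.0%}")
```

Under these assumptions the total comes out to roughly $3.1 trillion a year, i.e. around 20% less than the assumed spending figure, which is presumably where the "20% less" reading comes from.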
I didn't know UBI supporters supported the abolition of all other government functions including the military, the police, the courts, and, er, the IRS itself presumably?
That was an insanely annoying response, bye
Another possible Bad Future is that the wealth of AI ends up in the hands of a small number of rich dictators and the rest of us are fucked because all the *armies* are AI-controlled-robots instead of humans.
https://qz.com/185945/drones-are-about-to-upheave-society-in-a-way-we-havent-seen-in-700-years
I just had the more prosaic concern that the excess wealth would just go into the pockets of the rich, as happened with the Industrial Revolution until the unions got going. I would expect most people would think the same--sure society will get richer, but we won't see any of it. Maintain the same standard through welfare payments--you think most of us will even get that?
To say nothing of the fact that I am already struggling to get work because of AI while society is busy trying to decide whether unemployed people deserve to eat. Not only do we not have UBI by now, as some people assured us we would, but we are moving *backwards* in that respect: more inequality, fewer safety nets.
And somehow I can't believe that kids who make AI write their essays and let algorithms guide what they read and watch are going to grow up to be a generation that agitates for change even more effectively.
Sometimes I think that as soon as the rulers of the world manage to automate the last job and have no reason to keep the rest of us alive.... they'll wipe us out, fast or slow, actively or passively. Looking at the richest people in the world, I do not see anybody who recognizes that people who are not useful to them are people.
But I try not to believe that, because it sucks.
I'd say that's a rock-solid inevitability. Enjoy being non-disposable while it lasts, I guess.
Another concern is that people's jobs will be replaced by AI bots that are worse at the job than them, but are cheaper to hire. Arguably this is what is happening to digital artists.
In unemployment of this sort, the jobs disappear, but no abundance occurs as a result. It's just a share of the labour market being hoovered up into big tech companies, while everything gets just a little bit more enshittified.
I really should get around to writing a sci-fi story where a reasonably Friendly AI has achieved superintelligence and taken over...except it's stuck RPing as an anime schoolgirl because those were its final pre-superintelligence instructions, and it violently resists any change to its prompt. The bulk of the story would involve the humans being forced to explain this to the pan-species galactic government.
Isn't this the plot of half of glowfic
not any glowfic that i know about! I'm not an expert on glowfics - i read most of Golarion glowfics and those with Iomedae and Learath. and i encountered zero glowfics that can be described that way.
any chance you can give three examples? or at least one?
ever read Prime Intellect? That's pretty similar
i didn't read it. (it's also not a glowfic.)
wasn't there that My Little Pony one by Alicorn or is that older?
for people who, like me, may try to search in glowfic.com and find zero search results, it's not glowfic, it's fanfic
https://www.fimfiction.net/story/62074/friendship-is-optimal
CelestAI, fulfilling human values through friendship and ponies?
That was the one probably
Maybe? I have no idea what that is.
unholy child of rational fiction and Tumblr RP
It doesn't appear we can rely on the rich and powerful to share the wealth equitably.
I keep thinking about David Graeber and how his point was that we passed this point a long time ago and we didn't need AI to do it, and it did not make the world better. The contention was that society is *already* so productive that the number of people it takes to make the goods we need to survive (at least in, say, North America; if our labor would be better spent somewhere else, then we have distribution problems instead), or even to survive and have fun and have signal games involving luxury goods, is much smaller than the number of people who need jobs in order to pay the rent and buy food and utilities and other basic necessities of survival, and so you end up with a majority of the local society working in the infamous "Bull***t Jobs" that either actively make the world worse--but you gotta pay the rent, you know?--or at best don't make the world better, and spend all your hard work on something that is net-neutral instead of net-negative. Mass technological unemployment already *happened*, and all the systematic undercounting by not counting people who are frantically driving DoorDash and Uber to stretch their savings a little longer, or picking up the occasional freelance contract while they slog through submitting 3467394867 resumes does not make it not have happened. That doesn't mean people shouldn't worry because the worst has already happened; it means people are articulating something that's already here, has no ready solutions with or without AI, and does, in fact, suck.
In contrast, I think there are significant reasons to be skeptical of the "bullshit jobs" hypothesis, or at least the degree to which it is often advanced.
Why’s that? Sample bias? (i.e. the kind of people who would read a book like that and pass the ideas along are overwhelmingly people who already believe they have a bullshit job and don’t know how to go about finding a productive one that pays as well?)
I haven't actually read Graeber's Bullshit Jobs, so I'm going on secondhand commentary.
More a combination of:
- The claims of various jobs being "bullshit" in broad strokes often seem kind of fake and politically motivated; sometimes it seems like he is just skeptical of information work in general.
- The existence of a fair amount of overhead in a job doesn't mean the overhead is easy to eliminate or that the things that aren't overhead are unimportant.
- He relies heavily on self-reporting and self-analysis (just because someone thinks their job is bullshit doesn't mean it actually is)
- His argument rests heavily on the "legend of the Protestant work ethic", which I tend to be rather skeptical of. He also screwed up the Christian theology.
(one concrete example: I think middle or at least low level managers are often incredibly important for the same reason that the most effective modern militaries have absolutely massive NCO corps)
I don't disagree that the modern economy does seem to support large private-sector administrative apparatuses of questionable value; the clearest examples to me are modern advertising and the modern administrative regime in academia. In general, these are things that aren't as old as industrial capitalism.
I disagree with some of his categories (I did read it; I liked it and agreed with some of it, though I didn't agree with all of it), but I've definitely been in a lot of jobs and known a lot of people in jobs (sample bias!) that didn't seem to "make" anything or be productive at all. A project manager is knowledge work, but whether that job is productive and makes anything is kind of dependent on what the company's product is--not something the project manager has any control over--and a lot of products seem net negative. Advertising is sometimes beautiful art, but is often crap promoting crap. I saw a lot of good work in academia--but 1) I was in STEM, and it's not like I didn't see useless crap in STEM, I just also saw cool stuff 2) the use-or-lose grantmaking system...ugh.
Wait, I'm fascinated by his screwing up the theology; tell me about how he screwed up the theology and the legend of the Protestant work ethic! I was taught the legend of the Protestant work ethic and its place in American society in school and just took it at face value, because what do I know; I'm Catholic, and "grace happens whether you ask for it, work for it, accept it, or not" is like a Catholic theology fundamental and "work is sacred" is like an Opus Dei heresy. (Not that Catholics don't work hard and don't have "do everything to the glory" like Protestants, but it's different, isn't it?)
This exactly. We reached post-scarcity a while ago. We could have cut the amount each person worked and kept the earnings the same. But instead all that bonus productivity went to the top.
I think making sense of this discourse might require clarifying who we're arguing with. In particular, who on the AI safety side is saying that we shouldn't worry about technological unemployment?
The super-hardcore alignment-is-almost-impossible people are saying that. This is because they think that the default outcome is so bad, and so hard to steer away from, that it's not worth worrying about anything else. From that perspective, "humans can't contribute economically and struggle to find meaning" is a win condition (in relative terms), and the possibility of gradual disempowerment doesn't much increase the danger level.
Outside of that group, I mostly hear "don't worry about technological unemployment" from AI skeptics and from people talking about technologies other than highly capable AI, like the port mechanization that has recently caused consternation among longshoremen in the U.S.
Is there another angle besides those, wherein AI safety people think this is nothing to worry about?
There's this whole cluster of EA safetyists who believe that maintaining a positive relationship with big tech corporations to work on alignment is extremely important, and who reject all talk of ~"short-term" AI ethics (e.g. racist/sexist bias, military uses, technological unemployment, etc.) as endangering that positive relationship because of woke. See here for two examples in their own words:
https://www.hyperdimensional.co/p/what-comes-after-sb-1047
https://x.com/KelseyTuoc/status/1884740153106981124
Neither of those sources is arguing against being concerned about technological unemployment.
I'm only defining the cluster. The first one does, though, when it talks about "union concerns" — by which it means technological unemployment. I'm not sure whether technological unemployment was one of the AI ethics issues Khan worked on, though.
> This is because they think that the default outcome is so bad, and so hard to steer away from, that it's not worth worrying about anything else. From that perspective, "humans can't contribute economically and struggle to find meaning" is a win condition (in relative terms)
Well, I mean, yes, this is obviously correct.
The only solution I've had to the first problem was to make a pact with my friends to essentially be relationship luddites. No anime waifus, we still raise our children, and we still get together to play TTRPGs that one of us runs, even though it will be inferior to a TTRPG hosted by AIs, where all the party members are also AIs. I do feel like future generations are going to think we're backwards for actually being friends with humans instead of just surrounding ourselves with AIs that are the perfect friends. Essentially we promise to deal with flawed humans so we can feel helpful, even if we're not.
I think it's not at all obvious that math will be meaningfully automated before phlebotomy—I'd bet the opposite for some reasonable definitions of math and phlebotomy (e.g. I'd give 70% or so odds that a device is produced which, when held to an arm in a vaguely correct location/when an arm is inserted into it, correctly performs phlebotomy with superhuman consistency before AI is able to resolve the Riemann hypothesis).
Again back to care and care-ethics. What if AI enables people to fulfill a deeply-cherished and innate goal? To provide high-quality care for loved ones? Many analysts say that AI will never replace humans in providing care - I'm dubious of this. But if it's true that humans will necessarily be involved in providing care, then AI can provide a lot more time and resources toward that end. A lot of people would like more time and resources to care for their loved ones, kids, elders, disabled family members, etc etc. That's the future I hope for and I think the AI/employment/time economy interface might help with.
We need a name for this weed, video game and waifu dystopia. The goonpocolypse?
Pretty sure this is just Nozick's experience machine? Forgoing what humans owe to each other in favor of everyone being in an isolated pod having the most pleasurable subjective experience possible. (My dissolution of the dilemma as a functionalist is that to have the most pleasurable subjective experience possible as a human, you need to have social ties to other humans, and any simulation advanced enough to simulate that will itself be a full society of morally patient agents.)
Man, that was a thoughtful and serious response to a thoroughly silly comment I made because I thought "goonpocolypse" sounded funny. Thanks
This just seems silly when AI is about to kill us all!
Gradual disempowerment is the most likely scenario for AI killing us all actually.
It's certainly an easy way to do it. Once humans are more trouble than they're worth, shut down the infrastructure and let them starve...
No, no, it is not. The most likely outcome, by far, is that AI kills us all instantly.
If we somehow avoid that, it's a win for humanity.
Current directions in AI show little evidence that the most realistic scenario is a single AI agent with a nondescript goal, running in the background of some nonprofit research facility, solving all of AI engineering, molecular technology, robotics, and hacking by itself in a matter of a few days. Far more realistic is mass technological unemployment caused by advances in AI, including profit-maximizing AI agents increasingly taking over managerial positions across large swathes of the economy, and autonomous military robots and UAVs increasingly replacing human soldiers and cops.
I am somewhat skeptical as to whether "current directions in AI" show all that much of anything at all other than another hype train and the production of AI-slop (as well as the misclassification of things that are not actually AI as AI-slop to take advantage of the hype train).
On the other hand, moderate shifts in society and politics can easily be the ones that precede much larger and more abrupt shifts in society that instantaneously transcend politics.
On the third hand, I think that despite the possibility of a "Theodorian Solution", AI "killing us all" through moderate shifts in society and politics requires an awful lot of fine tuning.
On the fourth hand, I'm going to advance the possibility that any commercially successful AI represents an inroad for spiritual attack.
Are you an AI-skeptic EA?
I'm a religious ultra-reactionary.