I sometimes see AI safety advocates dismiss concerns about technological unemployment from AI. “Don’t worry about AIs taking your jobs,” they say. “By the time AIs are good enough to take your jobs, society will be rich enough that you won’t need to work! What you actually need to be concerned about is—”
But if you actually listen to people’s concerns about technological unemployment, I think they tend to fall into two categories: one matches up with a common AI safety concern, and neither is addressed by society being rich.
First, people are concerned about meaning. In the 1950s, people thought that by the 2000s robots would automate away all the work, leaving human beings to devote ourselves to the finer things in life, that which is most uniquely human—creating art, writing great works of literature, advancing the frontier of science and mathematics.
Unfortunately, that which is most uniquely human turned out to be less uniquely human than expected. Science and math at least are going to be fully automated well before (say) phlebotomy. So if AIs run our factories, clean our floors, and manage our financial system—and if our art looks like a four-year-old’s stick figures next to the AI’s, human-cooked food is inedible to an AI-food-honed palate, and letting a human instead of an AI teach is basically child abuse—what are we supposed to do with ourselves?
I think a lot of people are worried that the answer is smoking weed, playing video games, and masturbating. And that’s just not a very appealing future, even if you assume that the AIs have developed unfathomably well-made video games, Super Mega Ultra No Side Effects No Bad Trip Weed, and big-titted anime waifus the likes of which our petty mortal minds could not compass.
Most people like being useful. Most people like doing work that has some concrete positive effect on the world. Many of these people are legitimately worried that AI automation means they will never be useful again.
Second, people are concerned about gradual disempowerment. As the authors of the Gradual Disempowerment essay write:
> Humans use their economic power to explicitly steer the economy in several intentional ways: boycotting companies, going on strike, buying products in line with their values, preferentially seeking employment in certain industries, and making voluntary donations to certain causes, to name a few… It is fairly easy to see how a proliferation of AI labor and consumption could disrupt these mechanisms: socially harmful industries easily hiring competent AI workers; human labor and unions losing leverage because of the presence of AI alternatives; human consumers having comparatively fewer resources.
>
> The more subtle but more significant point is that most of what drives the economy is implicit human preferences, revealed in consumer behavior and guiding productive labor. Some small amount of choices have already been delegated to systems like automated algorithms for product recommendation, trading, and logistics, but the majority of economic activity is guided by decisions and actions made by individual humans, to the point that it is almost hard to picture how the world would look if this were no longer true.
Most people exert control over the global economy by doing their jobs and by spending money which they earn from their jobs. If humans no longer have jobs, then they very likely have far less control over the global economy—even if they maintain the same standard of living through welfare payments. An economy that humans no longer have control over seems likely not to reflect the values and preferences of ordinary human beings.
Now, you might not think these are the most likely Bad Futures. For that matter, you might think these are good futures! Maybe everyone has internalized the Protestant work ethic and this has made them overly precious about the merits of weed and masturbation. But I think that people’s real concerns about technological unemployment aren’t addressable by going “oh, society will be very rich.”
I wish people who know a lot about AI would be less condescending about people trying to express their legitimate worries without knowing the correct AI safety shibboleths.
Neither of those is the objection I would make. Saying that we’ll all be rich assumes some sort of economic equilibrium, but this is hardly reassuring when the time it takes to reach equilibrium can easily exceed a human lifespan. This is precisely the attitude Keynes was criticizing when he said that in the long run we are all dead.
I would gesture at how technology replacing jobs is an obvious source of inequality. It hurts people whose income comes from labor, while benefiting those whose income comes from capital. AI advocates are generally in favor of correcting for this with a universal basic income (UBI) or some similar policy. But I think they underrate what an uphill battle that will be, and how much harder it will be than creating the technology itself. One of the ways to fight for UBI is to complain very loudly that AI is going to replace jobs.
Another possible Bad Future is that the wealth of AI ends up in the hands of a small number of rich dictators and the rest of us are fucked because all the *armies* are AI-controlled robots instead of humans.
https://qz.com/185945/drones-are-about-to-upheave-society-in-a-way-we-havent-seen-in-700-years