The Best Product Of The EA Movement Is The EA Movement
Cause X, the hinge of history, and empiricism
[content note: brief references to past moral atrocities]
Cause X
Some people argue that we have already discovered the most important cause area: we already know what we are supposed to be doing to help others and we just need to do it. Other people argue that, in fact, we are very uncertain about what the most important issues are. We can expect to find a Cause X which is more important than anything effective altruists currently prioritize.
The argument in favor of expecting to find Cause X is simple. Historically, there were many very intelligent, wise, ethical people deeply committed to doing good in the world. And, historically, nearly all of them were wrong about what the most important issues are.
Even today, many people read Greek and Roman philosophers for moral guidance and inspiration. I’m hardly going to be accused of excessive modesty if I say no one is going to be taking life advice from my writings in two thousand years. But there are lots of issues where I am right and Greek and Roman philosophers are wrong.
And these are not especially controversial moral issues. Here’s a quick list of examples of beliefs I’m talking about:
You shouldn’t keep slaves.
You shouldn’t torture people.
You should also care about people who live in a different city than the one you live in, who were born in a different city than the one you live in, or who speak a different language than you.
You should let women vote, leave the house, choose who they marry, and generally exercise some amount of self-determination over their own lives.
Your national sport shouldn’t be watching people get murdered.
You shouldn’t rape pubescent children, especially if they are also your slaves.
If a country consists exclusively of former child soldiers with CPTSD and the slaves they can randomly murder on a whim, that is in fact bad, and not good.
Aristotle was very concerned about ethics and his insights have been recognized by philosophers for thousands of years. But he did not at any point go “wow, honestly the thing we should be working on here is all the kids being raped??? That seems kind of bad???” In fact, from our perspective, the Nicomachean Ethics is bizarrely unconcerned with all of the dozens of ongoing moral catastrophes in Athenian society.
I’m not smarter than Aristotle. But I have the advantage that people have been thinking about ethics for thousands of years and writing down their thoughts. So I can catch on to certain trends. And one of the trends is that there are lots of kind, thoughtful people who spent lots of time thinking about how to be good people who missed things as obvious as “do not rape children.” So probably I—even though I am trying really hard to be kind and thoughtful and spend lots of time thinking about how to be a good person—am missing things as obvious as “do not rape children.”
But because I know about this fact, I can prepare for the possibility of a severe moral error.
The Hinge of History
Another important thing to think about here—besides Cause X—is the hinge of history hypothesis. The hinge of history is the most important time in human history. For example, it might be the time when humanity is most likely to go extinct, because we have developed the technology that lets us destroy ourselves without having the wisdom not to use it. It might also be a time when we have an unusually large influence over whether humanity’s future goes well or badly. For example, we might be creating a greater-than-human intelligence which, being superhumanly good at accomplishing goals, will have a very large effect on what people’s lives are like and, being immortal, will be around for millennia.1
There are good arguments for and against the hypothesis that a hinge of history exists at all, as well as for and against the hypothesis that we are currently at one. I don’t have the space to review all of them here. But it’s important to be prepared for the possibility that we aren’t at the hinge of history now and might be in the future.
Implications of the Argument
What the future hinge of history hypothesis and the Cause X hypothesis have in common is that they suggest that most of the good we do is in the future. Of course it’s important to work to eliminate factory farming, to protect children from malaria, and to prevent pandemics. But more important than these is successfully handling the hinge of history and the unknown ongoing moral catastrophes.
Does this mean we can just sit back and relax, ready to spring into action when a severe moral catastrophe or a hinge appears? No!
First, of course, we’re hardly going to know when Cause X or a hinge will appear without putting a lot of serious thought into it. The small but rapidly growing field of global priorities research studies these issues, and we as a society should be putting far more energy into it than we already are.
Second, we need to prepare ourselves to be able to handle Cause Xs and hinges when they do arrive. Unfortunately, there is no guarantee that we’ll be faced with challenges that we’ll be able to easily overcome. If we want the best possible chance of doing good in the world, we as a society need to become stronger.
A moment’s thought reveals dozens of things that we will need. Preventing burnout and knowing what working conditions lead to the best performance, for example. Making decisions under conditions of extreme uncertainty. Having discussions between experts that are the most likely to lead to the truth. Building institutions that organize people to pursue the best course of action.
How do we as a movement and as a society develop these things? Constantly testing ourselves against reality.
It’s easy to fool yourself in your armchair. Of course we know how to have truth-seeking discussions, we assure ourselves. We can definitely make good decisions no matter how complicated the situation. There’s nothing to worry about.
But when you test yourself against reality, often you find that, well, you can’t. For example, many people thought that the world would easily be able to handle a major pandemic or other existential risk. The coronavirus pandemic—although far from an existential risk—provides evidence that we can’t. We can now analyze what went wrong and figure out how to do better next time. (And if we don’t—well, that’s one obvious thing we need to work on fixing!)
My argument means that we, as altruists, need to balance more speculative and more testable work. A lot of important work—like global priorities research itself—is pretty difficult to measure. You can’t ever really know whether you’ve successfully identified the correct global priorities. It’s important to try to reduce the risk of global thermonuclear war, but it’s hard to tell if your strategies succeeded or you just got lucky.
But if we’re trying to improve our skills, we should also prioritize areas with tight feedback loops that let us test ourselves directly against reality. For example, if we’re trying to reduce the rate of malaria in a certain area, we can (in principle) study how the rate of malaria changes a decade after the implementation of our program. If we’re trying to improve living conditions for chickens, we can look at how many chickens are better off after each year of campaigning.
Of course, all programs can be tested against reality to some extent: you can observe whether you passed a bill reducing how many nuclear weapons a country has. That is also important and gives us valuable information. But malaria and corporate campaigns are important because we can directly measure the effects of our actions on the thing we care about. If we mess up anywhere in the process, we can tell. That’s not possible for nuclear war: you can measure whether you passed the bill, but it’s much, much harder to measure whether the bill actually reduced the risk of nuclear war.
For this reason, concern for the long-term future may, ironically, lead some altruists to prioritize relatively short-term programs—to better build the skills we need for the future.
The argument in favor of caring about artificial general intelligence is more complicated than this. I recommend Wait But Why’s series on superintelligence as an introduction.
This is a classics question, but I wonder if Athenian slaves ever advocated for themselves publicly and opposed the institution of slavery. Like, it seems to me one big difference between Athenian society in whenever BC Socrates was alive and the 2021 English-speaking world is that there's a pretty huge democratization of information and community advocacy. I would at least hope that having, like, a feminist movement would prevent a 2021 version of ancient Greek-style patriarchy from being /unthinkable/ the way it was for the all-male symposiums (even if actually solving 2021 patriarchy is a long and tricky political question). Of course, it's still possible there's a Cause X that either affects beings who can't advocate for themselves on the English-language internet or doesn't directly affect people so they can advocate for themselves.
My sense is that this is not a popular position in the EA movement, but it seems to me that the slavery-level why-are-they-being-so-stupid? question our descendants will ask of us is about climate change. Also, I would argue that we're in a hinge of history in that regard.