It was really hard for me to be unbiased in my previous post about Nonlinear because they represent every trend I most despise in effective altruism.
For years I’ve been gesturing vaguely at vibes, with people looking at me skeptically and saying “Ozy, is that really a thing? Are you sure you’re not strawmanning some more reasonable behavior?” And then someone wrote a document explaining, in thousands of words of detail, that they do everything I hate and they’re proud of it. It’s incredibly validating.
So here are four things Nonlinear does that I hate and why I hate them.
Do It For The Gram
Kat Woods describes the experience of working for Nonlinear:
Imagine a job where you’re always in beautiful, sunny, exotic places. Part of the year is spent in various EA Hubs: London, Oxford, Berkeley, San Francisco. Part of the year you explore the world: Venice, the Caribbean, Rome, Paris, the French Riviera, Bali, Costa Rica.
You’re surrounded by a mix of uplifting, ambitious entrepreneurs and a steady influx of top people in the AI safety space. In the morning, you go for a swim with one of your heroes in the field. In the evening, a campfire on a tropical beach. Jungle hiking. Adventure. Trying new foods. Surfing. Sing-a-longs. Roadtrips. Mountain biking. Yachting. Ziplining. Hot tub karaoke parties. All with top people in your field…
Sure, it’s an unorthodox payment arrangement. But, man, you are certainly living a glamorous lifestyle. Always in sunny, exotic, places. Living in beautiful homes. Going on adventures in bioluminescent bays, yachting, kayaking, and snorkeling in tropical reefs. And you’re living that glam life while working for a charity. Not bad.
The correct number of times for the word “yachting” to occur in a description of an effective altruist job is zero. I might make an exception if it’s prefaced with “convincing people to donate to effective charities instead of spending money on.”
To be clear, there’s nothing wrong with being an effective altruist who sails. You can be an effective altruist and have an expensive hobby. I think this is a recurring problem with the effective altruism frugality discourse. It’s damnably hard to talk about frugality without erasing the subtle distinction between effective altruist culture and what individual effective altruists do.
Decide what percentage of your resources you want to donate to altruism and do that. If you have enough left over to pour money into a black hole in the water, that’s your business; I don’t care.
But effective altruist culture ought to be about spending resources in the most efficient way possible to do good. Sure, sometimes the most efficient way to spend resources to do good doesn’t look frugal. I’ve long advocated for effective altruist charities paying their workers well above the nonprofit average. And a wise investor might make 99 bets that don’t pay off to get one that pays big. But effective altruist culture should have a laser focus on getting the most we can out of every single dollar, because dollars are denominated in lives.
Spending resources efficiently is different from spending less money. In fact, as far as I can tell, Nonlinear underpaid its employee Chloe relative to the usual compensation for a live-in personal assistant.1 Effective altruists should spend money. They shouldn’t spend money on dumb shit.
I don’t give a shit if Kat Woods and Emerson Spartz snorkel in tropical reefs or zipline or do hot tub karaoke parties on their own time. But they’re describing a job they repeatedly characterized as a “dream job” for many effective altruists—so I think it’s fair to take this as a description of their vision for what effective altruism should be, the kind of thing effective altruists should aspire to. And it isn’t about any of the things I admire in the effective altruists I know. It isn’t about people holding you to high standards and calling you on your shit, but always with compassion and a sense of human frailty. It isn’t about taking joy in admitting you’re wrong, or feeling the burning drive to understand, or being the sort of person who takes in abuse victims or babysits friends’ children or moves worms stuck on the sidewalk to the grass. It isn’t about the ruthless pursuit of a better world.
It’s about being Instagrammable.
Everything Kat describes in those paragraphs is a photo. “Woke up early in Bali for a sunrise swim with Will MacAskill! #living #goals #squad #blessed #nofilter”
It’s cool and high-status to travel the world. It’s cool and high-status to go on adventures. It’s cool and high-status to spend time with famous and influential people. And, God help us, it’s cool and high-status to save the world.
I think something like this is the root of a lot of discomfort with showy effective altruist spending. It’s not that yachting is expensive. It’s that if your idea of what effective altruists should be doing is yachting, a reasonable person might worry that you’ve lost the plot.
So-Called Top EAs
Kat writes:
Of note, we think where this is coming from is that when Alice said she was going to visit her family in another country, we were surprised. We were having some of the top figures in the field come live with us for weeks right during the dates she’d chosen.
She’d basically be skipping one of the best career and impact opportunities of her life.
We, who were her mentors and employers, recommended she not miss out on this once in a lifetime opportunity. That she’d probably have more impact if she visited her family at a different time. The friendships she made and the ideas she got exposed to might be the real big break in her ability to do good.
We were doing our best to launch Alice’s career, and one of those ways to help her was providing her mentorship. Advice. She disagreed with the advice and said she’d prefer to visit family. And we said, “enjoy some quality family time!”
Let me be frank: if this is the effective altruist movement, I want no part of it.
Set aside that I’m pretty sure most “top” effective altruists aren’t jetting around with Kat Woods and Emerson Spartz, because they’re too busy working. Set aside that, if Alice would benefit so much from meeting these people, there was nothing stopping Kat and Emerson from setting up a voice call or introducing Alice to them the next time they were in the same place.
Knowing people helps you get jobs. Granted. But the only reason that building friendships with “top EAs” might be “the real big break in her ability to do good” or “one of the best career and impact opportunities of her life” is if the way you get a job in effective altruism is doing hot tub karaoke with your future boss.
Who you know has more of a role in effective altruist hiring and grantmaking than it should. But, thank God, we haven’t lost our way that far. Major effective altruist organizations still often hire using blinded work trials. There are plenty of people working in AI safety who think “networking” is the thing you do to get two computers to talk to each other. You can impress people by doing good research and publishing it on the Effective Altruism Forum or your website—even if you’ve never set foot in the Bay Area.
The biggest career and impact opportunity of your life is and, I hope, will always be getting good at a valuable skill. Getting hired is harder if you don’t know anyone, but if you can do the work, you’ll eventually find a place in effective altruism. And if you can’t do the work, no amount of hot tubs will help.
The obsession with top effective altruists is toxic. It’s at the root of most of the social dysfunctions I see in effective altruism. It burns people out, causes unnecessary drama and social anxiety, destroys friendships, and keeps us from developing the rich and supportive community we actually need to make the world better.
Pick a useful skill you don’t have. Every time you find yourself being glad that you’re in the inner circle, or being anxious that you’re not in the inner circle, or strategizing about how to get into an innerer circle, go spend ten minutes learning it. You’ll be happier and more employable, and the effective altruism community will be better for it.
File Your Paperwork
I’m not going to belabor this one. But come on, guys. Know how much you’re spending. If you’re claiming that something is “accounting,” make sure all the numbers in the columns add up. Legally hire your employees. Make sure they have visas.
To be sure, if you’re two people working part-time out of your living room, you don’t need all the bureaucracy of a two-hundred-person organization. And some amount of sloppiness is normal in a small, new organization, whether for-profit or nonprofit. But by the time your organization is up and running, you need a paperwork situation that won’t make your auditors cry.
Sunshine Is The Best Disinfectant
Kat writes:
EA’s high trust culture, part of what makes it great, is crumbling, and “sharing only negative information about X person/charity” posts will destroy it.
There’s an interesting ambiguity in the phrase “high trust culture.” It could mean that everyone trusts each other, or it could mean that everyone is trustworthy.
Sharing accurate information about potential bad actors—including, at times, publicly—is the immune system of a trustworthy culture. It ensures the prompt expulsion of people who take advantage of that trust. And if you’re tempted to do something you know you shouldn’t, fear for your reputation might stop you when conscience won’t.
A lot of people have pointed out that Alice and Chloe agreed to Nonlinear’s absurd compensation structure. I might be libertarian enough to not want to get in the way of two consenting adults signing a contract. But I want higher standards for effective altruist organizations than “their practices wouldn’t be illegal in a libertarian utopia.”
The effective altruist imprimatur—invitations to Effective Altruism Global, listing on the 80,000 Hours job board, membership in effective altruist coworking spaces, being interviewed on effective altruist podcasts, and so on—isn’t an entitlement given to everyone who can correctly use the phrase “utility function.” You have to have a program that, in expectation, is at least as good as giving money to GiveDirectly. You have to be truthful, far beyond the lax standards of truth of normal charities. You have to check, as best you can, that the thing you’re doing has the desired results, instead of assuming that it does because it sounds good.2 And you have to do right by your employees and others you have power over.
I, yes, trust effective altruist institutions to filter for honorable organizations. And if they’re falling down on the job (perhaps through no fault of their own), individuals should hold them to account.
Kat writes:
I could do this for any EA org. I know of so many conflicts in EA that if somebody pulled a Ben Pace on, it would explode in a similar fashion.
That would probably be a bad idea: even the best organizations have embarrassing pasts, and sometimes something is all right to do but hard to explain to a critical public. But part of me is just like do it. Make it all public. Reveal all the darkest secrets. We’re here to evaluate charities, aren’t we? Give us all the information we need to evaluate them. Blow it all up, and let us sort through the wreckage. The effective altruism movement might very well be stronger for it.
1. If your employer starts talking about how glamorous your job is, reach for your wallet.
2. I don’t think it’s a coincidence that, as an incubator of AI-safety meta-charities, Nonlinear was safely isolated from such dreadful things as “measurement” and “observation of outcomes.”
Thanks for this. As someone who has been quietly and frugally doing unpaid EA work for half a decade, it is incredibly frustrating to see these "top EAs" focusing so much on elevating themselves and their personal connections. I keep picking up on bad 'vibes' in the EA community that can never be expressed because they're verbal and not numerical, but at least I can confirm that I'm not crazy when an SBF or a Kat proves me right.
I am not an effective altruist, and effective altruists should feel free to discount what I say accordingly. I also have strong opinions about the process here (https://forum.effectivealtruism.org/posts/bwtpBFQXKaGxuic6Q/effective-aspersions-how-the-nonlinear-investigation-went), and effective altruists should feel free to discount what I say based on that as well. Finally, I have never been persuaded that any AI safety stuff is doing much, so effective altruists should feel free to discount what I have to say about AI safety orgs due to that as well.
To be succinct and to the point: Inasmuch as EA follows your preferences, I suspect it will either fail as a subculture or deserve to fail. You present a vision of a subculture with little room for grace or goodwill, a space where everyone is constantly evaluating each other and trying to decide: are you worthy to stand in our presence? Do you belong in our hallowed, select group? Which skeletons are in your closet? Where are your character flaws? What should we know, what should we see, that allows us to exclude you?
You are welcome to do that. Perhaps you will even succeed in it. But you cannot at once decry the notion of "top EAs" and the harm it causes, then promote a values system based on effectiveness as an EA that actively excludes everyone other than the same "top EAs" you decry a few paragraphs above. You advocate for an insular and elitist group that stands apart from society, creates an internal network (invitations to Effective Altruism Global, listing on the 80,000 Hours job board, membership in effective altruist coworking spaces, being interviewed on effective altruist podcasts, and so on) open only to the Worthy, and seeks to have dramatic impacts on the world writ large.
Explode your organizations if you will; celebrate the process of burning them down with mixes of truth, rumor, and uncharitable slants if you like; but you are fooling yourself if you think such a winnowing process will lead to a high-minded strong group of idealists and not a petty, risk-averse, cautious group who knows that the knives will be out at signs of weakness and who constantly seeks to self-purify and root out the unworthy within it. There is much I like within effective altruism; your vision, self-contradictory and self-destructive as it is, seems effective primarily at damaging that. I am not persuaded.