42 Comments
Bob Jacobs:

> I support John Green’s nascent attempts to kickstart such a community and consider myself a proud #TBFighter ally.

Very happy to hear this. I'm also a #TBFighter supporter and was disappointed by how negatively EAs initially responded when I talked about it (e.g., https://forum.effectivealtruism.org/posts/SCYrASfriLCoFaqCZ/henry-john-green-video?commentId=3gHmTWPybK9pAfFLw )

I think part of the hesitation comes from EA’s intellectual culture. There’s often a deep skepticism towards collective action, something I’d attribute, once again, to EA inheriting some of the blind spots of mainstream economics: https://bobjacobs.substack.com/p/the-ea-community-inherits-the-problems

A more well-rounded sociological perspective shows that many of the most impactful efforts in global health have come from public movements, not just individual optimization.

EA doesn’t need to give up its epistemic humility to recognize that sometimes, collective action is how real progress happens. Maybe the success of the TBFighter campaign, combined with the broader political moment, will help EAs take the messy, hard-to-quantify tool of 'collective action' a bit more seriously.

sidereal-telos:

I'm never really clear what Leftists mean by "collective action" when they complain Effective Altruism doesn't have it. When I donate to a shared fund that tries to find the best places to spend money, rather than trying to do it myself, is that not collective action? When people make arguments about what effective altruists should do, rather than just doing stuff themselves, is that not collective action? Are GiveWell analyses not collective action, given that they move many millions of dollars in funding? Are the groups campaigning for restrictions on AI development not doing collective action?

Bob Jacobs:

Your examples aren’t collective action, except perhaps parts of the last one (depending on what they do). The key feature of collective action is that it only works if a group does it together.

Take your donation to a fund: that donation has a positive marginal impact whether others donate or not. The same goes for reading GiveWell reports or arguing on forums, each is useful on its own. This is marginalist thinking, common in economics: “what is the impact of one more unit?” That mindset, inherited from econ 101 and prominent in libertarian/rationalist circles, has heavily shaped EA.

But not all impactful action works this way. Collective action requires crowds because it only produces results when many people act together. Think of strikes, protests, or political movements. These aren’t effective because of one person’s decision; they only succeed through mass participation. From a purely individual, marginal, cost-benefit analysis perspective, your contribution seems negligible, but if everyone thinks that way, no one joins, and the movement collapses before it starts.

To understand these dynamics, you need tools from sociology, political theory, and history; fields that study how groups organize, how power is contested, and what shapes political change.
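The distinction being drawn here corresponds to what social scientists call a threshold or critical-mass model of collective action (in the style of Granovetter). A minimal sketch, with the payoff scale and threshold value invented purely for illustration:

```python
def marginal_impact(donations):
    """Marginalist case: each extra unit helps regardless of what others do."""
    return donations * 1.0  # e.g. each dollar buys a fixed amount of bed nets

def collective_impact(participants, threshold):
    """Threshold case: the action pays off only once participation crosses
    a critical mass, e.g. a strike that works only if enough workers join."""
    if participants >= threshold:
        return participants * 1.0
    return 0.0

# One more donor always adds value...
assert marginal_impact(100) - marginal_impact(99) == 1.0
# ...but one more striker adds nothing below the threshold,
assert collective_impact(99, threshold=100) == 0.0
# and the participant who crosses it flips the outcome entirely.
assert collective_impact(100, threshold=100) == 100.0
```

In the threshold case an individual marginal cost-benefit analysis sees a near-zero expected contribution below the critical mass, which is exactly the reasoning that, if universal, prevents the movement from forming at all.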

sidereal-telos:

Hmm, I actually hadn't heard that definition of "collective action" before, and it does seem to match the way I hear it used, but it makes the way Leftists treat "collective action" as almost synonymous with "good", "effective", or "desirable" way more confusing. If something doesn't work unless enough people all do it at the same time, that's a *bad* thing. It's something you'd bring up if you were arguing not to try something. It can make sense to support some specific collective actions despite this drawback, but to me it doesn't make sense to support collective action as a class, any more than it would make sense to support as a class "things that only work if you're rich" or "things that only work if you have a good education".

I suspect you're going to respond to this with something to the effect of "yes, this is what I mean when I say Effective Altruists are too sceptical of collective action" but from where I stand I see a lot of Leftists trying collective action and failing because they don't have anywhere close to enough support to succeed, which makes me think they're overly enamoured with the idea.

I think you would do better here arguing for some specific intervention you think is incorrectly neglected because it requires too much collective action.

Bob Jacobs:

So, two things: 1) The impact of a collective action is often much higher than that of an individual action, and 2) this individualistic mindset is something the capitalists use to divide and rule, ensuring unions, co-ops, collective bargaining, etc. don't come to be. We've been conditioned to think of ourselves as a kind of homo economicus, only interested in individual action. By promoting collective action, leftists are trying to start a paradigm shift away from this incorrect model of humans, and are signalling: hey, I'm willing to cooperate on these very impactful collective actions.

If you've read my post then you know that I have, in fact, argued for specific interventions I think are wrongly neglected due to EA's aversion to collective action (e.g., making the EA orgs co-ops), and I have done the same in other posts (e.g., the neglect of working on constitutional changes, like changing the Belgian constitution to include animal welfare, for which EA didn't want to give any support/funding).

Matrice Jacobine:

> How to handle this subset of beliefs is controversial among effective altruists. The Overton window ranges from “innocent until proven to say slurs” to “if someone so much as mentions human cross-population genetic variation, ban them.” Nevertheless, the community is surprisingly adjacent to vocal “race science” advocates—not because effective altruists agree (they mostly have perfectly normie liberal views on race), but because there exist any effective altruists who won’t expel race science advocates from their parties and Discord servers on sight. This phenomenon is frustrating to the many effective altruists who despise race science and don’t want to be on edge to see whether this guy will use the N word.

> I do want to remark that—while these hot-button beliefs are less censored than in the mainstream left and center—effective altruists are far more willing to censor them than they are to censor criticisms of effective altruism. They aren’t very important to the main effective altruist project, and they tend to silence people. I sometimes see people assume that effective altruists are specifically tolerant of race science people for some reason, and I don’t think that at all reflects conditions on the ground.

No, I don't think that's true at all. As Thorstad noted at length in his sequence on the subject (https://reflectivealtruism.com/category/my-papers/human-biodiversity/), the discussion of the Manifest controversy on the Effective Altruism forum clearly showed that large swathes of even the effective altruists who weren't actively involved in promoting race science held race science to be true, and regarded the question of whether to invite race scientists to EA-adjacent conferences as a matter of truth-seeking v. reputation-seeking. It is *not normal* for an ostensibly academic-minded secular cosmopolitan philanthropic movement to be so reliably involved in controversies about ties to advocates for a single fringe extreme position held by virtually no academic biologist or geneticist, while simultaneously being closed to entire academic fields of study, and even to majority positions in the few academic fields it actively engages with. If this were a simple question of open-mindedness and free speech norms, then EAs would get into a similar number of controversies about creationists, tankies¹, and Flat Earthers. This is very obviously not the case.

¹: kudos to certain French and Belgian skeptic orgs still platforming Jean Bricmont btw

Isaac King:

> I do want to remark that—while these hot-button beliefs are less censored than in the mainstream left and center—effective altruists are far more willing to censor them than they are to censor criticisms of effective altruism. They aren’t very important to the main effective altruist project, and they tend to silence people. I sometimes see people assume that effective altruists are specifically tolerant of race science people for some reason, and I don’t think that at all reflects conditions on the ground.

This is obviously true. They publicly condemned Nick Bostrom simply for stating facts about race that are so well-accepted that they're mentioned on Wikipedia! When it comes to "traditionally controversial" political positions, EA is pretty far towards the "censor people who make us look bad" side of the truthseeking spectrum.

The reason why EA is frequently adjacent to these people is because of their relation to the rationality community, which is much more open to controversial views. This is one of the main differences and causes of friction between EA and rationalists.

Manifest is not an EA conference, it's a rationalist one.

Matrice Jacobine:

Thorstad and I are in fact also basing this on the EA Forum response to the Bostrom incident. On the flip side, I would wager that, if asked, Jaan Tallinn and Vitalik Buterin wouldn't answer significantly differently from Dustin Moskovitz when pressed on their opinion of race science. I think you're making a spurious distinction between EA and rationalists which does not really exist (and which Ozy themself is not making), at least in the Bay where Manifest took place.

Furthermore, many of the Manifest invitees seem more centrally EA than rationalist to me at first glance (Agnes Callard, Ajeya Cotra, Ozzie Gooen, Nathan Young, Joe Carlsmith). None of the actual controversial invitees seem particularly rationalist either, they are best classified as Thielsphere/IDW, and Hanania in particular is pretty well-embedded with EA pundits.

Insofar as there *is* a difference between the two in terms of "openness to controversial views", it doesn't make your case: why is it that EAs get into controversies only about the race science people and not the tradcaths, those getting really into chakras, and the violent anarchists? All of these are as present, or even more present, among Bay rats than the race science people, as recent events have shown.

Isaac King:

Because the race science people are correct, while the tradcaths, chakra people, and violent anarchists are wrong. Rationalists try to have true beliefs, so of course their interest in a subject will be different depending on the evidence in favor of it. This spills over into the EA community, but they're not *happy* about it, and they try to distance themselves as much as possible in that arena.

Of course everything is a spectrum, there's a strong crossover between the communities; that's what Ozy's whole series of posts is about. But they're still distinct communities when it comes to their general goals, personalities, and "vibe".

Doug S.:

I'm reasonably confident there must be at least as much bullshit in race science as there is in evolutionary psychology, which itself ranges from rigorous and high-quality academic work to unverifiable just-so stories to sexist PUA bullshit...

Isaac King:

Oh of course. I would guess that race science is *way* worse epistemically than evolutionary psychology, since most of the well-meaning scientists get bullied out of the field and only the motivated-reasoning-driven racists remain.

But the core claim of "there are intelligence differences between races" is undeniably true, yet cannot be acknowledged even in EA circles without getting ostracized. And any situation in which an evidence-supported claim is being harshly censored by nearly all of society will be naturally intriguing to scientifically-curious people.

https://slatestarcodex.com/2017/10/23/kolmogorov-complicity-and-the-parable-of-lightning/

Matrice Jacobine:

Lol. Lmao even.

Victor Thorne:

Disclaimer that I am not arguing in support of race science.

I think the problem here is that, for obvious reasons, race science has been examined a lot less rigorously, and therefore debunked a lot less thoroughly, than creationism, Marxism-Leninism, or Flat Earth theory, and thus gains a murky status of "things *they* don't want you to know about" that interests people looking for suppressed truths. Specifically, arguments on the "for" side are more suppressed than in any of those three cases, and there's a logical connection to ideas we know to be true (some ethnicities differ from one another in important internal characteristics as well as external ones, like that one ethnic group that's really good at running marathons, iirc because of increased lung capacity among other things, and the brain is an organ), so people get the idea that if we really looked into it, we would find something.

The combination of not thoroughly examined, incredibly controversial and upsetting, and suppressed by mainstream scientific consensus tends to attract the sort of people who are interested in things like shrimp welfare that the mainstream doesn't look into seriously for different reasons.

Matrice Jacobine:

And yet it's the rabid racist who openly brags about not wanting to actually argue about his race science beliefs on the object level who gets invited each year¹, not Eric Turkheimer.

¹: Ctrl+F "A cynic might notice" here: https://www.astralcodexten.com/p/book-review-the-origins-of-woke

Anonymous Dude:

"Truth-seeking vs. reputation-seeking" frames the other side a little unfairly. If you want to rationally figure out how to do the most good, then part of that is gaining support, both electoral and financial, from people, and most philanthropic organizations and wealthy do-gooders are on the left and have a strong aversion to this stuff.

My personal opinion is: it's true, and if you talk about it lots of people will hate you and that will get in the way of convincing people to invest in malaria nets or fighting TB or what have you. So it's rational for EAs to lie about it and chase people away who insist on keeping talking about it.

Which puts me in the interesting position of writing a comment theoretically arguing for its own suppression.

Manuel del Rio:

The way I see it, there's a bit of a split in the EA community, and some of its members (whom Thorstad represents very well) are pushing for EA becoming Woke Lite (now with Numbers and some Utilitarianism!). While that is perfectly acceptable, I feel the more Rat-adjacent 'the whole truth and nothing but the truth' side is just as acceptable. I think you don't ideologically like the latter, and myself, I rather dislike the former.

Ozy Brennan:

I think that a movement whose failure mode is "it sometimes fails to alter its collective behavior in response to criticism" is in fact *very* unusually tolerant of dissenting views for a social movement.

There's a real double standard here. EA "rejects leftist criticism" if there is a highly upvoted and widely discussed critical post which many people disagree with and whose recommendations aren't immediately implemented. But EA "embraces race science" if one guy has extremely bad opinions about whom he ought to have as a featured speaker at his non-EA conference and faces widespread backlash about it from prominent EAs (as well as, sure, some people who defend him).

Bob Jacobs:

Euhm, for the record, I'm not saying that (not in my article and not elsewhere). I’m not arguing that EA is unusually bad at handling dissent, or that one bad speaker choice means EA "embraces race science." What I actually said was:

> "you’ve heard that EA is great at receiving criticisms. I think this reputation is partially earned, since the EA community does indeed engage with a large number of them. The EA Forum, for example, has given “Criticism of effective altruism” its own tag. At the moment of writing, this tag has 490 posts on it. Not bad.
>
> Not only does EA allow criticisms, it sometimes monetarily rewards them. In 2022 there was the EA criticism contest, where people could send in their criticisms of EA and the best ones would receive prize money. A total of $120,000 was awarded to 31 of the contest’s 341 entries."

So when it comes to criticism the article agrees with you, and "race science" isn't even discussed at all (I've learned my lesson there). Yes, the article does critique how EA responds to criticism, but I think I'm being much more nuanced here. For example:

> “Critiques that question technical assumptions or offer marginal suggestions are welcomed — even rewarded. But those that challenge the movement’s power structures, funding models, or social norms? Those are more often ignored”

And then I go into *why* that happens, what *structures* produce that filtering effect, and how it could be improved (because I care about EAs. I literally have you in my substack recommendations for crying out loud).

Ozy Brennan:

Sorry, I didn't mean that as a criticism of your post, just as a criticism of leaving your post as a response to my post. I actually liked your post quite a lot-- the point about EA Forum karma was really interesting and made me more concerned about strong upvotes.

Bob Jacobs:

Thanks Ozy, really appreciate the clarification and your kind words about the post. Glad the karma point resonated. I think EAs are, by and large, genuinely good people (yourself very much included), just stuck with some... let's say, less-than-ideal incentive structures.

Matrice Jacobine:

The relevant comparison is more that the former are low-status rank-and-file who have been subjected to "losing sleep, time, friends, collaborators, and mentors", while the latter still leads a major forecasting org and organizes one of the biggest conferences in the adjacency. (https://archive.is/Ptz2x)

Ozy Brennan:

I think that Austin Chen has lost sleep, time, friends, and collaborators (I don't know if he has mentors) over his frankly bizarre conference invitation decisions. There is no EA Pope who can ban him from organizing a conference or expel him from his own organization.

Matrice Jacobine:

This would be a hard bet to evaluate without doing a complete investigation of half a dozen strangers' lives, but I would be very surprised if that were substantially the case in a quantitatively and qualitatively equivalent way, based on observing, from the outside, the social influence of the respective people involved in the two cases before and after the respective events, and on base rates from basic knowledge of how social groups function (see link).

You're right that there is no EA Pope and no easy "solve the problem" button, but that doesn't mean there is no problem (despite what the Shadoks taught us); in fact, the first step to solving a problem is admitting there is one.

sidereal-telos:

I'm not quite sure what you're trying to convey by your coughing, but that essay seems to just be someone who's disappointed that their criticism didn't lead to a fundamental transformation of EA practice or conjure up billions of dollars in diversified funding sources. But the nature of listening to criticism in an attempt to correct your flaws is that you will often end up rejecting that criticism anyway, because a lot of the criticism you receive will actually be incorrect or not useful.

I'm sure the author of that piece thinks they're right about everything in it, or they wouldn't have written it, but from my perspective a lot of it is just confused, and I suspect that this opinion was shared by many readers.

Ozy Brennan:

CEA has set diversifying EA movement funding as one of its key goals (which I'm very pleased by). That said, I'm not sure how much the various people calling for diversified funding influenced this decision. And diversifying funding streams is hard!

sidereal-telos:

Yeah, calling that "non-actionable" was going a bit far, but the piece seems to treat it as something you can "just do".

Bob Jacobs:

Thanks for the read. I don’t expect EA to radically reform overnight or instantly conjure up billions in alternative funding sources. That’s not the point of the essay. The point is that there are consistent, structural dynamics (centralization of power, narrow epistemics, donor dominance) that make meaningful reform difficult by design.

You say that "a lot of criticism will actually be incorrect or not useful." Sure. But that can’t become a blanket justification to ignore the growing chorus of concerns, or to dismiss them as "confused" or rooted in personal disappointment. That kind of reaction actually illustrates the very epistemic insulation the piece is trying to name.

Doug S.:

For purposes of #4, do secular light-skinned Jews count as people of color?

CounterBlunder:

I just want to share that I love this sequence, and am excited to share it with friends who don't understand rationalist / EA spaces to try to explain it to them. So thank you!

tailcalled:

I feel like using the N word is also not the essence of why one would oppose those interested in race science, but rather just as much of a proxy as belief in genetic race differences is. (Contrast with being a rapist, which is directly harmful and thus directly grounds for ostracization.) Like using the N word is perhaps more directly legible and justifiable to ban than belief in genetic race differences, but a proxy is a proxy.

Ozy Brennan:

I was definitely using "using the N word" as metonymy for a wider class of objectionable behavior, but for Americans, a person (who is nonblack and doesn't have N word privileges) using the N word is objectionable in and of itself, even setting aside that it signals racial animus. It's like loudly yelling "PENIS!"-- you are upsetting everyone around you and showing off that you enjoy violating social norms.

tailcalled:

I'm aware that there's a general norm against white Americans using the N word; I'm just arguing that this norm is more like the norm against men claiming that women enjoy being raped than it is like the norm against men raping women.

Showing off that you enjoy violating social norms, again, isn't harmful in the same sense that being a rapist is; rather, enjoying violating social norms is grounds for ostracization because it is a relevant proxy for your propensity to cause harm, since many forms of harm are held in check by social norms against them.

Suppose someone has Tourette's and regularly yells "PENIS!" and the N word as a result. That would only be grounds for ostracization among people who are cruel or intolerant of neurodivergence, not among those who can accommodate differences, because the speech disorder accounts for the yells and does not imply an intent to perform bad behavior more generally. On the other hand, if there were some neurological disorder that forced a straight man to regularly rape women (there isn't such a disorder AFAIK, but, like, hypothetically), then unless one could find some way to restrain it, that on its own would be sufficient reason to force this man to stay away from women, because the rape is more directly constitutive of harm.

Ozy Brennan:

Rape is really, really, really bad; yelling bad words is kinda bad. We accommodate some level of otherwise kinda bad behavior from people who are disabled and can't help it. We tolerate people with ADHD who constantly run late, or people with autism who accidentally insult people, but that doesn't mean that it's okay to run late or insult people.

tailcalled:

It's true that rape is orders of magnitude worse than yelling bad words, but at the same time I think the examples you mention match my descriptions rather than contradict them.

When a person with ADHD deprives others of a resource by being late - for instance, doesn't hand in a library book on time - then they face the same punishment that others face, rather than a special exception being made for them.

For something like running late, it's typically not explicitly punished, and so it's not really clear where the distinction lies between someone with ADHD and someone without. I guess the most plausible possible distinction would be that if someone without ADHD was constantly running late, it might be due to them not caring about or wanting to sabotage the planned activities. But in that case, the punishment towards the non-ADHD person would be due to the running-late being a signal of some other bad trait, not due to it being bad in-and-of-itself. If someone non-ADHD was constantly running late and it wasn't because they didn't care for the arrangement, then one probably wouldn't punish them. (Though one could maybe say that's because one is diagnosing them with "mysterious running late disorder"?)

To me the archetypal notion of "insult" is something like "Saying something negative about a person, not in order to reveal that information for a shared account, but rather in order to signal one's generalized aversion towards them and/or manipulate others into treating them worse". The main reason to treat insults as an act of aggression would be these two latter conditions, namely that you naturally want to discriminate against those who have generalized aversion towards you and that the lack of informative motivation often means that the statement will be misleading if not an outright lie.

This notion of "insulting" is not something that is done by accident, but it seems reasonable to distinguish the notion of an insult (as in saying something negative about a person) from its root cause analysis (that it is motivated by generalized aversion towards the person). In that case one could say that a less-archetypal form of insult could happen by accident, though that also means that the primary reasons insulting is wrong wouldn't apply to these. Like we should obviously have a presumption in favor of the value of sharing information, though of course there are lots of reasons one might want to avoid sharing *negative* information about *other people*:

* The ambiguity of whether one is intentionally trying to perform the archetypal form of insult or just tried to share information creates a social tension, as people then have to try to pick a route of behavior that works well in either case.

* The subtext of commenting on a flaw is often that the flaw should be corrected, which implies a certain sense of entitlement on behalf of the person making the comment. For instance if a man tells a woman with short hair that he'd find her more attractive with long hair, the subtext can be that she is supposed to try to be more attractive to him, whereas I assume she'd say that she isn't picking her hairstyle to attract him.

* People are often poorly calibrated with respect to others' traits and with respect to how good one's traits need to be for a given purpose, so attempts to share information about that are often misleading and tend to cause chaos in trying to sort out how reliable they are, or to cause runaway collapses in self-esteem due to repeated comments based on the same flawed heuristic.

* Relatedly, the ontology with which people seek to correct people about their traits is often much more simplified than what those people already know. For instance trans people sometimes pretend to have a delusional disorder about thinking they are the opposite natal sex in order to stake out a region for them to live like said sex, and sometimes when people tell transfems "you are a man" or transmascs "you are a woman", it is because they are not very attentive and genuinely thought trans people had said delusional disorder.

* Admittedly the above example includes a galaxybrained social manipulation scheme, and insulting here has the virtue of making the existence of this scheme much clearer. However, as the scheme is part of a larger conflict about policy, it is likely to be a good idea to get well-informed about the conflict before one starts pushing the conflict forwards in a certain direction. (Like, dishonest behavior can be a defense against others' immoral behavior, in which case it's better to address the upstream immorality.)

* While some people overestimate their own qualities and thus could in theory be helped by getting more accurate information about them, it is often based on a strategy ("fake it till you make it") or conspiracy ("let's not discourage them") about improving their self-esteem to the point where they can participate enough in a subject that they get experience with how to work within said subject, which in turn could transform their self-estimate to be more accurate. Popping this self-esteem bubble leaves them without a strategy to get this experience.

* In some cases, the person getting insulted simply genuinely would benefit from getting the information in question. However, others may have omitted the information due to some of the reasons above. Commenting on this information thus also reveals the fact that others chose to omit it. This makes them look like they have poor social skills (or perhaps are manipulative, if they omitted the information as an inverse-insult rather than due to the concerns above), which is sort-of-like-an-insult towards those people.

This probably justifies a general heuristic against insults (but at the same time it's obviously extremely untenable to fully avoid sharing negative information about other people). But most of these seem quite unlike the issues with yelling "PENIS!". Probably the closest is about the possibility of there being a subtext, like yelling "PENIS!" implies that this is a sort of sexual or unserious context. But the subtext example is also the one that's most about it functioning as a proxy for some other offense, rather than it being a harm in itself.

tcheasdfjkl:

honestly it would be very reasonable to exclude guy who yells "PENIS! N*****!" from many spaces, even if we know him to be perfectly well-intentioned and not likely to engage in other misbehavior, on the grounds that that specific behavior is really quite unpleasant to be around in itself

Whenyou:

See, as a sky-high-agreeableness, sky-high-neuroticism person, I think arguing is like the worst way ever for me to reach the truth. If someone wants to argue with me I just sit there fighting the urge to cry, and I want to agree with them because probably they're right and I'm an Irredeemable Bad Person who is Wrong About Everything. My life became about 15% better when I stopped arguing online.

But it's a noble project, based on what you said here I find it really admirable. Some weirdos even enjoy arguing, those sick fucks, and for them this sounds way more honest and better than Debate Club.

Ozy Brennan:

The secret is that neurotic agreeable people can *read* the arguments we disagreeable people are having and then still come to more accurate viewpoints. :)

Whenyou:

That has indeed been The Strat for me :)

Stephen Saperstein Frug:

I think this doesn't get at the best version of what people who believe in argument believe in. To get at a closer version, imagine it as "conversation in community is the best way to get at the truth" (as opposed to, say, sitting alone in a room and thinking by yourself, which is a (perhaps surprisingly) common method in the history of philosophy). I think you are focusing on the hostility of the word "argument", whereas the important thing is "people have different perspectives and if you let them interact they develop new and better ones".

(And, FWIW, even when you're talking about a narrow version of "arguing", "arguing online" is to "arguing" as eating McDonald's is to fine dining. If there's anyone who thinks that *arguing online* is a way to get at the truth, I've never met them.)
