9 Comments
May 3, 2023 · edited May 3, 2023 · Liked by Ozy Brennan

I am somewhat amused by the idea of mathematicians indulging in a bit of p-hacking when they can't quite get their proofs about Woodin cardinals and the cardinal characteristics of the continuum to come out the way they'd like. But I don't know -- perhaps applied mathematicians do publish stuff with wonky p-values, and it's not just that mathematics is hanging out with the wrong fields here?

May 4, 2023 · Liked by Ozy Brennan

The first part of the Wakanda story seemed plausible enough that I actually looked up the relevant part of Pliny's Natural History to see whether it was based on something real; the rest of the antique & medieval section of it would have seemed similarly plausible had I not confirmed that the Pliny quote was made up, & it included allusions to obscure real things (e.g. the Aethiopis, Nero's expedition up the Nile, the Ethiopian emperor's gift of giraffes) among which the similar fake ones (Pelagon of Rhodes, the 7th century Caelestian heresy) blended in. So when I looked at

> an extremely detailed review of TikTok YA novel Lightlark, which is one of the stupidest YA books to come out in a while.

on another blog I hadn't seen before, I was wondering from the beginning whether it was another metaliterary fiction. Halfway through that review, the book described seemed so implausibly bad that I was expecting some revelation along the lines of "actually GPT-3 wrote this book" or "this is a parody the reviewer made up", & even though the entire rest of the site gives no apparent indication of any part of it being fake, I would still be entirely unsurprised to find out that "Lightlark" doesn't actually exist.

I'm not sure whether you're already aware of https://justinehsmith.substack.com/archive , but it includes several pseudohistorical fictions somewhat akin to the Wakanda story (e.g. https://justinehsmith.substack.com/p/the-voynich-manuscript-a-translation or https://justinehsmith.substack.com/p/against-resistentialism ) -- along with some articles on actual history/culture in a similar style (e.g. https://justinehsmith.substack.com/p/re-entering-the-vampire-castle ).

May 6, 2023 · Liked by Ozy Brennan

I used to be a teacher, and yeah, I could have told you that learning style theory was wrong years ago. When someone says "I'm an X learner", what they actually mean is "my favorite subjects are learned in X way." And like, that's fine, but use that information correctly: direct that person towards the subjects they like or have use for; don't try to teach other subjects in ways that don't fit the subject matter.


Why does Substack think Thing of Things is written by one 'Miranda Dixon-Luinenburg' (also, they managed to print her name twice)?


> You look into the mirror and see a girl with ashen brown skin and chestnut hair. You look fine, you think. Maybe a little more grey than the others. Maybe a bit rounder in the face. *But you look like a girl*

Wait, does this mean what I think it does? :o


Regarding the link that argues AI companies should avoid hype or speeding up research: I've seen many people whose common sense I respect (such as yourself) advocate for this. However, it seems like a bad idea to me, because if successful, you are selectively removing AI researchers who are concerned about alignment from the field (or just drastically weakening their position).

Some people explicitly advocate for this self-removal; I've even seen people concerned about AI risk say they wish OpenAI and DeepMind had never been founded. This seems completely insane to me. I highly doubt it would delay the arrival of AGI by more than 5 years, and in return you are ensuring that when an AGI is developed, the people doing so have been specifically selected for not caring about whether that AGI is aligned, rather than the current situation in which the two leading AI research groups are both highly concerned about alignment. And that's not even mentioning the knock-on effects that dampen the ability of alignment researchers to do their jobs: people working on alignment will have fewer resources, less expertise, and less credibility. It seems like you get a gazillion different downsides in exchange for one upside.

And I think this same tradeoff applies to many less drastic measures. Asking companies concerned about AI risk not to publish, sure; but asking them to avoid publicity, especially companies like Anthropic and OpenAI that rely on publicity to be able to sell their products, is asking those companies to cripple themselves for very little upside.

Slowing down AI progress by regulating the field, which affects everyone, makes sense. Slowing down AI progress by selectively getting AI researchers who care about alignment to hobble themselves or remove themselves from the field entirely seems very bad. Yet I often hear people advocate for both at the same time, without a clear distinction between these two possibilities, or even go straight for option number 2!

Anyway, idk if we'll be able to come to an agreement - a lot of premises here have to be evaluated intuitively - but I'd be interested to hear what you think about this.
