## Knightian Uncertainty

Btw, in decision theory, the formal treatment of Knightian uncertainty is known as "imprecise probability", and there are various approaches (Dempster-Shafer theory, credal sets, previsions, etc.). And ofc the topic is close to my heart, since a lot of my recent work is applying this to AI theory :)

The way this finally clicked for me was: Sure, Bayesian probability theory is the one true way to do probability. But you can't actually implement it.

In particular, problems I've experienced are:

- I'm sometimes not sure about my calibration in new domains

- Sometimes something happens that I couldn't have predicted beforehand (particularly if it's very specific), and it's not clear what the Bayesian update should be. Note that I'm talking about "something took me completely by surprise" rather than "something happened to which I assigned a low probability"

- I can't actually compute how many bits of evidence new data carries. So for instance, I get some new information, and I don't just instantaneously know that I was at 12.345% and am now at 54.321%. I have to think about it. But before I've thought about it, I'm sometimes like a deer in the headlights, and my probability might be "Aaaah, I don't know."

- Sometimes I'll be in an uncertain situation, and yeah, I'm uncertain, but I'd still offer a $10k bet on it. Or I'd offer a smaller bet with a spread (e.g., I'd be willing to bet $100 at 1:99 in favor but 5:95 against). But sometimes I really am just very un-eager to bet.
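The bet-spread point above is essentially what imprecise probability formalizes: instead of one number, you carry an interval of probabilities, and you only bet when the offered price falls outside the interval. Here's a minimal sketch (the class and the exact numbers are my own illustration, not a standard library):

```python
# Minimal sketch: Knightian uncertainty as a probability interval
# (a one-dimensional credal set) instead of a single point estimate.
# Class name and thresholds are illustrative, not standard.

class IntervalBelief:
    def __init__(self, lower, upper):
        assert 0.0 <= lower <= upper <= 1.0
        self.lower = lower  # lower probability: max price to buy a $1 ticket on the event
        self.upper = upper  # upper probability: min price to sell a $1 ticket on the event

    def would_buy_at(self, price):
        # Pay `price` now for a ticket worth $1 if the event happens.
        return price <= self.lower

    def would_sell_at(self, price):
        # Accept `price` now for a ticket that costs you $1 if the event happens.
        return price >= self.upper

# "I'd bet $100 at 1:99 in favor but 5:95 against" corresponds
# roughly to the interval [0.01, 0.05]:
belief = IntervalBelief(0.01, 0.05)
print(belief.would_buy_at(0.005))  # True: below the lower bound
print(belief.would_sell_at(0.10))  # True: above the upper bound
print(belief.would_buy_at(0.03))   # False: inside the interval -- no bet either way
```

The gap between `lower` and `upper` is exactly the "un-eagerness to bet": a precise Bayesian has `lower == upper` and will take one side of any bet, while a wide interval refuses both sides over a whole range of prices.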

Some links to dig into this might be: https://www.lesswrong.com/posts/vpvLqinp4FoigqvKy/reflective-bayesianism, https://www.lesswrong.com/posts/xJyY5QkQvNJpZLJRo/radical-probabilism-1

That said, I do think that people are too eager to file something under "Knightian uncertainty" when they could just put up a question about it on Metaculus (or on a prediction market).

I think a key thing about Knightian uncertainty is that we don't necessarily expect the laws of probability to apply to it.

One law of probability is Bayes' theorem; I don't really know what not following Bayes' theorem looks like.
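For reference, the theorem itself is just a one-line computation (the prior and likelihoods here are made-up numbers for illustration):

```python
# Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E),
# where P(E) = P(E|H)*P(H) + P(E|~H)*(1 - P(H)).
# All numbers are made up for illustration.

def bayes_update(prior, likelihood, likelihood_not):
    evidence = likelihood * prior + likelihood_not * (1 - prior)
    return likelihood * prior / evidence

posterior = bayes_update(prior=0.1, likelihood=0.8, likelihood_not=0.2)
print(round(posterior, 3))  # 0.308
```

"Not following Bayes' theorem" would mean your posterior after seeing E differs from this ratio; the puzzle is that, unlike the conjunction rule below, there's no obvious snapshot of beliefs at a single moment that exhibits the violation.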

Another is, if A implies B, then

P(A) ≤ P(B)

The "conjunction fallacy" is a violation of the special case where A is "B & C". For example, B = "Linda is a bank teller" and A = "Linda is a feminist bank teller" (i.e., A = B & C, with C = "Linda is a feminist").
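The conjunction rule is easy to verify numerically: for any joint distribution over (B, C), the probability mass on B & C is a subset of the mass on B. A quick check with illustrative numbers:

```python
# P(A) <= P(B) whenever A implies B. With A = B & C, this is
# P(B & C) <= P(B), which holds for every joint distribution.
# The probabilities below are illustrative, not data.

p_joint = {  # joint distribution over (bank_teller, feminist)
    (True, True): 0.05,   # feminist bank teller
    (True, False): 0.02,  # non-feminist bank teller
    (False, True): 0.60,
    (False, False): 0.33,
}
p_b = p_joint[(True, True)] + p_joint[(True, False)]  # P(bank teller)
p_b_and_c = p_joint[(True, True)]                     # P(feminist bank teller)
assert p_b_and_c <= p_b  # holds by construction: it's a sub-sum of p_b
print(p_b_and_c, round(p_b, 2))  # 0.05 0.07
```

The fallacy in the Linda experiment is that people report judgments like P(A) = 0.1 and P(B) = 0.05, a pair of numbers no joint distribution can produce.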

nostalgebraist talks about this in his old Bayes masterpost, section 7: https://nostalgebraist.tumblr.com/post/161645122124/bayes-a-kinda-sorta-masterpost

He points out that such logical relations are often unknown, and therefore we cannot always follow this law of probability theory.