Nearly everyone is uncertain about some things [citation needed]. For example, you might be uncertain whether the Qin dynasty occurred before the Ming dynasty, what exactly it means that E equals M C squared, whether the coin you just flipped is going to come up heads, when the next book in A Song of Ice and Fire is going to come out, or what dark matter is.
When we talk about uncertainty, it’s important to distinguish between two kinds. These are sometimes called risk and Knightian uncertainty, but “risk” is kind of a confusing word here, so I’m going to call them regular uncertainty and Knightian uncertainty.
Regular uncertainty is uncertainty where you understand what you’re uncertain about. For example, think about the coin flip. You’re uncertain whether the coin is going to come up heads or tails. But you’re nearly one hundred percent confident that the coin is going to come up heads or tails. In fact, you know that it’s fifty percent likely that the coin comes up heads and fifty percent likely that the coin comes up tails (it turns out you can’t actually weight a coin to bias a flip).
Knightian uncertainty is uncertainty where you don’t understand what you’re uncertain about. Think about debates about consciousness. You can see that questions like “what is consciousness? how can we tell whether a being is conscious?” are very different from questions about coin flips. You can’t say “well, there’s a fifty percent chance that dogs are conscious” with the same confidence that you say “well, there’s a fifty percent chance that the coin flip will come up heads.” If asked to give a probability, you’ll probably hem and haw and eventually give a number that feels kind of arbitrary. You don’t know whether your concepts or your approach to the problem are even sort of heading in the right direction. You don’t know what all the possible answers even look like: maybe the true definition of consciousness is something we haven’t even thought of.
Of course, in the real world, it’s very rare to have a situation that only has regular uncertainty or only has Knightian uncertainty. When you’re planning your wedding, you mostly have regular uncertainty about whether the caterers will show up on time. But you have some Knightian uncertainty: maybe there was a horrible miscommunication with the caterers because of an expectation you had no idea about, because you’ve never planned a wedding before. Conversely, if you’re treating an undiagnosed chronic illness, you have some regular uncertainty: you might know that about forty percent of your days are bad, so you can wake up in the morning expecting a forty percent chance that today is bad. But most of your uncertainty is Knightian: you don’t know whether this medication will work, or what factors imply that this medication will or won’t work, or what your diagnosis is, or whether your diagnosis is even known to medical science.
Why is this distinction important? Because we should think about situations with a lot of Knightian uncertainty differently from situations with a lot of regular uncertainty. Just as one example, in a situation with a lot of Knightian uncertainty, you’re likely to be blindsided by something you totally didn’t think about or that wasn’t even possible in your best understanding of the world. So if you’re in a situation with a lot of Knightian uncertainty, you might want to:
- Take actions that are probably good even if you’re wrong about everything (e.g. if you’re chronically ill, trying to get enough sleep might help and probably won’t hurt).
- Take actions that make you or other people more likely to notice if you’re wrong (e.g. telling the truth, pointing out flaws in your own arguments).
- Be ready to deal with unexpected bad things (e.g. saving up money, having a first aid kit).
- Think about things and learn so that more of your uncertainty is regular uncertainty (e.g. talk to someone who’s planned a wedding before about how caterers work).
Btw, in decision theory the formal treatment of Knightian uncertainty is known as "imprecise probability", and there are various approaches (Dempster-Shafer theory, credal sets, previsions, etc.). And of course the topic is close to my heart, since a lot of my recent work is applying this to AI theory :)
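To make the credal-set idea concrete, here's a minimal sketch in Python. None of this comes from a real imprecise-probability library; the `CredalSet` class and the toy numbers are made up for illustration. The point is just that instead of carrying one probability, you carry a set of plausible distributions and report lower and upper bounds.

```python
# Illustrative sketch of imprecise probability via a credal set:
# instead of one distribution, keep a *set* of plausible distributions
# and act on the worst-case/best-case bounds. (Hypothetical class, toy numbers.)

class CredalSet:
    def __init__(self, distributions):
        # Each distribution maps outcomes to probabilities summing to 1.
        self.distributions = distributions

    def lower_prob(self, event):
        """Worst-case probability of `event` across the set."""
        return min(sum(p for outcome, p in d.items() if outcome in event)
                   for d in self.distributions)

    def upper_prob(self, event):
        """Best-case probability of `event` across the set."""
        return max(sum(p for outcome, p in d.items() if outcome in event)
                   for d in self.distributions)

# Regular uncertainty: the coin flip is a credal set with a single member.
coin = CredalSet([{"heads": 0.5, "tails": 0.5}])
print(coin.lower_prob({"heads"}), coin.upper_prob({"heads"}))  # 0.5 0.5

# Knightian uncertainty: for "is this dog conscious?" many distributions
# seem plausible, so the bounds are wide and no single number is honest.
dog = CredalSet([{"conscious": p, "not conscious": 1 - p}
                 for p in (0.05, 0.2, 0.5, 0.8)])
print(dog.lower_prob({"conscious"}), dog.upper_prob({"conscious"}))  # 0.05 0.8
```

Notice how the coin collapses to a single number while the consciousness question stays wide open: the gap between the lower and upper bound is one (crude) way to quantify how Knightian your uncertainty is.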
The way this finally clicked for me was: Sure, Bayesian probability theory is the one true way to do probability. But you can't actually implement it.
In particular, problems I've experienced are:
- I'm sometimes not sure about my calibration in new domains
- Sometimes something happens that I couldn't have predicted beforehand (particularly if it's very specific), and it's not clear what the Bayesian update should be. Note that I'm talking about "something took me completely by surprise" rather than "something happened to which I assigned a low probability".
- I can't actually compute how many bits of evidence new data carries. So for instance I get some new information, and I don't just instantaneously know that I was at 12.345% and now I'm at 54.321%. I have to think about it (there's a code sketch of this log-odds arithmetic after this list). But before I've thought about it I'm sometimes like a deer in the headlights, and my probability might be "Aaaah, I don't know."
- Sometimes I'll be in an uncertain situation, and yeah, I'm uncertain, but I'd still offer a $10k bet on it. Or I'd offer a smaller bet with a spread (e.g., I'd be willing to bet $100 at 1:99 in favor but 5:95 against). But sometimes I really am just very un-eager to bet.
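For what it's worth, the bits-of-evidence arithmetic itself is simple once you know the likelihood ratio; the hard part is that you usually don't. Here's a sketch (all numbers invented for illustration) of the log-odds bookkeeping an ideal Bayesian would do instantly:

```python
import math

def log_odds_bits(p):
    """A probability expressed as log-odds, in bits."""
    return math.log2(p / (1 - p))

def update(prior, evidence_bits):
    """Bayesian update in log-odds form: add the evidence's
    log2 likelihood ratio to the prior log-odds. Toy numbers only."""
    posterior_bits = log_odds_bits(prior) + evidence_bits
    return 1 / (1 + 2 ** -posterior_bits)  # convert back to a probability

# Moving from 12.345% to 54.321% corresponds to a definite amount of evidence:
bits = log_odds_bits(0.54321) - log_odds_bits(0.12345)
print(f"{bits:.2f} bits of evidence")   # ~3.08 bits

# An ideal Bayesian applies this instantly; the deer-in-the-headlights
# problem above is that real humans have to stop and think first.
print(update(0.12345, bits))            # recovers ~0.54321
```

The betting spread in the last bullet has a natural reading here too: being willing to bet $100 at 1:99 in favor but 5:95 against is acting as if your probability lies somewhere between 1% and 5%, which is exactly the credal-set picture from earlier.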
Some links to dig into this:
- https://www.lesswrong.com/posts/vpvLqinp4FoigqvKy/reflective-bayesianism
- https://www.lesswrong.com/posts/xJyY5QkQvNJpZLJRo/radical-probabilism-1
That said, I do think that people are too eager to say that something is under "Knightian uncertainty" when they could just put up a question on Metaculus (or on a prediction market) about it.