There are a lot of terms in English (and probably in every language) that are used to mean closely related, yet importantly different, things. This can be annoying – language is arguably one of the most important tools humans ever developed, if not the most important, because it’s what lets us communicate with each other and transfer our thoughts and experiences. Ambiguity is a crack in that tool that results in people not understanding each other. When the terms are close enough, people can go entire conversations without realizing they are talking about different things. Worse, using the same term can confuse a person’s own internal thinking, since they haven’t clearly defined it to themselves. Even worse, entire political arguments can arise from different people holding different meanings of the same term. One of those terms is “wrong choice”.
Let’s start easy with a quick hypothetical. I make a bet with you that the coin I’m about to flip will land on heads. If I’m right, you give me $10; if I’m wrong and it lands on tails, I’ll give you $1000. Unless you have strong feelings against gambling or you really can’t afford to lose that $10, the “right choice” here is to accept my bet. You have a 50% chance of winning $1000 and a 50% chance of losing $10 – with an expected value of $495, it’d be silly of you not to take it.
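The arithmetic behind that $495 figure can be checked with a quick sketch (the dollar amounts are just the ones from the hypothetical above):

```python
# Expected value of accepting the coin-flip bet:
# 50% chance of winning $1000, 50% chance of losing $10.
p_win = 0.5
win_amount = 1000
lose_amount = 10

expected_value = p_win * win_amount - (1 - p_win) * lose_amount
print(expected_value)  # 495.0
```

Any bet with a positive expected value is worth taking in this sense, even though half the time you walk away $10 poorer.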
If the coin landed heads, you might say that in hindsight you made the wrong decision – based on the outcome, the correct thing would have been to pass. But it was absolutely the correct decision until the moment the flip occurred.
This holds true in other casino games like blackjack and poker. The players who are masters at these games know to differentiate between decisions which are correct in outcome and decisions which are correct in choice. There are probability tables for these games that outline which decisions are correct in choice, and someone who is solely interested in maximizing profit should always follow what they say. They may feel disheartened by all the times the tables lead them astray – the hands they folded that they would actually have won. That sting exists because, at the end of the day, what we really care about is the outcome.
Casino games are an easy example because in the long run, a person who follows what is correct in choice will end up being correct in outcome. In many, many individual instances, the two will differ – yet in the majority of cases, they will be the same. Since players can play over and over and over, the benefits of playing according to the correct probabilistic models become evident.
But in real life, you don’t get that many tries – you often just get one.
Here’s another hypothetical. You and your friend want to throw a barbecue one Saturday afternoon and you invite a bunch of people ahead of time. On the night before, the forecast shows an 80% chance of rain on Saturday but only a 20% chance on Sunday. You want to reschedule for Sunday but your friend says you worry too much and you should just keep the Saturday date.
Now if you push hard and reschedule, and it ends up not raining on Saturday, you will feel cheated. Indeed, one wouldn’t blame you for being downright angry if it ended up raining on Sunday instead. Whereas if you give in and it ends up raining on Saturday, your friend takes the blame – but if it stays dry, they may gloat over how you worried too much and how it was a good thing they talked you down from rescheduling.
Both of these happen because again, we judge whether our choices were correct in outcome. If your goal is to not get wet and your friend group is largely indifferent to which day the barbecue happens, you are correct to want to reschedule and your friend is incorrect to want to stay the course. If you were going to have 100 barbecues like this and always rescheduled, you would end up having 20 bad barbecues; but if you always did nothing, you would end up having 80 of them.
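The 100-barbecues framing is easy to verify by simulation. This is a hedged sketch, not anything from the original: it just samples rain at the stated probabilities of precipitation and counts how often each strategy gets you drenched.

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

def bad_barbecue_rate(p_rain, trials=100_000):
    """Fraction of barbecues rained out when you always pick
    a day with the given probability of precipitation."""
    rained_out = sum(random.random() < p_rain for _ in range(trials))
    return rained_out / trials

# Always reschedule to the 20%-rain Sunday vs. always stay on the 80%-rain Saturday.
print(bad_barbecue_rate(0.20))  # close to 0.20
print(bad_barbecue_rate(0.80))  # close to 0.80
```

Over many repetitions the reschedule strategy ruins roughly a fifth of your barbecues and the stay-the-course strategy roughly four-fifths – but any single Saturday can still come up sunny.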
Of course, this is hard to remember when you are getting drenched on a day with a 20% probability of precipitation (P.O.P.), or looking outside on a clear sunny 80% P.O.P. day wishing you hadn’t rescheduled.
It bothered me a lot after the 2016 election when people started saying that Nate Silver was wrong. The man had repeatedly called out that Trump had a significant chance of winning, and likened the idea of calling Hillary’s win a foregone conclusion to saying that not dying from a game of Russian Roulette was a foregone conclusion. Indeed, after he got called a wizard for his “accurate predictions” in the 2012 election, Nate called out in his book that “getting every state right was a stroke of luck” and that his “chance of going fifty-for-fifty were only about 20 percent”.
In the end, Nate and his team at FiveThirtyEight gave Hillary a 71.4% chance of winning the 2016 election. She lost and he got heat for “being wrong”. This makes exactly as much sense as someone getting heat for saying there’s a 66.7% chance of a die roll showing a 3 or higher, and then having a 1 come up.
To be fair, I also can’t claim that he was right. Nate can’t know if his estimated probability was right. It’s not as simple as a coin toss, where we intuitively know that the odds of heads vs. tails are 50-50. It’s not like weather forecasting, which we know through historical data to be quite accurate. The National Weather Service has hundreds of locations each day that it can test against. The FiveThirtyEight team gets one election to test their model on every 4 years. Presumably, they make updates in between, which means they don’t even get to test the same model repeatedly. This leads one to ask: how do we know if the model is even right? If they always give an outcome a 30% chance in an election, one could argue that no one can ever prove them wrong and that they are failing the criterion of falsifiability.
There’s no good answer to this. If I am overly trusting of the FiveThirtyEight team, it’s because their methodology makes sense, because they have shown historical success via predictions, and because they seem comfortable discussing their uncertainty and mistakes. Probabilistic statements inherently fail the falsifiability criterion – indeed, this is one of the criticisms of the criterion. So this is as good as it’s going to get.
We often make statements like “I am 80% certain” without thinking much about what it means. We have an intuitive sense of what it means but if it were an honest statement, it would imply the following:
- If I make 100 such statements, I would expect around 80 of them to be true and 20 of them to be false.
- If someone offered to give me some amount (say $10) if I was right but I had to give them more than four times that amount (> $40) if I was wrong, that would be a bad bet to take. But if I had to give them less than four times that amount if I was wrong, it would be a good bet to take.
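The four-times figure in the second bullet falls out of a simple break-even calculation. This sketch generalizes it: for a stated confidence p and a payout when you’re right, it solves p · stake − (1 − p) · loss = 0 for the loss at which the bet stops being worth taking.

```python
def breakeven_loss(p_confident, stake=10):
    """Largest amount you could afford to pay when wrong before the
    bet turns negative, given stated confidence p_confident and a
    payout of `stake` when right:
        p * stake - (1 - p) * loss = 0  =>  loss = p * stake / (1 - p)
    """
    return p_confident * stake / (1 - p_confident)

print(breakeven_loss(0.80))  # roughly 40: "80% certain" means 4-to-1 odds
print(breakeven_loss(0.99))  # roughly 990: "99% certain" means 99-to-1 odds
```

Note how fast the stakes grow: someone who honestly claims 99% certainty is implicitly offering 99-to-1 odds against themselves.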
In reality, people who say they are 99% confident are wrong 20% of the time, and people who say there is only a 1 in a million chance they are wrong are wrong 5% of the time. This highlights the problem of overconfidence but it also highlights a lack of awareness on what these statements practically mean.
Some people (like Scott Alexander) put genuine effort into calibrating themselves, which I applaud. I’ve never done this myself, but I would love to live in a world where it’s the norm for politicians and pundits and journalists to do this, so the public always has a historical record we can use to see who is trustworthy and who isn’t. This would also save us from being drawn in by the overconfidence that makes these personalities so inviting.
Scott Adams, who created the Dilbert comic strips, predicted that Trump’s odds of winning were 98%. He had no real methodology other than that he thought the Democrats were bullies and that Hillary was a candidate who was turning Americans against each other while Trump was seeking to unite America. He was correct in outcome and has drawn praise. Yet there is no question that he was completely incorrect in choice to say this.
As long as we confuse being correct in outcome with being correct in choice, people who make bad predictions like Scott Adams will be praised whenever they get one right, while people like Nate Silver or Scott Alexander who diligently state their uncertainties and evaluate their accuracy will be wrongly criticized.