Thomas Bayes was an 18th-century English statistician, philosopher, and minister. His most famous work was “An Essay towards Solving a Problem in the Doctrine of Chances,” which he never published himself. Instead, it was introduced to the Royal Society two years after his death by his friend Richard Price.

The essay contained the seeds of what today is called Bayes’ theorem, which *“describes the probability of an event, based on prior knowledge of conditions that might be related to the event.”*

If you’re not into math — don’t worry. You don’t have to understand exactly how probability calculations work to benefit from Bayesian thinking. You just have to grasp the intuitions behind the math, which is pretty easy to do. Consider, for example, the following news headline:

### “Violent Crime Doubles”

If you read that in your local newspaper, you might get worried that your chances of being assaulted have increased dramatically. But is that really true? To find out, we’ll use Bayesian thinking to put this new piece of information into the context of your prior knowledge.

Let’s say that violent crime in your city has been steadily declining for decades. You know that the risk of being assaulted last year was 1 in 10,000. Since then, according to the newspaper article, violent crime has doubled. That means the risk of assault is now 2 in 10,000. In other words, the risk of getting assaulted is no longer 0.01%, but 0.02%.
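The arithmetic above is simple enough to check in a few lines of Python:

```python
# Risk of assault before and after the reported doubling.
prior_risk = 1 / 10_000    # last year's risk: 1 in 10,000
new_risk = 2 * prior_risk  # "violent crime doubles"

print(f"Before: {prior_risk:.2%}")  # 0.01%
print(f"After:  {new_risk:.2%}")    # 0.02%
```

Doubling a tiny number still gives a tiny number, which is exactly the point of the example.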

So, the headline actually shouldn’t make you too worried. Sure, the probability of getting assaulted has increased. It has indeed doubled. But it’s still very unlikely to happen. And that’s difficult to discern unless you factor in prior information about the situation.

This example illustrates the big idea behind Bayes’ theorem: that we should continuously update our probability estimates as we come across new information. And that’s very different from how we typically approach the world. Usually, we tend to either dismiss new evidence or embrace it as though nothing else matters.

As an example of that, let’s say that you consider yourself a good driver. But then, one day, you get into a car accident. In that situation, most people will either protect their belief (“It was the other guy’s fault”) or replace it altogether (“I guess I’m a terrible driver”).

By instead using Bayesian thinking, you look at the situation in the context of your prior experience. Sure, the car accident is evidence against your theory that you’re a good driver. But that doesn’t mean you have to stubbornly protect or immediately replace that belief. It just means you should be a little less confident that it’s correct.

Instead of being 100% or 0% sure that your theory is correct, you assign it a more reasonable probability. If you’ve been driving for 10 years without any prior accidents, perhaps you can be 90% sure that you’re a good driver. With that estimate in mind, you don’t have to avoid driving, but you might want to be a little more cautious than you previously were.
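The kind of update described above can be sketched with Bayes’ theorem directly. The numbers here are invented for illustration: a prior of 90% that you’re a good driver, and assumed chances of having an accident in a given year for good and bad drivers.

```python
# Hypothetical numbers, chosen only to illustrate the update.
p_good = 0.90            # prior: 90% sure you're a good driver
p_acc_given_good = 0.05  # assumed: good drivers crash 5% of years
p_acc_given_bad = 0.10   # assumed: bad drivers crash 10% of years

# Total probability of observing an accident under either hypothesis.
p_acc = p_acc_given_good * p_good + p_acc_given_bad * (1 - p_good)

# Bayes' theorem: P(good driver | accident)
posterior = p_acc_given_good * p_good / p_acc
print(f"Updated confidence: {posterior:.0%}")  # about 82%
```

The accident nudges your confidence down from 90% to roughly 82%, rather than forcing you to keep it at 100% or drop it to 0%.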

Reasoning this way makes you much more aware that your beliefs are greyscale rather than black-and-white. It allows you to continually update the level of confidence in your own theories about the world. And that helps you make more accurate predictions, improve your decisions, and get better outcomes.

I’m 99% sure of it. 🙂