“A monkey brain good decision is based on knowledge and not on numbers.”
When I was at West Point, there was quite a focus on the combat arms (infantry, tankers, field artillery, air defense, etc.). Every running cadence I can remember singing, or, rather, gasping through, had to do with the combat arms; there weren't any songs about the glories of, say, the Finance Corps. Given all of that focus, it was easy to think, as I often did, that most people in the military were in the combat arms. In reality, there were about 10 support soldiers, troops who weren't in the combat arms, for every one person in a combat arms role.
This confusion of useless or uncorrelated data (in this case, the ratio of running cadences about the combat arms versus the other branches) with actual percentages and rates is a common fallacy, often called base rate neglect, that Monkey Brain falls for. When there are several possible outcomes, each with its own probability, we can perform what is called Bayesian analysis to determine the actual likelihood of an event happening.
However, Bayesian analysis sounds tough to Monkey Brain; it requires a lot of math, usually done in your head. That's work, at least according to Monkey Brain. As a result, he avoids the work and looks for shortcuts, using heuristics (rules of thumb) to approximate the answer that a proper Bayesian calculation would give.
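To see what that calculation actually looks like, here is a minimal sketch of Bayes' rule in Python. The numbers are hypothetical, chosen to mirror the cadence story above, not taken from any real data:

```python
# A minimal sketch of Bayes' rule; the numbers below are made up for illustration.
def bayes(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Posterior P(hypothesis | evidence) via Bayes' theorem."""
    numerator = p_evidence_given_h * prior
    denominator = numerator + p_evidence_given_not_h * (1 - prior)
    return numerator / denominator

# Hypothetical: 1 in 11 soldiers is in the combat arms (the 10:1 support
# ratio above), and 95% of cadences celebrate the combat arms no matter
# who is singing them. Because the evidence is equally likely either way,
# it tells you nothing: the posterior equals the prior.
print(round(bayes(1 / 11, 0.95, 0.95), 2))  # 0.09
```

The point of the sketch: evidence that is equally common under every hypothesis, like those cadences, should not move your estimate at all, which is exactly what Monkey Brain's shortcut gets wrong.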
This would be fine if our heuristics gave us outcomes reasonably close to the right answer. The problem, as a series of experiments conducted in 1973 by the psychologists Daniel Kahneman (who later won the Nobel Prize in economics) and Amos Tversky showed, is that there are times when we use useless information as an excuse to throw relevant information out the window and come to our own, inaccurate conclusions.
One of their experiments asked students to predict the graduate school field of a fictional student based on a personality sketch written during his senior year of high school:
Tom W. is of high intelligence, although lacking in true creativity. He has a need for order and clarity, and for neat and tidy systems in which every detail finds its appropriate place. His writing is rather dull and mechanical, occasionally enlivened by somewhat corny puns and by flashes of imagination of the sci-fi type. He has a strong drive for competence. He seems to have little feel and little sympathy for other people and does not enjoy interacting with others. Self-centered, he nonetheless has a deep moral sense.
Monkey Brain is probably jumping up and down raising his hand, just waiting to be called on.
What Do You Think Tom Studied, and Why You’re Probably Wrong
Monkey Brain probably thought that Tom was an engineering or computer science student; yet a far higher percentage of graduate students at that time were studying humanities or education, so the base rates alone made it more likely that Tom was in one of those fields than in engineering.
Why did we draw the wrong conclusion?
If you think of a stereotypical engineer, the description of Tom seems pretty apt. After all, there was a reason that all of us on the high school math team were called “math nerds,” right? However, we're making the wrong inference when we come to that conclusion. Nothing definitive says that, for example, people whose writing is dull and who aren't great at interacting with others have a higher propensity to become engineers. We just make the inference and assume that it is so. As a result, Monkey Brain confuses a vivid stereotype with the actual rates and numbers.
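To see how base rates can outweigh even a very "engineer-ish" sketch, here is a quick calculation with hypothetical numbers (none of these figures come from the actual Kahneman and Tversky study):

```python
# Hypothetical base rates and likelihoods, invented for illustration.
fields = {
    # field: (share of graduate students, P(a Tom-like sketch | field))
    "engineering/CS":       (0.03, 0.80),
    "humanities/education": (0.20, 0.20),
}

# Unnormalized posterior: P(field | sketch) is proportional to
# P(sketch | field) * P(field).
scores = {f: prior * lik for f, (prior, lik) in fields.items()}
total = sum(scores.values())
for field, score in scores.items():
    print(f"{field}: {score / total:.1%}")
```

Even though the sketch is assumed to be four times as likely to describe an engineer, the base rates flip the answer: with these numbers, humanities/education comes out at 62.5% versus 37.5% for engineering.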
Where else in our financial lives are we likely to suffer from the same fates?
- Remembering the winning investments and forgetting that one big loss. There are other psychological biases at work here, such as loss aversion and recency bias, but we're much more likely to remember the winning trades we made, no matter how small the profit we took off the table, while discounting the losses, even if they were huge. For example, if we invested $1,000 in a winning trade that netted us a $2,000 profit while investing $10,000 in a losing trade that lost everything, we'll remember the 200% ROI trade, compare it to the -100% ROI trade, and conclude that we're up 100% and are great investors, even though the account is actually down $8,000.
- Not getting enough insurance because, after all, tornadoes only hit mobile home parks. Getting insurance is an admission that very negative life events could happen to us, and that we need to prepare to deal with them in the unlikely event that they do. However, having to consider things like dying young, becoming disabled, or suffering a significant property loss isn't very pleasant. As a result, we latch onto unrelated information (e.g., television news coverage always seems to show mobile home parks that got hit by tornadoes) and draw unreasonable conclusions (e.g., I'm not in a mobile home park; therefore, a tornado would never hit me, despite living in Oklahoma…).
- Extrapolating from a lucky shot. Whenever we get lucky at something, Monkey Brain likes to take the credit. He will tell you that it was all skill and that you can do it again! He inflates your ego and makes you overconfident, which makes you more prone to taking unnecessary risks. In reality, the most likely outcome the next time you try is what is called reversion to the mean: your subsequent results will probably look a lot more like your average results.
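The two-trade example from the first bullet above can be sketched out in a few lines, showing how Monkey Brain's percentage math diverges from what the account statement says:

```python
# The two trades from the example above: dollars in, dollars out.
trades = [
    {"invested": 1_000, "proceeds": 3_000},   # +$2,000 profit -> +200% ROI
    {"invested": 10_000, "proceeds": 0},      # lost everything -> -100% ROI
]

rois = [(t["proceeds"] - t["invested"]) / t["invested"] for t in trades]

# Monkey Brain's math: net the percentages against each other and feel great.
print(f"Monkey Brain's ROI: {sum(rois):.0%}")  # prints "Monkey Brain's ROI: 100%"

# The account's math: total dollars out versus total dollars in.
invested = sum(t["invested"] for t in trades)
proceeds = sum(t["proceeds"] for t in trades)
print(f"Actual ROI: {(proceeds - invested) / invested:.1%}")  # about -72.7%
```

Percentages can't simply be added across trades of different sizes; only the dollar totals tell you how the account actually did.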
So, how do we stop Monkey Brain from running us off of a cliff with his leaps of (il)logic?
- Ask if the information we're using to draw conclusions is truly representative. In the example of Tom from the experiment above, a couple of questions could have helped the students separate the relevant from the irrelevant. For example: what is the most common field of study for graduate students? Are these personality traits actually better at predicting a course of study than any other piece of information? In other words, ask what truly determines an outcome rather than letting Monkey Brain take a stab at the key factors.
- Ask if the situation has truly changed to create a new baseline. Did you do something that had a meaningful impact on the outcome you saw, or was the outcome an outlier? Working backwards from the result can help you determine its cause: luck or skill.
- Value cost average. Chances are pretty good that you’re not going to beat the market, so don’t let Monkey Brain convince you that you have some sort of knowledge of stock investing that nobody else does just because you heard a guy on the bus talking about his cousin’s stepmother striking it big in pork snout futures.
- Get appropriate amounts of insurance. Chances are pretty good that outcomes against which you insure yourself won’t happen, but chances are also pretty good that if you don’t have appropriate amounts of insurance, you’ll be left with your pants down and your finances in a giant hole.
I may have chosen a combat arm (whoo tanks!) when I graduated from West Point, but I was by no means in the majority of students. Most of them chose non-combat arms branches, despite Monkey Brain’s insistence that the world revolved around us tankers.
Have you ever fallen for the trap of using the wrong information to draw conclusions about the likelihood of events happening? Tell us about it in the comments below!