“To err is human: to forgive, divine.”
How many of you have used a GPS to get somewhere when you were driving? I am a big fan of Waze (we even call her “Wendy” when she gives us directions), the application that is supposed to account for current traffic conditions and route you around congestion so that you reach your destination fastest.
Wendy Waze is not perfect, though. Occasionally, she either recommends a very circuitous path, or she recommends turns that are just plain wrong.
The first time my wife was in the car with me when Wendy Waze hit a snafu, she (my wife, not Wendy Waze) showed deep skepticism.
“Are you sure this is right?” she asked.
“Um…” I replied, having flushed all land navigation skills I learned in the Army down the mental toilet as soon as GPSes arrived on the scene.
The next time we went somewhere, my wife immediately assumed that Wendy Waze was just plain wrong. I, being stubborn (some may say pig-headed), refused to accept human help – at the risk of the harmony of my relationship – and plowed on with the directions my phone was spouting off at me, much to my wife’s chagrin.
Despite our now having used a GPS for several years, my wife remains much more skeptical of the probability of our arriving at a given location at the expected time, even though, the vast majority of the time, we do arrive in one piece at or about the time the GPS algorithms predicted.
However, let’s contrast GPS success rates to two other cases:
- Army second lieutenants. If you were in the military, you completely get this reference. There’s a reason that second lieutenants are assumed, at any given point in time when land navigation is required, to be lost. For my civilian readers, trust me. There’s a joke in here.
- The Amazing Race. We discovered this show a couple of years ago, and have been slowly making our way through the seasons. We just finished season 7, the one where Rob and Amber from Survivor competed in it. The premise of the show is that a group of two-person teams start off at some place in the world, usually in the United States, and are given instructions to find something where they will get their next clue. Along the way, they have to complete challenges to get clues, but, essentially, it’s an around-the-world treasure hunt. There is a lot of driving involved, and almost every episode shows a team with one person driving and one person sitting in the back puzzling over a map and, invariably, making a wrong turn. If only those teams had GPSes, they’d avoid many of their navigational errors.
Most of us are willing to give computers a try, whether it’s at navigation, beating Garry Kasparov in chess, or taking on Ken Jennings in Jeopardy!; however, the moment an algorithm shows fallibility, our confidence is kneecapped.
This distrust of algorithms is called, unoriginally, algorithm aversion, and it was studied by the University of Pennsylvania’s Berkeley Dietvorst, Dr. Joseph Simmons, and Dr. Cade Massey. They compared human forecasters with algorithmic forecasters on two tasks: predicting student performance based on admissions criteria, and ranking states by the number of airline passengers based on a few pieces of data.
Is Watson So Wrong?
What they were studying was under what conditions people expected the algorithms to perform worse than the human forecasters despite being shown overwhelming evidence that the algorithms were expected to outperform the human forecasters.
Their studies showed that when people were exposed to neither the algorithms nor the humans before predicting which would perform better, they generally expected the algorithms to perform better. However, once participants saw an algorithm make an error, their confidence in the algorithm’s performance dropped by 10% to 39%, even though the human errors they were shown were 15% to 97% greater in magnitude.
The fundamental reasoning behind this has to do with behavioral biases. First off, people expected the superhuman from the supercomputers (har!). They expected algorithms to perform nearly twice as well as human forecasters, which, depending on the circumstances, may be extremely overoptimistic. Secondly, once they had built this mental model of the infallibility of an algorithm, they had very little forgiveness for the algorithm’s shortcomings. The mental model was binary: either the algorithm was perfect, or it could do no right. There was no in-between.
Furthermore, the participants in the experiment a) overestimated humans’ ability to learn from their mistakes, and b) failed to account for the fact that algorithms can also adapt to conditions and learn from errors. Algorithms, moreover, don’t fall prey to the same behavioral biases that prevent humans from learning from errors as quickly as they could.
Algorithms and Your Investing
Over the past couple of years, the financial services industry has seen the proliferation of what I call “roboinvestors.” These are companies that will take your money, invest it in low-cost investments, and then manage capital gains and losses in a tax-efficient manner. While tax-loss harvesting really only benefits higher-net-worth individuals, the fees that most of these roboinvestors charge are actually pretty reasonable. They also automatically rebalance your portfolio to keep your asset allocation in line with the stock/bond mix you need for your age and risk profile.
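The automatic rebalancing these services perform is conceptually simple: compare the current value of each holding against its target weight, then buy or sell the difference. Here is a minimal sketch of that idea; the asset names, prices, and 60/40 target below are hypothetical, and real roboinvestors layer on tax-lot selection, drift thresholds, and trading costs.

```python
def rebalance_trades(holdings, prices, target_weights):
    """Compute the dollar trades that restore a target allocation.

    holdings: current shares held per asset
    prices: current price per share per asset
    target_weights: desired fraction of the portfolio per asset (sums to 1)
    Returns dollars to buy (positive) or sell (negative) per asset.
    """
    # Current market value of each position, and the portfolio total.
    values = {asset: holdings[asset] * prices[asset] for asset in holdings}
    total = sum(values.values())
    # Trade = target dollar value minus current dollar value.
    return {asset: target_weights[asset] * total - values[asset]
            for asset in holdings}

# A 60/40 stock/bond target on a portfolio that has drifted stock-heavy:
trades = rebalance_trades(
    holdings={"stocks": 100, "bonds": 50},
    prices={"stocks": 40.0, "bonds": 20.0},
    target_weights={"stocks": 0.60, "bonds": 0.40},
)
# Portfolio is $4,000 stocks + $1,000 bonds = $5,000 total, so the
# 60/40 target means selling $1,000 of stocks and buying $1,000 of bonds.
```

The point of the sketch is that there is no judgment call anywhere in it: the same arithmetic runs whether the market is euphoric or panicking, which is exactly the discipline most human investors lack.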
I’m generally a fan of these roboinvestors. I’ve referred several clients over who just didn’t want to deal with the process of managing their own investments and for whom the charges that they paid to the roboinvestors wouldn’t affect their retirement planning outcomes one way or the other.
Yet, for all the potential good that these roboinvestors could do, given the wide world of how much money is available for investing, these roboinvestors have a fraction of a fraction of the market share. They’re backed by hundreds of millions of dollars of venture capital money, but can’t seem to scratch the surface of the available market share that they should probably be claiming.
Because people don’t trust algorithms. They would rather trust a human being.
Human beings are prone to error. They are prone to the illusion of control, meaning that, while they’re less capable than, say, an algorithm at performing a certain task, they want to feel like they’re in control and can do it better (it’s also why driving doesn’t scare you, but flying through turbulence scares the bejeezus out of you). Hand your money over to an algorithm, even if the algorithm can almost certainly do it better than you can, and the moment you see an error, your confidence is shattered. This is, after all, your money and your future we’re talking about. This isn’t predicting the 2012 elections a la Nate Silver or trying to guess who’s going to win each game of the NCAA basketball tournament.
But, maybe if we would take our emotions out of the equation, we might be more prone to trusting the algorithms. This isn’t an Asimov novel. Robots are not going to take over our lives and force us to become slaves.
This is removing human error from investing.
Side note: Most of the media who cover the financial services industry call these roboinvestors by a different term: roboadvisors. That term is a misnomer. None of the roboinvestors will tell you how much money you need to invest, how much you need to set aside for college, how much to put into your emergency fund, how much life insurance you need, how much disability insurance you need, or whether you’re on track to retire at the age you want and spend what you think you’ll spend in retirement. That’s what advisors do. Therefore, they’re misnamed. They’re roboinvestors, because they invest money algorithmically. They don’t make financial plans algorithmically.