Many people have argued, however, that overconfidence can and should diminish if individuals are exposed to objective performance evaluation data. Is that true? Well, a new paper by Patrick Heck, Daniel Benjamin, Daniel Simons, and Christopher Chabris (published in Psychological Science) questions that conventional wisdom. They studied more than 3,000 tournament chess players from 22 countries. In chess, each player carries an Elo rating that accurately reflects their probability of winning a given game. In short, chess players have access to objective, accurate performance data. Yet overconfidence persists even in the face of cold, hard facts! The scholars report:
"On average, participants asserted their ability was 89 Elo rating points higher than their observed ratings indicated—expecting to outscore an equally-rated opponent by 2:1. One year later, only 11.3% of overconfident players achieved their asserted ability rating. Low-rated players overestimated their skill the most and top-rated players were calibrated. Patterns consistent with overconfidence emerged in every sociodemographic subgroup we studied. We conclude that overconfidence persists in tournament chess, a real-world information environment that should be inhospitable to it."
Hubris gets the best of us at times, and it certainly affects business leaders in many situations. My conclusion from this study is that we can't simply expect good outcome measures to mitigate overconfidence bias. Pointing to the facts is not enough. People's emotions matter, and their identity shapes how they make sense of objective performance data. As we give feedback or evaluate performance, we need to recognize that distorted perceptions of self-efficacy may not go away just because we point to the numbers. We have to appeal to people in ways that go beyond the data if we wish to help them reset their self-evaluations and improve based on our feedback.
