
Why Pundits Can Be Unreliable

by Fabio Todesco
Analysts, experts and TV guests live off their reputation, and this can lead them to manipulate information, an experiment conducted by Marco Ottaviani, Salvatore Nunnari and Debrah Meloso finds

Decisions by policymakers and managers often rely on predictions and forecasts provided by experts. For instance, policymakers may increase public investments if they anticipate economic stagnation, and companies may introduce new products if they anticipate sufficient demand. The predictions of future trends by experts play a crucial role in informing these decisions. Thus, the accuracy of experts is closely monitored, and forecasters with a proven track record of accurate predictions may have significant career opportunities.

The need for a good reputation, though, can lead forecasters to misreport information that might reflect negatively on their reputation for being well informed, according to a series of experiments reported by Marco Ottaviani and Salvatore Nunnari (Bocconi Department of Economics) along with Debrah Meloso (Toulouse Business School) in a forthcoming paper.

The authors designed an urn-and-balls scheme. Each ball has an outer shell and an inner core, each of which is either blue or orange; the shell is opaque, so it conceals the color of the core. There are two 10-ball urns, corresponding to the quality of the forecaster's information. In the informative urn the core perfectly matches the color of the shell for every ball, capturing a forecaster who perfectly knows the future. The uninformative urn captures a forecaster with no ability to predict the future: the color of the shell is independent of the color of the core.
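
To fix ideas, each urn can be represented as a list of (shell, core) pairs. The sketch below is a minimal Python illustration, not code from the study: the function names are ours, each urn holds ten balls, and the blue/orange core split is left as a parameter (the splits actually used in the experiment are described below).

```python
def informative_urn(n_blue_cores, n_balls=10):
    # Shell always matches core: drawing from this urn is like
    # observing the core (i.e., the future) directly.
    cores = ["blue"] * n_blue_cores + ["orange"] * (n_balls - n_blue_cores)
    return [(core, core) for core in cores]  # (shell, core) pairs

def uninformative_urn(n_blue_cores, n_balls=10):
    # Half the shells are blue and half orange; within each shell color the
    # cores follow the same overall split (assuming an even number of blue
    # cores), so shell and core are independent and the shell is pure noise.
    half = n_balls // 2
    blue_per_shell = n_blue_cores // 2
    urn = []
    for shell in ("blue", "orange"):
        cores = ["blue"] * blue_per_shell + ["orange"] * (half - blue_per_shell)
        urn += [(shell, core) for core in cores]
    return urn
```

For example, uninformative_urn(8) contains five blue and five orange shells, with its eight blue cores spread evenly across the two shell colors, so seeing a shell reveals nothing about the core behind it.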

Infographic by Weiwei Chen
The game proceeds as follows. First, a ball is drawn from either the informative or the uninformative urn with equal probability, but neither the forecaster nor the evaluator knows which urn the ball came from. Second, the forecaster sees the color of the drawn ball's outer shell (but not of its inner core) and reports it to the evaluator. The evaluator then observes the color of the inner core and assesses the probability that the forecaster observed a ball drawn from the informative urn. The forecaster is compensated according to that assessment.

Both the forecaster and the evaluator are shown the panel displayed above, which informs them of the actual distribution of blue and orange cores, the same in the two urns. Sometimes it is 6/4, i.e. six blue cores and four orange ones (simulating a situation of high uncertainty), sometimes 8/2 (simulating a less uncertain situation). The uninformative urn always contains five blue shells and five orange ones, as displayed in the figure.
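
To illustrate the evaluator's side of the game, here is a sketch of a single round (again ours, not the authors' code). It assumes the forecaster reports the shell truthfully and that the evaluator takes the report at face value as the shell color when computing, by Bayes' rule, the probability that the ball came from the informative urn.

```python
import random

def evaluator_posterior(report, core, p_blue_core):
    # Posterior probability of the informative urn, given the reported shell
    # color and the observed core, treating the report as the true shell.
    p_core = p_blue_core if core == "blue" else 1 - p_blue_core
    like_informative = p_core if report == core else 0.0   # shell equals core
    like_uninformative = 0.5 * p_core                       # shell independent of core
    # The equal 1/2 prior on the two urns cancels out.
    return like_informative / (like_informative + like_uninformative)

def one_round(p_blue_core=0.8):
    informative = random.random() < 0.5                      # urn chosen with equal probability
    core = "blue" if random.random() < p_blue_core else "orange"
    shell = core if informative else random.choice(["blue", "orange"])
    report = shell                                           # truthful forecaster
    return evaluator_posterior(report, core, p_blue_core)

print(one_round())
```

Under this naive evaluation rule a report that matches the core earns a posterior of 2/3, whatever the core split, while a mismatch earns 0. That is precisely what tempts a forecaster who has seen an unlikely shell to shade the report, as explained next.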

Intuitively, in the less uncertain condition (8/2), even after seeing an orange shell the forecaster thinks that a blue core is more likely than an orange one. Across the two urns there are seven orange shells: two in the informative urn, whose cores are orange, and five in the uninformative urn, four of which must hide a blue core because eight of that urn's ten cores are blue. So four of the seven orange shells have a blue core, and by misreporting blue the forecaster increases the probability that the report matches the core, and with it the evaluator's assessment of his ability.
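
The 4/7 figure can be checked by enumerating the twenty balls implied by the 8/2 panel (an illustrative reconstruction on our part, with the uninformative urn's eight blue cores spread evenly across its five blue and five orange shells):

```python
from fractions import Fraction

# 8/2 condition: each urn is chosen with probability 1/2 and holds ten balls,
# so a draw is uniform over the twenty (shell, core) pairs listed below.
informative = [("blue", "blue")] * 8 + [("orange", "orange")] * 2
uninformative = ([("blue", "blue")] * 4 + [("blue", "orange")]
                 + [("orange", "blue")] * 4 + [("orange", "orange")])

cores_behind_orange_shells = [core for shell, core in informative + uninformative
                              if shell == "orange"]
print(Fraction(cores_behind_orange_shells.count("blue"),
               len(cores_behind_orange_shells)))  # 4/7
```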

The experiment broadly confirms that forecasters act on the incentive to misreport in order to safeguard their reputation, and that there is more misreporting the more certain the situation is: in the more uncertain condition (6/4) misreporting is at 51%, while in the more certain condition (8/2) it reaches 63%.

"Our results have implications for the use of expert advice as input of managerial decision-making and the design of markets for professional forecasting," Prof. Ottaviani says. "Our experimental evidence suggests that firms should trust experts' advice when the phenomenon to forecast is more uncertain. On the other hand, when the firm already has accurate information and the relevant variables are less uncertain, expert advice is not only less valuable but also less trustworthy."

"Furthermore," adds Prof. Nunnari, "the evaluators' approach plays a role. We found that experts have a strong incentive to reveal their private information truthfully when their reputation is strongly affected by the ex-post accuracy of their statements. Evaluators should link their evaluation to expert's ex-post accuracy, rather than to experts' advice, thus reducing the experts' incentives to misreport information."

Debrah Meloso, Salvatore Nunnari, Marco Ottaviani, "Looking into Crystal Balls: A Laboratory Experiment on Reputational Cheap Talk," forthcoming in Management Science. DOI: https://doi.org/10.1287/mnsc.2022.4629.