Does forecasting actually work?

Most policy analysis, and indeed most of the work we do at Scioto Analysis, is forward-looking. We sometimes look back at policies that have already been put into place to determine what effects they had (which is sometimes called “evaluation” and sometimes “ex post analysis”), but to help policymakers make better decisions, we need to be able to tell them what will happen if they choose to enact a given policy.

We call this “forecasting,” but if you want to be less technical, we’re trying to predict the future. We have tools that give us confidence that our predictions are better on average than just randomly guessing about the future, but at the end of the day we are making a claim about something that has not happened yet. 

One question you might have about forecasting is straightforward: does it work? If people are making so many predictions, why don’t we just evaluate them and see whether they are accurate?

A new working paper from the National Bureau of Economic Research tries to answer this question. 

In this study, economists looked at 100 social science projects that generated tens of thousands of forecasts between 2020 and 2024. They wanted to find out whether these forecasts successfully predicted what actually happened, and whether any characteristics made people better or worse at forecasting.

The good news is that the forecasts they looked at generally had predictive value. Forecasts tended to overestimate treatment effects, but on the whole they provided meaningful insight about the future.
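
To make that concrete, here is a minimal sketch, using entirely made-up numbers rather than data from the paper, of the two checks the researchers run at scale: whether forecasts track realized effects (predictive value) and whether they are systematically too high (overestimation).

```python
import statistics

# Hedged sketch with hypothetical numbers (not data from the paper).
# Two checks, in miniature, of what the researchers do at scale:
#   1. Do forecasts track realized effects? (predictive value)
#   2. Are forecasts systematically too high? (overestimation)
predicted = [0.40, 0.10, 0.25, 0.55, 0.05]  # hypothetical forecasted effect sizes
realized = [0.30, 0.05, 0.20, 0.45, 0.10]   # hypothetical measured effect sizes

# Positive signed bias means forecasts overshoot on average.
bias = statistics.mean(p - r for p, r in zip(predicted, realized))
# High correlation means forecasts carry real information about outcomes.
corr = statistics.correlation(predicted, realized)

print(f"mean signed bias:          {bias:+.3f}")
print(f"correlation with outcomes: {corr:.3f}")
```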

This study also supported the “wisdom of the crowd” hypothesis. When the researchers had multiple forecasts of the same outcome, the average of those forecasts was a better predictor than any single forecast on its own.
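You can see why averaging helps with a small simulation. This is a hedged sketch, not the paper’s method: it assumes each forecaster’s prediction is the true effect plus a shared upward bias plus independent noise, and it compares the average individual error to the error of the crowd average.

```python
import random
import statistics

# Hedged simulation sketch (assumed model, not the paper's method):
# each forecast = true effect + shared upward bias + independent noise.
random.seed(0)

TRUE_EFFECT = 0.30   # hypothetical true treatment effect
BIAS = 0.05          # shared tendency to overestimate
NOISE = 0.15         # std. dev. of individual forecaster error
N_FORECASTERS = 25

forecasts = [TRUE_EFFECT + BIAS + random.gauss(0, NOISE)
             for _ in range(N_FORECASTERS)]

mean_individual_error = statistics.mean(abs(f - TRUE_EFFECT) for f in forecasts)
crowd_error = abs(statistics.mean(forecasts) - TRUE_EFFECT)

print(f"average individual error: {mean_individual_error:.3f}")  # roughly NOISE-sized
print(f"crowd average error:      {crowd_error:.3f}")            # roughly BIAS-sized
```

Independent errors tend to cancel in the average while the shared bias does not, which fits the paper finding both crowd wisdom and systematic overestimation in the same data.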

When looking at the quality of predictors, the researchers found that academics (professors and top-ranking PhD students) performed better than non-academics. Since forecasts tend to overestimate effects, this generally means the academics were predicting smaller effect sizes, which may explain their reputation for caution. In a similar vein, the researchers found that people who were more confident in their estimates tended to perform worse than those who were less confident.

Overall, I think this is a success story for forecasters. When I think about the information a policymaker tends to want, the first thing we need to get right is the direction of the effect. If we say a policy is going to grow the economy, it had better not shrink it.

Of course, we want to be as accurate as possible, and any good policy analysis should be rigorous, but getting the direction right already gives policymakers information they can use to make decisions.

At the end of the day, the goal of analysis is to inform better decision-making. This paper suggests that, in general, social scientists are doing a good job of getting the main idea right. We should probably be skeptical about the magnitude of some effects, but overall, research helps us understand what the future might hold.