At a time of shrinking international budgets, escalating conflicts, and humanitarian crises, it is tempting to treat conflict prediction as a distraction. Current automated prediction systems are imprecise and do not explain why something happens. The little evidence we have suggests that they improve the prediction of events compared to humans alone (Benjamin et al., 2023), but the question remains whether the effort is worthwhile if the predictions cannot explain what happens.
This paper was in large part motivated by a thought-provoking Raleigh report that argues strongly against quantitative prediction. But I also wanted to provide some general arguments for why we put so much effort into developing and providing machine-learning-based forecasts at Conflict Forecast. At EconAI, our perspective on this issue is heavily influenced by the idea of "prediction policy problems" (Kleinberg et al., 2015): in some areas of policy making, prediction alone can contribute. For example, nobody doubts that forecasting the weather is useful, even if it were done with deep neural networks that no one understood.
The paper can be accessed here. In my opinion, there are many arguments for why quantitative conflict prediction is particularly useful, and the success of other papers produced by the EconAI team that use the quantitative forecasts as data clearly illustrates this. Those who see value in quantification should see value in quantifying forecasts. Forecast data is simply more quantitative data and will hopefully have similarly beneficial effects on research and policy.
But the list of disadvantages of quantitative forecasts based on machine learning is also long. Two stand out to me. First, human experts make forecasts with an informal causal model in their heads, which means their forecasts can immediately feed into policy advice. This point is also taken up in the Raleigh report linked above and relates directly to the prediction policy problem. I would always argue that for prediction policy problems, both causal understanding and prediction need to be solved. Second, human models are much richer, because quantification is always simplification first. This is also why I am skeptical of the "wins" of ML over human experts reported in academic papers: the machines can win because the studies force humans to make narrow predictions. The prediction space that decision makers are interested in is often much richer than what quantitative models can provide, at least so far.
However, I am convinced that with current technology and more data, some of these issues can already be addressed. Better integration requires careful design of both the forecasting and causal inference tools and the communication of their results. More research on human-machine interaction is urgently needed.