Abstract:
Progress towards SDG 16 in the Asia-Pacific region is hindered by internal conflicts, rising military expenditure, and a lack of indicator data for monitoring. The UN Expert Panel on Technology and Innovation has advocated for integrating AI into early conflict detection using peacekeeping data. Recent initiatives such as the Situational Awareness Geospatial Enterprise (SAGE) and Unite Aware align with this vision. However, challenges persist in applying predictive peacekeeping effectively. Brute-force big data approaches and the lack of standardization and validation in logging conflict events hinder progress, while data biases perpetuated through black-box AI models may discriminate against underrepresented populations. In this paper, we envision a more actionable conflict detection system that combines explanatory and predictive capabilities, offering insights beyond mere predictions. Such a system aims to identify and evaluate correlations among conflict actors, enabling predictions of future events to generalize. We argue that explainable artificial intelligence (XAI) with causal inference has the potential to realize this vision while avoiding the black-box conundrum. XAI has gained traction in fields such as healthcare and finance, where outputs must be trustworthy and interpretable. We address the current obstacles to predictive peacekeeping in light of the data gaps in monitoring SDG 16 in the Asia-Pacific region, and we provide specific examples of leveraging XAI models in this setting. We also discuss the limitations of and challenges to their practical applicability. To the best of our knowledge, this is the first paper to discuss enhancing actionable intelligence with XAI for predictive peacekeeping in the Asia-Pacific region.