Understanding and explaining complex geographic phenomena, ranging from climate change to socioeconomic disparities, is a central focus of geography and the broader scientific community. Various methods have been developed to elucidate relationships between variables, from coefficient estimates in linear regression models to the increasingly dominant feature attribution scores of Explainable AI (XAI) techniques. However, explanations generated by XAI methods often carry uncertainty stemming from both the model itself and its training data. Despite the critical importance of accounting for such uncertainty, the issue remains largely overlooked in the geospatial domain. In this study, we develop an uncertainty quantification framework for XAI explanations based on conformal prediction, termed Geospatial eXplanation Conformal Prediction (GeoXCP). By incorporating spatial dependence into the modeling process, GeoXCP produces spatially adaptive explanations with calibrated uncertainty estimates. We validate the effectiveness of GeoXCP through extensive simulation experiments and real-world datasets. The results demonstrate that GeoXCP provides reliable explanations while effectively quantifying their uncertainty across diverse geospatial scenarios. Our approach advances explainable geospatial machine learning, enabling decision-makers to better assess the trustworthiness of model-driven insights. The proposed framework is implemented in a Python package, also named GeoXCP.