The increasing number of satellites has improved the temporal resolution of Earth observation, making satellite-based flood mapping a promising approach for operational flood monitoring. Deep learning approaches to flood mapping from satellite imagery, an important application within Geospatial Artificial Intelligence (GeoAI), have improved predictive performance by learning complex spatial and spectral patterns from large volumes of remote sensing data. However, the opaque decision-making of deep learning models remains a major barrier to their integration into critical scientific and operational workflows, highlighting the need to systematically assess whether model explanations align with established remote sensing domain knowledge. To address this gap, this study introduces the ADAGE (Alignment between Domain Knowledge And GeoAI Explanation Evaluation) framework, designed to systematically evaluate how well explanations of deep learning models align with established remote sensing knowledge, particularly the distinctive spectral properties of the Earth's surface. The ADAGE framework employs a Channel-Group SHAP (SHapley Additive exPlanations) method to estimate the contributions of grouped input channels to pixel-level predictions. Experiments on two satellite-based flood mapping tasks demonstrate that the ADAGE framework can (1) quantitatively assess the alignment between model explanations and reference explanations derived from domain knowledge and (2) help domain experts identify misaligned explanations through alignment scores. This study helps bridge the gap between explainability and domain knowledge in GeoAI for Earth observation, enhancing the applicability of GeoAI models in scientific and operational workflows.
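The core idea of attributing a pixel-level prediction to *groups* of input channels can be sketched as follows. This is a minimal illustrative example, not the paper's implementation: the function name `channel_group_shap`, the toy linear model, and the band groupings are assumptions, and Shapley values are computed exactly by enumerating all coalitions of groups (feasible only because spectral groups are few, whereas a practical pipeline would likely use an approximation such as sampled coalitions).

```python
import itertools
import math

import numpy as np


def channel_group_shap(model, x, baseline, groups):
    """Exact Shapley values over channel groups for one pixel.

    model:    callable mapping a (C,) channel vector to a scalar prediction
    x:        (C,) observed channel values for the pixel
    baseline: (C,) reference values representing "absent" channels
    groups:   dict mapping group name -> list of channel indices
    """
    names = list(groups)
    n = len(names)

    def eval_coalition(present):
        # Channels of groups in `present` take observed values; others stay at baseline.
        z = baseline.copy()
        for j in present:
            idx = groups[names[j]]
            z[idx] = x[idx]
        return model(z)

    shap = {}
    for i in range(n):
        others = [j for j in range(n) if j != i]
        phi = 0.0
        # Sum the weighted marginal contribution of group i over all coalitions S.
        for r in range(n):
            for S in itertools.combinations(others, r):
                w = (math.factorial(len(S)) * math.factorial(n - len(S) - 1)
                     / math.factorial(n))
                phi += w * (eval_coalition(set(S) | {i}) - eval_coalition(set(S)))
        shap[names[i]] = phi
    return shap


if __name__ == "__main__":
    # Toy stand-in for a pixel-level flood classifier: a linear score over 4 bands.
    weights = np.array([1.0, 2.0, 3.0, 4.0])
    model = lambda z: float(z @ weights)
    groups = {"visible": [0, 1], "nir": [2], "swir": [3]}  # hypothetical band grouping
    x, baseline = np.ones(4), np.zeros(4)
    print(channel_group_shap(model, x, baseline, groups))
```

By the efficiency property of Shapley values, the group attributions sum to `model(x) - model(baseline)`, which makes the output easy to sanity-check against a reference explanation.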