Paper
Annals of GIS
UrbanCompLab
GIS
Title
Representation learning for geospatial data
Liu, Yu; Wang, Xuechen; Wang, Yidan; Huang, Fei; Huang, Yingjing; Li, Yong; Zhang, Weiyu; Gong, Shuhui; Mai, Gengchen; Yao, Yao
Publication Date
2025/1/1 08:00:00
Source Type
journal
Language
en
Abstract
This paper reviews representation learning for geospatial data, focusing on methods for automatically extracting meaningful features from diverse data types. By simplifying tasks and improving accuracy, representation learning has emerged as a powerful tool for geospatial analysis. Due to its generalizability and scalability, representation learning provides an effective approach to processing geospatial data, which is inherently diverse and unstructured. We summarize the representation learning methods for different geospatial data types, including locations, points of interest (POIs), trajectories, spatial interactions, remote sensing imagery, and street view imagery. Treating each data type as a distinct modality, we emphasize the potential of multi-modal representation learning to advance the understanding of geographical phenomena and propose an LLM-guided framework as a potential solution. The review concludes by highlighting the need for further research to improve multi-modal data alignment and enhance the interpretability of feature representations, particularly in complex and dynamic geographical environments.
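The abstract treats locations as one geospatial data type amenable to representation learning. As a minimal, illustrative sketch (not the reviewed methods' exact design), a coordinate pair can be turned into a fixed-length feature vector with multi-scale sinusoidal encoding, in the spirit of positional-encoding-based location encoders; all parameter names and defaults below are assumptions for illustration:

```python
import numpy as np

def encode_location(lon, lat, num_scales=4,
                    min_wavelength=1.0, max_wavelength=360.0):
    """Embed a (lon, lat) pair as multi-scale sinusoidal features.

    Illustrative sketch of a location encoder input representation;
    parameter names and defaults are assumptions, not taken from the paper.
    """
    # Geometric progression of wavelengths from fine to coarse scale.
    scales = min_wavelength * (max_wavelength / min_wavelength) ** (
        np.arange(num_scales) / max(num_scales - 1, 1)
    )
    feats = []
    for coord in (lon, lat):
        for wl in scales:
            # Each scale contributes a sin/cos pair, like a positional encoding.
            feats.append(np.sin(2 * np.pi * coord / wl))
            feats.append(np.cos(2 * np.pi * coord / wl))
    return np.array(feats)  # 2 coords * num_scales scales * 2 = 16 features

vec = encode_location(114.61, 30.46)  # approximate coordinates of Wuhan
print(vec.shape)  # (16,)
```

The multi-scale design lets downstream models distinguish both nearby and distant locations; a learned network would typically map such features into a task-specific embedding space.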

Metadata
DOI: 10.1080/19475683.2025.2552157
Source: Annals of GIS
Type: Paper
Extraction Status: curated
Keywords
UrbanComp Lab
Location Intelligence and Urban Sensing Laboratory, China University of Geosciences (Wuhan)
GeoAI
large geospatial models
trajectory data
spatiotemporal knowledge graphs
geospatial big data
multi-source, multi-modal geographic data
geographic flows
complex networks
urban transportation
geographic simulation
cellular automata
representation
geospatial