Chinese Title
位置即所需:地球观测数据的连续时空神经表征
English Title
Location Is All You Need: Continuous Spatiotemporal Neural Representations of Earth Observation Data
Mojgan Madadikhaljan, Jonathan Prexl, Isabelle Wittmann, Conrad M Albrecht, Michael Schmitt
Published
2026-04-08 21:44:49
Source Type
preprint
Language
en
Abstract
Chinese Translation

本文提出 LIANet(Location Is All You Need Network),一种基于坐标的神经表征方法,将特定兴趣区域的多时相星载地球观测(EO)数据建模为连续时空神经场。仅需输入空间与时间坐标,LIANet 即可重建对应的卫星影像。预训练完成后,该神经表征可适配多种 EO 下游任务(如语义分割或像素级回归),且关键在于无需访问原始卫星数据。LIANet 旨在作为地理空间基础模型(Geospatial Foundation Models, GFMs)的用户友好型替代方案,消除终端用户在数据获取与预处理方面的开销,并支持仅基于标签进行微调。我们在不同尺度的目标区域上完成了 LIANet 的预训练,并证明其在下游任务上的微调性能可媲美从头训练或采用现有 GFMs 的方法。源代码与数据集公开于 https://github.com/mojganmadadi/LIANet/tree/v1.0.1。

English Original

In this work, we present LIANet (Location Is All You Need Network), a coordinate-based neural representation that models multi-temporal spaceborne Earth observation (EO) data for a given region of interest as a continuous spatiotemporal neural field. Given only spatial and temporal coordinates, LIANet reconstructs the corresponding satellite imagery. Once pretrained, this neural representation can be adapted to various EO downstream tasks, such as semantic segmentation or pixel-wise regression; importantly, this adaptation does not require access to the original satellite data. LIANet is intended to serve as a user-friendly alternative to Geospatial Foundation Models (GFMs), eliminating the overhead of data access and preprocessing for end-users and enabling fine-tuning based solely on labels. We demonstrate the pretraining of LIANet across target areas of varying sizes and show that fine-tuning it for downstream tasks achieves performance competitive with training from scratch or with established GFMs. The source code and datasets are publicly available at https://github.com/mojganmadadi/LIANet/tree/v1.0.1.
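The core idea in the abstract — querying a neural field at continuous spatiotemporal coordinates to reconstruct imagery — can be sketched as a coordinate MLP. This is a minimal illustrative sketch, not the authors' architecture: the Fourier-feature encoding, layer sizes, and band count are all assumptions chosen for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

def fourier_features(coords, n_freqs=8):
    """Encode raw (x, y, t) coordinates with sin/cos Fourier features,
    a common trick that lets MLPs fit high-frequency spatial detail."""
    freqs = 2.0 ** np.arange(n_freqs)          # (F,)
    angles = coords[..., None] * freqs         # (N, 3, F)
    feats = np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)
    return feats.reshape(coords.shape[0], -1)  # (N, 3 * 2F)

class CoordinateMLP:
    """Tiny neural field: encoded (x, y, t) -> spectral band values.
    Weights are random here; training against satellite pixels is omitted."""
    def __init__(self, in_dim, hidden=64, n_bands=4):
        self.W1 = rng.normal(0.0, 1.0 / np.sqrt(in_dim), (in_dim, hidden))
        self.W2 = rng.normal(0.0, 1.0 / np.sqrt(hidden), (hidden, n_bands))

    def __call__(self, coords):
        h = np.tanh(fourier_features(coords) @ self.W1)
        return h @ self.W2                     # (N, n_bands)

# Query the field at arbitrary continuous spatiotemporal coordinates:
# a 4x4 spatial grid at a single normalized timestamp t = 0.5.
coords = np.stack(np.meshgrid(np.linspace(0, 1, 4),
                              np.linspace(0, 1, 4),
                              [0.5]), axis=-1).reshape(-1, 3)
model = CoordinateMLP(in_dim=3 * 2 * 8)
pixels = model(coords)
print(pixels.shape)  # (16, 4): 16 queried locations, 4 spectral bands
```

Because the field is continuous in both space and time, the same trained weights can be queried at resolutions and timestamps never stored explicitly — which is what allows downstream fine-tuning without the original satellite data.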

Metadata
arXiv ID: 2604.07092v1
Source: arXiv
Type: Paper
Extraction status: raw
Keywords
GeoAI
GIS
RemoteSensing
EarthObservation
GeoLargeModel
GeoFoundationModel
cs.CV