Mobility Foundation Models (MFMs) have advanced the modeling of human movement patterns, yet they face a ceiling due to limitations in data scale and semantic understanding. While Large Language Models (LLMs) offer powerful semantic reasoning, they lack the innate understanding of spatio-temporal statistics required for generating physically plausible mobility trajectories. To address these gaps, we propose MoveFM-R, a novel framework that unlocks the full potential of mobility foundation models by leveraging language-driven semantic reasoning capabilities. It tackles two key challenges: the vocabulary mismatch between continuous geographic coordinates and discrete language tokens, and the representation gap between the latent vectors of MFMs and the semantic world of LLMs. MoveFM-R is built on three core innovations: a semantically enhanced location encoding to bridge the geography-language gap, a progressive curriculum to align the LLM's reasoning with mobility patterns, and an interactive self-reflection mechanism for conditional trajectory generation. Extensive experiments demonstrate that MoveFM-R significantly outperforms existing MFM-based and LLM-based baselines. It also shows robust generalization in zero-shot settings and excels at generating realistic trajectories from natural language instructions. By synthesizing the statistical power of MFMs with the deep semantic understanding of LLMs, MoveFM-R pioneers a new paradigm that enables a more comprehensive, interpretable, and powerful modeling of human mobility. The implementation of MoveFM-R is available online at https://anonymous.4open.science/r/MoveFM-R-CDE7/.