ABSTRACT
This research proposes Multi-Pedestrian TrajMamba, a framework that uses selective state space models to predict the future trajectories of interacting pedestrians from egocentric views, addressing the limited long-range memory of RNNs and the quadratic sequence cost of Transformers.
Key findings
Achieves linear-time complexity with global receptive fields.
Introduces Social State Space Module for multi-agent interaction modeling.
Includes Egocentric Motion Encoder to disentangle camera ego-motion from pedestrian dynamics.
Features Multi-scale Mamba Decoder for probabilistic multi-modal trajectory generation.
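The linear-time, global-receptive-field claim above comes from the selective state space scan at the core of Mamba-style models. The following is a minimal sketch of such a scan, not the paper's implementation: all parameter names (`A_log`, `W_B`, `W_C`, `w_dt`) and shapes are illustrative assumptions, and the real model would use learned parameters, multiple state channels, and a hardware-efficient parallel scan.

```python
import numpy as np

def selective_ssm_scan(x, A_log, W_B, W_C, w_dt):
    """Minimal selective state-space scan (Mamba-style sketch).

    x: (T, D) input sequence, N-dimensional hidden state.
    Parameter names and shapes are illustrative, not from the paper.
    """
    T, D = x.shape
    N = A_log.shape[0]
    A = -np.exp(A_log)                      # (N,) diagonal state matrix, negative real for stability
    h = np.zeros(N)
    ys = np.empty((T, D))
    for t in range(T):                      # O(T) scan; the state h summarizes the whole prefix
        dt = np.log1p(np.exp(w_dt @ x[t]))  # softplus step size: input-dependent ("selective")
        B_t = W_B @ x[t]                    # (N,) input projection
        h = np.exp(dt * A) * h + dt * B_t   # discretized diagonal recurrence
        ys[t] = W_C @ h                     # (D,) readout at step t
    return ys

# Tiny usage example on random data
rng = np.random.default_rng(0)
T, D, N = 12, 4, 8
ys = selective_ssm_scan(
    rng.normal(size=(T, D)),
    rng.normal(size=N),
    rng.normal(size=(N, D)),
    rng.normal(size=(D, N)),
    rng.normal(size=D),
)
```

Because the recurrence folds the entire history into the state `h`, every output can depend on all earlier inputs (a global receptive field) while the cost stays linear in sequence length, in contrast to the quadratic cost of full self-attention.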
Limitations & open questions
The framework's performance in real-world scenarios with diverse pedestrian behaviors has yet to be validated.