This paper proposes a self-supervised pre-training approach for wireless world models in 6G networks, using Masked Autoencoding (MAE) to learn environment dynamics from unlabeled network telemetry. The pre-trained model can then be fine-tuned for predictive resource management tasks with minimal labeled data, achieving high accuracy and sample efficiency.
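As an illustrative sketch only (not the paper's implementation), MAE-style pre-training on telemetry can be summarized as: hide a random subset of features per snapshot, encode the visible ones, and train on the reconstruction error of the hidden entries. The synthetic low-rank data, the tiny linear encoder/decoder, and the 75% masking ratio below are all assumptions made for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical unlabeled telemetry: N snapshots of D correlated features
# (e.g. per-cell load, interference, throughput). Purely synthetic stand-in;
# the low-rank structure makes masked entries recoverable from visible ones.
N, D, H = 512, 16, 8
X = rng.normal(size=(N, H)) @ rng.normal(size=(H, D))

# Tiny linear encoder/decoder as a stand-in for the paper's architecture.
W_enc = rng.normal(scale=0.1, size=(D, H))
W_dec = rng.normal(scale=0.1, size=(H, D))
lr, mask_ratio = 0.005, 0.75  # high masking ratio, in the spirit of MAE

losses = []
for step in range(1000):
    # MAE-style masking: hide a random subset of features per snapshot.
    mask = rng.random(X.shape) < mask_ratio   # True = masked (hidden)
    X_visible = np.where(mask, 0.0, X)

    Z = X_visible @ W_enc                     # encode visible features
    X_hat = Z @ W_dec                         # reconstruct all features
    err = (X_hat - X) * mask                  # score only the masked entries
    losses.append(float((err ** 2).sum() / mask.sum()))

    # Plain gradient descent on the masked-reconstruction MSE.
    g = 2.0 * err / mask.sum()
    dZ = g @ W_dec.T
    W_dec -= lr * (Z.T @ g)
    W_enc -= lr * (X_visible.T @ dZ)

print(f"masked-reconstruction loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

After pre-training, the encoder weights would be reused and fine-tuned on a small labeled set for a downstream task (e.g. load prediction), which is where the claimed sample efficiency comes from.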
Key findings
Proposes a self-supervised wireless world model architecture based on Masked Autoencoding (MAE), pre-trained on unlabeled network telemetry.
Demonstrates sample-efficient adaptation to multiple predictive resource management tasks.
Evaluates the approach against strong baselines including MMSE/LS channel estimators and DRL agents.
Provides extensive ablation studies on masking ratios, model sizes, and data efficiency.
Limitations & open questions
The synthetic wireless network datasets used for evaluation may not fully capture real-world complexities such as hardware impairments and non-stationary traffic.
The model's performance in real-world deployments, where labeled data is scarce, remains to be validated.