DrivingWorld: Constructing World Model for Autonomous Driving via Video GPT
TL;DR Summary
DrivingWorld introduces a video GPT model with spatio-temporal fusion, combining next-state and next-token prediction to enhance autonomous driving video generation, achieving over 40 seconds of high-fidelity, coherent video with novel masking and reweighting to reduce drift.
Abstract
Recent successes in autoregressive (AR) generation models, such as the GPT series in natural language processing, have motivated efforts to replicate this success in visual tasks. Some works attempt to extend this approach to autonomous driving by building video-based world models capable of generating realistic future video sequences and predicting ego states. However, prior works tend to produce unsatisfactory results, as the classic GPT framework is designed to handle 1D contextual information, such as text, and lacks the inherent ability to model the spatial and temporal dynamics essential for video generation. In this paper, we present DrivingWorld, a GPT-style world model for autonomous driving, featuring several spatial-temporal fusion mechanisms. This design enables effective modeling of both spatial and temporal dynamics, facilitating high-fidelity, long-duration video generation. Specifically, we propose a next-state prediction strategy to model temporal coherence between consecutive frames and apply a next-token prediction strategy to capture spatial information within each frame. To further enhance generalization ability, we propose a novel masking strategy and reweighting strategy for token prediction to mitigate long-term drifting issues and enable precise control. Our work demonstrates the ability to produce high-fidelity and consistent video clips of over 40 seconds in duration, which is over 2 times longer than state-of-the-art driving world models. Experiments show that, in contrast to prior works, our method achieves superior visual quality and significantly more accurate controllable future video generation. Our code is available at https://github.com/YvanYin/DrivingWorld.
English Analysis
1. Bibliographic Information
- Title: DrivingWorld: Constructing World Model for Autonomous Driving via Video GPT
- Authors: Xiaotao Hu, Wei Yin, Mingkai Jia, Junyuan Deng, Xiaoyang Guo, Qian Zhang, Xiaoxiao Long, Ping Tan
- Affiliations: The Hong Kong University of Science and Technology, Horizon Robotics
- Journal/Conference: This paper is an arXiv preprint. Preprints are preliminary versions of research papers that have not yet undergone formal peer review. They are common in fast-moving fields like machine learning to disseminate findings quickly.
- Publication Year: 2024 (v2 submitted in December)
- Abstract: The paper presents DrivingWorld, a world model for autonomous driving based on the GPT architecture. Standard GPT models struggle with video generation because they are designed for 1D text data and cannot adequately model spatial and temporal dynamics. To solve this, DrivingWorld introduces several spatial-temporal fusion mechanisms. It uses a "next-state" prediction strategy for temporal coherence and a "next-token" prediction strategy for spatial detail within each frame. Additionally, novel masking and reweighting strategies are proposed to reduce long-term video degradation ("drifting") and allow for precise control. The model is shown to generate high-quality, consistent videos over 40 seconds long, outperforming prior state-of-the-art models in visual quality and controllability.
- Original Source Link: https://arxiv.org/abs/2412.19505v2
2. Executive Summary
- Background & Motivation (Why):
- Core Problem: Autoregressive (AR) models like GPT, which are highly successful in natural language processing (NLP), have been adapted for video generation in autonomous driving. However, these adaptations often yield poor results. The fundamental issue is that the classic GPT framework is optimized for 1D sequential data (text) and is ill-suited to model the complex 2D spatial and temporal dynamics of video. This leads to generated videos with low visual quality, artifacts, and a lack of long-term consistency.
- Why It's Important: Creating realistic, long-term simulations of driving scenarios is crucial for training and validating autonomous driving systems. Such "world models" can help test systems in rare or dangerous situations (e.g., accidents) without real-world risk, addressing the long-tail problem where labeled data is scarce.
- Fresh Angle: Instead of naively applying a 1D GPT to a flattened sequence of video tokens, this paper proposes a new architecture that explicitly decouples and models spatial and temporal information. This hybrid approach aims to combine the strengths of state-level prediction for temporal flow with token-level autoregression for high-fidelity image generation.
- Main Contributions / Findings (What):
- A GPT-style World Model with Spatio-temporal Fusion: The core contribution is DrivingWorld, a video generation model that incorporates specialized mechanisms to handle both the spatial layout of individual frames and the temporal evolution between frames.
- Hybrid Prediction Strategy: The model uses a two-level prediction scheme: a next-state prediction strategy to ensure temporal coherence between video frames and a next-token prediction strategy to generate the spatial details within each frame autoregressively.
- Temporal-Aware Tokenization: A novel tokenizer is proposed that embeds temporal information during the process of converting video frames into discrete tokens, ensuring smoother transitions in the generated video.
- Long-term Controllable Generation Strategies:
- A random masking strategy (RMS) during training helps the model become robust to its own prediction errors during inference, mitigating the "drifting" problem where video quality degrades over time.
- A balanced attention strategy gives more weight to vehicle pose tokens (location and orientation), enabling more precise and reliable control over the ego-vehicle's trajectory in the generated simulation.
- State-of-the-Art Performance: The model can generate high-fidelity, controllable video clips over 40 seconds long, which is more than double the duration of previous state-of-the-art driving world models.
3. Prerequisite Knowledge & Related Work
- Foundational Concepts:
- Autoregressive (AR) Models: These are models that generate a sequence of data points by predicting the next point based on all previous points. The GPT (Generative Pre-trained Transformer) series are prominent examples in NLP, where they predict the next word in a sentence.
- World Models: A world model is an internal, learned representation of an environment. It can predict how the environment will change in the future based on past observations and potential actions. In autonomous driving, it acts as a simulator that understands the rules of the road and vehicle dynamics.
- Vector Quantization (VQ): A technique to convert continuous data (like image features) into a discrete set of representations. A VQ-VAE (Vector Quantized-Variational Autoencoder) learns a "codebook" of discrete vectors. An encoder maps an image to the closest vectors in the codebook, representing the image as a grid of discrete "tokens." A decoder then reconstructs the image from these tokens. VQGAN improves on this with adversarial training for higher realism. (A minimal sketch of the nearest-codebook lookup appears at the end of this section.)
- Previous Works:
- GAIA-1: The first major attempt to apply the GPT framework to create a driving world model. It tokenized video frames and used a standard next-token prediction approach. The paper notes that GAIA-1 suffers from low quality and artifacts because the 1D GPT structure is not well suited to video dynamics.
- Dreamer Series (Dreamer, DreamerV2, DreamerV3): Foundational world models that learn a compact latent space to predict future states and plan actions, achieving great success in game environments like Atari and Minecraft.
- Other Driving World Models (Drive-WM, WoVoGen, GenAD): Recent works that have explored world models for real-world driving. Many use diffusion models or have limitations in generation length or controllability. DrivingWorld positions itself as superior in generating longer, more controllable, and higher-fidelity videos.
- Differentiation: DrivingWorld distinguishes itself from prior works like GAIA-1 by not treating video as a simple 1D sequence of tokens. Its key innovation is the decoupled spatio-temporal architecture.
- vs. Vanilla GPT (GAIA-1): GAIA-1 uses a single, massive autoregressive process over all tokens. DrivingWorld separates this into a higher-level state prediction (temporal) and a lower-level frame generation (spatial), making it more efficient and better at capturing temporal consistency.
- vs. Diffusion Models: While diffusion models are powerful image/video generators, the paper notes they can be difficult to control precisely. DrivingWorld's autoregressive nature and balanced attention mechanism are designed for explicit control over the ego-vehicle's actions.
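To make the vector-quantization concept referenced above concrete, here is a minimal, self-contained sketch of the nearest-codebook lookup at the heart of VQ-VAE/VQGAN tokenizers. It is an illustration under assumed shapes (512 patch features, a 1024-entry codebook), not the paper's implementation.

```python
import torch

def quantize(features: torch.Tensor, codebook: torch.Tensor):
    """Map continuous features to their nearest codebook entries.

    features: (N, D) continuous encoder outputs (e.g., one row per image patch).
    codebook: (K, D) learned discrete embedding vectors.
    Returns (indices, quantized) where indices is (N,) and quantized is (N, D).
    """
    # Squared Euclidean distance between every feature and every codebook vector.
    distances = torch.cdist(features, codebook, p=2)   # (N, K)
    indices = distances.argmin(dim=1)                   # token index per patch
    quantized = codebook[indices]                       # look up the chosen vectors
    return indices, quantized

# Toy usage: 512 patch features of dimension 8, codebook of 1024 entries (assumed sizes).
feats = torch.randn(512, 8)
book = torch.randn(1024, 8)
tokens, recon_feats = quantize(feats, book)
print(tokens.shape, recon_feats.shape)  # torch.Size([512]) torch.Size([512, 8])
```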
4. Methodology (Core Technology & Implementation)
The goal of DrivingWorld is to predict the next state $s_{t+1}$ (vehicle orientation, location, and front-view image) from the history of past states $\{s_1, \dots, s_t\}$. The process is broken down into tokenization, the world model itself, and decoding.
The overall pipeline is shown in Image 2.
Figure: Overview of DrivingWorld, showing its main components: the vehicle pose encoder, the temporal-aware encoder, and the internal-state autoregressive module. Through these modules, the model understands vehicle orientation, location, and the front view, predicts future states, and generates videos longer than 40 seconds.
3.1. Tokenizer
The first step is to convert continuous, multimodal state data into a sequence of discrete tokens.
- Temporal-aware Vector Quantized Tokenizer:
- Problem: Standard single-image VQ tokenizers process each frame independently, leading to flickering or inconsistent object identities over time in the generated video.
- Solution: The authors enhance a VQGAN by inserting a temporal self-attention layer before and after the quantization step. This allows the model to look at corresponding feature patches across frames $1, \dots, T$ when deciding on the token for a specific patch in the current frame $t$. This encourages the model to assign similar tokens to the same object over time, improving temporal consistency.
- Formula: The token for the patch at position $(i, j)$ in frame $t$ is chosen by finding the closest codebook vector to the temporally-aware feature:
$$q_t^{(i,j)} = \arg\min_{k} \left\| \mathrm{Attn}\!\left(z_1^{(i,j)}, \dots, z_T^{(i,j)}\right)_t - \mathcal{Z}(k) \right\|_2$$
- $q_t^{(i,j)}$: The discrete token index for the patch at $(i, j)$ in frame $t$.
- $\mathcal{Z}$: The learned codebook of discrete feature vectors.
- $\mathcal{Z}(k)$: Retrieves the $k$-th vector from the codebook.
- $\mathrm{Attn}(\cdot)$: The temporal self-attention module that processes features from the same patch location across all frames.
- $z_t^{(i,j)}$: The continuous feature vector from the encoder for the patch at $(i, j)$ in frame $t$.
- Vehicle Pose Tokenizer:
- Problem: Global vehicle coordinates $(x, y)$ can grow indefinitely large, making them difficult to normalize and model over long sequences.
- Solution: The model uses relative poses, i.e., the change in orientation and location between consecutive timesteps. These relative values are then discretized (quantized) into a finite number of bins.
- Formula: The relative orientation $\Delta\theta_t$ and location $(\Delta x_t, \Delta y_t)$ are converted into single token indices $q_t^{\theta}$ and $q_t^{xy}$ by uniform binning:
$$q_t^{\theta} = \left\lfloor \frac{\Delta\theta_t - \theta_{\min}}{\theta_{\max} - \theta_{\min}} \, n_{\theta} \right\rfloor, \qquad q_t^{xy} = \left\lfloor \frac{\Delta x_t - x_{\min}}{x_{\max} - x_{\min}} \, n_{x} \right\rfloor \cdot n_y + \left\lfloor \frac{\Delta y_t - y_{\min}}{y_{\max} - y_{\min}} \, n_{y} \right\rfloor$$
- $\Delta\theta_t, \Delta x_t, \Delta y_t$: Relative pose changes at time $t$.
- $\theta_{\min}, \theta_{\max}$, etc.: The minimum and maximum range for each pose dimension.
- $n_{\theta}, n_x, n_y$: The number of discrete bins for orientation, x-location, and y-location, respectively.
After tokenization, each state at time $t$ is represented by a set of discrete tokens: the orientation token, the location token, and the grid of image tokens. (A small code sketch of this binning appears at the end of this subsection.)
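As a companion to the pose-tokenizer formula above, the following sketch shows one plausible uniform-binning implementation. The ranges, bin counts, and the packing of the x/y bins into a single location token are assumptions for illustration, not values from the paper.

```python
import numpy as np

def discretize(value: float, vmin: float, vmax: float, n_bins: int) -> int:
    """Uniformly bin a scalar into {0, ..., n_bins - 1}, clipping out-of-range values."""
    value = np.clip(value, vmin, vmax)
    bin_width = (vmax - vmin) / n_bins
    return min(int((value - vmin) / bin_width), n_bins - 1)

def tokenize_relative_pose(d_theta, d_x, d_y,
                           theta_range=(-0.5, 0.5),   # assumed ranges (radians / meters)
                           x_range=(-1.0, 5.0),
                           y_range=(-2.0, 2.0),
                           n_theta=64, n_x=128, n_y=128):
    """Return (orientation_token, location_token) for one timestep's relative pose."""
    q_theta = discretize(d_theta, *theta_range, n_theta)
    q_x = discretize(d_x, *x_range, n_x)
    q_y = discretize(d_y, *y_range, n_y)
    # Assumption: x and y bins are folded into a single location token index.
    q_xy = q_x * n_y + q_y
    return q_theta, q_xy

print(tokenize_relative_pose(0.02, 1.3, -0.1))
```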
3.2. World Model
This is the core transformer-based model that predicts future tokens. It avoids the inefficient vanilla GPT approach by using a decoupled architecture. Image 3 contrasts the vanilla approach with the proposed one.
Figure 1 from the paper: a comparison of two driving world model structures. The left side shows the conventional vanilla GPT model; the right side shows the authors' model with temporal-aware mechanisms, detailing key components such as the temporal layer, the multimodal layer, and the internal-state autoregressive module.
The proposed architecture consists of two main modules:
- Temporal-multimodal Fusion Module:
- Goal: To process the history of state tokens and produce a feature representation that encodes the likely next state.
- Structure: It alternates between two types of transformer layers:
- Temporal Layer: Uses causal attention along the time axis. A token at position $(i, j)$ in frame $t$ can only attend to tokens at the same position in frames $1, \dots, t$. This efficiently captures how each part of the scene evolves over time.
- Multimodal Layer: Uses bidirectional attention within a single frame. All tokens within the same frame can attend to each other. This allows the model to fuse information from different modalities (pose and image) and understand the spatial relationships within the scene. (A minimal sketch of this alternating attention pattern appears at the end of this subsection.)
- This decoupled design is more efficient than a full spatio-temporal attention mechanism, as it reduces the number of tokens involved in each attention operation.
- Internal-state Autoregressive Module:
- Goal: To generate the high-quality tokens for the next state $\hat{s}_{t+1}$ autoregressively.
- Process: It takes the feature output from the fusion module (which represents a prediction for the next state) and uses it to condition a standard GPT-style autoregressive generator. It predicts the tokens for the next frame one by one, from the orientation token to the last image token.
- Formula: The prediction of the $k$-th token of the next state is conditioned on the fused features from the previous state and all previously generated tokens of the current state:
$$\hat{q}_{t+1}^{\,k} = \mathcal{G}\!\left(f_t,\ \mathrm{Emb}\!\left(\hat{q}_{t+1}^{\,1}\right), \dots, \mathrm{Emb}\!\left(\hat{q}_{t+1}^{\,k-1}\right)\right)$$
- $\hat{q}_{t+1}^{\,k}$: The predicted $k$-th token for the state at time $t+1$.
- $\mathcal{G}$: The internal-state autoregressive transformer.
- $f_t$: The fused feature from the previous module, providing context for the prediction.
- $\mathrm{Emb}(\cdot)$: An embedding layer that converts discrete token indices into continuous vectors.
The model is trained with a standard cross-entropy loss to predict the correct sequence of ground-truth tokens.
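The alternating temporal/multimodal attention described in the fusion module above can be sketched as follows. Tensor layout, layer sizes, and the use of PyTorch's `nn.MultiheadAttention` are assumptions; the paper's blocks also contain feed-forward layers, normalization, and positional embeddings that are omitted here.

```python
import torch
import torch.nn as nn

class DecoupledBlock(nn.Module):
    """One temporal layer (causal, across frames) + one multimodal layer (bidirectional, within a frame)."""

    def __init__(self, dim: int = 256, heads: int = 4):
        super().__init__()
        self.temporal_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.multimodal_attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, T, N, C) = batch, frames, tokens per frame (pose + image), channels.
        B, T, N, C = x.shape

        # Temporal layer: each token position attends only to the same position in past frames.
        xt = x.permute(0, 2, 1, 3).reshape(B * N, T, C)
        causal = torch.triu(torch.ones(T, T, dtype=torch.bool), diagonal=1)
        xt = xt + self.temporal_attn(xt, xt, xt, attn_mask=causal, need_weights=False)[0]
        x = xt.reshape(B, N, T, C).permute(0, 2, 1, 3)

        # Multimodal layer: all tokens within one frame attend to each other (no mask).
        xm = x.reshape(B * T, N, C)
        xm = xm + self.multimodal_attn(xm, xm, xm, need_weights=False)[0]
        return xm.reshape(B, T, N, C)

# Toy usage: 2 clips, 4 frames, 2 pose tokens + 16 image tokens per frame (assumed sizes).
out = DecoupledBlock()(torch.randn(2, 4, 18, 256))
print(out.shape)  # torch.Size([2, 4, 18, 256])
```

Attention is applied over either the time axis or the within-frame axis, never over all T x N tokens at once, which is the efficiency argument made above.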
3.3. Decoder
Once the world model predicts the next-state tokens $\hat{s}_{t+1}$, the decoder converts them back into physical data.
- Vehicle Pose Decoder: This performs the inverse operation of the pose tokenizer, converting the discrete token indices back into continuous relative pose values $(\Delta\theta_{t+1}, \Delta x_{t+1}, \Delta y_{t+1})$.
- Temporal-aware Decoder: The predicted image tokens are used to look up the corresponding vectors from the codebook. These vectors are then passed through a temporal self-attention layer (mirroring the encoder) before being fed to the VQGAN's decoder CNN to reconstruct the final image $\hat{I}_{t+1}$.
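For completeness, here is the inverse mapping a pose decoder could use under the same assumed binning as the tokenizer sketch earlier in this section; ranges and bin counts are again placeholders rather than the paper's values.

```python
def detokenize_relative_pose(q_theta: int, q_xy: int,
                             theta_range=(-0.5, 0.5),
                             x_range=(-1.0, 5.0),
                             y_range=(-2.0, 2.0),
                             n_theta=64, n_x=128, n_y=128):
    """Map discrete pose tokens back to continuous values at the center of each bin."""
    def bin_center(idx, vmin, vmax, n_bins):
        width = (vmax - vmin) / n_bins
        return vmin + (idx + 0.5) * width

    q_x, q_y = divmod(q_xy, n_y)   # undo the assumed q_x * n_y + q_y packing
    d_theta = bin_center(q_theta, *theta_range, n_theta)
    d_x = bin_center(q_x, *x_range, n_x)
    d_y = bin_center(q_y, *y_range, n_y)
    return d_theta, d_x, d_y

print(detokenize_relative_pose(33, 5000))
```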
3.4. Long-term Controllable Generation
Two key strategies are introduced to improve long-duration generation and controllability.
- Token Dropout for Drifting-free Autoregression:
- Problem: During inference, the model feeds its own generated outputs back as input. If the model only ever sees perfect ground-truth data during training, it becomes brittle, and small errors can accumulate, causing the generated video to "drift" into nonsensical states.
- Solution: A Random Masking Strategy (RMS) is used during training. With some probability, a portion of the input ground-truth tokens is randomly replaced. This forces the model to learn to recover from noisy or imperfect inputs, making it more robust during long-term autoregressive generation. Image 4 shows a visual comparison of the effect of this strategy.
Figure: A comparison from the DrivingWorld paper showing autonomous-driving video frames at different timesteps (frames 1, 4, 7, and 10) generated with drifting versus drifting-free settings, highlighting differences in the stability and continuity of the scene inside the red dashed boxes.
- Balanced Attention for Precise Control:
- Problem: In each state, there are many image tokens (e.g., 512) but only two pose tokens. In a standard attention mechanism, the signal from the pose tokens can be drowned out by the image tokens, making it difficult to control the vehicle's trajectory.
- Solution: The authors propose balanced attention, where they manually increase the attention weights for the orientation and location tokens before the softmax operation. This forces the model to "pay more attention" to the control inputs, leading to more precise and reliable vehicle maneuvering in the generated video.
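Both long-term generation strategies above lend themselves to short sketches. The mask ratio, bias magnitude, token layout, and function names below are illustrative assumptions, not the authors' implementation.

```python
import torch

def random_token_masking(tokens: torch.Tensor, vocab_size: int, mask_ratio: float = 0.1):
    """Randomly replace a fraction of ground-truth input tokens with random tokens
    during training, so the model learns to recover from its own imperfect inputs."""
    noise = torch.randint_like(tokens, vocab_size)
    corrupt = torch.rand(tokens.shape, device=tokens.device) < mask_ratio
    return torch.where(corrupt, noise, tokens)

def balanced_attention_logits(logits: torch.Tensor, pose_positions: torch.Tensor, boost: float = 2.0):
    """Add a positive bias to the attention logits of pose (orientation/location) tokens
    before softmax, so the few control tokens are not drowned out by hundreds of image tokens."""
    biased = logits.clone()
    biased[..., pose_positions] += boost   # bias the key positions holding pose tokens
    return biased

# Toy usage: a state of 2 pose tokens followed by 512 image tokens (assumed layout).
tokens = torch.randint(0, 16384, (1, 514))
noisy = random_token_masking(tokens, vocab_size=16384)
logits = torch.randn(1, 8, 514, 514)                  # (batch, heads, query, key)
attn = balanced_attention_logits(logits, torch.tensor([0, 1])).softmax(dim=-1)
print(noisy.shape, attn.shape)
```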
5. Experimental Setup
- Datasets:
- Training: The model was trained on a massive dataset of over 3456 hours of driving data. This includes 120 hours from the public NuPlan dataset and 3336 hours of private data. The tokenizer was also pre-trained on large image/video datasets like Openimages, COCO, and YoutubeDV.
- Evaluation: The model was evaluated on the test sets of NuPlan and NuScenes. The NuScenes evaluation is "zero-shot," meaning the model was not trained on any NuScenes data.
- Evaluation Metrics:
- Fréchet Inception Distance (FID):
- Conceptual Definition: FID measures the quality and diversity of generated images compared to a set of real images. It calculates the distance between the distributions of deep features (from an InceptionV3 network) of the two sets. A lower FID means the generated images are statistically more similar to real images.
- Mathematical Formula:
$$\mathrm{FID} = \left\| \mu_r - \mu_g \right\|_2^2 + \mathrm{Tr}\!\left( \Sigma_r + \Sigma_g - 2\left( \Sigma_r \Sigma_g \right)^{1/2} \right)$$
- Symbol Explanation: $\mu_r$ and $\mu_g$ are the mean vectors of the Inception features for real and generated images, respectively. $\Sigma_r$ and $\Sigma_g$ are their covariance matrices. $\mathrm{Tr}$ is the trace of a matrix. (A small numerical sketch of FID and PSNR appears after this metrics list.)
- Fréchet Video Distance (FVD):
- Conceptual Definition: FVD extends FID to videos. It measures the quality and temporal consistency of generated videos. It computes the Fréchet distance between features extracted from real and generated videos using a pre-trained video classification network. Lower FVD indicates better video quality.
- Peak Signal-to-Noise Ratio (PSNR):
- Conceptual Definition: PSNR measures the pixel-level reconstruction quality of an image by comparing it to an original, ground-truth image. It is based on the mean squared error (MSE). A higher PSNR indicates a more accurate reconstruction.
- Mathematical Formula:
$$\mathrm{PSNR} = 10 \cdot \log_{10}\!\left( \frac{\mathrm{MAX}_I^2}{\mathrm{MSE}} \right)$$
- Symbol Explanation: $\mathrm{MAX}_I$ is the maximum possible pixel value of the image (e.g., 255 for an 8-bit image). $\mathrm{MSE}$ is the mean squared error between the original and reconstructed images.
- Learned Perceptual Image Patch Similarity (LPIPS):
- Conceptual Definition: LPIPS measures the perceptual similarity between two images. Unlike PSNR, which is pixel-based, LPIPS compares deep features from a pre-trained neural network, which aligns better with human perception of image similarity. A lower LPIPS score means the images are more perceptually similar.
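As referenced in the FID entry above, the two closed-form metrics can be computed in a few lines once features or images are available. The arrays below are random placeholders; in practice the FID features come from an InceptionV3 network.

```python
import numpy as np
from scipy.linalg import sqrtm

def fid(feats_real: np.ndarray, feats_gen: np.ndarray) -> float:
    """Frechet Inception Distance between two sets of (num_images, feat_dim) features."""
    mu_r, mu_g = feats_real.mean(axis=0), feats_gen.mean(axis=0)
    sigma_r = np.cov(feats_real, rowvar=False)
    sigma_g = np.cov(feats_gen, rowvar=False)
    covmean = sqrtm(sigma_r @ sigma_g).real   # matrix square root; drop numerical imaginary parts
    return float(np.sum((mu_r - mu_g) ** 2) + np.trace(sigma_r + sigma_g - 2 * covmean))

def psnr(img_ref: np.ndarray, img_test: np.ndarray, max_val: float = 255.0) -> float:
    """Peak Signal-to-Noise Ratio for two images with pixel values in [0, max_val]."""
    mse = np.mean((img_ref.astype(np.float64) - img_test.astype(np.float64)) ** 2)
    return float(10.0 * np.log10(max_val ** 2 / mse))

# Toy usage with random stand-ins for Inception features and 8-bit images.
rng = np.random.default_rng(0)
print(fid(rng.normal(size=(256, 64)), rng.normal(size=(256, 64))))
print(psnr(rng.integers(0, 256, (64, 64, 3)), rng.integers(0, 256, (64, 64, 3))))
```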
- Baselines: The paper compares DrivingWorld against several other driving world models, including GAIA-1, Vista, DriveDreamer, WoVoGen, GenAD, and the popular video generation model Stable Video Diffusion (SVD).
6. Results & Analysis
Core Results
- Long-time Video Generation: Image 5 showcases the model's ability to generate a 64-second video (640 frames at 10 Hz) from just 15 initial frames. The generated sequence maintains high visual fidelity and structural consistency over this long duration.
Figure: Long-horizon frames from a generated driving video, showing sample frames across timesteps 1 through 640 and illustrating DrivingWorld's ability to generate more than 40 seconds of high-fidelity, continuous scenes.
- Quantitative Comparison: In the following manually transcribed table (reconstructed from Table 2 in the text), DrivingWorld is compared to other methods on the NuScenes dataset. It achieves competitive FID and FVD scores while generating significantly longer videos. (Note: "DrivingWorld (w/o P)" likely refers to the model trained without the large private dataset.)

Table 2: Quantitative comparison on NuScenes (manually transcribed from the paper text)

| Method | FID ↓ | FVD ↓ | Max Duration / Frames |
|---|---|---|---|
| DriveDreamer [37] | 52.6 | 452.0 | 4s / 48 |
| WoVoGen [27] | 27.6 | 417.7 | 2.5s / 5 |
| Drive-WM [38] | 15.8 | 122.7 | 8s / 16 |
| Vista [9] | 6.9 | 89.4 | 15s / 150 |
| DriveGAN [30] | 73.4 | 502.3 | N/A |
| GenAD (OpenDV) [42] | 15.4 | 184.0 | 4s / 8 |
| DrivingWorld (w/o P) | 16.4 | 174.4 | 30s / 300 |
| DrivingWorld | 7.4 | 90.9 | 40s / 400 |
Analysis: DrivingWorld is highly competitive with the state-of-the-art Vista on FID/FVD metrics, despite the evaluation being zero-shot. Crucially, its maximum generation duration (40s) far exceeds all competitors.
- Qualitative Comparison: As shown in Image 6, when compared to SVD on a NuScenes scene, DrivingWorld produces videos with better temporal consistency, preserving details such as street lanes and vehicle identities more effectively over time.
Figure: A side-by-side comparison on a zero-shot NuScenes scene, showing the detail inside the red boxes across 26 generated frames for SVD versus the proposed method; the proposed method preserves street lane markings and vehicle identities better.
- Image Tokenizer Comparison: The proposed Temporal-aware tokenizer outperforms other VQVAE methods, including a fine-tuned Llama-Gen, across all metrics (FVD, FID, PSNR, LPIPS), confirming its effectiveness in producing high-quality and temporally consistent token sequences.

Table 3: Quantitative comparison of VQVAE methods (manually transcribed from the paper)

| VQVAE Method | FVD12 ↓ | FID ↓ | PSNR ↑ | LPIPS ↓ |
|---|---|---|---|---|
| VAR [34] | 164.66 | 11.75 | 22.35 | 0.2018 |
| VQGAN [8] | 156.58 | 8.46 | 21.52 | 0.2602 |
| Llama-Gen [33] | 57.78 | 5.99 | 22.31 | 0.2054 |
| Llama-Gen [33] Finetuned | 20.33 | 5.19 | 22.71 | 0.1909 |
| Temporal-aware (Ours) | 14.66 | 4.29 | 23.82 | 0.1828 |
Ablation Study
- Random Masking Strategy (RMS): Removing the RMS during training leads to a significant increase in FVD, especially for longer videos (FVD40). This confirms that the masking strategy is crucial for preventing content drifting and improving the model's robustness during long-term inference.

Table 4: Impact of the Random Masking Strategy (manually transcribed from the paper)

| Method | FVD10 ↓ | FVD25 ↓ | FVD40 ↓ |
|---|---|---|---|
| w/o Masking | 449.40 | 595.49 | 662.60 |
| Ours | 445.22 | 574.57 | 637.60 |
- Comparison with the Vanilla GPT structure: The proposed decoupled architecture is vastly superior to a vanilla GPT-2 approach.
- Performance: DrivingWorld achieves dramatically lower (better) FVD scores, indicating much higher video quality (Table 5).
- Efficiency: The vanilla GPT-2's memory usage grows quadratically with sequence length and quickly runs out of memory (OOM). DrivingWorld's memory consumption scales much more gracefully, making long-sequence processing feasible (Table 6).

Table 5: Performance vs. GPT-2 (manually transcribed from the paper)

| Method | FVD10 ↓ | FVD25 ↓ | FVD40 ↓ |
|---|---|---|---|
| GPT-2 [29] | 2976.97 | 3505.22 | 4017.15 |
| Ours | 445.22 | 574.57 | 637.60 |

Table 6: Memory usage (GB) vs. GPT-2 (manually transcribed from the paper)

| Num. of frames | 5 | 6 | 7 | 8 | 9 | 10 | 15 |
|---|---|---|---|---|---|---|---|
| GPT-2 [29] | 31.555 | 39.305 | 47.237 | 55.604 | 66.169 | 77.559 | OOM |
| Ours | 21.927 | 24.815 | 26.987 | 29.877 | 31.219 | 34.325 | 45.873 |
7. Conclusion & Reflections
- Conclusion Summary: DrivingWorld successfully adapts the GPT framework for autonomous driving by introducing a novel spatio-temporal architecture. By decoupling temporal and spatial modeling and incorporating specialized strategies for long-term consistency (RMS) and control (Balanced Attention), the model generates high-fidelity, controllable driving videos over 40 seconds long. This work represents a significant step forward in building powerful, scalable world models for autonomous driving simulation.
- Limitations & Future Work: The authors state their intention to extend the model by:
- Incorporating more multimodal information (e.g., LiDAR, radar).
- Integrating multiple camera view inputs to build a more comprehensive 3D understanding of the environment.
- Personal Insights & Critique:
- Strengths: The paper's core strength is its thoughtful architectural design. The decoupling of temporal and spatial modeling is an elegant solution to the scaling and performance issues of vanilla GPTs for video. The ablation studies convincingly demonstrate the value of each proposed component, from the temporal-aware tokenizer to the RMS and balanced attention.
- Potential Weaknesses: The model's impressive performance relies heavily on a very large, private dataset (3336 hours). This makes it difficult for the broader research community to reproduce the results and raises questions about how well the approach would work with only publicly available data. The "DrivingWorld (w/o P)" results in Table 2 suggest there is a noticeable performance drop without this private data.
- Future Impact: This work provides a strong blueprint for future research into autoregressive world models. The principles of decoupling information streams (spatial, temporal, multimodal) and designing specific training strategies (like RMS) to handle autoregressive error accumulation are highly transferable to other domains beyond driving, such as robotics and long-form video synthesis. It reinforces the idea that directly applying models from one domain (NLP) to another (vision) is rarely optimal; thoughtful adaptation is key.