
VideoVerse: How Far is Your T2V Generator from a World Model?

Zeqing Wang1,4*, Xinyu Wei2,4*, Bairui Li2,4*, Zhen Guo2,4, Jinrui Zhang2,4, Hongyang Wei3,4, Keze Wang1†, Lei Zhang2,4†
1Sun Yat-sen University, 2Hong Kong Polytechnic University, 3Tsinghua University
4OPPO Research Institute

*Equal contribution, †Corresponding author


Abstract

The recent rapid advancement of Text-to-Video (T2V) generation, a technology critical for building "world models", has made existing benchmarks increasingly insufficient for evaluating state-of-the-art T2V models. First, current evaluation dimensions, such as per-frame aesthetic quality and temporal consistency, can no longer differentiate state-of-the-art T2V models. Second, event-level temporal causality, which not only distinguishes video from other modalities but also constitutes a crucial component of world models, is severely underexplored in existing benchmarks. Third, existing benchmarks lack a systematic assessment of world knowledge, which is an essential capability for building world models. To address these issues, we introduce VideoVerse, a comprehensive benchmark that evaluates whether a T2V model can understand complex temporal causality and world knowledge in the real world. We collect representative videos across diverse domains (e.g., natural landscapes, sports, indoor scenes, science fiction, chemical and physical experiments) and extract their event-level descriptions with inherent temporal causality, which are then rewritten into text-to-video prompts by independent annotators. For each prompt, we design a suite of binary evaluation questions covering both dynamic and static properties, with a total of ten carefully defined evaluation dimensions. In total, VideoVerse comprises 300 carefully curated prompts, involving 815 events and 825 binary evaluation questions. On top of these, we develop a human-preference-aligned, QA-based evaluation pipeline built on modern vision-language models. Finally, we perform a systematic evaluation of state-of-the-art open-source and closed-source T2V models on VideoVerse, providing an in-depth analysis of how far current T2V generators are from world models.
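The QA-based protocol above can be sketched in a few lines. The record layout, dimension names, and the aggregation rule (counting passed binary questions per dimension, with Overall as their sum) are illustrative assumptions, not the authors' exact implementation; in the real pipeline the `answer` field would come from a vision-language-model judge.

```python
from collections import defaultdict

# Hypothetical judge outputs: each binary question about a generated video
# is tagged with one of the ten evaluation dimensions; answer=True means
# the VLM judge decided the video passes that check.
qa_results = [
    {"prompt_id": 0, "dimension": "Event Following", "answer": True},
    {"prompt_id": 0, "dimension": "Camera Control", "answer": False},
    {"prompt_id": 1, "dimension": "Mechanics", "answer": True},
]

def aggregate(results):
    """Count passed binary questions per dimension; Overall is their sum."""
    per_dim = defaultdict(int)
    for r in results:
        per_dim[r["dimension"]] += int(r["answer"])
    per_dim["Overall"] = sum(v for k, v in per_dim.items() if k != "Overall")
    return dict(per_dim)

scores = aggregate(qa_results)
```

Under this assumed scheme, a model's leaderboard row is just the per-dimension pass counts over all 825 questions.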


VideoVerse Benchmark Leaderboard (Evaluated by Gemini 2.5 Pro)

Dynamic dimensions: Event Following, Camera Control, Interaction, Mechanics, Material Properties, Natural Constraints, Common Sense. Static dimensions: Attribute Correctness, 2D Layout, 3D Depth.

| Model | Overall | Event Following | Camera Control | Interaction | Mechanics | Material Properties | Natural Constraints | Common Sense | Attr. Correctness | 2D Layout | 3D Depth |
|---|---|---|---|---|---|---|---|---|---|---|---|
| *Open-Source Models* | | | | | | | | | | | |
| CogVideoX1.5 (S) | 922 | 424 | 37 | 37 | 25 | 26 | 36 | 41 | 178 | 66 | 52 |
| CogVideoX1.5 (L) | 916 | 426 | 38 | 38 | 28 | 22 | 38 | 38 | 183 | 58 | 47 |
| SkyReels-V2 (S) | 963 | 484 | 43 | 37 | 30 | 22 | 32 | 43 | 161 | 61 | 50 |
| SkyReels-V2 (L) | 997 | 511 | 37 | 42 | 33 | 24 | 36 | 36 | 169 | 62 | 47 |
| Wan2.1-14B | 998 | 496 | 43 | 34 | 32 | 24 | 35 | 46 | 168 | 68 | 52 |
| Hunyuan | 923 | 446 | 39 | 32 | 34 | 25 | 37 | 42 | 160 | 60 | 48 |
| OpenSora2.0 | 1015 | 482 | 48 | 36 | 29 | 27 | 48 | 50 | 182 | 62 | 51 |
| Wan2.2-A14B | 1112 | 567 | 61 | 36 | 39 | 30 | 37 | 44 | 185 | 64 | 49 |
| *Closed-Source Models* | | | | | | | | | | | |
| Minimax-Hailuo | 1241 | 623 | 76 | 44 | 42 | 36 | 55 | 53 | 188 | 69 | 55 |
| Veo-3 | 1334 | 780 | 77 | 54 | 50 | 36 | 68 | 58 | 188 | 68 | 55 |

BibTeX

@article{YourPaperKey2024,
  title={VideoVerse: How Far is Your T2V Generator from a World Model?},
  author={Zeqing Wang and Xinyu Wei and Bairui Li and Zhen Guo and Jinrui Zhang and Hongyang Wei and Keze Wang and Lei Zhang},
  journal={Conference/Journal Name},
  year={2024},
  url={https://your-domain.com/your-project-page}
}