An overview of MUGEN.

Multimodal video-audio-text understanding and generation can benefit from datasets that are narrow but rich. The narrowness allows bite-sized challenges that the research community can make progress on, while the richness ensures we are making progress along the core challenges. To this end, we present MUGEN, a large-scale video-audio-text dataset collected using the open-source platform game CoinRun. We made substantial modifications to enrich the game by introducing audio and enabling new interactions. We trained RL agents with different objectives to navigate the game and interact with 13 objects and characters, which allows us to automatically extract a large and diverse collection of videos with associated audio. We sample 375K video clips (3.2s each) and collect text descriptions from human annotators. Each video also has annotations extracted automatically from the game engine, such as accurate semantic maps for each frame and templated textual descriptions. Altogether, MUGEN can help progress research in many tasks in multimodal understanding and generation. We benchmark representative approaches on tasks involving video-audio-text retrieval and generation. Both MUGEN and the enhanced game engine will be released to serve as a playground for multimodal research.
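
To make the per-clip annotations concrete, below is a minimal, hypothetical sketch (in Python) of what a single MUGEN record could contain. The field names, file paths, and frame count are illustrative assumptions for exposition, not the released data format.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical per-clip record; names and paths are illustrative,
# not the official MUGEN release format.
@dataclass
class MugenClip:
    video_path: str                # 3.2s video clip rendered by the game engine
    audio_path: str                # audio track aligned with the video
    human_text: List[str]          # free-form descriptions from human annotators
    template_text: str             # auto-generated templated description
    semantic_map_paths: List[str]  # per-frame semantic maps from the game engine

# Example with placeholder values only.
clip = MugenClip(
    video_path="clips/000123.mp4",
    audio_path="clips/000123.wav",
    human_text=["The agent jumps over a gap and collects a coin."],
    template_text="The agent walks right, jumps, and collects a coin.",
    semantic_map_paths=[f"maps/000123/frame_{i:03d}.png" for i in range(96)],  # illustrative frame count
)
```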


Team

Meta AI
University of Rochester
University of Maryland
* equal contribution, ordered alphabetically

Citation

MUGEN: A Playground for Video-Audio-Text Multimodal Understanding and GENeration

Thomas Hayes*, Songyang Zhang*, Xi Yin, Guan Pang, Sasha Sheng, Harry Yang, Songwei Ge, Qiyuan Hu, Devi Parikh



Template borrowed from nocaps. Thanks!