MDD: A Dataset for Text-and-Music Conditioned Duet Dance Generation

Prerit Gupta, Jason Alexander Fotso-Puepi, Zhengyuan Li, Jay Mehta, Aniket Bera
Purdue University, West Lafayette
ICCV 2025

Abstract

We introduce Multimodal DuetDance (MDD), a diverse multimodal benchmark dataset designed for text-controlled and music-conditioned 3D duet dance motion generation. Our dataset comprises 620 minutes of high-quality motion capture data performed by professional dancers, synchronized with music and annotated with over 10K fine-grained natural language descriptions. The annotations capture a rich movement vocabulary, detailing spatial relationships, body movements, and rhythm, making MDD the first dataset to seamlessly integrate human motions, music, and text for duet dance synthesis. We introduce two novel tasks supported by our dataset: (1) Text-to-Duet, where, given music and a textual prompt, both the leader's and follower's dance motions are generated; and (2) Text-to-Dance Accompaniment, where, given music, a textual prompt, and the leader's motion, the follower's motion is generated in a cohesive, text-aligned manner.
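
To make the two task formulations concrete, below is a minimal Python sketch of their input/output interfaces. The sample schema, function names, joint count, and tensor shapes are illustrative assumptions for exposition only, not the dataset's actual API; the functions are placeholders standing in for real generative models.

from dataclasses import dataclass
import numpy as np

@dataclass
class DuetSample:
    """One hypothetical MDD sample: synchronized duet motion, music, and text."""
    leader_motion: np.ndarray    # (T, J, 3) leader joint positions over T frames
    follower_motion: np.ndarray  # (T, J, 3) follower joint positions over T frames
    music: np.ndarray            # (T, D) per-frame music features (assumed representation)
    text: str                    # fine-grained natural-language description

def text_to_duet(music: np.ndarray, text: str) -> tuple[np.ndarray, np.ndarray]:
    """Task 1 (Text-to-Duet): generate both leader and follower motions
    from music and a textual prompt. Placeholder returning zero motion."""
    T, J = music.shape[0], 22  # J = 22 joints is an illustrative skeleton size
    return np.zeros((T, J, 3)), np.zeros((T, J, 3))

def text_to_accompaniment(music: np.ndarray, text: str,
                          leader_motion: np.ndarray) -> np.ndarray:
    """Task 2 (Text-to-Dance Accompaniment): generate the follower's motion
    conditioned on music, text, and the leader's motion. Placeholder."""
    return np.zeros_like(leader_motion)

The key structural difference between the tasks is visible in the signatures: Text-to-Duet produces both dancers' motions from music and text alone, while Text-to-Dance Accompaniment additionally conditions on the leader's motion and outputs only the follower's.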

Video Presentation

Example duet videos by genre: Ballroom, Latin, and Social.

BibTeX

@misc{gupta2025mdddatasettextandmusicconditioned,
      title={MDD: A Dataset for Text-and-Music Conditioned Duet Dance Generation}, 
      author={Prerit Gupta and Jason Alexander Fotso-Puepi and Zhengyuan Li and Jay Mehta and Aniket Bera},
      year={2025},
      eprint={2508.16911},
      archivePrefix={arXiv},
      primaryClass={cs.GR},
      url={https://arxiv.org/abs/2508.16911}, 
}