Music-driven dance generation is a challenging task, as models must respect genre conventions, preserve physical realism, and achieve fine-grained synchronization between movement and musical beat and rhythm. Despite recent progress in music-conditioned generation, many methods still struggle to express distinctive genre-specific style. We present GCDance, a diffusion-based framework for genre-specific 3D full-body dance generation conditioned on music and descriptive text. The approach introduces a text-based control mechanism that converts prompts, including explicit genre labels and free-form descriptions, into genre-specific control signals, enabling accurate and controllable synthesis of genre-consistent motion.
@article{liu2025gcdance,
title={GCDance: Genre-Controlled 3D Full Body Dance Generation Driven By Music},
author={Liu, Xinran and Dong, Xu and Qian, Shenbin and Kanojia, Diptesh and Wang, Wenwu and Feng, Zhenhua},
journal={arXiv preprint arXiv:2502.18309},
year={2025}
}