JCSE, vol. 18, no. 4, pp. 181-195, 2024
DOI: http://dx.doi.org/10.5626/JCSE.2024.18.4.181
Generative but Controllable Motion Generation through Key Poses and Context Vectors
Jaeyeong Ryu, Soungsill Park, and Youngho Chai
Graduate School of Advanced Imaging Science, Chung-Ang University, Seoul, South Korea
Abstract: We investigate a motion generation model capable of producing desired motions using minimal pose data. Although our model resembles conventional motion interpolation models in terms of motion data input, it differs in its ability to generate diverse motions tailored to user intentions. To differentiate it from motion interpolation models, we establish motion recognition and controllable motion generation systems utilizing pretrained generative models. We develop the motion recognition system using a latent vector derived from the pretrained model's encoder; this vector encodes substantial contextual information and can be classified by a simple linear support vector machine. The controllable motion generation system, built on the pretrained model's decoder, employs the recognized latent vector together with the input poses. In experiments, our model achieves higher accuracy in generated motions than text-based motion generation models. We also compare our model with motion interpolation models, showing comparable performance. Furthermore, we validate the efficacy of skip connections through qualitative evaluations. Finally, we confirm that our system can generate various types of motion using latent vectors.
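The following is a minimal sketch (not the authors' code) of the two-stage pipeline the abstract describes: (1) recognize a motion class from the pretrained encoder's latent context vector with a linear SVM, and (2) generate a motion by feeding that latent vector and a few key poses to the pretrained decoder. All module definitions, dimensions, and data here are hypothetical placeholders standing in for the paper's pretrained generative model.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.svm import LinearSVC

POSE_DIM, LATENT_DIM, SEQ_LEN = 63, 256, 60  # assumed sizes, not from the paper

class Encoder(nn.Module):
    """Stand-in for the pretrained generative model's encoder."""
    def __init__(self):
        super().__init__()
        self.net = nn.GRU(POSE_DIM, LATENT_DIM, batch_first=True)
    def forward(self, poses):                     # (B, T, POSE_DIM)
        _, h = self.net(poses)
        return h[-1]                              # context vector: (B, LATENT_DIM)

class Decoder(nn.Module):
    """Stand-in for the pretrained decoder; conditions every frame on the
    context vector concatenated with the (zero-padded) key-pose sequence."""
    def __init__(self):
        super().__init__()
        self.net = nn.GRU(POSE_DIM + LATENT_DIM, LATENT_DIM, batch_first=True)
        self.out = nn.Linear(LATENT_DIM, POSE_DIM)
    def forward(self, key_poses, z):              # (B, T, POSE_DIM), (B, LATENT_DIM)
        cond = z.unsqueeze(1).expand(-1, key_poses.size(1), -1)
        h, _ = self.net(torch.cat([key_poses, cond], dim=-1))
        return self.out(h)                        # generated motion: (B, T, POSE_DIM)

encoder, decoder = Encoder().eval(), Decoder().eval()

# --- Recognition: linear SVM on frozen latent vectors (toy data) ---
with torch.no_grad():
    train_motions = torch.randn(32, SEQ_LEN, POSE_DIM)   # placeholder dataset
    train_labels = np.random.randint(0, 4, size=32)      # 4 toy motion classes
    z_train = encoder(train_motions).numpy()
svm = LinearSVC().fit(z_train, train_labels)

# --- Controllable generation: sparse key poses + recognized context vector ---
with torch.no_grad():
    key_poses = torch.zeros(1, SEQ_LEN, POSE_DIM)        # only a few frames filled
    key_poses[0, [0, SEQ_LEN // 2, SEQ_LEN - 1]] = torch.randn(3, POSE_DIM)
    z = encoder(key_poses)                               # latent context vector
    print("predicted class:", svm.predict(z.numpy())[0])
    motion = decoder(key_poses, z)
    print("generated motion shape:", tuple(motion.shape))
```

Under these assumptions, swapping the context vector z while keeping the same key poses is what would steer the decoder toward different motion types, which is the controllability the abstract claims.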
Keywords:
Motion generation; Action recognition; Generative pre-trained transformer; Metaverse applications