Stanford University researchers have developed a generative AI model that can choreograph human dance animation to match any piece of music. Called EDGE (Editable Dance GEneration), the tool will help choreographers design sequences and communicate their ideas to live dancers by visualising 3D dance sequences. Key to the program's advanced capabilities is its editability: the researchers believe EDGE could be used to create computer-animated dance sequences by letting animators intuitively edit any part of a dance motion.
For example, an animator can design specific leg movements for a character, and EDGE will “auto-complete” the rest of the body from that positioning in a way that is realistic, seamless, and physically plausible, so that a human dancer could actually perform the moves. Above all, the generated moves remain consistent with the animator's choice of music.
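The "auto-complete" behaviour described above resembles constraint-guided infilling, in which a generative model repeatedly refines a full-body motion while the animator's edited joints are held fixed. The sketch below is a minimal illustration of that idea only; the array shapes, the `denoise_step` function, and the loop structure are assumptions for illustration, not EDGE's actual implementation.

```python
import numpy as np

def autocomplete(motion_constraint, mask, denoise_step, steps=50, rng=None):
    """Fill in unconstrained joints while keeping the animator's edits fixed.

    motion_constraint: (frames, joints) array holding the specified poses
    mask: boolean array, True where the animator fixed a value
    denoise_step: hypothetical model call that refines a noisy motion estimate
    """
    if rng is None:
        rng = np.random.default_rng(0)
    x = rng.standard_normal(motion_constraint.shape)  # start from random motion
    for t in reversed(range(steps)):
        x = denoise_step(x, t)             # model proposes full-body motion
        x[mask] = motion_constraint[mask]  # re-impose the animator's edits
    return x
```

Because the edited values are re-imposed after every refinement step, the model's proposal for the unconstrained joints is continually pulled toward motion that agrees with the fixed ones, which is what makes the completed sequence look seamless around the animator's edits.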
Like other generative models for images and text, such as ChatGPT and DALL-E, EDGE represents a new tool for choreographic idea generation and movement planning. This editability means that dance artists and choreographers can iteratively refine their sequences move by move and position by position, adding specific poses at precise moments; EDGE then incorporates the additional details into the sequence automatically. In the near future, EDGE will allow users to input their own music and even demonstrate the moves themselves in front of a camera.
(Information and Image Courtesy: Stanford University).