AI & Art: Can A Robot Dance The Dying Swan?

Yolanda Spinola Elias, professor at the University of Seville and guest researcher at the department of computer science, and Prof Christoph Schommer, programme manager at the AI&Art pavilion (Photo © University of Luxembourg)

Can a robot recreate a dance that touches the soul as deeply as the world’s greatest dancers have done? It is one of several artistic challenges being worked on at the University of Luxembourg.

When you picture a robot, what do you see? Perhaps your first thoughts are of a surveillance drone, a defence tool or precision technology to carry out surgery. But a robot dancing The Dying Swan? Why not, says Nooshin Shojaee, a student working with dancer Maria Betania Antico to develop AI that could, one day, do just that.

The iconic solo was first choreographed by Mikhail Fokine to Camille Saint-Saëns’s Le Cygne for the ballerina Anna Pavlova. Using fluttering movements to convey the bird’s death, the dance is considered exceptional because of the technical skill it demands of the dancer. Could a robot reproduce such intricate choreography?

The computer scientist and dancer are exploring ways to do this by using different AI models to generate dance sequences based on the input music.

“It was important for her [Antico] to see how the avatar is able to mimic the body movement of the dancer, who is already mimicking the movement of a swan dying. It’s already a challenging task for a human,” says Shojaee.

She ran the experiment several times and said she was sometimes surprised to see how well the AI developed an understanding of the movement. In the next step, she took a video of a ballerina dancing the piece and applied pose detection to the footage to compare the AI performer against the human dancer, frame by frame.

“There were some poses that didn’t completely match. It depends a lot on how complete the training data set is. It was a very interesting experiment to see how AI can dance with this music, and the dying swan.”
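
How closely the two performances line up can be checked with off-the-shelf pose estimation. The sketch below is only a minimal illustration of that kind of frame-by-frame comparison, assuming MediaPipe Pose and OpenCV; the file names, the distance metric and the threshold are hypothetical, not the team’s actual pipeline.

```python
# Sketch: compare two dance videos pose by pose, frame by frame.
# Assumes MediaPipe Pose and OpenCV; names and metric are illustrative only.
import cv2
import numpy as np
import mediapipe as mp

mp_pose = mp.solutions.pose

def pose_landmarks(frame, pose):
    """Return an (N, 2) array of normalised landmark positions, or None."""
    result = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if result.pose_landmarks is None:
        return None
    return np.array([(lm.x, lm.y) for lm in result.pose_landmarks.landmark])

def compare_videos(human_path, avatar_path):
    """Mean per-joint distance between the two performances, per frame."""
    cap_a, cap_b = cv2.VideoCapture(human_path), cv2.VideoCapture(avatar_path)
    distances = []
    with mp_pose.Pose(static_image_mode=False) as pose_a, \
         mp_pose.Pose(static_image_mode=False) as pose_b:
        while True:
            ok_a, frame_a = cap_a.read()
            ok_b, frame_b = cap_b.read()
            if not (ok_a and ok_b):
                break
            a = pose_landmarks(frame_a, pose_a)
            b = pose_landmarks(frame_b, pose_b)
            if a is not None and b is not None:
                distances.append(np.linalg.norm(a - b, axis=1).mean())
    cap_a.release()
    cap_b.release()
    return distances

# Frames whose mean joint distance exceeds a (hypothetical) threshold would
# be flagged as the poses that "didn't completely match":
# mismatches = [i for i, d in enumerate(compare_videos("ballerina.mp4", "avatar.mp4")) if d > 0.1]
```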

The student-artist collaborative project is one of 12 being showcased at the University of Luxembourg’s Computational Creativity Hub (CCH), an exhibition space where people interested in AI can meet and talk.

The project began two years ago with a photobooth developed through a machine-learning course. “We produced 12 calendar images in the style of a different painter,” Prof Christoph Schommer, programme manager at the AI&Art pavilion, explains. The visitor has their photo taken in a booth with a QT robot acting as the interface. They then select an artistic style in which the photo is reproduced, similar to a filter. The interaction and machine-learning system behind the photo booth uses generative adversarial networks, says Prof Schommer.
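
The article notes only that a generative adversarial network sits behind the booth. As a rough, hypothetical illustration of the adversarial idea rather than the pavilion’s actual model, the PyTorch sketch below trains a generator to produce images that a discriminator cannot tell apart from real ones; all sizes and layers are invented for the example.

```python
# Minimal GAN training step in PyTorch: illustrative of the adversarial setup
# only, not the photo booth's architecture or training data.
import torch
import torch.nn as nn

latent_dim, image_dim = 64, 28 * 28  # hypothetical sizes

generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, image_dim), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Linear(image_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCELoss()

real_images = torch.rand(32, image_dim)  # stand-in for a batch of photos

# Discriminator step: real images should score 1, generated ones 0.
noise = torch.randn(32, latent_dim)
fake_images = generator(noise)
d_loss = loss_fn(discriminator(real_images), torch.ones(32, 1)) + \
         loss_fn(discriminator(fake_images.detach()), torch.zeros(32, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step: try to make the discriminator score fakes as real.
g_loss = loss_fn(discriminator(fake_images), torch.ones(32, 1))
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

In a style-transfer setting like the photobooth’s, the “real” batch would be images in the chosen painterly style rather than random tensors.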

Origins of AI and Art

MIT pioneered the pairing of AI and Art with its Media Lab, founded in 1985, although interdisciplinary research existed long before then (think Leonardo da Vinci). Today, there are numerous labs around the world. “They have different names. Some are called fab labs, another is the living lab, where people are integrated into the lab as part of an experiment,” says Yolanda Spinola Elias, professor at the University of Seville and guest researcher at the department of computer science. She stresses their importance as places to explore. If the prototypes developed in these kinds of labs do not work as hoped, that does not mean it is “a failure. We need to do this experimentation and have good and bad results, because that’s how we advance with innovation.”

MIT’s alumni go on to become inventors and entrepreneurs, finding solutions to global challenges. The CCH team has similar hopes for its cohort, but it also aims to assist artists in the creative process.

“One of the most important things is that for artists AI is a tool to be discovered and it can bring help to their art,” says Dr Sana Nouzri, who is responsible for matching artists and students.

The lab demonstrates that the possibilities are vast. For anyone who ever wanted to slip into the shoes of one of the most enigmatic women in art, Mirror Mirror does just that, enabling participants to see their facial expressions projected onto the Mona Lisa. The deepfake technology then inverts the features, so that when the subject smiles, the projected image frowns.

“The artist who proposed this project wants to explain to the visitor the other projections of yourself,” explains research assistant Prince Yaw Gharbin. In another installation, participants can generate their own film soundtrack based on hand and facial movements, part of an experiment in enhancing interaction. Yet another uses VR to pitch the participant into the path of a black hole and track their emotional response.

“Each artist comes with a new collaboration idea so we continue to work on the project,” says Dr Nouzri.

For the dance project, the next stage of the collaboration will involve programming a robot so that it can reproduce the dance. “That would be another experiment, to see how the robot dances,” says Shojaee.

The CCH is open three days a week with interactive presentations on most Saturdays.

To find out more, visit CCH.
