

Emotion-conditioned music generation using a Transformer-based model.

This is a demo accompanying the repository EMOPIA: A Multi-Modal Pop Piano Dataset For Emotion Recognition and Emotion-based Music Generation. The paper has been accepted by the International Society for Music Information Retrieval (ISMIR) Conference 2021.

The piano music is generated by a Transformer-based model and delivered in MIDI format. During training, the model takes a sequence of music tokens as input and outputs a sequence of music tokens. We pre-train the model on a published piano dataset called AILabs1k7, and then fine-tune and condition it on our self-collected dataset, EMOPIA.
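As a minimal sketch of the token-in/token-out training setup described above (the actual tokenization and model live in the EMOPIA repository; the helper below is purely illustrative), the model is trained with standard next-token language modeling, where the target sequence is the input shifted by one step:

```python
# Illustrative sketch only: the real EMOPIA pipeline uses its own music-token
# vocabulary and a Transformer model. Here we just show how an input/target
# pair for next-token prediction is formed from one token sequence.
def make_training_pair(tokens):
    """Shift the sequence by one: the model reads tokens[:-1] and is trained
    to predict tokens[1:], one token at a time."""
    return tokens[:-1], tokens[1:]

x, y = make_training_pair([5, 8, 13, 21])
# x is [5, 8, 13]; y is [8, 13, 21]
```

At generation time, the same model is sampled autoregressively: each predicted token is appended to the input and fed back in.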

Motivated by CTRL, we prepend an emotion token to each music sequence so that the model is aware of the target emotion.
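The conditioning itself amounts to one extra token at the front of the sequence. A hedged sketch (the token IDs below are hypothetical; EMOPIA labels emotions by the four valence/arousal quadrants Q1 to Q4, and the real vocabulary is defined in its repository):

```python
# Hypothetical emotion-token IDs for the four valence/arousal quadrants;
# the actual IDs come from the EMOPIA tokenizer vocabulary.
EMOTION_TOKENS = {"Q1": 0, "Q2": 1, "Q3": 2, "Q4": 3}

def condition_sequence(music_tokens, emotion):
    """CTRL-style conditioning: prepend the emotion token so the model
    sees the target emotion before any music tokens."""
    return [EMOTION_TOKENS[emotion]] + list(music_tokens)

seq = condition_sequence([17, 42, 99], "Q1")
# seq starts with the Q1 emotion token, followed by the music tokens
```

Because the emotion token is always the first position, the same mechanism works at inference time: fixing the first token steers the rest of the generated sequence toward that emotion.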

Objective and subjective evaluations show that generation quality becomes more stable when the model is pre-trained on a larger dataset, and that our Transformer-based model can, to a certain degree, generate music matching a given target emotion.

There is still room for improvement in the conditioning ability, so take a look at EMOPIA and create some fantastic works with it!
