amaai-lab / video2music

Video2Music: Suitable Music Generation from Videos using an Affective Multimodal Transformer model

  • GitHub
  • Paper
  • License

We propose a novel AI-powered multimodal music generation framework called Video2Music. The framework uses video features as conditioning input to an Affective Multimodal Transformer (AMT) to generate music that matches the input video. Our aim is to give video creators a seamless and efficient way to generate tailor-made background music.
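As a quick start, the sketch below shows one way to call this model through the Replicate Python client. The input field name (`video`) and the shape of the output are assumptions for illustration only; check the model's API tab on Replicate for the actual input schema and pinned version.

```python
# Minimal sketch, assuming the model accepts a "video" file input and returns
# a link to the generated audio. These names are NOT confirmed by this page;
# consult the model's API documentation before relying on them.
# Requires the `replicate` package and a REPLICATE_API_TOKEN environment variable.
import replicate

output = replicate.run(
    "amaai-lab/video2music",  # you may need to append a specific version hash, e.g. "owner/model:abc123"
    input={"video": open("my_clip.mp4", "rb")},  # assumed input field name
)
print(output)  # typically a URL (or list of URLs) pointing to the generated audio
```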