sakemin / musicgen-chord

Generate music restricted to chord sequences and tempo




Run time and cost

This model runs on Nvidia A100 (40GB) GPU hardware. Predictions typically complete within 5 minutes. The predict time for this model varies significantly based on the inputs.


MusicGen Chord

MusicGen Chord is a modified version of Meta’s MusicGen Melody model that can generate music conditioned on either audio-based or text-based chord sequences.

Text Based Chord Conditioning

Text Chord Condition Format

  • SPACE is used as the split token. Each chunk is assigned to a single bar.
    • C G E:min A:min
  • When multiple chords must be assigned within a single bar, append the additional chords with ,.
    • C G,G:7 E:min,E:min7 A:min
  • The chord type can be specified after :.
    • A single uppercase letter (e.g. C, E) is interpreted as a major chord.
    • maj, min, dim, aug, min6, maj6, min7, minmaj7, maj7, 7, dim7, hdim7, sus2 and sus4 can be appended after :.
      • eg. E:dim, B:sus2
  • Sharps and flats can be specified with # and b.
    • eg. E#:min Db
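The parsing rules above can be sketched in Python. This is an illustrative parser only, not the model's actual implementation:

```python
# Sketch of the text chord format: spaces split bars, commas split
# chords within a bar, and ":" separates the root from the chord type.
# A bare root (e.g. "C") is treated as a major chord.
def parse_chord_text(text):
    bars = []
    for bar in text.split(" "):
        chords = []
        for chord in bar.split(","):
            root, _, quality = chord.partition(":")
            chords.append((root, quality or "maj"))
        bars.append(chords)
    return bars

parse_chord_text("C G,G:7 E:min,E:min7 A:min")
# [[('C', 'maj')], [('G', 'maj'), ('G', '7')],
#  [('E', 'min'), ('E', 'min7')], [('A', 'min')]]
```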

BPM and Time Signature

  • To create chord chroma, bpm and time_sig values must be specified.
    • bpm can be a float value. (eg. 132, 60)
    • The format of time_sig is (int)/(int). (eg. 4/4, 3/4, 6/8, 7/8, 5/4)
  • The bpm and time_sig values are automatically appended to the prompt description, so you don’t need to include bpm or time signature information in the description yourself.
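As a rough sketch of that concatenation behavior — the exact join wording the model uses internally is an assumption here:

```python
# Illustrative only: bpm and time_sig get appended to the text prompt
# automatically; the precise phrasing used by the model is assumed.
def build_prompt(description, bpm, time_sig):
    return f"{description}, bpm : {bpm}, time signature : {time_sig}"

prompt = build_prompt("upbeat synth pop", 132, "4/4")
```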

Audio Based Chord Conditioning

Audio Chord Conditioning Instruction

  • You can also provide the chord condition as audio, with audio_chords.
  • With the audio_start and audio_end values, you can specify which part of the audio_chords input file will be used as the chord condition.
  • The chords are recognized from audio_chords using the BTC chord recognition model.
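A minimal input payload for audio-based chord conditioning might look like the following. The field names follow the inputs described above; the file URL is a placeholder:

```python
# Sketch of an input payload for audio-based chord conditioning.
# audio_start/audio_end select the segment (in seconds) whose chords
# the BTC model will recognize and use as the condition.
inputs = {
    "prompt": "smooth jazz trio",
    "audio_chords": "https://example.com/my_song.wav",  # placeholder URL
    "audio_start": 0,
    "audio_end": 30,
}
```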

Additional Feature


  • If continuation is True, the input audio file given at audio_chords will not be used as a chord condition; instead, the generated music will continue on from the given file.
  • You can still use the audio_start and audio_end values to crop the input audio file.

Infinite Generation

  • You can set duration longer than 30 seconds.
  • MusicGen can generate at most 30 seconds of audio in one iteration, so if the specified duration exceeds 30 seconds the model creates multiple sequences, using the latter portion of each output as the audio prompt (the same continuation mechanism) for the next generation step.
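The chunking scheme can be sketched as follows. The 5-second tail reused as the continuation prompt is an assumed value for illustration; the model's actual prompt length may differ:

```python
# Sketch of the chunked "infinite" generation plan: each window is at
# most 30 s, and the tail of one window overlaps the start of the next,
# serving as the continuation prompt.
MAX_CHUNK = 30

def plan_chunks(duration, overlap=5):
    """Return (start, end) windows covering `duration` seconds of output."""
    windows, start = [], 0
    while start < duration:
        end = min(start + MAX_CHUNK, duration)
        windows.append((start, end))
        start = end - overlap if end < duration else end
    return windows

plan_chunks(50)  # [(0, 30), (25, 50)]
```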

Multi-Band Diffusion

  • Multi-Band Diffusion (MBD) can be used for decoding the EnCodec tokens.
  • If the tokens are decoded with MBD, the output audio quality is better.
  • Using MBD takes more computation time, since it runs its own prediction sequence.

Fine-tuning MusicGen Chord

For instructions on fine-tuning MusicGen, see the blog post: Fine-tune MusicGen to generate music in any style



Dataset

  • Compressed files in formats like .zip, .tar, .gz, and .tgz can be uploaded as datasets.
  • Single audio files with .mp3, .wav, and .flac formats can also be uploaded.
  • Audio files within the dataset must exceed 30 seconds in duration.
  • Audio Chunking: Files longer than 30 seconds will be divided into multiple 30-second chunks.
  • Vocal Removal: If drop_vocals is set to True, the vocal tracks in the audio files will be isolated and removed. (Default: drop_vocals = True)
    • For datasets containing audio without vocals, setting drop_vocals = False reduces data preprocessing time and preserves audio quality.

Text Description

  • If each audio file requires a distinct description, create a .txt file with a single-line description corresponding to each .mp3 or .wav file. (eg. 01_A_Man_Without_Love.mp3 and 01_A_Man_Without_Love.txt)
  • For a uniform description across all audio files, set the one_same_description argument to your desired description (str). In this case, there’s no need for individual .txt files.
  • Auto Labeling: When auto_labeling is set to True, labels such as ‘genre’, ‘mood’, ‘theme’, ‘instrumentation’, ‘key’, and ‘bpm’ will be generated and added to each audio file in the dataset. (Default: auto_labeling = True)
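As a small helper sketch (not part of the model's tooling), per-track description files can be generated like this:

```python
# Write a matching single-line .txt description file for each audio
# file, using the same base name (e.g. song.mp3 -> song.txt).
from pathlib import Path

def write_descriptions(folder, descriptions):
    """descriptions maps audio filenames to one-line description strings."""
    for audio_name, text in descriptions.items():
        txt_path = Path(folder) / (Path(audio_name).stem + ".txt")
        txt_path.write_text(text.strip() + "\n")
```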

Train Parameters

Train Inputs

  • dataset_path: Path = Input(description="Path to dataset directory")
  • one_same_description: str = Input(description="A description for all of the audio data", default=None)
  • auto_labeling: bool = Input(description="Create label data like genre, mood, theme, instrumentation, key and bpm for each track, using essentia-tensorflow for music information retrieval.", default=True)
  • drop_vocals: bool = Input(description="Drop the vocal tracks from the audio files in the dataset, by separating sources with Demucs.", default=True)
  • lr: float = Input(description="Learning rate", default=1)
  • epochs: int = Input(description="Number of epochs to train for", default=3)
  • updates_per_epoch: int = Input(description="Number of iterations for one epoch", default=100)
    • If None, the number of iterations per epoch is set according to the dataset and batch size; otherwise it is set to this value.
  • batch_size: int = Input(description="Batch size", default=16)

Default Parameters

  • For 8-GPU multiprocessing, batch_size must be a multiple of 8. If not, batch_size will be automatically floored to the nearest multiple of 8.
  • For the chord model, the maximum batch_size is 16 on an 8 x Nvidia A40 machine.
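The flooring rule above is simply:

```python
# batch_size is floored to the nearest multiple of the 8 GPUs used
# for data-parallel training.
def floor_batch_size(batch_size, num_gpus=8):
    return (batch_size // num_gpus) * num_gpus

floor_batch_size(12)  # -> 8
```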

Example Code

The training call takes the inputs above; the version id, dataset URL, and destination below are placeholders to fill in with your own values.

import replicate

training = replicate.trainings.create(
    version="sakemin/musicgen-chord:<version-id>",  # placeholder version id
    input={"dataset_path": "<dataset-url>", "one_same_description": "description for your dataset music"},
    destination="<your-username>/<model-name>",  # placeholder destination
)