lucataco / bge-m3

BGE-M3, the first embedding model to support multiple retrieval modes (dense, sparse, and multi-vector), multilingual retrieval, and multi-granularity retrieval.

  • Public
  • 252 runs
  • T4
  • GitHub
  • Paper
  • License
  • Prediction

    lucataco/bge-m3:3af6c861
    ID: tdyhblrbhppnc7cf45f5jwvavu
    Status: Succeeded
    Source: Web
    Hardware: T4

    Input

    max_length: 4096
    sentences_1: What is BGE M3? Defination of BM25
    sentences_2: BGE M3 is an embedding model supporting dense retrieval, lexical matching and multi-vector interaction. BM25 is a bag-of-words retrieval function that ranks a set of documents based on the query terms appearing in each document
    embedding_type: dense

    Output

    [[0.626 0.3477] [0.3499 0.678 ]]
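The dense output above is a 2×2 similarity matrix: entry (i, j) is the inner product of the L2-normalized dense embeddings of the i-th query sentence and the j-th passage. A minimal sketch of that computation with toy 4-d vectors (the real model's embeddings, which these stand in for, are assumptions here):

```python
import numpy as np

# Toy stand-ins for the model's dense embeddings of the two queries
# and the two passages (real embeddings are much higher-dimensional).
queries = np.array([[0.2, 0.8, 0.1, 0.5],
                    [0.7, 0.1, 0.6, 0.2]])
passages = np.array([[0.3, 0.7, 0.2, 0.4],
                     [0.6, 0.2, 0.7, 0.1]])

# L2-normalize so the inner product is cosine similarity.
queries /= np.linalg.norm(queries, axis=1, keepdims=True)
passages /= np.linalg.norm(passages, axis=1, keepdims=True)

sim = queries @ passages.T  # shape (2, 2), like the output above
print(np.round(sim, 4))
```

As in the prediction above, each query scores highest against its matching passage (the diagonal entries).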
  • Prediction

    lucataco/bge-m3:3af6c861
    ID: wu2utkjbymdffjnyr4dgrroqjm
    Status: Succeeded
    Source: Web
    Hardware: T4

    Input

    max_length: 8192
    sentences_1: What is BGE M3? Defination of BM25
    sentences_2: BGE M3 is an embedding model supporting dense retrieval, lexical matching and multi-vector interaction. BM25 is a bag-of-words retrieval function that ranks a set of documents based on the query terms appearing in each document
    embedding_type: sparse

    Output

    0.19549560546875 0
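For `embedding_type: sparse`, the model assigns each text a set of per-token lexical weights, and a common way to score a pair (the one used in the FlagEmbedding reference code, assumed here) is to sum the products of weights over tokens that appear in both texts; with no shared tokens the score is zero, which matches the second score above. A sketch of that rule with made-up weights:

```python
# Toy lexical weights (token -> weight); real weights come from the model.
def lexical_match(w1, w2):
    """Sum of weight products over tokens present in both texts."""
    return sum(w1[t] * w2[t] for t in w1.keys() & w2.keys())

query = {"bge": 0.45, "m3": 0.40, "?": 0.05}
passage = {"bge": 0.30, "m3": 0.25, "embedding": 0.20, "model": 0.15}
unrelated = {"bm25": 0.50, "retrieval": 0.30}

print(lexical_match(query, passage))    # shared tokens -> positive score
print(lexical_match(query, unrelated))  # no overlap -> 0
```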
  • Prediction

    lucataco/bge-m3:3af6c861
    ID: vy65tbbbpvldplp3fqaqvodhti
    Status: Succeeded
    Source: Web
    Hardware: T4

    Input

    max_length: 8192
    sentences_1: What is BGE M3? Defination of BM25
    sentences_2: BGE M3 is an embedding model supporting dense retrieval, lexical matching and multi-vector interaction. BM25 is a bag-of-words retrieval function that ranks a set of documents based on the query terms appearing in each document
    embedding_type: colbert

    Output

    tensor(0.7796) tensor(0.4622)
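The `colbert` scores above come from multi-vector (late) interaction: each text is encoded as one vector per token, and the ColBERT-style scoring rule averages, over the query tokens, the maximum similarity to any passage token (MaxSim). A sketch with toy, already-normalized token vectors (the vectors themselves are assumptions; the real ones come from the model):

```python
import numpy as np

def colbert_score(q_vecs, p_vecs):
    """Mean over query tokens of the max dot product with any passage token."""
    sims = q_vecs @ p_vecs.T        # (num_query_tokens, num_passage_tokens)
    return sims.max(axis=1).mean()  # MaxSim, averaged over the query

# Toy L2-normalized token embeddings: 2 query tokens, 3 passage tokens.
q = np.array([[1.0, 0.0], [0.6, 0.8]])
p = np.array([[0.8, 0.6], [0.0, 1.0], [0.6, 0.8]])
print(colbert_score(q, p))
```

Because every query token gets to pick its best-matching passage token, this scoring is finer-grained than a single dense dot product, at the cost of storing one vector per token.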

Want to make some of these yourself?

Run this model
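A minimal sketch of running this model from code with the Replicate Python client (`pip install replicate`, with `REPLICATE_API_TOKEN` set). The helper name `embed` is hypothetical; the input field names mirror the example predictions above, and since the version hash shown on this page is truncated, pinning the full hash is left as a note rather than filled in:

```python
def embed(sentences_1, sentences_2, embedding_type="dense", max_length=8192):
    """Hypothetical helper that runs lucataco/bge-m3 on Replicate."""
    import replicate  # pip install replicate; requires REPLICATE_API_TOKEN

    return replicate.run(
        "lucataco/bge-m3",  # pin ":<full version hash>" from this page in practice
        input={
            "sentences_1": sentences_1,
            "sentences_2": sentences_2,
            "embedding_type": embedding_type,  # "dense", "sparse", or "colbert"
            "max_length": max_length,
        },
    )
```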