center-for-curriculum-redesign / bge_1-5_embedding

BAAI_bge-large-en v1.5 text embedding model for passage retrieval and sentence similarity

Run time and cost

This model runs on Nvidia A40 GPU hardware. We don't yet have enough runs of this model to provide performance information.

Readme

BAAI’s (English) embedding model, version 1.5, for passage retrieval and sentence similarity, using the Center For Curriculum Redesign’s Replicate Embedding Cog template.

As of September 2023, this model is state of the art on the MTEB benchmark.

  • output dimensions: 1024
  • parameters: 326M
  • MTEB score average: 64.23
| Dimension | Sequence Length | Average (56) | Retrieval (15) | Clustering (11) | Pair Classification (3) | Reranking (4) | STS (10) | Summarization (1) | Classification (12) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1024 | 512 | 64.23 | 54.29 | 46.08 | 87.12 | 60.03 | 83.11 | 31.61 | 75.97 |

Usage

This endpoint is structured primarily for generating passage/document embeddings. You can give it an excerpt_files JSON object of the following form as input, listing as many files as you need:

```json
{
  "file1.json": "http://someplace.com/somefile1.json",
  // ...
  "fileN.json": "https://someplace.com/somefileN.json"
}
```
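For example, a prediction might be created through the Replicate Python client. This is a minimal sketch under a few assumptions: the version hash and webhook URL are placeholders, and whether excerpt_files should be passed as a JSON-encoded string or a plain object should be checked against the model's input schema.

```python
# Minimal sketch, not canonical client code for this model.
# Placeholders: the version hash and the webhook URL.
import json

import replicate

excerpt_files = {
    "file1.json": "https://someplace.com/somefile1.json",
    "fileN.json": "https://someplace.com/somefileN.json",
}

prediction = replicate.predictions.create(
    version="<bge_1-5_embedding version hash>",
    # Assumption: the input is accepted as a JSON-encoded string;
    # the model's schema may instead accept a plain object.
    input={"excerpt_files": json.dumps(excerpt_files)},
    webhook="https://your-service.example/embeddings-ready",
    webhook_events_filter=["completed"],
)
print(prediction.id, prediction.status)
```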

The contents of each input file should be a JSON array of strings, each parsing to no more than 512 BERT tokens.
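To stay under that limit, you can pre-check your excerpts with the tokenizer of the underlying Hugging Face model. The sketch below assumes the BAAI/bge-large-en-v1.5 tokenizer matches the one used by this cog.

```python
# Sketch: check that each excerpt stays within the 512-token BERT limit.
import json

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("BAAI/bge-large-en-v1.5")

with open("file1.json") as f:
    excerpts = json.load(f)  # an array of strings

for i, text in enumerate(excerpts):
    n_tokens = len(tokenizer.encode(text))  # includes special tokens
    if n_tokens > 512:
        print(f"excerpt {i} is {n_tokens} tokens; split or truncate it")
```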

This endpoint will asynchronously download the files from the provided URLs and generate embeddings for the strings contained in each file. The embeddings will be stored as downloadable files named after the keys of your excerpt_files JSON. The webhook you specify will be sent URLs from which you can download the generated embeddings for each input file. Each generated embedding file will contain an array of 1024-dimensional vectors, where each vector is the embedding of the string at the same index in your input file.

Test your webhook code with a few small batches before attempting a large indexing run, and make sure it downloads the files within an hour of being notified of their availability.
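As a starting point for such a webhook, here is a rough sketch of a receiver. It assumes the prediction's output field is a mapping of input file names to embedding file URLs; verify that against a real payload from a small test batch before relying on it.

```python
# Sketch of a webhook receiver; the shape of "output" is an assumption.
import json
import urllib.request

from flask import Flask, request

app = Flask(__name__)


@app.post("/embeddings-ready")
def embeddings_ready():
    prediction = request.get_json()
    output = prediction.get("output") or {}
    # Download promptly: the files are only available for about an hour.
    for name, url in output.items():
        with urllib.request.urlopen(url) as resp:
            vectors = json.loads(resp.read())  # list of 1024-dim vectors
        with open(name, "w") as f:
            json.dump(vectors, f)
    return "", 204
```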