center-for-curriculum-redesign/bge_1-5_embedding
BAAI_bge-large-en v1.5 text embedding model for passage retrieval and sentence similarity
Run center-for-curriculum-redesign/bge_1-5_embedding with an API
Use one of our client libraries to get started quickly. Clicking on a library will take you to the Playground tab where you can tweak different inputs, see the results, and copy the corresponding code to use in your own project.
Input schema
The fields you can use to run this model with an API. If you don't give a value for a field its default value will be used.
Field | Type | Default value | Description |
---|---|---|---|
query_texts | string | `[]` | A serialized JSON array of strings you wish to generate *retrieval* embeddings for. (Note that you should keep this list short to avoid Replicate response size limitations.) Use this to embed short text queries intended for comparison against document text. A vector will be returned for each string in the input array, in order of input. This endpoint automatically formats your query strings for retrieval; you do not need to preprocess them. |
excerpt_files | string | `{ }` | A serialized JSON object where each value is a URL to a publicly accessible `.json` file containing an array of text chunks you wish to generate embeddings for, and each key is an identifier used to webhook-notify you that the embeddings for the file at that URL are ready (or that they have failed). If the contents of your file are successfully embedded, a new JSON file will be generated for you to download. It will contain an array of vectors, where the vector at each index is the embedding of the text excerpt at the corresponding index of the chunk array in your input file. Each vector is itself just an array, so the returned file is ultimately an array of arrays of floating-point numbers. |
normalize | boolean | `True` | Normalizes returned embedding vectors to a magnitude of 1. (Default: true, as this model presumes cosine-similarity comparisons downstream.) |
batchtoken_max | number | `200` (min: 0.5) | Maximum number of kibiTokens (1 kibiToken = 1024 tokens) to pack into a batch, to avoid out-of-memory errors while maximizing throughput. If the total number of tokens across the flattened list of requested embeddings exceeds this value, the list is split internally and run across multiple forward passes. This does not affect the shape of your output, only the time it takes to run. |
precision | string (enum) | `full` (options: full, half) | Numerical precision for inference computations, either full or half. Defaults to a paranoid value of full. You may want to test whether half is sufficient for your needs; regardless, you should prefer to use the same precision for querying as you do for archiving. |
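For concreteness, below is a minimal sketch of calling this model with the Replicate Python client. The model name is taken from this page, but the version hash, query text, and excerpt-file URL are placeholders, and the return value of `replicate.run` is assumed to be a dict matching the output schema further down. Note that `query_texts` and `excerpt_files` must be passed as serialized JSON strings, not native lists or objects.

```python
import json
import replicate

# Placeholder model reference: you may need to append a specific
# ":<version-hash>" for this community model.
MODEL_REF = "center-for-curriculum-redesign/bge_1-5_embedding"

output = replicate.run(
    MODEL_REF,
    input={
        # Serialized JSON string, not a Python list.
        "query_texts": json.dumps([
            "what does the curriculum say about fractions?",
        ]),
        # Keys are your own identifiers (used for webhook notification);
        # values must be publicly reachable .json files of text chunks.
        # The URL below is a placeholder.
        "excerpt_files": json.dumps({
            "my_corpus": "https://example.com/my_corpus_chunks.json",
        }),
        "normalize": True,
        "batchtoken_max": 200,
        "precision": "full",
    },
)

print(output["query_embeddings"])     # one vector per query string, in order
print(output["document_embeddings"])  # URLs of generated embedding files
```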
{
"type": "object",
"title": "Input",
"properties": {
"normalize": {
"type": "boolean",
"title": "Normalize",
"default": true,
"x-order": 2,
"description": "normalizes returned embedding vectors to a magnitude of 1. (default: true, as this model presumes cosine similarity comparisons downstream)"
},
"precision": {
"enum": [
"full",
"half"
],
"type": "string",
"title": "precision",
"description": "numerical precision for inference computations. Either full or half. Defaults to a paranoid value of full. You may want to test if 'half' is sufficient for your needs, though regardless you should probably prefer to use the same precision for querying as you do for archiving.",
"default": "full",
"x-order": 4
},
"query_texts": {
"type": "string",
"title": "Query Texts",
"default": "[]",
"x-order": 0,
"description": "A serialized JSON array of strings you wish to generate *retrieval* embeddings for. (Note that you should keep this list short to avoid Replicate response size limitations.) Use this to embed short text queries intended for comparison against document text. A vector will be returned corresponding to each line of text in the input array (in order of input). This endpoint will automatically format your query strings for retrieval; you do not need to preprocess them."
},
"excerpt_files": {
"type": "string",
"title": "Excerpt Files",
"default": " { }",
"x-order": 1,
"description": "A serialized JSON object, where each value of the object is a URL to a publicly accessible `.json` file containing an array of text chunks you wish to generate embeddings for, and each key of the object is an identifier which will be used to webhook-notify you that the embeddings for the file at that url are ready (or that they have failed). If the contents of your file are successfully embedded, a new JSON file will be generated for you to download. This file will contain an array of vectors, where the vector at each index represents the embedding of the text excerpt at the corresponding index of your text chunk array in your input file. Each vector is itself just an array, so the returned file should ultimately just look like an array of arrays of floating point numbers."
},
"batchtoken_max": {
"type": "number",
"title": "Batchtoken Max",
"default": 200,
"minimum": 0.5,
"x-order": 3,
"description": "maximum number of kibiTokens (1 kibiToken = 1024 tokens) to try to fit into a batch (to avoid out of memory errors but maximize throughput). If the total number of tokens across the flattened list of requested embeddings exceeds this value, the list will be split internally and run across multiple forward passes. This will not affect the shape of your output, just the time it takes to run."
}
}
}
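Each URL supplied in `excerpt_files` must point to a plain JSON file whose top-level value is an array of text chunks. Below is a minimal sketch of producing such a file locally; the filename and chunk contents are illustrative, and hosting the file at a publicly accessible URL is left to you.

```python
import json

# Illustrative chunks; in practice these would be passages split from your documents.
chunks = [
    "First passage of the source document...",
    "Second passage of the source document...",
]

# The file is just a JSON array of strings.
with open("my_corpus_chunks.json", "w") as f:
    json.dump(chunks, f)
```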
Output schema
The shape of the response you’ll get when you run this model with an API.
Schema
{
"type": "object",
"title": "Output",
"required": [
"query_embeddings",
"document_embeddings",
"failed_document_embeddings",
"extra_metrics"
],
"properties": {
"extra_metrics": {
"type": "string",
"title": "Extra Metrics"
},
"query_embeddings": {
"type": "array",
"items": {
"type": "array",
"items": {
"type": "number"
}
},
"title": "Query Embeddings"
},
"document_embeddings": {
"type": "array",
"items": {
"type": "string",
"format": "uri"
},
"title": "Document Embeddings"
},
"failed_document_embeddings": {
"type": "array",
"items": {
"type": "string"
},
"title": "Failed Document Embeddings"
}
}
}
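To consume this output: `query_embeddings` comes back inline as an array of vectors, while each entry of `document_embeddings` is a URL to a JSON file of vectors aligned index-for-index with the chunks in the corresponding input file. The sketch below assumes `output` is the dict returned by the earlier example call and that `normalize` was left at its default of true, so cosine similarity reduces to a dot product.

```python
import json
import urllib.request

import numpy as np

# First query vector, returned inline.
query_vec = np.array(output["query_embeddings"][0])

# Download the embeddings file generated for the first excerpt file.
# It contains an array of arrays of floats, one vector per input chunk.
with urllib.request.urlopen(output["document_embeddings"][0]) as resp:
    doc_vecs = np.array(json.load(resp))

# With normalized vectors, the dot product is the cosine similarity.
scores = doc_vecs @ query_vec
best = int(np.argmax(scores))
print(f"best-matching chunk index: {best}, score: {scores[best]:.3f}")
```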