
center-for-curriculum-redesign/bge_1-5_embedding:58ef3e95

Input schema

The fields you can use to run this model with an API. If you don't give a value for a field, its default value will be used.

query_texts
string, default: []
A serialized JSON array of strings you wish to generate *retrieval* embeddings for. (Note: keep this list short to avoid Replicate response-size limitations.) Use this to embed short text queries intended for comparison against document text. A vector will be returned for each string in the input array, in order of input. This endpoint automatically formats your query strings for retrieval; you do not need to preprocess them. (See the example request after this field listing.)

excerpt_files
string, default: {}
A serialized JSON object in which each value is a URL to a publicly accessible `.json` file containing an array of text chunks you wish to generate embeddings for, and each key is an identifier used to webhook-notify you that the embeddings for the file at that URL are ready (or that they have failed). If the contents of your file are successfully embedded, a new JSON file will be generated for you to download. It will contain an array of vectors, where the vector at each index is the embedding of the text excerpt at the corresponding index of the chunk array in your input file. Each vector is itself just an array, so the returned file is ultimately an array of arrays of floating-point numbers.

normalize
boolean, default: true
Normalizes returned embedding vectors to a magnitude of 1. (Default: true, as this model presumes cosine-similarity comparisons downstream.)

batchtoken_max
number, default: 200, min: 0.5
Maximum number of kibiTokens (1 kibiToken = 1024 tokens) to try to fit into a batch, to avoid out-of-memory errors while maximizing throughput. If the total number of tokens across the flattened list of requested embeddings exceeds this value, the list will be split internally and run across multiple forward passes. This does not affect the shape of your output, only the time it takes to run.

precision
string (enum), default: full
Options: full, half
Numerical precision for inference computations, either full or half. Defaults to a paranoid value of full. You may want to test whether half is sufficient for your needs; regardless, you should prefer the same precision for querying as you use for archiving.
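
For concreteness, here is a minimal sketch of a request against this version using the Replicate Python client. The query strings, the excerpt-file URL, and the webhook identifier key are illustrative placeholders, not values from this page; note that query_texts and excerpt_files must be passed as serialized JSON strings, per the field listing above.

import json
import replicate

# Both query_texts and excerpt_files are passed as *serialized* JSON strings.
queries = ["what is cosine similarity?", "how are embeddings normalized?"]
excerpts = {
    # key: identifier echoed back in the webhook notification (placeholder)
    # value: publicly accessible .json file holding an array of text chunks,
    #        e.g. ["first excerpt...", "second excerpt..."] (placeholder URL)
    "corpus_part_1": "https://example.com/chunks/part_1.json",
}

output = replicate.run(
    "center-for-curriculum-redesign/bge_1-5_embedding:58ef3e95",
    input={
        "query_texts": json.dumps(queries),
        "excerpt_files": json.dumps(excerpts),
        "normalize": True,      # unit-length vectors for cosine-similarity comparisons
        "precision": "full",    # or "half", if you have verified it is sufficient
        "batchtoken_max": 200,
    },
)

The query_embeddings field of the returned object then holds one vector per input query, in input order.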

Output schema

The shape of the response you’ll get when you run this model with an API.

Schema
{
  "properties": {
    "document_embeddings": {
      "items": {"format": "uri", "type": "string"},
      "title": "Document Embeddings",
      "type": "array"
    },
    "extra_metrics": {"title": "Extra Metrics", "type": "string"},
    "failed_document_embeddings": {
      "items": {"type": "string"},
      "title": "Failed Document Embeddings",
      "type": "array"
    },
    "query_embeddings": {
      "items": {"items": {"type": "number"}, "type": "array"},
      "title": "Query Embeddings",
      "type": "array"
    }
  },
  "required": [
    "query_embeddings",
    "document_embeddings",
    "failed_document_embeddings",
    "extra_metrics"
  ],
  "title": "Output",
  "type": "object"
}
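
Reading the schema: query_embeddings is an array of numeric arrays (one per input query), document_embeddings is an array of URI strings (presumably the download locations of the generated embedding files described under excerpt_files), failed_document_embeddings is an array of strings identifying inputs that could not be embedded, and extra_metrics is a string. Assuming that reading of the URIs, and assuming numpy and requests are available, consuming the output might look like the sketch below; with normalize left at true, cosine similarity reduces to a plain dot product.

import numpy as np
import requests

# 'output' is the object returned by the example request above.
query_vecs = np.array(output["query_embeddings"])        # shape: (n_queries, dim)

# Assumption: each document_embeddings entry is the URI of a generated JSON file
# containing an array of vectors, index-aligned with the chunks in your input file.
doc_file_url = output["document_embeddings"][0]
doc_vecs = np.array(requests.get(doc_file_url).json())   # shape: (n_chunks, dim)

# With normalize=true, vectors are unit length, so cosine similarity is a dot product.
scores = query_vecs @ doc_vecs.T                         # shape: (n_queries, n_chunks)
print(scores.argmax(axis=1))  # index of the best-matching chunk for each query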