Multilingual-E5-large-instruct
Multilingual E5 Text Embeddings: A Technical Report. Liang Wang, Nan Yang, Xiaolong Huang, Linjun Yang, Rangan Majumder, Furu Wei, arXiv 2024
This model has 24 layers and an embedding size of 1024.
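As a quick sanity check, both numbers can be read from the model config. This is a minimal sketch; it assumes the checkpoint is published on the Hugging Face Hub as intfloat/multilingual-e5-large-instruct.

```python
from transformers import AutoConfig

# Assumed Hugging Face model id for this checkpoint.
config = AutoConfig.from_pretrained("intfloat/multilingual-e5-large-instruct")

print(config.num_hidden_layers)  # 24 transformer layers
print(config.hidden_size)        # 1024, the dimensionality of the output embeddings
```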
Supported Languages
This model is initialized from xlm-roberta-large and continually trained on a mixture of multilingual datasets. It supports 100 languages from xlm-roberta, but low-resource languages may see performance degradation.
Training Details
Initialization: xlm-roberta-large
First stage: contrastive pre-training with 1 billion weakly supervised text pairs.
Second stage: fine-tuning on datasets from the E5-mistral paper.
MTEB Benchmark Evaluation
Check out unilm/e5 to reproduce evaluation results on the BEIR and MTEB benchmarks.
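For a quick local run, the mteb package can also drive the evaluation directly. The sketch below assumes the mteb and sentence-transformers packages are installed, uses intfloat/multilingual-e5-large-instruct as the assumed model id, and picks a single task purely for illustration; for faithful reproduction of retrieval numbers, prepend the query-side instructions from unilm/e5/utils.py (see the FAQ below). The API shown follows the classic MTEB interface and may differ in newer mteb releases.

```python
from mteb import MTEB
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("intfloat/multilingual-e5-large-instruct")

# Evaluate on one illustrative task; the full benchmark covers many more.
evaluation = MTEB(tasks=["Banking77Classification"])
evaluation.run(model, output_folder="results/multilingual-e5-large-instruct")
```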
FAQ
1. Do I need to add instructions to the query?
Yes, this is how the model was trained; otherwise you will see a performance degradation. The task definition should be a one-sentence instruction that describes the task. This makes it possible to customize text embeddings for different scenarios through natural-language instructions.
Please check out unilm/e5/utils.py for the instructions we used for evaluation.
On the other hand, there is no need to add instructions to the document side; see the usage sketch below.
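For illustration, here is a minimal encoding sketch. The `Instruct: ...\nQuery: ...` template mirrors the query formatting used in unilm/e5; the model id, task description, and example texts are placeholders.

```python
import torch
import torch.nn.functional as F
from torch import Tensor
from transformers import AutoModel, AutoTokenizer


def average_pool(last_hidden_states: Tensor, attention_mask: Tensor) -> Tensor:
    # Zero out padding positions, then mean-pool over the sequence dimension.
    last_hidden = last_hidden_states.masked_fill(~attention_mask[..., None].bool(), 0.0)
    return last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None]


def get_detailed_instruct(task_description: str, query: str) -> str:
    # Only queries get the instruction prefix; documents are encoded as-is.
    return f"Instruct: {task_description}\nQuery: {query}"


task = "Given a web search query, retrieve relevant passages that answer the query"
queries = [get_detailed_instruct(task, "what is the capital of France")]
documents = ["Paris is the capital and most populous city of France."]

tokenizer = AutoTokenizer.from_pretrained("intfloat/multilingual-e5-large-instruct")
model = AutoModel.from_pretrained("intfloat/multilingual-e5-large-instruct")

batch = tokenizer(queries + documents, max_length=512, padding=True,
                  truncation=True, return_tensors="pt")
with torch.no_grad():
    outputs = model(**batch)

embeddings = average_pool(outputs.last_hidden_state, batch["attention_mask"])
embeddings = F.normalize(embeddings, p=2, dim=1)

# Cosine similarities between each query and each document.
scores = embeddings[: len(queries)] @ embeddings[len(queries):].T
print(scores)
```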
2. Why are my reproduced results slightly different from those reported in the model card?
Different versions of transformers and pytorch could cause negligible but non-zero performance differences.
3. Why do the cosine similarity scores distribute between 0.7 and 1.0?
This is known and expected behavior, as we use a low temperature of 0.01 for the InfoNCE contrastive loss.
For text embedding tasks like text retrieval or semantic similarity, what matters is the relative order of the scores instead of the absolute values, so this should not be an issue.
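As a small illustration with hypothetical scores (not model output): even when all similarities are compressed into a narrow high range, ranking by them still separates relevant from irrelevant documents.

```python
# Hypothetical cosine similarities between one query and three documents.
scores = {"doc_a": 0.91, "doc_b": 0.78, "doc_c": 0.83}

# Only the relative order matters for retrieval: doc_a > doc_c > doc_b.
ranking = sorted(scores, key=scores.get, reverse=True)
print(ranking)  # ['doc_a', 'doc_c', 'doc_b']
```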
Citation
If you find our paper or models helpful, please consider citing as follows:
@article{wang2024multilingual,
title={Multilingual E5 Text Embeddings: A Technical Report},
author={Wang, Liang and Yang, Nan and Huang, Xiaolong and Yang, Linjun and Majumder, Rangan and Wei, Furu},
journal={arXiv preprint arXiv:2402.05672},
year={2024}
}
Limitations
Long texts will be truncated to at most 512 tokens.
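In practice the limit is applied at tokenization time; a minimal sketch (again assuming the intfloat/multilingual-e5-large-instruct model id):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("intfloat/multilingual-e5-large-instruct")

long_text = "passage text " * 2000  # far longer than the model can attend to
batch = tokenizer(long_text, max_length=512, truncation=True, return_tensors="pt")

# Tokens beyond the 512 limit are dropped and never contribute to the embedding.
print(batch["input_ids"].shape)  # torch.Size([1, 512])
```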