mirror of
https://github.com/huggingface/transformers.git
synced 2025-07-30 17:52:35 +06:00
# LaBSE PyTorch Version

This is a PyTorch port of the TensorFlow version of LaBSE (Language-agnostic BERT Sentence Embedding).
To get the sentence embeddings, you can use the following code:

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("sentence-transformers/LaBSE")
model = AutoModel.from_pretrained("sentence-transformers/LaBSE")

sentences = ["Hello World", "Hallo Welt"]
encoded_input = tokenizer(sentences, padding=True, truncation=True, max_length=64, return_tensors='pt')

with torch.no_grad():
    model_output = model(**encoded_input, return_dict=True)

embeddings = model_output.pooler_output
embeddings = torch.nn.functional.normalize(embeddings)
print(embeddings)
```
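Because the embeddings are L2-normalized in the last step, the cosine similarity between two sentences reduces to a plain dot product. The sketch below illustrates this with small made-up 2-dimensional vectors standing in for the model output (the real LaBSE `pooler_output` has 768 dimensions); it uses NumPy only so it runs without downloading the model.

```python
import numpy as np

# Hypothetical embeddings standing in for model_output.pooler_output;
# the values are illustrative only.
emb = np.array([[3.0, 4.0],
                [4.0, 3.0]])

# L2-normalize each row, mirroring torch.nn.functional.normalize above.
normed = emb / np.linalg.norm(emb, axis=1, keepdims=True)

# For unit-length vectors, cosine similarity is just a dot product.
sim = normed @ normed.T
print(sim)  # diagonal entries are 1.0 (each sentence vs. itself)
```

The same dot-product trick applies to the real LaBSE embeddings, which is what makes them convenient for cross-lingual similarity search.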
If you have sentence-transformers installed, you can use the model like this:

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer('LaBSE')

sentences = ["Hello World", "Hallo Welt"]
embeddings = model.encode(sentences)
print(embeddings)
```
## Reference

Fangxiaoyu Feng, Yinfei Yang, Daniel Cer, Naveen Arivazhagan, Wei Wang. [Language-agnostic BERT Sentence Embedding](https://arxiv.org/abs/2007.01852). July 2020

License: [https://tfhub.dev/google/LaBSE/1](https://tfhub.dev/google/LaBSE/1)