
# LaBSE PyTorch Version

This is a PyTorch port of the TensorFlow version of LaBSE.

To get the sentence embeddings, you can use the following code:

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("sentence-transformers/LaBSE")
model = AutoModel.from_pretrained("sentence-transformers/LaBSE")

sentences = ["Hello World", "Hallo Welt"]

# Tokenize the input sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, max_length=64, return_tensors='pt')

# Compute the model outputs (no gradients needed for inference)
with torch.no_grad():
    model_output = model(**encoded_input, return_dict=True)

# Use the pooled output and L2-normalize it
embeddings = model_output.pooler_output
embeddings = torch.nn.functional.normalize(embeddings)
print(embeddings)
```
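
Because the embeddings are L2-normalized, the cosine similarity of two sentences is simply the dot product of their vectors. A minimal sketch (not part of the original card), continuing from the snippet above:

```python
# embeddings has shape (2, hidden_size) and each row is already normalized,
# so the dot product of the two rows equals their cosine similarity.
similarity = embeddings[0] @ embeddings[1]
print(similarity.item())
```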

If you have the sentence-transformers library installed, you can use the model like this:

```python
from sentence_transformers import SentenceTransformer

sentences = ["Hello World", "Hallo Welt"]

model = SentenceTransformer('LaBSE')
embeddings = model.encode(sentences)
print(embeddings)
```
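
`model.encode` returns a NumPy array with one row per sentence. To compare the two sentences, you can L2-normalize the rows and take their dot product; this is a minimal sketch, not part of the original card:

```python
import numpy as np

# encode() returns an array of shape (num_sentences, embedding_dim).
# Normalize each row; the dot product of two rows is then their cosine similarity.
normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
print(normed[0] @ normed[1])
```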

Reference:

Fangxiaoyu Feng, Yinfei Yang, Daniel Cer, Naveen Arivazhagan, Wei Wang. Language-agnostic BERT Sentence Embedding (arXiv:2007.01852). July 2020.

License: see the original TensorFlow Hub model page, https://tfhub.dev/google/LaBSE/1