[Model card] SinhalaBERTo model. (#7558)
* [Model card] SinhalaBERTo model. This is the model card for keshan/SinhalaBERTo model.
* Update model_cards/keshan/SinhalaBERTo/README.md

Co-authored-by: Julien Chaumond <chaumond@gmail.com>
This commit is contained in:
parent 167bce56f2
commit e10d389561
37 model_cards/keshan/SinhalaBERTo/README.md Normal file
@@ -0,0 +1,37 @@
---
language: si
tags:
- SinhalaBERTo
- Sinhala
- roberta
datasets:
- oscar
---
### Overview

This is a slightly smaller model trained on the deduplicated Sinhala portion of the [OSCAR](https://oscar-corpus.com/) dataset. Since Sinhala is a low-resource language, only a handful of models have been trained for it, so this model is a good starting point for further training on downstream tasks.
## Model Specification

The model chosen for training is [RoBERTa](https://arxiv.org/abs/1907.11692) with the following specifications:

1. vocab_size=52000
2. max_position_embeddings=514
3. num_attention_heads=12
4. num_hidden_layers=6
5. type_vocab_size=1
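For illustration, these hyperparameters map onto a Hugging Face `RobertaConfig` roughly as follows. This is a minimal sketch, not necessarily the exact training script used for this model:

```py
from transformers import RobertaConfig, RobertaForMaskedLM

# Sketch: a RoBERTa configuration matching the specification above.
config = RobertaConfig(
    vocab_size=52000,
    max_position_embeddings=514,
    num_attention_heads=12,
    num_hidden_layers=6,
    type_vocab_size=1,
)

# A randomly initialized model of this size, as one would use for
# pre-training from scratch.
model = RobertaForMaskedLM(config=config)
```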
## How to Use

You can use this model directly with a pipeline for masked language modeling:

```py
from transformers import AutoTokenizer, AutoModelWithLMHead, pipeline

model = AutoModelWithLMHead.from_pretrained("keshan/SinhalaBERTo")
tokenizer = AutoTokenizer.from_pretrained("keshan/SinhalaBERTo")

fill_mask = pipeline('fill-mask', model=model, tokenizer=tokenizer)

fill_mask("මම ගෙදර <mask>.")
```