---
language: malay
---
# Bahasa Albert Model
Pretrained Albert base language model for Malay and Indonesian.
## Pretraining Corpus
The `albert-base-bahasa-cased` model was pretrained on ~1.8 billion words. We trained on both standard and social-media language structures, and below is the list of data we trained on:
- Wikipedia dump.
- Local Instagram.
- Local Twitter.
- Local news.
- Local parliament text.
- Local Singlish/Manglish text.
- IIUM Confession.
- Wattpad.
- Academia PDF.
Preprocessing steps can be reproduced from here: Malaya/pretrained-model/preprocess.
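The actual cleaning rules live in that folder; purely as an illustrative sketch (the rules below are our assumptions, not the real Malaya pipeline), cleaning social-media text before pretraining usually looks something like this:

```python
import re

def clean_text(text):
    # illustrative assumptions only: strip URLs and @mentions, collapse whitespace
    text = re.sub(r'https?://\S+', ' ', text)
    text = re.sub(r'@\w+', ' ', text)
    return re.sub(r'\s+', ' ', text).strip()

print(clean_text('makan ayam dengan   nasi @user https://example.com'))
# -> 'makan ayam dengan nasi'
```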
## Pretraining details
- This model was trained using Google ALBERT's GitHub repository on a v3-8 TPU.
- All steps can be reproduced from here: Malaya/pretrained-model/albert.
## Load Pretrained Model
You can use this model by installing `torch` or `tensorflow` and the Hugging Face `transformers` library. Then you can use it directly by initializing it like this:
```python
from transformers import AlbertTokenizer, AlbertModel

model = AlbertModel.from_pretrained('huseinzol05/albert-base-bahasa-cased')
tokenizer = AlbertTokenizer.from_pretrained(
    'huseinzol05/albert-base-bahasa-cased',
    do_lower_case = False,
)
```
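As a quick sanity check, here is a minimal sketch (the example sentence and variable names are ours, not part of the original card) of encoding a sentence and inspecting the hidden states:

```python
import torch

# illustrative Malay sentence
inputs = tokenizer.encode('makan ayam dengan nasi', return_tensors = 'pt')
with torch.no_grad():
    outputs = model(inputs)

# first element is the last hidden state: (batch_size, sequence_length, hidden_size)
print(outputs[0].shape)
```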
## Example using AutoModelWithLMHead
```python
from transformers import AlbertTokenizer, AutoModelWithLMHead, pipeline

model = AutoModelWithLMHead.from_pretrained('huseinzol05/albert-base-bahasa-cased')
tokenizer = AlbertTokenizer.from_pretrained(
    'huseinzol05/albert-base-bahasa-cased',
    do_lower_case = False,
)
fill_mask = pipeline('fill-mask', model = model, tokenizer = tokenizer)
print(fill_mask('makan ayam dengan [MASK]'))
```
Output is:
```text
[{'sequence': '[CLS] makan ayam dengan ayam[SEP]',
  'score': 0.044952988624572754,
  'token': 629},
 {'sequence': '[CLS] makan ayam dengan sayur[SEP]',
  'score': 0.03621877357363701,
  'token': 1639},
 {'sequence': '[CLS] makan ayam dengan ikan[SEP]',
  'score': 0.034429922699928284,
  'token': 758},
 {'sequence': '[CLS] makan ayam dengan nasi[SEP]',
  'score': 0.032447945326566696,
  'token': 453},
 {'sequence': '[CLS] makan ayam dengan rendang[SEP]',
  'score': 0.028885239735245705,
  'token': 2451}]
```
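If you only need the highest-scoring candidate, here is a minimal follow-up sketch (reusing the `fill_mask` pipeline defined above):

```python
# the pipeline returns candidates sorted by score, so the first entry is the best one
top = fill_mask('makan ayam dengan [MASK]')[0]
print(top['sequence'], top['score'])
```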
## Results
For further details on the model performance, simply check out the accuracy page from Malaya, https://malaya.readthedocs.io/en/latest/Accuracy.html, where we compared it with traditional models.
## Acknowledgement
Thanks to Im Big, LigBlou, Mesolitica and KeyReply for sponsoring AWS, Google and GPU clouds to train Albert for Bahasa.