---
language: ms
---
# Bahasa ELECTRA Model
Pretrained ELECTRA base language model for Malay and Indonesian.
## Pretraining Corpus
`electra-base-discriminator-bahasa-cased` model was pretrained on ~1.8 billion words. We trained on both standard and social media language structures, and below is the list of data we trained on:
- Wikipedia dump.
- Local Instagram.
- Local Twitter.
- Local news.
- Local parliament text.
- Local Singlish/Manglish text.
- IIUM Confession.
- Wattpad.
- Academia PDF.
Preprocessing steps can be reproduced from here: Malaya/pretrained-model/preprocess.
## Pretraining details
- This model was trained using Google's ELECTRA GitHub repository on a single Tesla V100 with 32GB VRAM.
- All steps can be reproduced from here: Malaya/pretrained-model/electra.
## Load Pretrained Model
You can use this model by installing `torch` or `tensorflow` and the Hugging Face `transformers` library. You can then load it directly like this:
```python
from transformers import ElectraTokenizer, ElectraModel

model = ElectraModel.from_pretrained('huseinzol05/electra-base-discriminator-bahasa-cased')
tokenizer = ElectraTokenizer.from_pretrained(
    'huseinzol05/electra-base-discriminator-bahasa-cased',
    do_lower_case = False,
)
```
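As a quick sanity check, the loaded `ElectraModel` can be used to extract contextual embeddings. This is only a minimal sketch using the standard `transformers` API; the sentence is just an illustrative Malay example:

```python
import torch

# Illustrative Malay sentence; any Malay/Indonesian text works here.
inputs = tokenizer('kerajaan sangat prihatin terhadap rakyat', return_tensors = 'pt')

with torch.no_grad():
    outputs = model(**inputs)

# First element is the last hidden state: (batch_size, sequence_length, hidden_size).
print(outputs[0].shape)
```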
## Example using ElectraForPreTraining
```python
import torch
from transformers import ElectraTokenizer, ElectraForPreTraining

model = ElectraForPreTraining.from_pretrained('huseinzol05/electra-base-discriminator-bahasa-cased')
tokenizer = ElectraTokenizer.from_pretrained(
    'huseinzol05/electra-base-discriminator-bahasa-cased',
    do_lower_case = False,
)

sentence = 'kerajaan sangat prihatin terhadap rakyat'
fake_tokens = tokenizer.tokenize(sentence)
# encode() adds [CLS] and [SEP] around the tokens.
fake_inputs = tokenizer.encode(sentence, return_tensors = 'pt')
discriminator_outputs = model(fake_inputs)
# Round each per-token logit to 0.0 (original token) or 1.0 (replaced token),
# dropping the predictions for [CLS] and [SEP] so they align with fake_tokens.
predictions = torch.round((torch.sign(discriminator_outputs[0]) + 1) / 2)
list(zip(fake_tokens, predictions[0][1:-1].tolist()))
```
Output is:

```
[('kerajaan', 0.0),
 ('sangat', 0.0),
 ('prihatin', 0.0),
 ('terhadap', 0.0),
 ('rakyat', 0.0)]
```
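In the output above, 0.0 means the discriminator considers a token original and 1.0 means it considers the token replaced; since the sentence is unmodified, every token scores 0.0. As a minimal sketch of the other case, you can corrupt one word and run the same check (the replacement word here is an arbitrary illustrative choice, reusing the `model` and `tokenizer` loaded above):

```python
# Swap 'prihatin' for another word to see whether the discriminator flags it.
fake_sentence = 'kerajaan sangat gembira terhadap rakyat'
fake_tokens = tokenizer.tokenize(fake_sentence)
fake_inputs = tokenizer.encode(fake_sentence, return_tensors = 'pt')

with torch.no_grad():
    discriminator_outputs = model(fake_inputs)

# 1.0 marks tokens the discriminator believes were replaced, 0.0 marks originals.
predictions = torch.round((torch.sign(discriminator_outputs[0]) + 1) / 2)
print(list(zip(fake_tokens, predictions[0][1:-1].tolist())))
```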
## Results
For further details on model performance, check out the accuracy page from Malaya, https://malaya.readthedocs.io/en/latest/Accuracy.html, where we compare against traditional models.
## Acknowledgement
Thanks to Im Big, LigBlou, Mesolitica and KeyReply for sponsoring AWS, Google and GPU clouds to train ELECTRA for Bahasa.