---
language: ms
---

# Bahasa BERT Model

Pretrained BERT base language model for Malay and Indonesian.

## Pretraining Corpus

The `bert-base-bahasa-cased` model was pretrained on ~1.8 billion words. We trained it on both standard and social-media language structures; the data sources are listed below:

1. Wikipedia dump.
2. Local Instagram.
3. Local Twitter.
4. Local news.
5. Local parliament text.
6. Local Singlish/Manglish text.
7. IIUM Confession.
8. Wattpad.
9. Academia PDF.

Preprocessing steps can be reproduced from Malaya/pretrained-model/preprocess.

## Pretraining details

## Load Pretrained Model

You can use this model by installing `torch` or `tensorflow` along with the Hugging Face `transformers` library, then loading it directly like this:

```python
from transformers import AlbertTokenizer, BertModel

model = BertModel.from_pretrained('huseinzol05/bert-base-bahasa-cased')
tokenizer = AlbertTokenizer.from_pretrained(
    'huseinzol05/bert-base-bahasa-cased',
    unk_token = '[UNK]',
    pad_token = '[PAD]',
    do_lower_case = False,
)
```

We used google/sentencepiece to train the tokenizer, so it has to be loaded with `AlbertTokenizer`.
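
As a quick usage sketch (the example sentence, the PyTorch `return_tensors = 'pt'` path, and the variable names are illustrative assumptions, not part of the original card), you can encode a sentence and inspect the contextual embeddings:

```python
import torch
from transformers import AlbertTokenizer, BertModel

model = BertModel.from_pretrained('huseinzol05/bert-base-bahasa-cased')
tokenizer = AlbertTokenizer.from_pretrained(
    'huseinzol05/bert-base-bahasa-cased',
    unk_token = '[UNK]',
    pad_token = '[PAD]',
    do_lower_case = False,
)

# Tokenize an arbitrary Malay sentence and run a forward pass without gradients
inputs = tokenizer('makan ayam dengan kicap', return_tensors = 'pt')
with torch.no_grad():
    outputs = model(**inputs)

# First output is the last hidden state: (batch, sequence_length, hidden_size)
print(outputs[0].shape)
```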

## Example using AutoModelWithLMHead

```python
from transformers import AlbertTokenizer, AutoModelWithLMHead, pipeline

model = AutoModelWithLMHead.from_pretrained('huseinzol05/bert-base-bahasa-cased')
tokenizer = AlbertTokenizer.from_pretrained(
    'huseinzol05/bert-base-bahasa-cased',
    unk_token = '[UNK]',
    pad_token = '[PAD]',
    do_lower_case = False,
)
fill_mask = pipeline('fill-mask', model = model, tokenizer = tokenizer)
print(fill_mask('makan ayam dengan [MASK]'))
```

Output is,

```text
[{'sequence': '[CLS] makan ayam dengan rendang[SEP]',
  'score': 0.10812027007341385,
  'token': 2446},
 {'sequence': '[CLS] makan ayam dengan kicap[SEP]',
  'score': 0.07653367519378662,
  'token': 12928},
 {'sequence': '[CLS] makan ayam dengan nasi[SEP]',
  'score': 0.06839974224567413,
  'token': 450},
 {'sequence': '[CLS] makan ayam dengan ayam[SEP]',
  'score': 0.059544261544942856,
  'token': 638},
 {'sequence': '[CLS] makan ayam dengan sayur[SEP]',
  'score': 0.05294966697692871,
  'token': 1639}]
```
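
You can mask any other position in the sentence the same way, reusing the `fill_mask` pipeline defined above (the example sentence here is arbitrary, not from the original card):

```python
# Predict a different masked position with the same pipeline
print(fill_mask('makan [MASK] dengan nasi'))
```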

## Results

For further details on model performance, check out the accuracy page from Malaya, https://malaya.readthedocs.io/en/latest/Accuracy.html, where we compare against traditional models.

## Acknowledgement

Thanks to Im Big, LigBlou, Mesolitica and KeyReply for sponsoring AWS, Google and GPU clouds to train BERT for Bahasa.