From 67c4b0c5178c8a532cf461ed2a1152fe821dc750 Mon Sep 17 00:00:00 2001
From: Dat Quoc Nguyen <2412555+datquocnguyen@users.noreply.github.com>
Date: Mon, 21 Sep 2020 17:12:51 +0700
Subject: [PATCH] Add model cards for new pre-trained BERTweet-COVID19 models
(#7269)
The two new pre-trained models "vinai/bertweet-covid19-base-cased" and "vinai/bertweet-covid19-base-uncased" are obtained by further pre-training "vinai/bertweet-base" on a corpus of 23M COVID-19 English Tweets for 40 epochs.
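A minimal loading sketch for the two new checkpoints (the identifiers below are those registered by the model cards in this patch; see the cards for full usage):

```python
from transformers import AutoModel, AutoTokenizer

# The two new COVID-19 checkpoints added by this patch
for name in ["vinai/bertweet-covid19-base-cased", "vinai/bertweet-covid19-base-uncased"]:
    model = AutoModel.from_pretrained(name)
    tokenizer = AutoTokenizer.from_pretrained(name)
```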
---
model_cards/vinai/bertweet-base/README.md | 47 ++++++-----
.../bertweet-covid19-base-cased/README.md | 80 +++++++++++++++++++
.../bertweet-covid19-base-uncased/README.md | 80 +++++++++++++++++++
model_cards/vinai/phobert-base/README.md | 28 ++++---
model_cards/vinai/phobert-large/README.md | 28 ++++---
5 files changed, 220 insertions(+), 43 deletions(-)
create mode 100644 model_cards/vinai/bertweet-covid19-base-cased/README.md
create mode 100644 model_cards/vinai/bertweet-covid19-base-uncased/README.md
diff --git a/model_cards/vinai/bertweet-base/README.md b/model_cards/vinai/bertweet-base/README.md
index 67bf43daa51..4d6b041f5d4 100644
--- a/model_cards/vinai/bertweet-base/README.md
+++ b/model_cards/vinai/bertweet-base/README.md
@@ -1,10 +1,10 @@
# BERTweet: A pre-trained language model for English Tweets
- BERTweet is the first public large-scale language model pre-trained for English Tweets. BERTweet is trained based on the [RoBERTa](https://github.com/pytorch/fairseq/blob/master/examples/roberta/README.md) pre-training procedure, using the same model configuration as [BERT-base](https://github.com/google-research/bert).
- - The corpus used to pre-train BERTweet consists of 850M English Tweets (16B word tokens ~ 80GB), containing 845M Tweets streamed from 01/2012 to 08/2019 and 5M Tweets related the **COVID-19** pandemic.
+ - The corpus used to pre-train BERTweet consists of 850M English Tweets (16B word tokens ~ 80GB), containing 845M Tweets streamed from 01/2012 to 08/2019 and 5M Tweets related to the **COVID-19** pandemic.
- BERTweet does better than its competitors RoBERTa-base and [XLM-R-base](https://arxiv.org/abs/1911.02116) and outperforms previous state-of-the-art models on three downstream Tweet NLP tasks of Part-of-speech tagging, Named entity recognition and text classification.
-The general architecture and experimental results of BERTweet can be found in our EMNLP-2020 demo [paper](https://arxiv.org/abs/2005.10200):
+The general architecture and experimental results of BERTweet can be found in our [paper](https://arxiv.org/abs/2005.10200):
@inproceedings{bertweet,
title = {{BERTweet: A pre-trained language model for English Tweets}},
@@ -17,29 +17,35 @@ The general architecture and experimental results of BERTweet can be found in ou
For further information or requests, please go to [BERTweet's homepage](https://github.com/VinAIResearch/BERTweet)!
-## Installation
+### Installation
- - Python version >= 3.6
- - [PyTorch](http://pytorch.org/) version >= 1.4.0
- - `pip3 install transformers emoji`
+ - Python 3.6+, and PyTorch 1.1.0+ (or TensorFlow 2.0+)
+ - Install `transformers`:
+ - `git clone https://github.com/huggingface/transformers.git`
+ - `cd transformers`
+ - `pip3 install --upgrade .`
+ - Install `emoji`: `pip3 install emoji`
+
+### Pre-trained models
-## Pre-trained model
Model | #params | Arch. | Pre-training data
---|---|---|---
-`vinai/bertweet-base` | 135M | base | 845M English Tweets (80GB)
+`vinai/bertweet-base` | 135M | base | 845M English Tweets (cased)
+`vinai/bertweet-covid19-base-cased` | 135M | base | 23M COVID-19 English Tweets (cased)
+`vinai/bertweet-covid19-base-uncased` | 135M | base | 23M COVID-19 English Tweets (uncased)
+The two pre-trained models `vinai/bertweet-covid19-base-cased` and `vinai/bertweet-covid19-base-uncased` are obtained by further pre-training `vinai/bertweet-base` on a corpus of 23M COVID-19 English Tweets for 40 epochs.
-## Example usage
+### Example usage
```python
import torch
-from transformers import AutoModel, AutoTokenizer #, BertweetTokenizer
+from transformers import AutoModel, AutoTokenizer
bertweet = AutoModel.from_pretrained("vinai/bertweet-base")
tokenizer = AutoTokenizer.from_pretrained("vinai/bertweet-base")
-#tokenizer = BertweetTokenizer.from_pretrained("vinai/bertweet-base")
# INPUT TWEET IS ALREADY NORMALIZED!
line = "SC has first two presumptive cases of coronavirus , DHEC confirms HTTPURL via @USER :cry:"
@@ -48,22 +54,25 @@ input_ids = torch.tensor([tokenizer.encode(line)])
with torch.no_grad():
features = bertweet(input_ids) # Models outputs are now tuples
+
+## With TensorFlow 2.0+:
+# from transformers import TFAutoModel
+# bertweet = TFAutoModel.from_pretrained("vinai/bertweet-base")
```
-## Normalize raw input Tweets
+### Normalize raw input Tweets
-Before applying `fastBPE` to the pre-training corpus of 850M English Tweets, we tokenized these Tweets using `TweetTokenizer` from the NLTK toolkit and used the `emoji` package to translate emotion icons into text strings (here, each icon is referred to as a word token). We also normalized the Tweets by converting user mentions and web/url links into special tokens `@USER` and `HTTPURL`, respectively. Thus it is recommended to also apply the same pre-processing step for BERTweet-based downstream applications w.r.t. the raw input Tweets.
+Before applying `fastBPE` to the pre-training corpus of 850M English Tweets, we tokenized these Tweets using `TweetTokenizer` from the NLTK toolkit and used the `emoji` package to translate emotion icons into text strings (here, each icon is referred to as a word token). We also normalized the Tweets by converting user mentions and web/URL links into the special tokens `@USER` and `HTTPURL`, respectively. We thus recommend applying the same pre-processing step to raw input Tweets in BERTweet-based downstream applications; this step is available via the tokenizer's `normalization` argument, as shown below.
```python
import torch
-from transformers import BertweetTokenizer
+from transformers import AutoTokenizer
-# Load the BertweetTokenizer with a normalization mode if the input Tweet is raw
-tokenizer = BertweetTokenizer.from_pretrained("vinai/bertweet-base", normalization=True)
+# Load the AutoTokenizer with a normalization mode if the input Tweet is raw
+tokenizer = AutoTokenizer.from_pretrained("vinai/bertweet-base", normalization=True)
-# BERTweet's tokenizer can be also loaded in the "Auto" mode
-# from transformers import AutoTokenizer
-# tokenizer = AutoTokenizer.from_pretrained("vinai/bertweet-base", normalization=True)
+# from transformers import BertweetTokenizer
+# tokenizer = BertweetTokenizer.from_pretrained("vinai/bertweet-base", normalization=True)
line = "SC has first two presumptive cases of coronavirus, DHEC confirms https://postandcourier.com/health/covid19/sc-has-first-two-presumptive-cases-of-coronavirus-dhec-confirms/article_bddfe4ae-5fd3-11ea-9ce4-5f495366cee6.html?utm_medium=social&utm_source=twitter&utm_campaign=user-share… via @postandcourier"
diff --git a/model_cards/vinai/bertweet-covid19-base-cased/README.md b/model_cards/vinai/bertweet-covid19-base-cased/README.md
new file mode 100644
index 00000000000..e09c71e4b71
--- /dev/null
+++ b/model_cards/vinai/bertweet-covid19-base-cased/README.md
@@ -0,0 +1,80 @@
+# BERTweet: A pre-trained language model for English Tweets
+
+ - BERTweet is the first public large-scale language model pre-trained for English Tweets. BERTweet is trained based on the [RoBERTa](https://github.com/pytorch/fairseq/blob/master/examples/roberta/README.md) pre-training procedure, using the same model configuration as [BERT-base](https://github.com/google-research/bert).
+ - The corpus used to pre-train BERTweet consists of 850M English Tweets (16B word tokens ~ 80GB), containing 845M Tweets streamed from 01/2012 to 08/2019 and 5M Tweets related to the **COVID-19** pandemic.
+ - BERTweet does better than its competitors RoBERTa-base and [XLM-R-base](https://arxiv.org/abs/1911.02116) and outperforms previous state-of-the-art models on three downstream Tweet NLP tasks of Part-of-speech tagging, Named entity recognition and text classification.
+
+The general architecture and experimental results of BERTweet can be found in our [paper](https://arxiv.org/abs/2005.10200):
+
+ @inproceedings{bertweet,
+ title = {{BERTweet: A pre-trained language model for English Tweets}},
+ author = {Dat Quoc Nguyen and Thanh Vu and Anh Tuan Nguyen},
+ booktitle = {Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations},
+ year = {2020}
+ }
+
+**Please CITE** our paper when BERTweet is used to help produce published results or is incorporated into other software.
+
+For further information or requests, please go to [BERTweet's homepage](https://github.com/VinAIResearch/BERTweet)!
+
+### Installation
+
+ - Python 3.6+, and PyTorch 1.1.0+ (or TensorFlow 2.0+)
+ - Install `transformers`:
+ - `git clone https://github.com/huggingface/transformers.git`
+ - `cd transformers`
+ - `pip3 install --upgrade .`
+ - Install `emoji`: `pip3 install emoji`
+
+### Pre-trained models
+
+
+Model | #params | Arch. | Pre-training data
+---|---|---|---
+`vinai/bertweet-base` | 135M | base | 845M English Tweets (cased)
+`vinai/bertweet-covid19-base-cased` | 135M | base | 23M COVID-19 English Tweets (cased)
+`vinai/bertweet-covid19-base-uncased` | 135M | base | 23M COVID-19 English Tweets (uncased)
+
+The two pre-trained models `vinai/bertweet-covid19-base-cased` and `vinai/bertweet-covid19-base-uncased` are obtained by further pre-training `vinai/bertweet-base` on a corpus of 23M COVID-19 English Tweets for 40 epochs.
+
+### Example usage
+
+
+```python
+import torch
+from transformers import AutoModel, AutoTokenizer
+
+bertweet = AutoModel.from_pretrained("vinai/bertweet-covid19-base-cased")
+tokenizer = AutoTokenizer.from_pretrained("vinai/bertweet-covid19-base-cased")
+
+# INPUT TWEET IS ALREADY NORMALIZED!
+line = "SC has first two presumptive cases of coronavirus , DHEC confirms HTTPURL via @USER :cry:"
+
+input_ids = torch.tensor([tokenizer.encode(line)])
+
+with torch.no_grad():
+    features = bertweet(input_ids)  # Model outputs are now tuples
+
+## With TensorFlow 2.0+:
+# from transformers import TFAutoModel
+# bertweet = TFAutoModel.from_pretrained("vinai/bertweet-covid19-base-cased")
+```
+
+### Normalize raw input Tweets
+
+Before applying `fastBPE` to the pre-training corpus of 850M English Tweets, we tokenized these Tweets using `TweetTokenizer` from the NLTK toolkit and used the `emoji` package to translate emotion icons into text strings (here, each icon is referred to as a word token). We also normalized the Tweets by converting user mentions and web/URL links into the special tokens `@USER` and `HTTPURL`, respectively. We thus recommend applying the same pre-processing step to raw input Tweets in BERTweet-based downstream applications; this step is available via the tokenizer's `normalization` argument, as shown below.
+
+```python
+import torch
+from transformers import AutoTokenizer
+
+# Load the AutoTokenizer with a normalization mode if the input Tweet is raw
+tokenizer = AutoTokenizer.from_pretrained("vinai/bertweet-covid19-base-cased", normalization=True)
+
+# from transformers import BertweetTokenizer
+# tokenizer = BertweetTokenizer.from_pretrained("vinai/bertweet-covid19-base-cased", normalization=True)
+
+line = "SC has first two presumptive cases of coronavirus, DHEC confirms https://postandcourier.com/health/covid19/sc-has-first-two-presumptive-cases-of-coronavirus-dhec-confirms/article_bddfe4ae-5fd3-11ea-9ce4-5f495366cee6.html?utm_medium=social&utm_source=twitter&utm_campaign=user-share… via @postandcourier"
+
+input_ids = torch.tensor([tokenizer.encode(line)])
+```
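+
+With the tokenizer loaded in normalization mode as above, the encoded Tweet can be fed to the model exactly as in the "Example usage" section; a minimal sketch, assuming `bertweet` has already been loaded with `AutoModel.from_pretrained("vinai/bertweet-covid19-base-cased")`:
+
+```python
+with torch.no_grad():
+    features = bertweet(input_ids)  # the first tuple element holds the last hidden states
+```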
diff --git a/model_cards/vinai/bertweet-covid19-base-uncased/README.md b/model_cards/vinai/bertweet-covid19-base-uncased/README.md
new file mode 100644
index 00000000000..4f807de06aa
--- /dev/null
+++ b/model_cards/vinai/bertweet-covid19-base-uncased/README.md
@@ -0,0 +1,80 @@
+# BERTweet: A pre-trained language model for English Tweets
+
+ - BERTweet is the first public large-scale language model pre-trained for English Tweets. BERTweet is trained based on the [RoBERTa](https://github.com/pytorch/fairseq/blob/master/examples/roberta/README.md) pre-training procedure, using the same model configuration as [BERT-base](https://github.com/google-research/bert).
+ - The corpus used to pre-train BERTweet consists of 850M English Tweets (16B word tokens ~ 80GB), containing 845M Tweets streamed from 01/2012 to 08/2019 and 5M Tweets related to the **COVID-19** pandemic.
+ - BERTweet does better than its competitors RoBERTa-base and [XLM-R-base](https://arxiv.org/abs/1911.02116) and outperforms previous state-of-the-art models on three downstream Tweet NLP tasks of Part-of-speech tagging, Named entity recognition and text classification.
+
+The general architecture and experimental results of BERTweet can be found in our [paper](https://arxiv.org/abs/2005.10200):
+
+ @inproceedings{bertweet,
+ title = {{BERTweet: A pre-trained language model for English Tweets}},
+ author = {Dat Quoc Nguyen and Thanh Vu and Anh Tuan Nguyen},
+ booktitle = {Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations},
+ year = {2020}
+ }
+
+**Please CITE** our paper when BERTweet is used to help produce published results or is incorporated into other software.
+
+For further information or requests, please go to [BERTweet's homepage](https://github.com/VinAIResearch/BERTweet)!
+
+### Installation
+
+ - Python 3.6+, and PyTorch 1.1.0+ (or TensorFlow 2.0+)
+ - Install `transformers`:
+ - `git clone https://github.com/huggingface/transformers.git`
+ - `cd transformers`
+ - `pip3 install --upgrade .`
+ - Install `emoji`: `pip3 install emoji`
+
+### Pre-trained models
+
+
+Model | #params | Arch. | Pre-training data
+---|---|---|---
+`vinai/bertweet-base` | 135M | base | 845M English Tweets (cased)
+`vinai/bertweet-covid19-base-cased` | 135M | base | 23M COVID-19 English Tweets (cased)
+`vinai/bertweet-covid19-base-uncased` | 135M | base | 23M COVID-19 English Tweets (uncased)
+
+The two pre-trained models `vinai/bertweet-covid19-base-cased` and `vinai/bertweet-covid19-base-uncased` are obtained by further pre-training `vinai/bertweet-base` on a corpus of 23M COVID-19 English Tweets for 40 epochs.
+
+### Example usage
+
+
+```python
+import torch
+from transformers import AutoModel, AutoTokenizer
+
+bertweet = AutoModel.from_pretrained("vinai/bertweet-covid19-base-uncased")
+tokenizer = AutoTokenizer.from_pretrained("vinai/bertweet-covid19-base-uncased")
+
+# INPUT TWEET IS ALREADY NORMALIZED!
+line = "SC has first two presumptive cases of coronavirus , DHEC confirms HTTPURL via @USER :cry:"
+
+input_ids = torch.tensor([tokenizer.encode(line)])
+
+with torch.no_grad():
+    features = bertweet(input_ids)  # Model outputs are now tuples
+
+## With TensorFlow 2.0+:
+# from transformers import TFAutoModel
+# bertweet = TFAutoModel.from_pretrained("vinai/bertweet-covid19-base-uncased")
+```
+
+### Normalize raw input Tweets
+
+Before applying `fastBPE` to the pre-training corpus of 850M English Tweets, we tokenized these Tweets using `TweetTokenizer` from the NLTK toolkit and used the `emoji` package to translate emotion icons into text strings (here, each icon is referred to as a word token). We also normalized the Tweets by converting user mentions and web/URL links into the special tokens `@USER` and `HTTPURL`, respectively. We thus recommend applying the same pre-processing step to raw input Tweets in BERTweet-based downstream applications; this step is available via the tokenizer's `normalization` argument, as shown below.
+
+```python
+import torch
+from transformers import AutoTokenizer
+
+# Load the AutoTokenizer with a normalization mode if the input Tweet is raw
+tokenizer = AutoTokenizer.from_pretrained("vinai/bertweet-covid19-base-uncased", normalization=True)
+
+# from transformers import BertweetTokenizer
+# tokenizer = BertweetTokenizer.from_pretrained("vinai/bertweet-covid19-base-uncased", normalization=True)
+
+line = "SC has first two presumptive cases of coronavirus, DHEC confirms https://postandcourier.com/health/covid19/sc-has-first-two-presumptive-cases-of-coronavirus-dhec-confirms/article_bddfe4ae-5fd3-11ea-9ce4-5f495366cee6.html?utm_medium=social&utm_source=twitter&utm_campaign=user-share… via @postandcourier"
+
+input_ids = torch.tensor([tokenizer.encode(line)])
+```
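+
+With the tokenizer loaded in normalization mode as above, the encoded Tweet can be fed to the model exactly as in the "Example usage" section; a minimal sketch, assuming `bertweet` has already been loaded with `AutoModel.from_pretrained("vinai/bertweet-covid19-base-uncased")`:
+
+```python
+with torch.no_grad():
+    features = bertweet(input_ids)  # the first tuple element holds the last hidden states
+```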
diff --git a/model_cards/vinai/phobert-base/README.md b/model_cards/vinai/phobert-base/README.md
index 471c6708b76..afae1177c72 100644
--- a/model_cards/vinai/phobert-base/README.md
+++ b/model_cards/vinai/phobert-base/README.md
@@ -1,6 +1,6 @@
# PhoBERT: Pre-trained language models for Vietnamese
-
-Pre-trained PhoBERT models are the state-of-the-art language models for Vietnamese ([Pho](https://en.wikipedia.org/wiki/Pho), i.e. "Phở", is a popular food in Vietnam):
+
+Pre-trained PhoBERT models are the state-of-the-art language models for Vietnamese ([Pho](https://en.wikipedia.org/wiki/Pho), i.e. "Phở", is a popular food in Vietnam):
- Two PhoBERT versions of "base" and "large" are the first public large-scale monolingual language models pre-trained for Vietnamese. PhoBERT pre-training approach is based on [RoBERTa](https://github.com/pytorch/fairseq/blob/master/examples/roberta/README.md) which optimizes the [BERT](https://github.com/google-research/bert) pre-training procedure for more robust performance.
- PhoBERT outperforms previous monolingual and multilingual approaches, obtaining new state-of-the-art performances on four downstream Vietnamese NLP tasks of Part-of-speech tagging, Dependency parsing, Named-entity recognition and Natural language inference.
@@ -18,28 +18,28 @@ The general architecture and experimental results of PhoBERT can be found in our
For further information or requests, please go to [PhoBERT's homepage](https://github.com/VinAIResearch/PhoBERT)!
-## Installation
- - Python version >= 3.6
- - [PyTorch](http://pytorch.org/) version >= 1.4.0
- - `pip3 install transformers`
+### Installation
+ - Python 3.6+, and PyTorch 1.1.0+ (or TensorFlow 2.0+)
+ - Install `transformers`:
+ - `git clone https://github.com/huggingface/transformers.git`
+ - `cd transformers`
+ - `pip3 install --upgrade .`
-## Pre-trained models
+### Pre-trained models
-
-Model | #params | Arch. | Pre-training data
+Model | #params | Arch. | Pre-training data
---|---|---|---
`vinai/phobert-base` | 135M | base | 20GB of texts
`vinai/phobert-large` | 370M | large | 20GB of texts
-## Example usage
+### Example usage
```python
import torch
-from transformers import AutoModel, AutoTokenizer #, PhobertTokenizer
+from transformers import AutoModel, AutoTokenizer
phobert = AutoModel.from_pretrained("vinai/phobert-base")
tokenizer = AutoTokenizer.from_pretrained("vinai/phobert-base")
-#tokenizer = PhobertTokenizer.from_pretrained("vinai/phobert-base")
# INPUT TEXT MUST BE ALREADY WORD-SEGMENTED!
line = "Tôi là sinh_viên trường đại_học Công_nghệ ."
@@ -48,4 +48,8 @@ input_ids = torch.tensor([tokenizer.encode(line)])
with torch.no_grad():
features = phobert(input_ids) # Models outputs are now tuples
+
+## With TensorFlow 2.0+:
+# from transformers import TFAutoModel
+# phobert = TFAutoModel.from_pretrained("vinai/phobert-base")
```
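+
+A minimal follow-up sketch, assuming the tuple output shown above (where the first element is the last hidden states):
+
+```python
+last_hidden_states = features[0]  # shape: (batch_size, sequence_length, hidden_size)
+print(last_hidden_states.shape)
+```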
diff --git a/model_cards/vinai/phobert-large/README.md b/model_cards/vinai/phobert-large/README.md
index 316ea36478c..7bbf4521ef9 100644
--- a/model_cards/vinai/phobert-large/README.md
+++ b/model_cards/vinai/phobert-large/README.md
@@ -1,6 +1,6 @@
# PhoBERT: Pre-trained language models for Vietnamese
-
-Pre-trained PhoBERT models are the state-of-the-art language models for Vietnamese ([Pho](https://en.wikipedia.org/wiki/Pho), i.e. "Phở", is a popular food in Vietnam):
+
+Pre-trained PhoBERT models are the state-of-the-art language models for Vietnamese ([Pho](https://en.wikipedia.org/wiki/Pho), i.e. "Phở", is a popular food in Vietnam):
- Two PhoBERT versions of "base" and "large" are the first public large-scale monolingual language models pre-trained for Vietnamese. PhoBERT pre-training approach is based on [RoBERTa](https://github.com/pytorch/fairseq/blob/master/examples/roberta/README.md) which optimizes the [BERT](https://github.com/google-research/bert) pre-training procedure for more robust performance.
- PhoBERT outperforms previous monolingual and multilingual approaches, obtaining new state-of-the-art performances on four downstream Vietnamese NLP tasks of Part-of-speech tagging, Dependency parsing, Named-entity recognition and Natural language inference.
@@ -18,28 +18,28 @@ The general architecture and experimental results of PhoBERT can be found in our
For further information or requests, please go to [PhoBERT's homepage](https://github.com/VinAIResearch/PhoBERT)!
-## Installation
- - Python version >= 3.6
- - [PyTorch](http://pytorch.org/) version >= 1.4.0
- - `pip3 install transformers`
+### Installation
+ - Python 3.6+, and PyTorch 1.1.0+ (or TensorFlow 2.0+)
+ - Install `transformers`:
+ - `git clone https://github.com/huggingface/transformers.git`
+ - `cd transformers`
+ - `pip3 install --upgrade .`
-## Pre-trained models
+### Pre-trained models
-
-Model | #params | Arch. | Pre-training data
+Model | #params | Arch. | Pre-training data
---|---|---|---
`vinai/phobert-base` | 135M | base | 20GB of texts
`vinai/phobert-large` | 370M | large | 20GB of texts
-## Example usage
+### Example usage
```python
import torch
-from transformers import AutoModel, AutoTokenizer #, PhobertTokenizer
+from transformers import AutoModel, AutoTokenizer
phobert = AutoModel.from_pretrained("vinai/phobert-large")
tokenizer = AutoTokenizer.from_pretrained("vinai/phobert-large")
-#tokenizer = PhobertTokenizer.from_pretrained("vinai/phobert-base")
# INPUT TEXT MUST BE ALREADY WORD-SEGMENTED!
line = "Tôi là sinh_viên trường đại_học Công_nghệ ."
@@ -48,4 +48,8 @@ input_ids = torch.tensor([tokenizer.encode(line)])
with torch.no_grad():
features = phobert(input_ids) # Models outputs are now tuples
+
+## With TensorFlow 2.0+:
+# from transformers import TFAutoModel
+# phobert = TFAutoModel.from_pretrained("vinai/phobert-large")
```
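+
+A minimal follow-up sketch, assuming the tuple output shown above (where the first element is the last hidden states):
+
+```python
+last_hidden_states = features[0]  # shape: (batch_size, sequence_length, hidden_size)
+print(last_hidden_states.shape)
+```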