[model_cards] dbmdz models

Co-Authored-By: Stefan Schweter <stefan-it@users.noreply.github.com>

parent 6636826f04
commit ddb6f9476b
model_cards/dbmdz/bert-base-german-cased/README.md (new file, 66 lines)

@@ -0,0 +1,66 @@
# 🤗 + 📚 dbmdz German BERT models

In this repository the MDZ Digital Library team (dbmdz) at the Bavarian State
Library open-sources further German BERT models 🎉

# German BERT

## Stats

In addition to the recently released [German BERT](https://deepset.ai/german-bert)
model by [deepset](https://deepset.ai/), we provide another German-language model.

The source data for the model consists of a recent Wikipedia dump, EU Bookshop corpus,
Open Subtitles, CommonCrawl, ParaCrawl and News Crawl. This results in a dataset with
a size of 16GB and 2,350,234,427 tokens.

For sentence splitting, we use [spacy](https://spacy.io/). Our preprocessing steps
(SentencePiece model for vocab generation) follow those used for training
[SciBERT](https://github.com/allenai/scibert). The model was trained with an initial
sequence length of 512 subwords for 1.5M steps.
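
As a rough illustration of this kind of rule-based sentence splitting, here is a minimal
spaCy sketch; the exact spaCy pipeline and settings used to build the pretraining corpus
are not documented in this card.

```python
import spacy

# Minimal rule-based sentence splitting with spaCy (v3 syntax);
# the exact setup used for corpus preprocessing is an assumption here.
nlp = spacy.blank("de")
nlp.add_pipe("sentencizer")

doc = nlp("Die Staatsbibliothek liegt in München. Sie hat sehr viele Bücher!")
print([sent.text for sent in doc.sents])
```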

This release includes both cased and uncased models.

## Model weights

Currently only PyTorch-[Transformers](https://github.com/huggingface/transformers)
compatible weights are available. If you need access to TensorFlow checkpoints,
please raise an issue!

| Model                            | Downloads
| -------------------------------- | ---------------------------------------------------------------------------------------------------------------
| `bert-base-german-dbmdz-cased`   | [`config.json`](https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-german-dbmdz-cased-config.json) • [`pytorch_model.bin`](https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-german-dbmdz-cased-pytorch_model.bin) • [`vocab.txt`](https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-german-dbmdz-cased-vocab.txt)
| `bert-base-german-dbmdz-uncased` | [`config.json`](https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-german-dbmdz-uncased-config.json) • [`pytorch_model.bin`](https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-german-dbmdz-uncased-pytorch_model.bin) • [`vocab.txt`](https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-german-dbmdz-uncased-vocab.txt)

## Usage

With Transformers >= 2.3 our German BERT models can be loaded like:

```python
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("dbmdz/bert-base-german-cased")
model = AutoModel.from_pretrained("dbmdz/bert-base-german-cased")
```
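
Once loaded, the model can be used as a plain feature extractor. A short sketch, assuming a
standard PyTorch setup; with Transformers 2.x the model call returns a tuple whose first
element holds the last hidden states:

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("dbmdz/bert-base-german-cased")
model = AutoModel.from_pretrained("dbmdz/bert-base-german-cased")

# Encode one German sentence; special tokens ([CLS], [SEP]) are added automatically.
input_ids = torch.tensor([tokenizer.encode("Die Bayerische Staatsbibliothek befindet sich in München.")])

with torch.no_grad():
    outputs = model(input_ids)

last_hidden_states = outputs[0]  # shape: (batch_size, sequence_length, hidden_size)
print(last_hidden_states.shape)
```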

## Results

For results on downstream tasks like NER or PoS tagging, please refer to
[this repository](https://github.com/stefan-it/fine-tuned-berts-seq).

# Hugging Face model hub

All models are available on the [Hugging Face model hub](https://huggingface.co/dbmdz).

# Contact (Bugs, Feedback, Contribution and more)

For questions about our BERT models, just open an issue
[here](https://github.com/dbmdz/berts/issues/new) 🤗

# Acknowledgments

Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).
Thanks for providing access to the TFRC ❤️

Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team,
it is possible to download both cased and uncased models from their S3 storage 🤗
model_cards/dbmdz/bert-base-german-uncased/README.md (new file, 66 lines)

@@ -0,0 +1,66 @@
# 🤗 + 📚 dbmdz German BERT models

In this repository the MDZ Digital Library team (dbmdz) at the Bavarian State
Library open-sources further German BERT models 🎉

# German BERT

## Stats

In addition to the recently released [German BERT](https://deepset.ai/german-bert)
model by [deepset](https://deepset.ai/), we provide another German-language model.

The source data for the model consists of a recent Wikipedia dump, EU Bookshop corpus,
Open Subtitles, CommonCrawl, ParaCrawl and News Crawl. This results in a dataset with
a size of 16GB and 2,350,234,427 tokens.

For sentence splitting, we use [spacy](https://spacy.io/). Our preprocessing steps
(SentencePiece model for vocab generation) follow those used for training
[SciBERT](https://github.com/allenai/scibert). The model was trained with an initial
sequence length of 512 subwords for 1.5M steps.

This release includes both cased and uncased models.

## Model weights

Currently only PyTorch-[Transformers](https://github.com/huggingface/transformers)
compatible weights are available. If you need access to TensorFlow checkpoints,
please raise an issue!

| Model                            | Downloads
| -------------------------------- | ---------------------------------------------------------------------------------------------------------------
| `bert-base-german-dbmdz-cased`   | [`config.json`](https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-german-dbmdz-cased-config.json) • [`pytorch_model.bin`](https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-german-dbmdz-cased-pytorch_model.bin) • [`vocab.txt`](https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-german-dbmdz-cased-vocab.txt)
| `bert-base-german-dbmdz-uncased` | [`config.json`](https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-german-dbmdz-uncased-config.json) • [`pytorch_model.bin`](https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-german-dbmdz-uncased-pytorch_model.bin) • [`vocab.txt`](https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-german-dbmdz-uncased-vocab.txt)

## Usage

With Transformers >= 2.3 our German BERT models can be loaded like:

```python
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("dbmdz/bert-base-german-cased")
model = AutoModel.from_pretrained("dbmdz/bert-base-german-cased")
```
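
This card sits alongside the uncased model; a minimal sketch for loading the uncased
checkpoint, assuming it is published under `dbmdz/bert-base-german-uncased` (matching this
model card's directory name):

```python
from transformers import AutoModel, AutoTokenizer

# Assumed model id, taken from this model card's directory name.
tokenizer = AutoTokenizer.from_pretrained("dbmdz/bert-base-german-uncased")
model = AutoModel.from_pretrained("dbmdz/bert-base-german-uncased")
```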

## Results

For results on downstream tasks like NER or PoS tagging, please refer to
[this repository](https://github.com/stefan-it/fine-tuned-berts-seq).

# Hugging Face model hub

All models are available on the [Hugging Face model hub](https://huggingface.co/dbmdz).

# Contact (Bugs, Feedback, Contribution and more)

For questions about our BERT models, just open an issue
[here](https://github.com/dbmdz/berts/issues/new) 🤗

# Acknowledgments

Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).
Thanks for providing access to the TFRC ❤️

Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team,
it is possible to download both cased and uncased models from their S3 storage 🤗
model_cards/dbmdz/bert-base-italian-cased/README.md (new file, 73 lines)

@@ -0,0 +1,73 @@
# 🤗 + 📚 dbmdz BERT models

In this repository the MDZ Digital Library team (dbmdz) at the Bavarian State
Library open-sources Italian BERT models 🎉

# Italian BERT

The source data for the Italian BERT model consists of a recent Wikipedia dump and
various texts from the [OPUS corpora](http://opus.nlpl.eu/) collection. The final
training corpus has a size of 13GB and 2,050,057,573 tokens.

For sentence splitting, we use NLTK (faster than spacy).
Our cased and uncased models were trained with an initial sequence length of 512
subwords for ~2-3M steps.
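
As a rough illustration (not the exact preprocessing script used here), Italian sentence
splitting with NLTK's Punkt tokenizer looks like this:

```python
import nltk

# One-time download of the Punkt sentence tokenizer models.
nltk.download("punkt")

text = "Questa è la prima frase. E questa è la seconda frase!"
print(nltk.sent_tokenize(text, language="italian"))
```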

For the XXL Italian models, we use the same training data from OPUS and extend
it with data from the Italian part of the [OSCAR corpus](https://traces1.inria.fr/oscar/).
Thus, the final training corpus has a size of 81GB and 13,138,379,147 tokens.

## Model weights

Currently only PyTorch-[Transformers](https://github.com/huggingface/transformers)
compatible weights are available. If you need access to TensorFlow checkpoints,
please raise an issue!

| Model                                   | Downloads
| --------------------------------------- | ---------------------------------------------------------------------------------------------------------------
| `dbmdz/bert-base-italian-cased`         | [`config.json`](https://cdn.huggingface.co/dbmdz/bert-base-italian-cased/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/bert-base-italian-cased/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/bert-base-italian-cased/vocab.txt)
| `dbmdz/bert-base-italian-uncased`       | [`config.json`](https://cdn.huggingface.co/dbmdz/bert-base-italian-uncased/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/bert-base-italian-uncased/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/bert-base-italian-uncased/vocab.txt)
| `dbmdz/bert-base-italian-xxl-cased`     | [`config.json`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-cased/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-cased/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-cased/vocab.txt)
| `dbmdz/bert-base-italian-xxl-uncased`   | [`config.json`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-uncased/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-uncased/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-uncased/vocab.txt)

## Results

For results on downstream tasks like NER or PoS tagging, please refer to
[this repository](https://github.com/stefan-it/fine-tuned-berts-seq).

## Usage

With Transformers >= 2.3 our Italian BERT models can be loaded like:

```python
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("dbmdz/bert-base-italian-cased")
model = AutoModel.from_pretrained("dbmdz/bert-base-italian-cased")
```

To load the (recommended) Italian XXL BERT models, just use:

```python
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("dbmdz/bert-base-italian-xxl-cased")
model = AutoModel.from_pretrained("dbmdz/bert-base-italian-xxl-cased")
```
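
After loading, the model can be used as a feature extractor. A short sketch with the XXL
cased model, assuming a standard PyTorch setup; with Transformers 2.x the model call
returns a tuple whose first element holds the last hidden states:

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("dbmdz/bert-base-italian-xxl-cased")
model = AutoModel.from_pretrained("dbmdz/bert-base-italian-xxl-cased")

# Encode one Italian sentence; special tokens are added automatically.
input_ids = torch.tensor([tokenizer.encode("La biblioteca si trova a Monaco di Baviera.")])

with torch.no_grad():
    outputs = model(input_ids)

last_hidden_states = outputs[0]  # (batch_size, sequence_length, hidden_size)
print(last_hidden_states.shape)
```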

# Hugging Face model hub

All models are available on the [Hugging Face model hub](https://huggingface.co/dbmdz).

# Contact (Bugs, Feedback, Contribution and more)

For questions about our BERT models, just open an issue
[here](https://github.com/dbmdz/berts/issues/new) 🤗

# Acknowledgments

Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).
Thanks for providing access to the TFRC ❤️

Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team,
it is possible to download both cased and uncased models from their S3 storage 🤗
model_cards/dbmdz/bert-base-italian-uncased/README.md (new file, 73 lines)

@@ -0,0 +1,73 @@
# 🤗 + 📚 dbmdz BERT models

In this repository the MDZ Digital Library team (dbmdz) at the Bavarian State
Library open-sources Italian BERT models 🎉

# Italian BERT

The source data for the Italian BERT model consists of a recent Wikipedia dump and
various texts from the [OPUS corpora](http://opus.nlpl.eu/) collection. The final
training corpus has a size of 13GB and 2,050,057,573 tokens.

For sentence splitting, we use NLTK (faster than spacy).
Our cased and uncased models were trained with an initial sequence length of 512
subwords for ~2-3M steps.

For the XXL Italian models, we use the same training data from OPUS and extend
it with data from the Italian part of the [OSCAR corpus](https://traces1.inria.fr/oscar/).
Thus, the final training corpus has a size of 81GB and 13,138,379,147 tokens.

## Model weights

Currently only PyTorch-[Transformers](https://github.com/huggingface/transformers)
compatible weights are available. If you need access to TensorFlow checkpoints,
please raise an issue!

| Model                                   | Downloads
| --------------------------------------- | ---------------------------------------------------------------------------------------------------------------
| `dbmdz/bert-base-italian-cased`         | [`config.json`](https://cdn.huggingface.co/dbmdz/bert-base-italian-cased/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/bert-base-italian-cased/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/bert-base-italian-cased/vocab.txt)
| `dbmdz/bert-base-italian-uncased`       | [`config.json`](https://cdn.huggingface.co/dbmdz/bert-base-italian-uncased/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/bert-base-italian-uncased/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/bert-base-italian-uncased/vocab.txt)
| `dbmdz/bert-base-italian-xxl-cased`     | [`config.json`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-cased/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-cased/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-cased/vocab.txt)
| `dbmdz/bert-base-italian-xxl-uncased`   | [`config.json`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-uncased/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-uncased/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-uncased/vocab.txt)

## Results

For results on downstream tasks like NER or PoS tagging, please refer to
[this repository](https://github.com/stefan-it/fine-tuned-berts-seq).

## Usage

With Transformers >= 2.3 our Italian BERT models can be loaded like:

```python
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("dbmdz/bert-base-italian-cased")
model = AutoModel.from_pretrained("dbmdz/bert-base-italian-cased")
```

To load the (recommended) Italian XXL BERT models, just use:

```python
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("dbmdz/bert-base-italian-xxl-cased")
model = AutoModel.from_pretrained("dbmdz/bert-base-italian-xxl-cased")
```
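
This card accompanies the uncased model; the uncased checkpoints listed in the table above
can be loaded the same way, for example:

```python
from transformers import AutoModel, AutoTokenizer

# Model ids as listed in the table above.
tokenizer = AutoTokenizer.from_pretrained("dbmdz/bert-base-italian-uncased")
model = AutoModel.from_pretrained("dbmdz/bert-base-italian-uncased")

# Or the (recommended) XXL uncased variant:
xxl_tokenizer = AutoTokenizer.from_pretrained("dbmdz/bert-base-italian-xxl-uncased")
xxl_model = AutoModel.from_pretrained("dbmdz/bert-base-italian-xxl-uncased")
```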

# Hugging Face model hub

All models are available on the [Hugging Face model hub](https://huggingface.co/dbmdz).

# Contact (Bugs, Feedback, Contribution and more)

For questions about our BERT models, just open an issue
[here](https://github.com/dbmdz/berts/issues/new) 🤗

# Acknowledgments

Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).
Thanks for providing access to the TFRC ❤️

Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team,
it is possible to download both cased and uncased models from their S3 storage 🤗
model_cards/dbmdz/bert-base-italian-xxl-cased/README.md (new file, 73 lines)

@@ -0,0 +1,73 @@
# 🤗 + 📚 dbmdz BERT models

In this repository the MDZ Digital Library team (dbmdz) at the Bavarian State
Library open-sources Italian BERT models 🎉

# Italian BERT

The source data for the Italian BERT model consists of a recent Wikipedia dump and
various texts from the [OPUS corpora](http://opus.nlpl.eu/) collection. The final
training corpus has a size of 13GB and 2,050,057,573 tokens.

For sentence splitting, we use NLTK (faster than spacy).
Our cased and uncased models were trained with an initial sequence length of 512
subwords for ~2-3M steps.

For the XXL Italian models, we use the same training data from OPUS and extend
it with data from the Italian part of the [OSCAR corpus](https://traces1.inria.fr/oscar/).
Thus, the final training corpus has a size of 81GB and 13,138,379,147 tokens.

## Model weights

Currently only PyTorch-[Transformers](https://github.com/huggingface/transformers)
compatible weights are available. If you need access to TensorFlow checkpoints,
please raise an issue!

| Model                                   | Downloads
| --------------------------------------- | ---------------------------------------------------------------------------------------------------------------
| `dbmdz/bert-base-italian-cased`         | [`config.json`](https://cdn.huggingface.co/dbmdz/bert-base-italian-cased/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/bert-base-italian-cased/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/bert-base-italian-cased/vocab.txt)
| `dbmdz/bert-base-italian-uncased`       | [`config.json`](https://cdn.huggingface.co/dbmdz/bert-base-italian-uncased/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/bert-base-italian-uncased/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/bert-base-italian-uncased/vocab.txt)
| `dbmdz/bert-base-italian-xxl-cased`     | [`config.json`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-cased/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-cased/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-cased/vocab.txt)
| `dbmdz/bert-base-italian-xxl-uncased`   | [`config.json`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-uncased/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-uncased/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-uncased/vocab.txt)

## Results

For results on downstream tasks like NER or PoS tagging, please refer to
[this repository](https://github.com/stefan-it/fine-tuned-berts-seq).

## Usage

With Transformers >= 2.3 our Italian BERT models can be loaded like:

```python
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("dbmdz/bert-base-italian-cased")
model = AutoModel.from_pretrained("dbmdz/bert-base-italian-cased")
```

To load the (recommended) Italian XXL BERT models, just use:

```python
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("dbmdz/bert-base-italian-xxl-cased")
model = AutoModel.from_pretrained("dbmdz/bert-base-italian-xxl-cased")
```
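
If a single vector per sentence is needed rather than per-token features, one simple option
(a sketch, not an official recipe from this card) is to mean-pool the last hidden states:

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("dbmdz/bert-base-italian-xxl-cased")
model = AutoModel.from_pretrained("dbmdz/bert-base-italian-xxl-cased")

input_ids = torch.tensor([tokenizer.encode("La Biblioteca di Stato Bavarese si trova a Monaco.")])

with torch.no_grad():
    last_hidden_states = model(input_ids)[0]  # (1, seq_len, hidden_size)

# Mean pooling over the token dimension; just one of several pooling strategies.
sentence_embedding = last_hidden_states.mean(dim=1)
print(sentence_embedding.shape)  # (1, hidden_size), e.g. (1, 768) for a base-sized model
```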

# Hugging Face model hub

All models are available on the [Hugging Face model hub](https://huggingface.co/dbmdz).

# Contact (Bugs, Feedback, Contribution and more)

For questions about our BERT models, just open an issue
[here](https://github.com/dbmdz/berts/issues/new) 🤗

# Acknowledgments

Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).
Thanks for providing access to the TFRC ❤️

Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team,
it is possible to download both cased and uncased models from their S3 storage 🤗
model_cards/dbmdz/bert-base-italian-xxl-uncased/README.md (new file, 73 lines)

@@ -0,0 +1,73 @@
# 🤗 + 📚 dbmdz BERT models

In this repository the MDZ Digital Library team (dbmdz) at the Bavarian State
Library open-sources Italian BERT models 🎉

# Italian BERT

The source data for the Italian BERT model consists of a recent Wikipedia dump and
various texts from the [OPUS corpora](http://opus.nlpl.eu/) collection. The final
training corpus has a size of 13GB and 2,050,057,573 tokens.

For sentence splitting, we use NLTK (faster than spacy).
Our cased and uncased models were trained with an initial sequence length of 512
subwords for ~2-3M steps.

For the XXL Italian models, we use the same training data from OPUS and extend
it with data from the Italian part of the [OSCAR corpus](https://traces1.inria.fr/oscar/).
Thus, the final training corpus has a size of 81GB and 13,138,379,147 tokens.

## Model weights

Currently only PyTorch-[Transformers](https://github.com/huggingface/transformers)
compatible weights are available. If you need access to TensorFlow checkpoints,
please raise an issue!

| Model                                   | Downloads
| --------------------------------------- | ---------------------------------------------------------------------------------------------------------------
| `dbmdz/bert-base-italian-cased`         | [`config.json`](https://cdn.huggingface.co/dbmdz/bert-base-italian-cased/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/bert-base-italian-cased/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/bert-base-italian-cased/vocab.txt)
| `dbmdz/bert-base-italian-uncased`       | [`config.json`](https://cdn.huggingface.co/dbmdz/bert-base-italian-uncased/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/bert-base-italian-uncased/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/bert-base-italian-uncased/vocab.txt)
| `dbmdz/bert-base-italian-xxl-cased`     | [`config.json`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-cased/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-cased/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-cased/vocab.txt)
| `dbmdz/bert-base-italian-xxl-uncased`   | [`config.json`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-uncased/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-uncased/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-uncased/vocab.txt)

## Results

For results on downstream tasks like NER or PoS tagging, please refer to
[this repository](https://github.com/stefan-it/fine-tuned-berts-seq).

## Usage

With Transformers >= 2.3 our Italian BERT models can be loaded like:

```python
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("dbmdz/bert-base-italian-cased")
model = AutoModel.from_pretrained("dbmdz/bert-base-italian-cased")
```

To load the (recommended) Italian XXL BERT models, just use:

```python
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("dbmdz/bert-base-italian-xxl-cased")
model = AutoModel.from_pretrained("dbmdz/bert-base-italian-xxl-cased")
```
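
Since this card ships with the XXL uncased model, the same pattern applies with its id
from the table above:

```python
from transformers import AutoModel, AutoTokenizer

# Model id as listed in the table above.
tokenizer = AutoTokenizer.from_pretrained("dbmdz/bert-base-italian-xxl-uncased")
model = AutoModel.from_pretrained("dbmdz/bert-base-italian-xxl-uncased")
```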

# Hugging Face model hub

All models are available on the [Hugging Face model hub](https://huggingface.co/dbmdz).

# Contact (Bugs, Feedback, Contribution and more)

For questions about our BERT models, just open an issue
[here](https://github.com/dbmdz/berts/issues/new) 🤗

# Acknowledgments

Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).
Thanks for providing access to the TFRC ❤️

Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team,
it is possible to download both cased and uncased models from their S3 storage 🤗