Mirror of https://github.com/huggingface/transformers.git (synced 2025-07-30 17:52:35 +06:00)

* doc
* [tests] Add sample files for a regression task
* [HUGE] Trainer
* Feedback from @sshleifer
* Feedback from @thomwolf + logging tweak
* [file_utils] when downloading concurrently, get_from_cache will use the cached file for subsequent processes
* [glue] Use default max_seq_length of 128 like before
* [glue] move DataTrainingArguments around
* [ner] Change interface of InputExample, and align run_{tf,pl}
* Re-align the pl scripts a little bit
* ner
* [ner] Add integration test
* Fix language_modeling with API tweak
* [ci] Tweak loss target
* Don't break console output
* amp.initialize: model must be on right device before
* [multiple-choice] update for Trainer
* Re-align to 827d6d6ef0
26 lines · 218 B · Plaintext
B-LOC
B-LOCderiv
B-LOCpart
B-ORG
B-ORGderiv
B-ORGpart
B-OTH
B-OTHderiv
B-OTHpart
B-PER
B-PERderiv
B-PERpart
I-LOC
I-LOCderiv
I-LOCpart
I-ORG
I-ORGderiv
I-ORGpart
I-OTH
I-OTHderiv
I-OTHpart
I-PER
I-PERderiv
I-PERpart
O
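A labels file like this is typically consumed one tag per line to build the label-to-id mapping for token classification. A minimal sketch, assuming a simple one-label-per-line format (the `get_labels` helper here is illustrative, not necessarily the repository's exact utility):

```python
import os
import tempfile

def get_labels(path):
    """Read one NER label per line, skipping blank lines."""
    with open(path, encoding="utf-8") as f:
        return [line.strip() for line in f if line.strip()]

# Demo with a small stand-in labels file in the same BIO format.
_tags = "B-LOC\nI-LOC\nB-PER\nI-PER\nO\n"
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as _tmp:
    _tmp.write(_tags)
    _path = _tmp.name

labels = get_labels(_path)
# Models need a stable integer id per label.
label2id = {label: i for i, label in enumerate(labels)}
os.unlink(_path)
```

The `O` tag marks tokens outside any entity; `B-`/`I-` prefixes mark the beginning and inside of an entity span in the BIO scheme.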