Francesco
62c1fc3c1e
Removed duplicate XLMConfig, XLMForQuestionAnswering and XLMTokenizer from the import statement of the run_squad.py script
2019-12-19 09:50:56 -05:00
Ejar
284572efc0
Fixed a typo in the link
...
Updated documentation to fix the typo
2019-12-19 09:36:43 -05:00
Thomas Wolf
f061606277
Merge pull request #2164 from huggingface/cleanup-configs
...
[SMALL BREAKING CHANGE] Cleaning up configuration classes - Adding Model Cards
2019-12-17 09:10:16 +01:00
Lysandre
18a879f475
fix #2180
2019-12-16 16:44:29 -05:00
Lysandre
d803409215
Fix run squad evaluate during training
2019-12-16 16:31:38 -05:00
thomwolf
7140363e09
update bertabs
2019-12-14 09:44:53 +01:00
Thomas Wolf
a52d56c8d9
Merge branch 'master' into cleanup-configs
2019-12-14 09:43:07 +01:00
Lysandre
c8ed1c82c8
[SQUAD] Load checkpoint when evaluating without training
2019-12-13 12:13:48 -05:00
Pierric Cistac
5a5c4349e8
Fix summarization to_cpu doc
2019-12-13 10:02:33 -05:00
thomwolf
47f0e3cfb7
cleaning up configuration classes
2019-12-13 14:33:24 +01:00
LysandreJik
7296f1010b
Clean up SQuAD and allow train_file and predict_file usage
2019-12-12 13:01:04 -05:00
Alan deLevie
fbf5455a86
Fix typo in examples/run_glue.py args declaration.
...
deay -> decay
2019-12-12 11:16:19 -05:00
LysandreJik
6a73382706
Complete warning + cleanup
2019-12-10 14:33:24 -05:00
Lysandre
dc4e9e5cb3
DataParallel for SQuAD + fix XLM
2019-12-10 19:21:20 +00:00
Rémi Louf
4b82c485de
remove misplaced summarization documentation
2019-12-10 09:13:33 -05:00
Thomas Wolf
e57d00ee10
Merge pull request #1984 from huggingface/squad-refactor
...
[WIP] Squad refactor
2019-12-10 11:07:26 +01:00
Julien Chaumond
1d18930462
Harmonize no_cuda flag with other scripts
2019-12-09 20:37:55 -05:00
Rémi Louf
f7eba09007
clean for release
2019-12-09 20:37:55 -05:00
Rémi Louf
2a64107e44
improve device usage
2019-12-09 20:37:55 -05:00
Rémi Louf
c0707a85d2
add README
2019-12-09 20:37:55 -05:00
Rémi Louf
ade3cdf5ad
integrate ROUGE
2019-12-09 20:37:55 -05:00
Rémi Louf
076602bdc4
prevent BERT weights from being downloaded twice
2019-12-09 20:37:55 -05:00
Rémi Louf
a1994a71ee
simplified model and configuration
2019-12-09 20:37:55 -05:00
Rémi Louf
3a9a9f7861
default output dir to documents dir
2019-12-09 20:37:55 -05:00
Rémi Louf
693606a75c
update the docs
2019-12-09 20:37:55 -05:00
Rémi Louf
2403a66598
give transformers API to BertAbs
2019-12-09 20:37:55 -05:00
Rémi Louf
ba089c780b
share pretrained embeddings
2019-12-09 20:37:55 -05:00
Rémi Louf
9660ba1cbd
Add beam search
2019-12-09 20:37:55 -05:00
Rémi Louf
1c71ecc880
load the pretrained weights for encoder-decoder
...
We currently save the pretrained weights of the encoder and decoder in
two separate directories, `encoder` and `decoder`. However, for the
`from_pretrained` function to work with automodels, we need to
specify the type of model in the path to the weights.
The path to the encoder/decoder weights is handled by the
`PreTrainedEncoderDecoder` class in the `save_pretrained` function. Since
there is no easy way to infer the type of model that was initialized for
the encoder and decoder, we add a `model_type` parameter to the function.
This is not an ideal solution, as it is error-prone, and the model type
should be carried by the Model classes somehow.
This is a temporary fix that should be changed before merging.
2019-12-09 20:37:55 -05:00
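A minimal sketch of the saving scheme the commit above describes, embedding the model type in the weight paths so automodels can infer the architecture; the `model_type` argument follows the commit message, but the exact signature and directory names here are assumptions, not the merged API:

```python
import os

class PreTrainedEncoderDecoder:
    # self.encoder and self.decoder are assumed to be pretrained models
    # exposing the usual save_pretrained(directory) method.

    def save_pretrained(self, save_directory, model_type="bert"):
        # Hypothetical sketch: embed the model type in the subdirectory
        # names so that from_pretrained/AutoModel can infer the
        # architecture from the path to the weights.
        encoder_dir = os.path.join(save_directory, model_type + "_encoder")
        decoder_dir = os.path.join(save_directory, model_type + "_decoder")
        os.makedirs(encoder_dir, exist_ok=True)
        os.makedirs(decoder_dir, exist_ok=True)
        self.encoder.save_pretrained(encoder_dir)
        self.decoder.save_pretrained(decoder_dir)
```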
Rémi Louf
07f4cd73f6
update function to add special tokens
...
Since I started my PR, the `add_special_token_single_sequence` function
has been deprecated in favor of another; I replaced it with the new function.
2019-12-09 20:37:55 -05:00
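The commit does not name the replacement; the sketch below assumes it is `build_inputs_with_special_tokens`, the tokenizer method that superseded the per-sequence helpers:

```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
token_ids = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("Hello world"))

# Deprecated call mentioned in the commit:
#   tokenizer.add_special_token_single_sequence(token_ids)
# Assumed replacement: wraps the ids with [CLS] ... [SEP].
ids = tokenizer.build_inputs_with_special_tokens(token_ids)
```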
Bilal Khan
79526f82f5
Remove unnecessary epoch variable
2019-12-09 16:24:35 -05:00
Bilal Khan
9626e0458c
Add functionality to continue training from last saved global_step
2019-12-09 16:24:35 -05:00
Bilal Khan
2d73591a18
Stop saving current epoch
2019-12-09 16:24:35 -05:00
Bilal Khan
0eb973b0d9
Use saved optimizer and scheduler states if available
2019-12-09 16:24:35 -05:00
Bilal Khan
a03fcf570d
Save tokenizer after each epoch to be able to resume training from a checkpoint
2019-12-09 16:24:35 -05:00
Bilal Khan
f71b1bb05a
Save optimizer state, scheduler state and current epoch
2019-12-09 16:24:35 -05:00
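Taken together, the six commits above add checkpoint resumption to the training loop. A minimal sketch of the pattern, assuming the `checkpoint-<global_step>` layout used by the example scripts; the helper names are illustrative, not the script's actual functions:

```python
import os
import torch

def save_training_state(output_dir, model, tokenizer, optimizer, scheduler, global_step):
    # Save model, tokenizer, and optimizer/scheduler states under a
    # step-stamped checkpoint directory (illustrative helper).
    ckpt_dir = os.path.join(output_dir, "checkpoint-{}".format(global_step))
    os.makedirs(ckpt_dir, exist_ok=True)
    model.save_pretrained(ckpt_dir)
    tokenizer.save_pretrained(ckpt_dir)
    torch.save(optimizer.state_dict(), os.path.join(ckpt_dir, "optimizer.pt"))
    torch.save(scheduler.state_dict(), os.path.join(ckpt_dir, "scheduler.pt"))

def resume_training_state(ckpt_dir, optimizer, scheduler):
    # Use saved optimizer and scheduler states if available; the
    # global_step is recovered from the directory name.
    opt_path = os.path.join(ckpt_dir, "optimizer.pt")
    sch_path = os.path.join(ckpt_dir, "scheduler.pt")
    if os.path.isfile(opt_path) and os.path.isfile(sch_path):
        optimizer.load_state_dict(torch.load(opt_path))
        scheduler.load_state_dict(torch.load(sch_path))
    return int(ckpt_dir.rstrip("/").split("-")[-1])
```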
LysandreJik
2a4ef098d6
Add ALBERT and XLM to SQuAD script
2019-12-09 10:46:47 -05:00
Lysandre Debut
00c4e39581
Merge branch 'master' into squad-refactor
2019-12-09 10:41:15 -05:00
Thomas Wolf
5482822a2b
Merge pull request #2046 from jplu/tf2-ner-example
...
Add NER TF2 example.
2019-12-06 12:12:22 +01:00
LysandreJik
e9217da5ff
Cleanup
...
Improve global visibility of the run_squad script, remove unused files, and fix issues related to XLNet.
2019-12-05 16:01:51 -05:00
LysandreJik
9ecd83dace
Patch evaluation for impossible values + cleanup
2019-12-05 14:44:57 -05:00
VictorSanh
35ff345fc9
update requirements
2019-12-05 12:07:04 -05:00
VictorSanh
552c44a9b1
release DistilmBERT
2019-12-05 10:14:58 -05:00
Rosanne Liu
ee53de7aac
PR for PPLM (#2060)
...
* license
* changes
* ok
* Update paper link and commands to run
* pointer to uber repo
2019-12-05 09:20:07 -05:00
Julien Plu
9200a759d7
Add a few tests for the TF optimization file, with some info in the documentation. Complete the README.
2019-12-05 12:56:43 +01:00
thomwolf
75a97af6bc
fix #1450 - add doc
2019-12-05 11:26:55 +01:00
LysandreJik
f7e4a7cdfa
Cleanup
2019-12-04 16:24:15 -05:00
LysandreJik
cca75e7884
Kill the demon spawn
2019-12-04 15:42:29 -05:00
LysandreJik
9ddc3f1a12
Naming update + XLNet/XLM evaluation
2019-12-04 10:37:00 -05:00
thomwolf
5bfcd0485e
fix #1991
2019-12-04 14:53:11 +01:00