mirror of https://github.com/huggingface/transformers.git
* ready for PR
* cleanup
* correct FSMT_PRETRAINED_MODEL_ARCHIVE_LIST
* fix
* perfectionism
* revert change from another PR
* odd, already committed this one
* non-interactive upload workaround
* backup the failed experiment
* store langs in config
* workaround for localizing model path
* doc clean up as in https://github.com/huggingface/transformers/pull/6956
* style
* back out debug mode
* document: run_eval.py --num_beams 10
* remove unneeded constant
* typo
* re-use bart's Attention
* re-use EncoderLayer, DecoderLayer from bart
* refactor
* send to cuda and fp16
* cleanup
* revert (moved to another PR)
* better error message
* document run_eval --num_beams (see the generation sketch after this list)
* solve the problem of the tokenizer finding the right files when the model is local
* polish, remove hardcoded config
* add a note that the file is autogenerated to avoid losing changes
* prep for org change, remove unneeded code
* switch to model4.pt, update scores
* s/python/bash/
* missing init (but doesn't impact the finetuned model)
* cleanup
* major refactor (reuse-bart)
* new model, new expected weights
* cleanup
* cleanup
* full link
* fix model type
* merge porting notes
* style
* cleanup
* have to create a DecoderConfig object to handle vocab_size properly
* doc fix
* add note (not a public class)
* parametrize
* add BLEU score integration tests
* skip test if sacrebleu is not installed (see the test sketch after this list)
* cache heavy models/tokenizers
* some tweaks
* remove tokens that aren't used
* more purging
* simplify code
* switch to using decoder_start_token_id
* add doc
* Revert "major refactor (reuse-bart)"
This reverts commit
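
Several commits above converge on how generation is driven: the language pair is stored in the config, decoding starts from `decoder_start_token_id`, and `run_eval.py` is documented with `--num_beams 10`. Below is a minimal sketch of that flow through the ported model classes; the `facebook/wmt19-en-ru` checkpoint name is an assumption based on the released FSMT models, not a quote from this PR.

```python
# Minimal sketch: beam-search generation with the ported FSMT model.
# The checkpoint name is an assumption; num_beams=10 mirrors the
# documented run_eval.py --num_beams 10 flag.
from transformers import FSMTForConditionalGeneration, FSMTTokenizer

mname = "facebook/wmt19-en-ru"
tokenizer = FSMTTokenizer.from_pretrained(mname)
model = FSMTForConditionalGeneration.from_pretrained(mname)

# The language pair lives in the config ("store langs in config").
print(model.config.langs)  # e.g. ["en", "ru"]

input_ids = tokenizer("Machine learning is great, isn't it?", return_tensors="pt").input_ids
# generate() begins decoding from config.decoder_start_token_id.
outputs = model.generate(input_ids, num_beams=10)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

`num_beams` here is the same knob `run_eval.py` exposes; setting it to 1 falls back to greedy decoding.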
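The BLEU integration-test commits also note skipping when sacrebleu is missing. A hedged sketch of that guard using pytest's stock `importorskip` mechanism; the real suite may gate the test with its own decorator, so the test name and threshold here are illustrative only.

```python
# Sketch of the "skip test if sacrebleu is not installed" guard.
# pytest.importorskip skips the whole module when the import fails.
import pytest

sacrebleu = pytest.importorskip("sacrebleu")

def test_bleu_smoke():
    # corpus_bleu takes hypotheses plus a list of reference streams.
    hyp = ["the cat sat on the mat"]
    refs = [["the cat sat on the mat"]]
    score = sacrebleu.corpus_bleu(hyp, refs).score
    assert score > 99.0  # identical hypothesis/reference -> ~100 BLEU
```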
Directory listing:

* _static
* imgs
* internal
* main_classes
* model_doc
* benchmarks.rst
* bertology.rst
* conf.py
* contributing.md
* converting_tensorflow_models.rst
* custom_datasets.rst
* examples.md
* favicon.ico
* glossary.rst
* index.rst
* installation.md
* migration.md
* model_sharing.rst
* model_summary.rst
* multilingual.rst
* notebooks.md
* perplexity.rst
* philosophy.rst
* preprocessing.rst
* pretrained_models.rst
* quicktour.rst
* serialization.rst
* task_summary.rst
* testing.rst
* tokenizer_summary.rst
* training.rst