This avoids an unnecessary call and prevents problems during the initialization of class hierarchies.
Co-authored-by: Samuel Melm <samuel.melm@stud.uni-heidelberg.de>
* First pass; all tests pass
* WIP
* Adding file to documentation tests
* Change the base model for the example in the doc test.
* Fix code styling by running `make fixup`
* Ran `make style`
* Reverted to the gpt2 model rather than distilgpt2.
Then used a token classification model instead of a sequence classification model for the example.
* Fix Styling Issue
* Hopefully ignores the formatting issue.
Co-authored-by: ArEnSc <xx.mike.chung.xx@gmail.com>
* Fix T5 sharding on TPU Pods
The current script doesn't work properly on a TPU pod because the global batch is not divided correctly per host.
This pull request fixes the issue by dividing the global batch across hosts before it is sharded on each host.
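A minimal sketch of the idea (illustrative, not the actual script): each host first takes its own slice of the global batch, then shards that slice across its local devices.

```python
import jax
import numpy as np

def shard_per_host(global_batch: np.ndarray) -> np.ndarray:
    # Each host keeps only its own slice of the global batch...
    per_host = global_batch.shape[0] // jax.process_count()
    start = jax.process_index() * per_host
    host_batch = global_batch[start : start + per_host]
    # ...and then shards that slice across its local devices.
    local_devices = jax.local_device_count()
    return host_batch.reshape((local_devices, -1) + host_batch.shape[1:])
```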
* fix style
Co-authored-by: ahmed-elnaggar <ahmed.elnaggar@allianz.com>
I create an archive of older checkpoints during training; the archived checkpoint has a name of the form `f"{checkpoint_prefix}-*"` with a `.zip` or `.tar` extension.
Previously, `glob(f"{checkpoint_prefix}-*")` picked up all files and folders starting with the checkpoint prefix, while `shutil.rmtree(checkpoint)` expects a folder name; once it eventually hit a zip file, training crashed. Adding `if os.path.isdir(x)` keeps only folders in `glob_checkpoints`.
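A minimal sketch of the fix (simplified from the actual checkpoint-rotation logic):

```python
import os
import shutil
from glob import glob

checkpoint_prefix = "checkpoint"
# Keep only directories: archived checkpoints such as checkpoint-100.zip
# would otherwise be passed to shutil.rmtree, which expects a folder.
glob_checkpoints = [x for x in glob(f"{checkpoint_prefix}-*") if os.path.isdir(x)]
for checkpoint in glob_checkpoints:  # e.g. the stale ones selected for deletion
    shutil.rmtree(checkpoint)
```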
* add simple multi-GPU completion
* add human_eval_multi_gpu
* use a copy strategy to distribute samples across GPUs, to avoid padding
* add doc string
* update code style
* use task id to arrange output
* truncate inputs to avoid zero padding
* Stop the copy mechanism
* update style
* restore copies to scale better in distributed mode
* update style
* replace human eval
* Apply suggestions from code review
1. Tokenize all inputs at the same time
2. Use `attention_mask` to get the input length (see the sketch below)
3. Other small fixes
Co-authored-by: Leandro von Werra <lvwerra@users.noreply.github.com>
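A minimal sketch of points 1 and 2 (the model and prompts are illustrative):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # gpt2 has no pad token by default

prompts = ["def add(a, b):", "def fib(n):"]
# Tokenize the whole batch in one call instead of looping per sample.
batch = tokenizer(prompts, padding=True, return_tensors="pt")
# attention_mask is 1 for real tokens and 0 for padding, so summing each row
# gives the true (unpadded) length of every input.
input_lengths = batch["attention_mask"].sum(dim=1)
```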
* correct typo and update docstring
* update code style
* remove num sample division constraint
* remove max len calculation
* use `accelerator.gather` once to speed up (sketched below)
* use Accelerate's `set_seed`; update the accelerate version
* correct gather bug
Co-authored-by: Leandro von Werra <lvwerra@users.noreply.github.com>
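A minimal sketch of the Accelerate pieces involved, including using the task id to arrange the output (the tensors are illustrative):

```python
import torch
from accelerate import Accelerator
from accelerate.utils import set_seed

set_seed(42)  # seeds python, numpy, and torch in one call
accelerator = Accelerator()

# Illustrative per-process outputs, tagged with the task id they belong to.
task_ids = torch.tensor([3, 1], device=accelerator.device)
outputs = torch.tensor([[11, 12], [21, 22]], device=accelerator.device)

# Calling gather once on whole tensors avoids a per-sample synchronization.
all_ids = accelerator.gather(task_ids)
all_outputs = accelerator.gather(outputs)
# Sorting by task id restores a deterministic order regardless of which
# process produced which sample.
all_outputs = all_outputs[torch.argsort(all_ids)]
```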
* update proto sentencepiece model
* Revert "update proto sentencepiece model"
This reverts commit b07f671747.
* add check
* add test
* Revert "Revert "update proto sentencepiece model""
This reverts commit 46108257b8.
* add a test for the log level
* add a second test for the log level
* emit the warning at the warning level (sketched below)
* clean
* format
* add explanation in docstring
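A minimal sketch of emitting the check's message at the warning level via the transformers logging utilities (the message text is illustrative):

```python
from transformers.utils import logging

logger = logging.get_logger(__name__)
# Reported at the WARNING level, so users can filter it by verbosity
# instead of training failing hard on the check.
logger.warning("The sentencepiece model being converted may not behave as expected.")
```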
* Fixed some bugs involving saving during epochs
* Added tests mimicking the existing examples tests
* Added JSON exporting to all `no_trainer` examples for consistency
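A minimal sketch of the export (the path, filename, and keys are illustrative):

```python
import json
import os

results = {"eval_accuracy": 0.87, "train_loss": 0.35}  # illustrative values
output_dir = "out"
os.makedirs(output_dir, exist_ok=True)
# Written at the end of the example so results can be parsed uniformly,
# e.g. by the examples tests.
with open(os.path.join(output_dir, "all_results.json"), "w") as f:
    json.dump(results, f, indent=4)
```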
* Add TapexTokenizer
* Improve docstrings and add an option to provide the answer
* Remove option for pretokenized inputs
* Add TAPEX to README
* Fix copies
* Remove option for pretokenized inputs
* Initial commit: add tapex fine-tuning examples on both table-based question answering and table-based fact verification.
* - Draft a README file for running the script and introducing some background.
- Remove unused code lines in tabfact script.
- Disable the default `pad_to_max_length` option, which is memory-consuming.
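A minimal sketch of why disabling it helps (the checkpoint is illustrative; the TAPEX examples use their own tokenizer):

```python
from transformers import AutoTokenizer, DataCollatorWithPadding

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# Memory-hungry: every example is padded out to the full max_length.
static = tokenizer("a short sentence", padding="max_length", max_length=512)

# Cheaper: pad each batch only to its longest member at collation time.
collator = DataCollatorWithPadding(tokenizer=tokenizer)
batch = collator([tokenizer("a short sentence"), tokenizer("another one")])
```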
* * Support `as_target_tokenizer` function for TapexTokenizer.
* Fix the do_lower_case behaviour of TapexTokenizer.
* Add unit tests for target scenarios and cased/uncased scenarios for both source and target.
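A minimal sketch of the target-tokenizer pattern for TapexTokenizer (the checkpoint and data are illustrative):

```python
import pandas as pd
from transformers import TapexTokenizer

tokenizer = TapexTokenizer.from_pretrained("microsoft/tapex-base")
table = pd.DataFrame({"city": ["Paris", "Berlin"], "country": ["France", "Germany"]})

# Source side: the table plus the query are flattened and encoded.
model_inputs = tokenizer(table=table, query="which city is in France?")
# Target side: inside the context manager the same tokenizer encodes answers.
with tokenizer.as_target_tokenizer():
    model_inputs["labels"] = tokenizer("Paris")["input_ids"]
```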
* * Replace the label-side BartTokenizer with TapexTokenizer's `as_target_tokenizer` function.
* Fix typos in tapex example README.
* * fix the evaluation script - remove the property `task_name`
* * Make the label space more clear for tabfact tasks
* * Using a new fine-tuning script for tapex-base on tabfact.
* * Remove the lowercasing code outside the tokenizer - the tokenizer controls `do_lower_case`
* Guarantee the hyper-parameters can run without out-of-memory errors on a 16GB card, and report the newly reproduced number on WikiSQL
* * Remove the default tokenizer_name option.
* Provide evaluation command.
* * Support for WikiTableQuestion dataset.
* Fix a typo in README.
* * Fix the dataset's key name in WikiTableQuestions
* Run make fixup and move test to folder
* Fix quality
* Apply suggestions from code review
* Apply suggestions from code review
Co-authored-by: Suraj Patil <surajp815@gmail.com>
* Apply suggestions from code review
* Apply suggestions from code review
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Apply some more suggestions from code review
* Improve docstrings
* Overwrite failing test
* Improve comment in example scripts
* Fix rebase
* Add TAPEX to Auto mapping
* Add TAPEX to auto config mappings
* Put TAPEX higher than BART in auto mapping
* Add TAPEX to doc tests
Co-authored-by: Niels Rogge <nielsrogge@Nielss-MBP.localdomain>
Co-authored-by: SivilTaram <qianlxc@outlook.com>
Co-authored-by: Niels Rogge <nielsrogge@nielss-mbp.home>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
Co-authored-by: Niels Rogge <nielsrogge@Nielss-MacBook-Pro.local>
* Update README.md Support Image
Updates the Support image linking to our EAP page (to give it a refresh + help avoid image fatigue).
Slack thread checking in with #open-source-internal on this update (https://huggingface.slack.com/archives/C021H1P1HKR/p1648838903316709)
* Compressed Updated Support image
* Improves Support Image Logo + Height
Updated the image based on logo + size feedback. Big thanks to Bibi for making quick edits to this image.
* Add an inputs vector to the metric-calculation method
* Include inputs for evaluation metrics, with backwards compatibility
* Prevent the inputs from creating OOM issues, and add documentation details
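A minimal sketch of the intended use, assuming an `include_inputs_for_metrics`-style training argument is what gates the new behaviour:

```python
from transformers import TrainingArguments

# Backwards compatible: inputs are only forwarded when the flag is set.
args = TrainingArguments(output_dir="out", include_inputs_for_metrics=True)

def compute_metrics(eval_pred):
    # With the flag on, eval_pred carries inputs alongside predictions/labels.
    predictions = eval_pred.predictions
    labels = eval_pred.label_ids
    inputs = eval_pred.inputs
    return {"num_eval_examples": len(labels)}
```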
* Update style and code documentation
* Fix style formatting issues
* Update file formatting with `make style`