mirror of
https://github.com/huggingface/transformers.git
synced 2025-08-03 03:31:05 +06:00

* Fixing roberta for slow-fast tests
* WIP getting equivalence on pipelines
* slow-to-fast equivalence - working on question-answering pipeline
* optional FAISS tests
* Pipeline Q&A
* Move pipeline tests to their own test job again
* update tokenizer to add sequence id methods
* update to tokenizers 0.9.4
* set sentencepiece as optional
* clean up squad
* clean up pipelines to use sequence_ids
* style/quality
* wording
* Switch to use_fast = True by default
* update tests for use_fast at True by default
* fix rag tokenizer test
* removing protobuf from required dependencies
* fix NER test for use_fast = True by default
* fixing example tests (Q&A examples use slow tokenizers for now)
* protobuf in main deps, extras["sentencepiece"] and example deps
* fix protobuf install test
* try to fix seq2seq by switching to slow tokenizers for now
* Update src/transformers/tokenization_utils_base.py

Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
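The commit list above mentions cleaning up the Q&A pipeline to use the new sequence_ids tokenizer method. As a hedged, purely hypothetical illustration (not the library's actual implementation), sequence_ids maps each token position of an encoded pair to None for special tokens, 0 for the first sequence (e.g. the question), or 1 for the second (e.g. the context), which lets a QA pipeline select only context tokens:

```python
# Hypothetical sketch of the sequence_ids() mapping for a QA input pair.
# Token layout assumed: [CLS] what is faiss ? [SEP] faiss is a library [SEP]
def sequence_ids_example():
    # None = special token, 0 = question tokens, 1 = context tokens
    return [None, 0, 0, 0, 0, None, 1, 1, 1, 1, None]

ids = sequence_ids_example()
# A QA pipeline only looks for answer spans inside the context:
context_positions = [i for i, s in enumerate(ids) if s == 1]
print(context_positions)  # [6, 7, 8, 9]
```

This per-token mapping is why the switch to the fast (Rust-backed) tokenizers matters here: slow tokenizers do not expose this alignment information directly.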
22 lines
243 B
Plaintext
tensorboard
scikit-learn
seqeval
psutil
sacrebleu
rouge-score
tensorflow_datasets
pytorch-lightning==1.0.4
matplotlib
git-python==1.0.3
faiss-cpu
streamlit
elasticsearch
nltk
pandas
datasets
fire
pytest
conllu
sentencepiece != 0.1.92
protobuf
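Most entries in the list above are bare package names, while a few carry version specifiers (an exact pin like ==1.0.4, or an exclusion like != 0.1.92 for a single problematic release). A minimal sketch (not pip's actual parser) of splitting such requirement lines into a name and an optional specifier:

```python
import re

def parse_requirement(line: str):
    """Split a simple requirements line into (package_name, specifier_or_None)."""
    m = re.match(r"\s*([A-Za-z0-9._-]+)\s*(.*)", line)
    name, spec = m.group(1), m.group(2).strip()
    return name, spec or None

print(parse_requirement("pytorch-lightning==1.0.4"))  # ('pytorch-lightning', '==1.0.4')
print(parse_requirement("sentencepiece != 0.1.92"))   # ('sentencepiece', '!= 0.1.92')
print(parse_requirement("protobuf"))                  # ('protobuf', None)
```

Real requirement lines can also carry extras, environment markers, and URLs; a production tool would use a full PEP 508 parser rather than a regex like this.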