mirror of
https://github.com/huggingface/transformers.git
synced 2025-07-31 02:02:21 +06:00

Latest commit: BLOOM support (squashed PR). Adds the Bloom architecture (BloomForCausalLM), configuration, tokenizer, checkpoint conversion script, alibi positional biases, gradient checkpointing, accelerate and DeepSpeed compatibility, documentation (docs/source/en/model_doc/bloom.mdx), and slow/fast tests. Co-authored-by: thomasw21, Sylvain Gugger, Stas Bekman, Niklas Muennighoff, Nicolas Patry, Patrick von Platen, Thomas Wolf, justheuristic, sIncerass, LysandreJik, patrickvonplaten.
427 lines
12 KiB
YAML
- sections:
  - local: index
    title: 🤗 Transformers
  - local: quicktour
    title: Quick tour
  - local: installation
    title: Installation
  title: Get started
- sections:
  - local: pipeline_tutorial
    title: Pipelines for inference
  - local: autoclass_tutorial
    title: Load pretrained instances with an AutoClass
  - local: preprocessing
    title: Preprocess
  - local: training
    title: Fine-tune a pretrained model
  - local: accelerate
    title: Distributed training with 🤗 Accelerate
  - local: model_sharing
    title: Share a model
  title: Tutorials
- sections:
  - local: fast_tokenizers
    title: "Use tokenizers from 🤗 Tokenizers"
  - local: create_a_model
    title: Create a custom architecture
  - local: custom_models
    title: Sharing custom models
  - sections:
    - local: tasks/sequence_classification
      title: Text classification
    - local: tasks/token_classification
      title: Token classification
    - local: tasks/question_answering
      title: Question answering
    - local: tasks/language_modeling
      title: Language modeling
    - local: tasks/translation
      title: Translation
    - local: tasks/summarization
      title: Summarization
    - local: tasks/multiple_choice
      title: Multiple choice
    - local: tasks/audio_classification
      title: Audio classification
    - local: tasks/asr
      title: Automatic speech recognition
    - local: tasks/image_classification
      title: Image classification
    title: Fine-tune for downstream tasks
  - local: run_scripts
    title: Train with a script
  - local: sagemaker
    title: Run training on Amazon SageMaker
  - local: multilingual
    title: Inference for multilingual models
  - local: converting_tensorflow_models
    title: Converting TensorFlow Checkpoints
  - local: serialization
    title: Export 🤗 Transformers models
  - local: performance
    title: Performance and scalability
  - local: big_models
    title: Instantiating a big model
  - local: benchmarks
    title: Benchmarks
  - local: migration
    title: Migrating from previous packages
  - local: troubleshooting
    title: Troubleshoot
  - local: debugging
    title: Debugging
  - local: notebooks
    title: "🤗 Transformers Notebooks"
  - local: community
    title: Community
  - local: contributing
    title: How to contribute to transformers?
  - local: add_new_model
    title: "How to add a model to 🤗 Transformers?"
  - local: add_new_pipeline
    title: "How to add a pipeline to 🤗 Transformers?"
  - local: perf_train_gpu_one
    title: Training on one GPU
  - local: perf_train_gpu_many
    title: Training on many GPUs
  - local: perf_train_cpu
    title: Training on CPU
  - local: perf_hardware
    title: Custom hardware for training
  - local: testing
    title: Testing
  - local: pr_checks
    title: Checks on a Pull Request
  title: How-to guides
- sections:
  - local: philosophy
    title: Philosophy
  - local: glossary
    title: Glossary
  - local: task_summary
    title: Summary of the tasks
  - local: model_summary
    title: Summary of the models
  - local: tokenizer_summary
    title: Summary of the tokenizers
  - local: pad_truncation
    title: Padding and truncation
  - local: bertology
    title: BERTology
  - local: perplexity
    title: Perplexity of fixed-length models
  title: Conceptual guides
- sections:
  - sections:
    - local: main_classes/callback
      title: Callbacks
    - local: main_classes/configuration
      title: Configuration
    - local: main_classes/data_collator
      title: Data Collator
    - local: main_classes/keras_callbacks
      title: Keras callbacks
    - local: main_classes/logging
      title: Logging
    - local: main_classes/model
      title: Models
    - local: main_classes/text_generation
      title: Text Generation
    - local: main_classes/onnx
      title: ONNX
    - local: main_classes/optimizer_schedules
      title: Optimization
    - local: main_classes/output
      title: Model outputs
    - local: main_classes/pipelines
      title: Pipelines
    - local: main_classes/processors
      title: Processors
    - local: main_classes/tokenizer
      title: Tokenizer
    - local: main_classes/trainer
      title: Trainer
    - local: main_classes/deepspeed
      title: DeepSpeed Integration
    - local: main_classes/feature_extractor
      title: Feature Extractor
    title: Main Classes
  - sections:
    - local: model_doc/albert
      title: ALBERT
    - local: model_doc/auto
      title: Auto Classes
    - local: model_doc/bart
      title: BART
    - local: model_doc/barthez
      title: BARThez
    - local: model_doc/bartpho
      title: BARTpho
    - local: model_doc/beit
      title: BEiT
    - local: model_doc/bert
      title: BERT
    - local: model_doc/bertweet
      title: Bertweet
    - local: model_doc/bert-generation
      title: BertGeneration
    - local: model_doc/bert-japanese
      title: BertJapanese
    - local: model_doc/big_bird
      title: BigBird
    - local: model_doc/bigbird_pegasus
      title: BigBirdPegasus
    - local: model_doc/blenderbot
      title: Blenderbot
    - local: model_doc/blenderbot-small
      title: Blenderbot Small
    - local: model_doc/bloom
      title: BLOOM
    - local: model_doc/bort
      title: BORT
    - local: model_doc/byt5
      title: ByT5
    - local: model_doc/camembert
      title: CamemBERT
    - local: model_doc/canine
      title: CANINE
    - local: model_doc/convnext
      title: ConvNeXT
    - local: model_doc/clip
      title: CLIP
    - local: model_doc/convbert
      title: ConvBERT
    - local: model_doc/cpm
      title: CPM
    - local: model_doc/ctrl
      title: CTRL
    - local: model_doc/cvt
      title: CvT
    - local: model_doc/data2vec
      title: Data2Vec
    - local: model_doc/deberta
      title: DeBERTa
    - local: model_doc/deberta-v2
      title: DeBERTa-v2
    - local: model_doc/decision_transformer
      title: Decision Transformer
    - local: model_doc/deit
      title: DeiT
    - local: model_doc/detr
      title: DETR
    - local: model_doc/dialogpt
      title: DialoGPT
    - local: model_doc/distilbert
      title: DistilBERT
    - local: model_doc/dit
      title: DiT
    - local: model_doc/dpr
      title: DPR
    - local: model_doc/dpt
      title: DPT
    - local: model_doc/electra
      title: ELECTRA
    - local: model_doc/encoder-decoder
      title: Encoder Decoder Models
    - local: model_doc/flaubert
      title: FlauBERT
    - local: model_doc/flava
      title: FLAVA
    - local: model_doc/fnet
      title: FNet
    - local: model_doc/fsmt
      title: FSMT
    - local: model_doc/funnel
      title: Funnel Transformer
    - local: model_doc/glpn
      title: GLPN
    - local: model_doc/herbert
      title: HerBERT
    - local: model_doc/ibert
      title: I-BERT
    - local: model_doc/imagegpt
      title: ImageGPT
    - local: model_doc/layoutlm
      title: LayoutLM
    - local: model_doc/layoutlmv2
      title: LayoutLMV2
    - local: model_doc/layoutlmv3
      title: LayoutLMV3
    - local: model_doc/layoutxlm
      title: LayoutXLM
    - local: model_doc/led
      title: LED
    - local: model_doc/levit
      title: LeViT
    - local: model_doc/longformer
      title: Longformer
    - local: model_doc/luke
      title: LUKE
    - local: model_doc/lxmert
      title: LXMERT
    - local: model_doc/marian
      title: MarianMT
    - local: model_doc/maskformer
      title: MaskFormer
    - local: model_doc/m2m_100
      title: M2M100
    - local: model_doc/mbart
      title: MBart and MBart-50
    - local: model_doc/mctct
      title: MCTCT
    - local: model_doc/megatron-bert
      title: MegatronBERT
    - local: model_doc/megatron_gpt2
      title: MegatronGPT2
    - local: model_doc/mluke
      title: mLUKE
    - local: model_doc/mobilebert
      title: MobileBERT
    - local: model_doc/mpnet
      title: MPNet
    - local: model_doc/mt5
      title: MT5
    - local: model_doc/nystromformer
      title: Nyströmformer
    - local: model_doc/openai-gpt
      title: OpenAI GPT
    - local: model_doc/opt
      title: OPT
    - local: model_doc/gpt2
      title: OpenAI GPT2
    - local: model_doc/gptj
      title: GPT-J
    - local: model_doc/gpt_neo
      title: GPT Neo
    - local: model_doc/gpt_neox
      title: GPT NeoX
    - local: model_doc/hubert
      title: Hubert
    - local: model_doc/perceiver
      title: Perceiver
    - local: model_doc/pegasus
      title: Pegasus
    - local: model_doc/phobert
      title: PhoBERT
    - local: model_doc/plbart
      title: PLBart
    - local: model_doc/poolformer
      title: PoolFormer
    - local: model_doc/prophetnet
      title: ProphetNet
    - local: model_doc/qdqbert
      title: QDQBert
    - local: model_doc/rag
      title: RAG
    - local: model_doc/realm
      title: REALM
    - local: model_doc/reformer
      title: Reformer
    - local: model_doc/rembert
      title: RemBERT
    - local: model_doc/regnet
      title: RegNet
    - local: model_doc/resnet
      title: ResNet
    - local: model_doc/retribert
      title: RetriBERT
    - local: model_doc/roberta
      title: RoBERTa
    - local: model_doc/roformer
      title: RoFormer
    - local: model_doc/segformer
      title: SegFormer
    - local: model_doc/sew
      title: SEW
    - local: model_doc/sew-d
      title: SEW-D
    - local: model_doc/speech-encoder-decoder
      title: Speech Encoder Decoder Models
    - local: model_doc/speech_to_text
      title: Speech2Text
    - local: model_doc/speech_to_text_2
      title: Speech2Text2
    - local: model_doc/splinter
      title: Splinter
    - local: model_doc/squeezebert
      title: SqueezeBERT
    - local: model_doc/swin
      title: Swin Transformer
    - local: model_doc/t5
      title: T5
    - local: model_doc/t5v1.1
      title: T5v1.1
    - local: model_doc/tapas
      title: TAPAS
    - local: model_doc/tapex
      title: TAPEX
    - local: model_doc/trajectory_transformer
      title: Trajectory Transformer
    - local: model_doc/transfo-xl
      title: Transformer XL
    - local: model_doc/trocr
      title: TrOCR
    - local: model_doc/unispeech
      title: UniSpeech
    - local: model_doc/unispeech-sat
      title: UniSpeech-SAT
    - local: model_doc/van
      title: VAN
    - local: model_doc/vilt
      title: ViLT
    - local: model_doc/vision-encoder-decoder
      title: Vision Encoder Decoder Models
    - local: model_doc/vision-text-dual-encoder
      title: Vision Text Dual Encoder
    - local: model_doc/vit
      title: Vision Transformer (ViT)
    - local: model_doc/vit_mae
      title: ViTMAE
    - local: model_doc/visual_bert
      title: VisualBERT
    - local: model_doc/wav2vec2
      title: Wav2Vec2
    - local: model_doc/wav2vec2-conformer
      title: Wav2Vec2-Conformer
    - local: model_doc/wav2vec2_phoneme
      title: Wav2Vec2Phoneme
    - local: model_doc/wavlm
      title: WavLM
    - local: model_doc/xglm
      title: XGLM
    - local: model_doc/xlm
      title: XLM
    - local: model_doc/xlm-prophetnet
      title: XLM-ProphetNet
    - local: model_doc/xlm-roberta
      title: XLM-RoBERTa
    - local: model_doc/xlm-roberta-xl
      title: XLM-RoBERTa-XL
    - local: model_doc/xlnet
      title: XLNet
    - local: model_doc/xlsr_wav2vec2
      title: XLSR-Wav2Vec2
    - local: model_doc/xls_r
      title: XLS-R
    - local: model_doc/yolos
      title: YOLOS
    - local: model_doc/yoso
      title: YOSO
    title: Models
  - sections:
    - local: internal/modeling_utils
      title: Custom Layers and Utilities
    - local: internal/pipelines_utils
      title: Utilities for pipelines
    - local: internal/tokenization_utils
      title: Utilities for Tokenizers
    - local: internal/trainer_utils
      title: Utilities for Trainer
    - local: internal/generation_utils
      title: Utilities for Generation
    - local: internal/file_utils
      title: General Utilities
    title: Internal Helpers
  title: API
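The table of contents above is a nested structure: each entry carries a `title`, and holds either a `local` doc path (a leaf page) or a further `sections` list (a group). A minimal Python sketch of walking that structure, using a hardcoded fragment of the TOC as data (the helper `collect_locals` is hypothetical, not part of the repo):

```python
# A small hand-written fragment mirroring the TOC's shape: leaf entries
# have "local" + "title", group entries have "sections" + "title".
toc = [
    {
        "title": "Get started",
        "sections": [
            {"local": "index", "title": "🤗 Transformers"},
            {"local": "quicktour", "title": "Quick tour"},
            {"local": "installation", "title": "Installation"},
        ],
    },
]

def collect_locals(entries):
    """Recursively gather every `local` doc path in the TOC tree."""
    paths = []
    for entry in entries:
        if "local" in entry:
            paths.append(entry["local"])
        # Recurse into nested groups; leaves have no "sections" key.
        paths.extend(collect_locals(entry.get("sections", [])))
    return paths

print(collect_locals(toc))  # ['index', 'quicktour', 'installation']
```

The same traversal would recover all doc paths from the full file once it is loaded with a YAML parser such as PyYAML's `yaml.safe_load`.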