# 🤗 Transformers Notebooks
You can find here a list of the official notebooks provided by Hugging Face.
We would also like to list interesting content created by the community. If you wrote a notebook leveraging 🤗 Transformers and would like it to be listed here, please open a Pull Request so it can be included under the Community notebooks.
## Hugging Face's notebooks 🤗
| Notebook | Description |
|---|---|
| Getting Started Tokenizers | How to train and use your very own tokenizer |
| Getting Started Transformers | How to easily start using transformers |
| How to use Pipelines | A simple and efficient way to use state-of-the-art models on downstream tasks through transformers (a minimal example follows this table) |
| How to train a language model | Highlights all the steps to effectively train a Transformer model on custom data |
| How to generate text | How to use different decoding methods for language generation with transformers |
| How to export model to ONNX | Highlights how to export and run inference workloads through ONNX |
| How to use Benchmarks | How to benchmark models with transformers |
| Reformer | How Reformer pushes the limits of language modeling |
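To give a taste of what the pipelines notebook above covers, here is a minimal sketch of the `pipeline` API. It uses the library's default model for the task (downloaded on first use), and the example sentences are illustrative only.

```python
# Minimal sketch of the pipeline API covered in "How to use Pipelines".
# Requires: pip install transformers torch
from transformers import pipeline

# Build a sentiment-analysis pipeline; the default model is fetched on first use.
classifier = pipeline("sentiment-analysis")

# Run inference on a couple of sentences.
results = classifier([
    "Transformers notebooks are a great way to get started.",
    "This bug is really frustrating.",
])
for result in results:
    # Each result is a dict with a predicted label and a confidence score.
    print(result["label"], round(result["score"], 4))
```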
## Community notebooks
| Notebook | Description | Author |
|---|---|---|
| Train T5 in TensorFlow 2 | How to train T5 for any task using TensorFlow 2. This notebook demonstrates a question-answering task implemented in TensorFlow 2 using SQuAD | Muhammad Harris |
| Train T5 on TPU | How to train T5 on SQuAD with Transformers and nlp | Suraj Patil |
| Fine-tune T5 for Classification and Multiple Choice | How to fine-tune T5 for classification and multiple choice tasks using a text-to-text format with PyTorch Lightning | Suraj Patil |
| Fine-tune DialoGPT on New Datasets and Languages | How to fine-tune the DialoGPT model on a new dataset for open-dialog conversational chatbots | Nathan Cooper |
| Long Sequence Modeling with Reformer | How to train on sequences as long as 500,000 tokens with Reformer | Patrick von Platen |
| Fine-tune BART for Summarization | How to fine-tune BART for summarization with fastai using blurr | Wayde Gilliam |
| Fine-tune a pre-trained Transformer on anyone's tweets | How to generate tweets in the style of your favorite Twitter account by fine-tuning a GPT-2 model | Boris Dayma |
| A Step by Step Guide to Tracking Hugging Face Model Performance | A quick tutorial for training NLP models with Hugging Face and visualizing their performance with Weights & Biases | Jack Morris |
| Pretrain Longformer | How to build a "long" version of existing pretrained models | Iz Beltagy |
| Fine-tune Longformer for QA | How to fine-tune a Longformer model for the QA task | Suraj Patil |
| Evaluate Model with 🤗nlp | How to evaluate Longformer on TriviaQA with nlp | Patrick von Platen |
| Fine-tune T5 for Sentiment Span Extraction | How to fine-tune T5 for sentiment span extraction using a text-to-text format with PyTorch Lightning | Lorenzo Ampil |
| Fine-tune DistilBERT for Multiclass Classification | How to fine-tune DistilBERT for multiclass classification with PyTorch | Abhishek Kumar Mishra |
| Fine-tune BERT for Multi-label Classification | How to fine-tune BERT for multi-label classification using PyTorch | Abhishek Kumar Mishra |
| Fine-tune T5 for Summarization | How to fine-tune T5 for summarization in PyTorch and track experiments with WandB | Abhishek Kumar Mishra |
| Speed up Fine-Tuning in Transformers with Dynamic Padding / Bucketing | How to speed up fine-tuning by a factor of 2 using dynamic padding / bucketing | Michael Benesty |
| Pretrain Reformer for Masked Language Modeling | How to train a Reformer model with bi-directional self-attention layers | Patrick von Platen |
| Expand and Fine-Tune Sci-BERT | How to increase the vocabulary of a pretrained SciBERT model from AllenAI on the CORD dataset and pipeline it | Tanmay Thakur |
| Fine-tune Electra and interpret with Integrated Gradients | How to fine-tune ELECTRA for sentiment analysis and interpret predictions with Captum Integrated Gradients | Eliza Szczechla |
| Fine-tune a non-English GPT-2 model with the Trainer class | How to fine-tune a non-English GPT-2 model with the Trainer class | Philipp Schmid |
| Fine-tune a DistilBERT model for multi-label classification | How to fine-tune a DistilBERT model for the multi-label classification task | Dhaval Taunk |
| Fine-tune ALBERT for sentence-pair classification | How to fine-tune an ALBERT model or another BERT-based model for the sentence-pair classification task | Nadir El Manouzi |
| Fine-tune RoBERTa for sentiment analysis | How to fine-tune a RoBERTa model for sentiment analysis | Dhaval Taunk |
| Evaluating Question Generation Models | How accurate are the answers to questions generated by your seq2seq transformer model? | Pascal Zoleko |
| Classify text with DistilBERT and TensorFlow | How to fine-tune DistilBERT for text classification in TensorFlow | Peter Bayerle |
| Leverage BERT for Encoder-Decoder Summarization on CNN/Dailymail | How to warm-start an EncoderDecoderModel with a bert-base-uncased checkpoint for summarization on CNN/Dailymail (a minimal warm-starting sketch follows this table) | Patrick von Platen |
| Leverage RoBERTa for Encoder-Decoder Summarization on BBC XSum | How to warm-start a shared EncoderDecoderModel with a roberta-base checkpoint for summarization on BBC/XSum | Patrick von Platen |
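As a taste of what the two encoder-decoder notebooks above cover, the snippet below is a minimal sketch of warm-starting an EncoderDecoderModel from bert-base-uncased checkpoints. The training loop, hyperparameters, and dataset handling are left to the notebooks; without fine-tuning, the generated summary will be poor, so this only shows the API shape.

```python
# Minimal sketch of warm-starting an encoder-decoder model from BERT checkpoints,
# as done in the CNN/Dailymail summarization notebook. Fine-tuning is omitted.
from transformers import BertTokenizer, EncoderDecoderModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

# Initialize both encoder and decoder from bert-base-uncased; the decoder's
# cross-attention weights are randomly initialized and must be fine-tuned.
model = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "bert-base-uncased", "bert-base-uncased"
)

# Tell the model which special tokens to use for generation.
model.config.decoder_start_token_id = tokenizer.cls_token_id
model.config.eos_token_id = tokenizer.sep_token_id
model.config.pad_token_id = tokenizer.pad_token_id

# Generate from an (untrained) model to illustrate the inference call.
inputs = tokenizer("A long article to summarize goes here.", return_tensors="pt")
summary_ids = model.generate(inputs["input_ids"], max_length=32)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```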