<!--Copyright 2022 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# Question answering

<Youtube id="ajPx5LwJD-I"/>

Question answering tasks return an answer given a question. There are two common forms of question answering:

- Extractive: extract the answer from the given context.
- Abstractive: generate an answer from the context that correctly answers the question.

This guide will show you how to fine-tune [DistilBERT](https://huggingface.co/distilbert-base-uncased) on the [SQuAD](https://huggingface.co/datasets/squad) dataset for extractive question answering.
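
To get a quick feel for the extractive setting before fine-tuning, here is a minimal sketch that runs an already fine-tuned checkpoint with the [`pipeline`] API (the checkpoint name is only an illustrative choice; any extractive question answering model works):

```py
>>> from transformers import pipeline

>>> # "distilbert-base-cased-distilled-squad" is one publicly available SQuAD checkpoint
>>> question_answerer = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")
>>> result = question_answerer(
...     question="What does extractive question answering do?",
...     context="Extractive question answering extracts the answer from the given context.",
... )
>>> # The pipeline returns the answer text, a confidence score, and character offsets
>>> sorted(result.keys())
['answer', 'end', 'score', 'start']
```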

<Tip>

See the question answering [task page](https://huggingface.co/tasks/question-answering) for more information about other forms of question answering and their associated models, datasets, and metrics.

</Tip>

## Load SQuAD dataset

Load the SQuAD dataset from the 🤗 Datasets library:

```py
>>> from datasets import load_dataset

>>> squad = load_dataset("squad")
```

Then take a look at an example:

```py
>>> squad["train"][0]
{'answers': {'answer_start': [515], 'text': ['Saint Bernadette Soubirous']},
 'context': 'Architecturally, the school has a Catholic character. Atop the Main Building\'s gold dome is a golden statue of the Virgin Mary. Immediately in front of the Main Building and facing it, is a copper statue of Christ with arms upraised with the legend "Venite Ad Me Omnes". Next to the Main Building is the Basilica of the Sacred Heart. Immediately behind the basilica is the Grotto, a Marian place of prayer and reflection. It is a replica of the grotto at Lourdes, France where the Virgin Mary reputedly appeared to Saint Bernadette Soubirous in 1858. At the end of the main drive (and in a direct line that connects through 3 statues and the Gold Dome), is a simple, modern stone statue of Mary.',
 'id': '5733be284776f41900661182',
 'question': 'To whom did the Virgin Mary allegedly appear in 1858 in Lourdes France?',
 'title': 'University_of_Notre_Dame'
}
```

The `answers` field is a dictionary containing the starting character position of the answer in the `context` and the `text` of the answer.
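
Because these two values define a character span, you can recover the answer text directly from the `context` (a quick sketch using the example above):

```py
>>> example = squad["train"][0]
>>> start_char = example["answers"]["answer_start"][0]
>>> end_char = start_char + len(example["answers"]["text"][0])
>>> example["context"][start_char:end_char]
'Saint Bernadette Soubirous'
```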

## Preprocess

<Youtube id="qgaM0weJHpA"/>

Load the DistilBERT tokenizer to process the `question` and `context` fields:

```py
>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
```

There are a few preprocessing steps particular to question answering that you should be aware of:

1. Some examples in a dataset may have a very long `context` that exceeds the maximum input length of the model. Truncate only the `context` by setting `truncation="only_second"`.
2. Next, map the start and end character positions of the answer to the original `context` by setting `return_offsets_mapping=True`.
3. With the mapping in hand, you can find the start and end tokens of the answer. Use the [`sequence_ids`](https://huggingface.co/docs/tokenizers/python/latest/api/reference.html#tokenizers.Encoding.sequence_ids) method to find which part of the offset corresponds to the `question` and which corresponds to the `context`.
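
To see what `sequence_ids` returns, here is a quick sketch on a single example (a sanity check, not part of the fine-tuning recipe): special tokens map to `None`, `question` tokens map to `0`, and `context` tokens map to `1`:

```py
>>> encoded = tokenizer(
...     squad["train"][0]["question"],
...     squad["train"][0]["context"],
...     truncation="only_second",
...     return_offsets_mapping=True,
... )
>>> encoded.sequence_ids()[:3]  # the first token is the special [CLS] token, then the question begins
[None, 0, 0]
```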

Here is how you can create a function to truncate and map the start and end tokens of the answer to the `context`:

```py
>>> def preprocess_function(examples):
...     questions = [q.strip() for q in examples["question"]]
...     inputs = tokenizer(
...         questions,
...         examples["context"],
...         max_length=384,
...         truncation="only_second",
...         return_offsets_mapping=True,
...         padding="max_length",
...     )

...     offset_mapping = inputs.pop("offset_mapping")
...     answers = examples["answers"]
...     start_positions = []
...     end_positions = []

...     for i, offset in enumerate(offset_mapping):
...         answer = answers[i]
...         start_char = answer["answer_start"][0]
...         end_char = answer["answer_start"][0] + len(answer["text"][0])
...         sequence_ids = inputs.sequence_ids(i)

...         # Find the start and end of the context
...         idx = 0
...         while sequence_ids[idx] != 1:
...             idx += 1
...         context_start = idx
...         while sequence_ids[idx] == 1:
...             idx += 1
...         context_end = idx - 1

...         # If the answer is not fully inside the context, label it (0, 0)
...         if offset[context_start][0] > end_char or offset[context_end][1] < start_char:
...             start_positions.append(0)
...             end_positions.append(0)
...         else:
...             # Otherwise it's the start and end token positions
...             idx = context_start
...             while idx <= context_end and offset[idx][0] <= start_char:
...                 idx += 1
...             start_positions.append(idx - 1)

...             idx = context_end
...             while idx >= context_start and offset[idx][1] >= end_char:
...                 idx -= 1
...             end_positions.append(idx + 1)

...     inputs["start_positions"] = start_positions
...     inputs["end_positions"] = end_positions
...     return inputs
```

Use the 🤗 Datasets [`map`](https://huggingface.co/docs/datasets/package_reference/main_classes.html#datasets.Dataset.map) function to apply the preprocessing function over the entire dataset. You can speed up the `map` function by setting `batched=True` to process multiple elements of the dataset at once. Remove the columns you don't need:

```py
>>> tokenized_squad = squad.map(preprocess_function, batched=True, remove_columns=squad["train"].column_names)
```
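
After mapping, the processed dataset should contain only the tokenized inputs and the label positions (a quick sanity check; the exact column order may vary):

```py
>>> tokenized_squad["train"].column_names
['input_ids', 'attention_mask', 'start_positions', 'end_positions']
```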

Use [`DefaultDataCollator`] to create a batch of examples. Unlike other data collators in 🤗 Transformers, the `DefaultDataCollator` does not apply additional preprocessing such as padding.

<frameworkcontent>
<pt>
```py
>>> from transformers import DefaultDataCollator

>>> data_collator = DefaultDataCollator()
```
</pt>
<tf>
```py
>>> from transformers import DefaultDataCollator

>>> data_collator = DefaultDataCollator(return_tensors="tf")
```
</tf>
</frameworkcontent>

## Train

<frameworkcontent>
<pt>
Load DistilBERT with [`AutoModelForQuestionAnswering`]:

```py
>>> from transformers import AutoModelForQuestionAnswering, TrainingArguments, Trainer

>>> model = AutoModelForQuestionAnswering.from_pretrained("distilbert-base-uncased")
```

<Tip>

If you aren't familiar with fine-tuning a model with the [`Trainer`], take a look at the basic tutorial [here](../training#finetune-with-trainer)!

</Tip>

At this point, only three steps remain:

1. Define your training hyperparameters in [`TrainingArguments`].
2. Pass the training arguments to [`Trainer`] along with the model, dataset, tokenizer, and data collator.
3. Call [`~Trainer.train`] to fine-tune your model.

```py
>>> training_args = TrainingArguments(
...     output_dir="./results",
...     evaluation_strategy="epoch",
...     learning_rate=2e-5,
...     per_device_train_batch_size=16,
...     per_device_eval_batch_size=16,
...     num_train_epochs=3,
...     weight_decay=0.01,
... )

>>> trainer = Trainer(
...     model=model,
...     args=training_args,
...     train_dataset=tokenized_squad["train"],
...     eval_dataset=tokenized_squad["validation"],
...     tokenizer=tokenizer,
...     data_collator=data_collator,
... )

>>> trainer.train()
```
</pt>
<tf>
To fine-tune a model in TensorFlow, start by converting your datasets to the `tf.data.Dataset` format with [`to_tf_dataset`](https://huggingface.co/docs/datasets/package_reference/main_classes.html#datasets.Dataset.to_tf_dataset). Specify the inputs and the start and end positions of an answer in `columns`, whether to shuffle the dataset order, the batch size, and the data collator:

```py
>>> tf_train_set = tokenized_squad["train"].to_tf_dataset(
...     columns=["attention_mask", "input_ids", "start_positions", "end_positions"],
...     dummy_labels=True,
...     shuffle=True,
...     batch_size=16,
...     collate_fn=data_collator,
... )

>>> tf_validation_set = tokenized_squad["validation"].to_tf_dataset(
...     columns=["attention_mask", "input_ids", "start_positions", "end_positions"],
...     dummy_labels=True,
...     shuffle=False,
...     batch_size=16,
...     collate_fn=data_collator,
... )
```

<Tip>

If you aren't familiar with fine-tuning a model with Keras, take a look at the basic tutorial [here](../training#finetune-with-keras)!

</Tip>

Set up an optimizer function, learning rate schedule, and some training hyperparameters:

```py
>>> from transformers import create_optimizer

>>> batch_size = 16
>>> num_epochs = 2
>>> total_train_steps = (len(tokenized_squad["train"]) // batch_size) * num_epochs
>>> optimizer, schedule = create_optimizer(
...     init_lr=2e-5,
...     num_warmup_steps=0,
...     num_train_steps=total_train_steps,
... )
```
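
For a concrete sense of the arithmetic: the SQuAD train split has 87,599 examples, so with a batch size of 16 and 2 epochs this works out to `(87599 // 16) * 2 = 10948` training steps over which the learning rate decays.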

Load DistilBERT with [`TFAutoModelForQuestionAnswering`]:

```py
>>> from transformers import TFAutoModelForQuestionAnswering

>>> model = TFAutoModelForQuestionAnswering.from_pretrained("distilbert-base-uncased")
```

Configure the model for training with [`compile`](https://keras.io/api/models/model_training_apis/#compile-method):

```py
>>> import tensorflow as tf

>>> model.compile(optimizer=optimizer)
```

Call [`fit`](https://keras.io/api/models/model_training_apis/#fit-method) to fine-tune the model:

```py
>>> model.fit(x=tf_train_set, validation_data=tf_validation_set, epochs=num_epochs)
```
</tf>
</frameworkcontent>

<Tip>

For a more in-depth example of how to fine-tune a model for question answering, take a look at the corresponding
[PyTorch notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/question_answering.ipynb)
or [TensorFlow notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/question_answering-tf.ipynb).

</Tip>