Fix a typo (add a comma) (#16291)

As mentioned: https://github.com/huggingface/transformers/issues/16277
This commit is contained in:
PolarisRisingWar 2022-03-21 20:10:24 +08:00 committed by GitHub
parent 641e5f3f55
commit abf3cc7064


@@ -213,7 +213,7 @@ Here is an example of question answering using a model and a tokenizer. The proc
 with the weights stored in the checkpoint.
 2. Define a text and a few questions.
 3. Iterate over the questions and build a sequence from the text and the current question, with the correct
-   model-specific separators token type ids and attention masks.
+   model-specific separators, token type ids and attention masks.
 4. Pass this sequence through the model. This outputs a range of scores across the entire sequence tokens (question and
 text), for both the start and end positions.
 5. Compute the softmax of the result to get probabilities over the tokens.
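
Steps 4 and 5 of the documented procedure can be sketched as follows. This is a minimal illustration assuming NumPy; the logit arrays are made-up stand-ins for what the model's question-answering head would actually return.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax: subtract the max before exponentiating.
    e = np.exp(x - np.max(x))
    return e / e.sum()

# Hypothetical start/end scores over a 6-token sequence, standing in for
# the per-token scores the model outputs in step 4.
start_logits = np.array([0.1, 2.5, 0.3, 0.2, 0.1, 0.4])
end_logits = np.array([0.2, 0.1, 0.3, 3.0, 0.2, 0.1])

# Step 5: softmax turns the scores into probabilities over the tokens.
start_probs = softmax(start_logits)
end_probs = softmax(end_logits)

# The most likely answer span runs from the highest-probability start
# token to the highest-probability end token.
start = int(np.argmax(start_probs))
end = int(np.argmax(end_probs))
print(start, end)  # → 1 3
```

In the real procedure these probabilities are computed over the tokenized question-plus-text sequence, and the selected token span is decoded back to a string answer.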