Mirror of https://github.com/huggingface/transformers.git, synced 2025-07-31 02:02:21 +06:00
Fix a typo (add a coma) (#16291)
As mentioned: https://github.com/huggingface/transformers/issues/16277
This commit is contained in:
parent 641e5f3f55
commit abf3cc7064
@@ -213,7 +213,7 @@ Here is an example of question answering using a model and a tokenizer. The proc
    with the weights stored in the checkpoint.
 2. Define a text and a few questions.
 3. Iterate over the questions and build a sequence from the text and the current question, with the correct
-   model-specific separators token type ids and attention masks.
+   model-specific separators, token type ids and attention masks.
 4. Pass this sequence through the model. This outputs a range of scores across the entire sequence tokens (question and
    text), for both the start and end positions.
 5. Compute the softmax of the result to get probabilities over the tokens.
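The last two steps in the list above can be sketched in isolation. The hand-made scores below stand in for a model's per-token start/end logits (in `transformers`, the `start_logits`/`end_logits` of a question-answering model's output); the tokens and numbers are illustrative, not from any real checkpoint.

```python
import numpy as np

def softmax(scores):
    # Subtract the max for numerical stability before exponentiating.
    e = np.exp(scores - scores.max())
    return e / e.sum()

# Toy sequence: [CLS] question [SEP] text [SEP], as built in step 3.
tokens = ["[CLS]", "what", "is", "qa", "[SEP]", "qa", "answers", "questions", "[SEP]"]

# Stand-ins for the model's scores over the sequence tokens (step 4),
# one array for start positions and one for end positions.
start_scores = np.array([0.1, 0.0, 0.0, 0.0, 0.0, 4.0, 0.5, 0.2, 0.0])
end_scores = np.array([0.1, 0.0, 0.0, 0.0, 0.0, 0.3, 0.5, 4.0, 0.0])

# Step 5: softmax the scores into probabilities over the tokens.
start_probs = softmax(start_scores)
end_probs = softmax(end_scores)

# Most likely answer span (end index is exclusive for slicing).
start = int(start_probs.argmax())
end = int(end_probs.argmax()) + 1
answer = " ".join(tokens[start:end])
print(answer)  # qa answers questions
```

In practice a decoder would also mask out spans that start in the question or end before they start; this sketch keeps only the argmax step.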