mirror of
https://github.com/huggingface/transformers.git
minor docs grammar fixes (#6889)
This commit is contained in:
parent 8abd7f69fc
commit ee1bff06f8
@@ -128,7 +128,7 @@ The encoded versions have different lengths:
     >>> len(encoded_sequence_a), len(encoded_sequence_b)
     (8, 19)

-Therefore, we can't be put then together in a same tensor as-is. The first sequence needs to be padded up to the length
+Therefore, we can't put them together in the same tensor as-is. The first sequence needs to be padded up to the length
 of the second one, or the second one needs to be truncated down to the length of the first one.

 In the first case, the list of IDs will be extended by the padding indices. We can pass a list to the tokenizer and ask
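For context, here is a minimal sketch of the padding behaviour the corrected sentence describes, assuming the standard :obj:`AutoTokenizer` API; the checkpoint name and example sentences are illustrative, not taken from the commit:

.. code-block:: python

    from transformers import AutoTokenizer

    # Illustrative checkpoint; any tokenizer with a padding token behaves the same way.
    tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")

    sequence_a = "This is a short sequence."
    sequence_b = "This is a rather long sequence, at least longer than sequence A."

    # Encoded separately, the two sequences have different lengths and cannot share a tensor.
    len_a = len(tokenizer(sequence_a)["input_ids"])
    len_b = len(tokenizer(sequence_b)["input_ids"])

    # Passing a list and asking for padding extends the shorter encoding with padding indices,
    # so both rows end up with the same length.
    padded = tokenizer([sequence_a, sequence_b], padding=True)
    assert len(padded["input_ids"][0]) == len(padded["input_ids"][1])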
@@ -214,7 +214,7 @@ Using the model

 Once your input has been preprocessed by the tokenizer, you can send it directly to the model. As we mentioned, it will
 contain all the relevant information the model needs. If you're using a TensorFlow model, you can pass the
-dictionary keys directly to tensor, for a PyTorch model, you need to unpack the dictionary by adding :obj:`**`.
+dictionary keys directly to tensors, for a PyTorch model, you need to unpack the dictionary by adding :obj:`**`.

 .. code-block::
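Similarly, a minimal sketch of the point fixed in the second hunk, assuming the standard Auto classes; the checkpoint name is illustrative:

.. code-block:: python

    from transformers import AutoModelForSequenceClassification, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
    pt_model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased")

    # The tokenizer output already contains everything the model needs
    # (input_ids, attention_mask, ...).
    pt_batch = tokenizer("Hello, world!", return_tensors="pt")

    # A PyTorch model needs the dictionary unpacked with **.
    pt_outputs = pt_model(**pt_batch)

    # A TensorFlow model (e.g. TFAutoModelForSequenceClassification) can instead take the
    # dictionary directly: tf_outputs = tf_model(tf_batch)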