minor docs grammar fixes (#6889)

This commit is contained in:
Harry Wang 2020-09-02 06:45:19 -04:00 committed by GitHub
parent 8abd7f69fc
commit ee1bff06f8
2 changed files with 2 additions and 2 deletions


@@ -128,7 +128,7 @@ The encoded versions have different lengths:
     >>> len(encoded_sequence_a), len(encoded_sequence_b)
     (8, 19)
-Therefore, we can't be put then together in a same tensor as-is. The first sequence needs to be padded up to the length
+Therefore, we can't put them together in the same tensor as-is. The first sequence needs to be padded up to the length
 of the second one, or the second one needs to be truncated down to the length of the first one.
 In the first case, the list of IDs will be extended by the padding indices. We can pass a list to the tokenizer and ask
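The padding the corrected sentence describes can be sketched in plain Python. This is a minimal illustration only, not the tokenizer's actual implementation; `pad_to_same_length` and `pad_id` are hypothetical names, and the two lists stand in for the length-8 and length-19 encodings from the example above.

```python
# Minimal sketch: extend the shorter list of IDs with padding indices so
# both sequences fit in one rectangular tensor. `pad_id` is a stand-in for
# whatever padding token ID the tokenizer actually uses.
def pad_to_same_length(seq_a, seq_b, pad_id=0):
    target = max(len(seq_a), len(seq_b))
    pad = lambda seq: seq + [pad_id] * (target - len(seq))
    return pad(seq_a), pad(seq_b)

encoded_sequence_a = list(range(8))   # stand-in for an encoding of length 8
encoded_sequence_b = list(range(19))  # stand-in for an encoding of length 19
padded_a, padded_b = pad_to_same_length(encoded_sequence_a, encoded_sequence_b)
```

After padding, both lists have length 19 and could be stacked into a single tensor; truncation would instead cut the longer one down to length 8.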


@@ -214,7 +214,7 @@ Using the model
 Once your input has been preprocessed by the tokenizer, you can send it directly to the model. As we mentioned, it will
 contain all the relevant information the model needs. If you're using a TensorFlow model, you can pass the
-dictionary keys directly to tensor, for a PyTorch model, you need to unpack the dictionary by adding :obj:`**`.
+dictionary keys directly to tensors, for a PyTorch model, you need to unpack the dictionary by adding :obj:`**`.
 .. code-block::
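The ``**`` unpacking the second hunk describes can be sketched without loading a real model. This is an illustration only: ``fake_model`` is a hypothetical stand-in for a PyTorch model's forward call, and the IDs in ``model_inputs`` are made up; what matters is that ``**`` turns the tokenizer's dictionary into keyword arguments.

```python
# Stand-in for a PyTorch model call; real models accept keyword arguments
# like input_ids and attention_mask, which is why ** unpacking works.
def fake_model(input_ids=None, attention_mask=None):
    return {"logits": [0.0] * len(input_ids)}

# Shape of the dictionary a tokenizer returns (values are illustrative).
model_inputs = {"input_ids": [101, 2023, 102], "attention_mask": [1, 1, 1]}

# ** unpacks the dict, equivalent to:
# fake_model(input_ids=[101, 2023, 102], attention_mask=[1, 1, 1])
outputs = fake_model(**model_inputs)
```

A TensorFlow model, by contrast, accepts the dictionary itself as input, so no unpacking is needed there.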