Deleted duplicate sentence (#26394)

This commit is contained in:
titi 2023-09-26 10:11:28 +02:00 committed by GitHub
parent a09130feee
commit a8531f3bfd


@@ -68,8 +68,6 @@ You can benefit from considerable speedups for fine-tuning and inference, especi
To overcome this, one should use Flash Attention without padding tokens in the sequence for training (e.g., by packing a dataset, i.e., concatenating sequences until the maximum sequence length is reached). An example is provided [here](https://github.com/huggingface/transformers/blob/main/examples/pytorch/language-modeling/run_clm.py#L516).
Below is the expected speedup you can get for a simple forward pass on [tiiuae/falcon-7b](https://hf.co/tiiuae/falcon-7b) with a sequence length of 4096 and various batch sizes without padding tokens:
Below is the expected speedup you can get for a simple forward pass on [tiiuae/falcon-7b](https://hf.co/tiiuae/falcon-7b) with a sequence length of 4096 and various batch sizes, without padding tokens:
<div style="text-align: center">
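The packing step mentioned above roughly follows the grouping pattern used in `run_clm.py`: tokenized samples are concatenated and then split into fixed-size blocks so every training example is exactly the target length and no padding tokens are needed. Below is a minimal sketch under that assumption; `block_size` and `tokenized_dataset` are illustrative names, not part of the original text.

```python
# Minimal sketch of sequence packing (assumed setup, mirroring the
# group_texts approach from run_clm.py): concatenate tokenized samples,
# then split them into consecutive blocks of length `block_size`.
from itertools import chain

block_size = 4096  # illustrative; matches the sequence length used in the benchmark above


def group_texts(examples):
    # Concatenate every field (input_ids, attention_mask, ...) across the batch.
    concatenated = {k: list(chain(*examples[k])) for k in examples.keys()}
    total_length = len(concatenated[list(examples.keys())[0]])
    # Drop the remainder so every block is exactly `block_size` tokens long.
    total_length = (total_length // block_size) * block_size
    # Split into consecutive, non-overlapping blocks.
    result = {
        k: [t[i : i + block_size] for i in range(0, total_length, block_size)]
        for k, t in concatenated.items()
    }
    # For causal language modeling, labels are a copy of the inputs.
    result["labels"] = result["input_ids"].copy()
    return result


# Typically applied to an already tokenized 🤗 Datasets dataset in batched mode:
# packed_dataset = tokenized_dataset.map(group_texts, batched=True)
```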