transformers/docs/source/main_classes
Latest commit bc0d26d1de by Yossi Synett (2020-11-06 19:34:48 +01:00)
[All Seq2Seq model + CLM models that can be used with EncoderDecoder] Add cross-attention weights to outputs (#8071)
* Output cross-attention with decoder attention output
* Update src/transformers/modeling_bert.py
* add cross-attention for t5 and bart as well
* fix tests
* correct typo in docs
* add sylvains and sams comments
* correct typo
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
callback.rst Add AzureML in integrations via dedicated callback (#8062) 2020-10-27 14:21:54 -04:00
configuration.rst Models doc (#7345) 2020-09-23 13:20:45 -04:00
logging.rst Doc styling (#8067) 2020-10-26 18:26:02 -04:00
model.rst Refactoring the generate() function (#6949) 2020-11-03 16:04:22 +01:00
optimizer_schedules.rst Models doc (#7345) 2020-09-23 13:20:45 -04:00
output.rst [All Seq2Seq model + CLM models that can be used with EncoderDecoder] Add cross-attention weights to outputs (#8071) 2020-11-06 19:34:48 +01:00
pipelines.rst Doc styling (#8067) 2020-10-26 18:26:02 -04:00
processors.rst Doc styling (#8067) 2020-10-26 18:26:02 -04:00
tokenizer.rst Doc styling (#8067) 2020-10-26 18:26:02 -04:00
trainer.rst Doc styling (#8067) 2020-10-26 18:26:02 -04:00