* Reformer classification head implementation for text classification
* Reformat the reformer model classification code
* Addressed PR review comments and implemented test cases for the reformer classification head changes
* CI/CD reformer for classification head test import error fix
* CI/CD test case implementation added ReformerForSequenceClassification to all_model_classes
* Code formatting fixed
* Normal test cases added for reformer classification head
* Fix test cases implementation for the reformer classification head
* removed token_type_id parameter from the reformer classification head
* fixed the test case for reformer classification head
* merge conflict with master fixed
* merge conflict: changed reformer classification to accept the choice_label parameter added in the latest code
* refactored the reformer classification head test code
* reformer classification head: common transformer test cases fixed
* final set of review comments: rearranged the reformer classes and added a docstring to the classification forward method
* fixed the compilation error and test cases for the reformer classification head
* Apply suggestions from code review
Remove unnecessary dup
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* fix longformer global attention output
* fix multi gpu problem
* replace -10000 with 0
* better comment
* make attention output equal local and global
* Update src/transformers/modeling_longformer.py
* Add model type check for pipelines
* Add model type check for pipelines
* rename func
* Fix the init parameters
* Fix format
* rollback unnecessary refactor
* Pytorch gpu => cpu proper device
* Memoryless XLNet warning + fixed memories during generation
* Revert "Memoryless XLNet warning + fixed memories during generation"
This reverts commit 3d3251ff
* Took the operations on the generated_sequence out of the ensure_device scope
* Ensure padding and question cannot have higher probs than context.
Signed-off-by: Morgan Funtowicz <funtowiczmo@gmail.com>
* Add bart to the list of tokenizers adding two <sep> tokens for squad_convert_example_to_feature
Signed-off-by: Morgan Funtowicz <funtowiczmo@gmail.com>
* Format.
Signed-off-by: Morgan Funtowicz <funtowiczmo@gmail.com>
* Addressing @patrickvonplaten comments.
Signed-off-by: Morgan Funtowicz <funtowiczmo@gmail.com>
* Addressing @patrickvonplaten comments about masking non-context element when generating the answer.
Signed-off-by: Morgan Funtowicz <funtowiczmo@gmail.com>
* Addressing @sshleifer comments.
Signed-off-by: Morgan Funtowicz <funtowiczmo@gmail.com>
* Make sure we mask CLS after handling impossible answers
Signed-off-by: Morgan Funtowicz <funtowiczmo@gmail.com>
* Mask in the correct vectors ...
Signed-off-by: Morgan Funtowicz <funtowiczmo@gmail.com>
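The masking commits above share one idea: after the softmax, positions outside the context (question tokens, padding) are zeroed so they can never outrank a context token, while CLS is only masked after the impossible-answer score has been handled. A minimal sketch of that technique, not the pipeline's actual code (the `p_mask` convention, `cls_index`, and the function name are assumptions):

```python
import math

def mask_non_context(start_logits, end_logits, p_mask, cls_index=0):
    """Sketch: ensure padding and question tokens cannot have higher
    probabilities than context tokens when picking an answer span.

    p_mask[i] == 1 marks a token that cannot be part of the answer
    (question or padding); CLS stays usable so an "impossible answer"
    score can still be read from it before masking.
    """
    undesired = [bool(m) for m in p_mask]
    undesired[cls_index] = False  # keep CLS available for impossible answers

    def masked_softmax(logits):
        # Push undesired positions to -10000 before the softmax ...
        shifted = [-10000.0 if u else x for u, x in zip(undesired, logits)]
        mx = max(shifted)
        exps = [math.exp(x - mx) for x in shifted]
        total = sum(exps)
        probs = [e / total for e in exps]
        # ... then zero them outright so they can never outrank context.
        return [0.0 if u else p for u, p in zip(undesired, probs)]

    return masked_softmax(start_logits), masked_softmax(end_logits)
```

With `p_mask = [1, 1, 0, 0]`, the best start/end positions are always drawn from the last two (context) tokens, regardless of how large the question-token logits are.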
* Add B-/I- tag handling to grouping
* Add fix to include separate entity as last token
* move last_idx definition outside loop
* Use first entity in entity group as reference for entity type
* Add test cases
* Take out extra class accidentally added
* Return tf ner grouped test to original
* Take out redundant last entity
* Get last_idx safely
Co-authored-by: ColleterVi <36503688+ColleterVi@users.noreply.github.com>
* Fix first entity comment
* Create separate functions for group_sub_entities and group_entities (splitting call method to testable functions)
* Take out unnecessary last_idx
* Remove additional forward pass test
* Move token classification basic tests to separate class
* Move token classification basic tests back to MonoColumnInputTestCase
* Move base NER tests to NerPipelineTests
* Take out unused kwargs
* Add back mandatory_keys argument
* Add unitary tests for group_entities in _test_ner_pipeline
* Fix last entity handling
* Fix grouping function used
* Add typing to group_sub_entities and group_entities
Co-authored-by: ColleterVi <36503688+ColleterVi@users.noreply.github.com>
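The grouping commits above can be sketched as the two functions the refactor names, group_sub_entities and group_entities: a B- tag or a change of entity type starts a new group, consecutive I- tokens of the same type extend it, the first entity's tag is the reference type for the group, and a lone entity on the last token still gets emitted. This is a simplified illustration of the technique, not the pipeline's exact code; the dict keys are assumptions:

```python
def group_sub_entities(entities):
    """Merge a run of sub-token entities into one group; the first
    entity's tag is the reference entity type for the whole group."""
    entity_type = entities[0]["entity"].split("-")[-1]  # strip B-/I- prefix
    scores = [e["score"] for e in entities]
    return {
        "entity_group": entity_type,
        "score": sum(scores) / len(scores),
        "word": " ".join(e["word"] for e in entities),
    }

def group_entities(entities):
    """Group adjacent tokens into entities: a B- tag or a type change
    starts a new group; same-type I- tags on consecutive indices extend it."""
    groups, current = [], []
    for ent in entities:
        if not current:
            current.append(ent)
            continue
        prev = current[-1]
        same_type = ent["entity"].split("-")[-1] == prev["entity"].split("-")[-1]
        adjacent = ent["index"] == prev["index"] + 1
        starts_new = ent["entity"].startswith("B-")
        if same_type and adjacent and not starts_new:
            current.append(ent)
        else:
            groups.append(group_sub_entities(current))
            current = [ent]
    if current:  # include a separate entity appearing as the last token
        groups.append(group_sub_entities(current))
    return groups
```

For example, `B-PER I-PER` on adjacent indices collapses into one `PER` group, while a later `B-LOC` token starts its own group even if it is the final token of the sequence.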