* [gptj] support older pytorch version
* contributor
* make copies
---------
Co-authored-by: Michael Wyatt <michaelwyatt@microsoft.com>
Co-authored-by: Nick Hill <nickhill@us.ibm.com>
* Chunkable classification pipeline
The TokenClassificationPipeline can now process sequences longer than 512 tokens, regardless of the framework, model, or tokenizer. We just have to pass process_all=True and, optionally, a stride number. The behavior remains the same if these optional parameters are not passed. For the overlapping parts when stride is above 0, we keep only the maximum score for each overlapped token across all chunks that contain it.
* Update token_classification.py
* update to latest black format
* format correction
* Update comments
* Update src/transformers/pipelines/token_classification.py
Co-authored-by: Nicolas Patry <patry.nicolas@protonmail.com>
* Update token_classification.py
Correct spacing; remove process_all and keep only stride. If stride is provided, the pipeline is applied to the whole text.
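A minimal usage sketch of the resulting API, assuming a fast tokenizer; the checkpoint name is illustrative and any token-classification model works:

```python
from transformers import pipeline

# Passing `stride` makes the pipeline chunk inputs longer than the
# model's maximum length and merge the scores of overlapping tokens.
token_classifier = pipeline(
    "token-classification",
    model="dbmdz/bert-large-cased-finetuned-conll03-english",
    aggregation_strategy="first",
    stride=128,
)
very_long_text = "Hugging Face is based in New York City. " * 200
entities = token_classifier(very_long_text)  # well over 512 tokens
```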
* Update token_classification.py
* Update chunk aggregation
Update the chunk aggregation strategy so that it follows the entity aggregation strategy.
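An illustrative sketch (not the repo code) of the overlap rule described above: for a token covered by several chunks, keep only the scores from the chunk in which that token reaches its highest score.

```python
import numpy as np

def aggregate_overlapping_scores(chunks):
    # Map each token's character offset to its best (max score, scores) pair.
    best = {}
    for scores, offsets in chunks:  # scores: (num_tokens, num_labels)
        for vec, offset in zip(scores, offsets):
            if offset not in best or vec.max() > best[offset][0]:
                best[offset] = (vec.max(), vec)
    return np.stack([best[offset][1] for offset in sorted(best)])

# Two chunks overlapping on the token spanning characters 6-9.
chunk_a = (np.array([[0.9, 0.1], [0.6, 0.4]]), [(0, 5), (6, 9)])
chunk_b = (np.array([[0.7, 0.3], [0.2, 0.8]]), [(6, 9), (10, 14)])
merged = aggregate_overlapping_scores([chunk_a, chunk_b])  # 3 tokens kept
```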
* Update token_classification.py
* Update token_classification.py
Remove unnecessary pop from outputs dict
* Update token_classification.py
* Update src/transformers/pipelines/token_classification.py
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* add chunking tests
* correct formatting
* correct model id for test chunking
* update scores with nested simplify
* Update test_pipelines_token_classification.py
* update model to a tiny one
* Update test_pipelines_token_classification.py
* Adding smaller test for chunking.
* Fixup
* Update token_classification.py
* Update src/transformers/pipelines/token_classification.py
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Update src/transformers/pipelines/token_classification.py
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
---------
Co-authored-by: Nicolas Patry <patry.nicolas@protonmail.com>
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
Fixed a bug in MarkupLMTokenizer so that xpath_sub_list is calculated correctly; previously xpath_sub_list was the same as xpath_tags_list.
Co-authored-by: dusejat <dusejat@amazon.com>
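A small sketch of the intended behavior, with a hypothetical helper name, showing why the two lists must differ:

```python
# Illustrative only (not the tokenizer's exact code): an xpath is split
# into a list of tag names and a parallel list of subscripts.
def split_xpath(xpath):
    xpath_tags_list, xpath_subs_list = [], []
    for unit in xpath.split("/"):
        if not unit:
            continue
        if "[" in unit:
            tag, sub = unit.rstrip("]").split("[")
            xpath_tags_list.append(tag)
            xpath_subs_list.append(int(sub))
        else:
            xpath_tags_list.append(unit)
            xpath_subs_list.append(0)
    return xpath_tags_list, xpath_subs_list

tags, subs = split_xpath("/html/body/div/li[1]")
# tags == ["html", "body", "div", "li"]; subs == [0, 0, 0, 1]
# The bug made both returned lists equal to the tags list.
```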
* Revert "[GPT-J] add deprecation warning (#21869)"
This reverts commit fb76994c41.
* Fix position embeddings for GPT-J and CodeGen
* Address review comments from @gante
* Fix "Copied from" comment referencing wrong function
* Fix copy/paste mistake
* Fix training path
* Hopefully make torch.fx happy
* Move position_ids long cast
* Revert "Hopefully make torch.fx happy"
This reverts commit e41a6f4cad3ff441124c7457b19cfb630d4ca025.
* Changes to help with torch.fx tracing
* Linter fix
* Correct position_ids tensor type hint
* Work-around torch.fx tracing issue
* Get the changes to work with torch.fx
* Address review comment from @michaelbenayoun
* Another small adjustment
* Add explanatory comment; small code tidyup
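A simplified sketch of the core idea behind the fix, assuming the usual GPT-J/CodeGen decoding setup: position_ids are derived from the past key/value length so the rotary position embeddings stay aligned during incremental decoding.

```python
import torch

def make_position_ids(input_ids, past_length=0):
    seq_length = input_ids.shape[1]
    position_ids = torch.arange(
        past_length,
        past_length + seq_length,
        dtype=torch.long,  # cast to long up front
        device=input_ids.device,
    )
    return position_ids.unsqueeze(0)  # shape (1, seq_length)

# Decoding 4 new tokens after an 8-token cache yields positions 8..11.
position_ids = make_position_ids(torch.ones(1, 4, dtype=torch.long), past_length=8)
# tensor([[ 8,  9, 10, 11]])
```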
* Add a low_cpu_mem_usage option to the run_clm.py example, which benefits LLM loading
Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
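A sketch of what the option enables inside the example script (checkpoint name illustrative): with low_cpu_mem_usage=True the model is materialized with roughly one copy of the weights in CPU memory instead of two, which helps when loading large language models.

```python
from transformers import AutoModelForCausalLM

# Weights are loaded into the model as they are read from the checkpoint,
# avoiding a second full-size copy in CPU memory.
model = AutoModelForCausalLM.from_pretrained("gpt2", low_cpu_mem_usage=True)
```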
* Update all the examples and the README under language-modeling
Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
---------
Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
* time to say goodbye, torch 1.7 and 1.8
* clean up torch_int_div
* clean up is_torch_less_than_1_8-9
* update
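For example, with torch < 1.8 gone, the torch_int_div compatibility helper can be replaced by the rounding-mode division that torch 1.8 introduced:

```python
import torch

a = torch.tensor([7, 8, 9])
b = torch.tensor([2, 2, 2])
# Native floor division on integer tensors, available since torch 1.8.
result = torch.div(a, b, rounding_mode="floor")  # tensor([3, 4, 4])
```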
---------
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
* Make sure CVT can be trained using mixed precision
* Add test for keras-fit with mixed-precision
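A minimal sketch of the scenario being tested: enable the Keras mixed-precision policy globally before building the model, then train with fit() as usual; the fix ensures the CvT layers cast dtypes correctly under this policy.

```python
import tensorflow as tf

# All layers created after this point compute in float16 with float32
# variables; a TF CvT model built afterwards can be trained via model.fit().
tf.keras.mixed_precision.set_global_policy("mixed_float16")
```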
* Update tests/models/cvt/test_modeling_tf_cvt.py
Co-authored-by: Matt <Rocketknight1@users.noreply.github.com>
---------
Co-authored-by: gcuder <Gerald.Cuder@iacapps.com>
Co-authored-by: Matt <Rocketknight1@users.noreply.github.com>
* Add kernel size to NATTEN's QK arguments.
The new NATTEN 0.14.5 supports PyTorch 2.0, but it also adds an additional
argument to the QK operation to allow optional RPBs. This caused the
NATTEN tests to fail. This commit adds NATTEN back to CircleCI and passes
the new argument to get it working again.
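A hedged sketch of the call-site change, with tensor shapes as illustrative assumptions: the QK op now takes the kernel size explicitly.

```python
import torch
from natten.functional import natten2dqkrpb

# Illustrative shapes for 2D neighborhood attention.
batch, heads, height, width, dim = 1, 4, 14, 14, 32
kernel_size, dilation = 7, 1
query = torch.randn(batch, heads, height, width, dim)
key = torch.randn(batch, heads, height, width, dim)
rpb = torch.randn(heads, 2 * kernel_size - 1, 2 * kernel_size - 1)
# NATTEN 0.14.5 signature: kernel size is passed explicitly.
attn = natten2dqkrpb(query, key, rpb, kernel_size, dilation)
```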
* Force NATTEN >= 0.14.5
* Fix AutoTP in DeepSpeed not working for BLOOM
Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
* Add a method in BloomModel to build the alibi tensor
Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
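A simplified sketch of ALiBi bias construction in the spirit of that method (the real code also handles head counts that are not powers of two and reshapes for tensor-parallel use):

```python
import torch

def build_alibi_tensor(attention_mask, num_heads, dtype=torch.float32):
    # Each attention head gets a geometric slope applied to the
    # cumulative token positions.
    base = 2 ** (-8.0 / num_heads)
    slopes = torch.tensor([base ** (i + 1) for i in range(num_heads)])
    positions = (attention_mask.cumsum(dim=-1) - 1) * attention_mask
    alibi = slopes[None, :, None] * positions[:, None, :]
    return alibi.to(dtype)  # shape (batch, num_heads, seq_length)

alibi = build_alibi_tensor(torch.ones(1, 6), num_heads=4)
```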
---------
Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>