* added args to the pipeline
* added test
* more sensible tests
* fixup
* docs
* typo
* docs
* made changes to support named args
* fixed test
* docs update
* styles
* docs
* docs
* Add the XPU check for pipeline mode
When setting the XPU device for a pipeline, use `is_torch_xpu_available` to load IPEX and determine whether the device is available.
Signed-off-by: yuanwu <yuan.wu@intel.com>
* Don't move model to device when hf_device_map isn't None
1. Don't move the model to a device when `hf_device_map` is not None.
2. The device string may include a device index, so use `in` instead of an equality check.
Signed-off-by: yuanwu <yuan.wu@intel.com>
* Raise an error when XPU is not available
Signed-off-by: yuanwu <yuan.wu@intel.com>
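The device guard described above can be sketched as follows; the function and argument names are hypothetical placeholders, not the actual `Pipeline` code:

```python
def check_xpu_device(device: str, xpu_available: bool) -> None:
    """Hypothetical sketch of the pipeline device guard described above.

    The device string may carry an index ("xpu:0"), so membership
    ("xpu" in device) is tested instead of equality with "xpu".
    """
    if "xpu" in device and not xpu_available:
        raise ValueError(
            f"Device {device} is not available; use device='cpu' instead."
        )

# The index-suffixed form is still recognized:
check_xpu_device("xpu:0", xpu_available=True)  # passes silently
```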
* Update src/transformers/pipelines/base.py
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
* Update src/transformers/pipelines/base.py
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
* Modify the error message
Signed-off-by: yuanwu <yuan.wu@intel.com>
* Change message format.
Signed-off-by: yuanwu <yuan.wu@intel.com>
---------
Signed-off-by: yuanwu <yuan.wu@intel.com>
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
* Fix TF Regnet docstring
* Fix TF Regnet docstring
* Make a change to the PyTorch Regnet too to make sure the CI is checking it
* Add skips for TFRegnet
* Update error message for docstring checker
* Correct the implementation of the auxiliary loss of Mixtral
* correct the implementation of the auxiliary loss of Mixtral
* Implement a simpler calculation method
---------
Co-authored-by: zhangliangxu3 <zhangliangxu3@jd.com>
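For reference, the load-balancing auxiliary loss used by Mixtral-style routers can be computed in a few lines. This NumPy sketch is a simplified formulation (shapes and names are assumptions, not the repository's implementation): `num_experts * sum_i(f_i * P_i)`, normalized so that perfectly balanced routing yields 1.0:

```python
import numpy as np

def load_balancing_loss(router_logits: np.ndarray, num_experts: int, top_k: int = 2) -> float:
    """Simplified sketch of the load-balancing auxiliary loss:
    num_experts * sum_i(f_i * P_i), where f_i is the fraction of routing
    slots assigned to expert i and P_i is the mean router probability
    for expert i. Balanced routing gives a loss of 1.0."""
    # Softmax over the expert dimension; router_logits: (num_tokens, num_experts).
    shifted = router_logits - router_logits.max(axis=-1, keepdims=True)
    probs = np.exp(shifted)
    probs /= probs.sum(axis=-1, keepdims=True)
    # One-hot mask of the top-k experts selected per token.
    top_idx = np.argsort(-probs, axis=-1)[:, :top_k]
    mask = np.zeros_like(probs)
    np.put_along_axis(mask, top_idx, 1.0, axis=-1)
    f = mask.mean(axis=0) / top_k          # fraction of routing slots per expert
    p = probs.mean(axis=0)                 # mean router probability per expert
    return float(num_experts * np.sum(f * p))
```

The loss grows as routing concentrates on a few experts, which is what pushes the router toward balanced expert utilization.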
* chore(phi): Updates configuration_phi with missing keys.
* chore(phi): Adds first draft of combined modeling_phi.
* fix(phi): Fixes according to latest review.
* fix(phi): Removes pad_vocab_size_multiple to prevent inconsistencies.
* fix(phi): Fixes unit and integration tests.
* fix(phi): Ensures that everything works with microsoft/phi-1 for first integration.
* fix(phi): Fixes output of docstring generation.
* fix(phi): Fixes according to latest review.
* fix(phi): Fixes according to latest review.
* fix(tests): Re-enables Phi-1.5 test.
* fix(phi): Fixes attention overflow on PhiAttention (for Phi-2).
* fix(phi): Improves how queries and keys are upcast.
* fix(phi): Small updates on latest changes.
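The overflow fix above boils down to computing attention scores in float32 even when the model weights are float16. A minimal NumPy sketch of the upcast (shapes and names assumed; the actual PhiAttention code differs):

```python
import numpy as np

def attention_scores_fp32(query: np.ndarray, key: np.ndarray) -> np.ndarray:
    """Upcast fp16 queries/keys to fp32 before the matmul so the
    accumulated dot products cannot overflow the fp16 range (~65504)."""
    q32 = query.astype(np.float32)
    k32 = key.astype(np.float32)
    scale = 1.0 / np.sqrt(q32.shape[-1])
    return (q32 @ k32.T) * scale

# The raw fp16 dot product here would be 64 * 1600 = 102400, past the
# fp16 maximum; the fp32 accumulation keeps it finite.
q = np.full((2, 64), 40.0, dtype=np.float16)
k = np.full((2, 64), 40.0, dtype=np.float16)
scores = attention_scores_fp32(q, k)
```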
* optionally preprocess segmentation maps for mobilevit
* changed pretrained model name to that of segmentation model
* removed voc-deeplabv3 from model archive list
* added preprocess_image and preprocess_mask methods for processing images and segmentation masks respectively
* added tests for segmentation masks based on segformer feature extractor
* use crop_size instead of size
* reverting to initial model
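The split between `preprocess_image` and `preprocess_mask` above matters because masks hold integer class IDs: they must be resized with nearest-neighbor interpolation and left unnormalized, unlike images. A pure-NumPy sketch (the function name is illustrative, not MobileViT's actual API):

```python
import numpy as np

def resize_mask_nearest(mask: np.ndarray, size: tuple) -> np.ndarray:
    """Nearest-neighbor resize for a (H, W) integer segmentation mask.
    Bilinear resizing would blend class IDs into meaningless values."""
    h, w = mask.shape
    new_h, new_w = size
    rows = np.arange(new_h) * h // new_h   # source row for each output row
    cols = np.arange(new_w) * w // new_w   # source column for each output column
    return mask[rows[:, None], cols]

mask = np.array([[0, 0, 1, 1],
                 [0, 0, 1, 1],
                 [2, 2, 3, 3],
                 [2, 2, 3, 3]], dtype=np.int64)
small = resize_mask_nearest(mask, (2, 2))
```

Because only existing pixels are sampled, the resized mask can never contain a class ID absent from the original.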
While using `run_clm.py`,[^1] I noticed that some files were being added
to my global cache, not the local cache. I set the `cache_dir` parameter
for the one call to `evaluate.load()`, which partially solved the
problem. I figured that while I was fixing the one script upstream, I
might as well fix the problem in all other example scripts that I could.
There are still some files being added to my global cache, but this
appears to be a bug in `evaluate` itself. This commit at least moves
some of the files into the local cache, which is better than before.
To create this PR, I made the following regex-based transformation:
`evaluate\.load\((.*?)\)` -> `evaluate\.load\($1,
cache_dir=model_args.cache_dir\)`. After using that, I manually fixed
all modified files with `ruff` serving as useful guidance. During the
process, I removed one existing usage of the `cache_dir` parameter in a
script that did not have a corresponding `--cache_dir` argument
declared.
[^1]: I specifically used `pytorch/language-modeling/run_clm.py` from
v4.34.1 of the library. For the original code, see the following URL:
acc394c4f5/examples/pytorch/language-modeling/run_clm.py.
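The transformation described above can be reproduced with Python's `re` module. This sketch applies the substitution to a sample line; because the pattern is non-greedy and single-line, it only covers simple one-line call sites, which is why the manual cleanup pass was still needed:

```python
import re

# Pattern and replacement from the description above.
PATTERN = r"evaluate\.load\((.*?)\)"
REPLACEMENT = r"evaluate.load(\1, cache_dir=model_args.cache_dir)"

line = 'metric = evaluate.load("accuracy")'
fixed = re.sub(PATTERN, REPLACEMENT, line)
# fixed == 'metric = evaluate.load("accuracy", cache_dir=model_args.cache_dir)'
```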
* Remove ErnieConfig, ErnieMConfig check_docstrings
* Run fix_and_overwrite for ErnieConfig, ErnieMConfig
* Replace <fill_type> and <fill_docstring> in configuration_ernie, configuration_ernie_m.py with type and docstring values
---------
Co-authored-by: vignesh-raghunathan <vignesh_raghunathan@intuit.com>
* Changed the logic for renaming the staging directory when saving a checkpoint to operate only on the main process.
Added fsync to flush the write changes to disk, since os.rename is not guaranteed to be durable on its own.
* Updated styling using make fixup
* Updated check for main process to use built-in versions from trainer
Co-authored-by: Zach Mueller <muellerzr@gmail.com>
* Fixed incorrect usage of trainer main process checks
Added `with open` usage to ensure files are closed properly, as suggested in the PR review.
Moved rotate_checkpoints into the main-process logic.
* Removed `with open` since it does not work with directories; `os.open` works for directories.
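The rename-and-flush sequence described above can be sketched with the standard library; the paths are placeholders, and the trainer's main-process check is omitted. On POSIX systems, fsyncing a file descriptor opened on the parent directory forces the directory-entry update from the rename to disk:

```python
import os
import tempfile

def finalize_checkpoint(staging_dir: str, output_dir: str) -> None:
    """Rename the staging directory to its final name, then fsync the
    parent directory so the rename is flushed to disk (POSIX). Note that
    os.open works on a directory while the builtin open() does not."""
    os.rename(staging_dir, output_dir)
    parent_fd = os.open(os.path.dirname(output_dir) or ".", os.O_RDONLY)
    try:
        os.fsync(parent_fd)
    finally:
        os.close(parent_fd)

root = tempfile.mkdtemp()
staging = os.path.join(root, "tmp-checkpoint-500")
final = os.path.join(root, "checkpoint-500")
os.makedirs(staging)
finalize_checkpoint(staging, final)
```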
---------
Co-authored-by: Zach Mueller <muellerzr@gmail.com>