* Add a new example for Flax inference cases
* Update examples/flax/language-modeling/README.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update examples/flax/language-modeling/README.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update examples/flax/language-modeling/README.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update examples/flax/language-modeling/README.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update examples/flax/language-modeling/README.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update examples/flax/language-modeling/README.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* fix for "make fixup"
---------
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
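For reference, a minimal sketch of what a Flax inference example of this kind looks like; the checkpoint, prompt, and generation settings are illustrative, not necessarily the ones added to the README:

```python
from transformers import AutoTokenizer, FlaxAutoModelForCausalLM

# Illustrative checkpoint; any Flax-compatible causal LM works the same way.
tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2")
model = FlaxAutoModelForCausalLM.from_pretrained("openai-community/gpt2")

inputs = tokenizer("The Flax examples in this folder", return_tensors="np")

# Greedy decoding with the Flax generation API.
outputs = model.generate(inputs["input_ids"], max_new_tokens=20, do_sample=False)
print(tokenizer.decode(outputs.sequences[0], skip_special_tokens=True))
```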
* Pass datasets trust_remote_code
* Pass trust_remote_code in more tests
* Add trust_remote_dataset_code arg to some tests
* Revert "Temporarily pin datasets upper version to fix CI"
This reverts commit b7672826ca.
* Pass trust_remote_code in librispeech_asr_dummy docstrings
* Revert "Pin datasets<2.20.0 for examples"
This reverts commit 833fc17a3e.
* Pass trust_remote_code to all examples
* Revert "Add trust_remote_dataset_code arg to some tests" to research_projects
* Pass trust_remote_code to tests
* Pass trust_remote_code to docstrings
* Fix flax examples tests requirements
* Pass trust_remote_dataset_code arg to tests
* Replace trust_remote_dataset_code with trust_remote_code in one example
* Fix duplicate trust_remote_code
* Replace args.trust_remote_dataset_code with args.trust_remote_code
* Replace trust_remote_dataset_code with trust_remote_code in parser
* Replace trust_remote_dataset_code with trust_remote_code in dataclasses
* Replace trust_remote_dataset_code with trust_remote_code arg
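A minimal sketch of the pattern these commits converge on: each example exposes a single `trust_remote_code` flag (after the interim `trust_remote_dataset_code` name was dropped) and forwards it to `datasets.load_dataset`. The dataclass, field defaults, and dataset name below are illustrative:

```python
from dataclasses import dataclass, field

from datasets import load_dataset


@dataclass
class DataTrainingArguments:
    # Single flag, forwarded as-is to `datasets.load_dataset`.
    trust_remote_code: bool = field(
        default=False,
        metadata={"help": "Allow datasets that execute custom loading code from the Hub."},
    )


def load_example_dataset(data_args: DataTrainingArguments):
    # `trust_remote_code` is a `load_dataset` keyword; newer `datasets` releases
    # require it to be set explicitly for script-based datasets, which is why the
    # upper-version pins above could be reverted.
    return load_dataset(
        "hf-internal-testing/librispeech_asr_dummy",  # illustrative dataset
        "clean",
        split="validation",
        trust_remote_code=data_args.trust_remote_code,
    )
```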
* Remove deprecated logic and warnings
* Add back some code that seems to be important...
* Let's just add all the NLLB stuff back; removing it is a bit more involved
* Remove kwargs
* Remove more kwargs
* [DO NOT MERGE] Testing tokenizers 0.19.0rc0
* Accounting for the breaking change.
* Ruff.
* Upgrading to tokenizers `0.19` (new release with `prepend_scheme` fixed and a new surface for the BPE tiktoken bug).
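For context, `prepend_scheme` is the Metaspace pre-tokenizer option whose handling the 0.19 release fixed; a minimal illustration, assuming tokenizers >= 0.19:

```python
from tokenizers import pre_tokenizers

# Metaspace exposes `prepend_scheme` ("always" | "never" | "first") to control
# whether the replacement character is prepended to the input.
metaspace = pre_tokenizers.Metaspace(replacement="▁", prepend_scheme="first")
print(metaspace.pre_tokenize_str("Hello world"))  # pieces with their character offsets
```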
* Update legacy Repository usage in `examples/pytorch/text-classification/run_glue_no_trainer.py`
Marked for deprecation here https://huggingface.co/docs/huggingface_hub/guides/upload#legacy-upload-files-with-git-lfs
* Fix import order
* Replace all example usage of deprecated Repository
* Fix remaining repo call and rename args variable
* Revert removing creation of gitignore files and don't change research examples
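A rough sketch of the replacement pattern, using the `huggingface_hub` HTTP API instead of the deprecated `Repository` class (repo name and output directory are illustrative; assumes you are logged in to the Hub):

```python
from huggingface_hub import HfApi

# Before: clone-and-push through the deprecated `Repository` class.
# After: talk to the Hub directly over HTTP with `HfApi`.
api = HfApi()

# Create the repo once (no-op if it already exists); keep the returned id so
# namespaced names like "user/model" resolve correctly.
repo_id = api.create_repo("my-finetuned-model", exist_ok=True).repo_id

# After saving checkpoints locally, upload the whole output directory.
api.upload_folder(
    repo_id=repo_id,
    folder_path="outputs",
    commit_message="End of training",
    ignore_patterns=["step_*", "epoch_*"],  # mirrors the old .gitignore behaviour
)
```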
* Fix typos and grammar mistakes in docs and examples
* Fix typos in docstrings and comments
* Fix spelling of `tokenizer` in model tests
* Remove erroneous spaces in decorators
* Remove extra spaces in Markdown link texts
`jnp.array` is a function, not a type:
https://jax.readthedocs.io/en/latest/_autosummary/jax.numpy.array.html
so it never makes sense to use `jnp.array` in a type annotation. Presumably the intent was to write `jnp.ndarray` aka `jax.Array`.
Co-authored-by: Peter Hawkins <phawkins@google.com>
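For illustration, the kind of annotation fix this refers to (a sketch, not the exact lines changed):

```python
import jax.numpy as jnp

# Wrong: `jnp.array` is the array-constructing *function*, so annotating with it
# describes a callable, not an array.
# def scale(x: jnp.array) -> jnp.array: ...

# Right: annotate with the array *type*, `jnp.ndarray` (an alias of `jax.Array`).
def scale(x: jnp.ndarray, factor: float = 2.0) -> jnp.ndarray:
    return x * factor
```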
* Result of black 23.1
* Update target to Python 3.7
* Switch flake8 to ruff
* Configure isort
* Configure isort
* Apply isort with line limit
* Put the right black version
* Adapt black in check_copies
* Fix copies
* Fix RESOURCE_EXHAUSTED error for large datasets on Flax example scripts
* Use np.random.permutation to create batch_idx
* train_samples_idx -> training_samples_idx
* fix type hints
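A hedged sketch of the core idea behind the fix (the signature is simplified relative to the scripts' real helper): build the shuffled index array with host-side NumPy so the potentially huge index array never has to live on the accelerator.

```python
import numpy as np


def generate_batch_splits(num_samples: int, batch_size: int) -> np.ndarray:
    """Shuffle sample indices on the host and split them into equal batches.

    Keeping the permutation in host memory (NumPy) avoids materializing a huge
    index array on the accelerator, which is what triggered RESOURCE_EXHAUSTED
    for large datasets.
    """
    training_samples_idx = np.random.permutation(num_samples)
    num_full_batches = num_samples // batch_size
    # Drop the last incomplete batch so every batch has the same shape.
    batch_idx = training_samples_idx[: num_full_batches * batch_size]
    return batch_idx.reshape((num_full_batches, batch_size))
```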
* Add examples telemetry
* Alternative approach
* Add to all other examples
* Add to templates as well
* Put framework separately
* Same for TensorFlow
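The call added to each script looks roughly like the sketch below; the dataclasses stand in for each example's real argument classes, and the framework string is whatever the script targets ("pytorch", "tensorflow", or "flax"):

```python
from dataclasses import dataclass

from transformers.utils import send_example_telemetry


@dataclass
class ModelArguments:
    model_name_or_path: str = "google-t5/t5-small"  # illustrative default


@dataclass
class DataTrainingArguments:
    dataset_name: str = "glue"  # illustrative default


model_args, data_args = ModelArguments(), DataTrainingArguments()

# Added near the top of each example's main(): reports roughly the example
# name, the framework, and the model/dataset names so usage of the maintained
# examples can be tracked.
send_example_telemetry("run_glue", model_args, data_args, framework="flax")
```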
* Fix t5 shard on TPU Pods
The current script doesn't work properly on a TPU pod because the global batch is not divided correctly per host.
This pull request fixes the issue by splitting the global batch across hosts before it is sharded on each host.
* fix style
Co-authored-by: ahmed-elnaggar <ahmed.elnaggar@allianz.com>
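A hedged sketch of the idea behind the TPU-pod fix (helper name and wiring are illustrative, not the exact diff): on a pod, `jax.device_count()` counts devices on every host, so each process must first take its own slice of the global batch before `shard` splits it across the `jax.local_device_count()` devices it can actually see.

```python
import jax
import numpy as np
from flax.training.common_utils import shard


def host_local_batch(global_batch: dict, per_device_batch_size: int) -> dict:
    """Slice this host's portion out of a global batch, then shard it locally.

    `shard` only splits across the devices attached to the current process, so
    a batch sized for all `jax.device_count()` devices has to be divided across
    hosts first.
    """
    local_batch_size = per_device_batch_size * jax.local_device_count()
    start = jax.process_index() * local_batch_size
    end = start + local_batch_size
    local_batch = {k: v[start:end] for k, v in global_batch.items()}
    return shard(local_batch)


# Illustrative usage with a fake batch (works on a single host too):
per_device_bs = 2
global_bs = per_device_bs * jax.device_count()
batch = {"input_ids": np.arange(global_bs * 8).reshape(global_bs, 8)}
sharded = host_local_batch(batch, per_device_bs)  # leading dims: (local_devices, per_device_bs, ...)
```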