diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index 0bf7e59df22..9635ae09d73 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -162,14 +162,16 @@ You'll need **[Python 3.7](https://github.com/huggingface/transformers/blob/mai
    it with `pip uninstall transformers` before reinstalling it in editable mode with the `-e` flag.
 
-   Depending on your OS, you may need to install some external libraries as well if the `pip` installation fails.
-
-   For macOS, you will likely need [MeCab](https://taku910.github.io/mecab/) which can be installed from Homebrew:
-
+   Depending on your OS, and since the number of optional dependencies of Transformers is growing, you might get a
+   failure with this command. If that's the case, make sure to install the Deep Learning framework you are working with
+   (PyTorch, TensorFlow and/or Flax) then do:
+
    ```bash
-   brew install mecab
+   pip install -e ".[quality]"
    ```
+
+   which should be enough for most use cases.
+
 5. Develop the features on your branch.
 
    As you work on your code, you should make sure the test suite
diff --git a/docs/source/en/add_new_model.mdx b/docs/source/en/add_new_model.mdx
index 56a130f14ec..49dce27600c 100644
--- a/docs/source/en/add_new_model.mdx
+++ b/docs/source/en/add_new_model.mdx
@@ -202,7 +202,15 @@ source .env/bin/activate
 pip install -e ".[dev]"
 ```
 
-and return to the parent directory
+Depending on your OS, and since the number of optional dependencies of Transformers is growing, you might get a
+failure with this command. If that's the case, make sure to install the Deep Learning framework you are working with
+(PyTorch, TensorFlow and/or Flax) then do:
+
+```bash
+pip install -e ".[quality]"
+```
+
+which should be enough for most use cases. You can then return to the parent directory
 
 ```bash
 cd ..
diff --git a/docs/source/en/add_tensorflow_model.mdx b/docs/source/en/add_tensorflow_model.mdx
index e145a7d0018..f59b318b3f4 100644
--- a/docs/source/en/add_tensorflow_model.mdx
+++ b/docs/source/en/add_tensorflow_model.mdx
@@ -119,6 +119,13 @@ source .env/bin/activate
 pip install -e ".[dev]"
 ```
 
+Depending on your OS, and since the number of optional dependencies of Transformers is growing, you might get a
+failure with this command. If that's the case, make sure to install TensorFlow then do:
+
+```bash
+pip install -e ".[quality]"
+```
+
 **Note:** You don't need to have CUDA installed. Making the new model work on CPU is sufficient.
 
 4. Create a branch with a descriptive name from your main branch
diff --git a/docs/source/en/pr_checks.mdx b/docs/source/en/pr_checks.mdx
index 6d7ea5d4d40..1e3f62b22a4 100644
--- a/docs/source/en/pr_checks.mdx
+++ b/docs/source/en/pr_checks.mdx
@@ -24,7 +24,7 @@ When you open a pull request on 🤗 Transformers, a fair number of checks will
 In this document, we will take a stab at explaining what those various checks are and the reason behind them, as well as
 how to debug them locally if one of them fails on your PR.
 
-Note that they all require you to have a dev install:
+Note that, ideally, they require you to have a dev install:
 
 ```bash
 pip install transformers[dev]
@@ -36,7 +36,18 @@ or for an editable install:
 pip install -e .[dev]
 ```
 
-inside the Transformers repo.
+inside the Transformers repo. Since the number of optional dependencies of Transformers has grown a lot, it's possible you don't manage to get all of them.
+If the dev install fails, make sure to install the Deep Learning framework you are working with (PyTorch, TensorFlow and/or Flax) then do
+
+```bash
+pip install transformers[quality]
+```
+
+or for an editable install:
+
+```bash
+pip install -e .[quality]
+```
+
 ## Tests
diff --git a/templates/adding_a_new_model/README.md b/templates/adding_a_new_model/README.md
index c8ee0ce667d..42c423c02e2 100644
--- a/templates/adding_a_new_model/README.md
+++ b/templates/adding_a_new_model/README.md
@@ -34,6 +34,14 @@ cd transformers
 pip install -e ".[dev]"
 ```
 
+Depending on your OS, and since the number of optional dependencies of Transformers is growing, you might get a
+failure with this command. If that's the case, make sure to install the Deep Learning framework you are working with
+(PyTorch, TensorFlow and/or Flax) then do:
+
+```bash
+pip install -e ".[quality]"
+```
+
 Once the installation is done, you can use the CLI command `add-new-model` to generate your models:
 
 ```shell script
@@ -133,6 +141,14 @@ cd transformers
 pip install -e ".[dev]"
 ```
 
+Depending on your OS, and since the number of optional dependencies of Transformers is growing, you might get a
+failure with this command. If that's the case, make sure to install the Deep Learning framework you are working with
+(PyTorch, TensorFlow and/or Flax) then do:
+
+```bash
+pip install -e ".[quality]"
+```
+
 Once the installation is done, you can use the CLI command `add-new-model-like` to generate your models:
 
 ```shell script
diff --git a/utils/check_inits.py b/utils/check_inits.py
index d90db7733da..38ed362b96f 100644
--- a/utils/check_inits.py
+++ b/utils/check_inits.py
@@ -277,11 +277,20 @@ def check_submodules():
     transformers = direct_transformers_import(PATH_TO_TRANSFORMERS)
 
+    import_structure_keys = set(transformers._import_structure.keys())
+    # This contains all the base keys of the _import_structure object defined in the init, but if the user is missing
+    # some optional dependencies, they may not have all of them. Thus we read the init to find all additions and
+    # (potentially re-) add them.
+    with open(os.path.join(PATH_TO_TRANSFORMERS, "__init__.py"), "r") as f:
+        init_content = f.read()
+    import_structure_keys.update(set(re.findall(r"import_structure\[\"([^\"]*)\"\]", init_content)))
+
     module_not_registered = [
         module
         for module in get_transformers_submodules()
-        if module not in IGNORE_SUBMODULES and module not in transformers._import_structure.keys()
+        if module not in IGNORE_SUBMODULES and module not in import_structure_keys
     ]
+
     if len(module_not_registered) > 0:
         list_of_modules = "\n".join(f"- {module}" for module in module_not_registered)
         raise ValueError(
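The `utils/check_inits.py` change can be exercised on toy data. The sketch below uses a made-up init snippet and made-up submodule names (only the regex pattern is taken verbatim from the diff): it shows why scraping `__init__.py` recovers keys that a user without the optional dependencies would never see in `_import_structure` at runtime.

```python
import re

# Hypothetical __init__.py content, mimicking how keys are added to
# _import_structure inside optional-dependency guards: if torch is not
# installed, the else branch never runs and those keys are missing at runtime.
init_content = '''
_import_structure = {"models.bert": ["BertConfig"]}

try:
    import torch  # optional dependency
except ImportError:
    pass
else:
    _import_structure["models.gpt2"] = ["GPT2Model"]
    _import_structure["onnx"] = ["OnnxConfig"]
'''

# Base keys a dependency-less user would actually see at runtime.
import_structure_keys = {"models.bert"}

# Same pattern as the diff: recover every key ever assigned through
# import_structure["<key>"], whether or not the guarded branch executed.
import_structure_keys.update(re.findall(r"import_structure\[\"([^\"]*)\"\]", init_content))

# The registration check: a submodule is flagged only if it is neither
# explicitly ignored nor present in the recovered key set.
IGNORE_SUBMODULES = {"convert_graph_to_onnx"}
submodules = ["models.bert", "models.gpt2", "onnx", "models.brand_new", "convert_graph_to_onnx"]
module_not_registered = [
    m for m in submodules if m not in IGNORE_SUBMODULES and m not in import_structure_keys
]
# -> ["models.brand_new"]
```

Note the pattern is unanchored on the left, so it also matches `_import_structure["..."]`; the dict-literal line (`_import_structure = {...}`) has no `["` after the name and is correctly skipped.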