Generating the documentation

To generate the documentation, you first have to build it. Several packages are necessary to build the doc; you can install them with the following command, at the root of the code repository:

pip install -e ".[docs]"

NOTE

You only need to generate the documentation to inspect it locally (for instance, if you're planning changes and want to check how they look before committing). You don't have to commit the built documentation.


Packages installed

Here's an overview of all the packages installed. If you ran the previous command, which installs all of them at once, you do not need to run the following commands.

Building it requires the package sphinx that you can install using:

pip install -U sphinx

You also need the custom theme from Read the Docs. You can install it using the following command:

pip install sphinx_rtd_theme

The third necessary package is recommonmark, which lets Sphinx accept Markdown as well as reStructuredText:

pip install recommonmark

Building the documentation

Once you have set up Sphinx, you can build the documentation by running the following command in the /docs folder:

make html

A folder called _build/html should have been created. You can now open the file _build/html/index.html in your browser.


NOTE

If you are adding/removing elements from the toc-tree or from any structural item, it is recommended to clean the build directory before rebuilding. Run the following command to clean and build:

make clean && make html

This builds the static pages, which will be available under /docs/_build/html.

Adding a new element to the tree (toc-tree)

Accepted files are reStructuredText (.rst) and Markdown (.md). Create a file with one of those extensions and put it in the source directory. You can then link it in the toc-tree by adding the filename without the extension, as in the sketch below.
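For example, a hypothetical page saved as source/my_new_page.rst (the filename and caption are made up for illustration) would be referenced in a toc-tree entry like this:

    .. toctree::
        :maxdepth: 2
        :caption: Some Section

        my_new_page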

Preview the documentation in a pull request

Once you have made your pull request, you can check what the documentation will look like after it's merged by following these steps:

  • Look at the checks at the bottom of the conversation page of your PR (you may need to click on "show all checks" to expand them).
  • Click on "details" next to the ci/circleci: build_doc check.
  • In the new window, click on the "Artifacts" tab.
  • Locate the file "docs/_build/html/index.html" (or any specific page you want to check) and click on it to get a preview.

Writing Documentation - Specification

The huggingface/transformers documentation follows the Google documentation style. It is mostly written in reStructuredText (see the Sphinx simple documentation and the Sourceforge complete documentation for references).

Adding a new tutorial

Adding a new tutorial or section is done in two steps:

  • Add a new file under ./source. This file can either be ReStructuredText (.rst) or Markdown (.md).
  • Link that file in ./source/index.rst on the correct toc-tree.

Make sure to put your new file under the proper section. It's unlikely to go in the first section (Get Started), so depending on the intended targets (beginners, more advanced users or researchers) it should go in section two, three or four.

Adding a new model

When adding a new model:

  • Create a file xxx.rst under ./source/model_doc (don't hesitate to copy an existing file as a template).
  • Link that file in ./source/index.rst on the model_doc toc-tree.
  • Write a short overview of the model:
    • Overview with paper & authors
    • Paper abstract
    • Tips and tricks and how to use it best
  • Add the classes that should be linked in the model. This generally includes the configuration, the tokenizer, and every model of that class (the base model, alongside models with additional heads), both in PyTorch and TensorFlow. The order is generally:
    • Configuration,
    • Tokenizer
    • PyTorch base model
    • PyTorch head models
    • TensorFlow base model
    • TensorFlow head models

These classes should be added using RST syntax, usually as follows:

XXXConfig
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.XXXConfig
    :members:

This will include every public method of the configuration that is documented. If for some reason you do not want a method to appear in the documentation, you can instead list explicitly which methods should be documented:

XXXTokenizer
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.XXXTokenizer
    :members: build_inputs_with_special_tokens, get_special_tokens_mask,
        create_token_type_ids_from_sequences, save_vocabulary
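Model classes follow the same pattern. As a minimal sketch for a hypothetical model (XXXModel and TFXXXModel are placeholders, not real classes), :members: lists the methods to document, typically forward for PyTorch models and call for TensorFlow models:

XXXModel
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.XXXModel
    :members: forward

TFXXXModel
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.TFXXXModel
    :members: call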

Writing source documentation

Values that should be put in code should either be surrounded by double backticks: ``like so`` or be written as an object using the :obj: syntax: :obj:`like so`. Note that argument names and objects like True, None or any strings should usually be put in code.

When mentioning a class, it is recommended to use the :class: syntax as the mentioned class will be automatically linked by Sphinx: :class:`~transformers.XXXClass`.

When mentioning a function, it is recommended to use the :func: syntax as the mentioned function will be automatically linked by Sphinx: :func:`~transformers.function`.

When mentioning a method, it is recommended to use the :meth: syntax as the mentioned method will be automatically linked by Sphinx: :meth:`~transformers.XXXClass.method`.

Links should be written as follows (note the double underscore at the end): `text for the link <./local-link-or-global-link#loc>`__
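Putting these roles together, an illustrative docstring sentence (the class, method, and link target below are placeholders) could read:

    See :meth:`~transformers.XXXClass.method` on :class:`~transformers.XXXClass`; passing :obj:`None` keeps the default behavior. More details in `this section <./some-page.html#some-anchor>`__.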

Defining arguments in a method

Arguments should be defined with the Args: prefix, followed by a line return and an indentation. The argument should be followed by its type, with its shape if it is a tensor, and a line return. Another indentation is necessary before writing the description of the argument.

Here's an example showcasing everything so far:

    Args:
        input_ids (:obj:`torch.LongTensor` of shape :obj:`(batch_size, sequence_length)`):
            Indices of input sequence tokens in the vocabulary.

            Indices can be obtained using :class:`~transformers.AlbertTokenizer`.
            See :meth:`~transformers.PreTrainedTokenizer.encode` and
            :meth:`~transformers.PreTrainedTokenizer.__call__` for details.

            `What are input IDs? <../glossary.html#input-ids>`__

For optional arguments or arguments with defaults, we follow this syntax: imagine we have a function with the following signature:

def my_function(x: str = None, a: float = 1):

then its documentation should look like this:

    Args:
        x (:obj:`str`, `optional`):
            This argument controls ...
        a (:obj:`float`, `optional`, defaults to 1):
            This argument is used to ...

Note that we always omit the "defaults to :obj:`None`" when None is the default for any argument. Also note that even if the first line describing your argument type and its default gets long, you can't break it on several lines. You can however write as many lines as you want in the indented description (see the example above with input_ids).

Writing a multi-line code block

Multi-line code blocks can be useful for displaying examples. They are done like so:

Example::

    # first line of code
    # second line
    # etc

The Example string at the beginning can be replaced by anything as long as there are two colons following it.

We follow the doctest syntax for the examples to automatically test the results stay consistent with the library.
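For instance, a doctest-style example could look like the following (the snippet is illustrative and assumes the bert-base-uncased checkpoint is available):

Example::

    >>> from transformers import BertTokenizer
    >>> tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
    >>> tokenizer.tokenize("Hello world!")
    ['hello', 'world', '!']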

Writing a return block

The return block should be introduced with the Returns: prefix, followed by a line return and an indentation. The first line should be the type of the return, followed by a line return. No need to indent further for the elements building the return.

Here's an example for tuple return, comprising several objects:

    Returns:
        :obj:`tuple(torch.FloatTensor)` comprising various elements depending on the configuration (:class:`~transformers.BertConfig`) and inputs:
        loss (`optional`, returned when ``masked_lm_labels`` is provided) ``torch.FloatTensor`` of shape ``(1,)``:
            Total loss as the sum of the masked language modeling loss and the next sequence prediction (classification) loss.
        prediction_scores (:obj:`torch.FloatTensor` of shape :obj:`(batch_size, sequence_length, config.vocab_size)`)
            Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).

Here's an example for a single value return:

    Returns:
        :obj:`List[int]`: A list of integers in the range [0, 1] --- 1 for a special token, 0 for a sequence token.

Adding a new section

In ReST, section headers are designated with a line of underlining characters, e.g.:

Section 1
^^^^^^^^^^^^^^^^^^

Sub-section 1
~~~~~~~~~~~~~~~~~~

ReST allows the use of any characters to designate different section levels, as long as they are used consistently within the same document. For details, see the Sphinx documentation on sections. Because there is no standard, different documents often end up using different characters for the same levels, which makes it very difficult to know which character to use when creating a new section.

Specifically, if you get an error like the following when building the docs:

docs/source/main_classes/trainer.rst:127:Title level inconsistent:

you picked an inconsistent character for some of the levels.

But how do you know which characters you must use for an already existing level or when adding a new level?

You can use this helper script:

perl -ne '/^(.)\1{100,}/ && do { $h{$1}=++$c if !$h{$1} }; END { %h = reverse %h ; print "$_ $h{$_}\n" for sort keys %h}' docs/source/main_classes/trainer.rst
1 -
2 ~
3 ^
4 =
5 "

This tells you which characters have already been assigned for each level.

So using this particular example's output -- if your current section's header uses = as its underline character, you now know you're at level 4, and if you want to add a sub-section header you know you want " as it'd be level 5.

If you need to add yet another sub-level, pick a character that is not used already, that is, a character that does not appear in the output of that script.

Here is the full list of characters that can be used in this context: = - ` : ' " ~ ^ _ * + # < >