<!--Copyright 2020 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Models

The base classes [`PreTrainedModel`], [`TFPreTrainedModel`], and
[`FlaxPreTrainedModel`] implement the common methods for loading/saving a model either from a local
file or directory, or from a pretrained model configuration provided by the library (downloaded from HuggingFace's AWS
S3 repository).

[`PreTrainedModel`] and [`TFPreTrainedModel`] also implement a few methods which
are common among all the models to:

- resize the input token embeddings when new tokens are added to the vocabulary
- prune the attention heads of the model (both are sketched below).
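
A minimal PyTorch sketch of both operations (the checkpoint, added tokens, and head indices are placeholders):

```python
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

# Add new tokens to the tokenizer, then resize the model's input embeddings to match.
tokenizer.add_tokens(["<new_token_1>", "<new_token_2>"])
model.resize_token_embeddings(len(tokenizer))

# Prune attention heads, given as {layer index: [head indices to remove in that layer]}.
model.prune_heads({0: [0, 2], 2: [5]})
```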

The other methods that are common to each model are defined in [`~modeling_utils.ModuleUtilsMixin`]
(for the PyTorch models) and [`~modeling_tf_utils.TFModelUtilsMixin`] (for the TensorFlow models) or,
for text generation, [`~generation_utils.GenerationMixin`] (for the PyTorch models),
[`~generation_tf_utils.TFGenerationMixin`] (for the TensorFlow models) and
[`~generation_flax_utils.FlaxGenerationMixin`] (for the Flax/JAX models).
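
For example, every generation-capable PyTorch model inherits [`~generation_utils.GenerationMixin.generate`]. A
minimal sketch (the `t5-small` checkpoint and the prompt are placeholders):

```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# generate() is inherited from GenerationMixin.
inputs = tokenizer("translate English to German: The house is wonderful.", return_tensors="pt")
output_ids = model.generate(**inputs, max_length=40)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```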

## PreTrainedModel

[[autodoc]] PreTrainedModel
    - push_to_hub
    - all

<a id='from_pretrained-torch-dtype'></a>

### Model Instantiation dtype

Under PyTorch, a model normally gets instantiated in `torch.float32` format. This can be an issue if one tries to
load a model whose weights are in fp16, since it'd require twice as much memory. To overcome this limitation, you can
either explicitly pass the desired `dtype` using the `torch_dtype` argument:

```python
import torch
from transformers import T5ForConditionalGeneration

model = T5ForConditionalGeneration.from_pretrained("t5", torch_dtype=torch.float16)
```

or, if you want the model to always load in the optimal memory pattern, you can use the special value `"auto"`,
and then `dtype` will be automatically derived from the model's weights:

```python
model = T5ForConditionalGeneration.from_pretrained("t5", torch_dtype="auto")
```

Models instantiated from scratch can also be told which `dtype` to use with:

```python
from transformers import AutoModel, T5Config

config = T5Config.from_pretrained("t5")
model = AutoModel.from_config(config, torch_dtype=torch.float16)
```

Due to PyTorch design, this functionality is only available for floating dtypes.
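
You can verify which `dtype` a loaded model actually ended up in through its `dtype` attribute; a small sketch
continuing the examples above:

```python
model = T5ForConditionalGeneration.from_pretrained("t5", torch_dtype="auto")
print(model.dtype)  # the dtype derived from the checkpoint's weights, e.g. torch.float32
```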

## ModuleUtilsMixin

[[autodoc]] modeling_utils.ModuleUtilsMixin

## TFPreTrainedModel

[[autodoc]] TFPreTrainedModel
    - push_to_hub
    - all

## TFModelUtilsMixin

[[autodoc]] modeling_tf_utils.TFModelUtilsMixin

## FlaxPreTrainedModel

[[autodoc]] FlaxPreTrainedModel
    - push_to_hub
    - all

## Pushing to the Hub

[[autodoc]] utils.PushToHubMixin
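
For example, a minimal sketch (assumes you are already authenticated, e.g. via `huggingface-cli login`;
`my-finetuned-bert` is a placeholder repository name):

```python
from transformers import AutoModel

model = AutoModel.from_pretrained("bert-base-uncased")

# Uploads the model's weights and configuration to a repository under your namespace on the Hub.
model.push_to_hub("my-finetuned-bert")
```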