# Models

The base classes [`PreTrainedModel`], [`TFPreTrainedModel`], and [`FlaxPreTrainedModel`] implement the common methods for loading/saving a model either from a local file or directory, or from a pretrained model configuration provided by the library (downloaded from HuggingFace's AWS S3 repository).

[`PreTrainedModel`] and [`TFPreTrainedModel`] also implement a few methods which are common among all the models to:

- resize the input token embeddings when new tokens are added to the vocabulary
- prune the attention heads of the model.

The other methods that are common to each model are defined in [`~modeling_utils.ModuleUtilsMixin`] (for the PyTorch models) and [`~modeling_tf_utils.TFModelUtilsMixin`] (for the TensorFlow models) or, for text generation, [`~generation_utils.GenerationMixin`] (for the PyTorch models), [`~generation_tf_utils.TFGenerationMixin`] (for the TensorFlow models) and [`~generation_flax_utils.FlaxGenerationMixin`] (for the Flax/JAX models).

## PreTrainedModel

[[autodoc]] PreTrainedModel
    - push_to_hub
    - all

### Model Instantiation dtype

Under PyTorch a model is normally instantiated in `torch.float32` format. This can be an issue if you try to load a model whose weights are stored in fp16, since it would then require twice as much memory. To overcome this limitation, you can either explicitly pass the desired `dtype` using the `torch_dtype` argument:

```python
import torch
from transformers import T5ForConditionalGeneration

model = T5ForConditionalGeneration.from_pretrained("t5", torch_dtype=torch.float16)
```

or, if you want the model to always load in the optimal memory pattern, you can use the special value `"auto"`, and then `dtype` will be automatically derived from the model's weights:

```python
model = T5ForConditionalGeneration.from_pretrained("t5", torch_dtype="auto")
```

Models instantiated from scratch can also be told which `dtype` to use with:

```python
import torch
from transformers import AutoModel, T5Config

config = T5Config.from_pretrained("t5")
model = AutoModel.from_config(config, torch_dtype=torch.float16)
```

Due to PyTorch's design, this functionality is only available for floating-point dtypes.

## ModuleUtilsMixin

[[autodoc]] modeling_utils.ModuleUtilsMixin

## TFPreTrainedModel

[[autodoc]] TFPreTrainedModel
    - push_to_hub
    - all

## TFModelUtilsMixin

[[autodoc]] modeling_tf_utils.TFModelUtilsMixin

## FlaxPreTrainedModel

[[autodoc]] FlaxPreTrainedModel
    - push_to_hub
    - all

## Pushing to the Hub

[[autodoc]] utils.PushToHubMixin
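
The `push_to_hub` methods listed for the base classes above come from this mixin, so a loaded or fine-tuned model can be uploaded directly from Python. A minimal sketch, assuming you are already authenticated (for example via `huggingface-cli login`) and using `my-fine-tuned-bert` as a placeholder repository name:

```python
from transformers import AutoModel

# Load any pretrained checkpoint (here, a standard BERT model)
model = AutoModel.from_pretrained("bert-base-cased")

# "my-fine-tuned-bert" is a placeholder name: this creates (or updates) the
# repository under your namespace on the Hub and uploads the model weights
# and configuration.
model.push_to_hub("my-fine-tuned-bert")
```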