Merge branch 'main' into random-serve-fixes

Joao Gante 2025-07-02 18:19:54 +01:00 committed by GitHub
commit 31ba925475
23 changed files with 417 additions and 282 deletions

View File

@@ -10,52 +10,39 @@ an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express o
specific language governing permissions and limitations under the License.
-->

<div style="float: right;">
    <div class="flex flex-wrap space-x-1">
        <img alt="PyTorch" src="https://img.shields.io/badge/PyTorch-DE3412?style=flat&logo=pytorch&logoColor=white">
    </div>
</div>

# ViTPose

[ViTPose](https://huggingface.co/papers/2204.12484) is a vision transformer-based model for keypoint (pose) estimation. It uses a simple, non-hierarchical [ViT](./vit) backbone and a lightweight decoder head. This architecture simplifies model design, takes advantage of transformer scalability, and can be adapted to different training strategies.

[ViTPose++](https://huggingface.co/papers/2212.04246) improves on ViTPose by incorporating a mixture-of-experts (MoE) module in the backbone and using more diverse pretraining data.

<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/vitpose-architecture.png" alt="drawing" width="600"/>

<small> ViTPose architecture. Taken from the <a href="https://huggingface.co/papers/2204.12484">original paper.</a> </small>

You can find all ViTPose and ViTPose++ checkpoints under the [ViTPose collection](https://huggingface.co/collections/usyd-community/vitpose-677fcfd0a0b2b5c8f79c4335).
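If you want to enumerate the checkpoints in that collection programmatically, here is a minimal sketch; it assumes the `huggingface_hub.get_collection` helper and reuses the collection slug from the link above:

```py
from huggingface_hub import get_collection

# Slug taken from the collection URL above
collection = get_collection("usyd-community/vitpose-677fcfd0a0b2b5c8f79c4335")
for item in collection.items:
    print(item.item_id)
```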
This model was contributed by [nielsr](https://huggingface.co/nielsr) and [sangbumchoi](https://github.com/SangbumChoi). The original code can be found [here](https://github.com/ViTAE-Transformer/ViTPose).

The example below demonstrates pose estimation with the [`VitPoseForPoseEstimation`] class.

```py
import torch
import requests
import numpy as np
import supervision as sv
from PIL import Image
from transformers import AutoProcessor, RTDetrForObjectDetection, VitPoseForPoseEstimation

device = "cuda" if torch.cuda.is_available() else "cpu"

url = "https://www.fcbarcelona.com/fcbarcelona/photo/2021/01/31/3c55a19f-dfc1-4451-885e-afd14e890a11/mini_2021-01-31-BARCELONA-ATHLETIC-BILBAOI-30.JPG"
image = Image.open(requests.get(url, stream=True).raw)

# Detect humans in the image
person_image_processor = AutoProcessor.from_pretrained("PekingU/rtdetr_r50vd_coco_o365")
person_model = RTDetrForObjectDetection.from_pretrained("PekingU/rtdetr_r50vd_coco_o365", device_map=device)
@@ -67,7 +54,7 @@ with torch.no_grad():
results = person_image_processor.post_process_object_detection(
    outputs, target_sizes=torch.tensor([(image.height, image.width)]), threshold=0.3
)
result = results[0]

# Human label corresponds to index 0 in the COCO dataset
person_boxes = result["boxes"][result["labels"] == 0]
@@ -77,10 +64,7 @@ person_boxes = person_boxes.cpu().numpy()
person_boxes[:, 2] = person_boxes[:, 2] - person_boxes[:, 0]
person_boxes[:, 3] = person_boxes[:, 3] - person_boxes[:, 1]

# Detect keypoints for each person found
image_processor = AutoProcessor.from_pretrained("usyd-community/vitpose-base-simple")
model = VitPoseForPoseEstimation.from_pretrained("usyd-community/vitpose-base-simple", device_map=device)
@@ -90,54 +74,7 @@ with torch.no_grad():
    outputs = model(**inputs)

pose_results = image_processor.post_process_pose_estimation(outputs, boxes=[person_boxes])
image_pose_result = pose_results[0]

xy = torch.stack([pose_result['keypoints'] for pose_result in image_pose_result]).cpu().numpy()
scores = torch.stack([pose_result['scores'] for pose_result in image_pose_result]).cpu().numpy()
@@ -162,119 +99,192 @@ annotated_frame = vertex_annotator.annotate(
    scene=annotated_frame,
    key_points=key_points
)
annotated_frame
```
<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/vitpose.png"/>
</div>
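`image_pose_result` above is the per-image output of [`~VitPoseImageProcessor.post_process_pose_estimation`]: a list with one dictionary per detected person. A minimal sketch for inspecting it (only the `keypoints` and `scores` fields already used on this page are assumed):

```py
# Inspect the post-processed output from the example above
for person_id, person in enumerate(image_pose_result):
    keypoints = person["keypoints"]  # (num_keypoints, 2) x/y coordinates
    scores = person["scores"]        # (num_keypoints,) per-keypoint confidence
    print(f"person {person_id}: {len(keypoints)} keypoints, mean score {scores.mean():.3f}")
```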
Quantization reduces the memory burden of large models by representing the weights in a lower precision. Refer to the [Quantization](../quantization/overview) overview for more available quantization backends.

The example below uses [torchao](../quantization/torchao) to quantize only the weights to int4.

```py
# pip install torchao
import torch
import requests
import numpy as np
from PIL import Image
from transformers import AutoProcessor, RTDetrForObjectDetection, VitPoseForPoseEstimation, TorchAoConfig

device = "cuda" if torch.cuda.is_available() else "cpu"

url = "https://www.fcbarcelona.com/fcbarcelona/photo/2021/01/31/3c55a19f-dfc1-4451-885e-afd14e890a11/mini_2021-01-31-BARCELONA-ATHLETIC-BILBAOI-30.JPG"
image = Image.open(requests.get(url, stream=True).raw)

person_image_processor = AutoProcessor.from_pretrained("PekingU/rtdetr_r50vd_coco_o365")
person_model = RTDetrForObjectDetection.from_pretrained("PekingU/rtdetr_r50vd_coco_o365", device_map=device)

inputs = person_image_processor(images=image, return_tensors="pt").to(device)

with torch.no_grad():
    outputs = person_model(**inputs)

results = person_image_processor.post_process_object_detection(
    outputs, target_sizes=torch.tensor([(image.height, image.width)]), threshold=0.3
)
result = results[0]

person_boxes = result["boxes"][result["labels"] == 0]
person_boxes = person_boxes.cpu().numpy()

person_boxes[:, 2] = person_boxes[:, 2] - person_boxes[:, 0]
person_boxes[:, 3] = person_boxes[:, 3] - person_boxes[:, 1]

quantization_config = TorchAoConfig("int4_weight_only", group_size=128)
image_processor = AutoProcessor.from_pretrained("usyd-community/vitpose-plus-huge")
model = VitPoseForPoseEstimation.from_pretrained("usyd-community/vitpose-plus-huge", device_map=device, quantization_config=quantization_config)

inputs = image_processor(image, boxes=[person_boxes], return_tensors="pt").to(device)

with torch.no_grad():
    outputs = model(**inputs)

pose_results = image_processor.post_process_pose_estimation(outputs, boxes=[person_boxes])
image_pose_result = pose_results[0]
```
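To gauge what int4 weight-only quantization saves here, a quick check of the loaded model's weight memory (a minimal sketch; `get_memory_footprint` is inherited from [`PreTrainedModel`], and the exact number depends on your environment):

```py
# Compare against the same checkpoint loaded without `quantization_config` for a before/after number
print(f"Quantized ViTPose++ weights: {model.get_memory_footprint() / 1e9:.2f} GB")
```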
## Notes

- Use [`AutoProcessor`] to automatically prepare bounding box and image inputs.
- ViTPose is a top-down pose estimator. It uses an object detector to detect individuals first before keypoint prediction.
- ViTPose++ has 6 different MoE expert heads (COCO validation `0`, AiC `1`, MPII `2`, AP-10K `3`, APT-36K `4`, COCO-WholeBody `5`), one for each supported dataset. Pass the value corresponding to your dataset as the `dataset_index` argument to indicate which expert to use.

    ```py
    import torch
    from transformers import AutoProcessor, VitPoseForPoseEstimation

    device = "cuda" if torch.cuda.is_available() else "cpu"

    image_processor = AutoProcessor.from_pretrained("usyd-community/vitpose-plus-base")
    model = VitPoseForPoseEstimation.from_pretrained("usyd-community/vitpose-plus-base", device_map=device)

    inputs = image_processor(image, boxes=[person_boxes], return_tensors="pt").to(device)
    dataset_index = torch.tensor([0], device=device)  # must be a tensor of shape (batch_size,)

    with torch.no_grad():
        outputs = model(**inputs, dataset_index=dataset_index)
    ```

- [OpenCV](https://opencv.org/) is an alternative option for visualizing the estimated pose, as shown in the snippet below.
    ```py
    # pip install opencv-python
    import math
    import cv2

    def draw_points(image, keypoints, scores, pose_keypoint_color, keypoint_score_threshold, radius, show_keypoint_weight):
        if pose_keypoint_color is not None:
            assert len(pose_keypoint_color) == len(keypoints)
        for kid, (kpt, kpt_score) in enumerate(zip(keypoints, scores)):
            x_coord, y_coord = int(kpt[0]), int(kpt[1])
            if kpt_score > keypoint_score_threshold:
                color = tuple(int(c) for c in pose_keypoint_color[kid])
                if show_keypoint_weight:
                    cv2.circle(image, (int(x_coord), int(y_coord)), radius, color, -1)
                    transparency = max(0, min(1, kpt_score))
                    cv2.addWeighted(image, transparency, image, 1 - transparency, 0, dst=image)
                else:
                    cv2.circle(image, (int(x_coord), int(y_coord)), radius, color, -1)

    def draw_links(image, keypoints, scores, keypoint_edges, link_colors, keypoint_score_threshold, thickness, show_keypoint_weight, stick_width = 2):
        height, width, _ = image.shape
        if keypoint_edges is not None and link_colors is not None:
            assert len(link_colors) == len(keypoint_edges)
            for sk_id, sk in enumerate(keypoint_edges):
                x1, y1, score1 = (int(keypoints[sk[0], 0]), int(keypoints[sk[0], 1]), scores[sk[0]])
                x2, y2, score2 = (int(keypoints[sk[1], 0]), int(keypoints[sk[1], 1]), scores[sk[1]])
                if (
                    x1 > 0
                    and x1 < width
                    and y1 > 0
                    and y1 < height
                    and x2 > 0
                    and x2 < width
                    and y2 > 0
                    and y2 < height
                    and score1 > keypoint_score_threshold
                    and score2 > keypoint_score_threshold
                ):
                    color = tuple(int(c) for c in link_colors[sk_id])
                    if show_keypoint_weight:
                        X = (x1, x2)
                        Y = (y1, y2)
                        mean_x = np.mean(X)
                        mean_y = np.mean(Y)
                        length = ((Y[0] - Y[1]) ** 2 + (X[0] - X[1]) ** 2) ** 0.5
                        angle = math.degrees(math.atan2(Y[0] - Y[1], X[0] - X[1]))
                        polygon = cv2.ellipse2Poly(
                            (int(mean_x), int(mean_y)), (int(length / 2), int(stick_width)), int(angle), 0, 360, 1
                        )
                        cv2.fillConvexPoly(image, polygon, color)
                        transparency = max(0, min(1, 0.5 * (keypoints[sk[0], 2] + keypoints[sk[1], 2])))
                        cv2.addWeighted(image, transparency, image, 1 - transparency, 0, dst=image)
                    else:
                        cv2.line(image, (x1, y1), (x2, y2), color, thickness=thickness)

    # Note: keypoint_edges and color palette are dataset-specific
    keypoint_edges = model.config.edges

    palette = np.array(
        [
            [255, 128, 0],
            [255, 153, 51],
            [255, 178, 102],
            [230, 230, 0],
            [255, 153, 255],
            [153, 204, 255],
            [255, 102, 255],
            [255, 51, 255],
            [102, 178, 255],
            [51, 153, 255],
            [255, 153, 153],
            [255, 102, 102],
            [255, 51, 51],
            [153, 255, 153],
            [102, 255, 102],
            [51, 255, 51],
            [0, 255, 0],
            [0, 0, 255],
            [255, 0, 0],
            [255, 255, 255],
        ]
    )

    link_colors = palette[[0, 0, 0, 0, 7, 7, 7, 9, 9, 9, 9, 9, 16, 16, 16, 16, 16, 16, 16]]
    keypoint_colors = palette[[16, 16, 16, 16, 16, 9, 9, 9, 9, 9, 9, 0, 0, 0, 0, 0, 0]]

    numpy_image = np.array(image)

    for pose_result in image_pose_result:
        scores = np.array(pose_result["scores"])
        keypoints = np.array(pose_result["keypoints"])

        # draw each point on image
        draw_points(numpy_image, keypoints, scores, keypoint_colors, keypoint_score_threshold=0.3, radius=4, show_keypoint_weight=False)

        # draw links
        draw_links(numpy_image, keypoints, scores, keypoint_edges, link_colors, keypoint_score_threshold=0.3, thickness=1, show_keypoint_weight=False)

    pose_image = Image.fromarray(numpy_image)
    pose_image
    ```
## Resources

Refer to the resources below to learn more about using ViTPose.

- This [notebook](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/ViTPose/Inference_with_ViTPose_for_body_pose_estimation.ipynb) demonstrates inference and visualization.
- This [Space](https://huggingface.co/spaces/hysts/ViTPose-transformers) demonstrates ViTPose on images and video.

## VitPoseImageProcessor

View File

@@ -829,6 +829,9 @@ class DabDetrPreTrainedModel(PreTrainedModel):
             module.weight.data.normal_(mean=0.0, std=std)
             if module.bias is not None:
                 module.bias.data.zero_()
+        elif isinstance(module, nn.LayerNorm):
+            module.weight.data.fill_(1.0)
+            module.bias.data.zero_()
         elif isinstance(module, nn.Embedding):
             module.weight.data.normal_(mean=0.0, std=std)
             if module.padding_idx is not None:
@@ -841,6 +844,8 @@ class DabDetrPreTrainedModel(PreTrainedModel):
             prior_prob = self.config.initializer_bias_prior_prob or 1 / (self.config.num_labels + 1)
             bias_value = -math.log((1 - prior_prob) / prior_prob)
             module.class_embed.bias.data.fill_(bias_value)
+        elif isinstance(module, nn.PReLU):
+            module.reset_parameters()

 # Modified from transformers.models.detr.modeling_detr.DetrEncoder with Detr->DabDetr,DETR->ConditionalDETR

View File

@@ -480,6 +480,12 @@ class DacPreTrainedModel(PreTrainedAudioTokenizerBase):
         if isinstance(module, nn.Conv1d):
             nn.init.trunc_normal_(module.weight, std=0.02)
             nn.init.constant_(module.bias, 0)
+        elif isinstance(module, Snake1d):
+            module.alpha.data.fill_(1.0)
+        elif isinstance(module, nn.ConvTranspose1d):
+            module.reset_parameters()
+        elif isinstance(module, nn.Embedding):
+            module.weight.data.normal_(mean=0.0, std=0.02)

     def apply_weight_norm(self):
         weight_norm = nn.utils.weight_norm

View File

@@ -235,7 +235,7 @@ class EncodecLSTM(nn.Module):
     LSTM without worrying about the hidden state, nor the layout of the data. Expects input as convolutional layout.
     """

-    def __init__(self, config, dimension):
+    def __init__(self, config: EncodecConfig, dimension: int):
         super().__init__()
         self.lstm = nn.LSTM(dimension, dimension, config.num_lstm_layers)
@@ -452,11 +452,7 @@ class EncodecPreTrainedModel(PreTrainedModel):
     def _init_weights(self, module):
         """Initialize the weights"""
-        if isinstance(module, nn.Linear):
-            module.weight.data.normal_(mean=0.0, std=self.config.initializer_range)
-            if module.bias is not None:
-                module.bias.data.zero_()
-        elif isinstance(module, (nn.LayerNorm, nn.GroupNorm)):
+        if isinstance(module, nn.GroupNorm):
             module.bias.data.zero_()
             module.weight.data.fill_(1.0)
         elif isinstance(module, nn.Conv1d):
@@ -464,10 +460,8 @@ class EncodecPreTrainedModel(PreTrainedModel):
             if module.bias is not None:
                 k = math.sqrt(module.groups / (module.in_channels * module.kernel_size[0]))
                 nn.init.uniform_(module.bias, a=-k, b=k)
-        elif isinstance(module, nn.Embedding):
-            module.weight.data.normal_(mean=0.0, std=self.config.initializer_range)
-            if module.padding_idx is not None:
-                module.weight.data[module.padding_idx].zero_()
+        elif isinstance(module, nn.ConvTranspose1d):
+            module.reset_parameters()
         elif isinstance(module, nn.LSTM):
             for name, param in module.named_parameters():
                 if "weight" in name:
@@ -659,7 +653,7 @@ class EncodecModel(EncodecPreTrainedModel):
     def decode(
         self,
-        audio_codes: torch.Tensor,
+        audio_codes: torch.LongTensor,
         audio_scales: torch.Tensor,
         padding_mask: Optional[torch.Tensor] = None,
         return_dict: Optional[bool] = None,
@@ -708,10 +702,10 @@ class EncodecModel(EncodecPreTrainedModel):
     @auto_docstring
     def forward(
         self,
-        input_values: torch.Tensor,
-        padding_mask: Optional[torch.Tensor] = None,
+        input_values: torch.FloatTensor,
+        padding_mask: Optional[torch.BoolTensor] = None,
         bandwidth: Optional[float] = None,
-        audio_codes: Optional[torch.Tensor] = None,
+        audio_codes: Optional[torch.LongTensor] = None,
         audio_scales: Optional[torch.Tensor] = None,
         return_dict: Optional[bool] = None,
     ) -> Union[tuple[torch.Tensor, torch.Tensor], EncodecOutput]:

View File

@@ -445,9 +445,16 @@ class FalconMambaPreTrainedModel(PreTrainedModel):
     def _init_weights(self, module):
         """Initialize the weights."""
+        std = self.config.initializer_range
         if isinstance(module, FalconMambaMixer):
+            # S4D real initialization. These are not discretized!
+            # The core is to load them, compute the discrete states, then write the updated state. Keeps the memory bounded
+            A = torch.arange(1, module.ssm_state_size + 1, dtype=torch.float32)[None, :]
+            A = A.expand(module.intermediate_size, -1).contiguous()
+            module.A_log.copy_(torch.log(A))
             module.A_log._no_weight_decay = True
             module.D._no_weight_decay = True
+            module.D.data.fill_(1.0)

             dt_init_std = self.config.time_step_rank**-0.5 * self.config.time_step_scale
             if self.config.time_step_init_scheme == "constant":
@@ -462,33 +469,39 @@ class FalconMambaPreTrainedModel(PreTrainedModel):
             ).clamp(min=self.config.time_step_floor)
             # # Inverse of softplus: https://github.com/pytorch/pytorch/issues/72759
             inv_dt = dt + torch.log(-torch.expm1(-dt))
-            with torch.no_grad():
-                module.dt_proj.bias.copy_(inv_dt)
+            module.dt_proj.bias.copy_(inv_dt)
             module.dt_proj.bias._no_reinit = True

+            nn.init.kaiming_uniform_(module.conv1d.weight, a=math.sqrt(5))
+            if module.conv1d.bias is not None:
+                if not getattr(module.conv1d.bias, "_no_reinit", False):
+                    nn.init.zeros_(module.conv1d.bias)
+            nn.init.kaiming_uniform_(module.out_proj.weight, a=math.sqrt(5))
+
+            if self.config.rescale_prenorm_residual:
+                # Reinitialize selected weights subject to the OpenAI GPT-2 Paper Scheme:
+                # > A modified initialization which accounts for the accumulation on the residual path with model depth. Scale
+                # > the weights of residual layers at initialization by a factor of 1/√N where N is the # of residual layers.
+                # > -- GPT-2 :: https://openai.com/blog/better-language-models/
+                #
+                # Reference (Megatron-LM): https://github.com/NVIDIA/Megatron-LM/blob/main/megatron/model/gpt_model.py
+                # Special Scaled Initialization --> There are 2 Layer Norms per Transformer Block
+                # Following Pytorch init, except scale by 1/sqrt(2 * n_layer)
+                # We need to reinit p since this code could be called multiple times
+                # Having just p *= scale would repeatedly scale it down
+                p = module.out_proj.weight
+                p /= math.sqrt(self.config.num_hidden_layers)
+
         if isinstance(module, nn.Linear):
+            if not getattr(module.weight, "_no_reinit", False):
+                nn.init.normal_(module.weight, std=std)
             if module.bias is not None:
                 if not getattr(module.bias, "_no_reinit", False):
                     nn.init.zeros_(module.bias)
+        elif isinstance(module, FalconMambaRMSNorm):
+            module.weight.data.fill_(1.0)
         elif isinstance(module, nn.Embedding):
-            nn.init.normal_(module.weight, std=self.config.initializer_range)
+            nn.init.normal_(module.weight, std=std)
-
-        if self.config.rescale_prenorm_residual:
-            # Reinitialize selected weights subject to the OpenAI GPT-2 Paper Scheme:
-            # > A modified initialization which accounts for the accumulation on the residual path with model depth. Scale
-            # > the weights of residual layers at initialization by a factor of 1/√N where N is the # of residual layers.
-            # > -- GPT-2 :: https://openai.com/blog/better-language-models/
-            #
-            # Reference (Megatron-LM): https://github.com/NVIDIA/Megatron-LM/blob/main/megatron/model/gpt_model.py
-            for name, p in module.named_parameters():
-                if name in ["out_proj.weight"]:
-                    # Special Scaled Initialization --> There are 2 Layer Norms per Transformer Block
-                    # Following Pytorch init, except scale by 1/sqrt(2 * n_layer)
-                    # We need to reinit p since this code could be called multiple times
-                    # Having just p *= scale would repeatedly scale it down
-                    nn.init.kaiming_uniform_(p, a=math.sqrt(5))
-                    with torch.no_grad():
-                        p /= math.sqrt(self.config.num_hidden_layers)

 @dataclass
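As an aside, the `inv_dt` line in the hunk above relies on the inverse-softplus identity mentioned in the code comment. A standalone sanity check of that identity (illustrative only, not part of this diff):

```py
import torch

dt = torch.tensor([0.001, 0.01, 0.1])        # example time steps
inv_dt = dt + torch.log(-torch.expm1(-dt))   # inverse of softplus, as used in _init_weights
print(torch.allclose(torch.nn.functional.softplus(inv_dt), dt))  # True
```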

View File

@@ -1414,16 +1414,18 @@ class GroundingDinoPreTrainedModel(PreTrainedModel):
             module.out_vision_proj.bias.data.fill_(0)
             nn.init.xavier_uniform_(module.out_text_proj.weight)
             module.out_text_proj.bias.data.fill_(0)
-        elif isinstance(module, (GroundingDinoEncoderLayer, GroundingDinoDecoderLayer)):
-            for p in module.parameters():
-                if p.dim() > 1:
-                    nn.init.normal_(p, mean=0.0, std=std)
+        elif isinstance(module, GroundingDinoFusionLayer):
+            module.vision_param.data.fill_(1e-4)
+            module.text_param.data.fill_(1e-4)
         elif isinstance(module, (nn.Linear, nn.Conv2d, nn.BatchNorm2d)):
             # Slightly different from the TF version which uses truncated_normal for initialization
             # cf https://github.com/pytorch/pytorch/pull/5617
             module.weight.data.normal_(mean=0.0, std=std)
             if module.bias is not None:
                 module.bias.data.zero_()
+        elif isinstance(module, (nn.LayerNorm, nn.GroupNorm)):
+            module.weight.data.fill_(1.0)
+            module.bias.data.zero_()
         elif isinstance(module, nn.Embedding):
             module.weight.data.normal_(mean=0.0, std=std)
             if module.padding_idx is not None:

View File

@@ -382,9 +382,16 @@ class MambaPreTrainedModel(PreTrainedModel):
     def _init_weights(self, module):
         """Initialize the weights."""
+        std = self.config.initializer_range
         if isinstance(module, MambaMixer):
+            # S4D real initialization. These are not discretized!
+            # The core is to load them, compute the discrete states, then write the updated state. Keeps the memory bounded
+            A = torch.arange(1, module.ssm_state_size + 1, dtype=torch.float32)[None, :]
+            A = A.expand(module.intermediate_size, -1).contiguous()
+            module.A_log.copy_(torch.log(A))
             module.A_log._no_weight_decay = True
             module.D._no_weight_decay = True
+            module.D.data.fill_(1.0)

             dt_init_std = self.config.time_step_rank**-0.5 * self.config.time_step_scale
             if self.config.time_step_init_scheme == "constant":
@@ -399,33 +406,39 @@ class MambaPreTrainedModel(PreTrainedModel):
             ).clamp(min=self.config.time_step_floor)
             # # Inverse of softplus: https://github.com/pytorch/pytorch/issues/72759
             inv_dt = dt + torch.log(-torch.expm1(-dt))
-            with torch.no_grad():
-                module.dt_proj.bias.copy_(inv_dt)
+            module.dt_proj.bias.copy_(inv_dt)
             module.dt_proj.bias._no_reinit = True

+            nn.init.kaiming_uniform_(module.conv1d.weight, a=math.sqrt(5))
+            if module.conv1d.bias is not None:
+                if not getattr(module.conv1d.bias, "_no_reinit", False):
+                    nn.init.zeros_(module.conv1d.bias)
+            nn.init.kaiming_uniform_(module.out_proj.weight, a=math.sqrt(5))
+
+            if self.config.rescale_prenorm_residual:
+                # Reinitialize selected weights subject to the OpenAI GPT-2 Paper Scheme:
+                # > A modified initialization which accounts for the accumulation on the residual path with model depth. Scale
+                # > the weights of residual layers at initialization by a factor of 1/√N where N is the # of residual layers.
+                # > -- GPT-2 :: https://openai.com/blog/better-language-models/
+                #
+                # Reference (Megatron-LM): https://github.com/NVIDIA/Megatron-LM/blob/main/megatron/model/gpt_model.py
+                # Special Scaled Initialization --> There are 2 Layer Norms per Transformer Block
+                # Following Pytorch init, except scale by 1/sqrt(2 * n_layer)
+                # We need to reinit p since this code could be called multiple times
+                # Having just p *= scale would repeatedly scale it down
+                p = module.out_proj.weight
+                p /= math.sqrt(self.config.num_hidden_layers)
+
         if isinstance(module, nn.Linear):
+            if not getattr(module.weight, "_no_reinit", False):
+                nn.init.normal_(module.weight, std=std)
             if module.bias is not None:
                 if not getattr(module.bias, "_no_reinit", False):
                     nn.init.zeros_(module.bias)
+        elif isinstance(module, MambaRMSNorm):
+            module.weight.data.fill_(1.0)
         elif isinstance(module, nn.Embedding):
-            nn.init.normal_(module.weight, std=self.config.initializer_range)
+            nn.init.normal_(module.weight, std=std)
-
-        if self.config.rescale_prenorm_residual:
-            # Reinitialize selected weights subject to the OpenAI GPT-2 Paper Scheme:
-            # > A modified initialization which accounts for the accumulation on the residual path with model depth. Scale
-            # > the weights of residual layers at initialization by a factor of 1/√N where N is the # of residual layers.
-            # > -- GPT-2 :: https://openai.com/blog/better-language-models/
-            #
-            # Reference (Megatron-LM): https://github.com/NVIDIA/Megatron-LM/blob/main/megatron/model/gpt_model.py
-            for name, p in module.named_parameters():
-                if name in ["out_proj.weight"]:
-                    # Special Scaled Initialization --> There are 2 Layer Norms per Transformer Block
-                    # Following Pytorch init, except scale by 1/sqrt(2 * n_layer)
-                    # We need to reinit p since this code could be called multiple times
-                    # Having just p *= scale would repeatedly scale it down
-                    nn.init.kaiming_uniform_(p, a=math.sqrt(5))
-                    with torch.no_grad():
-                        p /= math.sqrt(self.config.num_hidden_layers)

 @dataclass

View File

@@ -721,9 +721,15 @@ class Mamba2PreTrainedModel(PreTrainedModel):
     def _init_weights(self, module):
         """Initialize the weights."""
+        std = self.config.initializer_range
         if isinstance(module, Mamba2Mixer):
+            # S4D real initialization. These are not discretized!
+            # The core is to load them, compute the discrete states, then write the updated state. Keeps the memory bounded
+            A = torch.arange(1, self.config.num_heads + 1)
+            module.A_log.copy_(torch.log(A))
             module.A_log._no_weight_decay = True
             module.D._no_weight_decay = True
+            module.D.data.fill_(1.0)

             dt = torch.exp(
                 torch.rand(self.config.num_heads)
@@ -733,33 +739,39 @@ class Mamba2PreTrainedModel(PreTrainedModel):
             # # Inverse of softplus: https://github.com/pytorch/pytorch/issues/72759
             inv_dt = dt + torch.log(-torch.expm1(-dt))
-            with torch.no_grad():
-                module.dt_bias.copy_(inv_dt)
+            module.dt_bias.copy_(inv_dt)
             module.dt_bias._no_reinit = True

+            nn.init.kaiming_uniform_(module.conv1d.weight, a=math.sqrt(5))
+            if module.conv1d.bias is not None:
+                if not getattr(module.conv1d.bias, "_no_reinit", False):
+                    nn.init.zeros_(module.conv1d.bias)
+            nn.init.kaiming_uniform_(module.out_proj.weight, a=math.sqrt(5))
+
+            if self.config.rescale_prenorm_residual:
+                # Reinitialize selected weights subject to the OpenAI GPT-2 Paper Scheme:
+                # > A modified initialization which accounts for the accumulation on the residual path with model depth. Scale
+                # > the weights of residual layers at initialization by a factor of 1/√N where N is the # of residual layers.
+                # > -- GPT-2 :: https://openai.com/blog/better-language-models/
+                #
+                # Reference (Megatron-LM): https://github.com/NVIDIA/Megatron-LM/blob/main/megatron/model/gpt_model.py
+                # Special Scaled Initialization --> There are 2 Layer Norms per Transformer Block
+                # Following Pytorch init, except scale by 1/sqrt(2 * n_layer)
+                # We need to reinit p since this code could be called multiple times
+                # Having just p *= scale would repeatedly scale it down
+                p = module.out_proj.weight
+                p /= math.sqrt(self.config.num_hidden_layers)
+
         if isinstance(module, nn.Linear):
+            if not getattr(module.weight, "_no_reinit", False):
+                nn.init.normal_(module.weight, std=std)
             if module.bias is not None:
                 if not getattr(module.bias, "_no_reinit", False):
                     nn.init.zeros_(module.bias)
+        elif isinstance(module, (Mamba2RMSNorm, MambaRMSNormGated)):
+            module.weight.data.fill_(1.0)
         elif isinstance(module, nn.Embedding):
-            nn.init.normal_(module.weight, std=self.config.initializer_range)
+            nn.init.normal_(module.weight, std=std)
-
-        if self.config.rescale_prenorm_residual:
-            # Reinitialize selected weights subject to the OpenAI GPT-2 Paper Scheme:
-            # > A modified initialization which accounts for the accumulation on the residual path with model depth. Scale
-            # > the weights of residual layers at initialization by a factor of 1/√N where N is the # of residual layers.
-            # > -- GPT-2 :: https://openai.com/blog/better-language-models/
-            #
-            # Reference (Megatron-LM): https://github.com/NVIDIA/Megatron-LM/blob/main/megatron/model/gpt_model.py
-            for name, p in module.named_parameters():
-                if name in ["out_proj.weight"]:
-                    # Special Scaled Initialization --> There are 2 Layer Norms per Transformer Block
-                    # Following Pytorch init, except scale by 1/sqrt(2 * n_layer)
-                    # We need to reinit p since this code could be called multiple times
-                    # Having just p *= scale would repeatedly scale it down
-                    nn.init.kaiming_uniform_(p, a=math.sqrt(5))
-                    with torch.no_grad():
-                        p /= math.sqrt(self.config.num_hidden_layers)

 @dataclass

View File

@@ -440,10 +440,13 @@ class MusicgenPreTrainedModel(PreTrainedModel):
     def _init_weights(self, module):
         std = self.config.initializer_factor
-        if isinstance(module, (nn.Linear, nn.Conv1d)):
+        if isinstance(module, nn.Linear):
             module.weight.data.normal_(mean=0.0, std=std)
             if module.bias is not None:
                 module.bias.data.zero_()
+        elif isinstance(module, nn.LayerNorm):
+            module.weight.data.fill_(1.0)
+            module.bias.data.zero_()
         elif isinstance(module, nn.Embedding):
             module.weight.data.normal_(mean=0.0, std=std)
             if module.padding_idx is not None:

View File

@@ -406,10 +406,13 @@ class MusicgenMelodyPreTrainedModel(PreTrainedModel):
     def _init_weights(self, module):
         std = self.config.initializer_factor
-        if isinstance(module, (nn.Linear, nn.Conv1d)):
+        if isinstance(module, nn.Linear):
             module.weight.data.normal_(mean=0.0, std=std)
             if module.bias is not None:
                 module.bias.data.zero_()
+        elif isinstance(module, nn.LayerNorm):
+            module.weight.data.fill_(1.0)
+            module.bias.data.zero_()
         elif isinstance(module, nn.Embedding):
             module.weight.data.normal_(mean=0.0, std=std)
             if module.padding_idx is not None:
@@ -1286,7 +1289,7 @@ class MusicgenMelodyForConditionalGeneration(PreTrainedModel, GenerationMixin):
                 The text encoder model that encodes text into hidden states for conditioning.
             audio_encoder (`PreTrainedModel`, *optional*):
                 The audio encoder model that encodes audio into hidden states for conditioning.
-            decoder (`MusicgenForCausalLM`, *optional*):
+            decoder (`MusicgenMelodyForCausalLM`, *optional*):
                 The decoder model that generates audio tokens based on conditioning signals.
         """
         if config is None and None in (text_encoder, audio_encoder, decoder):

View File

@@ -1006,10 +1006,15 @@ class OmDetTurboPreTrainedModel(PreTrainedModel):
             nn.init.xavier_uniform_(module.query_position_head.layers[1].weight)
             for layer in module.channel_projection_layers:
                 nn.init.xavier_uniform_(layer[0].weight)
+        elif isinstance(module, OmDetTurboLanguageBackbone):
+            nn.init.normal_(module.text_projection, std=self.config.text_projection_in_dim**-0.5)
         elif isinstance(module, (nn.Linear, nn.Conv2d, nn.BatchNorm2d)):
             module.weight.data.normal_(mean=0.0, std=self.config.init_std)
             if module.bias is not None:
                 module.bias.data.zero_()
+        elif isinstance(module, nn.LayerNorm):
+            module.weight.data.fill_(1.0)
+            module.bias.data.zero_()

     def _set_gradient_checkpointing(self, module, value=False):
         if isinstance(module, OmDetTurboDecoder):

View File

@@ -283,6 +283,9 @@ class Qwen2AudioPreTrainedModel(PreTrainedModel):
             module.weight.data.normal_(mean=0.0, std=std)
             if module.bias is not None:
                 module.bias.data.zero_()
+        elif isinstance(module, nn.LayerNorm):
+            module.weight.data.fill_(1.0)
+            module.bias.data.zero_()
         elif isinstance(module, nn.Embedding):
             module.weight.data.normal_(mean=0.0, std=std)
             if module.padding_idx is not None:

View File

@@ -604,7 +604,7 @@ class SegGptPreTrainedModel(PreTrainedModel):
     supports_gradient_checkpointing = True
     _no_split_modules = ["SegGptEmbeddings", "SegGptLayer"]

-    def _init_weights(self, module: Union[nn.Linear, nn.Conv2d, nn.LayerNorm]) -> None:
+    def _init_weights(self, module: nn.Module) -> None:
         """Initialize the weights"""
         std = self.config.initializer_range
         if isinstance(module, (nn.Linear, nn.Conv2d)):
@@ -615,7 +615,7 @@ class SegGptPreTrainedModel(PreTrainedModel):
             )
             if module.bias is not None:
                 module.bias.data.zero_()
-        elif isinstance(module, nn.LayerNorm):
+        elif isinstance(module, (nn.LayerNorm, SegGptLayerNorm)):
             module.bias.data.zero_()
             module.weight.data.fill_(1.0)
         elif isinstance(module, SegGptAttention):

View File

@@ -551,17 +551,18 @@ class SuperGluePreTrainedModel(PreTrainedModel):
     def _init_weights(self, module: nn.Module) -> None:
         """Initialize the weights"""
-        if isinstance(module, (nn.Linear, nn.Conv2d, nn.Conv1d)):
+        if isinstance(module, (nn.Linear, nn.Conv2d)):
             # Slightly different from the TF version which uses truncated_normal for initialization
             # cf https://github.com/pytorch/pytorch/pull/5617
             module.weight.data.normal_(mean=0.0, std=self.config.initializer_range)
             if module.bias is not None:
                 module.bias.data.zero_()
-        elif isinstance(module, nn.LayerNorm):
+        elif isinstance(module, nn.BatchNorm1d):
             module.bias.data.zero_()
             module.weight.data.fill_(1.0)
-        elif isinstance(module, SuperGlueMultiLayerPerceptron):
-            nn.init.constant_(module.linear.bias, 0.0)
+
+        if hasattr(module, "bin_score"):
+            module.bin_score.data.fill_(1.0)

 @auto_docstring(

View File

@@ -310,12 +310,13 @@ class EncodecModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestCase)
     def test_feed_forward_chunking(self):
         (original_config, inputs_dict) = self.model_tester.prepare_config_and_inputs_for_common()
+        # original_config.norm_type = "time_group_norm"
         for model_class in self.all_model_classes:
             torch.manual_seed(0)
             config = copy.deepcopy(original_config)
             config.chunk_length_s = None
             config.overlap = None
-            config.sampling_rate = 10
+            config.sampling_rate = 20

             model = model_class(config)
             model.to(torch_device)
@@ -326,9 +327,9 @@ class EncodecModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestCase)
             hidden_states_no_chunk = model(**inputs)[1]

             torch.manual_seed(0)
-            config.chunk_length_s = 1
+            config.chunk_length_s = 2
             config.overlap = 0
-            config.sampling_rate = 10
+            config.sampling_rate = 20

             model = model_class(config)
             model.to(torch_device)

View File

@@ -33,7 +33,7 @@ from transformers.testing_utils import (

 from ...generation.test_utils import GenerationTesterMixin
 from ...test_configuration_common import ConfigTester
-from ...test_modeling_common import ModelTesterMixin, ids_tensor
+from ...test_modeling_common import ModelTesterMixin, _config_zero_init, ids_tensor
 from ...test_pipeline_mixin import PipelineTesterMixin
@@ -359,9 +359,11 @@ class FalconMambaModelTest(ModelTesterMixin, GenerationTesterMixin, PipelineTest
     def test_initialization(self):
         config, _ = self.model_tester.prepare_config_and_inputs_for_common()
+        config.rescale_prenorm_residual = True
+        configs_no_init = _config_zero_init(config)

         for model_class in self.all_model_classes:
-            model = model_class(config=config)
+            model = model_class(config=configs_no_init)
             for name, param in model.named_parameters():
                 if "dt_proj.bias" in name:
                     dt = torch.exp(
@@ -380,6 +382,19 @@ class FalconMambaModelTest(ModelTesterMixin, GenerationTesterMixin, PipelineTest
                     if param.requires_grad:
                         # check if it's a ones like
                         torch.testing.assert_close(param.data, torch.ones_like(param.data), rtol=1e-5, atol=1e-5)
+                else:
+                    if param.requires_grad:
+                        if (
+                            "mixer.conv1d.weight" in name
+                            or "mixer.dt_proj.weight" in name
+                            or "mixer.out_proj.weight" in name
+                        ):
+                            continue
+                        self.assertIn(
+                            ((param.data.mean() * 1e9).round() / 1e9).item(),
+                            [0.0, 1.0],
+                            msg=f"Parameter {name} of model {model_class} seems not properly initialized",
+                        )

     @slow
     # Ignore copy

View File

@@ -69,16 +69,15 @@ class Glm4vVisionText2TextModelTester:
         is_training=True,
         text_config={
             "vocab_size": 99,
-            "hidden_size": 32,
-            "intermediate_size": 37,
-            "num_hidden_layers": 4,
-            "num_attention_heads": 4,
-            "num_key_value_heads": 2,
+            "hidden_size": 16,
+            "intermediate_size": 22,
+            "num_hidden_layers": 2,
+            "num_attention_heads": 2,
+            "num_key_value_heads": 1,
             "output_channels": 64,
             "hidden_act": "silu",
             "max_position_embeddings": 512,
             "rope_scaling": {"type": "default", "mrope_section": [2, 1, 1]},
-            "max_window_layers": 3,
             "rope_theta": 10000,
             "tie_word_embeddings": True,
             "bos_token_id": 0,
@@ -87,11 +86,10 @@ class Glm4vVisionText2TextModelTester:
         },
         vision_config={
             "depth": 2,
-            "embed_dim": 32,
             "hidden_act": "silu",
-            "hidden_size": 32,
-            "mlp_ratio": 4,
-            "num_heads": 4,
+            "hidden_size": 48,
+            "out_hidden_size": 16,
+            "intermediate_size": 22,
             "patch_size": 14,
             "spatial_merge_size": 1,
             "temporal_patch_size": 2,
@@ -239,10 +237,6 @@ class Glm4vModelTest(ModelTesterMixin, GenerationTesterMixin, unittest.TestCase)
     def test_multi_gpu_data_parallel_forward(self):
         pass

-    @unittest.skip(reason="We cannot configure to output a smaller model.")
-    def test_model_is_small(self):
-        pass
-
     @unittest.skip("Error with compilation")
     def test_generate_from_inputs_embeds_with_static_cache(self):
         pass

View File

@@ -586,6 +586,8 @@ class GroundingDinoModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.Tes
                     or "value_proj" in name
                     or "output_proj" in name
                     or "reference_points" in name
+                    or "vision_proj" in name
+                    or "text_proj" in name
                 ):
                     continue
                 self.assertIn(

View File

@@ -24,7 +24,7 @@ from transformers.testing_utils import require_torch, require_torch_multi_gpu, s

 from ...generation.test_utils import GenerationTesterMixin
 from ...test_configuration_common import ConfigTester
-from ...test_modeling_common import ModelTesterMixin, ids_tensor
+from ...test_modeling_common import ModelTesterMixin, _config_zero_init, ids_tensor
 from ...test_pipeline_mixin import PipelineTesterMixin
@@ -326,9 +326,11 @@ class MambaModelTest(ModelTesterMixin, GenerationTesterMixin, PipelineTesterMixi
     def test_initialization(self):
         config, _ = self.model_tester.prepare_config_and_inputs_for_common()
+        config.rescale_prenorm_residual = True
+        configs_no_init = _config_zero_init(config)

         for model_class in self.all_model_classes:
-            model = model_class(config=config)
+            model = model_class(config=configs_no_init)
             for name, param in model.named_parameters():
                 if "dt_proj.bias" in name:
                     dt = torch.exp(
@@ -347,6 +349,19 @@ class MambaModelTest(ModelTesterMixin, GenerationTesterMixin, PipelineTesterMixi
                     if param.requires_grad:
                         # check if it's a ones like
                         torch.testing.assert_close(param.data, torch.ones_like(param.data), rtol=1e-5, atol=1e-5)
+                else:
+                    if param.requires_grad:
+                        if (
+                            "mixer.conv1d.weight" in name
+                            or "mixer.dt_proj.weight" in name
+                            or "mixer.out_proj.weight" in name
+                        ):
+                            continue
+                        self.assertIn(
+                            ((param.data.mean() * 1e9).round() / 1e9).item(),
+                            [0.0, 1.0],
+                            msg=f"Parameter {name} of model {model_class} seems not properly initialized",
+                        )

     @slow
     def test_model_from_pretrained(self):

View File

@@ -13,6 +13,7 @@
 # limitations under the License.

+import math
 import unittest

 from transformers import AutoTokenizer, Mamba2Config, is_torch_available
@@ -28,7 +29,7 @@ from transformers.utils.import_utils import is_causal_conv1d_available, is_mamba

 from ...generation.test_utils import GenerationTesterMixin
 from ...test_configuration_common import ConfigTester
-from ...test_modeling_common import ModelTesterMixin, ids_tensor
+from ...test_modeling_common import ModelTesterMixin, _config_zero_init, ids_tensor
 from ...test_pipeline_mixin import PipelineTesterMixin
@@ -276,14 +277,37 @@ class Mamba2ModelTest(ModelTesterMixin, GenerationTesterMixin, PipelineTesterMix
     def test_initialization(self):
         config, _ = self.model_tester.prepare_config_and_inputs_for_common()
+        config.rescale_prenorm_residual = True
+        configs_no_init = _config_zero_init(config)

         for model_class in self.all_model_classes:
-            model = model_class(config=config)
+            model = model_class(config=configs_no_init)
             for name, param in model.named_parameters():
-                if "D" in name:
+                if "dt_proj.bias" in name:
+                    dt = torch.exp(
+                        torch.tensor([0, 1]) * (math.log(config.time_step_max) - math.log(config.time_step_min))
+                        + math.log(config.time_step_min)
+                    ).clamp(min=config.time_step_floor)
+                    inv_dt = dt + torch.log(-torch.expm1(-dt))
+                    if param.requires_grad:
+                        self.assertTrue(param.data.max().item() <= inv_dt[1])
+                        self.assertTrue(param.data.min().item() >= inv_dt[0])
+                elif "A_log" in name:
+                    A = torch.arange(1, config.num_heads + 1)
+                    torch.testing.assert_close(param.data, torch.log(A), rtol=1e-5, atol=1e-5)
+                elif "D" in name:
                     if param.requires_grad:
                         # check if it's a ones like
                         torch.testing.assert_close(param.data, torch.ones_like(param.data), rtol=1e-5, atol=1e-5)
+                else:
+                    if param.requires_grad:
+                        if "mixer.conv1d.weight" in name or "mixer.dt_bias" in name or "mixer.out_proj.weight" in name:
+                            continue
+                        self.assertIn(
+                            ((param.data.mean() * 1e9).round() / 1e9).item(),
+                            [0.0, 1.0],
+                            msg=f"Parameter {name} of model {model_class} seems not properly initialized",
+                        )

     @unittest.skip(reason="Mamba 2 weights are not tied")
     def test_tied_weights_keys(self):

View File

@@ -629,6 +629,7 @@ class OmDetTurboModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestCa
                     or "decoder.channel_projection_layers" in name
                     or "query_position_head" in name
                     or "decoder.encoder_vision_features" in name
+                    or "language_backbone.text_projection" in name
                 ):
                     continue
                 self.assertIn(

View File

@@ -153,10 +153,18 @@ class TimmWrapperModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestC
     def test_retain_grad_hidden_states_attentions(self):
         pass

+    @unittest.skip(reason="TimmWrapper initialization is managed on the timm side")
+    def test_can_init_all_missing_weights(self):
+        pass
+
     @unittest.skip(reason="TimmWrapper initialization is managed on the timm side")
     def test_initialization(self):
         pass

+    @unittest.skip(reason="TimmWrapper initialization is managed on the timm side")
+    def test_mismatched_shapes_have_properly_initialized_weights(self):
+        pass
+
     @unittest.skip(reason="Need to use a timm model and there is no tiny model available.")
     def test_model_is_small(self):
         pass

View File

@@ -855,7 +855,7 @@ class ModelTesterMixin:
         # For now, skip everything older than 2025 and "important models" (too much models to patch otherwise)
         # Use `supports_cache_class` as a proxy to judge "important" models in order to prioritize them
        # TODO: relax this as we patch more and more models
-        if addition_year < 2025 and not model_class._supports_cache_class:
+        if addition_year < 2024 and not model_class._supports_cache_class:
             self.skipTest(reason=f"{model_class} is not a priorited model for now.")

         # Monkey patch the method to add a seed (we do it on PreTrainedModel._initialize_weights, which wraps
@@ -895,6 +895,11 @@ class ModelTesterMixin:
             model_from_config.state_dict().items(), model_from_pretrained.state_dict().items()
         ):
             self.assertEqual(k1, k2, "The keys from each model should be the same")
+
+            # In case using torch.nn.utils.parametrizations on a module, we should skip the resulting keys
+            if re.search(r"\.parametrizations\..*?\.original[01]", k1):
+                continue
+
             # Since we added the seed, they should be exactly the same (i.e. using allclose maybe be wrong due
             # to very low std in init function)
             if not (v1 == v2).all():
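For context on the `parametrizations` skip added above, this is the kind of parameter naming it guards against; a standalone sketch (assumes a recent PyTorch with `torch.nn.utils.parametrizations.weight_norm`):

```py
import torch
from torch import nn
from torch.nn.utils import parametrizations

layer = parametrizations.weight_norm(nn.Linear(4, 4))
# The original weight is split into `...original0` (magnitude) and `...original1` (direction)
print(sorted(name for name, _ in layer.named_parameters()))
# ['bias', 'parametrizations.weight.original0', 'parametrizations.weight.original1']
```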