Mirror of https://github.com/huggingface/transformers.git, synced 2025-08-03 03:31:05 +06:00
gpt-bigcode: avoid `zero_` to support Core ML (#24755)

gpt-bigcode: avoid in-place `zero_` to support Core ML. The in-place `zero_` op is not supported by the Core ML conversion process. This PR replaces it with `torch.zeros_like` so conversion can proceed. The change only affects a workaround for a PyTorch bug on the `cpu` device.
This commit is contained in:
parent 0284285501
commit 395e566a42
@@ -164,7 +164,7 @@ class GPTBigCodeAttention(nn.Module):
             # This is needed because of a bug in pytorch https://github.com/pytorch/pytorch/issues/80588.
             # The bug was fixed in https://github.com/pytorch/pytorch/pull/96086,
             # but the fix has not been released as of pytorch version 2.0.0.
-            attn_weights.zero_()
+            attn_weights = torch.zeros_like(attn_weights)
             beta = 1
         else:
             beta = 0
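A minimal sketch of the substitution (standalone tensors, not the actual attention code): in-place `zero_()` mutates an existing tensor, which some tracing/export backends such as the Core ML converter cannot represent, while `torch.zeros_like` allocates a fresh zero tensor with the same shape, dtype, and device and is numerically equivalent here.

```python
import torch

attn_weights = torch.randn(2, 3)

# Before: in-place zeroing mutates the tensor (rejected by the converter).
attn_weights_inplace = attn_weights.clone()
attn_weights_inplace.zero_()

# After: functional replacement producing an equivalent zero tensor.
attn_weights_functional = torch.zeros_like(attn_weights)

# Both paths yield the same values, so the workaround's behavior is unchanged.
assert torch.equal(attn_weights_inplace, attn_weights_functional)
```

Because the old in-place call never used the previous values of `attn_weights`, swapping it for a fresh allocation changes nothing downstream; `beta = 1` in the subsequent `baddbmm` still multiplies a tensor of zeros.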