Merge pull request #718 from Rocketknight1/master

Incorrect docstring for BertForMaskedLM
commit c68b4eceed
Thomas Wolf, 2019-06-28 17:08:51 +02:00 (committed by GitHub)

@@ -997,10 +997,6 @@ class BertForMaskedLM(BertPreTrainedModel):
 `masked_lm_labels`: masked language modeling labels: torch.LongTensor of shape [batch_size, sequence_length]
     with indices selected in [-1, 0, ..., vocab_size]. All labels set to -1 are ignored (masked), the loss
     is only computed for the labels set in [0, ..., vocab_size]
-`head_mask`: an optional torch.LongTensor of shape [num_heads] with indices
-    selected in [0, 1]. It's a mask to be used if the input sequence length is smaller than the max
-    input sequence length in the current batch. It's the mask that we typically use for attention when
-    a batch has varying length sentences.
 `head_mask`: an optional torch.Tensor of shape [num_heads] or [num_layers, num_heads] with indices between 0 and 1.
     It's a mask to be used to nullify some heads of the transformer. 1.0 => head is not masked, 0.0 => head is masked.
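
For readers landing on this diff, here is a minimal sketch of how the two documented arguments fit together. It is not part of the commit: the pytorch_pretrained_bert import path, the forward() keyword names, the scalar-loss return value, and the 'bert-base-uncased' weights are all assumptions about this era of the repository, so treat it as illustrative only.

import torch
from pytorch_pretrained_bert import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertForMaskedLM.from_pretrained('bert-base-uncased')
model.eval()

# The special tokens are in the tokenizer's never_split set, so they survive tokenize().
tokens = tokenizer.tokenize('[CLS] the capital of france is [MASK] . [SEP]')
input_ids = torch.tensor([tokenizer.convert_tokens_to_ids(tokens)])

# masked_lm_labels: -1 everywhere except the positions to score; per the
# docstring above, labels set to -1 are ignored and the loss only covers
# labels in [0, ..., vocab_size].
masked_lm_labels = torch.full_like(input_ids, -1)
mask_pos = tokens.index('[MASK]')
masked_lm_labels[0, mask_pos] = tokenizer.convert_tokens_to_ids(['paris'])[0]

# head_mask: one row per layer, one column per head; per the corrected
# docstring, 1.0 keeps a head active and 0.0 nullifies it. An all-ones
# tensor is a no-op baseline; here we additionally silence head 0 of layer 0.
head_mask = torch.ones(model.config.num_hidden_layers,
                       model.config.num_attention_heads)
head_mask[0, 0] = 0.0

# With masked_lm_labels supplied, forward() on this era of the code returns
# the masked-LM loss rather than the prediction scores.
loss = model(input_ids, masked_lm_labels=masked_lm_labels, head_mask=head_mask)

Note the sign convention the corrected line establishes: because the mask multiplies attention per head, a head_mask of ones changes nothing, and zeroing an entry is what removes a head's contribution.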