commit ca559826c4
@@ -93,8 +93,8 @@ Let's see how we can use `BertModel` to encode our inputs in hidden-states:
 # Load pre-trained model (weights)
 model = BertModel.from_pretrained('bert-base-uncased')
 
-# Set the model in evaluation mode to desactivate the DropOut modules
-# This is IMPORTANT to have reproductible results during evaluation!
+# Set the model in evaluation mode to deactivate the DropOut modules
+# This is IMPORTANT to have reproducible results during evaluation!
 model.eval()
 
 # If you have a GPU, put everything on cuda
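For context, here is a minimal sketch of the quickstart flow this hunk sits in, with the corrected comments applied. It assumes `BertTokenizer` for input preparation and tuple-style model outputs; the input sentence is illustrative and not taken from the diff.

```python
import torch
from transformers import BertTokenizer, BertModel

# Tokenize an illustrative sentence into a batch of input ids
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
tokens_tensor = torch.tensor([tokenizer.encode("Hello, how are you?")])

# Load pre-trained model (weights)
model = BertModel.from_pretrained('bert-base-uncased')

# Set the model in evaluation mode to deactivate the DropOut modules
# This is IMPORTANT to have reproducible results during evaluation!
model.eval()

# If you have a GPU, put everything on cuda
if torch.cuda.is_available():
    tokens_tensor = tokens_tensor.to('cuda')
    model = model.to('cuda')

# Encode the inputs into hidden states; no_grad skips gradient bookkeeping
with torch.no_grad():
    hidden_states = model(tokens_tensor)[0]  # (batch, seq_len, hidden_size)
```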
@@ -168,8 +168,8 @@ Let's see how to use `GPT2LMHeadModel` to generate the next token following our
 # Load pre-trained model (weights)
 model = GPT2LMHeadModel.from_pretrained('gpt2')
 
-# Set the model in evaluation mode to desactivate the DropOut modules
-# This is IMPORTANT to have reproductible results during evaluation!
+# Set the model in evaluation mode to deactivate the DropOut modules
+# This is IMPORTANT to have reproducible results during evaluation!
 model.eval()
 
 # If you have a GPU, put everything on cuda
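Similarly, a minimal sketch of the flow around the GPT-2 hunk, again with the corrected comments. It assumes `GPT2Tokenizer` and tuple-style outputs where index 0 holds the language-modeling logits; the prompt is illustrative.

```python
import torch
from transformers import GPT2Tokenizer, GPT2LMHeadModel

# Tokenize an illustrative prompt into a batch of input ids
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
tokens_tensor = torch.tensor([tokenizer.encode("Who was Jim Henson? Jim Henson was a")])

# Load pre-trained model (weights)
model = GPT2LMHeadModel.from_pretrained('gpt2')

# Set the model in evaluation mode to deactivate the DropOut modules
# This is IMPORTANT to have reproducible results during evaluation!
model.eval()

# If you have a GPU, put everything on cuda
if torch.cuda.is_available():
    tokens_tensor = tokens_tensor.to('cuda')
    model = model.to('cuda')

# The next token is the argmax over the logits at the last position
with torch.no_grad():
    logits = model(tokens_tensor)[0]
predicted_index = int(torch.argmax(logits[0, -1, :]))
print(tokenizer.decode([predicted_index]))
```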