Fixed typo in Llama configuration docstring (#35520)

Update configuration_llama.py

There is no `num_heads` parameter, only `num_attention_heads`
Mukund Sudarshan 2025-01-06 12:54:08 -05:00 committed by GitHub
parent 3b1be043cd
commit 1650e0e514


@@ -124,7 +124,7 @@ class LlamaConfig(PretrainedConfig):
         mlp_bias (`bool`, *optional*, defaults to `False`):
             Whether to use a bias in up_proj, down_proj and gate_proj layers in the MLP layers.
         head_dim (`int`, *optional*):
-            The attention head dimension. If None, it will default to hidden_size // num_heads
+            The attention head dimension. If None, it will default to hidden_size // num_attention_heads
 
     ```python
     >>> from transformers import LlamaModel, LlamaConfig
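
For context, a minimal sketch of the behavior the corrected docstring describes, assuming a recent transformers release where `LlamaConfig` stores the resolved `head_dim` on the config object:

```python
>>> from transformers import LlamaConfig

>>> # When head_dim is not passed, it is derived as hidden_size // num_attention_heads
>>> config = LlamaConfig(hidden_size=4096, num_attention_heads=32)
>>> config.head_dim
128

>>> # An explicitly passed head_dim takes precedence over the derived value
>>> LlamaConfig(hidden_size=4096, num_attention_heads=32, head_dim=64).head_dim
64
```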