mirror of
https://github.com/huggingface/transformers.git
synced 2025-07-03 04:40:06 +06:00

258 lines
9.2 KiB
Python
# coding=utf-8
# Copyright 2020 The HuggingFace Inc. team.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
|
|
This script is responsible for making sure the dummies in utils/dummies_xxx.py are up to date with the main init.
|
|
|
|
Why dummies? This is to make sure that a user can always import all objects from `transformers`, even if they don't
|
|
have the necessary extra libs installed. Those objects will then raise helpful error message whenever the user tries
|
|
to access one of their methods.
|
|
|
|
Usage (from the root of the repo):
|
|
|
|
Check that the dummy files are up to date (used in `make repo-consistency`):
|
|
|
|
```bash
|
|
python utils/check_dummies.py
|
|
```
|
|
|
|
Update the dummy files if needed (used in `make fix-copies`):
|
|
|
|
```bash
|
|
python utils/check_dummies.py --fix_and_overwrite
|
|
```
|
|
"""

import argparse
import os
import re
from typing import Dict, List, Optional


# All paths are set with the intent you should run this script from the root of the repo with the command
# python utils/check_dummies.py
PATH_TO_TRANSFORMERS = "src/transformers"

# Matches is_xxx_available()
_re_backend = re.compile(r"is\_([a-z_]*)_available()")
# Matches from xxx import bla
_re_single_line_import = re.compile(r"\s+from\s+\S*\s+import\s+([^\(\s].*)\n")
# Matches if not is_xxx_available()
_re_test_backend = re.compile(r"^\s+if\s+not\s+\(?is\_[a-z_]*\_available\(\)")


# Templates for the dummy objects.
DUMMY_CONSTANT = """
{0} = None
"""


DUMMY_CLASS = """
class {0}(metaclass=DummyObject):
    _backends = {1}

    def __init__(self, *args, **kwargs):
        requires_backends(self, {1})
"""


DUMMY_FUNCTION = """
def {0}(*args, **kwargs):
    requires_backends({0}, {1})
"""


def find_backend(line: str) -> Optional[str]:
    """
    Find one (or multiple) backends in a code line of the init.

    Args:
        line (`str`): A code line in an init file.

    Returns:
        Optional[`str`]: If one (or several) backend is found, returns it. In the case of multiple backends (the line
        contains `is_xxx_available() and is_yyy_available()`) returns all backends joined on `_and_` (so
        `xxx_and_yyy` for instance).
    """
    if _re_test_backend.search(line) is None:
        return None
    backends = [b[0] for b in _re_backend.findall(line)]
    backends.sort()
    return "_and_".join(backends)


def read_init() -> Dict[str, List[str]]:
    """
    Read the init and extract backend-specific objects.

    Returns:
        Dict[str, List[str]]: A dictionary mapping backend name to the list of object names requiring that backend.
    """
    with open(os.path.join(PATH_TO_TRANSFORMERS, "__init__.py"), "r", encoding="utf-8", newline="\n") as f:
        lines = f.readlines()

    # Get to the point where we do the actual imports for type checking
    line_index = 0
    while not lines[line_index].startswith("if TYPE_CHECKING"):
        line_index += 1

    backend_specific_objects = {}
    # Go through to the end of the file
    while line_index < len(lines):
        # If the line is an `if is_backend_available`, we grab all objects associated with it.
        backend = find_backend(lines[line_index])
        if backend is not None:
            while not lines[line_index].startswith("    else:"):
                line_index += 1
            line_index += 1

            objects = []
            # Until we unindent, add backend objects to the list
            while len(lines[line_index]) <= 1 or lines[line_index].startswith(" " * 8):
                line = lines[line_index]
                single_line_import_search = _re_single_line_import.search(line)
                if single_line_import_search is not None:
                    # Single-line imports
                    objects.extend(single_line_import_search.groups()[0].split(", "))
                elif line.startswith(" " * 12):
                    # Multiple-line imports (with 3 indentation levels)
                    objects.append(line[12:-2])
                line_index += 1

            backend_specific_objects[backend] = objects
        else:
            line_index += 1

    return backend_specific_objects
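The single-line import branch above can be sketched in isolation; the regex is copied inline and the example init line is assumed for illustration:

```python
import re

# Copy of `_re_single_line_import`, inlined so the snippet is self-contained.
_re_single_line_import = re.compile(r"\s+from\s+\S*\s+import\s+([^\(\s].*)\n")

# A hypothetical line from the guarded section of the init.
line = "        from .modeling_bert import BertModel, BertForMaskedLM\n"
match = _re_single_line_import.search(line)

# The captured group is the comma-separated object list after `import`.
objects = match.groups()[0].split(", ")
print(objects)  # ['BertModel', 'BertForMaskedLM']
```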


def create_dummy_object(name: str, backend_name: str) -> str:
    """
    Create the code for a dummy object.

    Args:
        name (`str`): The name of the object.
        backend_name (`str`): The name of the backend required for that object.

    Returns:
        `str`: The code of the dummy object.
    """
    if name.isupper():
        return DUMMY_CONSTANT.format(name)
    elif name.islower():
        return DUMMY_FUNCTION.format(name, backend_name)
    else:
        return DUMMY_CLASS.format(name, backend_name)
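A minimal sketch of the casing-based dispatch above, with the two relevant templates inlined (`DummyObject` and `requires_backends` only appear inside the generated source text, so they are not needed to run this):

```python
# Inlined stand-ins for DUMMY_CONSTANT and DUMMY_CLASS above.
DUMMY_CONSTANT = "\n{0} = None\n"
DUMMY_CLASS = (
    "\nclass {0}(metaclass=DummyObject):\n"
    "    _backends = {1}\n"
    "\n"
    "    def __init__(self, *args, **kwargs):\n"
    "        requires_backends(self, {1})\n"
)


def create_dummy_object(name, backend_name):
    # All-uppercase names are constants; mixed-case names are classes.
    if name.isupper():
        return DUMMY_CONSTANT.format(name)
    return DUMMY_CLASS.format(name, backend_name)


print(create_dummy_object("TORCH_CONSTANT", '["torch"]'))
print(create_dummy_object("BertModel", '["torch"]'))
```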


def create_dummy_files(backend_specific_objects: Optional[Dict[str, List[str]]] = None) -> Dict[str, str]:
    """
    Create the content of the dummy files.

    Args:
        backend_specific_objects (`Dict[str, List[str]]`, *optional*):
            The mapping of backend name to the list of backend-specific objects. If not passed, will be obtained by
            calling `read_init()`.

    Returns:
        `Dict[str, str]`: A dictionary mapping backend name to code of the corresponding backend file.
    """
    if backend_specific_objects is None:
        backend_specific_objects = read_init()

    dummy_files = {}

    for backend, objects in backend_specific_objects.items():
        backend_name = "[" + ", ".join(f'"{b}"' for b in backend.split("_and_")) + "]"
        dummy_file = "# This file is autogenerated by the command `make fix-copies`, do not edit.\n"
        dummy_file += "from ..utils import DummyObject, requires_backends\n\n"
        dummy_file += "\n".join([create_dummy_object(o, backend_name) for o in objects])
        dummy_files[backend] = dummy_file

    return dummy_files
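The `backend_name` literal written into each dummy file comes from the join in the loop above; exercised on an assumed compound key, it looks like this:

```python
# An assumed compound backend key, as produced by `find_backend`.
backend = "torch_and_vision"

# Same join as in the loop above: quote each backend and wrap in brackets.
backend_name = "[" + ", ".join(f'"{b}"' for b in backend.split("_and_")) + "]"
print(backend_name)  # ["torch", "vision"]
```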


def check_dummies(overwrite: bool = False):
    """
    Check if the dummy files are up to date and maybe `overwrite` with the right content.

    Args:
        overwrite (`bool`, *optional*, defaults to `False`):
            Whether or not to overwrite the content of the dummy files. Will raise an error if they are not up to date
            when `overwrite=False`.
    """
    dummy_files = create_dummy_files()
    # Special correspondence of backend name to the shortcut used in utils/dummy_xxx_objects.py
    short_names = {"torch": "pt"}

    # Locate the actual dummy modules and read their content.
    path = os.path.join(PATH_TO_TRANSFORMERS, "utils")
    dummy_file_paths = {
        backend: os.path.join(path, f"dummy_{short_names.get(backend, backend)}_objects.py")
        for backend in dummy_files.keys()
    }

    actual_dummies = {}
    for backend, file_path in dummy_file_paths.items():
        if os.path.isfile(file_path):
            with open(file_path, "r", encoding="utf-8", newline="\n") as f:
                actual_dummies[backend] = f.read()
        else:
            actual_dummies[backend] = ""

    # Compare the actual content with what it should be.
    for backend in dummy_files.keys():
        if dummy_files[backend] != actual_dummies[backend]:
            if overwrite:
                print(
                    f"Updating transformers.utils.dummy_{short_names.get(backend, backend)}_objects.py as the main "
                    "__init__ has new objects."
                )
                with open(dummy_file_paths[backend], "w", encoding="utf-8", newline="\n") as f:
                    f.write(dummy_files[backend])
            else:
                # Temporary fix to help people identify which newly introduced objects are not correctly protected.
                found = False
                for _actual, _dummy in zip(
                    actual_dummies["torch"].split("class"), dummy_files["torch"].split("class")
                ):
                    if _actual != _dummy:
                        actual_broken = _actual
                        dummy_broken = _dummy
                        found = True
                        break

                if not found:
                    print("A transient error was found with the dummies, please investigate.")
                    continue

                raise ValueError(
                    "The main __init__ has objects that are not present in "
                    f"transformers.utils.dummy_{short_names.get(backend, backend)}_objects.py.\n"
                    "It is likely the following objects are responsible, see these excerpts:\n"
                    "---------------------------------- Actual -------------------------------------\n"
                    f"\n{actual_broken}\n"
                    "---------------------------------- Dummy --------------------------------------\n"
                    f"\n{dummy_broken}\n"
                    "Run `make fix-copies` to fix this."
                )


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--fix_and_overwrite", action="store_true", help="Whether to fix inconsistencies.")
    args = parser.parse_args()

    check_dummies(args.fix_and_overwrite)