Mirror of https://github.com/huggingface/transformers.git (synced 2025-07-03 12:50:06 +06:00)
Latest commit (Llama 4 integration, squashed commit message):

* remove one of the last deps
* update fast image processor after refactor
* styling
* more quality-of-life improvements
* nit
* update
* cleanups
* some cleanups
* vllm updates
* update fake image token
* [convert] Fix typo
* [convert] Strip extraneous bytes from shards
* [convert] Minor fixes
* [convert] Use num_experts
* multi-image fixes in modeling + processor
* fixup size
* 128 experts
* Use default rope
* Unfuse mlp
* simplify inputs-embeds merging a lot
* remove .item() 👀
* fix from review
* Address feedback
* Use None "default" for rope_scaling. Add eot.
* set seed
* return aspect ratios and bug fixes
* Moe 128 rebased (#8)
  * 128 experts
  * Use default rope
  * Unfuse mlp
  * Address feedback
  * Use None "default" for rope_scaling. Add eot.
  * Meta/llama quant compat (#7)
    * add quant-compatible model & conversion code for llama4
    * fix a few issues
    * fix a few issues
    * minor type mapping fix

    Co-authored-by: Lu Fang <fanglu@fb.com>
  * use a new config parameter to determine which model definition to use for MoE

  Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
  Co-authored-by: Lu Fang <fanglu@fb.com>
* un-comment write_tokenizer in the conversion script
* remove unused imports
* [llama4] Pop aspect_ratios from image processor output in Llama4Processor

  Signed-off-by: Jon Swenson <jmswen@gmail.com>
* Fix parameter_count name
* Update src/transformers/models/llama4/configuration_llama4.py
* nit
* Add changes for no_rope, moe_layers, chunked attention. Just need to test all
* Update src/transformers/models/llama4/image_processing_llama4_fast.py
* nit
* fix post-merge with main
* support flex attention
* fixes
* fix
* add layer
* small updates
* rebase and delete llm_compressor
* nit
* [llama4/mm] Add back <|image|> token that delimits global tile
* [llama4/mm] Fix Llama 4 image processing unit tests
* add explicit dtype

  Signed-off-by: Jon Swenson <jmswen@gmail.com>
* sdpa works
* comment todo small
* fix model loading

  Signed-off-by: Zijing Liu <liuzijing2014@gmail.com>
* revert
* nits
* small fix for TP on 1 node
* Read new params from config
* Add <|eom|>
* lol don't know how this got here
* adding fp8
* Save processor, fix chat template
* style
* Add boi/eoi tokens (we don't use them)
* fixes; for now flex seems to work :)
* updates
* nits
* updates
* missing keys
* add context parallel
* update
* update
* fix
* nits
* add world_size and make eager attn work for vision
* Ignore new key present in base models
* add tp_plan
* fix nope

  Signed-off-by: Zijing Liu <liuzijing2014@gmail.com>
* minor fix

  Signed-off-by: Zijing Liu <liuzijing2014@gmail.com>
* Clean up Llama4 vision model
* current updates
* add support for `attn_temperature_tuning`
* add floor scale
* add missing attn scales
* push what works, dirty trick for the device sync
* oops
* Fix pad_token_id

  See https://huggingface.co/ll-re/Llama-4-Scout-17B-16E/discussions/2/files
  Confirmed in the original codebase.
* fix CausalLM loading
* rm
* fix tied weights
* fix sdpa
* push current version
* should work with both short and long
* add compressed_tensors & fix fbgemm tp
* Fix flex impl
* style
* chunking
* try to revert the potentially breaking change
* fix auto factory
* fix shapes in general
* rm processing
* commit cache utils cleanup
* Fix context length
* fix
* allocate
* update tp_plan
* fix SDPA!
* Add support for sparse `Llama4TextMoe` layer from the kernel hub
* cleanup
* better merge
* update
* still broken, fixing now
* nits
* revert print
* Write max_position_embeddings and max_model_length
* Update modeling_llama4.py
* Save attention_chunk_size
* Sync eos terminators
* Read initializer_range
* style
* remove `dict`
* fix
* eager should use `chunked_attention_mask`
* revert
* fixup
* fix config
* Revert "Merge pull request #36 from huggingface/sparse-llama4-moe"

  This reverts commit
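The squashed history above names several Llama 4 text-config knobs: 128 experts, a `None` "default" for `rope_scaling`, `attention_chunk_size`, and `attn_temperature_tuning`. As a hedged illustration only, the sketch below shows how those fields could be set on the config class this commit series introduces; the class export and exact field names/types are assumptions inferred from the commit wording, not guaranteed by this listing.

```python
# Hedged sketch: wire together config fields that the commit messages
# above mention by name. Field names/types are assumptions.
from transformers import Llama4TextConfig

config = Llama4TextConfig(
    num_local_experts=128,         # "128 experts"
    rope_scaling=None,             # 'Use None "default" for rope_scaling'
    attention_chunk_size=8192,     # "Save attention_chunk_size" / chunked attention
    attn_temperature_tuning=True,  # "add support for attn_temperature_tuning"
)
print(config.num_local_experts, config.attention_chunk_size)
```

Note that `PretrainedConfig` stores unrecognized keyword arguments as plain attributes, so the sketch degrades gracefully even if a field name drifted between this commit series and a given release.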
Directory contents:

- test_module/
- tf_ops/
- add_pipeline_model_mapping_to_test.py
- check_bad_commit.py
- check_build.py
- check_config_attributes.py
- check_config_docstrings.py
- check_copies.py
- check_doc_toc.py
- check_docstrings.py
- check_doctest_list.py
- check_dummies.py
- check_inits.py
- check_model_tester.py
- check_modular_conversion.py
- check_repo.py
- check_self_hosted_runner.py
- check_tf_ops.py
- create_dependency_mapping.py
- create_dummy_models.py
- custom_init_isort.py
- deprecate_models.py
- download_glue_data.py
- extract_warnings.py
- fetch_hub_objects_for_ci.py
- get_ci_error_statistics.py
- get_github_job_time.py
- get_modified_files.py
- get_previous_daily_ci.py
- get_test_info.py
- important_models.txt
- models_to_deprecate.py
- modular_model_converter.py
- not_doctested.txt
- notification_service_doc_tests.py
- notification_service_quantization.py
- notification_service.py
- past_ci_versions.py
- patch_helper.py
- pr_slow_ci_models.py
- print_env.py
- process_bad_commit_report.py
- process_circleci_workflow_test_reports.py
- process_test_artifacts.py
- release.py
- set_cuda_devices_for_ci.py
- slow_documentation_tests.txt
- sort_auto_mappings.py
- split_doctest_jobs.py
- split_model_tests.py
- tests_fetcher.py
- update_metadata.py
- update_tiny_models.py
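The files above are the repository's CI and repo-consistency maintenance scripts, most of which are standalone entry points. As a small sketch (assuming a checkout of the repository with these scripts under `utils/`, which is where they live upstream; the set of checks chosen here is an arbitrary sample), one way to drive a few of them from Python:

```python
# Hedged sketch: run a sample of the listed consistency checks from a
# transformers checkout. Paths assume the scripts live under utils/.
import subprocess
import sys

CHECKS = [
    "utils/check_copies.py",
    "utils/check_inits.py",
    "utils/check_repo.py",
]

for script in CHECKS:
    print(f"running {script}")
    # Each script is a standalone entry point; a non-zero exit code
    # means the corresponding consistency check failed.
    subprocess.run([sys.executable, script], check=True)
```

Each script exits non-zero on failure, which is how CI consumes them; running them directly like this mirrors how the repository's own tooling invokes them.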