Mirror of https://github.com/huggingface/transformers.git (synced 2025-07-16 11:08:23 +06:00)
Latest commit (squashed commit log):

* initial config and MLA layer
* first pass at decoder
* completion of layers
* modeling class
* adding hybrid class to imports
* fix imports granitemoehybrid
* fix granitehybrid imports
* fix granitehybrid import
* fix generated modeling file
* add some comments
* minor fixes in layers
* add sharedMLP layer
* correct layer names
* fixes in mamba config
* fix mamba config
* change name of MLP layer
* fix seq mixer layers
* correct mamba config
* fixes in param names
* enable hybrid model
* update config
* fix config granite hybrid
* fix attention layer
* cleanup to re-use mamba code
* keep layer types
* attention bias cleanup
* update mamba layer name
* first pass at tests
* use granite attention
* fix: self attn weights
* pass at making pos_emb optional
* initialize self_attn only as needed
* overwrite forward to create HybridMambaCache
* Log invalid layer types
* Add attention outputs test
* Only emit attentions/logits if not None
* Fix config test hidden size divisibility
* mark granitemoehybrid as stateful
* Initialize mamba convolutional layers
* Formatting fixes
* config docstring, removed some unused attrs
* Fix missing arg in models test
* Fix create and check decoder model test
* support logits_to_keep in granitemoe
* regen to pass logits_to_keep
* Allow None or rope
* Fix gradient checkpointing
* Add granitemoehybrid as special cache for generate check
* Remove unused MLA refs
* Fix mamba layer mask
* Remove logits_to_keep from config
* Minor docstring nits
* Update licenses
* Enable cache by default
* map layer types to layer block type
* First pass at granite moe hybrid docs
* Ignore granite moe hybrid in valid checkpoint check
* Align attention interfaces
* regenerate modular granitemoeshared attention interface
* Align granite moe hybrid attn interface
* run formatting
* Handle mamba initialization
* avoid conditional attr defs
* Move hybrid layer validation to config
* Add placeholder integration tests
* Docs nits / Update model names
* Clean up forward conditions
* Use gradient checkpointing layer
* Remove some copied bamba tests + inherit align test init; delete more tests; use common layer init with bamba tests; finish test consolidation
* avoid redundant intermediate std var
* use @can_return_tuple
* Remove unused moe state
* make skipped test names consistent
* Fix docstring order
* Add missing toc
* Always create the shared mlp
* Fix name in docstring
* link preview model in docs

---------

Signed-off-by: Sukriti-Sharma4 <sukriti.sharma4@ibm.com>
Co-authored-by: Alex-Brooks <Alex.Brooks@ibm.com>
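Taken together, the commits above add the GraniteMoeHybrid architecture (interleaved Mamba and attention layers with a shared MLP) and wire it into generation, with caching enabled by default. Below is a minimal usage sketch assuming the model loads through the standard auto classes; the checkpoint id `ibm-granite/granite-4.0-tiny-preview` is an assumption standing in for whatever preview model the docs actually link.

```python
# Minimal usage sketch for the GraniteMoeHybrid model added by this commit.
# The checkpoint id below is an assumption (the commit log only says a
# preview model is linked in the docs); substitute the id from the model card.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ibm-granite/granite-4.0-tiny-preview"  # assumed checkpoint id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("The Granite hybrid architecture mixes", return_tensors="pt")
# Caching is on by default per the commit log; generate() builds the hybrid
# Mamba cache internally, so no special cache handling is needed here.
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```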
internal/
main_classes/
model_doc/
quantization/
tasks/
_config.py
_redirects.yml
_toctree.yml
accelerate.md
add_new_model.md
add_new_pipeline.md
agents.md
attention_interface.md
attention.md
backbones.md
cache_explanation.md
chat_extras.md
chat_templating_multimodal.md
chat_templating_writing.md
chat_templating.md
community.md
contributing.md
conversations.md
custom_models.md
debugging.md
deepspeed.md
executorch.md
fast_tokenizers.md
feature_extractors.md
fsdp.md
generation_features.md
generation_strategies.md
gguf.md
glossary.md
gpu_selection.md
how_to_hack_models.md
hpo_train.md
image_processors.md
index.md
installation.md
kv_cache.md
llm_optims.md
llm_tutorial_optimization.md
llm_tutorial.md
model_memory_anatomy.md
model_sharing.md
model_summary.md
models.md
modular_transformers.md
notebooks.md
optimizers.md
pad_truncation.md
peft.md
perf_hardware.md
perf_infer_cpu.md
perf_infer_gpu_multi.md
perf_infer_gpu_one.md
perf_torch_compile.md
perf_train_cpu_many.md
perf_train_cpu.md
perf_train_gaudi.md
perf_train_gpu_many.md
perf_train_gpu_one.md
perf_train_special.md
perf_train_tpu_tf.md
perplexity.md
philosophy.md
pipeline_gradio.md
pipeline_tutorial.md
pipeline_webserver.md
pr_checks.md
processors.md
quicktour.md
run_scripts.md
serialization.md
serving.md
task_summary.md
tasks_explained.md
testing.md
tf_xla.md
tflite.md
tokenizer_summary.md
tools.md
torchscript.md
trainer.md
training.md
troubleshooting.md