Mirror of https://github.com/huggingface/transformers.git (synced 2025-07-19 12:38:23 +06:00)
* add watermarking processor (usage sketch after this log)
* remove the other hashing (context width=1 always)
* make style
* Update src/transformers/generation/logits_process.py
  Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
* Update src/transformers/generation/logits_process.py
  Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
* Update src/transformers/generation/logits_process.py
  Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
* Update src/transformers/generation/configuration_utils.py
  Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
* update watermarking processor
* add detector
* update tests to use detector
* fix failing tests
* rename `input_seq`
* make style
* doc for processor
* minor fixes
* docs
* make quality
* Update src/transformers/generation/configuration_utils.py
  Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
* Update src/transformers/generation/logits_process.py
  Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
* Update src/transformers/generation/watermarking.py
  Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
* Update src/transformers/generation/watermarking.py
  Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
* Update src/transformers/generation/watermarking.py
  Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
* add PR suggestions
* let's use lru_cache's default max size (128)
* import processor if torch available
* maybe like this
* let's move the config to a torch-independent file
* add docs
* tiny docs fix to make the test happy
* Update src/transformers/generation/configuration_utils.py
  Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
* Update src/transformers/generation/watermarking.py
  Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
* PR suggestions
* add docs
* fix test
* fix docs
* address pr comments
* style
* Revert "style"
This reverts commit
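For orientation, the pieces this log adds (a `WatermarkingConfig`, the watermark logits processor wired into `generate()`, and a `WatermarkDetector` in `src/transformers/generation/`) fit together roughly as below. This is a minimal sketch based on the docs added in this PR; the checkpoint and the argument values (`bias=2.5`, `seeding_scheme="selfhash"`) are illustrative, so verify names against the released API:

```python
# Minimal sketch: watermark generated text, then detect the watermark.
# Assumes the WatermarkingConfig / WatermarkDetector API this PR introduces;
# checkpoint and argument values are illustrative, not prescriptive.
from transformers import AutoModelForCausalLM, AutoTokenizer, WatermarkDetector, WatermarkingConfig

model_id = "openai-community/gpt2"  # any causal LM works; gpt2 is just small
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer(["Alice and Bob are"], return_tensors="pt")

# At each decoding step, the processor biases the logits of a
# pseudo-randomly seeded "green" subset of the vocabulary.
watermarking_config = WatermarkingConfig(bias=2.5, seeding_scheme="selfhash")
out = model.generate(
    **inputs,
    watermarking_config=watermarking_config,
    do_sample=False,
    max_new_tokens=20,
)

# The detector re-derives the green lists (caching seed computations --
# the lru_cache with default maxsize 128 mentioned above) and tests
# whether green tokens are over-represented in the sequence.
detector = WatermarkDetector(
    model_config=model.config,
    device="cpu",
    watermarking_config=watermarking_config,
)
detection = detector(out, return_dict=True)
print(detection.prediction)  # e.g. array([ True])
```

Note that detection must use the same `WatermarkingConfig` as generation; with a different seeding scheme or context width, the re-derived green lists won't match and the statistic degrades to chance.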
Documentation source tree at this commit:

internal/
main_classes/
model_doc/
tasks/
_config.py
_redirects.yml
_toctree.yml
accelerate.md
add_new_model.md
add_new_pipeline.md
agents.md
attention.md
autoclass_tutorial.md
benchmarks.md
bertology.md
big_models.md
chat_templating.md
community.md
contributing.md
conversations.md
create_a_model.md
custom_models.md
debugging.md
deepspeed.md
fast_tokenizers.md
fsdp.md
generation_strategies.md
glossary.md
hf_quantizer.md
hpo_train.md
index.md
installation.md
llm_optims.md
llm_tutorial_optimization.md
llm_tutorial.md
model_memory_anatomy.md
model_sharing.md
model_summary.md
multilingual.md
notebooks.md
pad_truncation.md
peft.md
perf_hardware.md
perf_infer_cpu.md
perf_infer_gpu_one.md
perf_torch_compile.md
perf_train_cpu_many.md
perf_train_cpu.md
perf_train_gpu_many.md
perf_train_gpu_one.md
perf_train_special.md
perf_train_tpu_tf.md
performance.md
perplexity.md
philosophy.md
pipeline_tutorial.md
pipeline_webserver.md
pr_checks.md
preprocessing.md
quantization.md
quicktour.md
run_scripts.md
sagemaker.md
serialization.md
task_summary.md
tasks_explained.md
testing.md
tf_xla.md
tflite.md
tokenizer_summary.md
torchscript.md
trainer.md
training.md
troubleshooting.md