* mvp
* added test (a few models need fixes)
* fix a few test cases
* test nits
* harder test 😈
* revert changes in stablelm
* test with improved condition
* add todo
* tmp commit
* merged with main
* nits
* add todo
* final corrections
* add docs for generation compilation
* docs nits
* add tip
* PR suggestions
* add more details to the compilation docs
* fix cache positions
* cache is now init in generate; update docs
* tag test as flaky
* docs
* post rebase make fixup and other nits
* remove unintended changes
* whisper (encoder-decoder) not supported
* move token default updates to ; add tests for token defaults
* push changes
* manual rebase
* chameleon doesn't support this
* fix test_static_cache_mha_mqa_gqa (broken in another PR)
* docs: dynamic is better with end-to-end compilation
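A minimal sketch of the end-to-end compiled generation flow the docs above describe (model id and compile flags are illustrative; the static cache is initialized inside generate(), as noted in the commits):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# a static cache gives fixed shapes, so the forward pass can be compiled
model.generation_config.cache_implementation = "static"
model.forward = torch.compile(model.forward, mode="reduce-overhead", fullgraph=True)

inputs = tokenizer("Hello, my name is", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```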
* Fix single letter stop strings
* Change the 0 to a 1 to avoid potential empty vector headaches later
* Restructure for clarity
* Update tests/generation/test_stopping_criteria.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Add the unsqueeze
---------
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* token healing impl + trie with extensions
* make fixup
* prefix-robust space tokenization
* examples readme and requirements
* make fixup
* allow input prompt and model
* redundant defaults
* Specialized Trie
* make fixup
* updated tests with new inherited Trie
* input ids to auto device_map
* rm unused import
* Update src/transformers/generation/utils.py
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
* naming convention
* Revert "naming convention"
This reverts commit dd39d9c5b7a969e2d8a8d2a8e54f121b82dc44f0.
* naming convention
* last -hopefully- changes
---------
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
* clean-up
* Update src/transformers/cache_utils.py
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
* Update src/transformers/cache_utils.py
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
* Update src/transformers/cache_utils.py
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
* fixup
* Update tests/quantization/quanto_integration/test_quanto.py
Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>
* Update src/transformers/generation/configuration_utils.py
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
* more suggestions
* mapping if torch available
* run tests & add 'support_quantized' flag
* fix jamba test
* revert, will be fixed by another PR
* codestyle
* HQQ and versatile cache classes
* final update
* typo
* make tests happy
---------
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>
* stash commit (will discard all of this)
* stash commit
* First commit - needs a lot of testing!
* Add a test
* Fix imports and make the tests actually test something
* Tests pass!
* Rearrange test
* Add comments (but it's still a bit confusing)
* Stop storing the tokenizer
* Comment fixup
* Fix for input_ids with a single sequence
* Update tests to test single sequences
* make fixup
* Fix incorrect use of isin()
* Expand tests to catch more cases
* Expand tests to catch more cases
* make fixup
* Fix length calculation and update tests
* Handle Ġ as a space replacement too
* Update src/transformers/generation/stopping_criteria.py
Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
* Add optimizations from Joao's suggestion
* Remove TODO
* Update src/transformers/generation/stopping_criteria.py
Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
* Update tests/generation/test_stopping_criteria.py
Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
* make fixup
* Rename some variables and remove some debugging clauses for clarity
* Add tests for the sub-methods
* Clarify one test slightly
* Add stop_strings to GenerationConfig
* generate() supports stop_string arg, asks for tokenizer if not provided
* make fixup
* Cleanup code and rename variables for clarity
* Update tokenizer error
* Update tokenizer passing, handle generation on GPU
* Slightly more explanation cleanup
* More comment cleanup
* Factor out the token cleanup so it's more obvious what we're doing, and we can change it later
* Careful with that cleanup!
* Cleanup + optimizations to _get_matching_positions
* More minor performance tweaks
* Implement caching and eliminate some expensive ops (startup time: 200ms -> 9ms)
* Remove the pin_memory call
* Parallelize across all stop strings!
* Quick fix for tensor devices
* Update embeddings test for the new format
* Fix test imports
* Manual patching for BERT-like tokenizers
* Return a bool vector instead of a single True/False
* Better comment
* Better comment
* Add tests from @zucchini-nlp
* Amy's list creation nit
* tok_list -> token_list
* Push a big expanded docstring (should we put it somewhere else?)
* Expand docstrings
* Docstring fixups
* Rebase
* make fixup
* Make a properly general method for figuring out token strings
* Fix naming throughout the functions
* Move cache, refactor, fix tests
* Add comment
* Remove finished TODO
* Remove finished TODO
* make fixup
* Update src/transformers/generation/stopping_criteria.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update and shorten docstring
* Update tests to be shorter/clearer and test specific cases
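A hedged usage sketch of the stop_strings feature added in the commits above (model id and stop strings are illustrative; generate() asks for the tokenizer so it can match strings against text):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
inputs = tokenizer("The three primary colors are", return_tensors="pt")

# generation stops as soon as any stop string appears in the decoded text
out = model.generate(**inputs, max_new_tokens=50, stop_strings=["4.", "\n"], tokenizer=tokenizer)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```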
---------
Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Add jamba arch
* apply "make fix-copies" changes
* fix link to model in JambaConfig docstring
* Add n_ctx in modeling file because repo-consistency wants that
* Add jamba to flash attention and sdpa documentation
* mamba dt_proj quant fix now works for LoRA as well
* override test_left_padding_compatibility and use a more permissive tolerance. left padding numerical differences are accentuated by mamba layers
* add jamba to tokenization auto
* fix comments of shape (PR #24 in the model page: https://huggingface.co/ai21labs/Jamba-v0.1/discussions/24)
* simple PR fixes
* remove unnecessary kwargs from JambaAttentionDecoderLayer and JambaMambaDecoderLayer
* remove the LoRA hack for the mamba dt_proj bias. It was solved in huggingface/peft#1530 (https://github.com/huggingface/peft/pull/1530)
* Add copied comment on JambaMLP (it's the same as MixtralMLP)
* remove padding_mask warnings. It's not supported anymore
* fix docstring. Float instead of int
* A few more minor PR fixes
* (1) lowercase names for mamba layernorms (2) remove _apply_inner_layernorms and do it directly in the forward pass
* Return None attention weights from mamba layers. Append to all attentions only if not None.
* remove some leftover jamba archive lists
* Better separation between expert vs non-expert layers. non-expert layers return None as router_logits, and it is not concatenated to all_router_logits returned from JambaModel
* no need to take router_logits at config.expert_layer_offset anymore. result.router_logits now holds results only for expert layers
* Add Jamba paper on READMEs
* (1) rename n_ctx -> max_position_embeddings (2) don't use it in the modeling file since it's not needed (set it as an exception to check_config_attributes)
* Add copied from comment
* remove the code path for apply_inner_layernorms=False. Jamba always has the inner mamba layernorms
* clearer docstring for _convert_to_standard_cache
* style fixes
* Change calc_logits_for_entire_prompt (bool) to num_logits_to_keep (int). Adapt assisted decoding code to use it. Also small change in low memory beam search decoding path to support this new int value in model_inputs
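A hedged sketch of the num_logits_to_keep semantics described above (the model id follows the Jamba commits; treat the exact call as illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("ai21labs/Jamba-v0.1")
model = AutoModelForCausalLM.from_pretrained("ai21labs/Jamba-v0.1")
input_ids = tokenizer("Hello", return_tensors="pt").input_ids

# num_logits_to_keep=1 computes logits only for the last position instead of
# for the entire prompt (the old calc_logits_for_entire_prompt=True behavior)
outputs = model(input_ids=input_ids, num_logits_to_keep=1)
print(outputs.logits.shape)  # (batch_size, 1, vocab_size)
```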
* rename test so it still overrides what it's meant to override
* draft
* oops
* nit
* remove more complex logic
* fix names used in config
* fix fix fix
* style
* fix some more failing tests
* generate did not init the cache 🙃
* more small nits
* typo
* config.mamba_expand * config.hidden_size for the intermediate size of the mamba shapes
* fix init of pkv with torch.tensor()
* empty tensor
* fix some init issues
* stupid changes required by generate because it does not even support its own DynamicCache class
* more fixes
* fix general assisted gen cache_position bug
* tests passing
* Add offsets and periods as SPECIAL_CASES_TO_ALLOW in check_config_attributes.py
* fix reorder_cache to reorder mamba states and override some more functions in HybridMambaAttentionDynamicCache
* no need to override test_past_key_values_format() and _check_past_key_values_for_generate() in tests anymore
* fix docstrings and typehints for past_key_values
* style fixes
* fix docs
* change typehint due to copy from Mixtral
* forgot import
* import order
* Add configuration_jamba and modeling_jamba to not_doctested because the model is too big to download (in docstring of JambaForCausalLM.forward)
* Add integration test with tiny random Jamba model on hub
* fix flash attention cache shapes
* bring back forgotten hidden states
* rename HybridMambaAttentionDynamicCache.seqlen_offset to has_previous_state (and make bool) and bugfix - it should be set to True after a finished forward pass of the entire model
* align integration test after modeling fixes
* bugfix - mamba can use precomputed states only if the forward pass is on a single token
* bugfix - mamba can use precomputed states only if they match the batch size
* typo
* remove making _prepare_4d_causal_attention_mask a leaf function
* stop using past_seq_len.get_seq_length(). Use cache positions instead. Adjust test (test_decoder_model_past_with_large_inputs) accordingly
---------
Co-authored-by: Arthur Zucker <arthur.zucker@gmail.com>
Co-authored-by: Joao Gante <joao@huggingface.co>
* See if we can get tests to pass with the fixed weights
* See if we can get tests to pass with the fixed weights
* Replace the revisions now that we don't need them anymore
* fix issue with logit processor in beam search in Flax
* adding FlaxNoRepeatNGramLogitsProcessor class + unit test
* style correction and code verification
* add FlaxNoRepeatNGramLogitsProcessor to the test_processor_list and test_processor_list_jitted tests
* fix an issue where ngrams were banned only if they appeared exactly once + update description of get_previous_ngrams
* replace non-jit compatible masking of ngrams that are not yet generated with jittable version
* Revert "fix issue with logit processor in beam search in Flax"
This reverts commit 09b70d7e4d.
* add FlaxNoRepeatNGramLogitsProcessor to _get_logits_processor
* change how banned token indices are cast to boolean
* fix code style
* remove some useless operations + significantly faster computation of update indices using jax.lax.fori_loop
* remove useless loop iterations
* set some variables that were calculated and used multiple times
* fix format
* fix bug and add tests
* nit
* other way to get the cur len instead of the attention mask
* more places where this might have been broken
* nit
* oops
* inputs_embeds vs input_embeds
* test generated outputs
* style
* nit
* fix
* skip failing biogpt
* left-padding test revisited
* Apply suggestions from code review
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
---------
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
output_logits option behaves like output_scores, but returns the raw, unprocessed prediction logits,
i.e. the values before they undergo logit processing and/or warping. The latter happens by default for the
regular output scores.
It's useful to have the unprocessed logits in certain circumstances. For example, unprocessed logits
are very useful with CausalLM models when one wants to determine the probability of a certain answer, e.g.
when asking a question with a yes/no answer. In that case, getting the next-token probabilities of both "yes" and
"no" (and/or their relative ratio) is of interest for classification. The reason for getting these _before_ logit
processing and/or warping is that (a) processing can change the probabilities or (b) it can reject the tokens of
interest / reduce the number of candidate tokens to just 1.
For an example use-case see paper TabLLM: Few-shot Classification of Tabular Data with Large Language Models
by Stefan Hegselmann, Alejandro Buendia, Hunter Lang, Monica Agrawal, Xiaoyi Jiang, and David Sontag.
https://arxiv.org/abs/2210.10723
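A hedged sketch of the yes/no use-case above (model id and prompt are illustrative; out.logits holds the raw scores, out.scores the processed ones):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("Is Paris the capital of France? Answer:", return_tensors="pt")
out = model.generate(
    **inputs,
    max_new_tokens=1,
    do_sample=False,
    output_logits=True,   # raw, unprocessed scores
    output_scores=True,   # processed/warped scores, for comparison
    return_dict_in_generate=True,
)

raw = out.logits[0]  # (batch, vocab) logits of the first generated token
probs = torch.softmax(raw, dim=-1)
yes_id = tokenizer(" yes", add_special_tokens=False).input_ids[0]
no_id = tokenizer(" no", add_special_tokens=False).input_ids[0]
print(probs[0, yes_id].item(), probs[0, no_id].item())
```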
In addition:
- added a dedicated unit test: tests/generation/test_utils/test_return_unprocessed_logit_scores,
which tests the return of logits with output_logits=True in generation.
- set output_logits=True in all other generation unit tests that also have output_scores=True.
Implemented @gante's and @amyeroberts review feedback
Co-authored-by: kx79wq <max.baak@ing.com>
* Port core files + ESM (because ESM code is odd)
* Search-replace in modelling code
* Fix up transfo_xl as well
* Fix other core files + tests (still need to add correct import to tests)
* Fix cookiecutter
* make fixup, fix imports in some more core files
* Auto-add imports to tests
* Cleanup, add imports to sagemaker tests
* Use correct exception for importing tf_keras
* Fixes in modeling_tf_utils
* make fixup
* Correct version parsing code
* Ensure the pipeline tests correctly revert to float32 after each test
* Ensure the pipeline tests correctly revert to float32 after each test
* More tf.keras -> keras
* Add dtype cast
* Better imports of tf_keras
* Add a cast for tf.assign, just in case
* Fix callback imports
* Fix issues in add and is_done for BeamHypotheses
* make newly added arguments optional for better compatibility
* Directly use cur_len as generated_len, add note for retrocompatibility
* update test expectation
* make cur_len represent the length of the entire sequence including the decoder prompt
* remove redundant if/else in testing
* Draft version of new KV Caching
This should allow Attention Sinks (https://github.com/tomaarsen/attention_sinks)
/ StreamingLLM (https://arxiv.org/abs/2309.17453) to be easily implemented
in third-party code or in transformers directly
* Address numerous PR suggestions
1. Move layer_idx from cache to ...Attention. Removes confusing set_layer_idx magic.
2. Always convert past_key_values to Cache instance at the start of ...Attention, removes all other isinstance calls.
3. Remove __bool__ and __getitem__ magic as they're confusing.
4. past_key_values.update(key, value, idx) now returns key, value (see the sketch below).
5. Add use_legacy_cache flag, defaults to None, i.e. Falsey. This breaks generate for now, until 1) the cache is used in generate() or 2) use_legacy_cache is defaulted to True in generate() until we change it in another PR.
6. Separate key_cache and value_cache.
Some work is still needed to see if the SinkCache can conveniently be implemented with just one update method.
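A minimal sketch of the per-layer cache contract in points 2, 4 and 6 above; this is a simplified stand-in, not the actual transformers implementation:

```python
from typing import List, Tuple

import torch

class MinimalDynamicCache:
    def __init__(self):
        self.key_cache: List[torch.Tensor] = []    # one tensor per layer
        self.value_cache: List[torch.Tensor] = []

    def update(self, key: torch.Tensor, value: torch.Tensor, layer_idx: int) -> Tuple[torch.Tensor, torch.Tensor]:
        # append new key/value states along the sequence dimension and
        # return the full cached states, as point 4 describes
        if len(self.key_cache) <= layer_idx:
            self.key_cache.append(key)
            self.value_cache.append(value)
        else:
            self.key_cache[layer_idx] = torch.cat([self.key_cache[layer_idx], key], dim=-2)
            self.value_cache[layer_idx] = torch.cat([self.value_cache[layer_idx], value], dim=-2)
        return self.key_cache[layer_idx], self.value_cache[layer_idx]
```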
* Implement the SinkCache through backward+forward rotations
* Integrate (Sink)Cache with Llama FA2
* Set use_legacy_cache=True as default, allows for test passes
* Move from/to_legacy_cache to ...Model class
* Undo unnecessary newline change
* Remove copy utility from deprecated OpenLlama
* Match import style
* manual rebase with main
* Cache class working with generate (#1)
* Draft version of new KV Caching
This should allow Attention Sinks (https://github.com/tomaarsen/attention_sinks)
/ StreamingLLM (https://arxiv.org/abs/2309.17453) to be easily implemented
in third-party code or in transformers directly
* Address numerous PR suggestions
1. Move layer_idx from cache to ...Attention. Removes confusing set_layer_idx magic.
2. Always convert past_key_values to Cache instance at the start of ...Attention, removes all other isinstance calls.
3. Remove __bool__ and __getitem__ magic as they're confusing.
4. past_key_values.update(key, value, idx) now returns key, value.
5. Add use_legacy_cache flag, defaults to None, i.e. Falsey. This breaks generate for now, until 1) the cache is used in generate() or 2) use_legacy_cache is defaulted to True in generate() until we change it in another PR.
6. Separate key_cache and value_cache.
Some work is still needed to see if the SinkCache can conveniently be implemented with just one update method.
* Integrate (Sink)Cache with Llama FA2
* Move from/to_legacy_cache to ...Model class
* Undo unnecessary newline change
* Match import style
* working generate
* Add tests; Simplify code; Apply changes to Mistral and Persimmon
* fix rebase mess
* a few more manual fixes
* last manual fix
* propagate changes to phi
* upgrade test
* add use_legacy_cache docstring; beef up tests
* reintroduce unwanted deletes
---------
Co-authored-by: Tom Aarsen <Cubiegamedev@gmail.com>
* move import
* add default to model_kwargs.get('use_legacy_cache')
* correct failing test
* Apply suggestions from code review
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* apply PR suggestions
* fix failing test
* Apply suggestions from code review
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Tom Aarsen <37621491+tomaarsen@users.noreply.github.com>
* PR comments
* tmp commit
* add docstrings
* more tests, more docstrings, add to docs
* derp
* tmp commit
* tmp dbg
* more dbg
* fix beam search bug
* cache can be a list of tuples in some models
* fix group beam search
* all but sinkcache integration tests
* fix sink cache and add hard integration test
* now also compatible with input_embeds input
* PR comments
* add Cache support to Phi+FA2
* make fixup
---------
Co-authored-by: Joao Gante <joao@huggingface.co>
Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* skip 4 tests
* nits
* style
* wow it's not my day
* skip new failing tests
* style
* skip for NLLB MoE as well
* skip `test_assisted_decoding_sample` for everyone
* add early stopping logits processor
* black formatted
* indent
* follow method signature
* actual logic
* check for None
* address comments on docstrings and method signature
* add unit test under `LogitsProcessorTest` wip
* unit test passing
* black formatted
* condition per sample
* add to BarkModelIntegrationTests
* wip BarkSemanticModelTest
* rename and add to kwargs handling
* not add to BarkSemanticModelTest
* correct logic and assert last outputs tokens different in test
* doc-builder style
* read from kwargs as well
* assert output length with early stopping is less than without
* ruff
* add back seed and test case
* add original impl default suggestion
* doc-builder
* rename and use softmax
* switch back to LogitsProcessor and update docs wording
* camelCase and spelling and saving compute
* assert strictly less than
* assert less than
* expand test_generate_semantic_early_stop instead
* first raw commit
* still POC
* tentative convert script
* almost working speech encoder conversion scripts
* intermediate code for encoder/decoders
* add modeling code
* first version of speech encoder
* make style
* add new adapter layer architecture
* add adapter block
* add first tentative config
* add working speech encoder conversion
* base model convert works now
* make style
* remove unnecessary classes
* remove unnecessary functions
* add modeling code speech encoder
* rework logic
* forward passes of sub-components work
* add modeling codes
* some config modifs and modeling code modifs
* save WIP
* new edits
* same output speech encoder
* correct attention mask
* correct attention mask
* fix generation
* new generation logic
* erase comments
* make style
* fix typo
* add some descriptions
* new state
* clean imports
* add tests
* make style
* make beam search and num_return_sequences>1 work
* correct edge case issue
* correct SeamlessM4TConformerSamePadLayer copied from
* replace ACT2FN relu by nn.relu
* remove unnecessary return variable
* move back a class
* change name conformer_attention_mask -> conv_attention_mask
* better nit code
* add some Copied from statements
* small nits
* small nit in dict.get
* rename t2u model -> conditionalgeneration
* ongoing refactoring of structure
* update models architecture
* remove SeamlessM4TMultiModal classes
* add tests
* adapt tests
* some non-working code for vocoder
* add seamlessM4T vocoder
* remove buggy line
* fix some hifigan related bugs
* remove hifigan specific config
* change
* add WIP tokenization
* add seamlessM4T working tokenizer
* update tokenization
* add tentative feature extractor
* Update converting script
* update working FE
* refactor input_values -> input_features
* update FE
* changes in generation, tokenizer and modeling
* make style and add t2u_decoder_input_ids
* add intermediate outputs for ToSpeech models
* add vocoder to speech models
* update ValueError
* update FE with languages
* add vocoder convert
* update config docstrings and names
* update generation code and configuration
* remove todos and update config.pad_token_id to generation_config.pad_token_id
* move block vocoder
* remove unnecessary code and uniformize tospeech code
* add feature extractor import
* make style and fix some copies from
* correct consistency + make fix-copies
* add processor code
* remove comments
* add fast tokenizer support
* correct pad_token_id in M4TModel
* correct config
* update tests and codes + make style
* make some suggested corrections - correct comments and change naming
* rename some attributes
* rename some attributes
* remove unnecessary sequential
* remove option to use dur predictor
* nit
* refactor hifigan
* replace normalize_mean and normalize_var with do_normalize + save lang ids to generation config
* add tests
* change tgt_lang logic
* update generation ToSpeech
* add support import SeamlessM4TProcessor
* fix generate
* make tests
* update integration tests, add option to only return text and update tokenizer fast
* fix wrong function call
* update import and convert script
* update integration tests + update repo id
* correct paths and add first test
* update how new attention masks are computed
* update tests
* first take care of batching in vocoder code
* add batching with the vocoder
* add waveform lengths to model outputs
* make style
* add generate kwargs + forward kwargs of M4TModel
* add docstrings to forward methods
* reformat docstrings
* add docstrings t2u model
* add another round of modeling docstrings + rename speaker_id -> spkr_id
* make style
* fix check_repo
* make style
* add seamlessm4t to toctree
* correct check_config_attributes
* write config docstrings + some modifs
* make style
* add docstrings tokenizer
* add docstrings to processor, fe and tokenizers
* make style
* write first version of model docs
* fix FE + correct FE test
* fix tokenizer + add correct integration tests
* fix most tokenization tests
* make style
* correct most processor tests
* add generation tests and fix num_return_sequences > 1
* correct integration tests -still one left
* make style
* correct position embedding
* change num_beams to 1
* refactor some modeling code and correct one test
* make style
* correct typo
* refactor intermediate ffn
* refactor feedforward conformer
* make style
* remove comments
* make style
* fix tokenizer tests
* make style
* correct processor tests
* make style
* correct S2TT integration
* Apply suggestions from Sanchit code review
Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
* correct typo
* replace torch.nn->nn + make style
* change Output naming (waveforms -> waveform) and ordering
* nit renaming and formatting
* remove return None when not necessary
* refactor SeamlessM4TConformerFeedForward
* nit typo
* remove almost copied from comments
* add a copied from comment and remove an unecessary dropout
* remove inputs_embeds from speechencoder
* remove backward compatibility function
* reformat class docstrings for a few components
* remove unnecessary methods
* split something hard to read over 2 lines
* make style
* replace two steps offset by one step as suggested
* nit: typo
* move warnings
* remove useless lines from processor
* make generation non-standard test more robust
* remove torch.inference_mode from tests
* split integration tests
* enrich md
* rename control_symbol_vocoder_offset->vocoder_offset
* clean convert file
* remove tgt_lang and src_lang from FE
* change generate docstring of ToText models
* update generate docstring of tospeech models
* unify how to deal with text_decoder_input_ids
* add default spkr_id
* unify tgt_lang for t2u_model
* simplify tgt_lang verification
* remove a todo
* change config docstring
* make style
* simplify t2u_tgt_lang_id
* make style
* enrich/correct comments
* enrich .md
* correct typo in docstrings
* add torchaudio dependency
* update tokenizer
* make style and fix copies
* modify SeamlessM4TConverter with new tokenizer behaviour
* make style
* correct small typo docs
* fix import
* update docs and add requirement to tests
* add convert_fairseq2_to_hf in utils/not_doctested.txt
* update FE
* fix imports and make style
* remove torchaudio in FE test
* add seamless_m4t.md to utils/not_doctested.txt
* nits and change the way docstring dataset is loaded
* move checkpoints from ylacombe/ to facebook/ orga
* refactor warning/error to be in the 119 line width limit
* round overly precise floats
* add stereo audio behaviour
* refactor .md and make style
* enrich docs with a more precise architecture description
* readd undocumented models
* make fix-copies
* apply some suggestions
* Apply suggestions from code review
Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
* correct bug from previous commit
* refactor a parameter to allow cleaning up the code + some small nits
* clean tokenizer
* make style and fix
* make style
* clean tokenizers arguments
* add clarifications for some tests
* move docs from not_tested to slow
* modify tokenizer according to last comments
* add copied from statements in tests
* correct convert script
* correct parameter docstring style
* correct tokenization
* correct multi-GPU
* make style
* clean modeling code
* make style
* add copied from statements
* add copied statements
* add support with ASR pipeline
* remove file added inadvertently
* fix docstrings seamlessM4TModel
* add seamlessM4TConfig to OBJECTS_TO_IGNORE due to unconventional markdown
* add seamlessm4t to assisted generation ignored models
---------
Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
* In assisted decoding, pass model_kwargs to model's forward call
Previously, assisted decoding would ignore any additional kwargs
that it doesn't explicitly handle. This was inconsistent with other
generation methods, which pass the model_kwargs through
prepare_inputs_for_generation and forward the returned dict to the
model's forward call.
The prepare_inputs_for_generation method needs to be amended in all
models, as previously it only kept the last input ID when past_key_values
was passed.
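A hedged sketch of the prepare_inputs_for_generation amendment described above (simplified; real models also handle attention masks, position ids, etc.):

```python
def prepare_inputs_for_generation(input_ids, past_key_values=None, **model_kwargs):
    if past_key_values is not None:
        # length already covered by the cache: (batch, heads, seq_len, head_dim)
        past_length = past_key_values[0][0].shape[2]
        # keep every token beyond the cache, not just input_ids[:, -1:],
        # since assisted decoding feeds several candidate tokens at once
        input_ids = input_ids[:, past_length:]
    return {"input_ids": input_ids, "past_key_values": past_key_values, **model_kwargs}
```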
* Improve variable names in _extend_attention_mask
* Refactor extending token_type_ids into a function
* Replace deepcopy with copy to optimize performance
* Update new persimmon model with llama changes for assisted generation
* Update new mistral model for assisted generation with prepare_inputs_for_generation
* Update position_ids creation in falcon prepare_inputs_for_generation to support assisted generation
* Fix GPTNeoX beam search when using parallelize
* Fix beam search idx device when using model parallel
* remove onnx related stuff
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
* fix: move test_beam_search_on_multi_gpu to GenerationTesterMixin
* fix: add right item to _no_split_modules of MegaPreTrainedModel
* fix: add num_beams within parallelized beam_search test
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
---------
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Fix issues in test_exponential_decay_length_penalty
Fix tests which were broken and add validation of negative scores.
The current test didn't take into account that ExponentialDecayLengthPenalty updates the scores in place, resulting in updates to the base tested Tensor.
In addition, the gt assert had empty Tensors due to indexing along the batch dimension.
The test is currently expected to fail, to show the ExponentialDecayLengthPenalty issues with negative scores
* Fix ExponentialDecayLengthPenalty negative logits issue
In cases where the scores are negative, ExponentialDecayLengthPenalty decreases the score of eos_token_id instead of increasing it.
To fix this issue we compute the penalty on the absolute value of the score and add it to the original score.
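A hedged sketch of the fix described above (function and parameter names are illustrative):

```python
import torch

def apply_exponential_decay(scores, cur_len, eos_token_id, regulation_start, decay_factor):
    # penalize via the absolute value so negative eos scores are still increased
    if cur_len > regulation_start:
        penalty = torch.abs(scores[:, eos_token_id]) * (
            pow(decay_factor, cur_len - regulation_start) - 1
        )
        scores[:, eos_token_id] = scores[:, eos_token_id] + penalty
    return scores
```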
* Add examples for ExponentialDecayLengthPenalty
* Fix styling issue in ExponentialDecayLengthPenalty doc
* Apply suggestions from code review
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
* Style and quality fix
* Fix example outputs
---------
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
* Fix TypeError: Object of type int64 is not JSON serializable
* Convert numpy.float64 and numpy.int64 to float and int for json serialization
* Black reformatted examples/pytorch/token-classification/run_ner_no_trainer.py
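A hedged sketch of the numpy-to-builtin conversion described above (key names are illustrative):

```python
import json

import numpy as np

def to_builtin(obj):
    # numpy scalars (e.g. np.int64) are not natively JSON serializable
    if isinstance(obj, np.integer):
        return int(obj)
    if isinstance(obj, np.floating):
        return float(obj)
    raise TypeError(f"Object of type {type(obj).__name__} is not JSON serializable")

metrics = {"epoch": np.int64(3), "eval_f1": np.float64(0.87)}
print(json.dumps(metrics, default=to_builtin))
```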
* make style
* Replace python random with torch.rand to enable dynamo.export
* revert changes to flax model code
* Remove unused random import
* Fix torch template
* Move torch.manual_seed(0) to right location
* Rework TF type hints to use | None instead of Optional[] for tf.Tensor
* Rework TF type hints to use | None instead of Optional[] for tf.Tensor
* Don't forget the imports
* Add the imports to tests too
* make fixup
* Refactor tests that depended on get_type_hints
* Better test refactor
* Fix an old hidden bug in the test_keras_fit input creation code
* Fix for the Deit tests
* time to say goodbye, torch 1.7 and 1.8
* clean up torch_int_div
* clean up is_torch_less_than_1_8-9
* update
---------
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
* rounding_mode = "floor" instead of // to prevent behavioral change
* add other TODO
* use `torch_int_div` from pytrch_utils
* same for tests
* fix copies
* style
* use relative imports when needed
Co-authored-by: sgugger <sylvain.gugger@gmail.com>
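For the rounding_mode commit above, a small illustration of why "floor" matches the old // behavior on negative values:

```python
import torch

a, b = torch.tensor([7, -7]), torch.tensor([2, 2])
print(a // b)                                  # tensor([ 3, -4])
print(torch.div(a, b, rounding_mode="floor"))  # tensor([ 3, -4]), same result
```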
* add tests with multiple eos_token_ids
* make math.prod instead of sum
* make fixup
* fix long and also use np.prod since math.prod does not exist before Python 3.8
* make fixup
* add prod util
* use prod util instead of np.prod
* make fixup
* previous .long location
* use tensor ops
* remove prod
* remove prod
* update device
* make fixup
* fix none
* Result of black 23.1
* Update target to Python 3.7
* Switch flake8 to ruff
* Configure isort
* Configure isort
* Apply isort with line limit
* Put the right black version
* adapt black in check copies
* Fix copies
* add additional kwargs handling
* fix issue when serializing
* correct order of kwargs removal for serialization in from dict
* add `dict_torch_dtype_to_str` in case a dtype is needed for generation
* add condition when adding the kwargs : not from config
* Add comment based on review
Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
* add test function
* default None when popping arg
Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
* Add epsilon- and eta-sampling.
Add epsilon- and eta-sampling, following the official code from https://github.com/john-hewitt/truncation-sampling and adapting it to be more configurable, as required by Hugging Face transformers.
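A hedged usage sketch: in transformers these samplers are exposed through the epsilon_cutoff and eta_cutoff generate() arguments (model id and cutoff values are illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
inputs = tokenizer("The meaning of life is", return_tensors="pt")

# epsilon sampling: drop tokens whose probability is below epsilon_cutoff
out = model.generate(**inputs, do_sample=True, epsilon_cutoff=3e-4, max_new_tokens=20)

# eta sampling: entropy-adaptive cutoff derived from eta_cutoff
out = model.generate(**inputs, do_sample=True, eta_cutoff=3e-4, max_new_tokens=20)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```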
* Add unit tests for epsilon- and eta-sampling.
* Black: fix code formatting.
* Fix docstring spacing.
* Clean up newlines.
* Fix implementation bugs and their associated tests.
* Remove epsilon- and eta-sampling parameters from PretrainedConfig.
* Clarify and clean up the documentation.
* Remove parameters for PretrainedConfig test.
* Add StopIdStoppingCriteria
* add a working test for stop id criteria
* add to global scope
* add stop_ids to generate
* add pipeline test
* use tokenizer encode in test
* add test to generation utils
* reformat
* fixup
* make-fix-copies
* rename to stop_token_id
* use stop_tokens instead
* add to text to text generation
* make fixup
* make repo-consistency
* Add support for list of ints for eos_token_id inside generation/utils.py
* Instead of having if elses, cast the eos_token_id into a List[int]
* Add List[int] support for logits_process.py
* add List[int] for beam_search.py
* add List[int] for forced_eos_token_id
* revert stop token id stopping criteria changes
* make fixup
* fix tests
* add eos_token_id to generation/utils.py and added tests test_utils.py
* add eos_token_id type hints and fix for pad tokens
* add comments
* remove some prints and remove forced false test
* fix
* put back test_stop_sequence_stopping_criteria
* remove unused import and make fixup
* add a none check
* update docstring
* add more docstring for list ints
* make fixup
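A hedged usage sketch of the List[int] eos_token_id support added in the commits above (model id and token choices are illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
inputs = tokenizer("Hello, my name is", return_tensors="pt")

# generation stops as soon as *any* id in the list is produced
eos_ids = [tokenizer.eos_token_id, tokenizer(".", add_special_tokens=False).input_ids[0]]
out = model.generate(**inputs, max_new_tokens=30, eos_token_id=eos_ids)
print(tokenizer.decode(out[0]))
```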
* generate from config mvp
* fix failing tests
* max_time test
* Load default gen config at model load time; Update docs
* further documentation; add tests
* adapt rag to the new structure
* handle models not instantiated with from_pretrained (like in tests)
* better default generation config
* add can_generate fn
* handle legacy use case of ad hoc model config changes
* initialize gen config from config in individual methods, if gen config is none
* fix _get_decoder_start_token_id when called outside GenerationMixin
* correct model config load order (set attr > model config > decoder config)
* update rag to match latest changes
* Apply suggestions from code review
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* load gen config from model config in model.from_pretrained
* fix can_generate fn
* handle generate calls without a previous from_pretrained (e.g. tests)
* add legacy behavior (and a warning)
* lower logger severity
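A hedged sketch of the behavior described in the commits above: the default GenerationConfig is loaded together with the model and can be replaced ad hoc (model id is illustrative):

```python
from transformers import AutoModelForCausalLM, GenerationConfig

model = AutoModelForCausalLM.from_pretrained("gpt2")
# the default generation config is loaded at from_pretrained time
print(model.generation_config)

# ad hoc changes are still possible; kwargs passed to generate() take precedence
model.generation_config = GenerationConfig(max_new_tokens=32, do_sample=True)
```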
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* move generation_*.py src files into generation/*.py
* populate generation.__init__ with lazy loading
* move imports and references from generation.xxx.object to generation.object
* add: the contrastive search for generation_utils
* add: testing scripts for contrastive search under examples/text-generation
* update the quality of the code
* revise the docstring; make the generation_contrastive_search.py scripts;
* revise the examples/pytorch/text-generation/run_generation_contrastive_search.py to the auto-APIs format
* revise the necessary documents
* fix: revise the docstring of generation_contrastive_search.py
* Fix the code indentation
* fix: revise the nits and examples in contrastive_search docstring.
* fix the copyright
* delete generation_contrastive_search.py
* revise the logic in contrastive_search
* update the integration test and the docstring
* run the tests over
* add the slow decorator to the contrastive_search integration test
* add more test
* do the style, quality, consistency checks
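A hedged usage sketch of contrastive search as integrated in the commits above (model id and prompt are illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
inputs = tokenizer("DeepMind Company is", return_tensors="pt")

# penalty_alpha > 0 together with top_k > 1 triggers contrastive search
out = model.generate(**inputs, penalty_alpha=0.6, top_k=4, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```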
* init PR
* optimize top p and add edge case
* styling
* style
* revert tf and flax test
* add edge case test for FLAX and TF
* update doc with smallest set sampling for top p
* make style
- Fix `top_k_top_p_filtering` not passing `filter_value` to
`TopPLogitsWarper`, causing any top-p filtered logits to be -inf
instead of the specified value
- Add corresponding test
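A hedged sketch of the fixed wiring described above (a simplified stand-in for the helper, using the public warpers):

```python
import torch
from transformers import TopKLogitsWarper, TopPLogitsWarper

def top_k_top_p_filtering(logits, top_k=0, top_p=1.0, filter_value=-float("inf"), min_tokens_to_keep=1):
    if top_k > 0:
        logits = TopKLogitsWarper(top_k=top_k, filter_value=filter_value, min_tokens_to_keep=min_tokens_to_keep)(None, logits)
    if top_p < 1.0:
        # the original bug: filter_value was dropped here, so filtered logits
        # became -inf regardless of the value the caller asked for
        logits = TopPLogitsWarper(top_p=top_p, filter_value=filter_value, min_tokens_to_keep=min_tokens_to_keep)(None, logits)
    return logits

logits = torch.randn(1, 50257)
filtered = top_k_top_p_filtering(logits, top_k=50, top_p=0.9, filter_value=-1e4)
```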
* add the possibility to softly regulate length when using sampling methods in the model.generate() function
* fix test config, fix formatting
* fix rag integration, fix docstyling
* fix wrong docstring
* change param to tuple, add test
* fix old param in rag_model, remove unused import
* change test according to new param
* fix formatting
* fix test case
* fix doc style
* move start_length calculation to LogitsProcessor
* remove unused import
* fix small errors
* fix test
* Update src/transformers/generation_utils.py
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Update src/transformers/generation_utils.py
* Update src/transformers/generation_utils.py
* fix docstring, add type in model rag
* fix docstrings
* introduce seq_length variable for cleaner code
* fix black formatting
* add input_ids_seq_length to modeling_rag
* add input_ids_seq_length to test
* retrigger checks
* retrigger checks
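A hedged usage sketch of the soft length regulation added in this PR; the argument takes a (start_index, decay_factor) tuple (model id and values are illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
inputs = tokenizer("A short story:", return_tensors="pt")

# from 16 generated tokens on, the eos score is exponentially boosted,
# softly nudging sampling to wrap up
out = model.generate(
    **inputs,
    do_sample=True,
    max_new_tokens=64,
    exponential_decay_length_penalty=(16, 1.05),
)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```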
Co-authored-by: Kevin Bondzio <kev@AIM-LAP-02.local>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
Co-authored-by: Kevin Bondzio <kev@AIM-LAP-02.fritz.box>
* added classes to get started with constrained beam search
* in progress, think I can directly force tokens now but not yet with the round robin
* think now I have total control, now need to code the bank selection
* technically works as desired, need to optimize and fix design choices leading to undesirable outputs
* complete PR #1 without disjunctive decoding
* removed incorrect tests
* Delete k.txt
* Delete test.py
* Delete test.sh
* revert changes to test scripts
* genutils
* full implementation with testing, no disjunctive yet
* shifted docs
* passing all tests when realistically run locally
* removing accidentally included print statements
* fixed source of error in initial PR test
* fixing the get_device() vs device trap
* fixed documentation docstrings about constrained_beam_search
* fixed tests failing for Speech2TextModel's floating point inputs
* fix cuda long tensor
* added examples and testing for them and found & fixed a bug in beam_search and constrained_beam_search
* deleted accidentally added test halting code with assert False
* code reformat
* Update tests/test_generation_utils.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update tests/test_generation_utils.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update tests/test_generation_utils.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update tests/test_generation_utils.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update tests/test_generation_utils.py
* fixing based on comments on PR
* took out the testing code that should work but fails without the beam search modification; style changes
* fixing comments issues
* docstrings for ConstraintListState
* typo in PhrasalConstraint docstring
* docstrings improvements
* finished adding what is sort of an opinionated implementation of disjunctive generation, but it revealed errors in the inner beam search logic during testing.
* fixed bug found in constrained beam search that used beam_idx that were not global across all the batches
* disjunctive constraint working 100% correctly
* passing all tests
* Accidentally included mlruns
* Update src/transformers/generation_beam_constraints.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update src/transformers/generation_beam_constraints.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* complete overhaul of type complexities and other nits
* strict type checks in generate()
* fixing second round of feedback by narsil
* fixed failing generation test because of type check overhaul
* generation test fail fix
* fixing test fails
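A hedged usage sketch of the constrained beam search added in this PR (model id and forced phrase are illustrative; constraints require beam search, i.e. num_beams > 1):

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, PhrasalConstraint

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")
inputs = tokenizer("translate English to German: How old are you?", return_tensors="pt")

# force the phrase to appear somewhere in the generated output
constraint = PhrasalConstraint(tokenizer("Sie", add_special_tokens=False).input_ids)
out = model.generate(**inputs, constraints=[constraint], num_beams=5, max_new_tokens=30)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```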
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Add TF logits wrappers
* Add sample method
* add tests for TF logit wrappers
* TF generate sample tests now run on CPU
Co-authored-by: Matt <Rocketknight1@users.noreply.github.com>