* fix compatibility
* working version
* cleanup
* sanity checks
* more sanity
* working version WITH refactor
* working without API change
* cleanup & tests pass
* more cleaning
* fix test
* fix tests
* Update src/transformers/generation/utils.py
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
* smaller comment
* update comment
* update comment
---------
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
* draft processor arg capture
* add missing vivit model
* add new common test for image preprocess signature
* fix quality
* fix up
* add back missing validations
* quality
* move log level from info to warning for unused kwargs
* Revert "Add tie_weights() to LM heads and set bias in set_output_embeddings() (#28948)"
This reverts commit 725f4ad1cc.
* Revert "Patch to skip failing `test_save_load_low_cpu_mem_usage` tests (#29043)"
This reverts commit 4156f517ce.
* add add_dummy_prefix_space option to slow (see the sketch after this list)
* checking kwargs might be better. Should be there for all spm tokenizers IMO
* nits
* fix copies
* more copied
* nits
* add prefix space
* nit
* nits
* Update src/transformers/convert_slow_tokenizer.py
* fix init
* revert wrong styling
* fix
* nits
* style
* updates
* make sure we use slow tokenizer for conversion instead of looking for the decoder
* support llama as well
* update llama tokenizer fast
* nits
* nits nits nits
* update the doc
* update
* update to fix tests
* skip unrelated failing test
* Update src/transformers/convert_slow_tokenizer.py
* add proper testing
* test decode as well
* more testing
* format
* fix llama test
* Apply suggestions from code review
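The list above adds a way to control the SentencePiece dummy prefix space on the slow Llama tokenizer (and keeps the fast tokenizer in sync). A minimal sketch of how this could look to users, assuming the option surfaces as the `add_prefix_space` kwarg; the model id is illustrative:

```python
# Minimal sketch: toggling the SPM dummy prefix space on the slow Llama tokenizer.
# Assumption: the option is exposed as the `add_prefix_space` kwarg; the model id
# below is illustrative, not taken from the PR.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained(
    "huggyllama/llama-7b", use_fast=False, add_prefix_space=False
)
# With add_prefix_space=False, the tokenizer should not prepend the SPM
# dummy prefix token ("▁") to the input.
print(tok.tokenize("Hello"))
```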
* Fixed nll with label_smoothing to nll
* Resolved conflict by rebase
* Added label_smoothing to config file
* Fixed nits
* generated text on A10G
* generated text in CI
* Apply suggestions from code review
add explanatory comments
Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>
---------
Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>
The output_logits option behaves like output_scores, but returns the raw, unprocessed prediction logits, i.e. the values before they undergo logit processing and/or warping (which happens by default for the regular output scores).
It's useful to have the unprocessed logits in certain circumstances. For example, they are very useful with CausalLM models when one wants to determine the probability of a certain answer, e.g. when asking a question with a yes/no answer. In that case the next-token probabilities of both "yes" and "no" (and/or their relative ratio) are of interest for classification. The reason for taking these _before_ logit processing and/or warping is that those steps can (a) change the probabilities or (b) reject the tokens of interest / reduce the candidate tokens to just one.
For an example use case, see the paper "TabLLM: Few-shot Classification of Tabular Data with Large Language Models" by Stefan Hegselmann, Alejandro Buendia, Hunter Lang, Monica Agrawal, Xiaoyi Jiang, and David Sontag:
https://arxiv.org/abs/2210.10723
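A minimal sketch of this yes/no use case (the model id and prompt are illustrative, not from the PR):

```python
# Minimal sketch: scoring a yes/no answer from the raw (unprocessed) logits.
# The model id and prompt are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("Is the sky blue? Answer yes or no:", return_tensors="pt")

out = model.generate(
    **inputs,
    max_new_tokens=1,
    return_dict_in_generate=True,
    output_logits=True,   # raw logits, before processors/warpers
    output_scores=True,   # processed scores, for comparison
)

# out.logits holds one (batch, vocab_size) tensor per generated token.
probs = torch.softmax(out.logits[0], dim=-1)
yes_id = tokenizer(" yes", add_special_tokens=False).input_ids[0]
no_id = tokenizer(" no", add_special_tokens=False).input_ids[0]
print("P(yes):", probs[0, yes_id].item(), "P(no):", probs[0, no_id].item())
```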
In addition:
- added a dedicated unit test, tests/generation/test_utils/test_return_unprocessed_logit_scores,
which tests the return of logits with output_logits=True in generation.
- set output_logits=True in all other generation unit tests that also have output_scores=True.
Implemented @gante's and @amyeroberts's review feedback
Co-authored-by: kx79wq <max.baak@ing.com>
* change version
* nuke
* this doesn't make sense
* update some requirements.py
* revert + no main
* nits
* change cache number
* more pin
* revert
---------
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
The link in the evaluation doc was missing a hyphen between "post" and "processing". I fixed this for English only; someone with the ability to do a global search/replace should fix the other languages (if indeed they have this issue).
* Add task_summary to es/_toctree.yml
* Add task_summary.md to docs/es
* Change title of task_summary.md
* Translate first paragraphs
* Translate middle paragraphs
* Translate the rest of the doc
* Edit first paragraph
* Add chat support to text generation pipeline (see the sketch after this list)
* Better handling of single elements
* Deprecate ConversationalPipeline
* stash commit
* Add missing add_special_tokens kwarg
* Update chat templating docs to refer to TextGenerationPipeline instead of ConversationalPipeline
* Add ✨TF✨ tests
* @require_tf
* Add type hint
* Add specific deprecation version
* Remove unnecessary do_sample
* Remove todo - the discrepancy has been resolved
* Update src/transformers/tokenization_utils_base.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update src/transformers/pipelines/text_generation.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
---------
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
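A minimal sketch of the new chat input path through TextGenerationPipeline (the model id is illustrative):

```python
# Minimal sketch: passing a chat (list of role/content dicts) to the
# text-generation pipeline, which applies the model's chat template.
# The model id is illustrative.
from transformers import pipeline

pipe = pipeline("text-generation", model="HuggingFaceH4/zephyr-7b-beta")
chat = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is the capital of France?"},
]
out = pipe(chat, max_new_tokens=32)
# generated_text is the input chat, extended with the assistant's reply.
print(out[0]["generated_text"][-1]["content"])
```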
* pass through trust_remote_code for dynamically loading unregistered tokenizers specified by config (see the sketch below)
add test
* change directories back to previous directory after test
* fix ruff check
* Add a note to that block for future in case we want to remove it later
---------
Co-authored-by: Matt <rocketknight1@gmail.com>
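A minimal sketch of what the change enables (the repo id is hypothetical):

```python
# Minimal sketch: trust_remote_code now propagates when a config points at a
# custom, unregistered tokenizer class defined in the repo's own code.
# The repo id is hypothetical.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("some-org/custom-model", trust_remote_code=True)
```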