
# Plug and Play Language Models: a Simple Approach to Controlled Text Generation

Authors: Sumanth Dathathri, Andrea Madotto, Janice Lan, Jane Hung, Eric Frank, Piero Molino, Jason Yosinski, and Rosanne Liu

This folder contains the original code used to run the Plug and Play Language Model (PPLM).

Paper link: https://arxiv.org/abs/1912.02164

Blog link: https://eng.uber.com/pplm

Please check out the repo under uber-research for more information: https://github.com/uber-research/PPLM

## Setup

```bash
git clone https://github.com/huggingface/transformers && cd transformers
pip install .
pip install nltk torchtext  # additional requirements
cd examples/pplm
```
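
Optionally, a quick sanity check that the install succeeded (the printed version will depend on your checkout):

```bash
# Verify that the package and the extra requirements import cleanly.
python -c "import transformers; print(transformers.__version__)"
python -c "import nltk, torchtext; print('extras OK')"
```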

## PPLM-BoW

Example command for bag-of-words control:

```bash
python run_pplm.py -B military --cond_text "The potato" --length 50 --gamma 1.5 --num_iterations 3 --num_samples 10 --stepsize 0.03 --window_length 5 --kl_scale 0.01 --gm_scale 0.99 --colorama --sample
```
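
Beyond built-in topics such as `military`, the `-B`/`--bag_of_words` flag also appears to accept a filepath to a custom word list, one word per line (check the `--bag_of_words` help text in `run_pplm.py`); the file name `space.txt` and its contents below are illustrative:

```bash
# Hypothetical custom bag of words: one word per line (file name is illustrative).
printf "rocket\nplanet\norbit\nastronaut\n" > space.txt

# Pass the file path in place of a built-in topic name.
python run_pplm.py -B space.txt --cond_text "The potato" --length 50 \
    --gamma 1.5 --num_iterations 3 --num_samples 10 --stepsize 0.03 \
    --window_length 5 --kl_scale 0.01 --gm_scale 0.99 --colorama --sample
```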

### Tuning hyperparameters for bag-of-words control

1. Increase `--stepsize` to intensify topic control; decrease it to soften the control. `--stepsize 0` recovers the original, uncontrolled GPT-2 model.

2. If the generated text is repetitive (e.g. "science science experiment experiment"), there are several options to consider (see the example after this list):
   a) Reduce the `--stepsize`.
   b) Increase `--kl_scale` (the KL-loss coefficient) or decrease `--gm_scale` (the geometric mean fusion scaling term).
   c) Add `--grad_length xx`, where xx is an integer <= length (e.g. `--grad_length 30`).
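
As a concrete starting point for (a)-(c), the BoW command above could be softened like this (the values are illustrative, not tuned):

```bash
# Softer topic control: smaller step size, stronger KL penalty, weaker
# geometric-mean fusion, gradients only on the first 30 generated tokens.
python run_pplm.py -B military --cond_text "The potato" --length 50 \
    --gamma 1.5 --num_iterations 3 --num_samples 10 --stepsize 0.01 \
    --window_length 5 --kl_scale 0.02 --gm_scale 0.95 --grad_length 30 \
    --colorama --sample
```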

## PPLM-Discrim

Example command for discriminator-based sentiment control:

```bash
python run_pplm.py -D sentiment --class_label 2 --cond_text "My dog died" --length 50 --gamma 1.0 --num_iterations 10 --num_samples 10 --stepsize 0.04 --kl_scale 0.01 --gm_scale 0.95 --sample
```

### Tuning hyperparameters for discriminator control

1. Increase `--stepsize` to intensify attribute control; decrease it to soften the control. `--stepsize 0` recovers the original, uncontrolled GPT-2 model.

2. Use `--class_label 3` for negative sentiment and `--class_label 2` for positive sentiment, as in the example after this list.
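
For example, steering the run above toward negative sentiment only requires swapping the class label; all other flags are unchanged:

```bash
# Same configuration as the example above, but targeting negative sentiment.
python run_pplm.py -D sentiment --class_label 3 --cond_text "My dog died" \
    --length 50 --gamma 1.0 --num_iterations 10 --num_samples 10 \
    --stepsize 0.04 --kl_scale 0.01 --gm_scale 0.95 --sample
```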