
## Multiple Choice

Based on the script `run_swag.py`.

### Fine-tuning on SWAG

```bash
python examples/multiple-choice/run_swag.py \
  --model_name_or_path roberta-base \
  --do_train \
  --do_eval \
  --learning_rate 5e-5 \
  --num_train_epochs 3 \
  --output_dir /tmp/swag_base \
  --per_device_eval_batch_size=16 \
  --per_device_train_batch_size=16 \
  --overwrite_output_dir
```

Training with the defined hyper-parameters yields the following results:

```
***** Eval results *****
eval_acc = 0.8338998300509847
eval_loss = 0.44457291918821606
```
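For reference, SWAG is a multiple-choice task: each example pairs one context with four candidate endings, and the model emits one logit per candidate. A minimal sketch of how per-choice logits become a prediction (plain Python, with hypothetical scores, independent of the script itself):

```python
import math

def predict_choice(logits):
    """Return the index of the highest-scoring candidate ending
    and its softmax probability."""
    # Subtract the max logit for numerical stability before exponentiating.
    exps = [math.exp(x - max(logits)) for x in logits]
    probs = [e / sum(exps) for e in exps]
    best = max(range(len(logits)), key=lambda i: logits[i])
    return best, probs[best]

# Hypothetical per-ending logits for a single SWAG example.
choice, prob = predict_choice([0.2, 2.1, -0.5, 0.4])
print(choice)  # → 1
```

The `eval_acc` above is simply the fraction of examples whose predicted index matches the labeled ending.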

### TensorFlow

```bash
export SWAG_DIR=/path/to/swag_data_dir
python ./examples/multiple-choice/run_tf_multiple_choice.py \
  --task_name swag \
  --model_name_or_path bert-base-cased \
  --do_train \
  --do_eval \
  --data_dir $SWAG_DIR \
  --learning_rate 5e-5 \
  --num_train_epochs 3 \
  --max_seq_length 80 \
  --output_dir models_bert/swag_base \
  --per_device_eval_batch_size=16 \
  --per_device_train_batch_size=16 \
  --logging_dir logs \
  --gradient_accumulation_steps 2 \
  --overwrite_output_dir
```
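Under the hood, multiple-choice scripts typically flatten each example's candidate endings into separate (context, ending) sequences for the tokenizer, then regroup the per-sequence logits into one row per example before computing the loss. A rough sketch of that reshaping (the helper names here are made up for illustration, not taken from the scripts):

```python
def flatten_choices(batch):
    """Expand each (context, endings) example into flat (context, ending)
    pairs — the shape a tokenizer would encode sequence by sequence."""
    return [(ctx, end) for ctx, endings in batch for end in endings]

def unflatten_logits(flat_logits, num_choices=4):
    """Regroup one logit per flattened sequence into rows of num_choices,
    i.e. the (batch_size, num_choices) view the classification loss uses."""
    return [flat_logits[i:i + num_choices]
            for i in range(0, len(flat_logits), num_choices)]

batch = [("A woman walks onto the stage.",
          ["She bows.", "She sits down.", "She leaves.", "She sings."])]
pairs = flatten_choices(batch)
print(len(pairs))  # → 4, one sequence per candidate ending
```

This is why `max_seq_length` applies per (context, ending) pair rather than per example.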

### Run it in Colab

Open In Colab