<!---
Copyright 2022 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
# Token classification with LayoutLMv3 (PyTorch version)
This directory contains a script, `run_funsd_cord.py`, that can be used to fine-tune (or evaluate) LayoutLMv3 on form understanding datasets, such as [FUNSD](https://guillaumejaume.github.io/FUNSD/) and [CORD](https://github.com/clovaai/cord).
The script `run_funsd_cord.py` leverages the 🤗 Datasets library and the Trainer API. You can easily customize it to your needs.
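The gist of the script is the standard 🤗 recipe: load a LayoutLMv3-ready dataset, encode words, boxes, images and labels with the processor, and hand everything to the `Trainer`. Below is a minimal sketch of that recipe, not the script itself: the dataset id, column names (`tokens`, `bboxes`, `ner_tags`, `image`) and split names are assumptions for illustration and may differ from what `run_funsd_cord.py` actually uses.

```python
# Minimal sketch of the fine-tuning recipe; not a drop-in replacement for
# run_funsd_cord.py. Dataset id, column names and split names are assumptions.
from datasets import load_dataset
from transformers import (
    AutoProcessor,
    LayoutLMv3ForTokenClassification,
    Trainer,
    TrainingArguments,
)

# Hypothetical FUNSD-style dataset with words, boxes, labels and page images.
dataset = load_dataset("nielsr/funsd-layoutlmv3")
label_names = dataset["train"].features["ner_tags"].feature.names

# apply_ocr=False: the dataset already provides words and bounding boxes.
processor = AutoProcessor.from_pretrained("microsoft/layoutlmv3-base", apply_ocr=False)
model = LayoutLMv3ForTokenClassification.from_pretrained(
    "microsoft/layoutlmv3-base", num_labels=len(label_names)
)

def encode(batch):
    # The processor tokenizes the words, aligns word-level labels with
    # subword tokens and prepares pixel values for the visual backbone.
    return processor(
        batch["image"],
        batch["tokens"],
        boxes=batch["bboxes"],
        word_labels=batch["ner_tags"],
        truncation=True,
        padding="max_length",
    )

encoded = dataset.map(encode, batched=True, remove_columns=dataset["train"].column_names)
encoded.set_format("torch")

args = TrainingArguments(
    output_dir="layoutlmv3-test",
    max_steps=1000,
    learning_rate=1e-5,
    evaluation_strategy="steps",
    eval_steps=100,
)
trainer = Trainer(
    model=model,
    args=args,
    train_dataset=encoded["train"],
    eval_dataset=encoded["test"],
)
trainer.train()
```

In practice, the commands below give you the same training loop plus evaluation, metrics and hub uploads without writing any code.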
## Fine-tuning on FUNSD
Fine-tuning LayoutLMv3 for token classification on [FUNSD](https://guillaumejaume.github.io/FUNSD/) can be done as follows:
```bash
python run_funsd_cord.py \
--model_name_or_path microsoft/layoutlmv3-base \
--dataset_name funsd \
--output_dir layoutlmv3-test \
--do_train \
--do_eval \
--max_steps 1000 \
--evaluation_strategy steps \
--eval_steps 100 \
--learning_rate 1e-5 \
--load_best_model_at_end \
--metric_for_best_model "eval_f1" \
--push_to_hub \
--push_to_hub_model_id layoutlmv3-finetuned-funsd
```
👀 The resulting model can be found here: https://huggingface.co/nielsr/layoutlmv3-finetuned-funsd. By specifying the `push_to_hub` flag, the model is uploaded to the hub at regular intervals during training, together with a model card that includes metrics such as precision, recall and F1. Note that you can easily update the model card, as it's just a README file of the respective repo on the hub.
There's also the "Training metrics" [tab](https://huggingface.co/nielsr/layoutlmv3-finetuned-funsd/tensorboard), which shows TensorBoard logs over the course of training. Pretty neat, huh?
## Fine-tuning on CORD
Fine-tuning LayoutLMv3 for token classification on [CORD](https://github.com/clovaai/cord) can be done as follows:
```bash
python run_funsd_cord.py \
--model_name_or_path microsoft/layoutlmv3-base \
--dataset_name cord \
--output_dir layoutlmv3-test \
--do_train \
--do_eval \
--max_steps 1000 \
--evaluation_strategy steps \
--eval_steps 100 \
--learning_rate 5e-5 \
--load_best_model_at_end \
--metric_for_best_model "eval_f1" \
--push_to_hub \
--push_to_hub_model_id layoutlmv3-finetuned-cord
```
👀 The resulting model can be found here: https://huggingface.co/nielsr/layoutlmv3-finetuned-cord. Note that a model card is generated automatically if you specify the `push_to_hub` flag.
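
Once a fine-tuned checkpoint is on the hub, you can try it on a document image with a few lines of Python. The sketch below is an illustrative example, not part of the example script: it relies on the base processor's default OCR (which requires Tesseract and `pytesseract` to be installed), uses a placeholder image path, and assumes the fine-tuned checkpoint stores its label names in the config.

```python
# Quick inference sketch with a fine-tuned checkpoint. Assumes Tesseract +
# pytesseract are installed (the base processor runs OCR by default) and
# "document.png" is a placeholder path to a scanned document image.
import torch
from PIL import Image
from transformers import AutoProcessor, LayoutLMv3ForTokenClassification

processor = AutoProcessor.from_pretrained("microsoft/layoutlmv3-base")  # apply_ocr=True by default
model = LayoutLMv3ForTokenClassification.from_pretrained("nielsr/layoutlmv3-finetuned-cord")

image = Image.open("document.png").convert("RGB")
encoding = processor(image, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**encoding).logits  # (batch_size, sequence_length, num_labels)

predicted_ids = logits.argmax(-1).squeeze().tolist()
# Assumes the checkpoint ships human-readable label names in its config.
print([model.config.id2label[i] for i in predicted_ids])
```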