<!--Copyright 2022 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# REALM

## Overview

The REALM model was proposed in [REALM: Retrieval-Augmented Language Model Pre-Training](https://arxiv.org/abs/2002.08909)
by Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat and Ming-Wei Chang. It's a retrieval-augmented language model
that first retrieves documents from a textual knowledge corpus and then utilizes the retrieved documents to perform
question answering tasks.

The abstract from the paper is the following:

*Language model pre-training has been shown to capture a surprising amount of world knowledge, crucial for NLP tasks
such as question answering. However, this knowledge is stored implicitly in the parameters of a neural network,
requiring ever-larger networks to cover more facts. To capture knowledge in a more modular and interpretable way, we
augment language model pre-training with a latent knowledge retriever, which allows the model to retrieve and attend
over documents from a large corpus such as Wikipedia, used during pre-training, fine-tuning and inference. For the
first time, we show how to pre-train such a knowledge retriever in an unsupervised manner, using masked language
modeling as the learning signal and backpropagating through a retrieval step that considers millions of documents. We
demonstrate the effectiveness of Retrieval-Augmented Language Model pre-training (REALM) by fine-tuning on the
challenging task of Open-domain Question Answering (Open-QA). We compare against state-of-the-art models for both
explicit and implicit knowledge storage on three popular Open-QA benchmarks, and find that we outperform all previous
methods by a significant margin (4-16% absolute accuracy), while also providing qualitative benefits such as
interpretability and modularity.*

This model was contributed by [qqaatw](https://huggingface.co/qqaatw). The original code can be found
[here](https://github.com/google-research/language/tree/master/language/realm).

## RealmConfig

[[autodoc]] RealmConfig
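
Like other configuration classes in the library, [`RealmConfig`] can be instantiated on its own to build a randomly
initialized model. A minimal sketch (by default the configuration roughly matches the
`google/realm-cc-news-pretrained-embedder` architecture):

```python
from transformers import RealmConfig, RealmEmbedder

# Initialize a REALM configuration with default values
configuration = RealmConfig()

# Randomly initialize an embedder from that configuration
model = RealmEmbedder(configuration)

# Access the model configuration
configuration = model.config
```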
## RealmTokenizer

[[autodoc]] RealmTokenizer
- build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- save_vocabulary
- batch_encode_candidates
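
Compared with the regular `__call__`, `batch_encode_candidates` encodes a batch of candidate lists and always pads and
truncates to `max_length`, so that all candidates share one shape. A minimal sketch, using the
`google/realm-cc-news-pretrained-encoder` checkpoint for illustration:

```python
from transformers import RealmTokenizer

tokenizer = RealmTokenizer.from_pretrained("google/realm-cc-news-pretrained-encoder")

# batch_size = 2, num_candidates = 2
text = [["Hello world!", "Nice to meet you!"], ["The cute cat.", "The adorable dog."]]

# Each returned tensor has shape (batch_size, num_candidates, max_length)
tokenized_text = tokenizer.batch_encode_candidates(text, max_length=10, return_tensors="pt")
```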
## RealmRetriever

[[autodoc]] RealmRetriever
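
The retriever bundles the pre-computed evidence block records together with a tokenizer: given retrieved block ids it
assembles the reader inputs and marks which blocks contain the gold answer. A minimal loading sketch (the
`block_records` attribute follows the original implementation and is shown here for illustration):

```python
from transformers import RealmRetriever

# Load the retriever together with its pre-computed evidence block records
retriever = RealmRetriever.from_pretrained("google/realm-orqa-nq-openqa")

# block_records holds the raw evidence blocks the searcher retrieves from
print(len(retriever.block_records))
```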
## RealmEmbedder

[[autodoc]] RealmEmbedder
- forward
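
The embedder produces the dense vectors used for retrieval; its `projected_score` output is a
`retriever_proj_size`-dimensional embedding of the input text. A minimal sketch:

```python
from transformers import RealmEmbedder, RealmTokenizer

tokenizer = RealmTokenizer.from_pretrained("google/realm-cc-news-pretrained-embedder")
model = RealmEmbedder.from_pretrained("google/realm-cc-news-pretrained-embedder")

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)

# Embedding of shape (batch_size, retriever_proj_size), used for relevance scoring
projected_score = outputs.projected_score
```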
## RealmScorer

[[autodoc]] RealmScorer
- forward
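
The scorer embeds a batch of questions together with a batch of candidate documents and returns a relevance score for
every (question, candidate) pair. A minimal sketch; note the candidates are tokenized with `batch_encode_candidates`
and fed through the `candidate_*` arguments:

```python
from transformers import RealmScorer, RealmTokenizer

tokenizer = RealmTokenizer.from_pretrained("google/realm-cc-news-pretrained-scorer")
model = RealmScorer.from_pretrained("google/realm-cc-news-pretrained-scorer", num_candidates=2)

# batch_size = 2, num_candidates = 2
input_texts = ["How are you?", "What is the item in the picture?"]
candidates_texts = [["Hello world!", "Nice to meet you!"], ["A cute cat.", "An adorable dog."]]

inputs = tokenizer(input_texts, padding=True, return_tensors="pt")
candidates_inputs = tokenizer.batch_encode_candidates(candidates_texts, max_length=10, return_tensors="pt")

outputs = model(
    **inputs,
    candidate_input_ids=candidates_inputs.input_ids,
    candidate_attention_mask=candidates_inputs.attention_mask,
    candidate_token_type_ids=candidates_inputs.token_type_ids,
)

# Relevance scores of shape (batch_size, num_candidates)
relevance_score = outputs.relevance_score
```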
## RealmKnowledgeAugEncoder

[[autodoc]] RealmKnowledgeAugEncoder
- forward
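
The knowledge-augmented encoder runs masked language modeling over inputs that concatenate a question with each
retrieved candidate (marginalizing over candidates via their relevance scores during training). A minimal sketch of a
forward pass without labels:

```python
from transformers import RealmKnowledgeAugEncoder, RealmTokenizer

tokenizer = RealmTokenizer.from_pretrained("google/realm-cc-news-pretrained-encoder")
model = RealmKnowledgeAugEncoder.from_pretrained(
    "google/realm-cc-news-pretrained-encoder", num_candidates=2
)

# batch_size = 2, num_candidates = 2
text = [["Hello world!", "Nice to meet you!"], ["The cute cat.", "The adorable dog."]]

inputs = tokenizer.batch_encode_candidates(text, max_length=10, return_tensors="pt")
outputs = model(**inputs)

# Masked language modeling logits over the vocabulary
logits = outputs.logits
```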
## RealmReader

[[autodoc]] RealmReader
- forward
## RealmForOpenQA

[[autodoc]] RealmForOpenQA
- forward
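
[`RealmForOpenQA`] chains the embedder, the searcher over the block records, and the reader into one end-to-end
open-domain QA model. A minimal sketch using the fine-tuned Natural Questions checkpoint; `answer_ids` is only needed
when a loss should be computed:

```python
from transformers import RealmForOpenQA, RealmRetriever, RealmTokenizer

retriever = RealmRetriever.from_pretrained("google/realm-orqa-nq-openqa")
tokenizer = RealmTokenizer.from_pretrained("google/realm-orqa-nq-openqa")
model = RealmForOpenQA.from_pretrained("google/realm-orqa-nq-openqa", retriever=retriever)

question = "Who is the pioneer in modern computer science?"
question_ids = tokenizer([question], return_tensors="pt")
answer_ids = tokenizer(
    ["alan mathison turing"],
    add_special_tokens=False,
    return_token_type_ids=False,
    return_attention_mask=False,
).input_ids

# The model retrieves evidence blocks, reads them, and predicts an answer span
reader_output, predicted_answer_ids = model(**question_ids, answer_ids=answer_ids, return_dict=False)
predicted_answer = tokenizer.decode(predicted_answer_ids)
loss = reader_output.loss
```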