create model cards for qg models (#5610)
commit 82ce8488bb
parent d6b6ab11f0
model_cards/valhalla/t5-base-e2e-qg/README.md (new file)
@@ -0,0 +1,38 @@
---
datasets:
- squad
tags:
- question-generation
widget:
- text: "Python is a programming language. It is developed by Guido Van Rossum and released in 1991. </s>"
license: "MIT"
---

## T5 for question-generation

This is a [t5-base](https://arxiv.org/abs/1910.10683) model trained for the end-to-end question generation task: simply input the text and the model will generate multiple questions.

You can play with the model using the inference API: just enter the text and see the results!

For more details, see [this](https://github.com/patil-suraj/question_generation) repo.

### Model in action 🚀

You'll need to clone the [repo](https://github.com/patil-suraj/question_generation).

[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/patil-suraj/question_generation/blob/master/question_generation.ipynb)

```python3
from pipelines import pipeline

text = "Python is an interpreted, high-level, general-purpose programming language. Created by Guido van Rossum \
and first released in 1991, Python's design philosophy emphasizes code \
readability with its notable use of significant whitespace."

nlp = pipeline("e2e-qg", model="valhalla/t5-base-e2e-qg")
nlp(text)
=> [
    'Who created Python?',
    'When was Python first released?',
    "What is Python's design philosophy?"
]
```
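
If you'd rather not clone the repo, below is a minimal, unofficial sketch that drives the model directly with `transformers`. It assumes the widget input format shown above (plain text ending with `</s>`), and the generation settings are illustrative; the canonical pre- and post-processing live in the repo linked above.

```python3
# Minimal sketch (unofficial): raw model via transformers instead of the repo's pipeline.
# Input format assumed from the widget above; see the linked repo for the exact
# pre/post-processing used in training.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("valhalla/t5-base-e2e-qg")
model = AutoModelForSeq2SeqLM.from_pretrained("valhalla/t5-base-e2e-qg")

text = "Python is a programming language. It is developed by Guido Van Rossum and released in 1991. </s>"
input_ids = tokenizer(text, return_tensors="pt").input_ids

# max_length and num_beams are illustrative choices, not the repo's exact values
outputs = model.generate(input_ids, max_length=64, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```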
model_cards/valhalla/t5-base-qa-qg-hl/README.md (new file)
@@ -0,0 +1,50 @@
---
datasets:
- squad
tags:
- question-generation
widget:
- text: "generate question: <hl> 42 <hl> is the answer to life, the universe and everything. </s>"
- text: "question: What is 42 context: 42 is the answer to life, the universe and everything. </s>"
license: "MIT"
---

## T5 for multi-task QA and QG

This is a multi-task [t5-base](https://arxiv.org/abs/1910.10683) model trained for the question answering and answer-aware question generation tasks.

For question generation, the answer spans are highlighted within the text with special highlight tokens (`<hl>`) and the input is prefixed with 'generate question: '. For QA, the input is processed like this: `question: question_text context: context_text </s>`

You can play with the model using the inference API. Here's how you can use it:

For QG:

`generate question: <hl> 42 <hl> is the answer to life, the universe and everything. </s>`

For QA:

`question: What is 42 context: 42 is the answer to life, the universe and everything. </s>`

For more details, see [this](https://github.com/patil-suraj/question_generation) repo.

### Model in action 🚀

You'll need to clone the [repo](https://github.com/patil-suraj/question_generation).

[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/patil-suraj/question_generation/blob/master/question_generation.ipynb)

```python3
from pipelines import pipeline
nlp = pipeline("multitask-qa-qg", model="valhalla/t5-base-qa-qg-hl")

# to generate questions, simply pass the text
nlp("42 is the answer to life, the universe and everything.")
=> [{'answer': '42', 'question': 'What is the answer to life, the universe and everything?'}]

# for QA, pass a dict with "question" and "context"
nlp({
    "question": "What is 42 ?",
    "context": "42 is the answer to life, the universe and everything."
})
=> 'the answer to life, the universe and everything'
```
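
Because the input formats are spelled out above, they map directly onto the raw model as well. Here's a minimal, unofficial sketch for the QA format (generation settings are illustrative, not the repo's exact values):

```python3
# Minimal sketch (unofficial): QA with the raw model, using the
# "question: ... context: ... </s>" format described above.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("valhalla/t5-base-qa-qg-hl")
model = AutoModelForSeq2SeqLM.from_pretrained("valhalla/t5-base-qa-qg-hl")

prompt = "question: What is 42 context: 42 is the answer to life, the universe and everything. </s>"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

outputs = model.generate(input_ids, max_length=32)  # illustrative max_length
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```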
model_cards/valhalla/t5-base-qg-hl/README.md (new file)
@@ -0,0 +1,33 @@
---
datasets:
- squad
tags:
- question-generation
widget:
- text: "<hl> 42 <hl> is the answer to life, the universe and everything. </s>"
- text: "Python is a programming language. It is developed by <hl> Guido Van Rossum <hl>. </s>"
- text: "Although <hl> practicality <hl> beats purity </s>"
license: "MIT"
---

## T5 for question-generation

This is a [t5-base](https://arxiv.org/abs/1910.10683) model trained for the answer-aware question generation task. The answer spans are highlighted within the text with special highlight tokens.

You can play with the model using the inference API: just highlight the answer spans with `<hl>` tokens and end the text with `</s>`. For example:

`<hl> 42 <hl> is the answer to life, the universe and everything. </s>`

For more details, see [this](https://github.com/patil-suraj/question_generation) repo.

### Model in action 🚀

You'll need to clone the [repo](https://github.com/patil-suraj/question_generation).

[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/patil-suraj/question_generation/blob/master/question_generation.ipynb)

```python3
from pipelines import pipeline
nlp = pipeline("question-generation", model="valhalla/t5-base-qg-hl")
nlp("42 is the answer to life, universe and everything.")
=> [{'answer': '42', 'question': 'What is the answer to life, universe and everything?'}]
```
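
You can also build the highlighted input by hand and call the raw model directly. A minimal, unofficial sketch (generation settings are illustrative):

```python3
# Minimal sketch (unofficial): answer-aware QG with the raw model.
# The answer span is wrapped in <hl> tokens and the text ends with </s>,
# exactly as described above.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("valhalla/t5-base-qg-hl")
model = AutoModelForSeq2SeqLM.from_pretrained("valhalla/t5-base-qg-hl")

prompt = "<hl> 42 <hl> is the answer to life, the universe and everything. </s>"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

outputs = model.generate(input_ids, max_length=32, num_beams=4)  # illustrative settings
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```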
model_cards/valhalla/t5-samll-qg-prepend/README.md (new file)
@@ -0,0 +1,36 @@
---
datasets:
- squad
tags:
- question-generation
widget:
- text: "answer: 42 context: 42 is the answer to life, the universe and everything. </s>"
- text: "answer: Guido Van Rossum context: Python is a programming language. It is developed by Guido Van Rossum. </s>"
- text: "answer: Explicit context: Explicit is better than implicit </s>"
license: "MIT"
---

## T5 for question-generation

This is a [t5-small](https://arxiv.org/abs/1910.10683) model trained for the answer-aware question generation task. The answer text is prepended before the context text.

You can play with the model using the inference API: just format the input text like this and see the results!

`answer: answer_text context: context_text </s>`

For example:

`answer: 42 context: 42 is the answer to life, the universe and everything. </s>`

For more details, see [this](https://github.com/patil-suraj/question_generation) repo.

### Model in action 🚀

You'll need to clone the [repo](https://github.com/patil-suraj/question_generation).

[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/patil-suraj/question_generation/blob/master/question_generation.ipynb)

```python3
from pipelines import pipeline
nlp = pipeline("question-generation", qg_format="prepend")
nlp("42 is the answer to life, universe and everything.")
=> [{'answer': '42', 'question': 'What is the answer to life, universe and everything?'}]
```
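
A minimal, unofficial sketch with the raw model, building the `answer: ... context: ...` input by hand. The hub model id below is assumed from this card's path (note the unusual spelling), and the generation settings are illustrative:

```python3
# Minimal sketch (unofficial): prepend-format QG with the raw model.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_name = "valhalla/t5-samll-qg-prepend"  # id assumed from this card's path; adjust if it doesn't resolve
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

prompt = "answer: 42 context: 42 is the answer to life, the universe and everything. </s>"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

outputs = model.generate(input_ids, max_length=32, num_beams=4)  # illustrative settings
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```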
model_cards/valhalla/t5-small-e2e-qg/README.md (new file)
@@ -0,0 +1,38 @@
---
datasets:
- squad
tags:
- question-generation
widget:
- text: "Python is developed by Guido Van Rossum and released in 1991. </s>"
license: "MIT"
---

## T5 for question-generation

This is a [t5-small](https://arxiv.org/abs/1910.10683) model trained for the end-to-end question generation task: simply input the text and the model will generate multiple questions.

You can play with the model using the inference API: just enter the text and see the results!

For more details, see [this](https://github.com/patil-suraj/question_generation) repo.

### Model in action 🚀

You'll need to clone the [repo](https://github.com/patil-suraj/question_generation).

[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/patil-suraj/question_generation/blob/master/question_generation.ipynb)

```python3
from pipelines import pipeline

text = "Python is an interpreted, high-level, general-purpose programming language. Created by Guido van Rossum \
and first released in 1991, Python's design philosophy emphasizes code \
readability with its notable use of significant whitespace."

nlp = pipeline("e2e-qg")
nlp(text)
=> [
    'Who created Python?',
    'When was Python first released?',
    "What is Python's design philosophy?"
]
```
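
If you'd rather skip cloning the repo, here is a minimal, unofficial sketch that drives the model directly with `transformers`. The input format is assumed from the widget above and the generation settings are illustrative; the repo linked above has the canonical pre/post-processing.

```python3
# Minimal sketch (unofficial): raw model via transformers instead of the repo's pipeline.
# Input format assumed from the widget above.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("valhalla/t5-small-e2e-qg")
model = AutoModelForSeq2SeqLM.from_pretrained("valhalla/t5-small-e2e-qg")

text = "Python is developed by Guido Van Rossum and released in 1991. </s>"
input_ids = tokenizer(text, return_tensors="pt").input_ids

outputs = model.generate(input_ids, max_length=64, num_beams=4)  # illustrative settings
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```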
model_cards/valhalla/t5-small-qa-qg-hl/README.md (new file)
@@ -0,0 +1,49 @@
---
datasets:
- squad
tags:
- question-generation
widget:
- text: "generate question: <hl> 42 <hl> is the answer to life, the universe and everything. </s>"
- text: "question: What is 42 context: 42 is the answer to life, the universe and everything. </s>"
license: "MIT"
---

## T5 for multi-task QA and QG

This is a multi-task [t5-small](https://arxiv.org/abs/1910.10683) model trained for the question answering and answer-aware question generation tasks.

For question generation, the answer spans are highlighted within the text with special highlight tokens (`<hl>`) and the input is prefixed with 'generate question: '. For QA, the input is processed like this: `question: question_text context: context_text </s>`

You can play with the model using the inference API. Here's how you can use it:

For QG:

`generate question: <hl> 42 <hl> is the answer to life, the universe and everything. </s>`

For QA:

`question: What is 42 context: 42 is the answer to life, the universe and everything. </s>`

For more details, see [this](https://github.com/patil-suraj/question_generation) repo.

### Model in action 🚀

You'll need to clone the [repo](https://github.com/patil-suraj/question_generation).

[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/patil-suraj/question_generation/blob/master/question_generation.ipynb)

```python3
from pipelines import pipeline
nlp = pipeline("multitask-qa-qg")

# to generate questions, simply pass the text
nlp("42 is the answer to life, the universe and everything.")
=> [{'answer': '42', 'question': 'What is the answer to life, the universe and everything?'}]

# for QA, pass a dict with "question" and "context"
nlp({
    "question": "What is 42 ?",
    "context": "42 is the answer to life, the universe and everything."
})
=> 'the answer to life, the universe and everything'
```
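
Since the card spells out the input formats, you can also call the raw model directly. A minimal, unofficial sketch for the QA format (generation settings are illustrative):

```python3
# Minimal sketch (unofficial): QA with the raw model, using the
# "question: ... context: ... </s>" format described above.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("valhalla/t5-small-qa-qg-hl")
model = AutoModelForSeq2SeqLM.from_pretrained("valhalla/t5-small-qa-qg-hl")

prompt = "question: What is 42 context: 42 is the answer to life, the universe and everything. </s>"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

outputs = model.generate(input_ids, max_length=32)  # illustrative max_length
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```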
model_cards/valhalla/t5-small-qg-hl/README.md (new file)
@@ -0,0 +1,33 @@
---
datasets:
- squad
tags:
- question-generation
widget:
- text: "<hl> 42 <hl> is the answer to life, the universe and everything. </s>"
- text: "Python is a programming language. It is developed by <hl> Guido Van Rossum <hl>. </s>"
- text: "Simple is better than <hl> complex <hl>. </s>"
license: "MIT"
---

## T5 for question-generation

This is a [t5-small](https://arxiv.org/abs/1910.10683) model trained for the answer-aware question generation task. The answer spans are highlighted within the text with special highlight tokens.

You can play with the model using the inference API: just highlight the answer spans with `<hl>` tokens and end the text with `</s>`. For example:

`<hl> 42 <hl> is the answer to life, the universe and everything. </s>`

For more details, see [this](https://github.com/patil-suraj/question_generation) repo.

### Model in action 🚀

You'll need to clone the [repo](https://github.com/patil-suraj/question_generation).

[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/patil-suraj/question_generation/blob/master/question_generation.ipynb)

```python3
from pipelines import pipeline
nlp = pipeline("question-generation")
nlp("42 is the answer to life, universe and everything.")
=> [{'answer': '42', 'question': 'What is the answer to life, universe and everything?'}]
```
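
You can also build the highlighted input by hand and call the raw model directly; a minimal, unofficial sketch (generation settings are illustrative):

```python3
# Minimal sketch (unofficial): answer-aware QG with the raw model.
# The answer span is wrapped in <hl> tokens and the text ends with </s>,
# exactly as described above.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("valhalla/t5-small-qg-hl")
model = AutoModelForSeq2SeqLM.from_pretrained("valhalla/t5-small-qg-hl")

prompt = "<hl> 42 <hl> is the answer to life, the universe and everything. </s>"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

outputs = model.generate(input_ids, max_length=32, num_beams=4)  # illustrative settings
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```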