---
license: mit
language:
- en
pipeline_tag: text2text-generation
arxiv: 2310.04921
model-index:
- name: crystal-11b
  results:
  - task:
      type: question-answering
      name: Commonsense Question Answering
    dataset:
      type: openbookqa
      name: OpenBookQA
    metrics:
    - type: accuracy
      value: 84.58
      name: Accuracy
  - task:
      type: question-answering
      name: Commonsense Question Answering
    dataset:
      type: ai2_arc
      name: ARC (easy)
      config: ARC-Easy
    metrics:
    - type: accuracy
      value: 87.54
      name: Accuracy
  - task:
      type: question-answering
      name: Commonsense Question Answering
    dataset:
      type: ai2_arc
      name: ARC (challenge)
      config: ARC-Challenge
    metrics:
    - type: accuracy
      value: 73.24
      name: Accuracy
  - task:
      type: question-answering
      name: Commonsense Question Answering
    dataset:
      type: commonsense_qa
      name: CommonsenseQA
    metrics:
    - type: accuracy
      value: 82.31
      name: Accuracy
  - task:
      type: question-answering
      name: Commonsense Question Answering
    dataset:
      type: qasc
      name: QASC
    metrics:
    - type: accuracy
      value: 81.97
      name: Accuracy
  - task:
      type: question-answering
      name: Commonsense Question Answering
    dataset:
      type: piqa
      name: Physical IQA
    metrics:
    - type: accuracy
      value: 88.08
      name: Accuracy
  - task:
      type: question-answering
      name: Commonsense Question Answering
    dataset:
      type: social_i_qa
      name: Social IQA
    metrics:
    - type: accuracy
      value: 82.24
      name: Accuracy
  - task:
      type: question-answering
      name: Commonsense Question Answering
    dataset:
      type: winogrande
      name: Winogrande
      config: winogrande_xl
    metrics:
    - type: accuracy
      value: 90.77
      name: Accuracy
---
# Model Card for Crystal
<!-- Provide a quick summary of what the model is/does. -->
Crystal is an introspective reasoning model for commonsense QA. See our paper at <https://arxiv.org/abs/2310.04921>.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
Crystal answers a given commonsense question by first generating a relevant knowledge statement, and then predicting the final answer by referencing the generated knowledge.
We call this process "introspective reasoning"; it improves both the prediction accuracy and the interpretability of neural models on reasoning tasks.
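Concretely, inference consists of two calls to the same seq2seq model. The sketch below summarizes the prompt formats, mirroring the usage code under "How to Get Started with the Model"; note that the `\n` separators are literal backslash-n characters inside the input text, not real newlines.
```python
# Stage 1 -- knowledge introspection:
#   input : "<question> \n Knowledge: "
#   output: a free-text knowledge statement
# Stage 2 -- knowledge-grounded reasoning:
#   input : "<question> \n Knowledge: <knowledge> \n Answer: "
#   output: an answer choice label, e.g. "A"
```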
- **Developed by:** Jiacheng Liu, Ramakanth Pasunuru, Hannaneh Hajishirzi, Yejin Choi, Asli Celikyilmaz
- **Shared by [optional]:** Jiacheng Liu
- **Model type:** Transformers
- **Language(s) (NLP):** English
- **License:** MIT
- **Finetuned from model [optional]:** t5-11b
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** <https://github.com/liujch1998/crystal>
- **Paper [optional]:** <https://arxiv.org/abs/2310.04921>
- **Demo [optional]:** <https://huggingface.co/spaces/liujch1998/crystal>
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
Crystal is intended to answer commonsense questions via an "introspective reasoning" process.
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
Crystal is a research prototype and may produce incorrect answers or flawed reasoning. Do not use it to make critical decisions. It is intended to answer commonsense questions, and may be unreliable on inputs outside this scope.
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
See the **Limitations** section of our paper.
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained('liujch1998/crystal-11b')
model = AutoModelForSeq2SeqLM.from_pretrained('liujch1998/crystal-11b')
model.eval()

max_question_len, max_knowledge_len, max_answer_len = 128, 32, 2
k = 1  # number of knowledge statements to generate
top_p = 0.0001  # near-zero top_p makes sampling effectively greedy

# The "\n" separators are literal backslash-n text in the input, not newlines.
question = 'If the mass of an object gets bigger what will happen to the amount of matter contained within it? \\n (A) gets bigger (B) gets smaller'
choices = ['A', 'B']
choices_ids = tokenizer(choices, return_tensors='pt', padding='max_length', truncation='longest_first', max_length=max_answer_len).input_ids  # (C, AL)

# Stage 1: generate knowledge statements for the question
prompt = question + ' \\n Knowledge: '
prompt_tok = tokenizer(prompt, return_tensors='pt', padding='max_length', truncation='longest_first', max_length=max_question_len)  # (1, QL)
knowledges_ids = model.generate(
    input_ids=prompt_tok.input_ids,
    attention_mask=prompt_tok.attention_mask,
    max_length=max_knowledge_len + 1,
    min_length=3,
    do_sample=True,
    num_return_sequences=k,
    top_p=top_p,
)  # (K, KL); begins with 0 (the pad token used as decoder start); ends with 1 ([EOS])
knowledges_ids = knowledges_ids[:, 1:].contiguous()  # drop the leading decoder-start token; still ends with 1 ([EOS])
knowledges = tokenizer.batch_decode(knowledges_ids, skip_special_tokens=True, clean_up_tokenization_spaces=True)

# Stage 2: predict the answer, conditioned on each generated knowledge statement
prompts = [question + (f' \\n Knowledge: {knowledge} \\n Answer: ' if knowledge != '' else ' \\n Answer:') for knowledge in knowledges]
prompts_tok = tokenizer(prompts, return_tensors='pt', padding='max_length', truncation='longest_first', max_length=max_question_len + max_knowledge_len)  # (K, QL+KL)
output = model(
    input_ids=prompts_tok.input_ids,
    attention_mask=prompts_tok.attention_mask,
    labels=choices_ids[0].unsqueeze(0).repeat(len(knowledges), 1),  # dummy labels; we only need the decoder logits
)
logitsss = output.logits  # (K, AL, V)
logitss = logitsss[:, 0, :]  # logits at the first decoding step (K, V)
choice_ids = choices_ids[:, 0]  # first token id of each answer choice (C)
answer_logitss = logitss.gather(dim=1, index=choice_ids.unsqueeze(0).expand(len(knowledges), -1))  # (K, C)
answer_probss = answer_logitss.softmax(dim=1)  # (K, C)
answer_probs = answer_probss.max(dim=0).values  # aggregate over knowledge statements (C)
pred = answer_probs.argmax(dim=0).item()
pred = choices[pred]
print(f'Question: {question}\nKnowledge: {knowledges[0]}\nAnswer: {pred}')
```
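For the example question above, the expected prediction is `A`. Since `top_p` is set near zero, sampling is effectively greedy, so the generated knowledge statement should be nearly deterministic across runs; the output will look something like:
```
Question: If the mass of an object gets bigger what will happen to the amount of matter contained within it? \n (A) gets bigger (B) gets smaller
Knowledge: <generated knowledge statement>
Answer: A
```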
You may also refer to <https://huggingface.co/spaces/liujch1998/crystal/blob/main/app.py#L10-L86> for the full implementation.
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
```
@article{Liu2023CrystalIR,
title={Crystal: Introspective Reasoners Reinforced with Self-Feedback},
author={Jiacheng Liu and Ramakanth Pasunuru and Hannaneh Hajishirzi and Yejin Choi and Asli Celikyilmaz},
journal={ArXiv},
year={2023},
volume={abs/2310.04921}
}
```
## Model Card Contact
Jiacheng Liu |