---
base_model: BitStarWalkin/SuperCorrect-7B
language:
- en
library_name: transformers
license: apache-2.0
metrics:
- accuracy
tags:
- llama-cpp
- gguf-my-repo
pipeline_tag: question-answering
---
# Triangle104/SuperCorrect-7B-Q4_K_S-GGUF
This model was converted to GGUF format from [`BitStarWalkin/SuperCorrect-7B`](https://huggingface.co/BitStarWalkin/SuperCorrect-7B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/BitStarWalkin/SuperCorrect-7B) for more details on the original model. This version is designed specifically for use with `llama.cpp`.
## SuperCorrect: Supervising and Correcting Language Models with Error-Driven Insights
[Paper](https://hf.co/papers/2410.09008) | [Code](https://github.com/YangLing0818/SuperCorrect-llm)
This model uses a novel two-stage fine-tuning method to improve reasoning accuracy and self-correction ability in LLMs, particularly for mathematical reasoning. It incorporates hierarchical thought templates ([Buffer of Thoughts (BoT)](https://github.com/YangLing0818/buffer-of-thought-llm)) to guide more deliberate reasoning.
Notably, SuperCorrect-7B significantly surpasses DeepSeekMath-7B by 7.8%/5.3% and Qwen2.5-Math-7B by 15.1%/6.3% on MATH/GSM8K benchmarks, achieving state-of-the-art performance among 7B models.
## Usage
The original (non-GGUF) model can be used with `transformers` or `vLLM`; the examples below load `BitStarWalkin/SuperCorrect-7B`. For this GGUF conversion, see the llama.cpp section further down.
### Usage with `transformers`
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "BitStarWalkin/SuperCorrect-7B"
device = "cuda"

# Load the model and tokenizer; device_map="auto" places the weights on the available GPU(s).
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Use a raw string so LaTeX escapes such as \f are not interpreted as Python escape sequences.
prompt = r"Find the distance between the foci of the ellipse \[9x^2 + \frac{y^2}{9} = 99.\]"
hierarchical_prompt = "Solve the following math problem in a step-by-step XML format, each step should be enclosed within tags like <Step1></Step1>. For each step enclosed within the tags, determine if this step is challenging and tricky, if so, add detailed explanation and analysis enclosed within <Key> </Key> in this step, as helpful annotations to help you thinking and remind yourself how to conduct reasoning correctly. After all the reasoning steps, summarize the common solution and reasoning steps to help you and your classmates who are not good at math generalize to similar problems within <Generalized></Generalized>. Finally present the final answer within <Answer> </Answer>."
messages = [
{"role": "system", "content": hierarchical_prompt},
{"role": "user", "content": prompt}
]
# Apply the chat template and tokenize the formatted prompt.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(device)
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=1024
)
# Strip the prompt tokens so that only the newly generated answer is decoded.
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
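If everything is set up correctly, the response should follow the XML-style format requested by the system prompt: reasoning steps in `<Step1>`, `<Step2>`, ... tags, tricky steps annotated inside `<Key>` tags, a `<Generalized>` summary, and the final result inside `<Answer>` tags.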
### Usage with `vLLM`
The reference vLLM example lives in the [GitHub README](https://github.com/YangLing0818/SuperCorrect-llm).
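As a starting point, here is a minimal sketch assuming vLLM's standard offline-inference API, reusing the same hierarchical system prompt as the `transformers` example above; the sampling settings are illustrative, not the authors' recommended values.
```python
# Minimal vLLM sketch (assumed API usage, not the authors' reference script).
from transformers import AutoTokenizer
from vllm import LLM, SamplingParams

model_name = "BitStarWalkin/SuperCorrect-7B"
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Copy the full hierarchical system prompt from the transformers example above.
hierarchical_prompt = "..."
prompt = r"Find the distance between the foci of the ellipse \[9x^2 + \frac{y^2}{9} = 99.\]"

messages = [
    {"role": "system", "content": hierarchical_prompt},
    {"role": "user", "content": prompt},
]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

llm = LLM(model=model_name, dtype="auto")
sampling_params = SamplingParams(temperature=0.0, max_tokens=1024)  # illustrative settings
outputs = llm.generate([text], sampling_params)
print(outputs[0].outputs[0].text)
```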
## Use with llama.cpp
(Instructions from the original README - retained)
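For reference, a typical GGUF-my-repo invocation looks like the sketch below. The `--hf-file` value is an assumption derived from the repo name and Q4_K_S quantization; check this repo's Files tab for the actual `.gguf` filename.
```bash
# Install llama.cpp (Homebrew works on macOS and Linux); building from source also works.
brew install llama.cpp

# CLI one-shot generation; the --hf-file name below is assumed, verify it against the repo's Files tab.
llama-cli --hf-repo Triangle104/SuperCorrect-7B-Q4_K_S-GGUF \
  --hf-file supercorrect-7b-q4_k_s.gguf \
  -p "Find the distance between the foci of the ellipse 9x^2 + y^2/9 = 99."

# Or start an OpenAI-compatible server (default port 8080) with a 2048-token context.
llama-server --hf-repo Triangle104/SuperCorrect-7B-Q4_K_S-GGUF \
  --hf-file supercorrect-7b-q4_k_s.gguf \
  -c 2048
```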
## Evaluation
(Evaluation information from the original README - retained)
## Citation
(Citation information from the original README - retained)
## Acknowledgements
(Acknowledgements from the original README - retained)