We performed QLoRA fine-tuning on the Baichuan2-7B-Chat model using our self-constructed mathematical reasoning dataset, improving its GSM8K accuracy from 3% to 10%.
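Below is a minimal sketch of how such a QLoRA fine-tune can be set up with `transformers`, `peft`, and `bitsandbytes`. The dataset file name, hyperparameters, and LoRA target modules are illustrative assumptions, not the exact settings used for this model.

```python
# QLoRA fine-tuning sketch for Baichuan2-7B-Chat (hyperparameters are assumptions).
import torch
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    BitsAndBytesConfig,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base_model = "baichuan-inc/Baichuan2-7B-Chat"

# 4-bit NF4 quantization of the frozen base weights, as in QLoRA
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(base_model, trust_remote_code=True)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token

model = AutoModelForCausalLM.from_pretrained(
    base_model,
    quantization_config=bnb_config,
    trust_remote_code=True,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

# Trainable low-rank adapters; "W_pack" is Baichuan2's fused attention projection
# (assumption: verify the module names against the loaded model).
lora_config = LoraConfig(
    r=64,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["W_pack"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

# "math_reasoning.jsonl" is a placeholder for the self-constructed dataset,
# assumed to contain "prompt" and "response" fields.
dataset = load_dataset("json", data_files="math_reasoning.jsonl", split="train")

def tokenize(example):
    text = example["prompt"] + example["response"]
    return tokenizer(text, truncation=True, max_length=1024)

tokenized = dataset.map(tokenize, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="baichuan2-7b-chat-qlora-math",
        per_device_train_batch_size=4,
        gradient_accumulation_steps=4,
        num_train_epochs=3,
        learning_rate=2e-4,
        bf16=True,
        logging_steps=10,
    ),
    train_dataset=tokenized,
    # Pads batches and copies input_ids to labels for causal-LM loss
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()

# Only the LoRA adapter weights are saved; merge into the base model for deployment.
model.save_pretrained("baichuan2-7b-chat-qlora-math")
```

After training, evaluation on GSM8K can be run against the merged model to reproduce the reported improvement.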