DeepMath-7B-L

Model Overview

DeepMath-7B-L is a fine-tuned version of DeepSeek-R1-Distill-Qwen-1.5B trained on the GSM8K dataset. The model is designed for mathematical reasoning and problem-solving, excelling at arithmetic, algebra, and word problems.

Model Details

  • Base Model: DeepSeek-R1-Distill-Qwen-1.5B
  • Fine-Tuning Dataset: GSM8K
  • Parameters: 1.5 Billion
  • Task: Mathematical Question Answering (Math QA)
  • Repository: https://huggingface.co/codewithdark/deepmath-7b-l
  • Commit Messages:
    • "Full merged model for math QA"
    • "Added LoRA adapters for math reasoning"

Training Details

  • Dataset: GSM8K (Grade School Math 8K) - a high-quality dataset for mathematical reasoning
  • Fine-Tuning Framework: Hugging Face Transformers & PyTorch
  • Optimization Techniques (see the sketch after this list):
    • AdamW Optimizer
    • Learning rate scheduling
    • Gradient accumulation
    • Mixed precision training (FP16)
  • Training Steps: Multiple epochs on a high-performance GPU cluster
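
The exact training script is not published with the card; the following is a minimal sketch of how a setup like the one described above could be reproduced with Hugging Face Transformers. The base-model repository ID is the public DeepSeek release, while every hyperparameter value shown (epochs, batch size, learning rate, scheduler type, accumulation steps) is an assumption rather than the configuration actually used.

from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"
tokenizer = AutoTokenizer.from_pretrained(base_id)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base_id)

# GSM8K examples have "question" and "answer" fields; join them into one
# training text per example.
train_set = load_dataset("gsm8k", "main", split="train")

def tokenize(example):
    text = example["question"] + "\n" + example["answer"]
    return tokenizer(text, truncation=True, max_length=512)

tokenized = train_set.map(tokenize, remove_columns=train_set.column_names)

args = TrainingArguments(
    output_dir="deepmath-7b-l",
    num_train_epochs=3,                # "multiple epochs" (value assumed)
    per_device_train_batch_size=4,     # assumed
    gradient_accumulation_steps=8,     # gradient accumulation
    learning_rate=2e-5,                # assumed
    lr_scheduler_type="cosine",        # learning rate scheduling (type assumed)
    optim="adamw_torch",               # AdamW optimizer
    fp16=True,                         # mixed precision (FP16) training
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()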

Capabilities & Performance

DeepMath-7B-L excels at:

  • Solving word problems with step-by-step reasoning (see the example after this list)
  • Performing algebraic and arithmetic computations
  • Understanding complex problem structures
  • Generating structured solutions with explanations
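
As an illustration of the word-problem capability, the snippet below poses the first question from the GSM8K training set and nudges the model toward a step-by-step solution. The prompt wording is an assumption; the card does not prescribe a prompt template.

from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("codewithdark/deepmath-7b-l")
model = AutoModelForCausalLM.from_pretrained("codewithdark/deepmath-7b-l")

# A GSM8K-style word problem; the trailing instruction encourages a
# structured, step-by-step solution.
prompt = (
    "Natalia sold clips to 48 of her friends in April, and then she sold "
    "half as many clips in May. How many clips did Natalia sell altogether "
    "in April and May? Let's think step by step."
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))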

DeepMath-7B-L (LoRA Adapter-Enhanced Model)

from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the DeepMath-7B-L checkpoint from the Hugging Face Hub.
tokenizer = AutoTokenizer.from_pretrained("codewithdark/deepmath-7b-l")
model = AutoModelForCausalLM.from_pretrained("codewithdark/deepmath-7b-l")

# Pose a simple algebra problem and generate a solution.
input_text = "Solve: 2x + 3 = 7"
inputs = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
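
The commit messages above mention both a full merged checkpoint and LoRA adapters. If you are working with the adapter weights rather than the merged model, a typical loading pattern with the PEFT library looks like the following sketch. The adapter repository ID is a placeholder assumption, since the card does not list a separate adapter repo.

from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"
adapter_id = "codewithdark/deepmath-7b-l"  # placeholder: adapter repo not specified in the card

# Load the base model, then attach the LoRA adapters on top of it.
tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base_model, adapter_id)

# Optionally merge the adapters into the base weights for faster inference.
model = model.merge_and_unload()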

Limitations

  • May struggle with extremely complex mathematical proofs
  • Performance is limited to the scope of GSM8K-type problems
  • Potential biases in training data

Future Work

  • Extending training to more diverse math datasets
  • Exploring larger models for improved accuracy
  • Fine-tuning on physics and higher-level mathematical reasoning datasets

License

This model is released under the Apache 2.0 License.

Citation

If you use this model, please cite:

@misc{DeepMath-7B-L,
  author = {Ahsan},
  title = {DeepMath-7B-L: LoRA Adapter Enhanced Model for Math Reasoning},
  year = {2025},
  url = {https://huggingface.co/codewithdark/deepmath-7b-l}
}