---
base_model: unsloth/Qwen2.5-Math-1.5B-Instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
---

# Uploaded model

- **Developed by:** thanhkt
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen2.5-Math-1.5B-Instruct-bnb-4bit

This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
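For reference, below is a minimal sketch of how a fine-tune like this is typically set up with Unsloth and TRL's `SFTTrainer`. The hyperparameters are illustrative rather than the exact values used for this model, and the `SFTTrainer` signature varies slightly across TRL versions:

```python
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
import torch

# Load the 4-bit base model (same loader used for inference below).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "unsloth/Qwen2.5-Math-1.5B-Instruct-bnb-4bit",
    max_seq_length = 4096,
    load_in_4bit = True,
)

# Attach LoRA adapters; rank and target modules below are common Unsloth defaults.
model = FastLanguageModel.get_peft_model(
    model,
    r = 16,
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                      "gate_proj", "up_proj", "down_proj"],
    lora_alpha = 16,
)

trainer = SFTTrainer(
    model = model,
    tokenizer = tokenizer,
    train_dataset = dataset,  # formatted examples (see the Dataset section below)
    dataset_text_field = "text",
    max_seq_length = 4096,
    args = TrainingArguments(
        per_device_train_batch_size = 2,
        gradient_accumulation_steps = 4,
        learning_rate = 2e-4,
        num_train_epochs = 1,
        fp16 = not torch.cuda.is_bf16_supported(),
        bf16 = torch.cuda.is_bf16_supported(),
        output_dir = "outputs",
    ),
)
trainer.train()
```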

## Dataset

The model was trained on the NVIDIA MathInstruct dataset, which consists of 100,000 rows. This dataset was chosen specifically to strengthen the model's mathematical reasoning and instruction-following capabilities. A sketch of the preprocessing is shown below.
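As a rough illustration, each example can be mapped into the Alpaca-style prompt used at inference time (the same template appears in the snippet in the next section). The dataset ID and column names below are placeholders, since the card does not pin down the exact dataset:

```python
from datasets import load_dataset

# Placeholder dataset ID -- substitute the actual NVIDIA MathInstruct dataset path.
dataset = load_dataset("nvidia/math-instruct", split="train")

alpaca_prompt = """Below...

### Instruct:
{}

### Input:
{}

### Output:
{}"""

def format_example(example):
    # "question"/"answer" column names are assumptions; check the dataset schema.
    return {"text": alpaca_prompt.format(example["question"], "", example["answer"])}

dataset = dataset.map(format_example)
```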

### 🤗 Hugging Face Transformers

Qwen2.5-Math can be deployed and run for inference in the same way as [Qwen2.5](https://github.com/QwenLM/Qwen2.5). The snippet below shows how to use the chat model with `transformers`:

```python
from unsloth import FastLanguageModel
import torch
max_seq_length = 4096 # Choose any! We auto support RoPE Scaling internally!
dtype = None # None for auto detection. Float16 for Tesla T4, V100, Bfloat16 for Ampere+
load_in_4bit = True # Use 4bit quantization to reduce memory usage. Can be False.


model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "thanhkt/Qwen2.5-1.5B-MathInstruct", 
    max_seq_length = max_seq_length,
    dtype = dtype,
    load_in_4bit = load_in_4bit,
    # token = "hf_...", # use one if using gated models like meta-llama/Llama-2-7b-hf
)
alpaca_prompt = """Below...

### Instruct:
{}

### Input:
{}

### Output:
{}"""

FastLanguageModel.for_inference(model) # Enable native 2x faster inference
inputs = tokenizer(
[
    alpaca_prompt.format(
        """A company wants to make a pipeline from a point A on shore to a point B on an island. The island is 6km from the coast. The price to build an onshore pipeline is $50,000 per kilometer, and $130,000 per kilometer to build

underwater. B' is the point on the coast so that BB' is perpendicular to the coast. The distance from A to B' is 9km. Position C on section AB' so that when connecting pipes according to ACB, the amount is minimal. At that time, C is one paragraph away from A by:

A. 6.5km B. 6km C. 0km D.9km""", # instruction
        "", # input
        "", # output - leave this blank for generation!
    )
], return_tensors = "pt").to("cuda")

from transformers import TextStreamer
text_streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer = text_streamer, max_new_tokens = 512)
```
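If streaming output is not needed, the completion can also be decoded in one shot, reusing `inputs` from the snippet above:

```python
# Non-streaming alternative: generate, then decode the full sequence at once.
outputs = model.generate(**inputs, max_new_tokens = 512)
print(tokenizer.batch_decode(outputs, skip_special_tokens = True)[0])
```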