pedromatias97 committed
Commit ce94105
1 Parent(s): f4a06cb

Update Model Card with inference script

Files changed (1)
  1. README.md +147 -80
README.md CHANGED
@@ -1,80 +1,147 @@
- ---
- license: apache-2.0
- base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
- tags:
- - generated_from_trainer
- model-index:
- - name: little-llama2-ft-qa
-   results: []
- ---
-
- <!-- This model card has been generated automatically according to the information the Trainer had access to. You
- should probably proofread and complete it, then remove this comment. -->
-
- # little-llama2-ft-qa
-
- This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) on an unknown dataset.
- It achieves the following results on the evaluation set:
- - Loss: 1.5732
-
- ## Model description
-
- More information needed
-
- ## Intended uses & limitations
-
- More information needed
-
- ## Training and evaluation data
-
- More information needed
-
- ## Training procedure
-
-
- The following `bitsandbytes` quantization config was used during training:
- - quant_method: bitsandbytes
- - _load_in_8bit: False
- - _load_in_4bit: True
- - llm_int8_threshold: 6.0
- - llm_int8_skip_modules: None
- - llm_int8_enable_fp32_cpu_offload: False
- - llm_int8_has_fp16_weight: False
- - bnb_4bit_quant_type: nf4
- - bnb_4bit_use_double_quant: False
- - bnb_4bit_compute_dtype: float16
- - load_in_4bit: True
- - load_in_8bit: False
- ### Training hyperparameters
-
- The following hyperparameters were used during training:
- - learning_rate: 0.0001
- - train_batch_size: 1
- - eval_batch_size: 8
- - seed: 42
- - gradient_accumulation_steps: 4
- - total_train_batch_size: 4
- - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- - lr_scheduler_type: cosine
- - lr_scheduler_warmup_ratio: 0.05
- - num_epochs: 1
-
- ### Training results
-
- | Training Loss | Epoch | Step | Validation Loss |
- |:-------------:|:-----:|:----:|:---------------:|
- | 1.2789 | 0.2 | 250 | 1.5908 |
- | 0.9655 | 0.4 | 500 | 1.5828 |
- | 0.9788 | 0.6 | 750 | 1.5764 |
- | 1.3064 | 0.8 | 1000 | 1.5739 |
- | 1.0251 | 1.0 | 1250 | 1.5732 |
-
-
- ### Framework versions
-
- - PEFT 0.5.0
- - Transformers 4.38.2
- - Pytorch 2.2.1+cu121
- - Datasets 2.19.0
- - Tokenizers 0.15.2
+ ---
+ license: apache-2.0
+ base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
+ tags:
+ - generated_from_trainer
+ model-index:
+ - name: little-llama2-ft-qa
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # little-llama2-ft-qa
+
+ This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) on an unknown dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 1.5732
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ The following `bitsandbytes` quantization config was used during training (an equivalent `BitsAndBytesConfig` is sketched after the list):
+ - quant_method: bitsandbytes
+ - _load_in_8bit: False
+ - _load_in_4bit: True
+ - llm_int8_threshold: 6.0
+ - llm_int8_skip_modules: None
+ - llm_int8_enable_fp32_cpu_offload: False
+ - llm_int8_has_fp16_weight: False
+ - bnb_4bit_quant_type: nf4
+ - bnb_4bit_use_double_quant: False
+ - bnb_4bit_compute_dtype: float16
+ - load_in_4bit: True
+ - load_in_8bit: False
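+
+ For reference, the same settings can be expressed as a `transformers` `BitsAndBytesConfig`; the snippet below is a reconstruction from the list above, not the original training code:
+
+ ```python
+ import torch
+ from transformers import BitsAndBytesConfig
+
+ # reconstruction of the quantization config listed above
+ bnb_config = BitsAndBytesConfig(
+     load_in_4bit=True,
+     bnb_4bit_quant_type="nf4",
+     bnb_4bit_use_double_quant=False,
+     bnb_4bit_compute_dtype=torch.float16,
+     llm_int8_threshold=6.0,
+     llm_int8_has_fp16_weight=False,
+ )
+ ```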
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training (see the `TrainingArguments` sketch after the list):
+ - learning_rate: 0.0001
+ - train_batch_size: 1
+ - eval_batch_size: 8
+ - seed: 42
+ - gradient_accumulation_steps: 4
+ - total_train_batch_size: 4
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: cosine
+ - lr_scheduler_warmup_ratio: 0.05
+ - num_epochs: 1
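+
+ Expressed as `transformers.TrainingArguments`, these settings would look roughly like the sketch below (a reconstruction, not the original training script; `output_dir` is a placeholder):
+
+ ```python
+ from transformers import TrainingArguments
+
+ # reconstruction of the hyperparameters listed above
+ training_args = TrainingArguments(
+     output_dir="output",  # placeholder path
+     learning_rate=1e-4,
+     per_device_train_batch_size=1,
+     per_device_eval_batch_size=8,
+     seed=42,
+     gradient_accumulation_steps=4,  # effective train batch size: 1 * 4 = 4
+     lr_scheduler_type="cosine",
+     warmup_ratio=0.05,
+     num_train_epochs=1,
+ )  # Adam betas (0.9, 0.999) and epsilon 1e-08 match the defaults
+ ```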
+
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss |
+ |:-------------:|:-----:|:----:|:---------------:|
+ | 1.2789 | 0.2 | 250 | 1.5908 |
+ | 0.9655 | 0.4 | 500 | 1.5828 |
+ | 0.9788 | 0.6 | 750 | 1.5764 |
+ | 1.3064 | 0.8 | 1000 | 1.5739 |
+ | 1.0251 | 1.0 | 1250 | 1.5732 |
+
+ ## Inference
+
+ ```python
+ import torch
+ from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
+
+ # prompt template; the wording should stay consistent with the format the model was fine-tuned on
+ def generate_inference_prompt(context, question):
+     return f"""### Instruction: Please answer to the question based on the context information provided. If you don't know the answer, please just say you don't know it, don't try to make an answer from that.\n
+ ### Context:
+ {context.strip()}\n
+
+ ### Question:
+ {question.strip()}
+
+ ### Answer:
+
+ """.strip()
+
+ # context to answer from
+ context = """
+ Great Britain (commonly shortened to Britain) is an island in the North Atlantic Ocean off the north-west coast of continental Europe, consisting of England, Scotland and Wales. With an area of 209,331 km2 (80,823 sq mi), it is the largest of the British Isles, the largest European island and the ninth-largest island in the world. It is dominated by a maritime climate with narrow temperature differences between seasons. The island of Ireland, with an area 40 per cent that of Great Britain, is to the west—these islands, along with over 1,000 smaller surrounding islands and named substantial rocks, form the British Isles archipelago.
+ """
+
+ # question to ask
+ question = """
+ What is the % of area occupied by Ireland in Great Britain?
+ """
+
+ # load the model; transformers resolves this PEFT adapter repo to its base model and
+ # applies the adapter automatically (requires `peft`; `device_map` requires `accelerate`)
+ model = AutoModelForCausalLM.from_pretrained(
+     'pedromatias97/little-llama2-ft-qa',
+     torch_dtype=torch.bfloat16,
+     device_map="auto",
+ )
+
+ # load the tokenizer
+ tokenizer = AutoTokenizer.from_pretrained(
+     'pedromatias97/little-llama2-ft-qa'
+ )
+
+ # text-generation pipeline; dtype and device placement were already set at load time
+ pipe = pipeline(
+     "text-generation",
+     model=model,
+     tokenizer=tokenizer,
+ )
+
+ # build the prompt
+ prompt = generate_inference_prompt(context, question)
+
+ # generate text
+ sequences = pipe(
+     prompt,
+     do_sample=True,
+     max_new_tokens=10,
+     temperature=0.7,
+     top_k=50,
+     top_p=0.95,
+     num_return_sequences=1,
+ )
+
+ # print the result (the generated text includes the prompt, followed by the answer)
+ print(sequences[0]['generated_text'])
+
+ ### output: 40 per cent that of Great Britain
+ ```
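+
+ Since this repository contains a PEFT adapter, it can also be loaded explicitly on top of its base model instead of relying on the automatic resolution above. A minimal sketch, assuming a recent `peft` is installed:
+
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+ from peft import PeftModel
+
+ # load the base model, then apply this repo's PEFT adapter on top of it
+ base_model = AutoModelForCausalLM.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v1.0")
+ model = PeftModel.from_pretrained(base_model, "pedromatias97/little-llama2-ft-qa")
+ tokenizer = AutoTokenizer.from_pretrained("pedromatias97/little-llama2-ft-qa")
+ ```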
+
+ ### Framework versions
+
+ - PEFT 0.5.0
+ - Transformers 4.38.2
+ - Pytorch 2.2.1+cu121
+ - Datasets 2.19.0
+ - Tokenizers 0.15.2