---
library_name: peft
---
## Examples
As the examples below show, the output is far from ideal, and noticeably worse than a plain GPT/Llama 2 prompt without fine-tuning.

Quality issues:
- the corrected sentences still contain major errors
- the explanations of the corrections are wrong

The low quality is probably caused by the very small volume of training data and the small LLM size.

Interesting observation: the LLM reproduces Example 1 (taken verbatim from the training data) with 100% fidelity.
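For reference, the prompt template used in the examples below can be assembled with a small helper (a hypothetical sketch; the template text, including its original typos, is copied verbatim from the examples):

```python
def build_prompt(sentence: str) -> str:
    """Assemble the grammar-check prompt used in the examples below.

    The template (including the original 'gramma' typo) is copied
    verbatim from this model card; the helper itself is illustrative.
    """
    return (
        "### User: Check the gramma of the following sentence "
        "and list all error and relevant corrections. "
        f"Sentence:{sentence}"
        "#### AI:Correct version of the sentence:"
    )

prompt = build_prompt("Mir es geht gut, danke")
```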

### Example 1
#### Input
```
''### User: Check the gramma of the following sentence and list all error and relevant corrections. Sentence:Mir es geht gut, danke#### AI:Correct version of the sentence:
```
#### Output
```
"Mir geht es gut, danke."

Repairs:
- "Mir es geht" the verb should be placed in the second position "Mir geht es" - because of the correct word order.
- The sentence lacks a period at the end.#### End of repair list for this sentence.#### End of topic, repair list and the sentence
```

### Example 2
#### Input
```
''### User: Check the gramma of the following sentence and list all error and relevant corrections. Sentence:Er gehen dort, aber ich muss ander geneh.#### AI:Correct version of the sentence:
```
#### Output
```
Er macht dort dorte, aber ich muss einmal dorte.

Repairs:
- Instead of "Er gehen dort" it should be "Er macht dort" - the correct verb form is "macht"
- Instead of "ich muss" it should be "ich muss einmal" - the verb "einmal" is used in the future in German.
- The verb "dorte" must be conjugated in the third person singular "dorte".#### End of Repairs:
- Instead of "Er gehen dort" it should be "Er macht dort" - the correct verb form is "macht".
```
  
## Training procedure


The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
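
The values above correspond roughly to the following `transformers` `BitsAndBytesConfig` (a sketch reconstructed from the list, not the exact training script):

```python
import torch
from transformers import BitsAndBytesConfig

# NF4 4-bit quantization with fp16 compute, matching the values listed above.
bnb_config = BitsAndBytesConfig(
    load_in_8bit=False,
    load_in_4bit=True,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
```

Such a config would typically be passed as `quantization_config` to `AutoModelForCausalLM.from_pretrained` when loading the base model for QLoRA-style training.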
### Framework versions


- PEFT 0.4.0