Remove LLM instruction from the README :)
README.md
CHANGED
@@ -89,7 +89,6 @@ LLaMA-3-8B-Instruct-TR-DPO is a finetuned version of [Meta-LLaMA-3-8B-Instruct](
   - lora_dropout: 0.05
   - lora_target_linear: true
 
-<!-- talk about the aim of the finetuning, use passive voice -->
 The aim was to finetune the model to enhance the output format and content quality for the Turkish language. It is not necessarily smarter than the base model, but its outputs are more likable and preferable.
 
 Compared to the base model, LLaMA-3-8B-Instruct-TR-DPO is more fluent and coherent in Turkish. It can generate more informative and detailed answers for a given instruction.
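For context, the `lora_dropout: 0.05` and `lora_target_linear: true` lines shown as diff context are LoRA adapter hyperparameters in what appears to be an axolotl-style training config. A minimal sketch of how such a section might look is below; only the two values visible in the diff come from this commit, and every other key and value is an illustrative assumption, not the model's actual training config:

```yaml
# Hypothetical axolotl-style LoRA section (sketch, not the actual config).
adapter: lora             # assumption: LoRA adapter finetuning
lora_r: 32                # assumption: rank of the LoRA update matrices
lora_alpha: 16            # assumption: LoRA scaling factor
lora_dropout: 0.05        # from the diff context above
lora_target_linear: true  # from the diff context: target all linear layers
```

With `lora_target_linear: true`, axolotl applies LoRA to all linear projection layers rather than requiring an explicit `lora_target_modules` list.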