Mubin1917 committed
Commit 29127e5
1 Parent(s): ecc9ed0

Update README.md

Files changed (1):
  1. README.md (+56 −3)
README.md CHANGED

  - trl
  ---

# Uploaded Model: LoRA Adapter

- **Developed by:** Mubin1917
- **License:** apache-2.0
- **Finetuned from model:** unsloth/meta-llama-3.1-8b-instruct-bnb-4bit

This LoRA adapter is based on the `unsloth/meta-llama-3.1-8b-instruct-bnb-4bit` model and has been fine-tuned on the [**Lamini_docs QnA**](https://huggingface.co/datasets/lamini/lamini_docs) dataset. The fine-tuning process was optimized with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library, resulting in roughly 2x faster training.

### Training Configuration

The model was trained with the following configuration:

```python
import torch
from transformers import TrainingArguments

training_args = TrainingArguments(
    num_train_epochs=6,
    per_device_train_batch_size=4,
    gradient_accumulation_steps=4,
    per_device_eval_batch_size=4,
    eval_accumulation_steps=4,
    warmup_steps=50,
    learning_rate=2e-4,
    fp16=not torch.cuda.is_bf16_supported(),  # fall back to fp16 when bf16 is unavailable
    bf16=torch.cuda.is_bf16_supported(),
    eval_steps=25,  # evaluate every 25 steps
    logging_steps=25,
    optim="adamw_8bit",
    weight_decay=0.01,
    lr_scheduler_type="linear",
    seed=3407,
    output_dir="/kaggle/temp/results",
    report_to="wandb",
    save_total_limit=1,  # keep the best checkpoint and the latest one
    metric_for_best_model="val_loss",
    eval_strategy="steps",
    load_best_model_at_end=True,
)
```
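To make the schedule concrete, here is a plain-Python sketch of what `lr_scheduler_type="linear"` with `warmup_steps=50` and `learning_rate=2e-4` implies; the total step count below is an illustrative assumption, not a value taken from the actual run:

```python
# Sketch of the linear-warmup / linear-decay schedule implied by
# warmup_steps=50, learning_rate=2e-4, lr_scheduler_type="linear".
WARMUP_STEPS = 50
PEAK_LR = 2e-4
TOTAL_STEPS = 500  # hypothetical; depends on dataset size and batch size

def lr_at(step: int) -> float:
    """Learning rate after `step` optimizer steps."""
    if step < WARMUP_STEPS:
        # Linear ramp from 0 up to the peak learning rate.
        return PEAK_LR * step / WARMUP_STEPS
    # Linear decay from the peak down to 0 at TOTAL_STEPS.
    remaining = max(0, TOTAL_STEPS - step)
    return PEAK_LR * remaining / (TOTAL_STEPS - WARMUP_STEPS)

print(lr_at(25))   # halfway through warmup (~1e-4)
print(lr_at(50))   # peak (2e-4)
print(lr_at(500))  # end of training (0.0)
```

Note that each optimizer step here consumes `per_device_train_batch_size × gradient_accumulation_steps` = 4 × 4 = 16 training examples per device.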

### Evaluation Results

- **SacreBLEU (test set):** score **73.55**
  - Counts: [20894, 19191, 18504, 18029]
  - Totals: [26214, 26074, 25934, 25794]
  - Precisions: [79.71%, 73.60%, 71.35%, 69.90%]
  - Brevity penalty: **1.0**
  - System length: **26214**
  - Reference length: **24955**
- **BLEU (test set):** score **0.767**
  - Precisions: [79.71%, 73.73%]
  - Brevity penalty: **1.0**
  - Length ratio: **1.05**
  - Translation length: **26299**
  - Reference length: **24955**

For a detailed comparison between the predicted and actual QnA responses on the test dataset, please visit the [evaluation dataset](https://huggingface.co/datasets/Mubin1917/lamini_docs_evaluation).
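As a sanity check, the SacreBLEU score can be recomputed from its own reported components: BLEU is the brevity penalty times the geometric mean of the n-gram precisions. A minimal pure-Python sketch using only the numbers above (this makes no claim about SacreBLEU's internal tokenization):

```python
import math

# Reported SacreBLEU components from the model card above.
counts = [20894, 19191, 18504, 18029]  # matched n-grams, n = 1..4
totals = [26214, 26074, 25934, 25794]  # total n-grams in the system output
sys_len, ref_len = 26214, 24955

# Modified n-gram precisions, as percentages.
precisions = [100 * c / t for c, t in zip(counts, totals)]

# Brevity penalty: 1.0 because the system output is longer than the reference.
bp = 1.0 if sys_len > ref_len else math.exp(1 - ref_len / sys_len)

# BLEU = brevity penalty x geometric mean of the n-gram precisions.
score = bp * math.exp(sum(math.log(p) for p in precisions) / len(precisions))
print(f"{score:.2f}")  # close to the reported 73.55
```

The same identity explains the 2-gram BLEU figure: with precisions of 79.71% and 73.73% and a brevity penalty of 1.0, the geometric mean is √(0.7971 × 0.7373) ≈ 0.767.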

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)