Update README.md
README.md (changed)
@@ -33,6 +33,12 @@ To address overfitting, we implemented LoRA fine-tuning (rank 8, DeepSpeed), tar
 
 ## Training Details
 
+### Training Progress
+
+We used [Weights & Biases (W&B)](https://wandb.ai/) for tracking training metrics such as loss and evaluation performance. Below is the training loss curve, illustrating the model's progression over time:
+
+![Training Loss](./W&B Chart 2_2_2025, 10_47_32 PM.svg)
+
 ### Training Parameters
 ```python
 training_args = TrainingArguments(
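
The hunk above shows only the opening line of `TrainingArguments`; the full configuration lies outside this diff. As a minimal sketch of how the W&B tracking described in the added section is typically wired into a Hugging Face `Trainer` run, assuming the standard `report_to="wandb"` integration (the project name, paths, and hyperparameter values below are placeholders, not this repository's actual settings):

```python
# Minimal sketch: enabling Weights & Biases logging for a Hugging Face Trainer run.
# All names and values are illustrative placeholders, not this repo's configuration.
import wandb
from transformers import TrainingArguments

wandb.init(project="lora-finetune-demo")  # hypothetical W&B project name

training_args = TrainingArguments(
    output_dir="./outputs",   # placeholder output directory
    report_to="wandb",        # stream training loss and eval metrics to W&B
    logging_steps=10,         # how often the loss curve is sampled
    num_train_epochs=3,       # placeholder value
)
```

With `report_to="wandb"` set, the `Trainer` logs the same training-loss series that appears in the chart referenced above.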