End of training
README.md CHANGED

@@ -44,7 +44,7 @@ More information needed
 
 # Resource Usage Comparison
 
-- VRAM Use: 15.…
+- VRAM Use: 15.7092 GB
 
 # Distillation (Teacher -> Student) Architecture Difference:
 
@@ -75,7 +75,7 @@ More information needed
 <br/>
 
 # Train Dataset
-Trained on 521,366,153 tokens from the [wikimedia/wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia) dataset.
+Trained on 521,394,320 tokens from the [wikimedia/wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia) dataset.
 
 - Num Samples: `990,000`
 - Subset: `20231101.en`
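The subset named in this hunk is a standard `datasets` config for the wikimedia/wikipedia dump. A minimal sketch of loading the same data, assuming the usual `datasets` API; the 990,000-row slice mirrors the `Num Samples` figure and is illustrative, not the card's actual preprocessing:

```python
from datasets import load_dataset

# English Wikipedia dump named in the card (config "20231101.en").
ds = load_dataset("wikimedia/wikipedia", "20231101.en", split="train")

# The card lists Num Samples: 990,000; an equivalent slice would be:
subset = ds.select(range(990_000))
print(len(subset))  # 990000
```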
@@ -85,7 +85,7 @@ Trained on 521,366,153 tokens from the [wikimedia/wikipedia](https://huggingface
 # Training Objective
 
 ```
-DistillationObjective(logits_loss_component=LossComponent(label=logits, weight=1, loss_fn=kl), attn_loss_component=LossComponent(label=attn, weight=5, loss_fn=raw_mse, layer_mapper=layer-2, norm=…
+DistillationObjective(logits_loss_component=LossComponent(label=logits, weight=1, loss_fn=kl), attn_loss_component=LossComponent(label=attn, weight=5, loss_fn=raw_mse, layer_mapper=layer-2, norm=layernorm_teacher_only_affine, projector=mlp))
 ```
 
 # Hyperparameters
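Unpacking the repr above: the logits component is a KL divergence with weight 1, and the attention component is a raw MSE with weight 5 between mapped student/teacher attention tensors; the teacher-side layernorm and MLP projector named in the repr are omitted here. A minimal sketch of the combined loss, assuming `layer_mapper=layer-2` pairs each of the 6 student layers with every second layer of the 12-layer teacher (a plausible reading, not confirmed by the card); all tensor names are hypothetical:

```python
import torch
import torch.nn.functional as F

def distillation_objective(student_logits, teacher_logits,
                           student_attns, teacher_attns,
                           logits_weight=1.0, attn_weight=5.0):
    # logits_loss_component: KL between student and teacher token distributions.
    logits_loss = F.kl_div(
        F.log_softmax(student_logits, dim=-1),
        F.softmax(teacher_logits, dim=-1),
        reduction="batchmean",
    )

    # attn_loss_component: pair every second teacher layer with a student
    # layer (assumed reading of layer_mapper=layer-2), then raw MSE per pair.
    mapped_teacher = teacher_attns[1::2]
    attn_loss = torch.stack([
        F.mse_loss(s, t) for s, t in zip(student_attns, mapped_teacher)
    ]).mean()

    return logits_weight * logits_loss + attn_weight * attn_loss
```

The weight of 5 on the attention term matches `attn_weight=5` in the run name of the log file added below.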
@@ -101,8 +101,8 @@ The following hyperparameters were used during training:
 - optimizer: `Adam with betas=(0.9,0.999) and epsilon=1e-08`
 - lr_scheduler_type: `polynomial`
 - num_epochs: `1.0`
-- distillation_objective: `DistillationObjective(logits_loss_component=LossComponent(label=logits, weight=1, loss_fn=kl), attn_loss_component=LossComponent(label=attn, weight=5, loss_fn=raw_mse, layer_mapper=layer-2, norm=…
-- lr_scheduler: `<torch.optim.lr_scheduler.LambdaLR object at …
+- distillation_objective: `DistillationObjective(logits_loss_component=LossComponent(label=logits, weight=1, loss_fn=kl), attn_loss_component=LossComponent(label=attn, weight=5, loss_fn=raw_mse, layer_mapper=layer-2, norm=layernorm_teacher_only_affine, projector=mlp))`
+- lr_scheduler: `<torch.optim.lr_scheduler.LambdaLR object at 0x7f9227c352a0>`
 - student_model_name_or_path: `None`
 - student_config_name_or_path: `distilbert/distilgpt2`
 - student_model_config: `None`
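The `LambdaLR` repr above is consistent with `lr_scheduler_type: polynomial`: in the `transformers` ecosystem the polynomial decay schedule is built as a `LambdaLR`. A minimal sketch, taking `learning_rate=0.0002`, `per_device_train_batch_size=16`, and `warmup_ratio=0` from the run name of the log file below; the stand-in model and derived step count are illustrative:

```python
import torch
from transformers import get_polynomial_decay_schedule_with_warmup

model = torch.nn.Linear(8, 8)  # stand-in for the student model

# Adam with betas=(0.9, 0.999) and epsilon=1e-08, as listed above.
optimizer = torch.optim.Adam(
    model.parameters(), lr=2e-4, betas=(0.9, 0.999), eps=1e-8
)

# warmup_ratio=0 -> zero warmup steps; 990,000 samples at batch size 16.
num_training_steps = 990_000 // 16  # 61,875 steps for one epoch
scheduler = get_polynomial_decay_schedule_with_warmup(
    optimizer, num_warmup_steps=0, num_training_steps=num_training_steps
)
print(type(scheduler).__name__)  # LambdaLR
```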
logs/attn_norm=layernorm_teacher_only_affine, attn_projector=mlp, attn_weight=5, learning_rate=0.0002, per_device_train_batch_size=16, warmup_ratio=0/events.out.tfevents.1725435353.e3f806ea38c9 ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:db90b083af9509b4ea492b530c588953c257fafbddb1f49fa9b546da7e7cbf96
+size 529
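The three added lines are a Git LFS pointer rather than the TensorBoard log itself: the repository stores only the pointer spec version, the object's sha256, and its size in bytes (529); the actual `events.out.tfevents` blob is resolved through LFS storage on download.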