End of training
README.md
CHANGED
@@ -44,7 +44,7 @@ More information needed
 
 # Resource Usage Comparison
 
-- VRAM Use: 15.
+- VRAM Use: 15.6974 GB
 
 # Distillation (Teacher -> Student) Architecture Difference:
 
@@ -85,7 +85,7 @@ Trained on 521,413,804 tokens from the [wikimedia/wikipedia](https://huggingface
 # Training Objective
 
 ```
-DistillationObjective(logits_loss_component=LossComponent(label=logits, weight=1, loss_fn=kl), attn_loss_component=LossComponent(label=attn, weight=5, loss_fn=raw_mse, layer_mapper=layer-2, norm=instancenorm, projector=
+DistillationObjective(logits_loss_component=LossComponent(label=logits, weight=1, loss_fn=kl), attn_loss_component=LossComponent(label=attn, weight=5, loss_fn=raw_mse, layer_mapper=layer-2, norm=instancenorm, projector=mlp))
 ```
 
 # Hyperparameters
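For readers decoding the objective string above: it combines a KL-divergence loss on the logits (weight 1) with a raw MSE loss on attention maps (weight 5), where both sides' attention maps are instance-normalized and the student side passes through an MLP projector; the `layer-2` mapper decides which teacher layers pair with which student layers. Below is a minimal sketch of such a loss, assuming pre-paired attention lists and a user-supplied projector. The names (`sketch_distillation_loss`, `projector`) and the projector dimensions are illustrative, not Distily's actual API.

```python
import torch
import torch.nn.functional as F

def sketch_distillation_loss(student_logits, teacher_logits,
                             student_attns, teacher_attns, projector):
    # Logits component (weight 1): KL(teacher || student) over the vocabulary.
    logits_loss = F.kl_div(
        F.log_softmax(student_logits, dim=-1),
        F.softmax(teacher_logits, dim=-1),
        reduction="batchmean",
    )

    # Attention component (weight 5): MSE between the projected,
    # instance-normalized student maps and the instance-normalized teacher
    # maps, averaged over the (already layer-mapped) pairs. Each map is
    # (batch, heads, seq, seq), which instance_norm treats as (N, C, H, W).
    attn_loss = 0.0
    for s_attn, t_attn in zip(student_attns, teacher_attns):
        s = F.instance_norm(projector(s_attn))
        t = F.instance_norm(t_attn)
        attn_loss = attn_loss + F.mse_loss(s, t)
    attn_loss = attn_loss / len(student_attns)

    return 1.0 * logits_loss + 5.0 * attn_loss

# Assumed stand-in for the "mlp" projector; 128 is a placeholder sequence
# length, since the real projector's shape is Distily-internal.
projector = torch.nn.Sequential(
    torch.nn.Linear(128, 128), torch.nn.GELU(), torch.nn.Linear(128, 128)
)
```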
@@ -101,9 +101,9 @@ The following hyperparameters were used during training:
 - optimizer: `Adam with betas=(0.9,0.999) and epsilon=1e-08`
 - lr_scheduler_type: `polynomial`
 - num_epochs: `1.0`
-- distillation_objective: `DistillationObjective(logits_loss_component=LossComponent(label=logits, weight=1, loss_fn=kl), attn_loss_component=LossComponent(label=attn, weight=5, loss_fn=raw_mse, layer_mapper=layer-2, norm=instancenorm, projector=
+- distillation_objective: `DistillationObjective(logits_loss_component=LossComponent(label=logits, weight=1, loss_fn=kl), attn_loss_component=LossComponent(label=attn, weight=5, loss_fn=raw_mse, layer_mapper=layer-2, norm=instancenorm, projector=mlp))`
 - train_embeddings: `True`
-- lr_scheduler: `<torch.optim.lr_scheduler.LambdaLR object at
+- lr_scheduler: `<torch.optim.lr_scheduler.LambdaLR object at 0x7f45bc70b520>`
 - student_model_name_or_path: `None`
 - student_config_name_or_path: `distilbert/distilgpt2`
 - student_model_config: `None`
@@ -135,4 +135,4 @@ The following hyperparameters were used during training:
 - Distily 0.4.1
 - Transformers 4.44.2
 - Pytorch 2.4.0+cu121
-- Datasets 2.
+- Datasets 2.21.0
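The `polynomial` scheduler type together with the `LambdaLR` repr above is consistent with the Transformers helper `get_polynomial_decay_schedule_with_warmup`, which returns a `LambdaLR`. A hedged sketch of an equivalent setup follows; the learning rate 0.0001 and warmup_ratio=0 come from the log directory name below, while the model and step count are placeholders.

```python
import torch
from transformers import get_polynomial_decay_schedule_with_warmup

model = torch.nn.Linear(8, 8)  # stand-in for the distilgpt2 student

# Matches the README: Adam with betas=(0.9, 0.999) and epsilon=1e-08.
optimizer = torch.optim.Adam(
    model.parameters(), lr=1e-4, betas=(0.9, 0.999), eps=1e-8
)

num_training_steps = 10_000  # placeholder; depends on token count and batch size
scheduler = get_polynomial_decay_schedule_with_warmup(
    optimizer,
    num_warmup_steps=0,  # warmup_ratio=0 in the run name
    num_training_steps=num_training_steps,
)

print(scheduler)  # <torch.optim.lr_scheduler.LambdaLR object at 0x...>
```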
logs/attn_norm=instancenorm, attn_projector=mlp, attn_weight=5, learning_rate=0.0001, per_device_train_batch_size=16, warmup_ratio=0/events.out.tfevents.1725235313.cfb07cadeb51
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f38bc1150584ce0d22d07c77205df6600be81ae8ea6a8e8d7f567d33ce86aedf
+size 529
model.safetensors
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:ef8287f443b8637c0fda9600281c6fc5d80702b0855aeebbf9cfde766abffece
 size 163832792
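The three-line bodies in these file diffs are Git LFS pointer files: the Hub stores large binaries out of band, so the commit records only the new content hash (`oid`) and byte `size` rather than the weights themselves. As a hedged aside, the `oid` can be checked against a downloaded copy; the local path below is a placeholder.

```python
import hashlib

def sha256_of(path: str) -> str:
    # Stream the file so large checkpoints need not fit in memory.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# oid from the pointer file above; assumes model.safetensors was downloaded.
expected = "ef8287f443b8637c0fda9600281c6fc5d80702b0855aeebbf9cfde766abffece"
print(sha256_of("model.safetensors") == expected)
```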
training_args.bin
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:0d5ed9ec9f8d1abd0f55c438cfbaa3466344c3dcfa376191ff7772bba5d1224f
 size 5624