End of training
README.md CHANGED
@@ -75,7 +75,7 @@ More information needed
 <br/>
 
 # Train Dataset
-Trained on 6,
+Trained on 6,814,337 tokens from the [wikimedia/wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia) dataset.
 
 - Num Samples: `9,900`
 - Subset: `20231101.en`
@@ -102,7 +102,7 @@ The following hyperparameters were used during training:
 - lr_scheduler_type: `polynomial`
 - num_epochs: `1.0`
 - distillation_objective: `DistillationObjective(logits_loss_component=LossComponent(label=logits, weight=1, loss_fn=kl), attn_loss_component=LossComponent(label=attn, weight=5, loss_fn=raw_mse, layer_mapper=layer-2, projector=orthogonal))`
-- lr_scheduler: `<torch.optim.lr_scheduler.LambdaLR object at
+- lr_scheduler: `<torch.optim.lr_scheduler.LambdaLR object at 0x7f28b3dda890>`
 - student_model_name_or_path: `None`
 - student_config_name_or_path: `distilbert/distilgpt2`
 - student_model_config: `None`
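As an aside on the `distillation_objective` recorded above: it combines a KL-divergence loss on the student/teacher logits (weight 1) with a raw MSE loss on their attention maps (weight 5). Below is a minimal PyTorch sketch of such a composite loss, not the framework's actual `DistillationObjective` code; the `distillation_loss` helper is hypothetical, and the layer mapper (`layer-2`) and orthogonal projector from the actual run are omitted for brevity.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_out, teacher_out, attn_weight=5.0):
    """Illustrative composite distillation loss (hypothetical sketch).

    Mirrors the recorded objective: KL on logits (weight 1) plus
    raw MSE on attention maps (weight 5).
    """
    # Logits component: KL divergence from the teacher's distribution
    # to the student's (student provides log-probabilities).
    kl = F.kl_div(
        F.log_softmax(student_out.logits, dim=-1),
        F.softmax(teacher_out.logits, dim=-1),
        reduction="batchmean",
    )

    # Attention component: raw MSE between paired attention maps.
    # The real run also maps student layers onto teacher layers and
    # applies an orthogonal projector; both are left out here.
    attn_mse = sum(
        F.mse_loss(s_attn, t_attn)
        for s_attn, t_attn in zip(student_out.attentions, teacher_out.attentions)
    ) / len(student_out.attentions)

    return kl + attn_weight * attn_mse
```

Here `student_out` and `teacher_out` are assumed to be Hugging Face model outputs produced with `output_attentions=True`.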
logs/attn_projector=orthogonal, attn_weight=5, extra_grad_stats=False, learning_rate=0.0002, per_device_train_batch_size=4, warmup_ratio=0/events.out.tfevents.1725314623.261a4d6fb516 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a8ad701ace49b87335c06a6a46de4e0913ba87aff3106ac62a2b347db03e6c27
+size 249
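The added `events.out.tfevents.*` entry is a Git LFS pointer (the three `+` lines are the pointer, not the log itself). After fetching the real file with `git lfs pull`, one way to inspect the recorded scalars is TensorBoard's event reader; a minimal sketch, assuming the `tensorboard` package is installed and the run directory from this commit:

```python
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

# Run directory added in this commit (relative to the repo root).
path = ("logs/attn_projector=orthogonal, attn_weight=5, extra_grad_stats=False, "
        "learning_rate=0.0002, per_device_train_batch_size=4, warmup_ratio=0")

acc = EventAccumulator(path)  # accepts a run directory or a single event file
acc.Reload()                  # parse the event file(s)

# List the recorded scalar tags, then print each (step, value) pair.
for tag in acc.Tags()["scalars"]:
    for event in acc.Scalars(tag):
        print(tag, event.step, event.value)
```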