lapp0 committed
Commit 2e10264
1 Parent(s): c5b52a1

Training in progress, step 123750

README.md CHANGED
@@ -44,7 +44,7 @@ More information needed
 
 # Resource Usage Comparison
 
- - VRAM Use: 15.7006 GB
+ - VRAM Use: 15.6974 GB
 
 # Distillation (Teacher -> Student) Architecture Difference:
 
@@ -85,7 +85,7 @@ Trained on 521,413,804 tokens from the [wikimedia/wikipedia](https://huggingface
 # Training Objective
 
 ```
- DistillationObjective(logits_loss_component=LossComponent(label=logits, weight=1, loss_fn=kl), attn_loss_component=LossComponent(label=attn, weight=5, loss_fn=raw_mse, layer_mapper=layer-2, norm=instance_teacher_only, projector=orthogonal))
+ DistillationObjective(logits_loss_component=LossComponent(label=logits, weight=1, loss_fn=kl), attn_loss_component=LossComponent(label=attn, weight=5, loss_fn=raw_mse, layer_mapper=layer-2, norm=instance_teacher_only, projector=mlp))
 ```
 
 # Hyperparameters
@@ -101,9 +101,9 @@ The following hyperparameters were used during training:
 - optimizer: `Adam with betas=(0.9,0.999) and epsilon=1e-08`
 - lr_scheduler_type: `polynomial`
 - num_epochs: `1.0`
- - distillation_objective: `DistillationObjective(logits_loss_component=LossComponent(label=logits, weight=1, loss_fn=kl), attn_loss_component=LossComponent(label=attn, weight=5, loss_fn=raw_mse, layer_mapper=layer-2, norm=instance_teacher_only, projector=orthogonal))`
+ - distillation_objective: `DistillationObjective(logits_loss_component=LossComponent(label=logits, weight=1, loss_fn=kl), attn_loss_component=LossComponent(label=attn, weight=5, loss_fn=raw_mse, layer_mapper=layer-2, norm=instance_teacher_only, projector=mlp))`
 - train_embeddings: `True`
- - lr_scheduler: `<torch.optim.lr_scheduler.LambdaLR object at 0x7f3be9049960>`
+ - lr_scheduler: `<torch.optim.lr_scheduler.LambdaLR object at 0x7f460c9e5c00>`
 - student_model_name_or_path: `None`
 - student_config_name_or_path: `distilbert/distilgpt2`
 - student_model_config: `None`
@@ -135,4 +135,4 @@ The following hyperparameters were used during training:
 - Distily 0.4.1
 - Transformers 4.44.2
 - Pytorch 2.4.0+cu121
- - Datasets 2.18.0
+ - Datasets 2.21.0
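
The substantive change in this README is the attention projector: `projector=orthogonal` becomes `projector=mlp` in the `DistillationObjective`. That objective string describes a two-part loss: KL divergence between student and teacher logits (weight 1) plus raw MSE between projected student attention maps and instance-normalized teacher attention maps (weight 5), paired by a `layer-2` layer mapper. The sketch below is a minimal, hypothetical rendering of such a loss, not Distily's actual implementation; `distillation_loss`, `attn_projector`, and the student-layer-i-to-teacher-layer-2i pairing are all assumptions.

```python
# Minimal sketch (NOT Distily's code) of the two-component objective above:
# KL on logits (weight 1) + raw MSE on attention maps (weight 5), with an
# MLP projector on the student side and instance norm on the teacher side.
import torch
import torch.nn.functional as F

def distillation_loss(student_out, teacher_out, attn_projector):
    # Logits component: KL(teacher || student) over the vocabulary.
    logits_loss = F.kl_div(
        F.log_softmax(student_out.logits, dim=-1),
        F.softmax(teacher_out.logits, dim=-1),
        reduction="batchmean",
    )

    # Attention component. "layer_mapper=layer-2" is assumed here to pair
    # student layer i with teacher layer 2*i (6-layer student, 12-layer teacher).
    attn_loss = torch.zeros((), device=student_out.logits.device)
    for i, s_attn in enumerate(student_out.attentions):
        t_attn = teacher_out.attentions[2 * i]
        # "norm=instance_teacher_only": instance-normalize teacher maps only.
        t_attn = F.instance_norm(t_attn)
        # "projector=mlp": pass student maps through a small trainable MLP
        # (attn_projector is assumed shape-preserving for this sketch).
        attn_loss = attn_loss + F.mse_loss(attn_projector(s_attn), t_attn)

    # Weights taken from the objective string: logits weight 1, attn weight 5.
    return 1.0 * logits_loss + 5.0 * attn_loss
```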
logs/attn_norm=instance_teacher_only, attn_projector=mlp, attn_weight=5, learning_rate=0.0001, per_device_train_batch_size=16, warmup_ratio=0/completed.flag ADDED
File without changes
logs/attn_norm=layernorm_teacher_only, attn_projector=mlp, attn_weight=5, learning_rate=0.0001, per_device_train_batch_size=8, warmup_ratio=0/events.out.tfevents.1725254715.cfb07cadeb51 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:34e0f5e981ad2d720b8fca7013c24cd578eeda68e205c0490887be4790b36189
+ size 59393288
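
The newly added `events.out.tfevents.*` entry is a TensorBoard log, committed as a Git LFS pointer (only its sha256 and size are stored in-repo). Once the file is pulled, its scalars can be read with TensorBoard's event reader; a minimal sketch, assuming the `tensorboard` package is installed and the committed directory name is used verbatim:

```python
# Sketch: read training scalars from the committed TensorBoard event file.
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

log_dir = ("logs/attn_norm=layernorm_teacher_only, attn_projector=mlp, "
           "attn_weight=5, learning_rate=0.0001, "
           "per_device_train_batch_size=8, warmup_ratio=0")

acc = EventAccumulator(log_dir)
acc.Reload()                          # parse the event file from disk
for tag in acc.Tags()["scalars"]:     # e.g. loss curves logged during training
    for event in acc.Scalars(tag):
        print(tag, event.step, event.value)
```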
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
- oid sha256:c27c12dd373815186446dd17dca22b6e22a1ed70937e3d16eec9e14fa437e03a
+ oid sha256:fe0c7367062b9930c2af270d67ccf35573c5171c85157ac9fe730fc8a6658c8f
 size 163832792
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
- oid sha256:1a35b789b65b72c979ea09891a897c066221bc1b218528892213765fe3266f14
+ oid sha256:282c83cda9341b422f4189a5a7127f6cba3be665f892404c9302b13ccc5176bb
 size 5624
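
`model.safetensors` and `training_args.bin` are likewise Git LFS pointers, so each diff touches only the `oid sha256:` line while the byte size is unchanged. After pulling, a downloaded file can be checked against the committed oid with the standard library alone (a generic sketch; the path is relative to the repo root):

```python
# Sketch: verify a pulled LFS object against the oid recorded in this commit.
import hashlib

EXPECTED = "fe0c7367062b9930c2af270d67ccf35573c5171c85157ac9fe730fc8a6658c8f"

h = hashlib.sha256()
with open("model.safetensors", "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # 1 MiB chunks
        h.update(chunk)

assert h.hexdigest() == EXPECTED, "file does not match the committed LFS oid"
```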