lapp0 committed on
Commit
2004adb
1 Parent(s): 2f0d70e

Training in progress, step 61875

README.md CHANGED
@@ -44,7 +44,7 @@ More information needed
 
  # Resource Usage Comparison
 
- - VRAM Use: 15.7177 GB
+ - VRAM Use: 15.7087 GB
 
  # Distillation (Teacher -> Student) Architecture Difference:
 
@@ -75,7 +75,7 @@ More information needed
  <br/>
 
  # Train Dataset
- Trained on 521,375,364 tokens from the [wikimedia/wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia) dataset.
+ Trained on 521,366,153 tokens from the [wikimedia/wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia) dataset.
 
  - Num Samples: `990,000`
  - Subset: `20231101.en`
@@ -85,7 +85,7 @@ Trained on 521,375,364 tokens from the [wikimedia/wikipedia](https://huggingface
  # Training Objective
 
  ```
- DistillationObjective(logits_loss_component=LossComponent(label=logits, weight=1, loss_fn=kl), attn_loss_component=LossComponent(label=attn, weight=5, loss_fn=raw_mse, layer_mapper=layer-2, norm=layernorm_teacher_only, projector=orthogonal))
+ DistillationObjective(logits_loss_component=LossComponent(label=logits, weight=1, loss_fn=kl), attn_loss_component=LossComponent(label=attn, weight=5, loss_fn=raw_mse, layer_mapper=layer-2, norm=layernorm_teacher_only, projector=mlp))
  ```
 
  # Hyperparameters
@@ -101,8 +101,8 @@ The following hyperparameters were used during training:
  - optimizer: `Adam with betas=(0.9,0.999) and epsilon=1e-08`
  - lr_scheduler_type: `polynomial`
  - num_epochs: `1.0`
- - distillation_objective: `DistillationObjective(logits_loss_component=LossComponent(label=logits, weight=1, loss_fn=kl), attn_loss_component=LossComponent(label=attn, weight=5, loss_fn=raw_mse, layer_mapper=layer-2, norm=layernorm_teacher_only, projector=orthogonal))`
- - lr_scheduler: `<torch.optim.lr_scheduler.LambdaLR object at 0x7f407f7e64a0>`
+ - distillation_objective: `DistillationObjective(logits_loss_component=LossComponent(label=logits, weight=1, loss_fn=kl), attn_loss_component=LossComponent(label=attn, weight=5, loss_fn=raw_mse, layer_mapper=layer-2, norm=layernorm_teacher_only, projector=mlp))`
+ - lr_scheduler: `<torch.optim.lr_scheduler.LambdaLR object at 0x7f6d9cec8700>`
  - student_model_name_or_path: `None`
  - student_config_name_or_path: `distilbert/distilgpt2`
  - student_model_config: `None`
@@ -131,6 +131,6 @@ The following hyperparameters were used during training:
 
  # Framework Versions
  - Distily 0.5.0
- - Transformers 4.44.2
+ - Transformers 4.44.1
  - Pytorch 2.4.0+cu121
- - Datasets 2.18.0
+ - Datasets 2.21.0
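
Note on the `projector=orthogonal` -> `projector=mlp` change above: the projector is the trainable module that maps student attention tensors into the teacher's space before the attention MSE is taken. A minimal sketch of how such a two-component objective might be computed is below; the tensor shapes, the `projector`/`teacher_norm` modules, and the layer pairing are illustrative assumptions, not Distily's actual implementation.

```python
import torch
import torch.nn.functional as F

def distillation_objective(student_logits, teacher_logits,
                           student_attns, teacher_attns,
                           projector, teacher_norm):
    """Sketch of kl-on-logits + raw_mse-on-attentions; NOT Distily's code."""
    # logits_loss_component: KL divergence over the vocabulary, weight=1
    logits_loss = F.kl_div(
        F.log_softmax(student_logits, dim=-1),
        F.log_softmax(teacher_logits, dim=-1),
        log_target=True,
        reduction="batchmean",
    )
    # attn_loss_component: raw MSE between projected student attentions and
    # LayerNorm-ed teacher attentions ("layernorm_teacher_only"), weight=5.
    # "layer_mapper=layer-2" is assumed here to pair student layer i with
    # teacher layer 2*i (distilgpt2 has half as many layers as gpt2).
    attn_loss = torch.stack([
        F.mse_loss(projector(s_attn), teacher_norm(teacher_attns[2 * i]))
        for i, s_attn in enumerate(student_attns)
    ]).mean()
    return 1.0 * logits_loss + 5.0 * attn_loss
```

Relative to an orthogonal projection, an MLP projector adds a nonlinearity and more free parameters, which is presumably the variation this run is testing.
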
config.json CHANGED
@@ -40,7 +40,7 @@
  }
  },
  "torch_dtype": "bfloat16",
- "transformers_version": "4.44.2",
+ "transformers_version": "4.44.1",
  "use_cache": true,
  "vocab_size": 50257
  }
generation_config.json CHANGED
@@ -2,5 +2,5 @@
  "_from_model_config": true,
  "bos_token_id": 50256,
  "eos_token_id": 50256,
- "transformers_version": "4.44.2"
+ "transformers_version": "4.44.1"
  }
logs/attn_norm=layernorm_teacher_only, attn_projector=mlp, attn_weight=5, learning_rate=0.0002, per_device_train_batch_size=16, warmup_ratio=0/completed.flag ADDED
File without changes
logs/attn_norm=layernorm_teacher_only_affine, attn_projector=mlp, attn_weight=5, learning_rate=0.0002, per_device_train_batch_size=16, warmup_ratio=0/events.out.tfevents.1725417759.e3f806ea38c9 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:858435876c27bb92e9b17e0e6cc3cb3a002f9cd89ed19547cfa2a8f8a25b597a
+ size 5809
logs/attn_norm=layernorm_teacher_only_affine, attn_projector=mlp, attn_weight=5, learning_rate=0.0002, per_device_train_batch_size=16, warmup_ratio=0/events.out.tfevents.1725419052.e3f806ea38c9 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:dd9d9e75ad1555345befb3c1f70a41f08fa960b0251b52fdc18b675e6cfc4946
+ size 31109632
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:5e7e86672f0405fb567dc092805df18c00d667d31fced82924308feaa00f74ef
+ oid sha256:fbeefd226b436e4591c03da38081b8a737c739d471a6c27f640a1c275a57aa9a
  size 163832792
training_args.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:60cce71e9a8864919f58d30f48c3d75459e74c524126b069ebdf50d2a91681dd
+ oid sha256:f44c08f1839b95c2e50f13d12d74873601728ffcbe2662209a30cbf75542b5c4
  size 5624
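
Note on the `lr_scheduler_type: polynomial` hyperparameter in the README diff above: in the Transformers API a polynomial decay schedule with warmup is built via `get_polynomial_decay_schedule_with_warmup`, which returns the `LambdaLR` object the README prints. Below is a hedged sketch assuming the values visible in this commit (learning rate 0.0002 and warmup_ratio=0 from the log directory names, 61,875 steps from the commit message); the exact arguments Distily passes are not shown in this diff.

```python
import torch
from transformers import get_polynomial_decay_schedule_with_warmup

# Placeholder parameter; the real run optimizes the distilgpt2 student's weights.
params = [torch.nn.Parameter(torch.zeros(1))]
optimizer = torch.optim.Adam(params, lr=2e-4, betas=(0.9, 0.999), eps=1e-8)
scheduler = get_polynomial_decay_schedule_with_warmup(
    optimizer,
    num_warmup_steps=0,        # warmup_ratio=0, per the log directory names
    num_training_steps=61875,  # "Training in progress, step 61875"
    power=1.0,                 # default power; linear decay toward lr_end
)
```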