---
base_model: distilbert/distilgpt2
datasets:
- wikimedia/wikipedia
library_name: Distily
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distily_validate_extra_grad_stats
  results: []
---

# Summary

Distilled with the [Distily](https://github.com/lapp0/distily) library using teacher model [gpt2](https://huggingface.co/gpt2) on dataset [wikimedia/wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia).
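
The student is a standard `GPT2LMHeadModel`, so it can be loaded and run with `transformers` like any other GPT-2 checkpoint. A minimal usage sketch follows; the repository id is a placeholder, substitute this model's actual Hub path.

```python
# Minimal usage sketch. The repo id is a placeholder; replace it with this
# model's actual Hugging Face Hub path.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "<namespace>/distily_validate_extra_grad_stats"  # placeholder

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype=torch.bfloat16)

inputs = tokenizer("Knowledge distillation is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
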
# Model Architecture:
- **Architecture**: `GPT2LMHeadModel`
- **Total Parameters**: 81,912,576
- **Data Type (dtype)**: torch.bfloat16
- **Model Size**: 0.16 GB

# Benchmark Metrics Comparison

No benchmark metrics were recorded for this run.

# Resource Usage Comparison

- VRAM Use: 7.4259 GB

# Distillation (Teacher -> Student) Architecture Difference:

- **Architecture**: `GPT2LMHeadModel` -> `GPT2LMHeadModel`
- **Total Parameters**: 124,439,808 -> 81,912,576
- **Data Type (dtype)**: torch.bfloat16 -> torch.bfloat16
- **Model Size**: 0.24 GB -> 0.16 GB

<details>
<summary>Module Diff Details</summary>

```diff
--- teacher model modules
+++ student model modules
@@ -4,7 +4,7 @@
     (wpe): Embedding(1024, 768)
     (drop): Dropout(p=0.1, inplace=False)
     (h): ModuleList(
-      (0-11): 12 x GPT2Block(
+      (0-5): 6 x GPT2Block(
         (ln_1): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
         (attn): GPT2FlashAttention2(
           (c_attn): Conv1D()
```

</details>
<br/>
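
The student keeps the teacher's hidden size, attention heads, and embedding tables but halves the number of transformer blocks (12 -> 6). A hedged sketch of how such a student can be instantiated from the `distilbert/distilgpt2` config is shown below; Distily's own initialization logic may differ.

```python
# Sketch: build a 6-layer GPT-2 student matching the architecture reported above.
# Based on the `student_config_name_or_path` hyperparameter listed further down;
# Distily's actual weight initialization may differ.
import torch
from transformers import AutoConfig, GPT2LMHeadModel

student_config = AutoConfig.from_pretrained("distilbert/distilgpt2")  # n_layer=6, n_embd=768
student = GPT2LMHeadModel(student_config).to(torch.bfloat16)

total_params = sum(p.numel() for p in student.parameters())
print(f"student parameters: {total_params:,}")  # ~81.9M, as reported above
```
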

# Train Dataset
Trained on 6,813,447 tokens from the [wikimedia/wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia) dataset.

- Num Samples: `9,900`
- Subset: `20231101.en`
- Split: `train`
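
The training sample can be approximated with the `datasets` library. Taking the first 10,000 rows and splitting off 1% with the training seed is an assumption based on the hyperparameters listed further down, not Distily's exact sampling code.

```python
# Sketch: reproduce the reported training subset with Hugging Face `datasets`.
# The take-first-10,000 selection and 1% split are assumptions based on the
# `dataset_sample_size` and `dataset_test_size` hyperparameters below.
from datasets import load_dataset

dataset = load_dataset("wikimedia/wikipedia", "20231101.en", split="train")
sample = dataset.select(range(10_000))                     # dataset_sample_size: 10000
splits = sample.train_test_split(test_size=0.01, seed=42)  # dataset_test_size: 0.01
print(splits["train"].num_rows, splits["test"].num_rows)   # 9900 / 100
```
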

# Training Objective

```
DistillationObjective(logits_loss_component=LossComponent(label=logits, weight=1, loss_fn=kl), attn_loss_component=LossComponent(label=attn, weight=5, loss_fn=raw_mse, layer_mapper=layer-2, projector=orthogonal))
```
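
In words: the student is trained to match the teacher's output distribution (KL divergence on logits, weight 1) and its attention maps (raw MSE, weight 5), with a `layer-2` mapper aligning the 6 student layers to teacher layers and an orthogonal projector where shapes differ. Below is a rough, illustrative sketch of that combined loss; it is not Distily's implementation, the pairing shown (student layer i -> teacher layer 2i+1) is an assumption, and the projector is omitted since the attention shapes already match here.

```python
# Illustrative sketch of the distillation objective (not Distily's implementation).
# Assumes both models were run with output_attentions=True.
import torch.nn.functional as F

def distillation_loss(student_out, teacher_out, attn_weight=5.0):
    # KL divergence between student and teacher next-token distributions (weight 1).
    s_logp = F.log_softmax(student_out.logits, dim=-1)
    t_prob = F.softmax(teacher_out.logits, dim=-1)
    logits_loss = F.kl_div(s_logp, t_prob, reduction="batchmean")

    # Raw MSE between attention maps; student layer i is paired with teacher
    # layer 2*i + 1 as a stand-in for the `layer-2` mapper.
    attn_loss = 0.0
    for i, s_attn in enumerate(student_out.attentions):
        attn_loss = attn_loss + F.mse_loss(s_attn, teacher_out.attentions[2 * i + 1])

    return logits_loss + attn_weight * attn_loss
```
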

# Hyperparameters
The following hyperparameters were used during training:

<details>
<summary>Expand</summary>

- learning_rate: `0.0002`
- train_batch_size: `4`
- eval_batch_size: `8`
- seed: `42`
- optimizer: `Adam with betas=(0.9,0.999) and epsilon=1e-08`
- lr_scheduler_type: `polynomial`
- num_epochs: `1.0`
- distillation_objective: `DistillationObjective(logits_loss_component=LossComponent(label=logits, weight=1, loss_fn=kl), attn_loss_component=LossComponent(label=attn, weight=5, loss_fn=raw_mse, layer_mapper=layer-2, projector=orthogonal))`
- lr_scheduler: `torch.optim.lr_scheduler.LambdaLR`
- student_model_name_or_path: `None`
- student_config_name_or_path: `distilbert/distilgpt2`
- student_model_config: `None`
- reinitialize_weights: `None`
- copy_teacher_modules: `[('lm_head', False)]`
- student_model_as_bitnet: `False`
- teacher_model_name_or_path: `gpt2`
- teacher_load_in_8bit: `False`
- teacher_load_in_4bit: `False`
- dataset_uri: `wikimedia/wikipedia`
- dataset_subset: `20231101.en`
- dataset_split: `train`
- dataset_column_name: `text`
- dataset_sample_size: `10000`
- dataset_test_size: `0.01`
- gradient_accumulation_steps: `1`
- weight_decay: `0.0`
- max_grad_norm: `1.0`
- warmup_ratio: `0`
- warmup_steps: `0`
- gradient_checkpointing: `True`

</details>
<br/>
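
The optimizer and schedule above map onto standard `torch` and `transformers` components. A hedged sketch follows; the student is re-instantiated only to make the snippet self-contained, and the total step count is a placeholder (in the actual run it follows from the dataset and batch size).

```python
# Sketch of the reported optimizer / LR schedule: Adam, lr=2e-4, polynomial
# decay, no warmup. `num_training_steps` is a placeholder value.
import torch
from transformers import AutoConfig, GPT2LMHeadModel, get_polynomial_decay_schedule_with_warmup

student = GPT2LMHeadModel(AutoConfig.from_pretrained("distilbert/distilgpt2"))

optimizer = torch.optim.Adam(
    student.parameters(), lr=2e-4, betas=(0.9, 0.999), eps=1e-8, weight_decay=0.0
)
scheduler = get_polynomial_decay_schedule_with_warmup(
    optimizer, num_warmup_steps=0, num_training_steps=1_000  # placeholder
)
```
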

# Framework Versions
- Distily 0.4.1
- Transformers 4.44.2
- Pytorch 2.3.0
- Datasets 2.21.0