dmcooller/neural-schevchenko-ft

Commit 1d97ff5
Parent(s): 597f29c
Committed by dmcooller

README.md CHANGED
@@ -1,9 +1,9 @@
 ---
-license: mit
+license: apache-2.0
 library_name: peft
 tags:
 - generated_from_trainer
-base_model: microsoft/phi-2
+base_model: TheBloke/Mistral-7B-Instruct-v0.2-GPTQ
 model-index:
 - name: neural-matia-ft
   results: []
@@ -14,9 +14,9 @@ should probably proofread and complete it, then remove this comment. -->
 
 # neural-matia-ft
 
-This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on an unknown dataset.
+This model is a fine-tuned version of [TheBloke/Mistral-7B-Instruct-v0.2-GPTQ](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.2-GPTQ) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.3371
+- Loss: 2.5362
 
 ## Model description
 
@@ -36,25 +36,27 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 0.0002
-- train_batch_size: 16
-- eval_batch_size: 16
+- train_batch_size: 8
+- eval_batch_size: 8
 - seed: 42
 - gradient_accumulation_steps: 4
-- total_train_batch_size: 64
+- total_train_batch_size: 32
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_steps: 2
-- num_epochs: 5
+- num_epochs: 6
+- mixed_precision_training: Native AMP
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss |
 |:-------------:|:-----:|:----:|:---------------:|
-| 2.7573        | 1.0   | 9    | 1.8932          |
-| 1.3209        | 2.0   | 18   | 0.6054          |
-| 0.5213        | 3.0   | 27   | 0.3828          |
-| 0.3903        | 4.0   | 36   | 0.3464          |
-| 0.3588        | 5.0   | 45   | 0.3371          |
+| 3.2884        | 0.93  | 7    | 2.8684          |
+| 2.5646        | 2.0   | 15   | 2.6798          |
+| 2.7782        | 2.93  | 22   | 2.6071          |
+| 2.3583        | 4.0   | 30   | 2.5528          |
+| 2.6542        | 4.93  | 37   | 2.5394          |
+| 2.2268        | 5.6   | 42   | 2.5362          |
 
 
 ### Framework versions
@@ -62,5 +64,5 @@ The following hyperparameters were used during training:
 - PEFT 0.10.0
 - Transformers 4.38.2
 - Pytorch 2.1.2
-- Datasets 2.16.0
+- Datasets 2.1.0
 - Tokenizers 0.15.2
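The updated hyperparameter list maps directly onto `transformers.TrainingArguments`. Below is a minimal sketch of the equivalent setup; the training script itself is not part of this commit, so `output_dir` and `evaluation_strategy` are assumptions for illustration, and only the named hyperparameters come from the README diff.

```python
from transformers import TrainingArguments

# Sketch of a TrainingArguments setup matching the updated model card.
# Only the listed hyperparameters come from the README diff; output_dir
# and evaluation_strategy are assumptions for illustration.
training_args = TrainingArguments(
    output_dir="neural-matia-ft",     # assumed; not recorded in the diff
    learning_rate=2e-4,
    per_device_train_batch_size=8,    # was 16 before this commit
    per_device_eval_batch_size=8,     # was 16 before this commit
    gradient_accumulation_steps=4,    # 8 x 4 = total_train_batch_size of 32
    num_train_epochs=6,               # was 5 before this commit
    lr_scheduler_type="linear",
    warmup_steps=2,
    seed=42,
    fp16=True,                        # the "Native AMP" mixed-precision line
    evaluation_strategy="epoch",      # assumed from the per-epoch eval losses
)
```

The optimizer line in the card (Adam with betas=(0.9,0.999) and epsilon=1e-08) matches the Trainer's default AdamW settings for this Transformers version, so no explicit `optim` argument should be needed.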
adapter_config.json CHANGED
@@ -1,8 +1,8 @@
 {
   "alpha_pattern": {},
   "auto_mapping": null,
-  "base_model_name_or_path": "microsoft/phi-2",
-  "bias": "all",
+  "base_model_name_or_path": "TheBloke/Mistral-7B-Instruct-v0.2-GPTQ",
+  "bias": "none",
   "fan_in_fan_out": false,
   "inference_mode": true,
   "init_lora_weights": true,
@@ -16,14 +16,11 @@
   "megatron_core": "megatron.core",
   "modules_to_save": null,
   "peft_type": "LORA",
-  "r": 32,
+  "r": 11,
   "rank_pattern": {},
   "revision": null,
   "target_modules": [
-    "k_proj",
-    "q_proj",
-    "v_proj",
-    "dense"
+    "q_proj"
   ],
   "task_type": "CAUSAL_LM",
   "use_dora": false,
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:88dd44df2c6fa668f634807a17ff6f6553c84daa1a8d911d87c613bde513327c
-size 87440736
+oid sha256:4f6ce7c477223d096b826f49d9c4e3bbb70f69a1efa10b0f61474476f38b0b7b
+size 11542872
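The drop in adapter size, from roughly 87 MB to 11.5 MB, is consistent with the config change. A back-of-envelope check, assuming fp32 adapter weights and Mistral-7B's shape (32 decoder layers, 4096 hidden size, square `q_proj`):

```python
# Rough sanity check on the new adapter file size (assumptions:
# fp32 weights, 32 decoder layers, 4096 hidden size, square q_proj).
layers, hidden, r = 32, 4096, 11
lora_params = layers * r * (hidden + hidden)  # A is r x 4096, B is 4096 x r
print(lora_params)      # 2883584 parameters
print(lora_params * 4)  # 11534336 bytes, close to the 11542872-byte file
```

The small remainder is the safetensors header and tensor metadata.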
runs/Apr08_13-16-04_52ebaf0dfa60/events.out.tfevents.1712582215.52ebaf0dfa60.35.0 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e8a31acd266ee84bf9bbde29c33888e2294a61a534f6a79a2bdeeb4f1f47af3d
+size 5235
runs/Apr08_13-18-45_52ebaf0dfa60/events.out.tfevents.1712582330.52ebaf0dfa60.35.1 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:96da9462d9591e028894e772ad086b1cf1f3d1f7ce78e34897a0e085eaa94d28
+size 8421
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:2223e83f9336243cc114fdf3e44ef4335fdff8b0d59b1f406f227096db3d23ae
+oid sha256:bfa77db5615744e4e59540a3862059b59e4eeaa792e3ae2d2dacea2bbee5afb2
 size 4920
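After this commit, using the adapter means loading the GPTQ-quantized Mistral base rather than phi-2. A hedged usage sketch (the repo id comes from the commit header; loading a GPTQ base through Transformers additionally requires the `optimum` and `auto-gptq` packages):

```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Loads the base recorded in adapter_config.json
# (TheBloke/Mistral-7B-Instruct-v0.2-GPTQ) and applies this repo's
# LoRA adapter on top. Requires optimum and auto-gptq for GPTQ weights.
model = AutoPeftModelForCausalLM.from_pretrained(
    "dmcooller/neural-schevchenko-ft",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("TheBloke/Mistral-7B-Instruct-v0.2-GPTQ")

prompt = "[INST] Hello! [/INST]"  # Mistral-Instruct prompt format
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```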