AugustGislerudRolfsen committed
Commit 78d25a9
1 Parent(s): eb5787e

Model save

README.md ADDED
@@ -0,0 +1,61 @@
+---
+library_name: peft
+license: llama3.2
+base_model: meta-llama/Llama-3.2-11B-Vision-Instruct
+tags:
+- trl
+- sft
+- generated_from_trainer
+model-index:
+- name: fine-tuned-visionllama_5
+  results: []
+---
+
+<!-- This model card has been generated automatically according to the information the Trainer had access to. You
+should probably proofread and complete it, then remove this comment. -->
+
+[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/august-gislerud-rolfsen-relu/Dummy/runs/9mfryeev)
+# fine-tuned-visionllama_5
+
+This model is a fine-tuned version of [meta-llama/Llama-3.2-11B-Vision-Instruct](https://huggingface.co/meta-llama/Llama-3.2-11B-Vision-Instruct) on an unknown dataset.
+It achieves the following results on the evaluation set:
+- eval_loss: 2.0710
+- eval_runtime: 8.0583
+- eval_samples_per_second: 0.62
+- eval_steps_per_second: 0.124
+- epoch: 0.0489
+- step: 425
+
+## Model description
+
+More information needed
+
+## Intended uses & limitations
+
+More information needed
+
+## Training and evaluation data
+
+More information needed
+
+## Training procedure
+
+### Training hyperparameters
+
+The following hyperparameters were used during training:
+- learning_rate: 0.0002
+- train_batch_size: 1
+- eval_batch_size: 8
+- seed: 42
+- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+- lr_scheduler_type: constant
+- lr_scheduler_warmup_ratio: 0.03
+- num_epochs: 2
+
+### Framework versions
+
+- PEFT 0.13.0
+- Transformers 4.45.1
+- Pytorch 2.2.2+cu121
+- Datasets 3.0.1
+- Tokenizers 0.20.3
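
The generated card stops short of a usage example. Below is a minimal inference sketch for this adapter, assuming it is published under a Hub repo id such as `AugustGislerudRolfsen/fine-tuned-visionllama_5` (hypothetical; substitute the real adapter location). It loads the frozen base model and attaches the LoRA weights with PEFT, matching the framework versions listed above.

```python
import torch
from PIL import Image
from transformers import AutoProcessor, MllamaForConditionalGeneration
from peft import PeftModel

base_id = "meta-llama/Llama-3.2-11B-Vision-Instruct"
adapter_id = "AugustGislerudRolfsen/fine-tuned-visionllama_5"  # hypothetical repo id

# Load the frozen base model, then attach the saved LoRA adapter on top of it.
model = MllamaForConditionalGeneration.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(model, adapter_id)
processor = AutoProcessor.from_pretrained(base_id)

# Build a chat-style prompt containing one image and one question.
image = Image.open("example.jpg")
messages = [
    {"role": "user", "content": [
        {"type": "image"},
        {"type": "text", "text": "Describe this image."},
    ]}
]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(image, prompt, return_tensors="pt").to(model.device)

output = model.generate(**inputs, max_new_tokens=128)
print(processor.decode(output[0], skip_special_tokens=True))
```

Note that only the adapter (roughly 24 MB, per the safetensors pointer below) is stored in this repository; the 11B base model is downloaded separately and is gated behind the llama3.2 license.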
adapter_config.json CHANGED
@@ -20,8 +20,8 @@
   "rank_pattern": {},
   "revision": null,
   "target_modules": [
-    "q_proj",
-    "v_proj"
+    "v_proj",
+    "q_proj"
   ],
   "task_type": "CAUSAL_LM",
   "use_dora": false,
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:141b7dc1a732738e873a9bb6e26889fe9b39efc1e08ea9a4ca4ceadefd393aed
+oid sha256:78584636ede9a36a020bf4fc93240f002220daa3dbc2779147eb7a39f187edc8
 size 23641256
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:05f741cc6ae801095ee7b66243e5dba82cf5e8ae4c0e6b612bcc4b98a093ce79
+oid sha256:e540f97775b099d85753089c65f68a2c4a692045a369e17ac2a4d5edccf54f79
 size 5496
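
Both binary files in this commit are Git LFS pointers: the repository stores only a `sha256` oid and a byte size, while the content itself lives in LFS storage. A downloaded copy can be checked against its pointer, as sketched below using the `training_args.bin` digest from this commit.

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 in chunks to avoid loading it whole."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Expected digest taken from the training_args.bin LFS pointer above.
expected = "e540f97775b099d85753089c65f68a2c4a692045a369e17ac2a4d5edccf54f79"
assert sha256_of("training_args.bin") == expected, "checksum mismatch"
```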