Improve metadata: add pipeline tag, library name, license, and Github link

#1
by nielsr - opened
Files changed (1): README.md (+12 −17)
````diff
@@ -1,20 +1,17 @@
 ---
-tags:
-- lora
-- llama
-- vision-language
-- peft
-- fine-tuned
-license: llama3.2
 base_model: meta-llama/Llama-3.2-11B-Vision-Instruct
-library_name: peft
-model_type: causal-lm
-datasets:
-- your-dataset-name # If applicable
+library_name: transformers
+license: llama3.2
+tags:
+- lora
+- llama
+- vision-language
+- peft
+- fine-tuned
 inference: false
+pipeline_tag: image-text-to-text
 ---
 
-
 # **lavender-llama-3.2-11b-lora**
 🚀 **LoRA fine-tuned model based on** `meta-llama/Llama-3.2-11B-Vision-Instruct`
 
@@ -27,6 +24,7 @@ This model retains the core capabilities of Llama-3.2 while incorporating Stable
 - **Fine-Tuned Model**: `lxasqjc/lavender-llama-3.2-11b-lora`
 - **Lavender Paper**: [Diffusion Instruction Tuning (arXiv)](https://arxiv.org/abs/2502.06814)
 - **Lavender Project Space**: [Diffusion Instruction Tuning](https://astrazeneca.github.io/vlm/)
+- **Github repository**: [Diffusion Instruction Tuning](https://github.com/AstraZeneca/vlm)
 - **Parameter Efficient Fine-Tuning (PEFT)**: Uses **LoRA** (Low-Rank Adaptation) to optimize model efficiency.
 - **License**: Llama 3.2 Community License (See [`LICENSE.txt`](https://huggingface.co/meta-llama/Llama-3.2-11B-Vision-Instruct/resolve/main/LICENSE.txt))
 
@@ -115,7 +113,7 @@ print(processor.decode(output[0]))
 - **LoRA Paper**: [Low-Rank Adaptation of Large Language Models](https://arxiv.org/abs/2106.09685)
 - **PEFT Documentation**: [Hugging Face PEFT](https://huggingface.co/docs/peft)
 - **Project Space**: [Diffusion Instruction Tuning](https://astrazeneca.github.io/vlm/)
-- **Paper**: [Diffusion Instruction Tuning (arXiv)](https://arxiv.org/abs/your-paper-link)
+- **Paper**: [Diffusion Instruction Tuning (arXiv)](https://arxiv.org/abs/2502.06814)
 
 ### **Citation**
 If you use this model or work in your research, please cite:
@@ -129,7 +127,4 @@ If you use this model or work in your research, please cite:
 primaryClass={cs.LG},
 url={https://arxiv.org/abs/2502.06814},
 }
-```
-
-
-
+```
````
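The metadata this PR introduces lives in the YAML front matter between the two `---` fences at the top of README.md. A minimal stdlib-only sketch of a sanity check for the new front matter (the metadata string is copied from this PR; the plain-string parsing here is purely illustrative, not the parser the Hub actually uses):

```python
# Minimal sanity check of the README front matter introduced by this PR.
# Stdlib string handling only; no YAML library is assumed.
readme = """\
---
base_model: meta-llama/Llama-3.2-11B-Vision-Instruct
library_name: transformers
license: llama3.2
tags:
- lora
- llama
- vision-language
- peft
- fine-tuned
inference: false
pipeline_tag: image-text-to-text
---
# **lavender-llama-3.2-11b-lora**
"""

# The front matter is the block between the first two '---' lines.
parts = readme.split("---\n")
front_matter = parts[1]

# Top-level keys are the lines that are not list items ('- ...').
keys = {line.split(":", 1)[0] for line in front_matter.splitlines()
        if line and not line.startswith("- ")}

# The keys this PR adds or keeps should all be present.
assert {"pipeline_tag", "library_name", "license", "tags"} <= keys
print(sorted(keys))
```

This is only a smoke test: it confirms the keys the PR adds (`pipeline_tag`, `library_name`, `license`) survive alongside the kept `base_model` and `inference` fields.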