---
license: llama3.2
---
This is a merge of the vision adapters from [meta-llama/Llama-3.2-11B-Vision-Instruct](https://huggingface.co/meta-llama/Llama-3.2-11B-Vision-Instruct) onto [mlabonne/Hermes-3-Llama-3.1-8B-lorablated](https://huggingface.co/mlabonne/Hermes-3-Llama-3.1-8B-lorablated).
Please respect the respective licenses of Meta Llama & Nous Research.
The method I used is detailed in [this post](https://www.reddit.com/r/LocalLLaMA/comments/1fzduyx/merging_llama_32_vision_adapters_onto_31_finetunes/). I also merged the tokenizer and generation configs.
Example Python code for the weight merge is available in [merge_vision_example.py](https://huggingface.co/grimulkan/Llama-3.2-90B-Vision-Hermes-3-lorablated-merge/blob/main/merge_vision_example.py), which works for both the 11B and 90B models.
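
For reference, here is a minimal sketch of the transplant idea from the linked post. It is not a copy of `merge_vision_example.py`; the attribute paths, the `transformers` (>= 4.45) Mllama layout, and the output directory name are assumptions made for illustration.

```python
# Minimal sketch of the adapter-transplant idea, NOT the exact script used for this repo.
# Assumption: the Mllama attribute layout of transformers >= 4.45.
import torch
from transformers import AutoModelForCausalLM, MllamaForConditionalGeneration

VISION_ID = "meta-llama/Llama-3.2-11B-Vision-Instruct"
TEXT_ID = "mlabonne/Hermes-3-Llama-3.1-8B-lorablated"  # any Llama-3.1-8B fine-tune

merged = MllamaForConditionalGeneration.from_pretrained(VISION_ID, torch_dtype=torch.bfloat16)
donor = AutoModelForCausalLM.from_pretrained(TEXT_ID, torch_dtype=torch.bfloat16)

# Llama-3.2 Vision is the corresponding 3.1 text model plus a vision tower and
# extra cross-attention layers interleaved into the decoder stack, so layer
# indices in the two models do not line up one-to-one.
cross_layers = set(merged.config.text_config.cross_attention_layers)
donor_layers = iter(donor.model.layers)

for idx, layer in enumerate(merged.language_model.model.layers):
    if idx in cross_layers:
        continue  # keep the cross-attention layer from the Vision-Instruct checkpoint
    # Replace this self-attention decoder layer with the fine-tune's weights.
    layer.load_state_dict(next(donor_layers).state_dict())

# The final norm and LM head also come from the fine-tune.
merged.language_model.model.norm.load_state_dict(donor.model.norm.state_dict())
merged.language_model.lm_head.load_state_dict(donor.lm_head.state_dict())

# The Vision model's embedding table has extra rows for image special tokens,
# so copy only the rows shared by both vocabularies.
with torch.no_grad():
    src = donor.model.embed_tokens.weight
    merged.language_model.model.embed_tokens.weight[: src.shape[0]] = src

merged.save_pretrained("my-vision-hermes-merge")  # output path is illustrative
```

Merging the tokenizer and generation configs mentioned above is a separate step (the Vision model's tokenizer adds image special tokens) and is not shown in this sketch.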
A 90B version of this merge is [available here](https://huggingface.co/grimulkan/Llama-3.2-90B-Vision-Hermes-3-lorablated-merge).