benjamin-paine committed
Commit bed4cd6 · 1 Parent(s): 6ff263f

update README

Files changed (1): README.md (+1 -10)
README.md CHANGED
@@ -6,15 +6,6 @@ license: apache-2.0
  <img src="https://github.com/user-attachments/assets/f965fd42-2a95-4552-9b5f-465fc4037a91" width="650" /><br />
  <em>An open source real-time AI inference engine for seamless scaling</em>
  </div>
- <hr/>
- <p align="center">
- <img src="https://img.shields.io/static/v1?label=painebenjamin&message=taproot&color=234b0e&logo=github" alt="painebenjamin - taproot">
- <img src="https://img.shields.io/github/stars/painebenjamin/taproot?style=social" alt="stars - taproot">
- <img src="https://img.shields.io/github/forks/painebenjamin/taproot?style=social" alt="forks - taproot"><br />
- <a href="https://github.com/painebenjamin/taproot/blob/main/LICENSE"><img src="https://img.shields.io/badge/License-Apache-234b0e" alt="License"></a>
- <a href="https://pypi.org/project/enfugue"><img alt="PyPI - Version" src="https://img.shields.io/pypi/v/taproot?color=234b0e"></a>
- <a href="https://pypi.org/project/enfugue"><img alt="PyPI - Downloads" src="https://img.shields.io/pypi/dm/taproot?logo=python&logoColor=white&color=234b0e"></a>
- </p>

  # About

@@ -605,4 +596,4 @@ There are many deployment possibilities across networks, with configuration avai
  <h3>llava-v1-6-34b-q3-k-m</h3>
  <table><tbody><tr><td>Name</td><td>LLaVA V1.6 34B (Q3-K-M) Image Captioning</td></tr><tr><td>Author</td><td>Haotian Liu, Chunyuan Li, Yuheng Li, and Yong Jae Lee<br />Published in arXiv, vol. 2310.03744, “Improved Baselines with Visual Instruction Tuning”, 2023<br />https://arxiv.org/abs/2310.03744</td></tr><tr><td>License</td><td>Meta Llama 2 Community License (https://www.llama.com/llama2/license/)</td></tr><tr><td>Files</td><td><ol><li><a href="https://huggingface.co/benjamin-paine/taproot-common/resolve/main/visual-question-answering-llava-v1-6-34b-q3-k-m.gguf" target="_blank">visual-question-answering-llava-v1-6-34b-q3-k-m.gguf (16.65 GB)</a></li><li><a href="https://huggingface.co/benjamin-paine/taproot-common/resolve/main/image-encoding-clip-llava-mmproj-v1-6-34b.fp16.gguf" target="_blank">image-encoding-clip-llava-mmproj-v1-6-34b.fp16.gguf (699.99 MB)</a></li></ol><p><strong>Total Size</strong>: 17.35 GB</p></td></tr><tr><td>Minimum VRAM</td><td>18.06 GB</td></tr></tbody></table>
  <h3>moondream-v2 (default)</h3>
- <table><tbody><tr><td>Name</td><td>Moondream V2 Image Captioning</td></tr><tr><td>Author</td><td>Vikhyat Korrapati<br />Published in Hugging Face, vol. 10.57967/hf/3219, “Moondream2”, 2024<br />https://huggingface.co/vikhyatk/moondream2</td></tr><tr><td>License</td><td>Apache License 2.0 (https://www.apache.org/licenses/LICENSE-2.0)</td></tr><tr><td>Files</td><td><ol><li><a href="https://huggingface.co/benjamin-paine/taproot-common/resolve/main/visual-question-answering-moondream-v2.fp16.gguf" target="_blank">visual-question-answering-moondream-v2.fp16.gguf (2.84 GB)</a></li><li><a href="https://huggingface.co/benjamin-paine/taproot-common/resolve/main/image-encoding-clip-moondream-v2-mmproj.fp16.gguf" target="_blank">image-encoding-clip-moondream-v2-mmproj.fp16.gguf (909.78 MB)</a></li></ol><p><strong>Total Size</strong>: 3.75 GB</p></td></tr><tr><td>Minimum VRAM</td><td>4.44 GB</td></tr></tbody></table>
+ <table><tbody><tr><td>Name</td><td>Moondream V2 Image Captioning</td></tr><tr><td>Author</td><td>Vikhyat Korrapati<br />Published in Hugging Face, vol. 10.57967/hf/3219, “Moondream2”, 2024<br />https://huggingface.co/vikhyatk/moondream2</td></tr><tr><td>License</td><td>Apache License 2.0 (https://www.apache.org/licenses/LICENSE-2.0)</td></tr><tr><td>Files</td><td><ol><li><a href="https://huggingface.co/benjamin-paine/taproot-common/resolve/main/visual-question-answering-moondream-v2.fp16.gguf" target="_blank">visual-question-answering-moondream-v2.fp16.gguf (2.84 GB)</a></li><li><a href="https://huggingface.co/benjamin-paine/taproot-common/resolve/main/image-encoding-clip-moondream-v2-mmproj.fp16.gguf" target="_blank">image-encoding-clip-moondream-v2-mmproj.fp16.gguf (909.78 MB)</a></li></ol><p><strong>Total Size</strong>: 3.75 GB</p></td></tr><tr><td>Minimum VRAM</td><td>4.44 GB</td></tr></tbody></table>
 
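Both model tables in this hunk list GGUF weights stored as plain files in the benjamin-paine/taproot-common repository on the Hugging Face Hub, so they can be prefetched before the task first runs. A minimal sketch for the default moondream-v2 files, assuming the huggingface_hub client is installed; the repo id and filenames come from the table above, while the download loop is illustrative and not part of taproot's own API:

```python
# Minimal sketch: prefetch the two GGUF files listed in the moondream-v2
# table using huggingface_hub (pip install huggingface_hub). The repo id and
# filenames are taken from the table above; everything else is illustrative.
from huggingface_hub import hf_hub_download

REPO_ID = "benjamin-paine/taproot-common"
FILES = [
    "visual-question-answering-moondream-v2.fp16.gguf",   # 2.84 GB
    "image-encoding-clip-moondream-v2-mmproj.fp16.gguf",  # 909.78 MB
]

for filename in FILES:
    # Downloads into the standard Hugging Face cache and returns the local path;
    # files already in the cache are not fetched again.
    path = hf_hub_download(repo_id=REPO_ID, filename=filename)
    print(f"{filename} -> {path}")
```

Because hf_hub_download caches by repo and filename, rerunning the sketch is a no-op once the roughly 3.75 GB of files are present locally.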