Strange behaviour of Llama3.2-vision - it behaves like text model · 1 · #9 opened 3 months ago by jirkazcech
How to use it in ollama · 1 · #8 opened 3 months ago by vejahetobeu

Exporting to GGUF · 5 · #7 opened 3 months ago by krasivayakoshka

Training with images · 4 · #6 opened 4 months ago by Khawn2u

AttributeError: Model MllamaForConditionalGeneration does not support BitsAndBytes quantization yet. · 1 · #5 opened 4 months ago by luizhsalazar
How much VRAM is needed? · 3 · #4 opened 5 months ago by Dizzl500
How to load this model? · 3 · #3 opened 5 months ago by benTow07
Can you post the script that was used to quantize this model please? · 10 · #2 opened 5 months ago by ctranslate2-4you