amitha/mllava-llama2-zh

Task: Visual Question Answering
Tags: Transformers · Safetensors · Chinese · llava_llama · llava · vlm · custom_code
Paper: arXiv:2406.11665
License: apache-2.0
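The `custom_code` tag means this repository ships its own modeling files (`llava_llama.py`, `llava_arch.py`, etc.) rather than relying on classes built into Transformers, so loading it requires `trust_remote_code=True`. A minimal loading sketch, assuming the standard Transformers AutoClass path works for this repo (the actual inference entry point for the vision side may differ):

```python
REPO_ID = "amitha/mllava-llama2-zh"

def load_model():
    # Lazy import so this sketch only needs `transformers` when actually called.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # trust_remote_code=True executes the repo's own modeling code
    # (llava_llama.py and friends) instead of a built-in model class.
    tokenizer = AutoTokenizer.from_pretrained(REPO_ID, trust_remote_code=True)
    model = AutoModelForCausalLM.from_pretrained(
        REPO_ID,
        trust_remote_code=True,
        torch_dtype="auto",  # pick the dtype stored in the checkpoint shards
    )
    return tokenizer, model
```

Note that the three safetensors shards total roughly 14.7 GB, so calling `load_model()` downloads and materializes a full 7B-scale checkpoint.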
Branch: main · 1 contributor · History: 6 commits
Latest commit: 683f879 (verified), "Update README.md" by amitha, 5 months ago
File                              Size        Last commit                   Updated
.gitattributes                    1.52 kB     initial commit                5 months ago
README.md                         149 Bytes   Update README.md              5 months ago
clip_encoder.py                   3.67 kB     Upload LlavaLlamaForCausalLM  5 months ago
config.json                       1.36 kB     Upload LlavaLlamaForCausalLM  5 months ago
constants.py                      941 Bytes   Upload LlavaLlamaForCausalLM  5 months ago
generation_config.json            183 Bytes   Upload LlavaLlamaForCausalLM  5 months ago
llava_arch.py                     18.1 kB     Upload LlavaLlamaForCausalLM  5 months ago
llava_llama.py                    5.48 kB     Upload LlavaLlamaForCausalLM  5 months ago
model-00001-of-00003.safetensors  4.94 GB (LFS)  Upload LlavaLlamaForCausalLM  5 months ago
model-00002-of-00003.safetensors  4.95 GB (LFS)  Upload LlavaLlamaForCausalLM  5 months ago
model-00003-of-00003.safetensors  4.85 GB (LFS)  Upload LlavaLlamaForCausalLM  5 months ago
model.safetensors.index.json      73.2 kB     Upload LlavaLlamaForCausalLM  5 months ago
multimodal_encoder.py             1.16 kB     Upload LlavaLlamaForCausalLM  5 months ago
multimodal_projector.py           2.03 kB     Upload LlavaLlamaForCausalLM  5 months ago
utils.py                          8.28 kB     Upload LlavaLlamaForCausalLM  5 months ago