---
inference: false
license: cc-by-4.0
datasets:
- taesiri/video-game-question-answering
- taesiri/video-game-question-answering-mixtral-8x7b-instruct-v0-1
language:
- en
pipeline_tag: visual-question-answering
---

<br>
<br>

# LLaVA-VideoGameVQA - Work In Progress - Model Card

## Model details

**Model type:**
LLaVA is an open-source chatbot trained by fine-tuning LLaMA/Vicuna on GPT-generated multimodal instruction-following data.
It is an auto-regressive language model based on the transformer architecture.

**Model date:**
LLaVA-v1.5-13B-LoRA was trained in December 2023.

**LoRA Weights:**
- [Checkpoint 1](https://huggingface.co/taesiri/llava-videogame-qa-lora-wip/tree/main/lora-checkpoints-1) trained on 28K question-answering pairs. Base Model: `liuhaotian/llava-v1.5-13b`
- [Checkpoint 5](https://huggingface.co/taesiri/llava-videogame-qa-lora-wip/tree/main/lora-checkpoints-5) trained on 74K question-answering pairs. Base Model: `liuhaotian/llava-v1.5-13b`
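
**Loading the LoRA weights (sketch):**
The checkpoints above are LoRA adapters, not standalone models, so they have to be applied on top of `liuhaotian/llava-v1.5-13b` before inference. The snippet below is a minimal sketch, assuming the official LLaVA codebase (https://github.com/haotian-liu/LLaVA) is installed and that the checkpoint folders follow its usual LoRA layout; the folder name and download filter are taken from the checkpoint links above.

```python
import os

from huggingface_hub import snapshot_download
from llava.mm_utils import get_model_name_from_path
from llava.model.builder import load_pretrained_model

# Download a single LoRA checkpoint folder from this repository (checkpoint 5 here).
snapshot_dir = snapshot_download(
    repo_id="taesiri/llava-videogame-qa-lora-wip",
    allow_patterns=["lora-checkpoints-5/*"],
)
lora_path = os.path.join(snapshot_dir, "lora-checkpoints-5")

# LLaVA's loader applies the LoRA adapter to the base model when the checkpoint
# name contains "lora" and a model_base is supplied.
tokenizer, model, image_processor, context_len = load_pretrained_model(
    model_path=lora_path,
    model_base="liuhaotian/llava-v1.5-13b",
    model_name=get_model_name_from_path(lora_path),
)
```

Once loaded, the model can be used with LLaVA's standard inference utilities, for example the CLI: `python -m llava.serve.cli --model-path <checkpoint folder> --model-base liuhaotian/llava-v1.5-13b --image-file <screenshot>`.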