---
inference: false
license: cc-by-4.0
datasets:
- taesiri/video-game-question-answering
language:
- en
pipeline_tag: visual-question-answering
---

<br>
<br>

# LLaVA-VideoGameVQA - Work In Progress - Model Card

## Model details

**Model type:**
LLaVA is an open-source chatbot trained by fine-tuning LLaMA/Vicuna on GPT-generated multimodal instruction-following data.
It is an auto-regressive language model, based on the transformer architecture.

**Model date:**
LLaVA-v1.5-13B-LoRA was trained in December 2023.

**LoRA Weights:**
- [Checkpoint 1](https://huggingface.co/taesiri/llava-videogame-qa-lora-wip/tree/main/lora-checkpoints-1) trained on 28K question-answering pairs. Base Model: `liuhaotian/llava-v1.5-13b`
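
Since the checkpoint above is a LoRA adapter rather than a full model, it has to be loaded on top of the base model. Below is a minimal sketch following the query pattern from the [LLaVA codebase](https://github.com/haotian-liu/LLaVA); it assumes the LLaVA repo is installed and that the checkpoint folder has been downloaded locally (e.g. via `huggingface_hub.snapshot_download`). The local checkpoint path, question, and screenshot filename are placeholders.

```python
# Minimal sketch: query a game screenshot with the LoRA checkpoint merged onto
# the base model via the LLaVA codebase. Paths below are placeholders; adjust
# them to wherever you downloaded the checkpoint and your input image.
from llava.eval.run_llava import eval_model

args = type("Args", (), {
    "model_path": "./lora-checkpoints-1",       # downloaded LoRA adapter folder
    "model_base": "liuhaotian/llava-v1.5-13b",  # base model the adapter was trained on
    "model_name": "llava-v1.5-13b-lora",        # "lora" in the name triggers adapter merging
    "query": "What game is shown in this screenshot?",
    "conv_mode": None,
    "image_file": "screenshot.png",             # placeholder image path
    "sep": ",",
    "temperature": 0,
    "top_p": None,
    "num_beams": 1,
    "max_new_tokens": 512,
})()

eval_model(args)
```

Note that `model_name` must contain `lora` for the LLaVA loader to apply the adapter to the base weights before inference.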