---
inference: false
license: cc-by-4.0
datasets:
- taesiri/video-game-question-answering
- taesiri/video-game-question-answering-mixtral-8x7b-instruct-v0-1
language:
- en
pipeline_tag: visual-question-answering
---

# LLaVA-VideoGameVQA - Work In Progress - Model Card

## Model details

**Model type:**
LLaVA is an open-source chatbot trained by fine-tuning LLaMA/Vicuna on GPT-generated multimodal instruction-following data.
It is an auto-regressive language model, based on the transformer architecture.

**Model date:**
LLaVA-v1.5-13B-LoRA was trained in December 2023.

**LoRA weights:**
 - [Checkpoint 1](https://huggingface.co/taesiri/llava-videogame-qa-lora-wip/tree/main/lora-checkpoints-1): trained on 28K question-answering pairs. Base model: `liuhaotian/llava-v1.5-13b`
 - [Checkpoint 5](https://huggingface.co/taesiri/llava-videogame-qa-lora-wip/tree/main/lora-checkpoints-5): trained on 74K question-answering pairs. Base model: `liuhaotian/llava-v1.5-13b`