---
license: apache-2.0
datasets:
  - 5CD-AI/LLaVA-CoT-o1-Instruct
  - HuggingFaceM4/the_cauldron
  - AnyModal/flickr30k
  - openbmb/RLAIF-V-Dataset
base_model:
  - deepseek-ai/DeepSeek-R1-Distill-Llama-8B
  - google/vit-large-patch32-384
library_name: transformers
pipeline_tag: image-text-to-text
tags:
  - vqa
  - vlm
---

# mehmetkeremturkcan/DeepSeek-LLaVA-Instruct

## DeepSeer: Vision Language Models with Reasoning

Vision language models with chain-of-thought reasoning are only just starting to emerge. This model is a proof of concept: a vision language model trained with thinking-enabled chat templates, built on the DeepSeek-R1 distilled models.

Note that this model will not always emit thinking tokens, because high-quality chain-of-thought data is still scarce outside science-focused contexts.
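
The thinking behaviour comes from the chat template of the DeepSeek-R1 language backbone, which wraps the model's reasoning in `<think>`/`</think>` tags before the final answer. Below is a minimal, illustrative sketch of rendering such a prompt with the backbone's tokenizer; the user message is made up, and the full seers pipeline additionally injects image features.

```python
from transformers import AutoTokenizer

# The language backbone carries the thinking-enabled chat template.
tokenizer = AutoTokenizer.from_pretrained("deepseek-ai/DeepSeek-R1-Distill-Llama-8B")

messages = [{"role": "user", "content": "What is shown in the image?"}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# DeepSeek-R1-style models emit their reasoning inside <think>...</think> before answering.
print(prompt)
```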

## Setup

```bash
pip install git+https://github.com/facebookresearch/schedule_free.git
pip install peft
git clone https://github.com/mkturkcan/seers.git
cd seers/seers/
git clone https://huggingface.co/mehmetkeremturkcan/DeepSeek-LLaVA-Instruct
```
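
As an optional sanity check (illustrative only), you can confirm that the two Python dependencies installed above resolve before running the scripts:

```python
# Optional environment check: both imports should succeed after the installs above.
import peft
import schedulefree  # installed from facebookresearch/schedule_free

print("peft version:", peft.__version__)
print("schedule-free optimizer available:", schedulefree.AdamWScheduleFree.__name__)
```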

## Test

Run, in the `seers/seers` folder:

```bash
python predict_llava.py
```
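
The exact console output is defined by `predict_llava.py` in the seers repository. If you want to separate the model's reasoning from its final answer yourself, here is a minimal sketch that assumes DeepSeek-R1-style `<think>` tags; the example string is made up.

```python
def split_thinking(generated: str) -> tuple[str, str]:
    """Split DeepSeek-R1-style output into (reasoning, answer)."""
    if "</think>" not in generated:
        return "", generated.strip()          # the model skipped the thinking block
    thinking, answer = generated.split("</think>", 1)
    return thinking.replace("<think>", "").strip(), answer.strip()

reasoning, answer = split_thinking("<think>The photo shows a dog on grass.</think>A dog.")
print(reasoning)  # The photo shows a dog on grass.
print(answer)     # A dog.
```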

## Train

The seers training code is public! Run:

```bash
python train_cot_mixed.py
```
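
The setup above installs the schedule-free optimizer, and one detail is easy to miss if you adapt the training script or write your own loop: schedule-free optimizers keep their own train/eval state, which must be switched alongside the model's. A minimal, self-contained sketch follows; the linear model and random batch are placeholders, not the seers training setup.

```python
import torch
import schedulefree

model = torch.nn.Linear(8, 2)                      # placeholder, stands in for the fine-tuned VLM
optimizer = schedulefree.AdamWScheduleFree(model.parameters(), lr=1e-3)

model.train()
optimizer.train()                                  # schedule-free optimizers track their own mode
for step in range(10):
    x, y = torch.randn(4, 8), torch.randint(0, 2, (4,))
    loss = torch.nn.functional.cross_entropy(model(x), y)
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

model.eval()
optimizer.eval()                                   # switch to eval before validation or saving
```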

## Training Details

This model is a fine-tuned version of deepseek-ai/DeepSeek-R1-Distill-Llama-8B (with google/vit-large-patch32-384 as the vision encoder), trained on the 5CD-AI/LLaVA-CoT-o1-Instruct dataset using the seers codebase.
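
If you want to look at the fine-tuning data itself, it can be streamed through the `datasets` library. A quick, illustrative peek (field names vary per dataset, so this only prints the keys of one sample and assumes a `train` split exists):

```python
from datasets import load_dataset

# Stream a single sample instead of downloading the full dataset.
dataset = load_dataset("5CD-AI/LLaVA-CoT-o1-Instruct", split="train", streaming=True)
sample = next(iter(dataset))
print(sample.keys())  # inspect which text/image fields the dataset provides
```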