wisdomik committed · verified · da5ae82 · Parent(s): f0d5c0a

Update README.md

Files changed (1): README.md (+63 −0)
README.md CHANGED

---
license: mit
inference: false
datasets:
- wisdomik/QUILT-LLaVA-Instruct-107K
- wisdomik/QuiltVQA_RED
pipeline_tag: text-generation
tags:
- medical
- histopathology
---

<br>
<br>

<p align="center">
<img src="https://quilt-llava.github.io/static/images/teaser.png" alt="fig2" width="60%"/>
</p>

# Quilt-LLaVA Model Card

## Model details

**Model type:**
[Quilt-LLaVA](https://quilt-llava.github.io/) is an open-source chatbot trained by fine-tuning LLaMA/Vicuna on images sourced from histopathology educational videos and on GPT-generated multimodal instruction-following data.
It is an auto-regressive language model based on the transformer architecture.
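
Since this card sets `inference: false`, the weights are typically run through the LLaVA codebase rather than the hosted inference widget. Below is a minimal sketch assuming Quilt-LLaVA follows the upstream LLaVA v1.5 conventions (https://github.com/haotian-liu/LLaVA); the repo id `wisdomik/Quilt-Llava-v1.5-7b`, the image path, and the prompt are illustrative assumptions, not confirmed by this card.

```python
# Minimal sketch: querying Quilt-LLaVA via the upstream LLaVA v1.5 codebase.
# Assumes the weights follow LLaVA conventions; the repo id, image path,
# and prompt below are illustrative placeholders.
from llava.eval.run_llava import eval_model
from llava.mm_utils import get_model_name_from_path

model_path = "wisdomik/Quilt-Llava-v1.5-7b"  # assumed Hugging Face repo id

args = type("Args", (), {
    "model_path": model_path,
    "model_base": None,
    "model_name": get_model_name_from_path(model_path),
    "query": "Describe the tissue and any notable morphology in this patch.",
    "conv_mode": None,
    "image_file": "patch.png",  # path to a local histopathology image
    "sep": ",",
    "temperature": 0.2,
    "top_p": None,
    "num_beams": 1,
    "max_new_tokens": 512,
})()

eval_model(args)  # prints the model's answer to stdout
```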

**Citation**
```bibtex
@article{seyfioglu2023quilt,
  title={Quilt-LLaVA: Visual Instruction Tuning by Extracting Localized Narratives from Open-Source Histopathology Videos},
  author={Seyfioglu, Mehmet Saygin and Ikezogwo, Wisdom O and Ghezloo, Fatemeh and Krishna, Ranjay and Shapiro, Linda},
  journal={arXiv preprint arXiv:2312.04746},
  year={2023}
}
```

**Model date:**
Quilt-LLaVA-v1.5-7B was trained in November 2023.

**Paper or resources for more information:**
https://quilt-llava.github.io/

## License
Llama 2 is licensed under the LLAMA 2 Community License,
Copyright (c) Meta Platforms, Inc. All Rights Reserved.

**Where to send questions or comments about the model:**
https://github.com/quilt-llava/quilt-llava.github.io/issues

## Intended use
**Primary intended uses:**
The primary use of Quilt-LLaVA is research on medical large multimodal models and chatbots.

**Primary intended users:**
The primary intended users of these models are AI researchers.

We expect the model to be used primarily by researchers to better understand the robustness, generalization, capabilities, biases, and constraints of large vision-language generative models for histopathology.

## Training dataset
- 723K filtered image-text pairs from QUILT-1M (https://quilt1m.github.io/).
- 107K GPT-generated multimodal instruction-following samples from QUILT-Instruct (https://huggingface.co/datasets/wisdomik/QUILT-LLaVA-Instruct-107K), which can be browsed as in the sketch below.
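
For quick inspection of the instruction-tuning data, here is a minimal sketch using the Hugging Face `datasets` library; it assumes the dataset exposes a standard `train` split.

```python
# Minimal sketch: browse QUILT-Instruct with the Hugging Face `datasets` library.
# Assumes the dataset exposes a standard "train" split.
from datasets import load_dataset

instruct = load_dataset("wisdomik/QUILT-LLaVA-Instruct-107K", split="train")
print(len(instruct))  # ~107K instruction-following samples
print(instruct[0])    # one GPT-generated multimodal conversation
```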

## Evaluation dataset
A collection of four academic VQA histopathology benchmarks.