Update README.md
---
base_model:
- ibm-granite/granite-vision-3.2-2b
---

# MISHANM/ibm-granite-granite-vision-3.2-2b-fp16

The MISHANM/ibm-granite-granite-vision-3.2-2b-fp16 model is a vision-language model for image-to-text generation, an fp16 variant of ibm-granite/granite-vision-3.2-2b. It transforms visual inputs into coherent textual descriptions.

## Model Details
1. Language: English
2. Tasks: Image-to-Text Generation

### Model Example Output

This is the model inference output:

![Model inference output](https://cdn-uploads.huggingface.co/production/uploads/66a534998b9e38ed7e6fb8f4/qNkYpTdkmhm6WsGzAEyyM.png)

## Getting Started

To begin using the model, ensure you have the necessary dependencies (the example below also imports torch, Pillow, and Gradio):

```shell
pip install "transformers>=4.49" torch pillow gradio
```
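
The version floor matters: passing images through `processor.apply_chat_template` relies on a recent `transformers` release. As an optional sanity check (a minimal sketch, not part of the original card):

```python
# Optional: verify the installed transformers meets the >= 4.49 floor.
import transformers
from packaging import version

installed = version.parse(transformers.__version__)
assert installed >= version.parse("4.49"), (
    f"transformers {transformers.__version__} is too old; upgrade to >= 4.49"
)
```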

## Usage

Use the code below to get started with the model. The example wraps inference in a small Gradio app:

```python
import gradio as gr
import torch
from PIL import Image
from transformers import AutoProcessor, AutoModelForVision2Seq

device = "cuda" if torch.cuda.is_available() else "cpu"

model_path = "MISHANM/ibm-granite-granite-vision-3.2-2b-fp16"
processor = AutoProcessor.from_pretrained(model_path)
model = AutoModelForVision2Seq.from_pretrained(model_path, ignore_mismatched_sizes=True).to(device)


def process_image_and_prompt(image_path, prompt):
    # Load the uploaded image
    image = Image.open(image_path).convert("RGB")

    # Build a chat-style conversation containing the image and the text prompt
    # (a PIL image goes under the "image" key; "url" is for URL strings)
    conversation = [
        {
            "role": "user",
            "content": [
                {"type": "image", "image": image},
                {"type": "text", "text": prompt},
            ],
        },
    ]

    # Tokenize the conversation and move the tensors to the target device
    inputs = processor.apply_chat_template(
        conversation,
        add_generation_prompt=True,
        tokenize=True,
        return_dict=True,
        return_tensors="pt",
    ).to(device)

    # Generate a response and decode it to text
    output = model.generate(**inputs, max_new_tokens=100)
    return processor.decode(output[0], skip_special_tokens=True)


# Create the Gradio interface
iface = gr.Interface(
    fn=process_image_and_prompt,
    inputs=[
        gr.Image(type="filepath", label="Upload Image"),
        gr.Textbox(lines=2, placeholder="Enter your prompt here...", label="Prompt"),
    ],
    outputs="text",
    title="Granite Vision: Advanced Image-to-Text Generation Model",
    description="Upload an image and enter a text prompt to get a response from the model.",
)

# Launch the Gradio app
iface.launch(share=True)
```
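
If you don't need a UI, the same pipeline works from a plain script. The sketch below is illustrative: `sample.jpg` and the prompt text are placeholders, and the half-precision note in the comments is inferred from the fp16 repo name rather than stated by this card:

```python
import torch
from PIL import Image
from transformers import AutoProcessor, AutoModelForVision2Seq

device = "cuda" if torch.cuda.is_available() else "cpu"
model_path = "MISHANM/ibm-granite-granite-vision-3.2-2b-fp16"

processor = AutoProcessor.from_pretrained(model_path)
model = AutoModelForVision2Seq.from_pretrained(
    model_path,
    ignore_mismatched_sizes=True,
    # Optionally pass torch_dtype=torch.float16 on GPU to halve memory;
    # if you do, cast the processor's floating-point outputs to match.
).to(device)

# "sample.jpg" is a placeholder; point this at any local image.
image = Image.open("sample.jpg").convert("RGB")
conversation = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": image},
            {"type": "text", "text": "Describe this image."},
        ],
    },
]
inputs = processor.apply_chat_template(
    conversation,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(device)

output = model.generate(**inputs, max_new_tokens=100)
print(processor.decode(output[0], skip_special_tokens=True))
```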

## Uses

### Direct Use

This model is ideal for converting images into descriptive text, making it valuable for creative projects, content creation, and artistic exploration.

### Out-of-Scope Use

The model is not intended for generating explicit or harmful content. It may also face challenges with highly abstract or nonsensical prompts.

## Bias, Risks, and Limitations

The model may reflect biases present in its training data, potentially resulting in stereotypical or biased outputs. Users should be aware of these limitations and review generated content for accuracy and appropriateness.

### Recommendations

Users are encouraged to critically evaluate the model's outputs, especially in sensitive contexts, to ensure they meet the desired standards of accuracy and appropriateness.

## Citation Information

```bibtex
@misc{MISHANM/ibm-granite-granite-vision-3.2-2b-fp16,
  author    = {Mishan Maurya},
  title     = {Introducing Image to Text Generation model},
  year      = {2025},
  publisher = {Hugging Face},
  journal   = {Hugging Face repository}
}
```