RaushanTurganbay (HF staff) committed
Commit 986edb7 · verified · 1 Parent(s): be17bfd

Update pipeline example

Files changed (1):
  1. README.md +8 -18
README.md CHANGED
@@ -39,35 +39,25 @@ The model supports multi-image and multi-prompt generation. Meaning that you can
 Below we used [`"llava-hf/llava-interleave-qwen-0.5b-hf"`](https://huggingface.co/llava-hf/llava-interleave-qwen-0.5b-hf) checkpoint.
 
 ```python
-from transformers import pipeline, AutoProcessor
-from PIL import Image
-import requests
+from transformers import pipeline
 
-model_id = "llava-hf/llava-interleave-qwen-7b-dpo-hf"
-pipe = pipeline("image-to-text", model=model_id)
-processor = AutoProcessor.from_pretrained(model_id)
-
-url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/ai2d-demo.jpg"
-image = Image.open(requests.get(url, stream=True).raw)
-
-# Define a chat history and use `apply_chat_template` to get correctly formatted prompt
-# Each value in "content" has to be a list of dicts with types ("text", "image")
-conversation = [
+pipe = pipeline("image-text-to-text", model="llava-hf/llava-interleave-qwen-7b-dpo-hf")
+messages = [
     {
         "role": "user",
         "content": [
+            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/ai2d-demo.jpg"},
             {"type": "text", "text": "What does the label 15 represent? (1) lava (2) core (3) tunnel (4) ash cloud"},
-            {"type": "image"},
         ],
     },
 ]
-prompt = processor.apply_chat_template(conversation, add_generation_prompt=True)
 
-outputs = pipe(image, prompt=prompt, generate_kwargs={"max_new_tokens": 200})
-print(outputs)
+out = pipe(text=messages, max_new_tokens=20)
+print(out)
+>>> [{'input_text': [{'role': 'user', 'content': [{'type': 'image', 'url': 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/ai2d-demo.jpg'}, {'type': 'text', 'text': 'What does the label 15 represent? (1) lava (2) core (3) tunnel (4) ash cloud'}]}], 'generated_text': 'Lava'}]
 ```
 
+
 ### Using pure `transformers`:
 
 Below is an example script to run generation in `float16` precision on a GPU device:
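
For readers following the change: the updated example passes a chat-style `messages` list to the pipeline instead of a raw prompt string plus a PIL image. A minimal sketch of that message structure, using a hypothetical `build_user_message` helper (not a `transformers` API), which can be checked without downloading the model:

```python
# Hypothetical helper (not part of transformers): build one user turn in the
# nested chat format used above — a dict with a "role" and a "content" list
# of typed entries ("image" with a URL, "text" with the prompt).
def build_user_message(image_url: str, text: str) -> dict:
    return {
        "role": "user",
        "content": [
            {"type": "image", "url": image_url},
            {"type": "text", "text": text},
        ],
    }

messages = [
    build_user_message(
        "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/ai2d-demo.jpg",
        "What does the label 15 represent? (1) lava (2) core (3) tunnel (4) ash cloud",
    )
]

# Every content entry is a dict carrying a "type" key, which is what the
# chat template relies on to interleave images and text.
assert all("type" in part for part in messages[0]["content"])
print(messages[0]["role"])  # user
```

The same `messages` list would then be passed as `pipe(text=messages, max_new_tokens=20)` in the updated example.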