How to use this for thousands of images at good speed?
#11 by the-sanyam - opened
Should I use a technique like multithreading? Is there any other alternative, process, or resource I should use?
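A minimal sketch of the multithreading idea: split the image paths into batches and dispatch the batches to a thread pool. Note that `run_inference` below is a hypothetical placeholder, not a real API — you would replace it with your actual model call (and for a single GPU, batching requests into one call usually matters far more than the number of threads).

```python
from concurrent.futures import ThreadPoolExecutor

def run_inference(batch):
    # Hypothetical placeholder: replace with your actual model call.
    # Returns one caption per image path in the batch.
    return [f"caption for {path}" for path in batch]

def chunked(items, size):
    """Split a list into consecutive batches of at most `size` items."""
    return [items[i:i + size] for i in range(0, len(items), size)]

def process_images(paths, batch_size=32, workers=4):
    batches = chunked(paths, batch_size)
    results = []
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # pool.map preserves batch order, so results line up with paths.
        for batch_out in pool.map(run_inference, batches):
            results.extend(batch_out)
    return results

# Example: 1000 dummy image paths.
outputs = process_images([f"img_{i}.jpg" for i in range(1000)])
```

Threads help mostly with I/O (loading/decoding images); the GPU-bound part is better served by a batching inference engine such as vLLM, as suggested below.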
From https://github.com/QwenLM/Qwen2-VL
Deployment
We recommend using vLLM for fast Qwen2-VL deployment and inference. You need to use vllm>=0.6.1 to enable Qwen2-VL support. You can also use our official docker image.
Installation
pip install git+https://github.com/huggingface/transformers@21fac7abba2a37fae86106f87fcf9974fd1e3830
pip install accelerate
pip install qwen-vl-utils
# Change to your CUDA version
CUDA_VERSION=cu121
pip install 'vllm==0.6.1' --extra-index-url https://download.pytorch.org/whl/${CUDA_VERSION}
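Once vLLM is installed, the fast path for thousands of images is offline batched inference: pass all requests to `LLM.generate` at once and let vLLM's continuous batching keep the GPU saturated. The sketch below assumes the `Qwen/Qwen2-VL-7B-Instruct` checkpoint and the Qwen2-VL chat prompt layout (`<|vision_start|><|image_pad|><|vision_end|>`); verify both against the model card before relying on them.

```python
def build_requests(images, question="Describe this image."):
    """Build one vLLM request dict per image.

    The prompt template below is the Qwen2-VL chat layout as I
    understand it (an assumption -- check the model card)."""
    prompt = (
        "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
        "<|im_start|>user\n<|vision_start|><|image_pad|><|vision_end|>"
        f"{question}<|im_end|>\n<|im_start|>assistant\n"
    )
    return [
        {"prompt": prompt, "multi_modal_data": {"image": img}}
        for img in images
    ]

if __name__ == "__main__":
    from PIL import Image
    from vllm import LLM, SamplingParams

    llm = LLM(model="Qwen/Qwen2-VL-7B-Instruct")
    paths = [f"img_{i}.jpg" for i in range(1000)]  # your image files
    images = [Image.open(p) for p in paths]
    # One generate() call over all requests: vLLM batches internally.
    outputs = llm.generate(build_requests(images),
                           SamplingParams(max_tokens=128))
    for out in outputs:
        print(out.outputs[0].text)
```

This avoids hand-rolled multithreading for the GPU side entirely; threads remain useful only for overlapping image loading with inference.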