Rename README.md to { "script_id": 1, "parameter_id": 1 }

## Use vLLM

```bash
# Install vLLM from pip:
pip install vllm

# Load and run the model:
vllm serve "deepseek-ai/DeepSeek-R1-Distill-Qwen-32B"

# Call the server using curl:
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "deepseek-ai/DeepSeek-R1-Distill-Qwen-32B",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
```

## Use Docker images

```bash
# Deploy with docker on Linux:
docker run --runtime nvidia --gpus all \
  --name my_vllm_container \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HUGGING_FACE_HUB_TOKEN=<secret>" \
  -p 8000:8000 \
  --ipc=host \
  vllm/vllm-openai:latest \
  --model deepseek-ai/DeepSeek-R1-Distill-Qwen-32B

# Load and run the model:
docker exec -it my_vllm_container bash -c "vllm serve deepseek-ai/DeepSeek-R1-Distill-Qwen-32B"

# Call the server using curl:
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "deepseek-ai/DeepSeek-R1-Distill-Qwen-32B",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
```

## Quick Links

- Read the vLLM documentation
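## Call the server from Python

Both the pip and Docker setups expose the model over vLLM's OpenAI-compatible REST API, so the official OpenAI Python client can replace the curl calls above. Below is a minimal sketch of the same chat request from Python, assuming the server from the steps above is listening on localhost:8000 and the `openai` package is installed (`pip install openai`):

```python
# Query the local vLLM server through its OpenAI-compatible endpoint.
# Assumptions: the server above is running on localhost:8000 and was
# started without --api-key, so any placeholder key is accepted.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # point the client at the local server
    api_key="EMPTY",                      # placeholder; unused without --api-key
)

response = client.chat.completions.create(
    model="deepseek-ai/DeepSeek-R1-Distill-Qwen-32B",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)
```

Passing `stream=True` to the same call streams tokens back incrementally, which helps with a 32B model where a full response can take several seconds.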

#1 opened by 9x25dillon

Ready to merge: this branch is ready to be merged automatically.
