Deploying a production-ready service with GGUF on your AWS account
by samagra-tensorfuse
Hi people,
Over the past few weeks we have been running tons of PoCs with enterprises trying to deploy DeepSeek R1. The most popular combination so far has been the Unsloth GGUF quants on 4x L40S.
We just dropped a guide to deploying it on serverless GPUs in your own cloud: https://tensorfuse.io/docs/guides/integrations/llama_cpp
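For context, the two knobs we tuned below are standard llama.cpp server flags. Here is a minimal launch sketch; the binary path, model filename, and port are assumptions on our part, and the linked guide has the actual deployment config:

```python
# Sketch: launching llama.cpp's server with the two flags discussed below.
# Binary path, GGUF filename, and port are illustrative assumptions.
import subprocess

subprocess.run([
    "./llama-server",
    "--model", "DeepSeek-R1-UD-IQ1_S.gguf",  # hypothetical Unsloth quant filename
    "--n-gpu-layers", "50",                   # layers offloaded to the 4x L40S GPUs
    "--ctx-size", "10240",                    # 10k context window
    "--port", "8080",
])
```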
- Single-request throughput: 24 tok/s
- Context size: 5k
We also ran multiple experiments to figure out the right trade-off between context size and throughput. You can vary the "--n-gpu-layers" and "--ctx-size" parameters and measure tokens per second for each scenario (see the measurement sketch after the list). Here are the results:
- GPU layers 30, context 10k: 6.3 tok/s
- GPU layers 40, context 10k: 8.5 tok/s
- GPU layers 50, context 10k: 12 tok/s
- At GPU layers > 50, the 10k context window no longer fits.
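If you want to reproduce these numbers, here is a minimal sketch of one way to measure single-request tok/s, assuming the server is reachable on localhost:8080 via llama.cpp's OpenAI-compatible API; the prompt and max_tokens are arbitrary:

```python
# Minimal tok/s measurement sketch against llama.cpp's OpenAI-compatible
# endpoint. Host, port, prompt, and max_tokens are assumptions.
import time
import requests

start = time.perf_counter()
resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "messages": [{"role": "user", "content": "Explain KV caching briefly."}],
        "max_tokens": 256,
    },
    timeout=600,
)
elapsed = time.perf_counter() - start

completion_tokens = resp.json()["usage"]["completion_tokens"]
# Rough throughput: the timing includes prompt processing, so this slightly
# understates pure generation speed on long prompts.
print(f"{completion_tokens / elapsed:.1f} tok/s")
```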