aigeek0x0 committed on
Commit 50ba801 · verified · 1 Parent(s): f3cd32c

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -31,7 +31,7 @@ In order to leverage instruction fine-tuning, your prompt should be surrounded b
 ## Model Usage
 You can try it out for free using this [notebook](https://www.kaggle.com/metheaigeek/radiantloom-mixtral-8x7b-fusion).
 
-For more powerful GPU usage and faster inference, you can deploy it on a Runpod GPU instance using our [one-click Runpod template](https://www.runpod.io/console/gpu-secure-cloud?ref=80eh3891&template=ch3txp7g1c) (Our Referral Link. Please consider Supporting). This template provides you with an OpenAI-compatible API endpoint that you can integrate into your existing codebase designed for OpenAI APIs. To learn more about the deployment process and API endpoint, consult the deployment guide provided [here](https://github.com/aigeek0x0/Radiantloom-Mixtral-8X7B-Fusion/blob/main/deployment-guide.md).
+For more powerful GPU usage and faster inference, you can deploy it on a Runpod GPU instance using our [one-click Runpod template](https://www.runpod.io/console/gpu-secure-cloud?ref=80eh3891&template=ch3txp7g1c) (Our Referral Link. Please consider Supporting). This template provides you with an OpenAI-compatible API endpoint that you can integrate into your existing codebase designed for OpenAI APIs. To learn more about the deployment process and API endpoint, consult the deployment guide provided [here](https://github.com/aigeek0x0/Radiantloom-Mixtral-8X7B-Fusion/blob/main/runpod-deployment-guide.md).
 
 
 ## Inference Code
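The changed line describes an OpenAI-compatible API endpoint exposed by the Runpod template. As a rough sketch of what "integrate into your existing codebase designed for OpenAI APIs" means in practice, the snippet below builds and sends a standard `/chat/completions` request using only the Python standard library. The `BASE_URL`, model name, and API key are placeholders, not values from this repository; consult the linked deployment guide for the real endpoint details.

```python
import json
import urllib.request

# Placeholder — substitute the endpoint URL your Runpod deployment exposes.
BASE_URL = "https://your-runpod-endpoint/v1"


def build_chat_request(prompt: str,
                       model: str = "radiantloom-mixtral-8x7b-fusion") -> dict:
    """Build a request body in the OpenAI chat-completions wire format.

    The model name here is a placeholder; use whatever name your
    deployment registers the model under.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 256,
    }


def chat(prompt: str, api_key: str = "not-needed") -> str:
    """POST the request to the endpoint and return the assistant's reply."""
    body = json.dumps(build_chat_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Because the wire format matches OpenAI's, existing client code (including the official `openai` Python package, via its `base_url` setting) can typically be pointed at such an endpoint without other changes.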