<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="utf-8" />
    <title>orca_mini_v3_13B-GGML (q5_K_S)</title>
  </head>
  <body>
    <h1>orca_mini_v3_13B-GGML (q5_K_S)</h1>
    <p>
      This GGML model is hosted in Hugging Face Docker Spaces and served
      through an OpenAI-compatible API by the
      <a href="https://github.com/abetlen/llama-cpp-python">llama-cpp-python</a>
      package. The Space also hosts interactive API documentation to make
      integration straightforward.
    </p>
    <ul>
      <li>
        API endpoint:
        <a href="https://limcheekin-orca-mini-v3-13b-ggml.hf.space/v1"
          >https://limcheekin-orca-mini-v3-13b-ggml.hf.space/v1</a
        >
      </li>
      <li>
        API documentation:
        <a href="https://limcheekin-orca-mini-v3-13b-ggml.hf.space/docs"
          >https://limcheekin-orca-mini-v3-13b-ggml.hf.space/docs</a
        >
      </li>
    </ul>
    <p>
      If you find this Space useful, please consider starring it. Stars
      directly support the application for a community GPU grant, which would
      improve the Space's performance and availability.
    </p>
  </body>
</html>
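Since the endpoint above is OpenAI-compatible, it can be called like any OpenAI-style chat completion API. The sketch below builds and sends a request with only the Python standard library; the `model` value is a placeholder assumption (llama-cpp-python serves a single model and typically ignores this field), and the exact generation parameters accepted may vary with the server version.

```python
import json
from urllib import request

# Base URL of the Space's OpenAI-compatible API (from the page above).
BASE_URL = "https://limcheekin-orca-mini-v3-13b-ggml.hf.space/v1"


def build_chat_request(prompt, model="orca_mini_v3_13b", max_tokens=256):
    """Build the URL and JSON payload for an OpenAI-style chat completion.

    The model name here is an assumed placeholder; a single-model
    llama-cpp-python server generally ignores it.
    """
    url = f"{BASE_URL}/chat/completions"
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }
    return url, payload


def chat(prompt):
    """Send one chat turn to the Space and return the assistant's reply text."""
    url, payload = build_chat_request(prompt)
    req = request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        body = json.load(resp)
    # Standard OpenAI response shape: choices[0].message.content
    return body["choices"][0]["message"]["content"]
```

Usage is then a single call such as `chat("Explain GGML quantization in one sentence.")`; the same endpoint can also be used by pointing an OpenAI client library's base URL at the address above.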