---
title: orca_mini_v3_13B-GGML (q5_K_S)
colorFrom: purple
colorTo: blue
sdk: docker
app_file: index.html
models:
- TheBloke/orca_mini_v3_13B-GGML
tags:
- inference api
- openai-api compatible
- llama-cpp-python
- orca_mini_v3_13B
- ggml
pinned: false
---
# orca_mini_v3_13B-GGML (q5_K_S)
Powered by the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) package, this Hugging Face Docker Space hosts the GGML model behind an OpenAI-compatible API. The Space ships with full API documentation to make integration straightforward.
- API endpoint: https://limcheekin-orca-mini-v3-13b-ggml.hf.space/v1
- API docs: https://limcheekin-orca-mini-v3-13b-ggml.hf.space/docs
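As a minimal sketch of how a client might talk to the endpoint above, the following uses only the Python standard library to POST an OpenAI-style chat-completions request. The helper names (`build_chat_request`, `chat`) are illustrative, not part of the Space; the request/response shapes follow the OpenAI chat-completions format that llama-cpp-python's server exposes.

```python
import json
import urllib.request

# The Space's OpenAI-compatible base URL (from the README above).
BASE_URL = "https://limcheekin-orca-mini-v3-13b-ggml.hf.space/v1"

def build_chat_request(prompt: str, max_tokens: int = 128) -> dict:
    """Build an OpenAI-style chat-completions request body."""
    return {
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

def chat(prompt: str) -> str:
    """POST the prompt to the Space and return the assistant's reply."""
    body = json.dumps(build_chat_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    # The reply text lives in the first choice's message content.
    return data["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(chat("What is GGML?"))
```

Any OpenAI-compatible client (for example, the official `openai` SDK with a custom base URL) should work the same way against this endpoint.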
If you find this Space useful, please consider starring it. Community engagement supports the application for a community GPU grant, which would improve the Space's capabilities and accessibility.