limcheekin committed
Commit 7652cb7 · 1 Parent(s): 28ee0f2

feat: converted readme.md to index.html

Files changed (4):
  1. Dockerfile +1 -1
  2. README.md +1 -9
  3. index.html +30 -3
  4. main.py +5 -5
Dockerfile CHANGED
@@ -19,7 +19,7 @@ RUN mkdir model && \

  COPY ./start_server.sh ./
  COPY ./main.py ./
- COPY ./README.md ./
+ COPY ./index.html ./

  # Make the server start script executable
  RUN chmod +x ./start_server.sh
README.md CHANGED
@@ -3,7 +3,6 @@ title: orca_mini_v3_13B-GGML (q5_K_S)
  colorFrom: purple
  colorTo: blue
  sdk: docker
- app_file: index.html
  models:
  - TheBloke/orca_mini_v3_13B-GGML
  tags:
@@ -17,11 +16,4 @@ pinned: false

  # orca_mini_v3_13B-GGML (q5_K_S)

- With the utilization of the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) package, we are excited to introduce the GGML model hosted in the Hugging Face Docker Spaces, made accessible through an OpenAI-compatible API. This space includes comprehensive API documentation to facilitate seamless integration.
-
- - The API endpoint:
- https://limcheekin-orca-mini-v3-13b-ggml.hf.space/v1
- - The API doc:
- https://limcheekin-orca-mini-v3-13b-ggml.hf.space/docs
-
- If you find this resource valuable, your support in the form of starring the space would be greatly appreciated. Your engagement plays a vital role in furthering the application for a community GPU grant, ultimately enhancing the capabilities and accessibility of this space.
+ Please refer to the [index.html](index.html) for more information.
index.html CHANGED
@@ -1,10 +1,37 @@
  <!DOCTYPE html>
  <html>
  <head>
-   <title>Test Page</title>
+   <title>orca_mini_v3_13B-GGML (q5_K_S)</title>
  </head>
  <body>
-   <h1>Hello, World!</h1>
-   <p>This is a simple HTML test page.</p>
+   <h1>orca_mini_v3_13B-GGML (q5_K_S)</h1>
+   <p>
+     With the utilization of the
+     <a href="https://github.com/abetlen/llama-cpp-python">llama-cpp-python</a>
+     package, we are excited to introduce the GGML model hosted in the Hugging
+     Face Docker Spaces, made accessible through an OpenAI-compatible API. This
+     space includes comprehensive API documentation to facilitate seamless
+     integration.
+   </p>
+   <ul>
+     <li>
+       The API endpoint:
+       <a href="https://limcheekin-orca-mini-v3-13b-ggml.hf.space/v1"
+         >https://limcheekin-orca-mini-v3-13b-ggml.hf.space/v1</a
+       >
+     </li>
+     <li>
+       The API doc:
+       <a href="https://limcheekin-orca-mini-v3-13b-ggml.hf.space/docs"
+         >https://limcheekin-orca-mini-v3-13b-ggml.hf.space/docs</a
+       >
+     </li>
+   </ul>
+   <p>
+     If you find this resource valuable, your support in the form of starring
+     the space would be greatly appreciated. Your engagement plays a vital role
+     in furthering the application for a community GPU grant, ultimately
+     enhancing the capabilities and accessibility of this space.
+   </p>
  </body>
  </html>
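The landing page above points clients at an OpenAI-compatible API under /v1. As a minimal sketch of what that implies, assuming the Space is reachable and that llama-cpp-python's server exposes its usual /v1/completions route, a completion request could look like the following; the prompt text and sampling values are arbitrary examples, and the exact response fields can differ between server versions.

# Minimal sketch of calling the OpenAI-compatible endpoint documented in index.html.
# Assumes the Space is running and that the standard /v1/completions route is exposed;
# the prompt and sampling parameters below are illustrative, not taken from the repo.
import requests

BASE_URL = "https://limcheekin-orca-mini-v3-13b-ggml.hf.space/v1"

response = requests.post(
    f"{BASE_URL}/completions",
    json={
        "prompt": "List three uses of an OpenAI-compatible API.",
        "max_tokens": 128,
        "temperature": 0.7,
    },
    timeout=300,
)
response.raise_for_status()
print(response.json()["choices"][0]["text"])

The interactive documentation at the /docs URL listed above shows the full set of routes and parameters the running server actually accepts, including any prompt template the model expects.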
main.py CHANGED
@@ -1,5 +1,4 @@
  from llama_cpp.server.app import create_app, Settings
- # from fastapi.staticfiles import StaticFiles
  from fastapi.responses import HTMLResponse
  import os

@@ -12,15 +11,16 @@ app = create_app(
      )
  )

- # app.mount("/static", StaticFiles(directory="static"), name="static")
-

  @app.get("/", response_class=HTMLResponse)
  async def read_items():
-     with open("README.md", "r") as f:
+     with open("index.html", "r") as f:
          content = f.read()
      return content

  if __name__ == "__main__":
      import uvicorn
-     uvicorn.run(app, host=os.environ["HOST"], port=os.environ["PORT"])
+     uvicorn.run(app,
+                 host=os.environ["HOST"],
+                 port=int(os.environ["PORT"])
+                 )
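For reference, the serving pattern this commit lands on (read index.html from the working directory, return it from "/", and cast PORT to an int before handing it to uvicorn) can be exercised on its own. The following is a minimal sketch that swaps in a plain FastAPI app for llama_cpp.server.app.create_app(), so it runs without a model; the HOST/PORT fallbacks are assumptions for illustration, not values taken from the Space.

# Minimal sketch of the pattern in main.py after this commit, with a plain FastAPI
# app standing in for llama_cpp.server.app.create_app() so no model is required.
from fastapi import FastAPI
from fastapi.responses import HTMLResponse
import os

app = FastAPI()


@app.get("/", response_class=HTMLResponse)
async def read_items():
    # Serve the static landing page; the Dockerfile change above copies
    # index.html into the image so this open() succeeds at runtime.
    with open("index.html", "r") as f:
        return f.read()


if __name__ == "__main__":
    import uvicorn

    # Environment variables arrive as strings, and uvicorn needs an integer port,
    # which is why the commit wraps os.environ["PORT"] in int(). The fallbacks
    # here ("0.0.0.0", "7860") are assumed defaults for local illustration only.
    uvicorn.run(app,
                host=os.environ.get("HOST", "0.0.0.0"),
                port=int(os.environ.get("PORT", "7860")))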