parth parekh committed on
Commit f7807b8 • 1 Parent(s): 678ca1e

added better description for begging for coffee for pro

Files changed (2)
  1. README.md +1 -1
  2. main.py +38 -2
README.md CHANGED

```diff
@@ -1,6 +1,6 @@
 ---
 title: Llama 3.2 1B FastApi
-emoji: 🏢
+emoji: 🚀🌑
 colorFrom: purple
 colorTo: pink
 sdk: docker
```
main.py CHANGED

````diff
@@ -10,9 +10,45 @@ from accelerate import Accelerator
 # Load environment variables from a .env file (useful for local development)
 load_dotenv()
 
-# Initialize FastAPI app
-app = FastAPI(title="Llama-3.2-1B-Instruct-API",description="Use the Llama-3.2-1B-Instruct model using the API", docs_url="/", redoc_url="/doc")
-
+# HTML for the Buy Me a Coffee badge
+badge_html = """
+<a href="https://buymeacoffee.com/xxparthparekhxx" target="_blank">
+    <img src="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" alt="Buy Me A Coffee" style="height: 60px !important;width: 217px !important;" >
+</a>
+"""
+
+# FastAPI app with embedded Buy Me a Coffee badge and instructions
+app = FastAPI(
+    title="Llama-3.2-1B-Instruct-API",
+    description=f"""
+{badge_html}
+
+## Please Chill Out! 🙏
+This API takes around **5.62 minutes** to process a single request due to current hardware limitations.
+
+### Want Faster Responses? Help Me Out! 🚀
+If you'd like to see this API running faster on high-performance **A100** hardware, please consider buying me a coffee. ☕ Your support will go towards upgrading to **Hugging Face Pro**, which will allow me to run A100-powered spaces for everyone! 🙌
+
+### Instructions to Clone and Run Locally:
+
+1. **Clone the Repository:**
+```bash
+git clone https://huggingface.co/spaces/xxparthparekhxx/llama-3.2-1B-FastApi
+cd llama-3.2-1B-FastApi
+```
+
+2. **Run the Docker container:**
+```bash
+docker build -t llama-api .
+docker run -p 7860:7860 llama-api
+```
+
+3. **Access the API locally:**
+Open [http://localhost:7860](http://localhost:7860) to access the API docs locally.
+""",
+    docs_url="/",     # URL for Swagger docs
+    redoc_url="/doc"  # URL for ReDoc docs
+)
 # Set your Hugging Face token from environment variable
 HF_TOKEN = os.getenv("HF_TOKEN")
````
54