Jamar561 committed on
Commit 925951a
1 Parent(s): 1147c07

first commit

.env ADDED
@@ -0,0 +1,3 @@
+ OPENAI_API_KEY=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
+ PINECONE_API_KEY=5207f7a8-e003-4610-8adb-367ac66812d4
+ MONGO_URI=mongodb+srv://jandrade2018:[email protected]/?retryWrites=true&w=majority
Dockerfile ADDED
@@ -0,0 +1,78 @@
+ # Use the base Python image
+ FROM python:3.9
+
+ # Set up a new user named "user" with user ID 1000
+ # This creates a new user within the Docker container with user ID 1000.
+ RUN useradd -m -u 1000 user
+
+ # Overrides permissions for Hugging Face Docker.
+ # Switches to the newly created user to run subsequent commands, enhancing security.
+ USER user
+
+ # Set environment variables for the user's home directory and executable path
+ ENV HOME=/home/user \
+     PATH=/home/user/.local/bin:$PATH
+
+ # Set the working directory to /home/user/app
+ WORKDIR $HOME/app
+
+ # Install Python dependencies first to avoid reinstalling on code changes
+ # Copy the requirements.txt file into the container and install dependencies.
+ COPY ./requirements.txt $HOME/app/requirements.txt
+ RUN pip install --no-cache-dir --upgrade -r requirements.txt \
+     && pip install --no-cache-dir --upgrade pymongo
+
+ # Switch back to root to install system dependencies
+ USER root
+
+ # Install system dependencies
+ RUN apt-get update \
+     && apt-get install -y ffmpeg python3-pyaudio portaudio19-dev \
+     && apt-get clean
+
+ # Switch back to the user
+ USER user
+
+ # Expose the secret OPENAI_API_KEY at build time and use its value as an environment variable
+ RUN --mount=type=secret,id=OPENAI_API_KEY,mode=0444,required=true \
+     echo "export OPENAI_API_KEY=$(cat /run/secrets/OPENAI_API_KEY)" >> /home/user/app/.env
+
+ RUN --mount=type=secret,id=PINECONE_API_KEY,mode=0444,required=true \
+     echo "export PINECONE_API_KEY=$(cat /run/secrets/PINECONE_API_KEY)" >> /home/user/app/.env
+
+ RUN --mount=type=secret,id=MONGO_URI,mode=0444,required=true \
+     echo "export MONGO_URI=$(cat /run/secrets/MONGO_URI)" >> /home/user/app/.env
+
+ # Source the .env file to set environment variables
+ RUN echo "source $HOME/app/.env" >> $HOME/.bashrc
+
+ # Copy the rest of the application into the container
+ # This includes your Python scripts, models, and any other necessary files.
+ COPY --chown=user . $HOME/app
+
+ # Specify the command to run when the container starts
+ # Here, it runs the "app.py" script using the Python interpreter.
+ CMD ["python", "app.py"]
+
+ # --------------------------------------------------------------------------------------------------------------------------------------------------------------------
+
+ # Overview
+
+ # This Dockerfile is used to build a Docker image for a Python application.
+ # It starts with the official Python 3.9 image as a base.
+ # It then sets up a new user, switches to that user for security reasons, and defines environment variables.
+ # The working directory is set to "/home/user/app", where Python dependencies are installed from the "requirements.txt" file.
+ # The entire application is copied into the container.
+ # Finally, the CMD directive specifies that the "app.py" script should run when the container starts.
+
+ # Architecture:
+
+ # In the context of Hugging Face Docker Spaces, this Docker image encapsulates your Python application,
+ # ensuring that it runs consistently across different environments (Linux, macOS, Windows, etc.).
+ # Docker containers provide a lightweight and isolated environment for applications, enhancing portability and reproducibility.
+ # The use of a non-root user and defined environment variables contributes to security best practices.
+ # The "CMD" instruction specifies the default behavior of the container.
+
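Note: the three `RUN --mount=type=secret` steps above only succeed when the secrets are supplied at build time. On Hugging Face Spaces, secrets configured in the Space settings are injected automatically. For a local build, a sketch along these lines should work with BuildKit enabled (the secret file names and the `first-aid-bot` tag are illustrative, not part of this repo; depending on your Docker version you may also need a `# syntax=docker/dockerfile:1` line at the top of the Dockerfile):

```bash
# Write the secret values to local files that are NOT committed to the repo
printf '%s' "$OPENAI_API_KEY"   > openai_key.txt
printf '%s' "$PINECONE_API_KEY" > pinecone_key.txt
printf '%s' "$MONGO_URI"        > mongo_uri.txt

# BuildKit exposes each file as /run/secrets/<id> during the matching RUN step
DOCKER_BUILDKIT=1 docker build \
  --secret id=OPENAI_API_KEY,src=openai_key.txt \
  --secret id=PINECONE_API_KEY,src=pinecone_key.txt \
  --secret id=MONGO_URI,src=mongo_uri.txt \
  -t first-aid-bot .
```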
README.md CHANGED
@@ -1,10 +1,130 @@
  ---
- title: FirstAid Bot
- emoji: 🌍
- colorFrom: indigo
- colorTo: yellow
  sdk: docker
- pinned: false
  ---

- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
  ---
+ title: First-Aid Bot
+ emoji: 🐳
+ colorFrom: purple
+ colorTo: gray
  sdk: docker
+ app_port: 7860
  ---

+ # -------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+
+ title: Basic Docker SDK Space
+ # Basic Docker SDK Space Configuration
+ # Describes the title or name of your Docker Space.
+ # Helps identify and provide a quick overview of the purpose or content of your Space.
+
+ emoji: 🐳
+ # Adds an emoji character to your title for visual representation.
+ # Emojis can add a creative touch and quickly convey the theme or essence of your Space.
+
+ colorFrom: purple
+ # Defines a gradient color scheme for the Space thumbnail.
+ # colorFrom sets the starting color of the gradient (purple).
+
+ colorTo: gray
+ # colorTo sets the ending color of the gradient (gray).
+
+ sdk: docker
+ # Specifies the software development kit (SDK) or runtime environment for your Space.
+ # Here, it is set to "docker," indicating that this Space is configured to work with Docker.
+ # It defines the underlying technology used to run and manage your application.
+
+ app_port: 7860
+ # Specifies the port through which your application will be accessible.
+ # This is the port that is exposed for users to interact with your application when deployed in a Docker Space.
+ # app_port: exposes this port of the app.
+
+ # The block above mirrors this README.md file's YAML front matter.
+ # You can change the default exposed port 7860.
+
+ # Docker Spaces for Your Custom Applications
+
+ # -------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+
+ <!-- Introduction to Docker Spaces -->
+ Unlock the full potential of your custom applications by harnessing the flexibility of Docker Spaces. Docker Spaces provide a versatile environment for running apps that go beyond the conventional capabilities of Streamlit and Gradio. Whether you're developing FastAPI, Go endpoints, Phoenix apps, or robust ML Ops tools, Docker Spaces empower you to craft solutions tailored to your unique needs.
+
+ ## Setting Up Your Docker Space
+
+ <!-- Explanation for setting up Docker Spaces -->
+ To initialize your Space with Docker, make sure to select Docker as the SDK when creating a new Space. This is done by setting the `sdk` property to `docker` in the YAML block of your README.md file.
+
+ # Exposing Your Application Port
+
+ <!-- Explanation for exposing the application port -->
+ By setting `app_port: 7860`, you're specifying the port through which your application inside the Docker container will be accessible. This port is exposed externally, allowing users to interact with your application on this port when the Docker Space is deployed.
+
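As a rough local sketch of what that mapping means in practice (the `first-aid-bot` tag is an assumed name from a local `docker build -t first-aid-bot .`, not something defined by the Space itself):

```bash
# Publish container port 7860 on the host so the Gradio UI is reachable at http://localhost:7860
docker run --rm -p 7860:7860 --env-file .env first-aid-bot
```

The `--env-file .env` flag simply reuses the keys from the repo's .env for a local run; on Hugging Face the Space's own secret handling covers this.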
+ # Docker Behind the Scenes
+
+ <!-- Overview of Docker architecture and purpose -->
+ ## Overview:
+
+ This Dockerfile is used to build a Docker image for a Python application. It starts with the official Python 3.9 image as a base. It then sets up a new user, switches to that user for security reasons, and defines environment variables. The working directory is set to "/home/user/app", where Python dependencies are installed from the "requirements.txt" file. The entire application is copied into the container. Finally, the CMD directive specifies that the "app.py" script should run when the container starts.
+
+ ## Architecture:
+
+ In the context of Hugging Face Docker Spaces, this Docker image encapsulates your Python application, ensuring that it runs consistently across different environments. Docker containers provide a lightweight and isolated environment for applications, enhancing portability and reproducibility. The use of a non-root user and defined environment variables contributes to security best practices. The "CMD" instruction specifies the default behavior of the container.
+
+ ## Why Containers:
+
+ Containers, like Docker, package applications and their dependencies, providing a consistent and reproducible environment. This facilitates seamless deployment across various systems, reducing compatibility issues. Containers isolate applications, preventing conflicts with the underlying infrastructure, making them a popular choice for deploying and managing applications in production.
+
+ # -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+
+ # Visual Representation of Architecture:
+
+ This diagram represents the sequential steps in building a Docker image for the Python application. Each box indicates a distinct operation or instruction in the Dockerfile, and arrows depict the flow of these operations. The image starts with an official Python 3.9 base, and each subsequent step contributes to setting up the environment, installing dependencies, and defining the behavior when the container starts.
+
+ # Docker Image Build Process
+
+ Base Image: python:3.9
+   |
+   └── Set up user "user" with ID 1000
+   |
+   └── Define environment variables (HOME, PATH)
+   |
+   └── Set working directory to /home/user/app
+   |      |
+   |      └── Copy requirements.txt into /home/user/app
+   |      |
+   |      └── Install Python dependencies from requirements.txt
+   |
+   └── Copy entire application into /home/user/app
+   |
+   └── CMD ["python", "app.py"]
+
+ # Docker Container Runtime Process
+
+ Docker Container:
+   |
+   └── Python Application
+   |      |
+   |      └── Gradio with default port 7860
+   |      └── FastAPI, OpenAI, and other dependencies
+   |
+   └── Exposed Port: 7860
+   |
+   └── Running "app.py" script
+
+ # Port Mapping Explanation:
+
+ When dealing with Docker containers and applications like Gradio, it's essential to understand how ports work. In simple terms, a port is like a designated entrance at a busy airport.
+
+ In this scenario:
+ - The Gradio application inside your Docker container is like an airport terminal.
+ - The Gradio application uses Port 7860 as its entrance gate, allowing data to flow in and out.
+
+ Now, Docker acts like a traffic controller, managing the flow of data between your computer and the Gradio application inside the container.
+
+ Here's the breakdown:
+ - Your computer communicates with the Docker container through a specific port.
+ - Think of it as your boarding pass, guiding your data to the correct entrance (Port 7860) of the Gradio application.
+
+ So, when you expose Port 7860 in your Docker Space configuration (README.md), you're essentially saying, "Hey, users, when you want to interact with my application, use Port 7860 as the entrance gate."
+
+ In conclusion, it's a way to ensure smooth communication between your computer and the Gradio application running inside the Docker container, allowing you to access and interact with the application seamlessly.
+
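A quick way to confirm that mapping once a local container is running (a hypothetical local check, not part of the Space itself):

```bash
# An HTTP 200 response means the Gradio app is answering on the published port
curl -I http://localhost:7860
```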
app.py ADDED
@@ -0,0 +1,345 @@
+ import os
+ import re
+ import torch  # used by preprocess_text for no-grad inference with ClinicalBERT
+ import openai
+ import pinecone
+ import requests
+ import gradio as gr
+ from gtts import gTTS
+ from dotenv import load_dotenv
+ from langchain.llms import OpenAI
+ from langchain import PromptTemplate
+ from langchain.vectorstores import Chroma
+ from requests.exceptions import JSONDecodeError
+ from transformers import AutoTokenizer, AutoModel
+ from langchain.embeddings import OpenAIEmbeddings
+ from langchain.chains import RetrievalQA, LLMChain
+ from langchain.document_loaders import TextLoader, DirectoryLoader
+ from langchain.text_splitter import RecursiveCharacterTextSplitter
+
+ # Load environment variables from .env file
+ load_dotenv()
+
+ # Initialize Pinecone with API key
+ pinecone.init(api_key="5207f7a8-e003-4610-8adb-367ac66812d4", environment='gcp-starter')
+ index_name = "clinical-bert-index"
+
+ # Create a vector database that stores medical knowledge
+ loader = DirectoryLoader('./medical_data/', glob="./*.txt", loader_cls=TextLoader)
+ documents = loader.load()
+
+ # Split documents into texts
+ text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200)
+ texts = text_splitter.split_documents(documents)
+
+ # Initialize Vectordb
+ persist_directory = 'db'
+ embedding = OpenAIEmbeddings()
+ vectordb = Chroma.from_documents(documents=texts, embedding=embedding, persist_directory=persist_directory)
+ vectordb.persist()
+
+ # Create a retrieval QA chain using the vector database as its retriever
+ retriever = vectordb.as_retriever()
+ docs = retriever.get_relevant_documents("For Cuts and Scrapes ")
+ retriever = vectordb.as_retriever(search_kwargs={"k": 2})
+
+ # Specify the template that the LLM will use to generate its responses
+ bot_template = '''I want you to act as a medicine advisor for people.
+ Explain in simple words how to treat a {medical_complication}'''
+
+ tokenizer = AutoTokenizer.from_pretrained("medicalai/ClinicalBERT")
+ model = AutoModel.from_pretrained("medicalai/ClinicalBERT")
+ model_path = "medicalai/ClinicalBERT"
+ tokenizer_str = tokenizer.__class__.__name__
+
+ # Create Prompt
+ prompt = PromptTemplate(
+     input_variables=['medical_complication'],
+     template=bot_template
+ )
+
+ # Specify the LLM that you want to use as the language model
+ llm = OpenAI(temperature=0.8)
+ chain1 = LLMChain(llm=llm, prompt=prompt)
+
+ # Create the retrieval QA chain
+ qa_chain = RetrievalQA.from_chain_type(llm=llm, chain_type="stuff", retriever=retriever, return_source_documents=True)
+
+ # Global variables
+ global_filepath = None
+ global_feedback = None
+ chatgpt_response = ""
+ modMed_response = ""
+ trigger_words = ""
+
+
+ def preprocess_text(text):
+     # UMAP is only needed here; requires the umap-learn package (not pinned in requirements.txt)
+     import umap
+
+     # Preprocess the input text
+     text = text.lower()
+     text = re.sub(r"[^a-zA-Z0-9\s]", "", text)
+     text = re.sub(r"\s+", " ", text).strip()
+
+     inputs = tokenizer(text, return_tensors="pt", padding=True, truncation=True, max_length=512)
+
+     with torch.no_grad():
+         outputs = model(**inputs)
+         embeddings = outputs.last_hidden_state
+
+     embeddings_list = embeddings.squeeze().tolist()
+     embeddings_array = embeddings.squeeze().numpy()
+
+     reducer = umap.UMAP(n_components=768)
+     reduced_embeddings = reducer.fit_transform(embeddings_array)
+
+     return reduced_embeddings
+
+
+ # API key for OpenAI
+ my_key = os.getenv("OPENAI_API_KEY")
+ openai.api_key = my_key
+
+ # Initialize Pinecone index
+ pinecone_index = pinecone.Index(index_name=index_name)
+
+ # Define function to retrieve embeddings from Pinecone
+ def retrieve_embeddings_from_pinecone(query):
+     results = pinecone_index.query(
+         vector=query,
+         top_k=3,
+         include_values=True
+     )
+     retrieved_embeddings = results[0].vectors
+     return retrieved_embeddings
+
+
+ # Function to process user input
+ def process_user_input(audio_filepath, feedback):
+     global global_filepath
+     audio = open(audio_filepath, "rb")
+     global_filepath = audio_filepath
+
+     transcript = openai.Audio.transcribe("whisper-1", audio)
+
+     return transcript["text"]
+
+
+ # Function to find trigger words
+ def findTriggerWords(user_input):
+     prompt = (
+         f"Given this user input: {user_input}\n"
+         "Task: Identify and return important keywords from the user input. "
+         "These keywords are crucial for understanding the user's intent and finding a relevant solution. "
+         "Consider context and relevance. Provide a numbered list of up to 5 keywords."
+     )
+
+     response = openai.Completion.create(
+         model="text-davinci-003",
+         prompt=prompt,
+         max_tokens=500,
+         temperature=0.7,
+     )
+
+     ChatGPT_response = response['choices'][0]['text']
+     return ChatGPT_response.replace(".", "").replace("\n", "", 1).strip()
+
+
+ # Function to make an API call
+ def api_call(url):
+     try:
+         response = requests.post(url)
+         if response.status_code == 200:
+             updated_data = response.json()
+             print(f"Updated Database: {updated_data}")
+             return updated_data
+         else:
+             print(f"Error updating database: {response.status_code}")
+             print(response.text)
+             return None
+     except JSONDecodeError as e:
+         print(f"JSONDecodeError: {e}")
+         print(f"Response text: {response.text}")
+         return None
+
+
+ # Function to process feedback
+ def process_feedback(feedback, current_filepath):
+     global global_filepath, global_feedback, chatgpt_response, modMed_response
+     ans = ""
+     url = ""
+     backend_url = "https://iq4aas9gc2.execute-api.us-east-2.amazonaws.com/default/test/"
+
+     if feedback in ["🏥 ModMed", "🤖 ChatGPT"] and global_feedback is None:
+         global_feedback = feedback
+         incr_query_string = ''.join(char for char in feedback if char.isalnum())
+         url = f"{backend_url}increment_likes/{incr_query_string}"
+         print("new audio file")
+
+     elif feedback in ["🏥 ModMed", "🤖 ChatGPT"] and global_feedback is not None:
+         print("same audio file, different radio button")
+         # Decrement the previously selected option before recording the new one
+         decr_query_string = ''.join(char for char in global_feedback if char.isalnum())
+         incr_query_string = ''.join(char for char in feedback if char.isalnum())
+         global_feedback = feedback
+         decrement_url = f"{backend_url}decrement_likes/{decr_query_string}"
+         increment_url = f"{backend_url}increment_likes/{incr_query_string}"
+
+         # Decrement likes
+         decrement_data = api_call(decrement_url)
+
+         if decrement_data:
+             # Increment likes if decrement was successful
+             url = increment_url
+
+     else:
+         return ans
+
+     updated_data = api_call(url)
+
+     if updated_data:
+         if feedback == "🏥 ModMed":
+             chatgpt_response = ""
+             modMed_response = "True"
+         elif feedback == "🤖 ChatGPT":
+             modMed_response = ""
+             chatgpt_response = "True"
+
+         preferred_strings = ", ".join(string for string in ["ModMed", "ChatGPT"] if string != incr_query_string)
+         ans = f"{updated_data['Likes']}/{updated_data['TotalLikes']} People preferred {incr_query_string} over {preferred_strings}.\nThank you! 👍"
+
+     return ans
+
+
+ # Function to handle the chatbot logic
+ def chatbot(microphone_filepath, upload_filepath, feedback):
+     global global_filepath, global_feedback, chatgpt_response, modMed_response, trigger_words
+     print("Feedback", feedback)
+
+     if microphone_filepath is not None:
+         audio_filepath = microphone_filepath
+     elif upload_filepath is not None:
+         audio_filepath = upload_filepath
+     else:
+         global_filepath = global_feedback = None
+         chatgpt_response = ""
+         modMed_response = ""
+         trigger_words = ""
+         print(trigger_words)
+         return None, None, None, None, None
+
+     # Process user input
+     if global_filepath != audio_filepath:
+         user_input = process_user_input(audio_filepath, feedback)
+         trigger_words = findTriggerWords(user_input)
+     elif feedback == "Clear" and global_filepath is not None:
+         feedback = ""
+         chatgpt_response = ""
+         modMed_response = ""
+         trigger_words = ""
+         global_filepath = None
+         global_feedback = None
+         return None, None, None, None, None
+     else:
+         user_input = None
+
+     if user_input is not None or feedback != global_feedback:
+         # Get the chatbot response
+         chatgpt_prompt = f"Act like a medical bot and return at most 5 sentences; if the user_input isn't a medical question, then answer the question in general: user_input:\n{user_input}"
+         llm_response = qa_chain(chatgpt_prompt)
+         prompt_response = chain1(user_input)
+
+         f_modMed_response, f_chatgpt_response = process_llm_response(llm_response, prompt_response)
+         ans = process_feedback(feedback, global_filepath)
+
+         if modMed_response == "" and chatgpt_response != "":
+             print("CHATGPT FEEDBACK")
+             clean_response = f_chatgpt_response.split('<br>')[0]
+             audio_response = text_to_speech(clean_response)
+             return gr.make_waveform(audio_response, animate=True), None, f_chatgpt_response, trigger_words, ans
+
+         elif modMed_response != "" and chatgpt_response == "":
+             print("MODMED FEEDBACK")
+             clean_response = f_modMed_response.split('<br>')[0]
+             audio_response = text_to_speech(clean_response)
+             return gr.make_waveform(audio_response, animate=True), f_modMed_response, None, trigger_words, ans
+         else:
+             print("NO FEEDBACK")
+             audio_response = text_to_speech(f_modMed_response.split('<br>')[0])
+             return gr.make_waveform(audio_response, animate=True), f_modMed_response, f_chatgpt_response, trigger_words, ans
+
+     return None, None, None, None, None
+
+
+ def process_llm_response(llm_response, prompt_response):
+     ChatGPT_response = llm_response['result']
+     ModMed_response = str(prompt_response["text"])
+
+     ChatGPT_image_html = f'<img src="https://static.vecteezy.com/system/resources/previews/021/495/996/original/chatgpt-openai-logo-icon-free-png.png" alt="image" style="width:25px;height:25px;display:inline-block;">'
+     ModMed_image_html = f'<img src="https://png.pngtree.com/png-clipart/20230707/original/pngtree-green-approved-stamp-with-check-mark-symbol-vector-png-image_9271227.png" alt="image" style="width:25px;height:25px;display:inline-block;">'
+
+     ModMed_source = f'<br><span style="color: darkgray;">ModMedicine Certified {ModMed_image_html}</span>'
+     ChatGPT_source = f'<br><span style="color: darkgray;">ChatGPT {ChatGPT_image_html}</span>'
+
+     return (
+         ModMed_response + ModMed_source,
+         ChatGPT_response + ChatGPT_source
+     )
+
+
+ def play_response(response=None):
+     if response is not None:
+         audio_path = text_to_speech(response)
+         return gr.Audio(audio_path)
+     else:
+         return None
+
+
+ def text_to_speech(text):
+     # Find the index of the source labels (case-insensitive) so they are not read aloud
+     certified_index = text.lower().find('modmedicine certified')
+     chatgpt_index = text.lower().find('chatgpt')
+
+     if certified_index != -1:
+         # Keep only the text before the 'ModMedicine Certified' label
+         text = text[:certified_index]
+     elif chatgpt_index != -1:
+         # Keep only the text before the 'ChatGPT' label
+         text = text[:chatgpt_index]
+
+     tts = gTTS(text=text, lang='en', tld='co.uk')
+     audio_path = 'response.mp3'
+     tts.save(audio_path)
+
+     return audio_path
+
+
+ # Feedback radio button choices
+ feedback_buttons = gr.Radio(
+     choices=["🏥 ModMed", "🤖 ChatGPT", "Clear"],
+     label="Which solution was better?",
+     default=None  # Set the default value to None
+ )
+
+ # Gradio Interface
+ demo = gr.Interface(
+     fn=chatbot,
+     inputs=[
+         gr.Audio(source="microphone", type="filepath"),
+         gr.Audio(source="upload", type="filepath"),
+         feedback_buttons
+     ],
+     outputs=[
+         gr.Video(autoplay=True, label="ModMedicine"),
+         gr.outputs.HTML(label="ModMed Response"),
+         gr.outputs.HTML(label="ChatGPT Response"),
+         gr.Text(label="Trigger words"),
+         gr.Text(label="Feedback")
+     ],
+     examples=[
+         ["./dummy_audio1.mp3"],
+         ["./dummy_audio2.mp3"]
+     ],
+     title="First-Aid Bot",
+     description='<img src="https://drive.google.com/uc?id=1fWN0xn_KXLb0fCtTAyoxwacwzyH6w4am&export=download" alt="logo" style="display: block; margin: auto; width:125px;height:125px;">',
+     live=True
+ )
+
+ # Launch the Gradio interface
+ demo.launch(server_name="0.0.0.0", server_port=7860)
+
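For reference, a minimal sketch of running this script outside Docker (assumes Python 3.9, ffmpeg and PortAudio installed system-wide, and a `.env` file with the three keys; note that `umap-learn`, used only by `preprocess_text`, is not pinned in requirements.txt):

```bash
pip install -r requirements.txt
python app.py   # launches the Gradio interface on http://0.0.0.0:7860
```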
dummy_audio1.mp3 ADDED
Binary file (85.1 kB).
 
dummy_audio2.mp3 ADDED
Binary file (77 kB).
 
icon.png ADDED
index.md ADDED
@@ -0,0 +1,60 @@
+ # First-Aid Bot
+
+ ## Overview
+
+ **First-Aid Bot** is an award-winning senior project that earned the *People's Choice Award*. This collaborative endeavor was orchestrated by a talented team, each contributing their unique skills to craft a cutting-edge solution for medical assistance.
+
+ ![First-Aid Bot](https://cdn-uploads.huggingface.co/production/uploads/64c57ab6a3f7a8107dba1022/pfjSOmtlOBK0YMWUxXh2B.jpeg)
+
+ ## Team Members
+
+ - ### Diego Gama (Team Leader, UI Developer, OpenAI Whisper, User Feedback)
+   - Led the project with a focus on the user interface.
+   - Implemented a sophisticated backend feedback counter for users' preferred models.
+   - Specialized in technologies like *OpenAI Whisper* and *Gradio*.
+
+ - ### Jamar Andrade (OpenAI Whisper, OpenAI ChatGPT, Docker Spaces, Gradio)
+   - Worked on the integration of *OpenAI Whisper* and *OpenAI ChatGPT* for audio-to-text processing and prompt engineering.
+   - Utilized *Docker* for efficient deployment and integrated *Gradio* for a user-friendly interface.
+
+ - ### Charles Bazile (Pinecone, ClinicalBERT)
+   - Worked on integrating *Pinecone* and *ClinicalBERT* into the application.
+   - Contributed to the advanced features that enhance the medical capabilities of First-Aid Bot.
+
+ - ### Cesaire Civil (LangChain, ChromaDB)
+   - Specialized in *LangChain* and *ChromaDB*.
+   - Implemented the intelligent language processing and curated dataset retrieval functionalities.
+
+ - ### Christopher Rodriguez (ChromaDB, Pinecone, Gradio)
+   - Focused on the integration of *ChromaDB* and *Pinecone*.
+   - Contributed to the implementation of *Gradio* for a seamless user experience.
+
+ ## Sponsored by Modern Medicine
+
+ The success of **First-Aid Bot** was made possible through the generous sponsorship of *Modern Medicine*. Their support and collaboration enriched the project, providing valuable insights and access to a curated dataset of past doctor-client interactions, making the application more effective and reliable.
+
+ ## Technologies Used
+
+ ![Technologies Used](https://cdn-uploads.huggingface.co/production/uploads/64c57ab6a3f7a8107dba1022/pVyDcrBkRHiXvMWLQHzI_.png)
+
+ ## Hugging Face Deployment
+
+ **First-Aid Bot** is hosted on *Hugging Face* using a *Docker Space*. To experience the application, simply follow the link below:
+
+ [First-Aid Bot - Hugging Face Deployment](https://huggingface.co/spaces/Jamar561/FirstAid-Bot)
+
+ ## PowerPoint Presentation
+
+ [Link to PowerPoint Presentation](https://docs.google.com/presentation/d/1wfDbsV_VQZneTKxGB-qSgcALTu0C4O0p/edit?usp=sharing&ouid=109114647024700962274&rtpof=true&sd=true)
+
+ ## Recognition
+
+ The **First-Aid Bot** project received the *People's Choice Award*, a testament to the team's dedication and the impactful solution they crafted. This README is not just a guide; it's an acknowledgment of the collective effort that went into making **First-Aid Bot** a success.
+
+ ## Getting Started
+
+ To explore **First-Aid Bot**:
+
+ 1. [Access the deployed application](https://huggingface.co/spaces/Jamar561/FirstAid-Bot)
+
+ Thank you for being part of the **First-Aid Bot** journey!
medical_data/dolphins_injury_report.txt ADDED
@@ -0,0 +1,24 @@
+ Miami Dolphins cornerback Jalen Ramsey is expected to miss at least six to eight weeks due to a meniscus injury that will require surgery on his left knee, NFL Network Insiders Tom Pelissero and Mike Garafolo reported Thursday. Ramsey was injured in Miami's practice earlier in the day.
+
+ Dolphins head coach Mike McDaniel on Friday confirmed that Ramsey injured his meniscus and will undergo knee surgery at 1 p.m. ET on Friday.
+
+ "The length of this rehabilitation is kind of dictated on a couple of things that could occur in the surgery," McDaniel explained to reporters. "The exact timeline is a little to be determined. What I can tell you is I don't think the beginning of the regular season is really a part of the scenario. It's going to be into the season and how deep that is depends kind of on what happens today."
+
+ NFL Network Insider Ian Rapoport previously reported on Thursday night it's likely Ramsey will require a full meniscus repair, which would put his return date all the way back to December. Ultimately, doctors will determine how severe the injury is, and therefore what course of action they must take to fix it. If the ligament can just be trimmed, Ramsey could be back as soon as six weeks, setting him up to return just around the Dolphins' season opener versus the Chargers on Sept. 10 -- though McDaniel expressed his doubt that Ramsey would be available Week 1 even under an optimistic timeline.
+
+ If doctors determine they need to remove the meniscus, that would increase the cornerback's recovery timeline, Pelissero and Garafolo reported on Thursday.
+
+ Ramsey released a statement on social media, writing that he'd "be back on that field stronger than ever... in due time" and that "I know my brothers gone hold it down until I'm back tho!"
+
+ However, Ramsey followed soon after with another tweet that appeared to lean to a longer recovery time.
+
+ The injury appeared to occur near the end of practice when Ramsey forced an incomplete pass and grabbed his knee after the play, Wolfe reported Thursday morning. Though Ramsey initially tried to stay on the field, he eventually exited to be looked at by trainers and later was carted off.
+
+ This will mark Ramsey's second meniscus surgery, having previously undergone a procedure on his right knee in 2016 as a rookie. In that instance, Ramsey was able to return around six weeks post-op, and will be hoping for a similar outcome this time around.
+
+ If Ramsey were to miss time in the regular season, the Dolphins would be playing without one of their most significant offseason additions, having traded with the Rams to acquire him in March. The six-time Pro Bowler was expected to become half of a dynamic duo alongside CB Xavien Howard, with the aim of bolstering a secondary that also includes Jevon Holland and Brandon Jones.
+
+ In line to see time opposite Howard as Ramsey recovers are CB Kader Kohou, who started 13 games in his rookie season, and 2023 second-round pick Cam Smith out of South Carolina, with three preseason games available for both to work with the starters.
+
+ "I feel good about the entire crew," McDaniel said on Friday. "We are dealing with some injuries now in that group. But I feel very, very good about the competition there, and the guys that are ready to go see some more opportunities. There's Pro Bowlers and hungry young guys and everything in between. So it'll be outstanding work for us moving forward."
medical_data/first-aid.txt ADDED
@@ -0,0 +1,44 @@
+ To clean a wound, wash it with soap and water. If the wound is dirty, you can also use a mild antiseptic.
+
+ To dress a wound, apply a bandage or gauze pad to the wound. You can also use antibiotic ointment to help prevent infection.
+
+ To prevent infection, keep the wound clean and covered. You should also avoid touching the wound with dirty hands.
+
+ To recognize the common cold, look for symptoms such as a runny nose, sore throat, and cough. To treat a cold, you can rest, drink plenty of fluids, and take over-the-counter medications.
+
+ To recognize the flu, look for symptoms such as fever, body aches, and fatigue. To treat the flu, you can rest, drink plenty of fluids, and take over-the-counter medications or antiviral medications.
+
+ To recognize allergies, look for symptoms such as sneezing, runny nose, and itchy eyes. To treat allergies, you can avoid the allergens that trigger your allergies, take over-the-counter medications, or get allergy shots.
+
+ If you have a wound that is bleeding heavily, is deep, or is in a place that is difficult to clean, you should seek medical attention.
+
+ For Cuts and Scrapes: Wash your hands thoroughly. Clean the wound gently with mild soap and warm water. Apply an over-the-counter antibiotic ointment. Cover the wound with a clean bandage or sterile dressing.
+
+ Burns: For minor burns (first-degree), run cool (not cold) water over the burn for about 10-20 minutes and cover with a sterile non-stick dressing. For more severe burns (second- or third-degree), seek immediate medical attention.
+
+ Sprains and Strains: Rest the injured area. Apply ice wrapped in a cloth for 15-20 minutes every 1-2 hours. Use compression bandages if necessary. Elevate the injured limb above heart level to reduce swelling.
+
+ Nosebleeds: Tilt your head forward slightly (not backward) to prevent swallowing blood. Pinch your nostrils together and breathe through your mouth. Apply gentle pressure for 10-15 minutes. If bleeding persists, seek medical attention.
+
+ Choking: If someone is choking and can't breathe or speak, perform the Heimlich maneuver. For infants, use back blows and chest thrusts. Seek immediate medical attention if the obstruction isn't cleared.
+
+ Allergic Reactions: For mild allergic reactions (itchy skin, mild hives), take an antihistamine if available and follow the label instructions. For severe allergic reactions (anaphylaxis), use an epinephrine auto-injector if prescribed and call 911 immediately.
+
+ Insect Bites and Stings: Remove the stinger if present (for bee stings). Wash the area with soap and water. Apply a cold compress to reduce swelling and itching. Use over-the-counter antihistamines or pain relievers if needed.
+
+ Heat Exhaustion/Heat Stroke: Move to a cooler place and rest. Drink water and cool down with damp cloths or a cool bath. Heat stroke is a medical emergency; call 911 immediately.
+
+ For diarrhea, drink plenty of fluids to prevent dehydration.
+ For earache, apply a warm compress to the ear.
+ For fever, take acetaminophen or ibuprofen to reduce the temperature.
+ For a head injury, apply ice to the area and seek medical attention if necessary.
+ For a high blood pressure reading, rest and avoid caffeine and alcohol.
+ For a bee sting, remove the stinger and apply a cold compress to the area.
+ For a bug bite, apply calamine lotion to the affected area.
+ For a cold, rest, drink plenty of fluids, and gargle with salt water.
+ For a concussion, rest and avoid strenuous activity.
+ For a cough, drink plenty of fluids and suck on lozenges.
+ For a dog bite, wash the wound with soap and water and seek medical attention.
+ For a fever, dress lightly and sponge bathe with lukewarm water.
+ For a headache, take acetaminophen or ibuprofen to relieve pain.
+ For a heart attack, lie down and rest while waiting for medical help. Call 911 immediately and chew an aspirin if available.
requirements.txt ADDED
@@ -0,0 +1,20 @@
+ gradio==3.44.4
+ langchain==0.0.300
+ openai==0.27.8
+ fastapi==0.99.1
+ Flask==2.3.3
+ Flask-Cors==4.0.0
+ uvicorn==0.23.2
+ python-dotenv==1.0.0
+ Chroma==0.2.0
+ chroma-hnswlib==0.7.3
+ chromadb==0.4.12
+ tiktoken==0.3.1
+ transformers==4.31.0
+ gtts==2.4.0
+ bson>=0.5.10
+ pymongo==4.6.0
+ requests==2.31.0
+ requests-oauthlib==1.3.1
+ pinecone-client==2.2.4
+ torch==2.1.1
response.mp3 ADDED
Binary file (35.7 kB).