schrilax committed on
Commit b91a3a6 • 1 Parent(s): 5fb9677

Add application file

Files changed (4)
  1. Dockerfile +11 -0
  2. app.py +45 -0
  3. chainlit.md +14 -0
  4. requirements.txt +2 -0
Dockerfile ADDED
@@ -0,0 +1,10 @@
+ FROM python:3.9
+ RUN useradd -m -u 1000 user
+ USER user
+ ENV HOME=/home/user \
+     PATH=/home/user/.local/bin:$PATH
+ WORKDIR $HOME/app
+ COPY --chown=user ./requirements.txt $HOME/app/requirements.txt
+ RUN pip install -r requirements.txt
+ COPY --chown=user . $HOME/app
+ CMD ["chainlit", "run", "app.py", "--port", "7860"]
app.py ADDED
@@ -0,0 +1,49 @@
+ # Chainlit Python streaming example: https://docs.chainlit.io/concepts/streaming/python
+
+ # OpenAI chat completion
+
+ import os  # for reading the API key from the environment
+ import openai  # OpenAI API client
+ import chainlit as cl  # Chainlit framework for the chat UI
+
+ # Read the API key from the environment (e.g. set via your .env file);
+ # never hard-code secrets in source files.
+ openai.api_key = os.environ["OPENAI_API_KEY"]
+
+ # Select the model. If you do not have access to GPT-4, use gpt-3.5-turbo.
+ model_name = "gpt-3.5-turbo"
+ # model_name = "gpt-4"
+
+ settings = {
+     "temperature": 0.7,  # higher values increase output diversity/randomness
+     "max_tokens": 500,  # maximum length of the response, in tokens
+     "top_p": 1,  # nucleus sampling: consider only tokens within the top p probability mass
+     "frequency_penalty": 0,  # higher values penalize tokens in proportion to how often they already appear
+     "presence_penalty": 0,  # higher values penalize any token that has already appeared at all
+ }
+
+
+ @cl.on_chat_start  # runs once at the start of each user session
+ def start_chat():
+     cl.user_session.set(
+         "message_history",
+         [{"role": "system", "content": "You are a helpful assistant."}],
+     )
+
+
+ @cl.on_message  # runs each time the chatbot receives a message from a user
+ async def main(message: str):
+     message_history = cl.user_session.get("message_history")
+     message_history.append({"role": "user", "content": message})
+
+     msg = cl.Message(content="")
+
+     # Stream the completion token by token into the UI
+     async for stream_resp in await openai.ChatCompletion.acreate(
+         model=model_name, messages=message_history, stream=True, **settings
+     ):
+         token = stream_resp.choices[0]["delta"].get("content", "")
+         await msg.stream_token(token)
+
+     message_history.append({"role": "assistant", "content": msg.content})
+     await msg.send()
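Since the original commit pasted a live secret directly into app.py, it is worth spelling out the safer pattern: read the key from the environment at startup and fail fast if it is absent. A minimal sketch (the helper name `load_api_key` is illustrative; `OPENAI_API_KEY` is the conventional variable name the openai client looks for):

```python
import os

def load_api_key(var: str = "OPENAI_API_KEY") -> str:
    """Return the API key from the environment, failing fast if it is missing."""
    key = os.environ.get(var)
    if not key:
        raise RuntimeError(f"{var} is not set; export it or add it to your .env file")
    return key

# In app.py this would replace the hard-coded assignment:
# openai.api_key = load_api_key()
```

Failing at startup with a clear message beats a confusing authentication error on the first chat request.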
chainlit.md ADDED
@@ -0,0 +1,14 @@
+ # Welcome to Chainlit! 🚀🤖
+
+ Hi there, Developer! 👋 We're excited to have you on board. Chainlit is a powerful tool designed to help you prototype, debug and share applications built on top of LLMs.
+
+ ## Useful Links 🔗
+
+ - **Documentation:** Get started with our comprehensive [Chainlit Documentation](https://docs.chainlit.io) 📚
+ - **Discord Community:** Join our friendly [Chainlit Discord](https://discord.gg/k73SQ3FyUh) to ask questions, share your projects, and connect with other developers! 💬
+
+ We can't wait to see what you create with Chainlit! Happy coding! 💻😊
+
+ ## Welcome screen
+
+ To modify the welcome screen, edit the `chainlit.md` file at the root of your project. If you do not want a welcome screen, just leave this file empty.
requirements.txt ADDED
@@ -0,0 +1,2 @@
+ chainlit
+ openai
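One caveat on the unpinned requirements: app.py relies on the legacy `openai.ChatCompletion` interface, which was removed in the 1.0 release of the `openai` package, so a fresh install will eventually break. A safer variant of the file (leaving `chainlit` unpinned is a deliberate choice here, not a recommendation):

```text
chainlit
openai<1.0  # app.py uses the legacy ChatCompletion API, removed in openai>=1.0
```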