Upload main1.py
The given code is a Streamlit app that allows users to interact with multiple language models (LLMs) via the Hugging Face API. Here are the key components:
### Model Selection
Users can choose from various pre-trained models, including “Mistral-7B-Instruct-v0.2,” “Mistral-7B-Instruct-v0.3,” “GPT-2,” “BLOOM,” and “OPT.”
The selected model determines the LLM used for generating responses.
### User Input
Users can enter their messages in the text input field.
The input is then used as a prompt for the LLM.
### LLM Initialization
The `get_llm` function initializes the selected LLM based on the chosen model.
Parameters like `max_new_tokens` and `temperature` are set for response generation.
### Chat Prompt Template
The `PromptTemplate` defines a chat message format, including a placeholder for user input.
The template is used to create a full prompt for the LLM.
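For instance, formatting the template with a sample message (the question here is purely illustrative, not from the file) shows the full prompt string sent to the model:

```python
from langchain_core.prompts import PromptTemplate

prompt = PromptTemplate(
    template="Human: {human_input}\n\nAssistant: Let's think about this step-by-step:",
    input_variables=["human_input"],
)

# Illustrative input; any user message works here
print(prompt.format(human_input="What is the capital of France?"))
# Human: What is the capital of France?
#
# Assistant: Let's think about this step-by-step:
```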
### Response Generation
When the user submits a message, it’s added to the chat history.
The LLM generates a response based on the combined prompt (user message + template).
The assistant’s response is added to the chat history.
### Display Chat History
The chat history (user and assistant messages) is displayed using Streamlit’s chat component.
### Clear Chat Button
Users can clear the chat history by clicking the button.
Customize the code further (e.g., improve the UI, add error handling) based on your specific use case; one possible error-handling approach is sketched below. Feel free to ask for more details or to raise any other questions!
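As a hedged sketch (the try/except, the `st.error` message, and the `None` fallback are additions, not part of the uploaded main1.py), the response-generation step could guard the endpoint call like this:

```python
# Sketch: assumes the st, prompt, llm, and user_input names defined in main1.py below.
with st.spinner("Generating response..."):
    try:
        full_prompt = prompt.format(human_input=user_input)
        response = llm.invoke(full_prompt)
    except Exception as exc:  # network errors, rate limits, invalid tokens, etc.
        st.error(f"Model call failed: {exc}")
        response = None

# Only record the assistant turn when a response actually came back
if response:
    st.session_state.messages.append({"role": "assistant", "content": response})
```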
main1.py:

```python
import streamlit as st
from langchain_huggingface import HuggingFaceEndpoint
from langchain_core.prompts import PromptTemplate
import os

# Set up your Hugging Face API token
os.environ["HUGGINGFACEHUB_API_TOKEN"] = st.secrets["HF_TOKEN"]

# Define the models
models = {
    "Mistral-7B-Instruct-v0.2": "mistralai/Mistral-7B-Instruct-v0.2",
    "Mistral-7B-Instruct-v0.3": "mistralai/Mistral-7B-Instruct-v0.3",
    "GPT-2": "gpt2",
    "BLOOM": "bigscience/bloom",
    "OPT": "facebook/opt-350m"
}

# Initialize session state
if 'messages' not in st.session_state:
    st.session_state.messages = []

# Streamlit app
st.title("Multi-Model LLM Chat")

# Model selection
selected_model = st.selectbox("Choose a model", list(models.keys()))

# User input
user_input = st.text_input("Your message:")

# Initialize LLM; cache_resource keeps one endpoint per model name
@st.cache_resource
def get_llm(model_name):
    return HuggingFaceEndpoint(
        repo_id=models[model_name],
        max_new_tokens=128,  # HuggingFaceEndpoint expects max_new_tokens, not max_length
        temperature=0.7
    )

llm = get_llm(selected_model)

# Chat prompt template
prompt = PromptTemplate(
    template="Human: {human_input}\n\nAssistant: Let's think about this step-by-step:",
    input_variables=["human_input"]
)

# Generate response
if user_input:
    # Add user message to chat history
    st.session_state.messages.append({"role": "user", "content": user_input})

    # Generate LLM response
    with st.spinner("Generating response..."):
        full_prompt = prompt.format(human_input=user_input)
        response = llm.invoke(full_prompt)

    # Add assistant response to chat history
    st.session_state.messages.append({"role": "assistant", "content": response})

# Display chat history
for message in st.session_state.messages:
    with st.chat_message(message["role"]):
        st.write(message["content"])

# Clear chat button
if st.button("Clear Chat"):
    st.session_state.messages = []
    st.rerun()  # st.experimental_rerun() was removed in recent Streamlit releases
```