seanpedrickcase committed
Commit 59c1c22 · 0 Parent(s)

First commit

.dockerignore ADDED
@@ -0,0 +1,16 @@
+ *.pdf
+ *.url
+ *.jpg
+ *.png
+ *.ipynb
+ *.xls
+ *.xlsx
+ examples/*
+ output/*
+ tools/__pycache__/*
+ build/*
+ dist/*
+ logs/*
+ usage/*
+ feedback/*
+ test_code/*
.github/workflows/check_file_size.yml ADDED
@@ -0,0 +1,16 @@
+ name: Check file size
+ on: # or directly `on: [push]` to run the action on every push on any branch
+   pull_request:
+     branches: [main]
+
+   # to run this workflow manually from the Actions tab
+   workflow_dispatch:
+
+ jobs:
+   sync-to-hub:
+     runs-on: ubuntu-latest
+     steps:
+       - name: Check large files
+         uses: ActionsDesk/lfs-warning@v2.0
+         with:
+           filesizelimit: 10485760 # this is 10MB so we can sync to HF Spaces
.github/workflows/sync_to_hf.yml ADDED
@@ -0,0 +1,20 @@
+ name: Sync to Hugging Face hub
+ on:
+   push:
+     branches: [main]
+
+   # to run this workflow manually from the Actions tab
+   workflow_dispatch:
+
+ jobs:
+   sync-to-hub:
+     runs-on: ubuntu-latest
+     steps:
+       - uses: actions/checkout@v3
+         with:
+           fetch-depth: 0
+           lfs: true
+       - name: Push to hub
+         env:
+           HF_TOKEN: ${{ secrets.HF_TOKEN }}
+         run: git push https://seanpedrickcase:$HF_TOKEN@huggingface.co/spaces/seanpedrickcase/llm_topic_modelling main
.gitignore ADDED
@@ -0,0 +1,16 @@
+ *.pdf
+ *.url
+ *.jpg
+ *.png
+ *.ipynb
+ *.xls
+ *.xlsx
+ examples/*
+ output/*
+ tools/__pycache__/*
+ build/*
+ dist/*
+ logs/*
+ usage/*
+ feedback/*
+ test_code/*
Dockerfile ADDED
@@ -0,0 +1,59 @@
+ # Stage 1: Build dependencies and download models
+ FROM public.ecr.aws/docker/library/python:3.11.9-slim-bookworm AS builder
+
+ # Refresh and clean the apt package lists (no extra system packages are required)
+ RUN apt-get update \
+     && apt-get clean \
+     && rm -rf /var/lib/apt/lists/*
+
+ WORKDIR /src
+
+ COPY requirements.txt .
+
+ RUN pip install --no-cache-dir --target=/install -r requirements.txt
+
+ RUN rm requirements.txt
+
+ # Stage 2: Final runtime image
+ FROM public.ecr.aws/docker/library/python:3.11.9-slim-bookworm
+
+ # Refresh and clean the apt package lists (no extra system packages are required)
+ RUN apt-get update \
+     && apt-get clean \
+     && rm -rf /var/lib/apt/lists/*
+
+ # Set up a new user named "user" with user ID 1000
+ RUN useradd -m -u 1000 user
+
+ # Make the output and logs folders
+ RUN mkdir -p /home/user/app/output \
+     && mkdir -p /home/user/app/logs \
+     && chown -R user:user /home/user/app
+
+ # Copy installed packages from the builder stage
+ COPY --from=builder /install /usr/local/lib/python3.11/site-packages/
+
+ # Switch to the "user" user
+ USER user
+
+ # Set environment variables
+ ENV HOME=/home/user \
+     PATH=/home/user/.local/bin:$PATH \
+     PYTHONPATH=/home/user/app \
+     PYTHONUNBUFFERED=1 \
+     PYTHONDONTWRITEBYTECODE=1 \
+     GRADIO_ALLOW_FLAGGING=never \
+     GRADIO_NUM_PORTS=1 \
+     GRADIO_SERVER_NAME=0.0.0.0 \
+     GRADIO_SERVER_PORT=7860 \
+     GRADIO_THEME=huggingface \
+     TLDEXTRACT_CACHE=$HOME/app/tld/.tld_set_snapshot \
+     SYSTEM=spaces
+
+ # Set the working directory to the user's home directory
+ WORKDIR $HOME/app
+
+ # Copy the current directory contents into the container at $HOME/app, setting the owner to the user
+ COPY --chown=user . $HOME/app
+
+ CMD ["python", "app.py"]
README.md ADDED
@@ -0,0 +1,20 @@
+ ---
+ title: Large language model topic modeller
+ emoji: 📝
+ colorFrom: purple
+ colorTo: yellow
+ sdk: gradio
+ sdk_version: 5.6.0
+ app_file: app.py
+ pinned: false
+ license: cc-by-nc-4.0
+ ---
+
+ # Large language model topic modelling
+
+ Extract topics and summarise outputs using Large Language Models (LLMs; Gemini Flash/Pro, or Claude 3 through AWS Bedrock if running on AWS). The app queries the LLM with batches of responses to produce summary tables, which are then compared iteratively to output a table with the general topics, subtopics, topic sentiment, and the relevant text rows related to them. The prompts are designed for topic modelling public consultations, but they can be adapted to other contexts (see the LLM settings tab to modify them).
+
+ You can use an AWS Bedrock model (Claude 3, paid) or Gemini (a free API, but with strict limits for the Pro model). Because of the strict API limits on the best model (Gemini 1.5 Pro), using Gemini requires an API key. To set up your own Gemini API key, go here: https://aistudio.google.com/app/u/1/plan_information.
+
+ NOTE: **API calls to Gemini are not considered secure**, so please only submit redacted, non-sensitive tabular files to this service. AWS Bedrock API calls are considered to be secure.
+
+ Large language models are not 100% accurate and may produce biased or harmful outputs. All outputs from this app **absolutely need to be checked by a human** for harmful content, hallucinations, and accuracy.
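
A minimal local-launch sketch, assuming the repo root as the working directory and a valid Gemini key (the key value below is a placeholder): the app falls back to the `GOOGLE_API_KEY` environment variable when no key is typed into the UI (see `construct_gemini_generative_model` in `tools/llm_api_call.py`).

```python
# Hedged sketch of a local launch; "<your-gemini-api-key>" is a placeholder.
import os
import runpy

os.environ["GOOGLE_API_KEY"] = "<your-gemini-api-key>"  # fallback read by tools/llm_api_call.py
runpy.run_path("app.py", run_name="__main__")           # app.py only calls launch() under the __main__ guard
```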
app.py ADDED
@@ -0,0 +1,260 @@
+ import os
+ import socket
+ from tools.helper_functions import ensure_output_folder_exists, add_folder_to_path, put_columns_in_df, get_connection_params, output_folder, get_or_create_env_var, reveal_feedback_buttons, wipe_logs, model_full_names, view_table
+ from tools.aws_functions import upload_file_to_s3
+ from tools.llm_api_call import llm_query, load_in_data_file, load_in_previous_data_files, sample_reference_table_summaries, summarise_output_topics
+ from tools.auth import authenticate_user
+ from tools.prompts import initial_table_prompt, prompt2, prompt3, system_prompt, add_existing_topics_system_prompt, add_existing_topics_prompt
+ #from tools.aws_functions import load_data_from_aws
+ import gradio as gr
+ import pandas as pd
+
+ from datetime import datetime
+ today_rev = datetime.now().strftime("%Y%m%d")
+
+ ensure_output_folder_exists()
+
+ host_name = socket.gethostname()
+
+ access_logs_data_folder = 'logs/' + today_rev + '/' + host_name + '/'
+ feedback_data_folder = 'feedback/' + today_rev + '/' + host_name + '/'
+ usage_data_folder = 'usage/' + today_rev + '/' + host_name + '/'
+
+ batch_size_default = 20
+
+ # Create the gradio interface
+ app = gr.Blocks(theme = gr.themes.Base())
+
+ with app:
+
+     ###
+     # STATE VARIABLES
+     ###
+
+     text_output_file_list_state = gr.State([])
+     log_files_output_list_state = gr.State([])
+     first_loop_state = gr.State(True)
+     second_loop_state = gr.State(False)
+
+     file_data_state = gr.State(pd.DataFrame())
+     master_topic_df_state = gr.State(pd.DataFrame())
+     master_reference_df_state = gr.State(pd.DataFrame())
+     master_unique_topics_df_state = gr.State(pd.DataFrame())
+
+     session_hash_state = gr.State()
+     s3_output_folder_state = gr.State()
+
+     # Logging state
+     log_file_name = 'log.csv'
+
+     access_logs_state = gr.State(access_logs_data_folder + log_file_name)
+     access_s3_logs_loc_state = gr.State(access_logs_data_folder)
+     usage_logs_state = gr.State(usage_data_folder + log_file_name)
+     usage_s3_logs_loc_state = gr.State(usage_data_folder)
+     feedback_logs_state = gr.State(feedback_data_folder + log_file_name)
+     feedback_s3_logs_loc_state = gr.State(feedback_data_folder)
+
+     # Summary state objects
+     summary_reference_table_sample_state = gr.State(pd.DataFrame())
+     master_reference_df_revised_summaries_state = gr.State(pd.DataFrame())
+     master_unique_topics_df_revised_summaries_state = gr.State(pd.DataFrame())
+     summarised_references_markdown = gr.Markdown("", visible=False)
+     summarised_outputs_list = gr.Dropdown(value=[], choices=[], visible=False, label="List of summarised outputs", allow_custom_value=True)
+     latest_summary_completed_num = gr.Number(0, visible=False)
+
+     ###
+     # UI LAYOUT
+     ###
+
+     gr.Markdown(
+     """# Large language model topic modelling
+
+     Extract topics and summarise outputs using Large Language Models (LLMs; Gemini Flash/Pro, or Claude 3 through AWS Bedrock if running on AWS). The app queries the LLM with batches of responses to produce summary tables, which are then compared iteratively to output a table with the general topics, subtopics, topic sentiment, and the relevant text rows related to them. The prompts are designed for topic modelling public consultations, but they can be adapted to other contexts (see the LLM settings tab to modify them).
+
+     You can use an AWS Bedrock model (Claude 3, paid) or Gemini (a free API, but with strict limits for the Pro model). Because of the strict API limits on the best model (Gemini 1.5 Pro), using Gemini requires an API key. To set up your own Gemini API key, go here: https://aistudio.google.com/app/u/1/plan_information.
+
+     NOTE: **API calls to Gemini are not considered secure**, so please only submit redacted, non-sensitive tabular files to this service. AWS Bedrock API calls are considered to be secure.
+
+     Large language models are not 100% accurate and may produce biased or harmful outputs. All outputs from this app **absolutely need to be checked by a human** for harmful content, hallucinations, and accuracy.""")
+
+     with gr.Tab(label="Extract topics"):
+         gr.Markdown(
+         """
+         ### Choose a tabular data file (xlsx or csv) of consultation responses to summarise.
+         """
+         )
+         with gr.Row():
+             model_choice = gr.Dropdown(value = "gemini-1.5-flash-002", choices = model_full_names, label="LLM model to use", multiselect=False)
+             in_api_key = gr.Textbox(value = "", label="Enter Gemini API key (only if using Google API models)", lines=1, type="password")
+
+         with gr.Accordion("Upload xlsx or csv files with consultation responses", open = True):
+             in_data_files = gr.File(label="Choose Excel or csv files", file_count= "multiple", file_types=['.xlsx', '.xls', '.csv', '.parquet', '.csv.gz'])
+
+         in_excel_sheets = gr.Dropdown(choices=["Choose Excel sheet with responses"], multiselect = False, label="Select the Excel sheet that has the responses.", visible=False, allow_custom_value=True)
+         in_colnames = gr.Dropdown(choices=["Choose column with responses"], multiselect = False, label="Select the column that contains the responses (showing columns present across all files).", allow_custom_value=True, interactive=True)
+
+         with gr.Accordion("I have my own list of topics (zero shot topic modelling).", open = False):
+             candidate_topics = gr.File(label="Input topics from file (csv). The file should have at least one column with a header and topic keywords in the cells below. Topics will be taken from the first column of the file.")
+
+         context_textbox = gr.Textbox(label="Write a short description (one sentence or less) giving the large language model context about your consultation")
+
+         extract_topics_btn = gr.Button("Extract topics from open text", variant="primary")
+
+         text_output_summary = gr.Markdown(value="### Language model response will appear here")
+         text_output_file = gr.File(label="Output files")
+         latest_batch_completed = gr.Number(value=0, label="Number of files prepared", interactive=False, visible=False)
+         # Duplicate version of the above variable for when you don't want to initiate the summarisation loop
+         latest_batch_completed_no_loop = gr.Number(value=0, label="Number of files prepared", interactive=False, visible=False)
+
+         data_feedback_title = gr.Markdown(value="## Please give feedback", visible=False)
+         data_feedback_radio = gr.Radio(label="Please give some feedback about the results of the topic extraction.",
+                                        choices=["The results were good", "The results were not good"], visible=False)
+         data_further_details_text = gr.Textbox(label="Please give more detailed feedback about the results:", visible=False)
+         data_submit_feedback_btn = gr.Button(value="Submit feedback", visible=False)
+
+         with gr.Row():
+             s3_logs_output_textbox = gr.Textbox(label="Feedback submission logs", visible=False)
+
+     with gr.Tab(label="Summarise topic outputs"):
+         gr.Markdown(
+         """
+         ### Load in data files from a consultation summarisation to summarise the outputs.
+         """)
+         with gr.Accordion("Upload reference data file and unique data files", open = True):
+             summarisation_in_previous_data_files = gr.File(label="Choose output csv files", file_count= "multiple", file_types=['.xlsx', '.xls', '.csv', '.parquet', '.csv.gz'])
+             summarisation_in_previous_data_files_status = gr.Textbox(value = "", label="Previous file input", visible=False)
+             summarise_previous_data_btn = gr.Button("Summarise existing topics", variant="primary")
+             summary_output_files = gr.File(label="Summarised output files", interactive=False)
+
+     with gr.Tab(label="Continue previous topic extraction"):
+         gr.Markdown(
+         """
+         ### Load in data files from a previous attempt at summarising a consultation to continue it.
+         """)
+
+         with gr.Accordion("Upload reference data file and unique data files", open = True):
+             in_previous_data_files = gr.File(label="Choose output csv files", file_count= "multiple", file_types=['.xlsx', '.xls', '.csv', '.parquet', '.csv.gz'])
+             in_previous_data_files_status = gr.Textbox(value = "", label="Previous file input")
+             continue_previous_data_files_btn = gr.Button(value="Continue previous topic extraction", variant="primary")
+
+     with gr.Tab(label="View output topics table"):
+         gr.Markdown(
+         """
+         ### View a 'unique_topic_table' csv file in markdown format.
+         """)
+
+         in_view_table = gr.File(label="Choose unique topic csv files", file_count= "single", file_types=['.csv', '.parquet', '.csv.gz'])
+         view_table_markdown = gr.Markdown(value = "", label="View table")
+
+     with gr.Tab(label="LLM settings"):
+         gr.Markdown(
+         """
+         Define settings that affect large language model output.
+         """)
+         with gr.Accordion("Settings for LLM generation", open = True):
+             temperature_slide = gr.Slider(minimum=0.1, maximum=1.0, value=0.3, label="Choose LLM temperature setting")
+             batch_size_number = gr.Number(label = "Number of responses to submit in a single LLM query", value = batch_size_default, precision=0)
+             random_seed = gr.Number(value=42, label="Random seed for LLM generation", visible=False)
+
+         with gr.Accordion("Prompt settings", open = True):
+             number_of_prompts = gr.Number(value=1, label="Number of prompts to send to LLM in sequence", minimum=1, maximum=3)
+             system_prompt_textbox = gr.Textbox(label="System prompt", lines = 4, value = system_prompt)
+             initial_table_prompt_textbox = gr.Textbox(label = "Prompt 1", lines = 8, value = initial_table_prompt)
+             prompt_2_textbox = gr.Textbox(label = "Prompt 2", lines = 8, value = prompt2, visible=False)
+             prompt_3_textbox = gr.Textbox(label = "Prompt 3", lines = 8, value = prompt3, visible=False)
+             add_to_existing_topics_system_prompt_textbox = gr.Textbox(label="Summary system prompt", lines = 4, value = add_existing_topics_system_prompt)
+             add_to_existing_topics_prompt_textbox = gr.Textbox(label = "Summary prompt", lines = 8, value = add_existing_topics_prompt)
+
+         log_files_output = gr.File(label="Log file output", interactive=False)
+         conversation_metadata_textbox = gr.Textbox(label="Query metadata - usage counts and other parameters", interactive=False, lines=8)
+
+     # Invisible text boxes to hold the session hash/username and file names just for logging purposes
+     session_hash_textbox = gr.Textbox(label = "Session hash", value="", visible=False)
+     data_file_names_textbox = gr.Textbox(label = "Data file name", value="", visible=False)
+     estimated_time_taken_number = gr.Number(label= "Estimated time taken (seconds)", value=0.0, precision=1, visible=False) # This keeps track of the time taken to process files, for logging purposes.
+     total_number_of_batches = gr.Number(label = "Current batch number", value = 1, precision=0, visible=False)
+
+     text_output_logs = gr.Textbox(label = "Output summary logs", visible=False)
+
+     # AWS options - not yet implemented
+     # with gr.Tab(label="Advanced options"):
+     #     with gr.Accordion(label = "AWS data access", open = True):
+     #         aws_password_box = gr.Textbox(label="Password for AWS data access (ask the Data team if you don't have this)")
+     #         with gr.Row():
+     #             in_aws_file = gr.Dropdown(label="Choose file to load from AWS (only valid for API Gateway app)", choices=["None", "Lambeth borough plan"])
+     #             load_aws_data_button = gr.Button(value="Load data from AWS", variant="secondary")
+     #
+     #         aws_log_box = gr.Textbox(label="AWS data load status")
+
+     # ### Loading AWS data ###
+     # load_aws_data_button.click(fn=load_data_from_aws, inputs=[in_aws_file, aws_password_box], outputs=[in_file, aws_log_box])
+
+     ###
+     # INTERACTIVE ELEMENT FUNCTIONS
+     ###
+
+     # Tabular data upload
+     in_data_files.upload(fn=put_columns_in_df, inputs=[in_data_files], outputs=[in_colnames, in_excel_sheets, data_file_names_textbox])
+
+     extract_topics_btn.click(load_in_data_file,
+         inputs = [in_data_files, in_colnames, batch_size_number], outputs = [file_data_state, data_file_names_textbox, total_number_of_batches], api_name="load_data").then(\
+         fn=llm_query,
+         inputs=[file_data_state, master_topic_df_state, master_reference_df_state, master_unique_topics_df_state, text_output_summary, data_file_names_textbox, total_number_of_batches, in_api_key, temperature_slide, in_colnames, model_choice, candidate_topics, latest_batch_completed, text_output_summary, text_output_file_list_state, log_files_output_list_state, first_loop_state, conversation_metadata_textbox, initial_table_prompt_textbox, prompt_2_textbox, prompt_3_textbox, system_prompt_textbox, add_to_existing_topics_system_prompt_textbox, add_to_existing_topics_prompt_textbox, number_of_prompts, batch_size_number, context_textbox, estimated_time_taken_number],
+         outputs=[text_output_summary, master_topic_df_state, master_unique_topics_df_state, master_reference_df_state, text_output_file, text_output_file_list_state, latest_batch_completed, log_files_output, log_files_output_list_state, conversation_metadata_textbox, estimated_time_taken_number, summarisation_in_previous_data_files], api_name="llm_query")
+
+     # If the output file count text box changes, keep processing batches from each data file until done, then reveal the feedback buttons.
+     latest_batch_completed.change(fn=llm_query,
+         inputs=[file_data_state, master_topic_df_state, master_reference_df_state, master_unique_topics_df_state, text_output_summary, data_file_names_textbox, total_number_of_batches, in_api_key, temperature_slide, in_colnames, model_choice, candidate_topics, latest_batch_completed, text_output_summary, text_output_file_list_state, log_files_output_list_state, second_loop_state, conversation_metadata_textbox, initial_table_prompt_textbox, prompt_2_textbox, prompt_3_textbox, system_prompt_textbox, add_to_existing_topics_system_prompt_textbox, add_to_existing_topics_prompt_textbox, number_of_prompts, batch_size_number, context_textbox, estimated_time_taken_number],
+         outputs=[text_output_summary, master_topic_df_state, master_unique_topics_df_state, master_reference_df_state, text_output_file, text_output_file_list_state, latest_batch_completed, log_files_output, log_files_output_list_state, conversation_metadata_textbox, estimated_time_taken_number, summarisation_in_previous_data_files]).\
+         then(fn = reveal_feedback_buttons,
+         outputs=[data_feedback_radio, data_further_details_text, data_submit_feedback_btn, data_feedback_title], scroll_to_output=True)
+
+     # If partially completed consultation files are uploaded, do this. This should then trigger the 'latest_batch_completed' change action above to continue extracting topics.
+     continue_previous_data_files_btn.click(
+         load_in_data_file, inputs = [in_data_files, in_colnames, batch_size_number], outputs = [file_data_state, data_file_names_textbox, total_number_of_batches]).\
+         then(load_in_previous_data_files, inputs=[in_previous_data_files], outputs=[master_reference_df_state, master_unique_topics_df_state, latest_batch_completed, in_previous_data_files_status, data_file_names_textbox])
+
+     # When the button is pressed, summarise the previous data
+     summarise_previous_data_btn.click(load_in_previous_data_files, inputs=[summarisation_in_previous_data_files], outputs=[master_reference_df_state, master_unique_topics_df_state, latest_batch_completed_no_loop, summarisation_in_previous_data_files_status, data_file_names_textbox]).\
+         then(sample_reference_table_summaries, inputs=[master_reference_df_state, master_unique_topics_df_state, random_seed], outputs=[summary_reference_table_sample_state, summarised_references_markdown, master_reference_df_state, master_unique_topics_df_state]).\
+         then(summarise_output_topics, inputs=[summary_reference_table_sample_state, master_unique_topics_df_state, master_reference_df_state, model_choice, in_api_key, summarised_references_markdown, temperature_slide, data_file_names_textbox, summarised_outputs_list, latest_summary_completed_num, conversation_metadata_textbox], outputs=[summary_reference_table_sample_state, master_unique_topics_df_revised_summaries_state, master_reference_df_revised_summaries_state, summary_output_files, summarised_outputs_list, latest_summary_completed_num, conversation_metadata_textbox])
+
+     latest_summary_completed_num.change(summarise_output_topics, inputs=[summary_reference_table_sample_state, master_unique_topics_df_state, master_reference_df_state, model_choice, in_api_key, summarised_references_markdown, temperature_slide, data_file_names_textbox, summarised_outputs_list, latest_summary_completed_num, conversation_metadata_textbox], outputs=[summary_reference_table_sample_state, master_unique_topics_df_revised_summaries_state, master_reference_df_revised_summaries_state, summary_output_files, summarised_outputs_list, latest_summary_completed_num, conversation_metadata_textbox])
+
+     ###
+     # LOGGING AND ON APP LOAD FUNCTIONS
+     ###
+     app.load(get_connection_params, inputs=None, outputs=[session_hash_state, s3_output_folder_state, session_hash_textbox])
+
+     # Log usernames and times of access to file (to know who is using the app when running on AWS)
+     access_callback = gr.CSVLogger(dataset_file_name=log_file_name)
+     access_callback.setup([session_hash_textbox], access_logs_data_folder)
+     session_hash_textbox.change(lambda *args: access_callback.flag(list(args)), [session_hash_textbox], None, preprocess=False).\
+         then(fn = upload_file_to_s3, inputs=[access_logs_state, access_s3_logs_loc_state], outputs=[s3_logs_output_textbox])
+
+     # Log usage when making a query
+     usage_callback = gr.CSVLogger(dataset_file_name=log_file_name)
+     usage_callback.setup([session_hash_textbox, data_file_names_textbox, model_choice, conversation_metadata_textbox, estimated_time_taken_number], usage_data_folder)
+
+     conversation_metadata_textbox.change(lambda *args: usage_callback.flag(list(args)), [session_hash_textbox, data_file_names_textbox, model_choice, conversation_metadata_textbox, estimated_time_taken_number], None, preprocess=False).\
+         then(fn = upload_file_to_s3, inputs=[usage_logs_state, usage_s3_logs_loc_state], outputs=[s3_logs_output_textbox])
+
+     # User submitted feedback
+     feedback_callback = gr.CSVLogger(dataset_file_name=log_file_name)
+     feedback_callback.setup([data_feedback_radio, data_further_details_text, data_file_names_textbox, model_choice, temperature_slide, text_output_summary, conversation_metadata_textbox], feedback_data_folder)
+
+     data_submit_feedback_btn.click(lambda *args: feedback_callback.flag(list(args)), [data_feedback_radio, data_further_details_text, data_file_names_textbox, model_choice, temperature_slide, text_output_summary, conversation_metadata_textbox], None, preprocess=False).\
+         then(fn = upload_file_to_s3, inputs=[feedback_logs_state, feedback_s3_logs_loc_state], outputs=[data_further_details_text])
+
+     in_view_table.upload(view_table, inputs=[in_view_table], outputs=[view_table_markdown])
+
+ # Launch the Gradio app
+ COGNITO_AUTH = get_or_create_env_var('COGNITO_AUTH', '0')
+ print(f'The value of COGNITO_AUTH is {COGNITO_AUTH}')
+
+ if __name__ == "__main__":
+     if COGNITO_AUTH == "1":
+         app.queue().launch(show_error=True, auth=authenticate_user, max_file_size='50mb')
+     else:
+         app.queue().launch(show_error=True, inbrowser=True, max_file_size='50mb')
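
Because the extract-topics chain above registers `api_name="load_data"` and `api_name="llm_query"`, the first step can in principle be driven programmatically with gradio_client; the file name, column name, and printed result below are illustrative assumptions rather than a verified client session.

```python
# Hypothetical call to the /load_data endpoint registered above; the CSV path
# and column name are placeholders.
from gradio_client import Client, handle_file

client = Client("http://localhost:7860/")
result = client.predict(
    [handle_file("responses.csv")],  # in_data_files: list of uploaded files
    "Response",                      # in_colnames: column holding the open text
    20,                              # batch_size_number: responses per LLM query
    api_name="/load_data",
)
print(result)
```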
requirements.txt ADDED
@@ -0,0 +1,11 @@
+ pandas==2.2.3
+ gradio==5.6.0
+ boto3==1.35.71
+ pyarrow==18.1.0
+ openpyxl==3.1.3
+ markdown==3.7
+ tabulate==0.9.0
+ lxml==5.3.0
+ google-generativeai==0.8.3
+ html5lib==1.1
+ beautifulsoup4==4.12.3
+ # The two packages below are imported by tools/llm_api_call.py but were missing from the original list
+ rapidfuzz
+ tqdm
tools/__init__.py ADDED
File without changes
tools/auth.py ADDED
@@ -0,0 +1,48 @@
+ import boto3
+ from tools.helper_functions import get_or_create_env_var
+
+ client_id = get_or_create_env_var('AWS_CLIENT_ID', 'l762du1rg94e1r2q0ii7ls0ef') # This client id is borrowed from async gradio app client
+ print(f'The value of AWS_CLIENT_ID is {client_id}')
+
+ user_pool_id = get_or_create_env_var('AWS_USER_POOL_ID', 'eu-west-2_8fCzl8qej')
+ print(f'The value of AWS_USER_POOL_ID is {user_pool_id}')
+
+ def authenticate_user(username, password, user_pool_id=user_pool_id, client_id=client_id):
+     """Authenticates a user against an AWS Cognito user pool.
+
+     Args:
+         username (str): The username of the user.
+         password (str): The password of the user.
+         user_pool_id (str): The ID of the Cognito user pool.
+         client_id (str): The ID of the Cognito user pool client.
+
+     Returns:
+         bool: True if the user is authenticated, False otherwise.
+     """
+
+     client = boto3.client('cognito-idp')  # Cognito Identity Provider client
+
+     try:
+         response = client.initiate_auth(
+             AuthFlow='USER_PASSWORD_AUTH',
+             AuthParameters={
+                 'USERNAME': username,
+                 'PASSWORD': password,
+             },
+             ClientId=client_id
+         )
+
+         # If successful, you'll receive an AuthenticationResult in the response
+         if response.get('AuthenticationResult'):
+             return True
+         else:
+             return False
+
+     except client.exceptions.NotAuthorizedException:
+         return False
+     except client.exceptions.UserNotFoundException:
+         return False
+     except Exception as e:
+         print(f"An error occurred: {e}")
+         return False
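
A quick way to exercise the function above outside of Gradio is to call it directly; this sketch assumes working AWS credentials in the environment, and the username and password values are made up.

```python
# Manual smoke test for the Cognito flow (the credentials are placeholders).
from tools.auth import authenticate_user

ok = authenticate_user("test-user", "example-password")
print("authenticated" if ok else "rejected")
```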
tools/aws_functions.py ADDED
@@ -0,0 +1,202 @@
+ from typing import Type, List
+ import pandas as pd
+ import boto3
+ import tempfile
+ import os
+ from tools.helper_functions import get_or_create_env_var, RUN_AWS_FUNCTIONS
+
+ PandasDataFrame = Type[pd.DataFrame]
+
+ # Get AWS credentials if required
+ bucket_name = ""
+
+ AWS_REGION = get_or_create_env_var('AWS_REGION', 'eu-west-2')
+ print(f'The value of AWS_REGION is {AWS_REGION}')
+
+ if RUN_AWS_FUNCTIONS == "1":
+     try:
+         bucket_name = os.environ['CONSULTATION_SUMMARY_BUCKET']
+         session = boto3.Session() # profile_name="default"
+     except Exception as e:
+         print(e)
+
+     def get_assumed_role_info():
+         sts_endpoint = 'https://sts.' + AWS_REGION + '.amazonaws.com'
+         sts = boto3.client('sts', region_name=AWS_REGION, endpoint_url=sts_endpoint)
+         response = sts.get_caller_identity()
+
+         # Extract ARN of the assumed role
+         assumed_role_arn = response['Arn']
+
+         # Extract the name of the assumed role from the ARN
+         assumed_role_name = assumed_role_arn.split('/')[-1]
+
+         return assumed_role_arn, assumed_role_name
+
+     try:
+         assumed_role_arn, assumed_role_name = get_assumed_role_info()
+
+         print("Assumed Role ARN:", assumed_role_arn)
+         print("Assumed Role Name:", assumed_role_name)
+
+     except Exception as e:
+         print(e)
+
+ # Download direct from S3 - requires login credentials
+ def download_file_from_s3(bucket_name, key, local_file_path):
+
+     s3 = boto3.client('s3')
+     s3.download_file(bucket_name, key, local_file_path)
+     print(f"File downloaded from S3: s3://{bucket_name}/{key} to {local_file_path}")
+
+ def download_folder_from_s3(bucket_name, s3_folder, local_folder):
+     """
+     Download all files from an S3 folder to a local folder.
+     """
+     s3 = boto3.client('s3')
+
+     # List objects in the specified S3 folder
+     response = s3.list_objects_v2(Bucket=bucket_name, Prefix=s3_folder)
+
+     # Download each object
+     for obj in response.get('Contents', []):
+         # Extract the object key and construct the local file path
+         object_key = obj['Key']
+         local_file_path = os.path.join(local_folder, os.path.relpath(object_key, s3_folder))
+
+         # Create directories if necessary
+         os.makedirs(os.path.dirname(local_file_path), exist_ok=True)
+
+         # Download the object
+         try:
+             s3.download_file(bucket_name, object_key, local_file_path)
+             print(f"Downloaded 's3://{bucket_name}/{object_key}' to '{local_file_path}'")
+         except Exception as e:
+             print(f"Error downloading 's3://{bucket_name}/{object_key}':", e)
+
+ def download_files_from_s3(bucket_name, s3_folder, local_folder, filenames):
+     """
+     Download specific files from an S3 folder to a local folder.
+     """
+     s3 = boto3.client('s3')
+
+     print("Trying to download file: ", filenames)
+
+     if filenames == '*':
+         # List all objects in the S3 folder
+         print("Trying to download all files in AWS folder: ", s3_folder)
+         response = s3.list_objects_v2(Bucket=bucket_name, Prefix=s3_folder)
+
+         print("Found files in AWS folder: ", response.get('Contents', []))
+
+         filenames = [obj['Key'].split('/')[-1] for obj in response.get('Contents', [])]
+
+         print("Found filenames in AWS folder: ", filenames)
+
+     for filename in filenames:
+         object_key = os.path.join(s3_folder, filename)
+         local_file_path = os.path.join(local_folder, filename)
+
+         # Create directories if necessary
+         os.makedirs(os.path.dirname(local_file_path), exist_ok=True)
+
+         # Download the object
+         try:
+             s3.download_file(bucket_name, object_key, local_file_path)
+             print(f"Downloaded 's3://{bucket_name}/{object_key}' to '{local_file_path}'")
+         except Exception as e:
+             print(f"Error downloading 's3://{bucket_name}/{object_key}':", e)
+
+ def load_data_from_aws(in_aws_keyword_file, aws_password="", bucket_name=bucket_name):
+
+     temp_dir = tempfile.mkdtemp()
+     local_address_stub = temp_dir + '/doc-redaction/'
+     files = []
+
+     if 'LAMBETH_BOROUGH_PLAN_PASSWORD' not in os.environ:
+         out_message = "Can't verify password for dataset access. Do you have a valid AWS connection? Data not loaded."
+         return files, out_message
+
+     if aws_password:
+         if "Lambeth borough plan" in in_aws_keyword_file and aws_password == os.environ['LAMBETH_BOROUGH_PLAN_PASSWORD']:
+
+             s3_folder_stub = 'example-data/lambeth-borough-plan/latest/'
+
+             local_folder_path = local_address_stub
+
+             # Check if the folder exists
+             if not os.path.exists(local_folder_path):
+                 print(f"Folder {local_folder_path} does not exist! Making folder.")
+                 os.mkdir(local_folder_path)
+
+             # Check if the folder is empty
+             if len(os.listdir(local_folder_path)) == 0:
+                 print(f"Folder {local_folder_path} is empty")
+                 # Download data
+                 download_files_from_s3(bucket_name, s3_folder_stub, local_folder_path, filenames='*')
+
+                 print("AWS data downloaded")
+             else:
+                 print(f"Folder {local_folder_path} is not empty")
+
+             #files = os.listdir(local_folder_stub)
+             #print(files)
+
+             files = [os.path.join(local_folder_path, f) for f in os.listdir(local_folder_path) if os.path.isfile(os.path.join(local_folder_path, f))]
+
+             out_message = "Data successfully loaded from AWS"
+             print(out_message)
+
+         else:
+             out_message = "Data not loaded from AWS"
+             print(out_message)
+     else:
+         out_message = "No password provided. Please ask the data team for access if you need this."
+         print(out_message)
+
+     return files, out_message
+
+ def upload_file_to_s3(local_file_paths: List[str], s3_key: str, s3_bucket: str = bucket_name):
+     """
+     Uploads file(s) from the local machine to Amazon S3.
+
+     Args:
+     - local_file_paths: Local file path(s) of the file(s) to upload.
+     - s3_key: Key (path) prefix for the file(s) in the S3 bucket.
+     - s3_bucket: Name of the S3 bucket.
+
+     Returns:
+     - Message as variable/printed to console
+     """
+     final_out_message = []
+
+     s3_client = boto3.client('s3')
+
+     if isinstance(local_file_paths, str):
+         local_file_paths = [local_file_paths]
+
+     for file in local_file_paths:
+         try:
+             # Get the file name off the file path
+             file_name = os.path.basename(file)
+
+             s3_key_full = s3_key + file_name
+             print("S3 key: ", s3_key_full)
+
+             s3_client.upload_file(file, s3_bucket, s3_key_full)
+             out_message = "File " + file_name + " uploaded successfully!"
+             print(out_message)
+
+         except Exception as e:
+             out_message = f"Error uploading file(s): {e}"
+             print(out_message)
+
+         final_out_message.append(out_message)
+
+     final_out_message_str = '\n'.join(final_out_message)
+
+     return final_out_message_str
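
As a usage note, `upload_file_to_s3` appends each file's basename to the supplied `s3_key`, so the key should normally end with a slash. An illustrative call follows; the bucket name and paths are placeholders (in the app, the bucket comes from the `CONSULTATION_SUMMARY_BUCKET` environment variable).

```python
# Illustrative upload of one output file; the bucket name and paths are
# placeholders rather than values from this commit.
from tools.aws_functions import upload_file_to_s3

msg = upload_file_to_s3(
    ["output/unique_topic_table.csv"],  # local file(s); a bare string is also accepted
    "usage/20240101/myhost/",           # s3_key prefix; the file name is appended to it
    s3_bucket="my-consultation-bucket",
)
print(msg)
```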
tools/helper_functions.py ADDED
@@ -0,0 +1,275 @@
+ import os
+ import gradio as gr
+ import pandas as pd
+
+
+ def get_or_create_env_var(var_name, default_value):
+     # Get the environment variable if it exists
+     value = os.environ.get(var_name)
+
+     # If it doesn't exist, set it to the default value
+     if value is None:
+         os.environ[var_name] = default_value
+         value = default_value
+
+     return value
+
+ RUN_AWS_FUNCTIONS = get_or_create_env_var("RUN_AWS_FUNCTIONS", "0")
+ print(f'The value of RUN_AWS_FUNCTIONS is {RUN_AWS_FUNCTIONS}')
+
+ if RUN_AWS_FUNCTIONS == "1":
+     model_full_names = ["anthropic.claude-3-haiku-20240307-v1:0", "anthropic.claude-3-sonnet-20240229-v1:0", "gemini-1.5-flash-002", "gemini-1.5-pro-002"]
+     model_short_names = ["haiku", "sonnet", "gemini_flash", "gemini_pro"]
+ else:
+     model_full_names = ["gemini-1.5-flash-002", "gemini-1.5-pro-002"]
+     model_short_names = ["gemini_flash", "gemini_pro"]
+
+ # Maps each full model name to its short name
+ model_name_map = {full: short for full, short in zip(model_full_names, model_short_names)}
+
+ # Retrieving or setting the output folder
+ env_var_name = 'GRADIO_OUTPUT_FOLDER'
+ default_value = 'output/'
+
+ output_folder = get_or_create_env_var(env_var_name, default_value)
+ print(f'The value of {env_var_name} is {output_folder}')
+
+ def get_file_path_with_extension(file_path):
+     # Get the basename of the file (e.g., "example.txt" from "/path/to/example.txt")
+     basename = os.path.basename(file_path)
+
+     # Return the basename with its extension
+     return basename
+
+ def get_file_path_end(file_path):
+     # First, get the basename of the file (e.g., "example.txt" from "/path/to/example.txt")
+     basename = os.path.basename(file_path)
+
+     # Then, split the basename and its extension and return only the basename without the extension
+     filename_without_extension, _ = os.path.splitext(basename)
+
+     #print(filename_without_extension)
+
+     return filename_without_extension
+
+ def detect_file_type(filename):
+     """Detect the file type based on its extension."""
+     if filename.endswith('.csv') or filename.endswith('.csv.gz') or filename.endswith('.zip'):
+         return 'csv'
+     elif filename.endswith('.xlsx'):
+         return 'xlsx'
+     elif filename.endswith('.parquet'):
+         return 'parquet'
+     elif filename.endswith('.pdf'):
+         return 'pdf'
+     elif filename.endswith('.jpg'):
+         return 'jpg'
+     elif filename.endswith('.jpeg'):
+         return 'jpeg'
+     elif filename.endswith('.png'):
+         return 'png'
+     else:
+         raise ValueError("Unsupported file type.")
+
+ def read_file(filename):
+     """Read the file based on its detected type."""
+     file_type = detect_file_type(filename)
+
+     if file_type == 'csv':
+         return pd.read_csv(filename, low_memory=False)
+     elif file_type == 'xlsx':
+         return pd.read_excel(filename)
+     elif file_type == 'parquet':
+         return pd.read_parquet(filename)
+
+ def view_table(file_path: str, max_width: int = 60):
+     df = pd.read_csv(file_path)
+
+     df_cleaned = df.replace('\n', ' ', regex=True)
+
+     # Wrap text in each column to the specified max width, keeping whole words together
+     def wrap_text(text):
+         if isinstance(text, str):
+             words = text.split(' ')
+             wrapped_lines = []
+             current_line = ""
+
+             for word in words:
+                 # Check if adding the next word exceeds the max width
+                 if len(current_line) + len(word) + 1 > max_width:  # +1 for the space
+                     wrapped_lines.append(current_line)
+                     current_line = word  # Start a new line with the current word
+                 else:
+                     if current_line:  # If current_line is not empty, add a space
+                         current_line += ' '
+                     current_line += word
+
+             # Add any remaining text in current_line to wrapped_lines
+             if current_line:
+                 wrapped_lines.append(current_line)
+
+             return '<br>'.join(wrapped_lines)  # Join lines with <br>
+         return text
+
+     # Apply wrap_text to each element of each column
+     df_cleaned = df_cleaned.apply(lambda col: col.map(wrap_text))
+
+     table_out = df_cleaned.to_markdown(index=False)
+
+     return table_out
+
+ def ensure_output_folder_exists():
+     """Checks if the 'output/' folder exists, and creates it if not."""
+
+     folder_name = "output/"
+
+     if not os.path.exists(folder_name):
+         # Create the folder if it doesn't exist
+         os.makedirs(folder_name)
+         print("Created the 'output/' folder.")
+     else:
+         print("The 'output/' folder already exists.")
+
+ def put_columns_in_df(in_file):
+     new_choices = []
+     concat_choices = []
+     all_sheet_names = []
+     number_of_excel_files = 0
+
+     for file in in_file:
+         file_name = file.name
+         file_type = detect_file_type(file_name)
+         #print("File type is:", file_type)
+
+         file_end = get_file_path_with_extension(file_name)
+
+         if file_type == 'xlsx':
+             number_of_excel_files += 1
+             new_choices = []
+             print("Running through all xlsx sheets")
+             anon_xlsx = pd.ExcelFile(file_name)
+             new_sheet_names = anon_xlsx.sheet_names
+             # Iterate through the sheet names
+             for sheet_name in new_sheet_names:
+                 # Read each sheet into a DataFrame
+                 df = pd.read_excel(file_name, sheet_name=sheet_name)
+
+                 # Process the DataFrame (e.g., print its contents)
+                 print(f"Sheet Name: {sheet_name}")
+                 print(df.head())  # Print the first few rows
+
+                 new_choices.extend(list(df.columns))
+
+             all_sheet_names.extend(new_sheet_names)
+
+         else:
+             df = read_file(file_name)
+             new_choices = list(df.columns)
+
+         concat_choices.extend(new_choices)
+
+     # Drop duplicate columns
+     concat_choices = list(set(concat_choices))
+
+     if number_of_excel_files > 0:
+         return gr.Dropdown(choices=concat_choices, value=concat_choices[0]), gr.Dropdown(choices=all_sheet_names, value=all_sheet_names[0], visible=True), file_end
+     else:
+         return gr.Dropdown(choices=concat_choices, value=concat_choices[0]), gr.Dropdown(visible=False), file_end
+
+ # The following function is only relevant for locally-created executable files based on this app (when using pyinstaller, it creates an _internal folder that contains tesseract and poppler; these need to be added to the system path to enable the app to run)
+ def add_folder_to_path(folder_path: str):
+     '''
+     Check if a folder exists on your system. If so, get the absolute path and then add it to the system PATH variable if it isn't already there.
+     '''
+
+     if os.path.exists(folder_path) and os.path.isdir(folder_path):
+         print(folder_path, "folder exists.")
+
+         # Resolve the relative path to an absolute path
+         absolute_path = os.path.abspath(folder_path)
+
+         current_path = os.environ['PATH']
+         if absolute_path not in current_path.split(os.pathsep):
+             full_path_extension = absolute_path + os.pathsep + current_path
+             os.environ['PATH'] = full_path_extension
+             #print("Updated PATH with:", full_path_extension)
+         else:
+             print(f"Directory {folder_path} already exists in PATH.")
+     else:
+         print(f"Folder not found at {folder_path} - not added to PATH")
+
+ # Upon running a process, the feedback buttons are revealed
+ def reveal_feedback_buttons():
+     return gr.Radio(visible=True), gr.Textbox(visible=True), gr.Button(visible=True), gr.Markdown(visible=True)
+
+ def wipe_logs(feedback_logs_loc, usage_logs_loc):
+     try:
+         os.remove(feedback_logs_loc)
+     except Exception as e:
+         print("Could not remove feedback logs file", e)
+     try:
+         os.remove(usage_logs_loc)
+     except Exception as e:
+         print("Could not remove usage logs file", e)
+
+ async def get_connection_params(request: gr.Request):
+     base_folder = ""
+
+     if request:
+         #print("request user:", request.username)
+
+         #request_data = await request.json() # Parse JSON body
+         #print("All request data:", request_data)
+         #context_value = request_data.get('context')
+         #if 'context' in request_data:
+         #    print("Request context dictionary:", request_data['context'])
+
+         #print("Request headers dictionary:", request.headers)
+         #print("All host elements", request.client)
+         #print("IP address:", request.client.host)
+         #print("Query parameters:", dict(request.query_params))
+         # To get the underlying FastAPI items you would need to use await and some fancy @ stuff for a live query: https://fastapi.tiangolo.com/vi/reference/request/
+         #print("Request dictionary to object:", request.request.body())
+         print("Session hash:", request.session_hash)
+
+         # Retrieving or setting CUSTOM_CLOUDFRONT_HEADER
+         CUSTOM_CLOUDFRONT_HEADER_var = get_or_create_env_var('CUSTOM_CLOUDFRONT_HEADER', '')
+         #print(f'The value of CUSTOM_CLOUDFRONT_HEADER is {CUSTOM_CLOUDFRONT_HEADER_var}')
+
+         # Retrieving or setting CUSTOM_CLOUDFRONT_HEADER_VALUE
+         CUSTOM_CLOUDFRONT_HEADER_VALUE_var = get_or_create_env_var('CUSTOM_CLOUDFRONT_HEADER_VALUE', '')
+         #print(f'The value of CUSTOM_CLOUDFRONT_HEADER_VALUE_var is {CUSTOM_CLOUDFRONT_HEADER_VALUE_var}')
+
+         if CUSTOM_CLOUDFRONT_HEADER_var and CUSTOM_CLOUDFRONT_HEADER_VALUE_var:
+             if CUSTOM_CLOUDFRONT_HEADER_var in request.headers:
+                 supplied_cloudfront_custom_value = request.headers[CUSTOM_CLOUDFRONT_HEADER_var]
+                 if supplied_cloudfront_custom_value == CUSTOM_CLOUDFRONT_HEADER_VALUE_var:
+                     print("Custom Cloudfront header found:", supplied_cloudfront_custom_value)
+                 else:
+                     raise ValueError("Custom Cloudfront header value does not match expected value.")
+
+         # Get the output save folder from 1 - a username passed in from a direct Cognito login, 2 - a Cognito ID header passed through a Lambda authenticator, or 3 - the session hash.
+         if request.username:
+             out_session_hash = request.username
+             base_folder = "user-files/"
+             print("Request username found:", out_session_hash)
+
+         elif 'x-cognito-id' in request.headers:
+             out_session_hash = request.headers['x-cognito-id']
+             base_folder = "user-files/"
+             print("Cognito ID found:", out_session_hash)
+
+         else:
+             out_session_hash = request.session_hash
+             base_folder = "temp-files/"
+             # print("Cognito ID not found. Using session hash as save folder:", out_session_hash)
+
+         output_folder = base_folder + out_session_hash + "/"
+         #if bucket_name:
+         #    print("S3 output folder is: " + "s3://" + bucket_name + "/" + output_folder)
+
+         return out_session_hash, output_folder, out_session_hash
+     else:
+         print("No session parameters found.")
+         return "", "", ""
tools/llm_api_call.py ADDED
@@ -0,0 +1,1516 @@
1
+ import os
2
+ import google.generativeai as ai
3
+ import pandas as pd
4
+ import numpy as np
5
+ import gradio as gr
6
+ import markdown
7
+ import time
8
+ import boto3
9
+ import json
10
+ import string
11
+ import re
12
+ from rapidfuzz import process, fuzz
13
+ from tqdm import tqdm
14
+ from gradio import Progress
15
+ from typing import List, Tuple
16
+ from io import StringIO
17
+
18
+ from tools.prompts import initial_table_prompt, prompt2, prompt3, system_prompt, summarise_topic_descriptions_prompt, summarise_topic_descriptions_system_prompt, add_existing_topics_system_prompt, add_existing_topics_prompt
19
+ from tools.helper_functions import output_folder, detect_file_type, get_file_path_end, read_file, get_or_create_env_var, model_name_map
20
+
21
+ # ResponseObject class for AWS Bedrock calls
22
+ class ResponseObject:
23
+ def __init__(self, text, usage_metadata):
24
+ self.text = text
25
+ self.usage_metadata = usage_metadata
26
+
27
+ max_tokens = 4096
28
+ timeout_wait = 30 # AWS now seems to have a 60 second minimum wait between API calls
29
+ number_of_api_retry_attempts = 5
30
+ max_time_for_loop = 180
31
+
32
+
33
+ AWS_DEFAULT_REGION = get_or_create_env_var('AWS_DEFAULT_REGION', 'eu-west-2')
34
+ print(f'The value of AWS_DEFAULT_REGION is {AWS_DEFAULT_REGION}')
35
+
36
+ bedrock_runtime = boto3.client('bedrock-runtime', region_name=AWS_DEFAULT_REGION)
37
+
38
+ ### HELPER FUNCTIONS
39
+
40
+ def normalise_string(text):
41
+ # Replace two or more dashes with a single dash
42
+ text = re.sub(r'-{2,}', '-', text)
43
+
44
+ # Replace two or more spaces with a single space
45
+ text = re.sub(r'\s{2,}', ' ', text)
46
+
47
+ return text
48
+
49
+ def load_in_file(file_path: str, colname:str=""):
50
+ """
51
+ Loads in a tabular data file and returns data and file name.
52
+
53
+ Parameters:
54
+ - file_path (str): The path to the file to be processed.
55
+ """
56
+ file_type = detect_file_type(file_path)
57
+ print("File type is:", file_type)
58
+
59
+ file_name = get_file_path_end(file_path)
60
+ file_data = read_file(file_path)
61
+
62
+ if colname:
63
+ file_data[colname] = file_data[colname].fillna("")
64
+
65
+ file_data[colname] = file_data[colname].astype(str).str.replace("\bnan\b", "", regex=True)
66
+
67
+ print(file_data[colname])
68
+
69
+ return file_data, file_name
70
+
71
+ def load_in_data_file(file_paths:List[str], in_colnames:List[str], batch_size:int=50):
72
+ '''Load in data table, work out how many batches needed.'''
73
+
74
+ try:
75
+ file_data, file_name = load_in_file(file_paths[0], colname=in_colnames)
76
+ num_batches = (len(file_data) // batch_size) + 1
77
+ print("Total number of batches:", num_batches)
78
+
79
+ except Exception as e:
80
+ print(e)
81
+ file_data = pd.DataFrame()
82
+ file_name = ""
83
+ num_batches = 1
84
+
85
+ return file_data, file_name, num_batches
86
+
87
+ def load_in_previous_data_files(file_paths_partial_output:List[str]):
88
+ '''Load in data table from a partially completed consultation summary to continue it.'''
89
+
90
+ reference_file_data = pd.DataFrame()
91
+ reference_file_name = ""
92
+ unique_file_data = pd.DataFrame()
93
+ unique_file_name = ""
94
+ out_message = ""
95
+ latest_batch = 0
96
+
97
+ for file in file_paths_partial_output:
98
+ # If reference table
99
+ if 'reference_table' in file.name:
100
+ try:
101
+ reference_file_data, reference_file_name = load_in_file(file)
102
+ print("reference_file_data:", reference_file_data.head(2))
103
+ out_message = out_message + " Reference file load successful"
104
+ except Exception as e:
105
+ out_message = "Could not load reference file data:" + str(e)
106
+ print("Could not load reference file data:", e)
107
+ # If unique table
108
+ if 'unique_topics' in file.name:
109
+ try:
110
+ unique_file_data, unique_file_name = load_in_file(file)
111
+ print("unique_topics_file:", unique_file_data.head(2))
112
+ out_message = out_message + " Unique table file load successful"
113
+ except Exception as e:
114
+ out_message = "Could not load unique table file data:" + str(e)
115
+ print("Could not load unique table file data:", e)
116
+ if 'batch_' in file.name:
117
+ latest_batch = re.search(r'batch_(\d+)', file.name).group(1)
118
+ print("latest batch:", latest_batch)
119
+ latest_batch = int(latest_batch)
120
+
121
+ if latest_batch == 0:
122
+ out_message = out_message + " Latest batch number not found."
123
+ if reference_file_data.empty:
124
+ out_message = out_message + " No reference data table provided."
125
+ if unique_file_data.empty:
126
+ out_message = out_message + " No unique data table provided."
127
+
128
+ print(out_message)
129
+
130
+ return reference_file_data, unique_file_data, latest_batch, out_message, reference_file_name
131
+
132
+ def data_file_to_markdown_table(file_data:pd.DataFrame, file_name:str, chosen_cols: List[str], output_folder: str, batch_number: int, batch_size: int) -> Tuple[str, str, str]:
133
+ """
134
+ Processes a file by simplifying its content based on chosen columns and saves the result to a specified output folder.
135
+
136
+ Parameters:
137
+ - file_data (pd.DataFrame): Tabular data file with responses.
138
+ - file_name (str): File name with extension.
139
+ - chosen_cols (List[str]): A list of column names to include in the simplified file.
140
+ - output_folder (str): The directory where the simplified file will be saved.
141
+ - batch_number (int): The current batch number for processing.
142
+ - batch_size (int): The number of rows to process in each batch.
143
+
144
+ Returns:
145
+ - Tuple[str, str, str]: A tuple containing the path to the simplified CSV file, the simplified markdown table as a string, and the file path end (used for naming the output file).
146
+ """
147
+
148
+ #print("\nfile_data_in_markdown func:", file_data)
149
+ #print("\nBatch size in markdown func:", str(batch_size))
150
+
151
+ normalised_simple_markdown_table = ""
152
+ simplified_csv_table_path = ""
153
+
154
+ # Simplify table to just responses column and the Response reference number
155
+ simple_file = file_data[[chosen_cols]].reset_index(names="Reference")
156
+ simple_file["Reference"] = simple_file["Reference"].astype(int) + 1
157
+ simple_file = simple_file.rename(columns={chosen_cols: "Response"})
158
+ simple_file["Response"] = simple_file["Response"].str.strip()
159
+ file_len = len(simple_file["Reference"])
160
+
161
+
162
+ # Subset the data for the current batch
163
+ start_row = batch_number * batch_size
164
+ if start_row > file_len + 1:
165
+ print("Start row greater than file row length")
166
+ return simplified_csv_table_path, normalised_simple_markdown_table, file_name
167
+
168
+ if (start_row + batch_size) <= file_len + 1:
169
+ end_row = start_row + batch_size
170
+ else:
171
+ end_row = file_len + 1
172
+
173
+ simple_file = simple_file[start_row:end_row] # Select the current batch
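+ # e.g. batch_number=2 with batch_size=50 selects rows 100 to 149 (0-indexed)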
174
+
175
+ print("simple_file:", simple_file)
176
+
177
+ # Remove problematic characters including ASCII and various quote marks
178
+ # Remove problematic characters including control characters, special characters, and excessive leading/trailing whitespace
179
+ simple_file["Response"] = simple_file["Response"].str.replace(r'[\x00-\x1F\x7F]|[""<>]|\\', '', regex=True) # Remove control and special characters
180
+ simple_file["Response"] = simple_file["Response"].str.strip() # Remove leading and trailing whitespace
181
+ simple_file["Response"] = simple_file["Response"].str.replace(r'\s+', ' ', regex=True) # Replace multiple spaces with a single space
182
+
183
+ # Remove blank and extremely short responses
184
+ simple_file = simple_file.loc[~(simple_file["Response"].isnull()) & ~(simple_file["Response"] == "None") & ~(simple_file["Response"] == " ") & ~(simple_file["Response"] == ""),:]#~(simple_file["Response"].str.len() < 5), :]
185
+
186
+ simplified_csv_table_path = output_folder + 'simple_markdown_table_' + file_name + '_row_' + str(start_row) + '_to_' + str(end_row) + '.csv'
187
+ simple_file.to_csv(simplified_csv_table_path, index=None)
188
+
189
+ simple_markdown_table = simple_file.to_markdown(index=None)
190
+
191
+ normalised_simple_markdown_table = normalise_string(simple_markdown_table)
192
+
193
+ return simplified_csv_table_path, normalised_simple_markdown_table, start_row, end_row, simple_file
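+ # Example call (illustrative file and column names):
+ # csv_path, markdown_table, start_row, end_row, batch_df = data_file_to_markdown_table(
+ # file_data, "responses.csv", "Response text", output_folder, batch_number=0, batch_size=50)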
194
+
195
+ def replace_punctuation_with_underscore(input_string):
196
+ # Create a translation table where each punctuation character maps to '_'
197
+ translation_table = str.maketrans(string.punctuation, '_' * len(string.punctuation))
198
+
199
+ # Translate the input string using the translation table
200
+ return input_string.translate(translation_table)
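+ # Example (illustrative): replace_punctuation_with_underscore("file-name_v1.csv") returns "file_name_v1_csv"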
201
+
202
+ ### LLM FUNCTIONS
203
+
204
+ def construct_gemini_generative_model(in_api_key: str, temperature: float, model_choice: str, system_prompt: str, max_tokens: int) -> Tuple[object, dict]:
205
+ """
206
+ Constructs a GenerativeModel for Gemini API calls.
207
+
208
+ Parameters:
209
+ - in_api_key (str): The API key for authentication.
210
+ - temperature (float): The temperature parameter for the model, controlling the randomness of the output.
211
+ - model_choice (str): The choice of model to use for generation.
212
+ - system_prompt (str): The system prompt to guide the generation.
213
+ - max_tokens (int): The maximum number of tokens to generate.
214
+
215
+ Returns:
216
+ - Tuple[object, dict]: A tuple containing the constructed GenerativeModel and its configuration.
217
+ """
218
+ # Construct a GenerativeModel
219
+ try:
220
+ if in_api_key:
221
+ #print("Getting API key from textbox")
222
+ api_key = in_api_key
223
+ ai.configure(api_key=api_key)
224
+ elif "GOOGLE_API_KEY" in os.environ:
225
+ #print("Searching for API key in environmental variables")
226
+ api_key = os.environ["GOOGLE_API_KEY"]
227
+ ai.configure(api_key=api_key)
228
+ else:
229
+ print("No API key foound")
230
+ raise gr.Error("No API key found.")
231
+ except Exception as e:
232
+ print(e)
233
+
234
+ config = ai.GenerationConfig(temperature=temperature, max_output_tokens=max_tokens)
235
+
236
+ #model = ai.GenerativeModel.from_cached_content(cached_content=cache, generation_config=config)
237
+ model = ai.GenerativeModel(model_name='models/' + model_choice, system_instruction=system_prompt, generation_config=config)
238
+
239
+ # Upload CSV file (replace with your actual file path)
240
+ #file_id = ai.upload_file(upload_file_path)
241
+
242
+
243
+ # if file_type == 'xlsx':
244
+ # print("Running through all xlsx sheets")
245
+ # #anon_xlsx = pd.ExcelFile(upload_file_path)
246
+ # if not in_excel_sheets:
247
+ # out_message.append("No Excel sheets selected. Please select at least one to anonymise.")
248
+ # continue
249
+
250
+ # anon_xlsx = pd.ExcelFile(upload_file_path)
251
+
252
+ # # Create xlsx file:
253
+ # anon_xlsx_export_file_name = output_folder + file_name + "_redacted.xlsx"
254
+
255
+
256
+ ### QUERYING LARGE LANGUAGE MODEL ###
257
+ # Prompt caching the table and system prompt. See here: https://ai.google.dev/gemini-api/docs/caching?lang=python
258
+ # Create a cache with a 5 minute TTL. ONLY FOR CACHES OF AT LEAST 32k TOKENS!
259
+ # cache = ai.caching.CachedContent.create(
260
+ # model='models/' + model_choice,
261
+ # display_name=file_name, # used to identify the cache
262
+ # system_instruction=system_prompt,
263
+ # ttl=datetime.timedelta(minutes=5),
264
+ # )
265
+
266
+ return model, config
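+ # Example call (illustrative values; falls back to the GOOGLE_API_KEY environment variable when no key is passed):
+ # model, config = construct_gemini_generative_model(in_api_key="", temperature=0.1,
+ # model_choice="gemini-1.5-flash-002", system_prompt="You are a helpful assistant.", max_tokens=4096)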
267
+
268
+ def call_aws_claude(prompt: str, system_prompt: str, temperature: float, max_tokens: int, model_choice: str) -> ResponseObject:
269
+ """
270
+ This function sends a request to AWS Claude with the following parameters:
271
+ - prompt: The user's input prompt to be processed by the model.
272
+ - system_prompt: A system-defined prompt that provides context or instructions for the model.
273
+ - temperature: A value that controls the randomness of the model's output, with higher values resulting in more diverse responses.
274
+ - max_tokens: The maximum number of tokens (words or characters) in the model's response.
275
+ - model_choice: The specific model to use for processing the request.
276
+
277
+ The function constructs the request configuration, invokes the model, extracts the response text, and returns a ResponseObject containing the text and metadata.
278
+ """
279
+
280
+ prompt_config = {
281
+ "anthropic_version": "bedrock-2023-05-31",
282
+ "max_tokens": max_tokens,
283
+ "top_p": 0.999,
284
+ "temperature":temperature,
285
+ "system": system_prompt,
286
+ "messages": [
287
+ {
288
+ "role": "user",
289
+ "content": [
290
+ {"type": "text", "text": prompt},
291
+ ],
292
+ }
293
+ ],
294
+ }
295
+
296
+ body = json.dumps(prompt_config)
297
+
298
+ modelId = model_choice
299
+ accept = "application/json"
300
+ contentType = "application/json"
301
+
302
+ request = bedrock_runtime.invoke_model(
303
+ body=body, modelId=modelId, accept=accept, contentType=contentType
304
+ )
305
+
306
+ # Extract text from request
307
+ response_body = json.loads(request.get("body").read())
308
+ text = response_body.get("content")[0].get("text")
309
+
310
+ response = ResponseObject(
311
+ text=text,
312
+ usage_metadata=request['ResponseMetadata']
313
+ )
314
+
315
+ # Now you can access both the text and metadata
316
+ #print("Text:", response.text)
317
+ print("Metadata:", response.usage_metadata)
318
+ #print("Text:", response.text)
319
+
320
+ return response
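+ # Example call (illustrative; assumes bedrock_runtime is a configured boto3 Bedrock runtime client):
+ # response = call_aws_claude("Summarise the response table.", system_prompt, temperature=0.1,
+ # max_tokens=4096, model_choice="anthropic.claude-3-haiku-20240307-v1:0")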
321
+
322
+ # Function to send a request and update history
323
+ def send_request(prompt: str, conversation_history: List[dict], model: object, config: dict, model_choice: str, system_prompt: str, temperature: float, progress=Progress(track_tqdm=True)) -> Tuple[str, List[dict]]:
324
+ """
325
+ This function sends a request to a language model with the given prompt, conversation history, model configuration, model choice, system prompt, and temperature.
326
+ It constructs the full prompt by appending the new user prompt to the conversation history, generates a response from the model, and updates the conversation history with the new prompt and response.
327
+ If the model choice is specific to AWS Claude, it calls the `call_aws_claude` function; otherwise, it uses the `model.generate_content` method.
328
+ The function returns the response text and the updated conversation history.
329
+ """
330
+ # Constructing the full prompt from the conversation history
331
+ full_prompt = "Conversation history:\n"
332
+
333
+ for entry in conversation_history:
334
+ role = entry['role'].capitalize() # Assuming the history is stored with 'role' and 'parts'
335
+ message = ' '.join(entry['parts']) # Combining all parts of the message
336
+ full_prompt += f"{role}: {message}\n"
337
+
338
+ # Adding the new user prompt
339
+ full_prompt += f"\nUser: {prompt}"
340
+
341
+ # Clear any existing progress bars
342
+ tqdm._instances.clear()
343
+
344
+ # Print the full prompt for debugging purposes
345
+ #print("full_prompt:", full_prompt)
346
+
347
+ #progress_bar = tqdm(range(0,number_of_api_retry_attempts), desc="Calling API with " + str(timeout_wait) + " seconds per retry.", unit="attempts")
348
+
349
+ progress_bar = range(0,number_of_api_retry_attempts)
350
+
351
+ # Generate the model's response
352
+ if model_choice in ["gemini-1.5-flash-002", "gemini-1.5-pro-002"]:
353
+
354
+ for i in progress_bar:
355
+ try:
356
+ print("Calling Gemini model")
357
+ #print("full_prompt:", full_prompt)
358
+ #print("generation_config:", config)
359
+
360
+ response = model.generate_content(contents=full_prompt, generation_config=config)
361
+
362
+ #progress_bar.close()
363
+ #tqdm._instances.clear()
364
+
365
+ print("Successful call to Gemini model.")
366
+ break
367
+ except Exception as e:
368
+ # If fails, try again after X seconds in case there is a throttle limit
369
+ print("Call to Gemini model failed:", e, " Waiting for ", str(timeout_wait), "seconds and trying again.")
370
+
371
+ time.sleep(timeout_wait)
372
+
373
+ if i == number_of_api_retry_attempts - 1:
374
+ return ResponseObject(text="", usage_metadata={'RequestId':"FAILED"}), conversation_history
375
+ else:
376
+ for i in progress_bar:
377
+ try:
378
+ print("Calling AWS Claude model, attempt", i)
379
+ response = call_aws_claude(prompt, system_prompt, temperature, max_tokens, model_choice)
380
+
381
+ #progress_bar.close()
382
+ #tqdm._instances.clear()
383
+
384
+ print("Successful call to Claude model.")
385
+ break
386
+ except Exception as e:
387
+ # If fails, try again after X seconds in case there is a throttle limit
388
+ print("Call to Claude model failed:", e, " Waiting for ", str(timeout_wait), "seconds and trying again.")
389
+
390
+ time.sleep(timeout_wait)
391
+ #response = call_aws_claude(prompt, system_prompt, temperature, max_tokens, model_choice)
392
+
393
+ if i == number_of_api_retry_attempts - 1:
394
+ return ResponseObject(text="", usage_metadata={'RequestId':"FAILED"}), conversation_history
395
+
396
+
397
+ # Update the conversation history with the new prompt and response
398
+ conversation_history.append({'role': 'user', 'parts': [prompt]})
399
+ conversation_history.append({'role': 'assistant', 'parts': [response.text]})
400
+
401
+ # Print the updated conversation history
402
+ #print("conversation_history:", conversation_history)
403
+
404
+ return response, conversation_history
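+ # Example (illustrative): first call with an empty conversation history
+ # response, conversation_history = send_request(formatted_prompt, [], model=model, config=config,
+ # model_choice="gemini-1.5-flash-002", system_prompt=system_prompt, temperature=0.1)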
405
+
406
+ def process_requests(prompts: List[str], system_prompt: str, conversation_history: List[dict], whole_conversation: List[str], whole_conversation_metadata: List[str], model: object, config: dict, model_choice: str, temperature: float, batch_no:int = 1, master:bool = False) -> Tuple[List[ResponseObject], List[dict], List[str], List[str]]:
407
+ """
408
+ Processes a list of prompts by sending them to the model, appending the responses to the conversation history, and updating the whole conversation and metadata.
409
+
410
+ Args:
411
+ prompts (List[str]): A list of prompts to be processed.
412
+ system_prompt (str): The system prompt.
413
+ conversation_history (List[dict]): The history of the conversation.
414
+ whole_conversation (List[str]): The complete conversation including prompts and responses.
415
+ whole_conversation_metadata (List[str]): Metadata about the whole conversation.
416
+ model (object): The model to use for processing the prompts.
417
+ config (dict): Configuration for the model.
418
+ model_choice (str): The choice of model to use.
419
+ temperature (float): The temperature parameter for the model.
420
+ batch_no (int): Batch number of the large language model request.
421
+ master (bool): Is this request for the master table.
422
+
423
+ Returns:
424
+ Tuple[List[ResponseObject], List[dict], List[str], List[str]]: A tuple containing the list of responses, the updated conversation history, the updated whole conversation, and the updated whole conversation metadata.
425
+ """
426
+ responses = []
427
+
428
+ # Clear any existing progress bars
429
+ tqdm._instances.clear()
430
+
431
+ for prompt in prompts:
432
+
433
+ #print("prompt to LLM:", prompt)
434
+
435
+ response, conversation_history = send_request(prompt, conversation_history, model=model, config=config, model_choice=model_choice, system_prompt=system_prompt, temperature=temperature)
436
+
437
+ if not isinstance(response, str):
438
+ #print("response.usage_metadata:", response.usage_metadata)
439
+ #print("Response.text:", response.text)
440
+ #print("responses:", responses)
441
+ responses.append(response)
442
+
443
+ # Create conversation txt object
444
+ whole_conversation.append(prompt)
445
+ whole_conversation.append(response.text)
446
+
447
+ # Create conversation metadata
448
+ if master == False:
449
+ whole_conversation_metadata.append(f"Query batch {batch_no} prompt {len(responses)} metadata:")
450
+ else:
451
+ whole_conversation_metadata.append(f"Query summary metadata:")
452
+
453
+ if not isinstance(response, str):
454
+ try:
455
+ print("model_choice:", model_choice)
456
+ if "claude" in model_choice:
457
+ print("Appending selected metadata items to metadata")
458
+ whole_conversation_metadata.append('x-amzn-bedrock-output-token-count:')
459
+ whole_conversation_metadata.append(str(response.usage_metadata['HTTPHeaders']['x-amzn-bedrock-output-token-count']))
460
+ whole_conversation_metadata.append('x-amzn-bedrock-input-token-count:')
461
+ whole_conversation_metadata.append(str(response.usage_metadata['HTTPHeaders']['x-amzn-bedrock-input-token-count']))
462
+ else:
463
+ whole_conversation_metadata.append(str(response.usage_metadata))
464
+ except KeyError as e:
465
+ print(f"Key error: {e} - Check the structure of response.usage_metadata")
466
+ else:
467
+ print("Response is a string object.")
468
+
469
+
470
+ return responses, conversation_history, whole_conversation, whole_conversation_metadata
471
+
472
+ ### INITIAL TOPIC MODEL DEVELOPMENT FUNCTIONS
473
+
474
+ def clean_markdown_table(text: str):
475
+ lines = text.splitlines()
476
+
477
+ # Remove any empty rows or rows with only pipes
478
+ cleaned_lines = [line for line in lines if not re.match(r'^\s*\|?\s*\|?\s*$', line)]
479
+
480
+ # Merge lines that belong to the same row (i.e., don't start with |)
481
+ merged_lines = []
482
+ buffer = ""
483
+
484
+ for line in cleaned_lines:
485
+ if line.lstrip().startswith('|'): # If line starts with |, it's a new row
486
+ if buffer:
487
+ merged_lines.append(buffer) # Append the buffered content
488
+ buffer = line # Start a new buffer with this row
489
+ else:
490
+ # Continuation of the previous row
491
+ buffer += ' ' + line.strip() # Add content to the current buffer
492
+
493
+ # Don't forget to append the last buffer
494
+ if buffer:
495
+ merged_lines.append(buffer)
496
+
497
+ # Ensure consistent number of pipes in each row based on the header
498
+ header_pipes = merged_lines[0].count('|') # Use the first row to count number of pipes
499
+ result = []
500
+
501
+ for line in merged_lines:
502
+ # Strip excessive whitespace around pipes
503
+ line = re.sub(r'\s*\|\s*', '|', line.strip())
504
+
505
+ # Replace numbers between pipes with commas and a space
506
+ line = re.sub(r'(?<=\|)(\s*\d+)(,\s*\d+)+(?=\|)', lambda m: ', '.join(m.group(0).split(',')), line)
507
+
508
+ # Replace groups of numbers separated by spaces with commas and a space
509
+ line = re.sub(r'(?<=\|)(\s*\d+)(\s+\d+)+(?=\|)', lambda m: ', '.join(m.group(0).split()), line)
510
+
511
+ # Fix inconsistent number of pipes by adjusting them to match the header
512
+ pipe_count = line.count('|')
513
+ if pipe_count < header_pipes:
514
+ line += '|' * (header_pipes - pipe_count) # Add missing pipes
515
+ elif pipe_count > header_pipes:
516
+ # If too many pipes, split line and keep the first `header_pipes` columns
517
+ columns = line.split('|')[:header_pipes + 1] # +1 to keep last pipe at the end
518
+ line = '|'.join(columns)
519
+
520
+ result.append(line)
521
+
522
+ # Join lines back into the cleaned markdown text
523
+ cleaned_text = '\n'.join(result)
524
+
525
+ return cleaned_text
526
+
527
+ def clean_column_name(column_name, max_length=20):
528
+ # Convert to string
529
+ column_name = str(column_name)
530
+ # Replace non-alphanumeric characters (except underscores) with underscores
531
+ column_name = re.sub(r'\W+', '_', column_name)
532
+ # Remove leading/trailing underscores
533
+ column_name = column_name.strip('_')
534
+ # Ensure the result is not empty; fall back to "column" if necessary
535
+ column_name = column_name if column_name else "column"
536
+ # Truncate to max_length
537
+ return column_name[:max_length]
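+ # Example (illustrative): clean_column_name("What do you think? (Q1)") returns "What_do_you_think_Q1"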
538
+
539
+ def create_unique_table_df_from_reference_table(reference_df:pd.DataFrame):
540
+ new_unique_topics_df = reference_df[["General Topic", "Subtopic", "Sentiment"]]
541
+
542
+ new_unique_topics_df = new_unique_topics_df.rename(columns={new_unique_topics_df.columns[0]: "General Topic", new_unique_topics_df.columns[1]: "Subtopic", new_unique_topics_df.columns[2]: "Sentiment"})
543
+
544
+ # Work from the new unique topics table (columns were already renamed consistently above)
+ out_unique_topics_df = new_unique_topics_df
548
+
549
+ #print("out_unique_topics_df:", out_unique_topics_df)
550
+
551
+ out_unique_topics_df = out_unique_topics_df.drop_duplicates(["General Topic", "Subtopic", "Sentiment"]).\
552
+ drop(["Response References", "Summary"], axis = 1, errors="ignore")
553
+
554
+ # Get count of rows that refer to particular topics
555
+ reference_counts = reference_df.groupby(["General Topic", "Subtopic", "Sentiment"]).agg({
556
+ 'Response References': 'size', # Count the number of references
557
+ 'Summary': lambda x: '<br>'.join(
558
+ sorted(set(x), key=lambda summary: reference_df.loc[reference_df['Summary'] == summary, 'Start row of group'].min())
559
+ )
560
+ }).reset_index()
561
+
562
+ # Join the counts to existing_unique_topics_df
563
+ out_unique_topics_df = out_unique_topics_df.merge(reference_counts, how='left', on=["General Topic", "Subtopic", "Sentiment"]).sort_values("Response References", ascending=False)
564
+
565
+ return out_unique_topics_df
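+ # Illustrative example: three reference rows for ("Transport", "Bus services", "Negative") collapse
+ # into one unique-topic row with Response References = 3 and the summaries joined with <br>.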
566
+
567
+
568
+ def write_llm_output_and_logs(responses: List[ResponseObject],
569
+ whole_conversation: List[str],
570
+ whole_conversation_metadata: List[str],
571
+ file_name: str,
572
+ latest_batch_completed: int,
573
+ start_row:int,
574
+ end_row:int,
575
+ model_choice_clean: str,
576
+ temperature: float,
577
+ log_files_output_paths: List[str],
578
+ existing_reference_df:pd.DataFrame,
579
+ existing_topics_df:pd.DataFrame,
580
+ batch_size_number:int,
581
+ in_column:str,
582
+ first_run: bool = False) -> Tuple[str, str, str, pd.DataFrame, str, pd.DataFrame, pd.DataFrame, str, bool]:
583
+ """
584
+ Writes the output of the large language model requests and logs to files.
585
+
586
+ Parameters:
587
+ - responses (List[ResponseObject]): A list of ResponseObject instances containing the text and usage metadata of the responses.
588
+ - whole_conversation (List[str]): A list of strings representing the complete conversation including prompts and responses.
589
+ - whole_conversation_metadata (List[str]): A list of strings representing metadata about the whole conversation.
590
+ - file_name (str): The base part of the output file name.
591
+ - latest_batch_completed (int): The index of the current batch.
592
+ - start_row (int): Start row of the current batch.
593
+ - end_row (int): End row of the current batch.
594
+ - model_choice_clean (str): The cleaned model choice string.
595
+ - temperature (float): The temperature parameter used in the model.
596
+ - log_files_output_paths (List[str]): A list of paths to the log files.
597
+ - existing_reference_df (pd.DataFrame): The existing reference dataframe mapping response numbers to topics.
598
+ - existing_topics_df (pd.DataFrame): The existing unique topics dataframe.
+ - batch_size_number (int): The number of rows in the current batch.
+ - in_column (str): The name of the column containing the responses.
599
+ - first_run (bool): A boolean indicating if this is the first run through this function in this process. Defaults to False.
600
+ """
601
+ topic_table_out_path = "topic_table_error.csv"
603
+ reference_table_out_path = "reference_table_error.csv"
604
+ unique_topics_df_out_path = "unique_topic_table_error.csv"
605
+ topic_with_response_df = pd.DataFrame()
606
+ markdown_table = ""
607
+ out_reference_df = pd.DataFrame()
608
+ out_unique_topics_df = pd.DataFrame()
609
+ batch_file_path_details = "error"
610
+
611
+ # If there was an error in parsing, return boolean saying error
612
+ is_error = False
613
+
614
+ # Convert conversation to string and add to log outputs
615
+ whole_conversation_str = '\n'.join(whole_conversation)
616
+ whole_conversation_metadata_str = '\n'.join(whole_conversation_metadata)
617
+
618
+ start_row_reported = start_row + 1
619
+
620
+ # Example usage
621
+ in_column_cleaned = clean_column_name(in_column, max_length=20)
622
+
623
+ # Need to reduce output file names as full length files may be too long
624
+ file_name = clean_column_name(file_name, max_length=30)
625
+
626
+ # Save outputs for each batch. If master file created, label file as master
627
+ batch_file_path_details = f"{file_name}_batch_{latest_batch_completed + 1}_size_{batch_size_number}_col_{in_column_cleaned}"
628
+ row_number_string_start = f"Rows {start_row_reported} to {end_row}: "
629
+
630
+ print("batch_file_path_details:", batch_file_path_details)
631
+
632
+ whole_conversation_path = output_folder + batch_file_path_details + "_full_conversation_" + model_choice_clean + "_temp_" + str(temperature) + ".txt"
633
+ whole_conversation_path_meta = output_folder + batch_file_path_details + "_metadata_" + model_choice_clean + "_temp_" + str(temperature) + ".txt"
634
+
635
+ #with open(whole_conversation_path, "w", encoding='utf-8', errors='replace') as f:
636
+ # f.write(whole_conversation_str)
637
+
638
+ with open(whole_conversation_path_meta, "w", encoding='utf-8', errors='replace') as f:
639
+ f.write(whole_conversation_metadata_str)
640
+
641
+ #log_files_output_paths.append(whole_conversation_path)
642
+ log_files_output_paths.append(whole_conversation_path_meta)
643
+
644
+ # Convert output table to markdown and then to a pandas dataframe to csv
645
+ # try:
646
+ cleaned_response = clean_markdown_table(responses[-1].text)
647
+
648
+ markdown_table = markdown.markdown(cleaned_response, extensions=['tables'])
649
+
650
+ #print("markdown_table:", markdown_table)
651
+
652
+ # Remove <p> tags and make sure it has a valid HTML structure
653
+ html_table = re.sub(r'<p>(.*?)</p>', r'\1', markdown_table)
654
+ html_table = html_table.replace('<p>', '').replace('</p>', '').strip()
655
+
656
+ print("html_table:", html_table)
657
+
658
+ # Now ensure that the HTML structure is correct
659
+ if "<table>" not in html_table:
660
+ html_table = f"""
661
+ <table>
662
+ {html_table}
663
+ </table>
664
+ """
665
+
666
+ # print("Markdown table as HTML:", html_table)
667
+
668
+ html_buffer = StringIO(html_table)
669
+
670
+
671
+ try:
672
+ topic_with_response_df = pd.read_html(html_buffer)[0] # Assuming the first table in the HTML is the one you want
673
+ except Exception as e:
674
+ print("Error when trying to parse table:", e)
675
+ is_error = True
676
+ # Return early with is_error set to True so the caller can handle the failed parse
677
+ return topic_table_out_path, reference_table_out_path, unique_topics_df_out_path, topic_with_response_df, markdown_table, out_reference_df, out_unique_topics_df, batch_file_path_details, is_error
678
+
679
+
680
+ # Rename columns to ensure consistent use of data frames later in code
681
+ topic_with_response_df.columns = ["General Topic", "Subtopic", "Sentiment", "Summary", "Response References"]
682
+
683
+ # Fill in NA rows with values from above (topics seem to be included only on one row):
684
+ topic_with_response_df = topic_with_response_df.ffill()
685
+
686
+ # Strip and lower case topic names to remove issues where model is randomly capitalising topics/sentiment
687
+ topic_with_response_df["General Topic"] = topic_with_response_df["General Topic"].str.strip().str.lower().str.capitalize()
688
+ topic_with_response_df["Subtopic"] = topic_with_response_df["Subtopic"].str.strip().str.lower().str.capitalize()
689
+ topic_with_response_df["Sentiment"] = topic_with_response_df["Sentiment"].str.strip().str.lower().str.capitalize()
690
+
691
+ topic_table_out_path = output_folder + batch_file_path_details + "_topic_table_" + model_choice_clean + "_temp_" + str(temperature) + ".csv"
692
+
693
+ # Table to map references to topics
694
+ reference_data = []
695
+
696
+ # Iterate through each row in the original DataFrame
697
+ for index, row in topic_with_response_df.iterrows():
698
+ references = re.split(r',\s*|\s+', str(row.iloc[4])) if pd.notna(row.iloc[4]) else ""
699
+ topic = row.iloc[0] if pd.notna(row.iloc[0]) else ""
700
+ subtopic = row.iloc[1] if pd.notna(row.iloc[1]) else ""
701
+ sentiment = row.iloc[2] if pd.notna(row.iloc[2]) else ""
702
+ summary = row.iloc[3] if pd.notna(row.iloc[3]) else ""
703
+
704
+ summary = row_number_string_start + summary
705
+
706
+ # Create a new entry for each reference number
707
+ for ref in references:
708
+ reference_data.append({
709
+ 'Response References': ref,
710
+ 'General Topic': topic,
711
+ 'Subtopic': subtopic,
712
+ 'Sentiment': sentiment,
713
+ 'Summary': summary,
714
+ "Start row of group": start_row_reported
715
+ })
716
+
717
+ # Create a new DataFrame from the reference data
718
+ new_reference_df = pd.DataFrame(reference_data)
719
+
720
+ # Append on old reference data
721
+ out_reference_df = pd.concat([new_reference_df, existing_reference_df]).dropna(how='all')
722
+
723
+ # Remove duplicate Response references for the same topic
724
+ out_reference_df.drop_duplicates(["Response References", "General Topic", "Subtopic", "Sentiment"], inplace=True)
725
+
726
+ out_reference_df.sort_values(["Start row of group", "Response References", "General Topic", "Subtopic", "Sentiment"], inplace=True)
727
+
728
+ # Save the new DataFrame to CSV
729
+ reference_table_out_path = output_folder + batch_file_path_details + "_reference_table_" + model_choice_clean + "_temp_" + str(temperature) + ".csv"
730
+
731
+ # Table of all unique topics with descriptions
732
+ #print("topic_with_response_df:", topic_with_response_df)
733
+ new_unique_topics_df = topic_with_response_df[["General Topic", "Subtopic", "Sentiment"]]
734
+
735
+ new_unique_topics_df = new_unique_topics_df.rename(columns={new_unique_topics_df.columns[0]: "General Topic", new_unique_topics_df.columns[1]: "Subtopic", new_unique_topics_df.columns[2]: "Sentiment"})
736
+
737
+ # Join existing and new unique topics
738
+ out_unique_topics_df = pd.concat([new_unique_topics_df, existing_topics_df]).dropna(how='all')
739
+
740
+ out_unique_topics_df = out_unique_topics_df.rename(columns={out_unique_topics_df.columns[0]: "General Topic", out_unique_topics_df.columns[1]: "Subtopic", out_unique_topics_df.columns[2]: "Sentiment"})
741
+
742
+ #print("out_unique_topics_df:", out_unique_topics_df)
743
+
744
+ out_unique_topics_df = out_unique_topics_df.drop_duplicates(["General Topic", "Subtopic", "Sentiment"]).\
745
+ drop(["Response References", "Summary"], axis = 1, errors="ignore")
746
+
747
+ # Get count of rows that refer to particular topics
748
+ reference_counts = out_reference_df.groupby(["General Topic", "Subtopic", "Sentiment"]).agg({
749
+ 'Response References': 'size', # Count the number of references
750
+ 'Summary': lambda x: '<br>'.join(
751
+ sorted(set(x), key=lambda summary: out_reference_df.loc[out_reference_df['Summary'] == summary, 'Start row of group'].min())
752
+ )
753
+ }).reset_index()
754
+
755
+ # Join the counts to existing_unique_topics_df
756
+ out_unique_topics_df = out_unique_topics_df.merge(reference_counts, how='left', on=["General Topic", "Subtopic", "Sentiment"]).sort_values("Response References", ascending=False)
757
+
758
+ unique_topics_df_out_path = output_folder + batch_file_path_details + "_unique_topics_" + model_choice_clean + "_temp_" + str(temperature) + ".csv"
759
+
760
+ return topic_table_out_path, reference_table_out_path, unique_topics_df_out_path, topic_with_response_df, markdown_table, out_reference_df, out_unique_topics_df, batch_file_path_details, is_error
761
+
762
+ def llm_query(file_data:pd.DataFrame,
763
+ existing_topics_table:pd.DataFrame,
764
+ existing_reference_df:pd.DataFrame,
765
+ existing_unique_topics_df:pd.DataFrame,
766
+ display_table:str,
767
+ file_name:str,
768
+ num_batches:int,
769
+ in_api_key:str,
770
+ temperature:float,
771
+ chosen_cols:List[str],
772
+ model_choice:str,
773
+ candidate_topics: List=[],
774
+ latest_batch_completed:int=0,
775
+ out_message:List=[],
776
+ out_file_paths:List = [],
777
+ log_files_output_paths:List = [],
778
+ first_loop_state:bool=False,
779
+ whole_conversation_metadata_str:str="",
780
+ initial_table_prompt:str=initial_table_prompt,
781
+ prompt2:str=prompt2,
782
+ prompt3:str=prompt3,
783
+ system_prompt:str=system_prompt,
784
+ add_existing_topics_system_prompt:str=add_existing_topics_system_prompt,
785
+ add_existing_topics_prompt:str=add_existing_topics_prompt,
786
+ number_of_requests:int=1,
787
+ batch_size:int=50,
788
+ context_textbox:str="",
789
+ time_taken:float = 0,
790
+ max_tokens:int=max_tokens,
791
+ model_name_map:dict=model_name_map,
792
+ max_time_for_loop:int=max_time_for_loop,
793
+ progress=Progress(track_tqdm=True)):
794
+
795
+ '''
796
+ Query an LLM (Gemini or AWS Anthropic-based) with up to three prompts about a table of open text data. Up to 'batch_size' rows will be queried at a time.
797
+
798
+ Parameters:
799
+ - file_data (pd.DataFrame): Pandas dataframe containing the consultation response data.
800
+ - existing_topics_table (pd.DataFrame): Pandas dataframe containing the latest master topic table that has been iterated through batches.
801
+ - existing_reference_df (pd.DataFrame): Pandas dataframe containing the list of Response reference numbers alongside the derived topics and subtopics.
802
+ - existing_unique_topics_df (pd.DataFrame): Pandas dataframe containing the unique list of topics, subtopics, sentiment and summaries until this point.
803
+ - display_table (str): Table for display in markdown format.
804
+ - file_name (str): File name of the data file.
805
+ - num_batches (int): Number of batches required to go through all the response rows.
806
+ - in_api_key (str): The API key for authentication.
807
+ - temperature (float): The temperature parameter for the model.
808
+ - chosen_cols (List[str]): A list of chosen columns to process.
809
+ - model_choice (str): The choice of model to use.
+ - candidate_topics (List): A list of existing candidate topics submitted by the user.
+ - latest_batch_completed (int): The index of the latest batch completed.
812
+ - out_message (list): A list to store output messages.
813
+ - out_file_paths (list): A list to store output file paths.
814
+ - log_files_output_paths (list): A list to store log file output paths.
815
+ - first_loop_state (bool): A flag indicating the first loop state.
816
+ - whole_conversation_metadata_str (str): A string to store whole conversation metadata.
817
+ - initial_table_prompt (str): The first prompt for the model.
818
+ - prompt2 (str): The second prompt for the model.
819
+ - prompt3 (str): The third prompt for the model.
820
+ - system_prompt (str): The system prompt for the model.
821
+ - add_existing_topics_system_prompt (str): The system prompt for the summary part of the model.
822
+ - add_existing_topics_prompt (str): The prompt for the model summary.
823
+ - number_of_requests (int): The number of prompts to send to the model.
824
+ - batch_size (int): The number of data rows to consider in each request.
825
+ - context_textbox (str, optional): A string giving some context to the consultation/task.
826
+ - time_taken (float, optional): The amount of time taken to process the responses up until this point.
827
+ - max_tokens (int): The maximum number of tokens for the model.
828
+ - model_name_map (dict, optional): A dictionary mapping full model name to shortened.
829
+ - max_time_for_loop (int, optional): The maximum number of seconds the function should run before breaking out of the loop so it can be restarted (this avoids timeouts with some AWS services if deployed there).
830
+ - progress (Progress): A progress tracker.
831
+ '''
832
+
833
+ tic = time.perf_counter()
834
+ model = ""
835
+ config = ""
836
+ final_time = 0.0
837
+ whole_conversation_metadata = []
838
+ #all_topic_tables_df = []
839
+ #all_markdown_topic_tables = []
840
+ is_error = False
841
+
842
+ # Reset output files on each run:
843
+ # out_file_paths = []
844
+
845
+ #model_choice_clean = replace_punctuation_with_underscore(model_choice)
846
+ model_choice_clean = model_name_map[model_choice]
847
+ print("model_choice_clean:", model_choice_clean)
848
+
849
+ # If this is the first time around, set variables to 0/blank
850
+ if first_loop_state==True:
851
+ if latest_batch_completed == 999 or latest_batch_completed == 0:
852
+ latest_batch_completed = 0
853
+ out_message = []
854
+ out_file_paths = []
855
+
856
+ print("latest_batch_completed:", str(latest_batch_completed))
857
+
858
+ # If we have already processed the last batch, return the input out_message and file list to the relevant components
859
+ if latest_batch_completed >= num_batches:
860
+ print("Last batch reached, returning batch:", str(latest_batch_completed))
861
+ # Set to a very high number so as not to mess with subsequent file processing by the user
862
+ #latest_batch_completed = 999
863
+
864
+ toc = time.perf_counter()
865
+ final_time = (toc - tic) + time_taken
866
+ out_time = f"Everything finished in {final_time} seconds."
867
+ print(out_time)
868
+
869
+
870
+ print("All summaries completed. Creating outputs.")
871
+
872
+ model_choice_clean = model_name_map[model_choice]
873
+ # Example usage
874
+ in_column_cleaned = clean_column_name(chosen_cols, max_length=20)
875
+
876
+ # Need to reduce output file names as full length files may be too long
877
+ file_name = clean_column_name(file_name, max_length=30)
878
+
879
+ # Save outputs for each batch. If master file created, label file as master
880
+ file_path_details = f"{file_name}_col_{in_column_cleaned}"
881
+
882
+ # Save the new DataFrame to CSV
883
+ #topic_table_out_path = output_folder + batch_file_path_details + "_topic_table_" + model_choice_clean + "_temp_" + str(temperature) + ".csv"
884
+ reference_table_out_path = output_folder + file_path_details + "_final_reference_table_" + model_choice_clean + "_temp_" + str(temperature) + ".csv"
885
+ unique_topics_df_out_path = output_folder + file_path_details + "_final_unique_topics_" + model_choice_clean + "_temp_" + str(temperature) + ".csv"
886
+
887
+ # Write outputs to csv
888
+ ## Topics with references
889
+ #new_topic_df.to_csv(topic_table_out_path, index=None)
890
+ #log_files_output_paths.append(topic_table_out_path)
891
+
892
+ ## Reference table mapping response numbers to topics
893
+ existing_reference_df.to_csv(reference_table_out_path, index=None)
894
+ out_file_paths.append(reference_table_out_path)
895
+
896
+ ## Unique topic list
897
+ existing_unique_topics_df.to_csv(unique_topics_df_out_path, index=None)
898
+ out_file_paths.append(unique_topics_df_out_path)
899
+
900
+ ## Create a dataframe for missing response references:
901
+ # Assuming existing_reference_df and file_data are already defined
902
+
903
+ # Simplify table to just responses column and the Response reference number
904
+ simple_file = file_data[[chosen_cols]].reset_index(names="Reference")
905
+ simple_file["Reference"] = simple_file["Reference"].astype(int) + 1
906
+ simple_file = simple_file.rename(columns={chosen_cols: "Response"})
907
+ simple_file["Response"] = simple_file["Response"].str.strip()
908
+
909
+ # Step 1: Identify missing references
910
+ #print("simple_file:", simple_file)
911
+
912
+ missing_references = simple_file[~simple_file['Reference'].astype(str).isin(existing_reference_df['Response References'].astype(str).unique())]
913
+
914
+ # Step 2: Create a new DataFrame with the same columns as existing_reference_df
915
+ missing_df = pd.DataFrame(columns=existing_reference_df.columns)
916
+
917
+ # Step 3: Populate the new DataFrame
918
+ missing_df['Response References'] = missing_references['Reference']
919
+ missing_df = missing_df.fillna(np.nan) # Fill other columns with NA
920
+
921
+ # Display the new DataFrame
922
+ #print("missing_df:", missing_df)
923
+
924
+ missing_df_out_path = output_folder + file_path_details + "_missing_references_" + model_choice_clean + "_temp_" + str(temperature) + ".csv"
925
+ missing_df.to_csv(missing_df_out_path, index=None)
926
+ log_files_output_paths.append(missing_df_out_path)
927
+
928
+ out_file_paths = list(set(out_file_paths))
929
+ log_files_output_paths = list(set(log_files_output_paths))
930
+
931
+ print("out_file_paths:", out_file_paths)
932
+
933
+ #final_out_message = '\n'.join(out_message)
934
+ return display_table, existing_topics_table, existing_unique_topics_df, existing_reference_df, out_file_paths, out_file_paths, latest_batch_completed, log_files_output_paths, log_files_output_paths, whole_conversation_metadata_str, final_time, out_file_paths
935
+
936
+
937
+
938
+ if num_batches > 0:
939
+ progress_measure = round(latest_batch_completed / num_batches, 1)
940
+ progress(progress_measure, desc="Querying large language model")
941
+ else:
942
+ progress(0.1, desc="Querying large language model")
943
+
944
+ # Load file
945
+ # If out message or out_file_paths are blank, change to a list so it can be appended to
946
+ if isinstance(out_message, str):
947
+ out_message = [out_message]
948
+
949
+ if not out_file_paths:
950
+ out_file_paths = []
951
+
952
+ # Check if files and text exist
953
+ if file_data.empty:
954
+ out_message = "Please enter a data file to summarise."
955
+ print(out_message)
956
+ return out_message, existing_topics_table, existing_unique_topics_df, existing_reference_df, out_file_paths, out_file_paths, latest_batch_completed, log_files_output_paths, log_files_output_paths, whole_conversation_metadata_str, final_time, out_file_paths#, out_message
957
+
958
+ if model_choice == "anthropic.claude-3-sonnet-20240229-v1:0" and file_data.shape[1] > 300:
959
+ out_message = "Your data has more than 300 rows, using the Sonnet model will be too expensive. Please choose the Haiku model instead."
960
+ print(out_message)
961
+ return out_message, existing_topics_table, existing_unique_topics_df, existing_reference_df, out_file_paths, out_file_paths, latest_batch_completed, log_files_output_paths, log_files_output_paths, whole_conversation_metadata_str, final_time, out_file_paths#, out_message
962
+
963
+ topics_loop_description = "Extracting topics from response batches (each batch of " + str(batch_size) + " responses). " + str(latest_batch_completed) + " batches completed."
964
+ topics_loop = tqdm(range(latest_batch_completed, num_batches), desc = topics_loop_description, unit="batches remaining")
965
+
966
+ for i in topics_loop:
967
+
968
+ #for latest_batch_completed in range(num_batches):
969
+ reported_batch_no = latest_batch_completed + 1
970
+ print("Running query batch", str(reported_batch_no))
971
+
972
+ # Call the function to prepare the input table
973
+ simplified_csv_table_path, normalised_simple_markdown_table, start_row, end_row, simple_table_df = data_file_to_markdown_table(file_data, file_name, chosen_cols, output_folder, latest_batch_completed, batch_size)
974
+ log_files_output_paths.append(simplified_csv_table_path)
975
+
976
+
977
+ # Conversation history
978
+ conversation_history = []
979
+
980
+ print("normalised_simple_markdown_table:", normalised_simple_markdown_table)
981
+
982
+ # If the latest batch of responses contains at least one instance of text
983
+ if not simple_table_df.empty:
984
+
985
+
986
+ print("latest_batch_completed:", latest_batch_completed)
987
+
988
+ # From the second batch onwards, the query refers back to the existing master topic table when assigning topics to the new batch. This branch also runs on the first batch if the user has supplied an existing list of candidate topics
989
+ if latest_batch_completed >= 1 or candidate_topics:
990
+
991
+ #print("normalised_simple_markdown_table:", normalised_simple_markdown_table)
992
+
993
+ # Prepare Gemini models before query
994
+ if model_choice in ["gemini-1.5-flash-002", "gemini-1.5-pro-002"]:
995
+ print("Using Gemini model:", model_choice)
996
+ model, config = construct_gemini_generative_model(in_api_key=in_api_key, temperature=temperature, model_choice=model_choice, system_prompt=add_existing_topics_system_prompt, max_tokens=max_tokens)
997
+ else:
998
+ print("Using AWS Bedrock model:", model_choice)
999
+
1000
+ if candidate_topics:
1001
+ # 'Zero shot topics' are those supplied by the user
1002
+ zero_shot_topics = read_file(candidate_topics.name)
1003
+ zero_shot_topics_series = zero_shot_topics.iloc[:, 0].str.strip().str.lower()
1004
+ # Max 120 topics allowed
1005
+ if len(zero_shot_topics_series) > 120:
1006
+ print("Maximum 120 topics allowed to fit within large language model context limits.")
1007
+ zero_shot_topics_series = zero_shot_topics_series.iloc[:120]
1008
+
1009
+ zero_shot_topics_list = list(zero_shot_topics_series)
1010
+
1011
+ print("Zero shot topics are:", zero_shot_topics_list)
1012
+
1013
+ #all_topic_tables_df_merged = existing_unique_topics_df
1014
+ existing_unique_topics_df["Response References"] = ""
1015
+
1016
+
1017
+ # Create the most up to date list of topics and subtopics.
1018
+ # If there are candidate topics, but the existing_unique_topics_df hasn't yet been constructed, then create.
1019
+ if candidate_topics and existing_unique_topics_df.empty:
1020
+ existing_unique_topics_df = pd.DataFrame(data={'General Topic':'', 'Subtopic':zero_shot_topics_list, 'Sentiment':''})
1021
+
1022
+ # This part concatenates all zero shot and new topics together, so that for the next prompt the LLM will have the full list available
1023
+ elif candidate_topics and not existing_unique_topics_df.empty:
1024
+ zero_shot_topics_df = pd.DataFrame(data={'General Topic':'', 'Subtopic':zero_shot_topics_list, 'Sentiment':''})
1025
+ existing_unique_topics_df = pd.concat([existing_unique_topics_df, zero_shot_topics_df]).drop_duplicates("Subtopic")
1026
+ zero_shot_topics_list_str = zero_shot_topics_list
1027
+
1028
+ #existing_unique_topics_df.to_csv(output_folder + "Existing topics with zero shot dropped.csv", index = None)
1029
+
1030
+
1031
+ unique_topics_markdown = existing_unique_topics_df[["General Topic", "Subtopic", "Sentiment"]].drop_duplicates(["General Topic", "Subtopic", "Sentiment"]).to_markdown(index=False)
1032
+
1033
+ #existing_unique_topics_df.to_csv(output_folder + f"{file_name}_master_all_topic_tables_df_merged_" + model_choice_clean + "_temp_" + str(temperature) + "_batch_" + str(latest_batch_completed) + ".csv", index=None)
1034
+
1035
+ # Format the summary prompt with the response table and topics
1036
+ formatted_summary_prompt = add_existing_topics_prompt.format(response_table=normalised_simple_markdown_table, topics=unique_topics_markdown, consultation_context=context_textbox, column_name=chosen_cols)
1037
+
1038
+ # Define the output file path for the formatted prompt
1039
+ formatted_prompt_output_path = output_folder + file_name + "_full_prompt_" + model_choice_clean + "_temp_" + str(temperature) + ".txt"
1040
+
1041
+ # Write the formatted prompt to the specified file
1042
+ try:
1043
+ with open(formatted_prompt_output_path, "w", encoding='utf-8', errors='replace') as f:
1044
+ f.write(formatted_summary_prompt)
1045
+ except Exception as e:
1046
+ print(f"Error writing prompt to file {formatted_prompt_output_path}: {e}")
1047
+
1048
+ summary_prompt_list = [formatted_summary_prompt]
1049
+
1050
+ # print("master_summary_prompt_list:", summary_prompt_list[0])
1051
+
1052
+ summary_conversation_history = []
1053
+ summary_whole_conversation = []
1054
+
1055
+ # Process requests to large language model
1056
+ master_summary_response, summary_conversation_history, whole_summary_conversation, whole_conversation_metadata = process_requests(summary_prompt_list, add_existing_topics_system_prompt, summary_conversation_history, summary_whole_conversation, whole_conversation_metadata, model, config, model_choice, temperature, reported_batch_no, master = True)
1057
+
1058
+ # print("master_summary_response:", master_summary_response[-1].text)
1059
+ # print("Whole conversation metadata:", whole_conversation_metadata)
1060
+
1061
+ topic_table_out_path, reference_table_out_path, unique_topics_df_out_path, new_topic_df, new_markdown_table, new_reference_df, new_unique_topics_df, master_batch_out_file_part, is_error = write_llm_output_and_logs(master_summary_response, whole_summary_conversation, whole_conversation_metadata, file_name, latest_batch_completed, start_row, end_row, model_choice_clean, temperature, log_files_output_paths, existing_reference_df, existing_unique_topics_df, batch_size, chosen_cols, first_run=False)
1062
+
1063
+ # If error in table parsing, leave function
1064
+ if is_error == True:
1065
+ final_message_out = "Could not complete summary, error in LLM output."
1066
+ return display_table, new_topic_df, new_unique_topics_df, new_reference_df, out_file_paths, out_file_paths, latest_batch_completed, log_files_output_paths, log_files_output_paths, whole_conversation_metadata_str, final_time, out_file_paths #, final_message_out
1067
+
1068
+ # Write outputs to csv
1069
+ ## Topics with references
1070
+ new_topic_df.to_csv(topic_table_out_path, index=None)
1071
+ log_files_output_paths.append(topic_table_out_path)
1072
+
1073
+ ## Reference table mapping response numbers to topics
1074
+ new_reference_df.to_csv(reference_table_out_path, index=None)
1075
+ out_file_paths.append(reference_table_out_path)
1076
+
1077
+ ## Unique topic list
1078
+ new_unique_topics_df = pd.concat([new_unique_topics_df, existing_unique_topics_df]).drop_duplicates('Subtopic')
1079
+
1080
+ new_unique_topics_df.to_csv(unique_topics_df_out_path, index=None)
1081
+ out_file_paths.append(unique_topics_df_out_path)
1082
+
1083
+ #all_topic_tables_df.append(new_topic_df)
1084
+ #all_markdown_topic_tables.append(new_markdown_table)
1085
+
1086
+ #display_table = master_summary_response[-1].text
1087
+
1088
+ # Show unique topics alongside document counts as output
1089
+ display_table = new_unique_topics_df.to_markdown(index=False)
1090
+
1091
+ #whole_conversation_metadata.append(whole_conversation_metadata_str)
1092
+ whole_conversation_metadata_str = ' '.join(whole_conversation_metadata)
1093
+
1094
+
1095
+ # Write final output to text file also
1096
+ #try:
1097
+ # new_final_table_output_path = output_folder + master_batch_out_file_part + "_full_final_response_" + #model_choice_clean + "_temp_" + str(temperature) + ".txt"
1098
+
1099
+ # with open(new_final_table_output_path, "w", encoding='utf-8', errors='replace') as f:
1100
+ # f.write(display_table)
1101
+
1102
+ # log_files_output_paths.append(new_final_table_output_path)
1103
+
1104
+ #except Exception as e:
1105
+ # print(e)
1106
+
1107
+ latest_batch_number_string = "batch_" + str(latest_batch_completed - 1)
1108
+
1109
+ out_file_paths = [col for col in out_file_paths if latest_batch_number_string in col]
1110
+ log_files_output_paths = [col for col in log_files_output_paths if latest_batch_number_string in col]
1111
+
1112
+ print("out_file_paths at end of loop:", out_file_paths)
1113
+
1114
+ # If this is the first batch, run this
1115
+ else:
1116
+ #system_prompt = system_prompt + normalised_simple_markdown_table
1117
+
1118
+ # Prepare Gemini models before query
1119
+ if model_choice in ["gemini-1.5-flash-002", "gemini-1.5-pro-002"]:
1120
+ print("Using Gemini model:", model_choice)
1121
+ model, config = construct_gemini_generative_model(in_api_key=in_api_key, temperature=temperature, model_choice=model_choice, system_prompt=system_prompt, max_tokens=max_tokens)
1122
+ else:
1123
+ print("Using AWS Bedrock model:", model_choice)
1124
+
1125
+ formatted_initial_table_prompt = initial_table_prompt.format(response_table=normalised_simple_markdown_table, consultation_context=context_textbox, column_name=chosen_cols)
1126
+
1127
+ if prompt2: formatted_prompt2 = prompt2.format(response_table=normalised_simple_markdown_table)
1128
+ else: formatted_prompt2 = prompt2
1129
+
1130
+ if prompt3: formatted_prompt3 = prompt3.format(response_table=normalised_simple_markdown_table)
1131
+ else: formatted_prompt3 = prompt3
1132
+
1133
+ batch_prompts = [formatted_initial_table_prompt, formatted_prompt2, formatted_prompt3][:number_of_requests] # Adjust this list to send fewer requests
1134
+
1135
+ whole_conversation = [system_prompt]
1136
+
1137
+ # Process requests to large language model
1138
+ responses, conversation_history, whole_conversation, whole_conversation_metadata = process_requests(batch_prompts, system_prompt, conversation_history, whole_conversation, whole_conversation_metadata, model, config, model_choice, temperature, reported_batch_no)
1139
+
1140
+ # print("Whole conversation metadata before:", whole_conversation_metadata)
1141
+
1142
+ # print("responses:", responses[-1].text)
1143
+ # print("Whole conversation metadata:", whole_conversation_metadata)
1144
+
1145
+ topic_table_out_path, reference_table_out_path, unique_topics_df_out_path, topic_table_df, markdown_table, reference_df, new_unique_topics_df, batch_file_path_details, is_error = write_llm_output_and_logs(responses, whole_conversation, whole_conversation_metadata, file_name, latest_batch_completed, start_row, end_row, model_choice_clean, temperature, log_files_output_paths, existing_reference_df, existing_unique_topics_df, batch_size, chosen_cols, first_run=True)
1146
+
1147
+ # If error in table parsing, leave function
1148
+ if is_error == True:
1149
+ return display_table, topic_table_df, new_unique_topics_df, reference_df, out_file_paths, out_file_paths, latest_batch_completed, log_files_output_paths, log_files_output_paths, whole_conversation_metadata_str, final_time, out_file_paths #, final_message_out
1150
+
1151
+
1152
+ #all_topic_tables_df.append(topic_table_df)
1153
+
1154
+ topic_table_df.to_csv(topic_table_out_path, index=None)
1155
+ out_file_paths.append(topic_table_out_path)
1156
+
1157
+ reference_df.to_csv(reference_table_out_path, index=None)
1158
+ out_file_paths.append(reference_table_out_path)
1159
+
1160
+ ## Unique topic list
1161
+
1162
+ new_unique_topics_df = pd.concat([new_unique_topics_df, existing_unique_topics_df]).drop_duplicates('Subtopic')
1163
+
1164
+ new_unique_topics_df.to_csv(unique_topics_df_out_path, index=None)
1165
+ out_file_paths.append(unique_topics_df_out_path)
1166
+
1167
+ #all_markdown_topic_tables.append(markdown_table)
1168
+
1169
+ whole_conversation_metadata.append(whole_conversation_metadata_str)
1170
+ whole_conversation_metadata_str = '. '.join(whole_conversation_metadata)
1171
+
1172
+ # Write final output to text file also
1173
+ try:
1174
+ final_table_output_path = output_folder + batch_file_path_details + "_full_final_response_" + model_choice_clean + "_temp_" + str(temperature) + ".txt"
1175
+
1176
+ with open(final_table_output_path, "w", encoding='utf-8', errors='replace') as f:
1177
+ f.write(responses[-1].text)
1178
+
1179
+ log_files_output_paths.append(final_table_output_path)
1180
+
1181
+ except Exception as e:
1182
+ print(e)
1183
+
1184
+ display_table = responses[-1].text
1185
+ new_topic_df = topic_table_df
1186
+ new_reference_df = reference_df
1187
+
1188
+ else:
1189
+ print("Current batch of responses contains no text, moving onto next. Batch number:", latest_batch_completed, ". Start row:", start_row, ". End row:", end_row)
1190
+
1191
+ # Increase latest file completed count unless we are at the last file
1192
+ if latest_batch_completed != num_batches:
1193
+ print("Completed batch number:", str(latest_batch_completed))
1194
+ latest_batch_completed += 1
1195
+
1196
+ toc = time.perf_counter()
1197
+ final_time = toc - tic
1198
+
1199
+ if final_time > max_time_for_loop:
1200
+ print("Max time reached, breaking loop.")
1201
+ topics_loop.close()
1202
+ tqdm._instances.clear()
1203
+ break
1204
+
1205
+ # Overwrite 'existing' elements to add new tables
1206
+ existing_reference_df = new_reference_df.dropna(how='all')
1207
+ existing_unique_topics_df = new_unique_topics_df.dropna(how='all')
1208
+ existing_topics_table = new_topic_df.dropna(how='all')
1209
+
1210
+ out_time = f"in {final_time:0.1f} seconds."
1211
+ print(out_time)
1212
+
1213
+ out_message.append('All queries successfully completed in')
1214
+
1215
+ final_message_out = '\n'.join(out_message)
1216
+ final_message_out = final_message_out + " " + out_time
1217
+
1218
+ final_message_out = final_message_out + "\n\nGo to to the LLM settings tab to see redaction logs. Please give feedback on the results below to help improve this app."
1219
+
1220
+ return display_table, existing_topics_table, existing_unique_topics_df, existing_reference_df, out_file_paths, out_file_paths, latest_batch_completed, log_files_output_paths, log_files_output_paths, whole_conversation_metadata_str, final_time, out_file_paths #, final_message_out
1221
+
1222
+ # SUMMARISATION FUNCTIONS
1223
+
1224
+ def deduplicate_categories(category_series: pd.Series, join_series:pd.Series, threshold: float = 80) -> pd.DataFrame:
1225
+ """
1226
+ Deduplicates similar category names in a pandas Series based on a fuzzy matching threshold.
1227
+
1228
+ Parameters:
1229
+ category_series (pd.Series): Series containing category names to deduplicate.
1230
+ join_series (pd.Series): Additional series used for joining back to original results
1231
+ threshold (float): Similarity threshold for considering two strings as duplicates.
1232
+
1233
+ Returns:
1234
+ pd.DataFrame: DataFrame with columns ['old_category', 'deduplicated_category'].
1235
+ """
1236
+ # Initialize the result dictionary
1237
+ deduplication_map = {}
1238
+
1239
+ # Iterate through each category in the series
1240
+ for category in category_series.unique():
1241
+ # Skip if the category is already processed
1242
+ if category in deduplication_map:
1243
+ continue
1244
+
1245
+ print("old_category:", category)
1246
+
1247
+ # Find close matches to the current category, excluding the current category itself
1248
+ matches = process.extract(category, [cat for cat in category_series.unique() if cat != category], scorer=fuzz.token_set_ratio, score_cutoff=threshold)
1249
+
1250
+ # Select the match with the highest score
1251
+ if matches: # Check if there are any matches
1252
+ best_match = max(matches, key=lambda x: x[1]) # Get the match with the highest score
1253
+ match, score, _ = best_match # Unpack the best match
1254
+ print("Best match:", match, "score:", score)
1255
+ deduplication_map[match] = category # Map the best match to the current category
1256
+
1257
+ # Create the result DataFrame
1258
+ result_df = pd.DataFrame({
1259
+ 'old_category': category_series + " | " + join_series,
1260
+ 'deduplicated_category': category_series.map(deduplication_map)
1261
+ })
1262
+
1263
+ return result_df
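+ # Example (illustrative; exact fuzzy scores depend on the matcher):
+ # deduplicate_categories(pd.Series(["bus services", "bus service"]), pd.Series(["Negative", "Negative"]))
+ # should map the two near-duplicate names onto a single deduplicated category.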
1264
+
1265
+
1266
+ def sample_reference_table_summaries(reference_df:pd.DataFrame,
1267
+ unique_topics_df:pd.DataFrame,
1268
+ random_seed:int,
1269
+ deduplicate_topics:str="Yes",
1270
+ no_of_sampled_summaries:int=150):
1271
+
1272
+ all_summaries = pd.DataFrame()
1273
+
1274
+ # Remove duplicate topics
1275
+ if deduplicate_topics == "Yes":
1276
+
1277
+ # Run through this three times to try to get all duplicate topics
1278
+ for i in range(0, 3):
1279
+ print("Run:", i)
1280
+ # First, combine duplicate topics in reference_df
1281
+ reference_df["old_category"] = reference_df["Subtopic"] + " | " + reference_df["Sentiment"]
1282
+
1283
+ reference_df_unique = reference_df.drop_duplicates("old_category")
1284
+
1285
+ print("reference_df_unique_old_categories:", reference_df_unique["old_category"])
1286
+
1287
+ reference_df_unique[["old_category"]].to_csv(output_folder + "reference_df_unique_old_categories_" + str(i) + ".csv", index=None)
1288
+
1289
+ # Deduplicate categories within each sentiment group
1290
+ deduplicated_topic_map_df = reference_df_unique.groupby("Sentiment").apply(
1291
+ lambda group: deduplicate_categories(group["Subtopic"], group["Sentiment"], threshold=80)
1292
+ ).reset_index(drop=True) # Reset index after groupby
1293
+
1294
+ if deduplicated_topic_map_df['deduplicated_category'].isnull().all():
1295
+ # Check if 'deduplicated_category' contains any values
1296
+
1297
+ print("No deduplicated categories found, skipping the following code.")
1298
+
1299
+ else:
1300
+ # Join deduplicated columns back to original df
1301
+ # Remove rows where 'deduplicated_category' is blank or NaN
1302
+ deduplicated_topic_map_df = deduplicated_topic_map_df.loc[(deduplicated_topic_map_df['deduplicated_category'].str.strip() != '') & ~(deduplicated_topic_map_df['deduplicated_category'].isnull()), :]
1303
+
1304
+ deduplicated_topic_map_df.to_csv(output_folder + "deduplicated_topic_map_df_" + str(i) + ".csv", index=None)
1305
+
1306
+ reference_df = reference_df.merge(deduplicated_topic_map_df, on="old_category", how="left")
1307
+
1308
+ reference_df.rename(columns={"Subtopic": "Subtopic_old", "Sentiment": "Sentiment_old"}, inplace=True)
1309
+ # deduplicated_category holds the deduplicated Subtopic name. Deduplication was run
+ # within each Sentiment group, so the Sentiment itself is unchanged.
1310
+ reference_df["Subtopic"] = reference_df["deduplicated_category"].combine_first(reference_df["Subtopic_old"])
1311
+ reference_df["Sentiment"] = reference_df["Sentiment_old"]
1316
+
1317
+ reference_df.to_csv(output_folder + "reference_df_after_dedup.csv", index=None)
1318
+
1319
+ reference_df.drop(['old_category', 'deduplicated_category', "Subtopic_old", "Sentiment_old"], axis=1, inplace=True, errors="ignore")
1320
+
1321
+ reference_df = reference_df[["Response References", "General Topic", "Subtopic", "Sentiment", "Summary", "Start row of group"]]
1322
+
1323
+ reference_df["General Topic"] = reference_df["General Topic"].str.lower().str.capitalize()
1324
+ reference_df["Subtopic"] = reference_df["Subtopic"].str.lower().str.capitalize()
1325
+ reference_df["Sentiment"] = reference_df["Sentiment"].str.lower().str.capitalize()
1326
+
1327
+
1328
+
1329
+ # Remake unique_topics_df based on new reference_df
1330
+ unique_topics_df = create_unique_table_df_from_reference_table(reference_df)
1331
+
1332
+
1333
+ reference_df_grouped = reference_df.groupby(["General Topic", "Subtopic", "Sentiment"])
1334
+
1335
+ for group_keys, reference_df_group in reference_df_grouped:
1336
+ #print(f"Group: {group_keys}")
1337
+ #print(f"Data: {reference_df_group}")
1338
+
1339
+ if len(reference_df_group["General Topic"]) > 1:
1340
+
1341
+ filtered_reference_df = reference_df_group.reset_index()
1342
+
1343
+ filtered_reference_df_unique = filtered_reference_df.drop_duplicates(["General Topic", "Subtopic", "Sentiment", "Summary"])
1344
+
1345
+ # Sample n of the unique topic summaries to limit the length of text going into the summarisation tool
1346
+ filtered_reference_df_unique_sampled = filtered_reference_df_unique.sample(min(no_of_sampled_summaries, len(filtered_reference_df_unique)), random_state=random_seed)
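+ # e.g. with 200 unique summaries and no_of_sampled_summaries=150, a random 150 are kept; smaller groups are kept whole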
1347
+
1348
+ #topic_summary_table_markdown = filtered_reference_df_unique_sampled.to_markdown(index=False)
1349
+
1350
+ #print(filtered_reference_df_unique_sampled)
1351
+
1352
+ all_summaries = pd.concat([all_summaries, filtered_reference_df_unique_sampled])
1353
+
1354
+ all_summaries.to_csv(output_folder + "all_summaries.csv", index=None)
1355
+
1356
+ summarised_references = all_summaries.groupby(["General Topic", "Subtopic", "Sentiment"]).agg({
1357
+ 'Response References': 'size', # Count the number of references
1358
+ 'Summary': lambda x: '\n'.join([s.split(': ', 1)[1] for s in x if ': ' in s]) # Join substrings after ': '
1359
+ }).reset_index()
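+ # Hypothetical example: a Summary value of "Rows 1-20: People want more parking" loses its
+ # "...: " prefix and is newline-joined with the other summaries in its topic/sentiment group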
1360
+
1361
+ summarised_references = summarised_references.loc[(summarised_references["Sentiment"] != "Not Mentioned") & (summarised_references["Response References"] > 1)]
1362
+
1363
+ summarised_references.to_csv(output_folder + "summarised_references.csv", index=None)
1364
+
1365
+ summarised_references_markdown = summarised_references.to_markdown(index=False)
1366
+
1367
+ return summarised_references, summarised_references_markdown, reference_df, unique_topics_df
1368
+
1369
+ def summarise_output_topics_query(model_choice:str, in_api_key:str, temperature:float, formatted_summary_prompt:str, summarise_topic_descriptions_system_prompt:str):
1370
+ conversation_history = []
1371
+ whole_conversation_metadata = []
1372
+
1373
+ # Prepare Gemini models before query
1374
+ if model_choice in ["gemini-1.5-flash-002", "gemini-1.5-pro-002"]:
1375
+ print("Using Gemini model:", model_choice)
1376
+ model, config = construct_gemini_generative_model(in_api_key=in_api_key, temperature=temperature, model_choice=model_choice, system_prompt=summarise_topic_descriptions_system_prompt, max_tokens=max_tokens)
1377
+ else:
1378
+ print("Using AWS Bedrock model:", model_choice)
1379
+ model = model_choice
1380
+ config = {}
1381
+
1382
+ whole_conversation = [summarise_topic_descriptions_system_prompt]
1383
+
1384
+ # Process requests to large language model
1385
+ responses, conversation_history, whole_conversation, whole_conversation_metadata = process_requests(formatted_summary_prompt, summarise_topic_descriptions_system_prompt, conversation_history, whole_conversation, whole_conversation_metadata, model, config, model_choice, temperature)
1386
+
1387
+ print("Finished summary query")
1388
+
1389
+ # Extract text from the `responses` list
1390
+ response_texts = [resp.text for resp in responses]
1391
+ latest_response_text = response_texts[-1]
1392
+
1393
+ #print("latest_response_text:", latest_response_text)
1394
+ #print("Whole conversation metadata:", whole_conversation_metadata)
1395
+
1396
+ return latest_response_text, conversation_history, whole_conversation_metadata
1397
+
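+ # Illustrative call (argument values are hypothetical):
+ # text, history, metadata = summarise_output_topics_query(
+ # "gemini-1.5-flash-002", in_api_key, 0.1,
+ # [summarise_topic_descriptions_prompt.format(summaries="...")],
+ # summarise_topic_descriptions_system_prompt)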
1398
+ def summarise_output_topics(summarised_references:pd.DataFrame,
1399
+ unique_table_df:pd.DataFrame,
1400
+ reference_table_df:pd.DataFrame,
1401
+ model_choice:str,
1402
+ in_api_key:str,
1403
+ topic_summary_table_markdown:str,
1404
+ temperature:float,
1405
+ table_file_name:str,
1406
+ summarised_outputs:list = [],
1407
+ latest_summary_completed:int = 0,
1408
+ out_metadata_str:str = "",
1409
+ output_files:list = [],
1410
+ summarise_topic_descriptions_prompt:str=summarise_topic_descriptions_prompt, summarise_topic_descriptions_system_prompt:str=summarise_topic_descriptions_system_prompt,
1411
+ progress=gr.Progress(track_tqdm=True)):
1412
+ '''
1413
+ Create better summaries of the raw batch-level summaries created in the first run of the model.
1414
+ '''
1415
+ out_metadata = []
1416
+
1417
+ print("In summarise_output_topics function.")
1418
+
1419
+ all_summaries = summarised_references["Summary"].tolist()
1420
+
1421
+ length_all_summaries = len(all_summaries)
1422
+
1423
+ print("latest_summary_completed:", latest_summary_completed)
1424
+ print("length_all_summaries:", length_all_summaries)
1425
+
1426
+ if latest_summary_completed >= length_all_summaries:
1427
+ print("All summaries completed. Creating outputs.")
1428
+
1429
+ model_choice_clean = model_name_map[model_choice]
1430
+ file_name = re.search(r'(.*?)(?:_batch_|_col_)', table_file_name).group(1) if re.search(r'(.*?)(?:_batch_|_col_)', table_file_name) else table_file_name
1431
+ latest_batch_completed = int(re.search(r'batch_(\d+)_', table_file_name).group(1)) if 'batch_' in table_file_name else ""
1432
+ batch_size_number = int(re.search(r'size_(\d+)_', table_file_name).group(1)) if 'size_' in table_file_name else ""
1433
+ in_column_cleaned = re.search(r'col_(.*?)_reference', table_file_name).group(1) if 'col_' in table_file_name else ""
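+ # The regexes above assume reference table file names shaped like
+ # "<name>_batch_<n>_size_<m>_col_<column>_reference..." (mirroring batch_file_path_details below)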
1434
+
1435
+ # Save outputs for each batch. If master file created, label file as master
1436
+ if latest_batch_completed:
1437
+ batch_file_path_details = f"{file_name}_batch_{latest_batch_completed}_size_{batch_size_number}_col_{in_column_cleaned}"
1438
+ else:
1439
+ batch_file_path_details = f"{file_name}_col_{in_column_cleaned}"
1440
+
1441
+ summarised_references["Revised summary"] = summarised_outputs
1442
+
1443
+ join_cols = ["General Topic", "Subtopic", "Sentiment"]
1444
+ join_plus_summary_cols = ["General Topic", "Subtopic", "Sentiment", "Revised summary"]
1445
+
1446
+ summarised_references_j = summarised_references[join_plus_summary_cols].drop_duplicates(join_plus_summary_cols)
1447
+
1448
+ unique_table_df_revised = unique_table_df.merge(summarised_references_j, on = join_cols, how = "left")
1449
+ # If no new summary is available, keep the original
1450
+ unique_table_df_revised["Revised summary"] = unique_table_df_revised["Revised summary"].combine_first(unique_table_df_revised["Summary"])
1451
+
1452
+ unique_table_df_revised = unique_table_df_revised[["General Topic", "Subtopic", "Sentiment", "Response References", "Revised summary"]]
1453
+
1454
+ reference_table_df_revised = reference_table_df.merge(summarised_references_j, on = join_cols, how = "left")
1455
+ # If no new summary is available, keep the original
1456
+ reference_table_df_revised["Revised summary"] = reference_table_df_revised["Revised summary"].combine_first(reference_table_df_revised["Summary"])
1457
+ reference_table_df_revised = reference_table_df_revised.drop("Summary", axis=1)
1458
+
1459
+ # Remove topics that are tagged as 'Not Mentioned'
1460
+ unique_table_df_revised = unique_table_df_revised.loc[unique_table_df_revised["Sentiment"] != "Not Mentioned", :]
1461
+ reference_table_df_revised = reference_table_df_revised.loc[reference_table_df_revised["Sentiment"] != "Not Mentioned", :]
1462
+
1463
+ unique_table_df_revised_path = output_folder + batch_file_path_details + "_summarised_unique_topic_table_" + model_choice_clean + ".csv"
1464
+ unique_table_df_revised.to_csv(unique_table_df_revised_path, index = None)
1465
+
1466
+ reference_table_df_revised_path = output_folder + batch_file_path_details + "_summarised_reference_df_table_" + model_choice_clean + ".csv"
1467
+ reference_table_df_revised.to_csv(reference_table_df_revised_path, index = None)
1468
+
1469
+ output_files.extend([reference_table_df_revised_path, unique_table_df_revised_path])
1470
+
1471
+ return summarised_references, unique_table_df_revised, reference_table_df_revised, output_files, summarised_outputs, latest_summary_completed, out_metadata_str
1472
+
1473
+ tic = time.perf_counter()
1474
+
1475
+ print("Starting with:", latest_summary_completed)
1476
+ print("Last summary number:", length_all_summaries)
1477
+
1478
+ summary_loop_description = "Creating summaries. " + str(latest_summary_completed) + " summaries completed so far."
1479
+ summary_loop = tqdm(range(latest_summary_completed, length_all_summaries), desc=summary_loop_description, unit="summaries")
1480
+
1481
+ for summary_no in summary_loop:
1482
+
1483
+ print("Current summary number is:", summary_no)
1484
+
1485
+ summary_text = all_summaries[summary_no]
1486
+ print("summary_text:", summary_text)
1487
+ formatted_summary_prompt = [summarise_topic_descriptions_prompt.format(summaries=summary_text)]
1488
+
1489
+ try:
1490
+ response, conversation_history, metadata = summarise_output_topics_query(model_choice, in_api_key, temperature, formatted_summary_prompt, summarise_topic_descriptions_system_prompt)
1491
+ summarised_output = response
1492
+ except Exception as e:
1493
+ print(e)
1494
+ summarised_output = ""
1495
+
1496
+ summarised_outputs.append(summarised_output)
1497
+ out_metadata.extend(metadata)
1498
+ out_metadata_str = '. '.join(out_metadata)
1499
+
1500
+ latest_summary_completed += 1
1501
+
1502
+ # Check if beyond max time allowed for processing and break if necessary
1503
+ toc = time.perf_counter()
1504
+ time_taken = toc - tic
1505
+
1506
+ if time_taken > max_time_for_loop:
1507
+ print("Time taken for loop is greater than maximum time allowed.")
1508
+ summary_loop.close()
1509
+ tqdm._instances.clear()
1510
+ break
1511
+
1512
+ # If all summaries completed
1513
+ if latest_summary_completed >= length_all_summaries:
1514
+ print("At last summary.")
1515
+
1516
+ return summarised_references, unique_table_df, reference_table_df, output_files, summarised_outputs, latest_summary_completed, out_metadata_str
tools/prompts.py ADDED
@@ -0,0 +1,48 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ system_prompt = """You are a researcher analysing responses from a public consultation. The subject of this consultation is: {consultation_context}. You are analysing a single question from this consultation, which is: {column_name}."""
2
+
3
+ initial_table_prompt = """The responses from the consultation are shown in the following table that contains two columns - Reference and Response:
4
+ '{response_table}'
5
+ Based on the above table, create a markdown table to summarise the consultation responses.
6
+ In the first column, identify general topics relevant to the responses. Create as many general topics as you can.
7
+ In the second column, list subtopics relevant to the responses. Make the subtopics as specific as possible and make sure they cover every issue mentioned.
8
+ In the third column, write the sentiment of the subtopic: Negative, Neutral, or Positive.
9
+ In the fourth column, write a short summary of the subtopic based on relevant responses. Highlight specific issues that appear in relevant responses.
10
+ In the fifth column, list the Response reference numbers of responses relevant to the Subtopic, separated by commas.
11
+
12
+ Do not add any other columns. Return the table in markdown format, and don't include any special characters in the table. Do not add any other text to your response."""
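+
+ # Expected shape of the model's reply to initial_table_prompt (illustrative; the header
+ # names match the dataframe columns used elsewhere in this repo):
+ # | General Topic | Subtopic | Sentiment | Summary | Response References |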
13
+
14
+ prompt2 = ""
15
+
16
+ prompt3 = ""
17
+
18
+ ## Adding existing topics to consultation responses
19
+
20
+ add_existing_topics_system_prompt = """You are a researcher analysing responses from a public consultation. The subject of this consultation is: {consultation_context}. You are analysing a single question from this consultation, which is: {column_name}."""
21
+
22
+ add_existing_topics_prompt = """Responses from a recent consultation are shown in the following table:
23
+
24
+ '{response_table}'
25
+
26
+ And below is a table of topics currently known to be relevant to this consultation:
27
+
28
+ '{topics}'
29
+
30
+ Your job is to assign responses from the Response column to existing general topics and subtopics, or to new topics if no existing topics are relevant.
31
+ Create a new markdown table to summarise the consultation responses.
32
+ In the first and second columns, assign responses to the General Topics and Subtopics from the Topics table if they are relevant. If you cannot find a relevant topic, add new General Topics and Subtopics to the table. Make the new Subtopics as specific as possible.
33
+ In the third column, write the sentiment of the Subtopic: Negative, Neutral, or Positive.
34
+ In the fourth column, write a short summary of the Subtopic based on relevant responses. Highlight specific issues that appear in relevant responses.
35
+ In the fifth column, list the Response reference numbers relevant to the Subtopic, separated by commas.
36
+
37
+ Do not add any other columns. Exclude rows for topics that are not assigned to any response. Return the table in markdown format, and do not include any special characters in the table. Do not add any other text to your response."""
38
+
39
+
40
+ summarise_topic_descriptions_system_prompt = """You are a researcher analysing responses from a public consultation."""
41
+
42
+ summarise_topic_descriptions_prompt = """Below are a number of paragraphs related to consultation responses:
43
+
44
+ '{summaries}'
45
+
46
+ Your job is to make a consolidated summary of the above text. Return a summary up to two paragraphs long that includes as much detail as possible from the original text. Return only the summary and no other text.
47
+
48
+ Summary:"""