Commit fb09fd5 by burtenshaw · 0 parents

first commit
Files changed (5)
  1. README.md +84 -0
  2. app.py +546 -0
  3. build/lib/app.py +218 -0
  4. pyproject.toml +11 -0
  5. requirements.txt +179 -0
README.md ADDED
@@ -0,0 +1,84 @@
# AI Agent Builder with SmolaGents

This application allows you to build AI agents using the SmolaGents library from Hugging Face. You can select tools from Hugging Face Hub collections and Spaces, and interact with your agent through a chat interface.

## Features

- Add tools from Hugging Face Hub collections
- Add tools from Hugging Face Spaces
- Chat with your agent in an interactive interface
- Customize the model used for your agent
- Regenerate responses if needed

## Installation

1. Make sure you have Python 3.11 or higher installed
2. Install the required dependencies:

```bash
pip install -r requirements.txt
```

## Usage

1. Run the application:

```bash
python app.py
```

2. Open your browser and navigate to the URL displayed in the terminal (usually http://127.0.0.1:7860)

3. Configure your agent:
   - Select a model (default is Qwen/Qwen2.5-Coder-32B-Instruct)
   - Add at least one tool from Hugging Face Hub collections or Spaces
   - Click "Create Agent"

4. Chat with your agent in the chat interface
   - Type your message and press Enter
   - Use the "Regenerate Response" button if you want a different answer
   - Use "Clear Chat" to start a new conversation

## Adding Tools

### Hugging Face Hub Collections

Enter the collection slug for a tool collection from Hugging Face Hub.
Example: `huggingface-tools/diffusion-tools-6630bb19a942c2306a2cdb6f`

### Hugging Face Spaces

Enter the space ID for a Gradio app on Hugging Face Spaces.
Example: `black-forest-labs/FLUX.1-schnell`

This allows you to use any Gradio app on Hugging Face Spaces as a tool for your agent.
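When no explicit tool name is given, the app derives one from the Space ID (`username/space-name`) by taking the last path segment, the same fallback `app.py` uses. A minimal sketch of that derivation; `default_tool_metadata` is an illustrative helper, not part of the app:

```python
def default_tool_metadata(space_id: str) -> dict:
    """Derive fallback tool metadata from a Space ID.

    Mirrors the fallback in app.py: the name defaults to the last path
    segment of the ID, the description to a generic placeholder.
    """
    return {
        "id": space_id,
        "name": space_id.split("/")[-1],
        "description": f"Tool from {space_id}",
    }


meta = default_tool_metadata("black-forest-labs/FLUX.1-schnell")
print(meta["name"])  # FLUX.1-schnell
```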
53
+
54
+ ## Example Spaces to Try
55
+
56
+ ### Image Generation Spaces
57
+ - `black-forest-labs/FLUX.1-schnell`: Text-to-image generation
58
+ - `stabilityai/stable-diffusion-xl-base-1.0`: High-quality image generation
59
+
60
+ ### Text Analysis Spaces
61
+ - `sentence-transformers/text-similarity`: Compare text similarity
62
+ - `facebook/bart-large-mnli`: Text classification
63
+
64
+ ### Other Useful Spaces
65
+ - `spaces/jxnl/instructor-xl`: Generate structured data
66
+ - `spaces/fffiloni/text-to-speech`: Convert text to speech
67
+
68
+ ## Troubleshooting
69
+
70
+ If you encounter issues with the Space tool not being recognized, make sure:
71
+ 1. You've entered the correct Space ID
72
+ 2. The Space has a Gradio interface
73
+ 3. The Space is publicly accessible
74
+
75
+ ## Requirements
76
+
77
+ - gradio>=5.15.0
78
+ - smolagents>=1.10.0
79
+ - huggingface_hub
80
+
81
+ ## Note
82
+
83
+ For some tools and models, you may need to set up API keys as environment variables. For example:
84
+ - `HF_TOKEN` for Hugging Face Hub access
app.py ADDED
@@ -0,0 +1,546 @@
import os
from typing import Optional

import gradio as gr
import requests
from gradio import ChatMessage
from smolagents import CodeAgent, Tool
from smolagents.models import HfApiModel
from smolagents.monitoring import LogLevel

DEFAULT_MODEL = "Qwen/Qwen2.5-Coder-32B-Instruct"
HF_API_TOKEN = os.getenv("HF_TOKEN")

# Tool descriptions for the UI
TOOL_DESCRIPTIONS = {
    "Hub Collections": "Add tool collections from Hugging Face Hub.",
    "Spaces": "Add tools from Hugging Face Spaces.",
}


def search_spaces(query, limit=1):
    """
    Search for Hugging Face Spaces using the API.
    Returns the first result or None if no results.
    """
    try:
        url = f"https://huggingface.co/api/spaces?search={query}&limit={limit}"
        response = requests.get(
            url, headers={"Authorization": f"Bearer {HF_API_TOKEN}"}
        )
        response.raise_for_status()

        spaces = response.json()
        if not spaces:
            return None

        # Get the first space
        space = spaces[0]
        space_id = space["id"]

        # Extract title and description
        title = space_id.split("/")[-1]  # Default to the last part of the ID
        description = f"Tool from {space_id}"

        # Try to get title from different possible locations
        if "title" in space:
            title = space["title"]
        elif "cardData" in space and "title" in space["cardData"]:
            title = space["cardData"]["title"]

        # Try to get description from different possible locations
        if "description" in space:
            description = space["description"]
        elif "cardData" in space and "description" in space["cardData"]:
            description = space["cardData"]["description"]

        return {
            "id": space_id,
            "title": title,
            "description": description,
        }
    except Exception as e:
        print(f"Error searching spaces: {e}")
        return None


def get_space_metadata(space_id):
    """
    Get metadata for a specific Hugging Face Space.
    """
    try:
        url = f"https://huggingface.co/api/spaces/{space_id}"
        response = requests.get(
            url, headers={"Authorization": f"Bearer {HF_API_TOKEN}"}
        )
        response.raise_for_status()

        space = response.json()

        # Extract title and description from the space data
        # The structure can vary, so we need to handle different cases
        title = space_id
        description = f"Tool from {space_id}"

        # Try to get title from different possible locations
        if "title" in space:
            title = space["title"]
        elif "cardData" in space and "title" in space["cardData"]:
            title = space["cardData"]["title"]
        else:
            # Use the last part of the space_id as a fallback title
            title = space_id.split("/")[-1]

        # Try to get description from different possible locations
        if "description" in space:
            description = space["description"]
        elif "cardData" in space and "description" in space["cardData"]:
            description = space["cardData"]["description"]

        return {
            "id": space_id,
            "title": title,
            "description": description,
        }
    except Exception as e:
        print(f"Error getting space metadata: {e}")
        return None
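Both functions above share the same title/description fallback over the Spaces API payload: top-level fields win, then nested `cardData`, then defaults derived from the Space ID. That logic can be factored out and exercised without any network access. A sketch; `extract_card_fields` is a hypothetical helper (using truthiness via `or` rather than the `in` checks above), not part of the app:

```python
def extract_card_fields(space: dict, space_id: str) -> tuple:
    """Pull title/description out of a Spaces API payload: top-level
    fields first, then cardData, then ID-derived defaults."""
    card = space.get("cardData") or {}
    title = space.get("title") or card.get("title") or space_id.split("/")[-1]
    description = (
        space.get("description")
        or card.get("description")
        or f"Tool from {space_id}"
    )
    return title, description


# Top-level title wins over cardData; missing description falls back.
print(extract_card_fields({"title": "Flux", "cardData": {"title": "Card"}}, "blk/flux"))
```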
def create_agent(model_name, space_tools=None):
    """
    Create a CodeAgent with the specified model and tools.
    """
    if not space_tools:
        space_tools = []

    try:
        # Convert space tools to Tool objects
        tools = []
        for tool_info in space_tools:
            space_id = tool_info["id"]
            tool = Tool.from_space(
                space_id,
                name=tool_info.get("name", space_id),
                description=tool_info.get("description", ""),
            )
            tools.append(tool)

        # Initialize the HfApiModel with the model name
        model = HfApiModel(model_id=model_name, token=HF_API_TOKEN)

        # Create the agent with the tools and additional imports
        agent = CodeAgent(
            tools=tools,
            model=model,
            additional_authorized_imports=["PIL", "requests"],
            verbosity_level=LogLevel.DEBUG,  # Set higher verbosity for detailed logs
        )
        print(f"Agent created successfully with {len(tools)} tools")
        return agent
    except Exception as e:
        print(f"Error creating agent: {e}")

        # Try with a fallback model if the specified one fails
        try:
            print("Trying fallback model...")
            fallback_model = HfApiModel(
                model_id="Qwen/Qwen2.5-Coder-7B-Instruct", token=HF_API_TOKEN
            )

            agent = CodeAgent(
                tools=tools,
                model=fallback_model,
                additional_authorized_imports=["PIL", "requests"],
                verbosity_level=LogLevel.DEBUG,  # Set higher verbosity for detailed logs
            )
            print("Agent created successfully with fallback model")
            return agent
        except Exception as e:
            print(f"Error creating agent: {e}")
            return None


# Event handler functions
def on_search_spaces(query):
    if not query:
        return "Please enter a search term.", "", "", ""

    try:
        space_info = search_spaces(query)
        if space_info is None:
            return "No spaces found.", "", "", ""

        # Format the results as markdown
        results_md = "### Search Results:\n"
        results_md += f"- ID: `{space_info['id']}`\n"
        results_md += f"- Title: {space_info['title']}\n"
        results_md += f"- Description: {space_info['description']}\n"

        # Return values to update the UI
        return (
            results_md,
            space_info["id"],
            space_info["title"],
            space_info["description"],
        )
    except Exception as e:
        print(f"Error in search: {e}")
        return f"Error: {str(e)}", "", "", ""


def on_validate_space(space_id):
    if not space_id:
        return "Please enter a space ID or search term.", "", ""

    try:
        # First try to get metadata directly if it's a valid space ID
        space_info = get_space_metadata(space_id)

        # If not found, try to search for it
        if space_info is None:
            # Try to search for the space using the ID as a search term
            space_info = search_spaces(space_id)
            if space_info is None:
                return f"No spaces found for '{space_id}'.", "", ""

            # Format search result as markdown
            result_md = "### Found Space via Search:\n"
            result_md += f"- ID: `{space_info['id']}`\n"
            result_md += f"- Title: {space_info['title']}\n"
            result_md += f"- Description: {space_info['description']}\n"

            return (
                result_md,
                space_info["title"],
                space_info["description"],
            )

        # Format direct match as markdown
        result_md = "### Space Validated Successfully:\n"
        result_md += f"- ID: `{space_info['id']}`\n"
        result_md += f"- Title: {space_info['title']}\n"
        result_md += f"- Description: {space_info['description']}\n"

        return (
            result_md,
            space_info["title"],
            space_info["description"],
        )
    except Exception as e:
        print(f"Error validating space: {e}")
        return f"Error: {str(e)}", "", ""


def on_add_tool(space_id, space_name, space_description, current_tools):
    if not space_id:
        return (
            current_tools,
            "Please enter a space ID.",
        )

    # Check if this tool is already added
    for tool in current_tools:
        if tool["id"] == space_id:
            return (
                current_tools,
                f"Tool '{space_id}' is already added.",
            )

    # Add the new tool
    new_tool = {
        "id": space_id,
        "name": space_name if space_name else space_id,
        "description": space_description if space_description else "No description",
    }

    updated_tools = current_tools + [new_tool]

    # Format the tools as markdown
    tools_md = "### Added Tools:\n"
    for i, tool in enumerate(updated_tools, 1):
        tools_md += f"{i}. **{tool['name']}** (`{tool['id']}`)\n"
        tools_md += f"   {tool['description']}\n\n"

    return updated_tools, tools_md
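The dedupe-and-render step in `on_add_tool` is pure Python, so it is easy to check in isolation. A standalone sketch of the same pattern (`render_tools` is an illustrative helper, not a function in the app):

```python
def render_tools(tools: list) -> str:
    """Format added tools as the markdown list on_add_tool produces."""
    md = "### Added Tools:\n"
    for i, tool in enumerate(tools, 1):
        md += f"{i}. **{tool['name']}** (`{tool['id']}`)\n"
        md += f"   {tool['description']}\n\n"
    return md


tools = [{"id": "a/b", "name": "b", "description": "demo"}]
# Adding the same Space ID again is a no-op, as in on_add_tool.
if not any(t["id"] == "a/b" for t in tools):
    tools.append({"id": "a/b", "name": "dup", "description": "ignored"})
print(render_tools(tools))
```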
def on_create_agent(model, space_tools):
    if not space_tools:
        return (
            None,
            [],
            "",
            "Please add at least one tool before creating an agent.",
            "No agent created yet.",
        )

    try:
        # Create the agent
        agent = create_agent(model, space_tools)

        if agent is None:
            return (
                None,
                [],
                "",
                "Failed to create agent. Please try again with different tools or model.",
                "No agent created yet.",
            )

        # Format the tools for display
        tools_str = ", ".join(
            [f"{tool['name']} ({tool['id']})" for tool in space_tools]
        )

        # Generate agent status
        agent_status = update_agent_status(agent)

        return (
            agent,
            [],
            "",
            f"✅ Agent created successfully with {model}!\nTools: {tools_str}",
            agent_status,
        )
    except Exception as e:
        print(f"Error creating agent: {e}")
        return None, [], "", f"Error creating agent: {str(e)}", "No agent created yet."


def add_user_message(message, chat_history):
    """Add the user message to the chat history."""
    # For Gradio chatbot with type="messages", we need to use ChatMessage objects
    if not message:
        return "", chat_history

    # Add user message to chat history
    chat_history = chat_history + [ChatMessage(role="user", content=message)]
    return message, chat_history


def stream_to_gradio(
    agent,
    task: str,
    reset_agent_memory: bool = False,
    additional_args: Optional[dict] = None,
):
    """Runs an agent with the given task and streams the messages from the agent as gradio ChatMessages."""
    from smolagents.agent_types import AgentAudio, AgentImage, AgentText
    from smolagents.gradio_ui import handle_agent_output_types, pull_messages_from_step

    for step_log in agent.run(
        task, stream=True, reset=reset_agent_memory, additional_args=additional_args
    ):
        for message in pull_messages_from_step(
            step_log,
        ):
            yield message

    final_answer = step_log  # Last log is the run's final_answer
    final_answer = handle_agent_output_types(final_answer)

    if isinstance(final_answer, AgentImage):
        yield gr.ChatMessage(
            role="assistant",
            content={"path": final_answer.to_string(), "mime_type": "image/png"},
        )
    elif isinstance(final_answer, AgentText) and os.path.exists(
        final_answer.to_string()
    ):
        yield gr.ChatMessage(
            role="assistant",
            content=gr.Image(final_answer.to_string()),
        )
    elif isinstance(final_answer, AgentAudio):
        yield gr.ChatMessage(
            role="assistant",
            content={"path": final_answer.to_string(), "mime_type": "audio/wav"},
        )
    else:
        yield gr.ChatMessage(
            role="assistant", content=f"**Final answer:** {str(final_answer)}"
        )


def stream_agent_response(agent, message, chat_history):
    """Stream the agent's response to the chat history."""
    if not message or agent is None:
        return chat_history

    # First yield the current chat history
    yield chat_history

    try:
        # Stream the agent's response
        for msg in stream_to_gradio(agent, message):
            # Add the message to chat history
            chat_history = chat_history + [msg]
            # Yield updated chat history
            yield chat_history
    except Exception as e:
        # Handle errors
        error_msg = f"Error: {str(e)}"
        chat_history = chat_history + [ChatMessage(role="assistant", content=error_msg)]
        yield chat_history
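`stream_agent_response` updates the chatbot by re-yielding a progressively longer copy of the history, one yield per streamed message. The incremental pattern, stripped of the agent and Gradio machinery and driven by a stubbed message source (`stream_history` is an illustrative helper):

```python
def stream_history(messages, chat_history):
    """Yield successively longer copies of chat_history, one per incoming
    message -- the same incremental pattern stream_agent_response uses."""
    yield chat_history  # first yield: the history as-is
    for msg in messages:
        chat_history = chat_history + [msg]  # copy, don't mutate in place
        yield chat_history


snapshots = list(stream_history(["thinking...", "done"], ["hi"]))
print(snapshots[-1])  # ['hi', 'thinking...', 'done']
```

Each snapshot is a fresh list, so a UI consuming the generator always sees a complete, consistent history rather than a half-mutated one.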
def on_clear(agent=None):
    """Clear the chat and reset the agent."""
    return (
        agent,
        [],
        "",
        "Agent cleared. Create a new one to continue.",
        "",
        gr.update(interactive=False),
    )


def update_agent_status(agent):
    """Update the agent status display with current information."""
    if agent is None:
        return "No agent created yet. Add a Space tool to get started."

    # Get agent information
    tools = agent.tools if hasattr(agent, "tools") else []
    tool_count = len(tools)

    # Create status message
    status = f"Agent ready with {tool_count} tools"
    return status


# Create the Gradio app
with gr.Blocks(title="AI Agent Builder") as app:
    gr.Markdown("# AI Agent Builder with SmolaGents")
    gr.Markdown("Build your own AI agent by selecting tools from Hugging Face Spaces.")

    # Agent state
    agent_state = gr.State(None)
    last_message = gr.State("")
    space_tools_state = gr.State([])

    # Message store for preserving user message
    msg_store = gr.State("")

    with gr.Row():
        # Left sidebar for tool configuration
        with gr.Column(scale=1):
            gr.Markdown("## Tool Configuration")
            gr.Markdown("Add multiple Hugging Face Spaces as tools for your agent:")

            # Hidden model input with default value
            model_input = gr.Textbox(
                value=DEFAULT_MODEL,
                label="Model",
                visible=False,
            )

            # Space tool input
            with gr.Group():
                gr.Markdown("### Add Space as Tool")
                space_tool_input = gr.Textbox(
                    label="Space ID or Search Term",
                    placeholder="Enter a Space ID or search term",
                    info="Enter a Space ID (username/space-name) or search term",
                )
                space_name_input = gr.Textbox(
                    label="Tool Name (optional)",
                    placeholder="Enter a name for this tool",
                )
                space_description_input = gr.Textbox(
                    label="Tool Description (optional)",
                    placeholder="Enter a description for this tool",
                    lines=2,
                )
                add_tool_button = gr.Button("Add Tool", variant="primary")

            # Display added tools
            gr.Markdown("### Added Tools")
            tools_display = gr.Markdown(
                "No tools added yet. Add at least one tool before creating an agent."
            )

            # Create agent button
            create_button = gr.Button(
                "Create Agent with Selected Tools", variant="secondary", size="lg"
            )

            # Status message
            status_msg = gr.Markdown("")

            # Agent status display
            agent_status = gr.Markdown("No agent created yet.")

        # Main content area
        with gr.Column(scale=2):
            # Chat interface for the agent
            chatbot = gr.Chatbot(
                label="Agent Chat",
                height=600,
                show_copy_button=True,
                avatar_images=("👤", "🤖"),
                type="messages",  # Use messages type for ChatMessage objects
            )
            msg = gr.Textbox(
                label="Your message",
                placeholder="Type a message to your agent...",
                interactive=True,
            )

            with gr.Row():
                with gr.Column(scale=1, min_width=60):
                    clear = gr.Button("🗑️", scale=1)
                with gr.Column(scale=8):
                    # Empty column for spacing
                    pass

    # Connect event handlers
    # Connect the space_tool_input submit event to the validation handler
    space_tool_input.submit(
        on_validate_space,
        inputs=[space_tool_input],
        outputs=[status_msg, space_name_input, space_description_input],
    )

    # Connect the add tool button
    add_tool_button.click(
        on_add_tool,
        inputs=[
            space_tool_input,
            space_name_input,
            space_description_input,
            space_tools_state,
        ],
        outputs=[space_tools_state, tools_display],
    )

    # Connect the create button to the handler
    create_button.click(
        on_create_agent,
        inputs=[model_input, space_tools_state],
        outputs=[agent_state, chatbot, msg, status_msg, agent_status],
    )

    # Connect the message input to the chain of handlers
    msg.submit(
        lambda message: (message, message, ""),  # Store message and clear input
        inputs=[msg],
        outputs=[msg_store, msg, msg],
        queue=False,
    ).then(
        add_user_message,  # Add user message to chat
        inputs=[msg_store, chatbot],
        outputs=[msg_store, chatbot],
        queue=False,
    ).then(
        stream_agent_response,  # Generate and stream response
        inputs=[agent_state, msg_store, chatbot],
        outputs=chatbot,
        queue=True,
    )


if __name__ == "__main__":
    app.queue().launch()
build/lib/app.py ADDED
@@ -0,0 +1,218 @@
import gradio as gr
import os
from smolagents import CodeAgent, ToolCollection, Tool
from smolagents.models import HfApiModel, LiteLLMModel

# Default model to use
DEFAULT_MODEL = "Qwen/Qwen2.5-Coder-7B-Instruct"

# Tool descriptions for the UI
TOOL_DESCRIPTIONS = {
    "Hub Collections": "Add tool collections from Hugging Face Hub.",
    "Spaces": "Add tools from Hugging Face Spaces.",
}


# Function to create an agent with selected tools
def create_agent(model_name, hub_tool=None, space_tool=None):
    tools = []

    # Add tool from Hub if provided
    if hub_tool:
        try:
            hub_collection = ToolCollection.from_hub(collection_slug=hub_tool)
            tools.extend(hub_collection.tools)
        except Exception as e:
            print(f"Error loading Hub tool: {e}")

    # Add tool from Space if provided
    if space_tool:
        try:
            space_tool_obj = Tool.from_space(
                space_id=space_tool,
                name=f"space_{space_tool.replace('/', '_')}",
                description=f"Tool from Hugging Face Space: {space_tool}",
            )
            tools.append(space_tool_obj)
        except Exception as e:
            print(f"Error loading Space tool: {e}")

    # Create and return the agent
    try:
        # Try to use HfApiModel first
        model = HfApiModel(model_id=model_name)
        return CodeAgent(tools=tools, model=model)
    except Exception:
        # Fall back to LiteLLMModel if HfApiModel fails
        try:
            model = LiteLLMModel(model_id=model_name)
            return CodeAgent(tools=tools, model=model)
        except Exception as e:
            print(f"Error creating agent: {e}")
            return None


# Main application
def main():
    with gr.Blocks(title="AI Agent Builder") as app:
        gr.Markdown("# AI Agent Builder with SmolaGents")
        gr.Markdown(
            "Build your own AI agent by selecting tools from Hugging Face Hub and Spaces."
        )

        with gr.Tabs():
            with gr.TabItem("Build Agent"):
                with gr.Row():
                    with gr.Column(scale=1):
                        # Model selection
                        model_input = gr.Textbox(
                            label="Model Name",
                            placeholder="Enter model name or ID",
                            value=DEFAULT_MODEL,
                        )

                        # Hub tool input
                        hub_tool_input = gr.Textbox(
                            label="Add Tool Collection from Hub (collection slug)",
                            placeholder="e.g., huggingface-tools/diffusion-tools-...",
                        )

                        # Space tool input
                        space_tool_input = gr.Textbox(
                            label="Add Tool from Space (space ID)",
                            placeholder="e.g., black-forest-labs/FLUX.1-schnell",
                        )

                        # Create agent button
                        create_button = gr.Button("Create Agent")

                        # Status message
                        status_msg = gr.Markdown("")

                    with gr.Column(scale=2):
                        # Chat interface for the agent
                        chatbot = gr.Chatbot(label="Agent Chat")
                        msg = gr.Textbox(label="Your message")

                        with gr.Row():
                            clear = gr.Button("Clear Chat")
                            regenerate = gr.Button("Regenerate Response")

            with gr.TabItem("Tool Descriptions"):
                tool_descriptions_md = """
                ## Hugging Face Hub Tool Collections

                You can add tool collections from Hugging Face Hub by providing the collection slug.
                Example: `huggingface-tools/diffusion-tools-6630bb19a942c2306a2cdb6f`

                ## Hugging Face Spaces as Tools

                You can add tools from Hugging Face Spaces by providing the space ID.
                Example: `black-forest-labs/FLUX.1-schnell`

                This allows you to use any Gradio app on Hugging Face Spaces as a tool for your agent.
                """

                gr.Markdown(tool_descriptions_md)

        # Agent state
        agent_state = gr.State(None)
        last_message = gr.State("")

        # Event handlers
        def on_create_agent(model, hub_tool, space_tool):
            if not model:
                return None, [], "", "⚠️ Please enter a model name."

            if not hub_tool and not space_tool:
                return None, [], "", "⚠️ Please add at least one tool from Hub or Space."

            agent = create_agent(model, hub_tool, space_tool)

            if agent is None:
                return (
                    None,
                    [],
                    "",
                    "❌ Failed to create agent. Check console for details.",
                )

            tools_info = []
            if hub_tool:
                tools_info.append(f"Hub collection: {hub_tool}")
            if space_tool:
                tools_info.append(f"Space: {space_tool}")

            tools_str = " | ".join(tools_info)

            return (
                agent,
                [],
                "",
                f"✅ Agent created successfully with {model}! ({tools_str})",
            )

        create_button.click(
            on_create_agent,
            inputs=[model_input, hub_tool_input, space_tool_input],
            outputs=[agent_state, chatbot, msg, status_msg],
        )

        def on_message(message, chat_history, agent, last_msg):
            if not message:
                return "", chat_history, last_msg

            if agent is None:
                chat_history.append((message, "Please create an agent first."))
                return "", chat_history, last_msg

            try:
                response = agent.run(message, reset=False)
                chat_history.append((message, response))
                return "", chat_history, message
            except Exception as e:
                error_msg = f"Error: {str(e)}"
                chat_history.append((message, error_msg))
                return "", chat_history, message

        msg.submit(
            on_message,
            inputs=[msg, chatbot, agent_state, last_message],
            outputs=[msg, chatbot, last_message],
        )

        def on_regenerate(chat_history, agent, last_msg):
            if not chat_history or not last_msg or agent is None:
                return chat_history, last_msg

            try:
                # Remove the last exchange
                if chat_history:
                    chat_history.pop()

                # Regenerate the response
                response = agent.run(last_msg, reset=False)
                chat_history.append((last_msg, response))
                return chat_history, last_msg
            except Exception as e:
                error_msg = f"Error regenerating response: {str(e)}"
                chat_history.append((last_msg, error_msg))
                return chat_history, last_msg
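Regeneration in this older version amounts to popping the last (user, assistant) tuple and re-running the last user message. With a stubbed agent the behavior is easy to verify offline; `EchoAgent` and `regenerate` below are illustrative stand-ins, not app code:

```python
class EchoAgent:
    """Deterministic, offline stand-in for CodeAgent."""

    def __init__(self):
        self.calls = 0

    def run(self, message, reset=False):
        self.calls += 1
        return f"answer #{self.calls} to {message!r}"


def regenerate(chat_history, agent, last_msg):
    """Drop the last (user, assistant) exchange and re-run the message,
    as on_regenerate does with tuple-style chat history."""
    if chat_history:
        chat_history.pop()
    chat_history.append((last_msg, agent.run(last_msg, reset=False)))
    return chat_history


agent = EchoAgent()
history = [("hi", agent.run("hi"))]
history = regenerate(history, agent, "hi")
print(history)  # [('hi', "answer #2 to 'hi'")]
```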
        regenerate.click(
            on_regenerate,
            inputs=[chatbot, agent_state, last_message],
            outputs=[chatbot, last_message],
        )

        def on_clear():
            return None, [], "", "Agent cleared. Create a new one to continue."

        clear.click(on_clear, outputs=[agent_state, chatbot, last_message, status_msg])

    return app


if __name__ == "__main__":
    app = main()
    app.launch()
pyproject.toml ADDED
@@ -0,0 +1,11 @@
[project]
name = "agent-builder"
version = "0.1.0"
description = "Add your description here"
readme = "README.md"
requires-python = ">=3.11"
dependencies = [
    "gradio>=5.20.0",
    "huggingface-hub>=0.28.1",
    "smolagents>=1.10.0",
]
requirements.txt ADDED
@@ -0,0 +1,179 @@
# This file was autogenerated by uv via the following command:
#    uv pip compile --output-file requirements.txt pyproject.toml
aiofiles==23.2.1
    # via gradio
annotated-types==0.7.0
    # via pydantic
anyio==4.8.0
    # via
    #   gradio
    #   httpx
    #   starlette
beautifulsoup4==4.13.3
    # via markdownify
certifi==2025.1.31
    # via
    #   httpcore
    #   httpx
    #   requests
charset-normalizer==3.4.1
    # via requests
click==8.1.8
    # via
    #   duckduckgo-search
    #   typer
    #   uvicorn
duckduckgo-search==7.5.0
    # via smolagents
fastapi==0.115.11
    # via gradio
ffmpy==0.5.0
    # via gradio
filelock==3.17.0
    # via huggingface-hub
fsspec==2025.2.0
    # via
    #   gradio-client
    #   huggingface-hub
gradio==5.20.0
    # via agent-builder (pyproject.toml)
gradio-client==1.7.2
    # via gradio
groovy==0.1.2
    # via gradio
h11==0.14.0
    # via
    #   httpcore
    #   uvicorn
httpcore==1.0.7
    # via httpx
httpx==0.28.1
    # via
    #   gradio
    #   gradio-client
    #   safehttpx
huggingface-hub==0.29.2
    # via
    #   agent-builder (pyproject.toml)
    #   gradio
    #   gradio-client
    #   smolagents
idna==3.10
    # via
    #   anyio
    #   httpx
    #   requests
jinja2==3.1.6
    # via
    #   gradio
    #   smolagents
lxml==5.3.1
    # via duckduckgo-search
markdown-it-py==3.0.0
    # via rich
markdownify==1.1.0
    # via smolagents
markupsafe==2.1.5
    # via
    #   gradio
    #   jinja2
mdurl==0.1.2
    # via markdown-it-py
numpy==2.2.3
    # via
    #   gradio
    #   pandas
orjson==3.10.15
    # via gradio
packaging==24.2
    # via
    #   gradio
    #   gradio-client
    #   huggingface-hub
pandas==2.2.3
    # via
    #   gradio
    #   smolagents
pillow==11.1.0
    # via
    #   gradio
    #   smolagents
primp==0.14.0
    # via duckduckgo-search
pydantic==2.10.6
    # via
    #   fastapi
    #   gradio
pydantic-core==2.27.2
    # via pydantic
pydub==0.25.1
    # via gradio
pygments==2.19.1
    # via rich
python-dateutil==2.9.0.post0
    # via pandas
python-dotenv==1.0.1
    # via smolagents
python-multipart==0.0.20
    # via gradio
pytz==2025.1
    # via pandas
pyyaml==6.0.2
    # via
    #   gradio
    #   huggingface-hub
requests==2.32.3
    # via
    #   huggingface-hub
    #   smolagents
rich==13.9.4
    # via
    #   smolagents
    #   typer
ruff==0.9.9
    # via gradio
safehttpx==0.1.6
    # via gradio
semantic-version==2.10.0
    # via gradio
shellingham==1.5.4
    # via typer
six==1.17.0
    # via
    #   markdownify
    #   python-dateutil
smolagents==1.10.0
    # via agent-builder (pyproject.toml)
sniffio==1.3.1
    # via anyio
soupsieve==2.6
    # via beautifulsoup4
starlette==0.46.0
    # via
    #   fastapi
    #   gradio
tomlkit==0.13.2
    # via gradio
tqdm==4.67.1
    # via huggingface-hub
typer==0.15.2
    # via gradio
typing-extensions==4.12.2
    # via
    #   anyio
    #   beautifulsoup4
    #   fastapi
    #   gradio
    #   gradio-client
    #   huggingface-hub
    #   pydantic
    #   pydantic-core
    #   typer
tzdata==2025.1
    # via pandas
urllib3==2.3.0
    # via requests
uvicorn==0.34.0
    # via gradio
websockets==15.0.1
    # via gradio-client