## Docs

[https://fastrtc.org](https://fastrtc.org)

## Examples

See the [Cookbook](https://fastrtc.org/cookbook/) for examples of how to use the library.

<table>
<tr>
<td width="50%">
<h3>🗣️👀 Gemini Audio Video Chat</h3>
<p>Stream BOTH your webcam video and audio feeds to Google Gemini. You can also upload images to augment your conversation!</p>
<video width="100%" src="https://github.com/user-attachments/assets/9636dc97-4fee-46bb-abb8-b92e69c08c71" controls></video>
<p>
<a href="https://huggingface.co/spaces/freddyaboulton/gemini-audio-video-chat">Demo</a> |
<a href="https://huggingface.co/spaces/freddyaboulton/gemini-audio-video-chat/blob/main/app.py">Code</a>
</p>
</td>
<td width="50%">
<h3>🗣️ Google Gemini Real Time Voice API</h3>
<p>Talk to Gemini in real time using Google's voice API.</p>
<video width="100%" src="https://github.com/user-attachments/assets/ea6d18cb-8589-422b-9bba-56332d9f61de" controls></video>
<p>
<a href="https://huggingface.co/spaces/fastrtc/talk-to-gemini">Demo</a> |
<a href="https://huggingface.co/spaces/fastrtc/talk-to-gemini/blob/main/app.py">Code</a>
</p>
</td>
</tr>

<tr>
<td width="50%">
<h3>🗣️ OpenAI Real Time Voice API</h3>
<p>Talk to ChatGPT in real time using OpenAI's voice API.</p>
<video width="100%" src="https://github.com/user-attachments/assets/178bdadc-f17b-461a-8d26-e915c632ff80" controls></video>
<p>
<a href="https://huggingface.co/spaces/fastrtc/talk-to-openai">Demo</a> |
<a href="https://huggingface.co/spaces/fastrtc/talk-to-openai/blob/main/app.py">Code</a>
</p>
</td>
<td width="50%">
<h3>🤖 Hello Computer</h3>
<p>Say "computer" before asking your question!</p>
<video width="100%" src="https://github.com/user-attachments/assets/afb2a3ef-c1ab-4cfb-872d-578f895a10d5" controls></video>
<p>
<a href="https://huggingface.co/spaces/fastrtc/hello-computer">Demo</a> |
<a href="https://huggingface.co/spaces/fastrtc/hello-computer/blob/main/app.py">Code</a>
</p>
</td>
</tr>

<tr>
<td width="50%">
<h3>🤖 Llama Code Editor</h3>
<p>Create and edit HTML pages with just your voice! Powered by SambaNova Systems.</p>
<video width="100%" src="https://github.com/user-attachments/assets/98523cf3-dac8-4127-9649-d91a997e3ef5" controls></video>
<p>
<a href="https://huggingface.co/spaces/fastrtc/llama-code-editor">Demo</a> |
<a href="https://huggingface.co/spaces/fastrtc/llama-code-editor/blob/main/app.py">Code</a>
</p>
</td>
<td width="50%">
<h3>🗣️ Talk to Claude</h3>
<p>Use the Anthropic and Play.ht APIs to have an audio conversation with Claude.</p>
<video width="100%" src="https://github.com/user-attachments/assets/fb6ef07f-3ccd-444a-997b-9bc9bdc035d3" controls></video>
<p>
<a href="https://huggingface.co/spaces/fastrtc/talk-to-claude">Demo</a> |
<a href="https://huggingface.co/spaces/fastrtc/talk-to-claude/blob/main/app.py">Code</a>
</p>
</td>
</tr>

<tr>
<td width="50%">
<h3>🎵 Whisper Transcription</h3>
<p>Have Whisper transcribe your speech in real time!</p>
<video width="100%" src="https://github.com/user-attachments/assets/87603053-acdc-4c8a-810f-f618c49caafb" controls></video>
<p>
<a href="https://huggingface.co/spaces/fastrtc/whisper-realtime">Demo</a> |
<a href="https://huggingface.co/spaces/fastrtc/whisper-realtime/blob/main/app.py">Code</a>
</p>
</td>
<td width="50%">
<h3>📷 YOLOv10 Object Detection</h3>
<p>Run the YOLOv10 model on the user's webcam stream in real time!</p>
<video width="100%" src="https://github.com/user-attachments/assets/f82feb74-a071-4e81-9110-a01989447ceb" controls></video>
<p>
<a href="https://huggingface.co/spaces/fastrtc/object-detection">Demo</a> |
<a href="https://huggingface.co/spaces/fastrtc/object-detection/blob/main/app.py">Code</a>
</p>
</td>
</tr>

<tr>
<td width="50%">
<h3>🗣️ Kyutai Moshi</h3>
<p>Kyutai's Moshi is a novel speech-to-speech model for modeling human conversations.</p>
<video width="100%" src="https://github.com/user-attachments/assets/becc7a13-9e89-4a19-9df2-5fb1467a0137" controls></video>
<p>
<a href="https://huggingface.co/spaces/freddyaboulton/talk-to-moshi">Demo</a> |
<a href="https://huggingface.co/spaces/freddyaboulton/talk-to-moshi/blob/main/app.py">Code</a>
</p>
</td>
<td width="50%">
<h3>🗣️ Hello Llama: Stop Word Detection</h3>
<p>A code editor built with Llama 3.3 70B that is triggered by the phrase "Hello Llama". Build a Siri-like coding assistant in 100 lines of code!</p>
<video width="100%" src="https://github.com/user-attachments/assets/3e10cb15-ff1b-4b17-b141-ff0ad852e613" controls></video>
<p>
<a href="https://huggingface.co/spaces/freddyaboulton/hey-llama-code-editor">Demo</a> |
<a href="https://huggingface.co/spaces/freddyaboulton/hey-llama-code-editor/blob/main/app.py">Code</a>
</p>
</td>
</tr>
</table>

## Usage

This is a shortened version of the official [usage guide](https://freddyaboulton.github.io/gradio-webrtc/user-guide/).

- `.ui.launch()`: Launch a built-in UI for easily testing and sharing your stream. Built with [Gradio](https://www.gradio.app/).
- `.fastphone()`: Get a free temporary phone number to call into your stream. A Hugging Face token is required.
- `.mount(app)`: Mount the stream on a [FastAPI](https://fastapi.tiangolo.com/) app. Perfect for integrating with your existing production system.

Each of these is shown in full under "Running the Stream" below.

## Quickstart

### Echo Audio

```python
from fastrtc import Stream, ReplyOnPause
import numpy as np


def echo(audio: tuple[int, np.ndarray]):
    # The handler is passed all audio captured until the user pauses,
    # as a (sample_rate, np.ndarray) tuple. Implement any iterator that
    # yields audio in the same format.
    # See "LLM Voice Chat" for a more complete example.
    yield audio


stream = Stream(
    handler=ReplyOnPause(echo),
    modality="audio",
    mode="send-receive",
)
```
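
Because the handler is a generator, the reply can be streamed back incrementally rather than returned all at once. A minimal sketch of the same echo, sliced into roughly half-second pieces (the chunking scheme is illustrative, not part of the API):

```python
from fastrtc import Stream, ReplyOnPause
import numpy as np


def echo_in_chunks(audio: tuple[int, np.ndarray]):
    sample_rate, array = audio
    step = sample_rate // 2  # ~0.5 s of samples per chunk
    for start in range(0, array.shape[-1], step):
        # Each yielded (sample_rate, np.ndarray) tuple is played back
        # as soon as it is produced.
        yield (sample_rate, array[..., start : start + step])


stream = Stream(
    handler=ReplyOnPause(echo_in_chunks),
    modality="audio",
    mode="send-receive",
)
```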

### LLM Voice Chat

```py
from fastrtc import (
    ReplyOnPause, AdditionalOutputs, Stream,
    audio_to_bytes, aggregate_bytes_to_16bit
)
import numpy as np
from groq import Groq
import anthropic
from elevenlabs import ElevenLabs

groq_client = Groq()
claude_client = anthropic.Anthropic()
tts_client = ElevenLabs()


# See "Talk to Claude" in the Cookbook for an example of how to keep
# track of the chat history (via AdditionalOutputs).
def response(
    audio: tuple[int, np.ndarray],
):
    # Transcribe the user's speech with Whisper on Groq.
    prompt = groq_client.audio.transcriptions.create(
        file=("audio-file.mp3", audio_to_bytes(audio)),
        model="whisper-large-v3-turbo",
        response_format="verbose_json",
    ).text
    # Generate a reply with Claude.
    response = claude_client.messages.create(
        model="claude-3-5-haiku-20241022",
        max_tokens=512,
        messages=[{"role": "user", "content": prompt}],
    )
    response_text = " ".join(
        block.text
        for block in response.content
        if getattr(block, "type", None) == "text"
    )
    # Synthesize the reply as 24 kHz PCM and stream it back.
    iterator = tts_client.text_to_speech.convert_as_stream(
        text=response_text,
        voice_id="JBFqnCBsd6RMkjVDRZzb",
        model_id="eleven_multilingual_v2",
        output_format="pcm_24000",
    )
    for chunk in aggregate_bytes_to_16bit(iterator):
        audio_array = np.frombuffer(chunk, dtype=np.int16).reshape(1, -1)
        yield (24000, audio_array)


stream = Stream(
    modality="audio",
    mode="send-receive",
    handler=ReplyOnPause(response),
)
```
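
Note that the `output_format="pcm_24000"` requested from ElevenLabs matches the 24000 sample rate yielded back to the stream; if you change one, change the other to match.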

### Webcam Stream

```python
from fastrtc import Stream
import numpy as np


def flip_vertically(image):
    # The handler receives each webcam frame as a numpy array
    # and returns the frame to send back.
    return np.flip(image, axis=0)


stream = Stream(
    handler=flip_vertically,
    modality="video",
    mode="send-receive",
)
```

### Object Detection

```python
from fastrtc import Stream
import gradio as gr
import cv2
from huggingface_hub import hf_hub_download
from .inference import YOLOv10

model_file = hf_hub_download(
    repo_id="onnx-community/yolov10n", filename="onnx/model.onnx"
)

# git clone https://huggingface.co/spaces/fastrtc/object-detection
# for the YOLOv10 implementation
model = YOLOv10(model_file)


def detection(image, conf_threshold=0.3):
    image = cv2.resize(image, (model.input_width, model.input_height))
    new_image = model.detect_objects(image, conf_threshold)
    return cv2.resize(new_image, (500, 500))


stream = Stream(
    handler=detection,
    modality="video",
    mode="send-receive",
    additional_inputs=[
        gr.Slider(minimum=0, maximum=1, step=0.01, value=0.3)
    ]
)
```
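
Components listed in `additional_inputs` are rendered in the UI, and their current values are passed to the handler as extra arguments after the frame; here the slider supplies `conf_threshold`.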

## Running the Stream

Once `stream` is defined, run it in any of the following ways:

### Gradio

```py
stream.ui.launch()
```
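
The built-in UI is a regular Gradio app, so `launch()` accepts the usual Gradio options; for example (`share` is a standard Gradio parameter, not FastRTC-specific):

```py
stream.ui.launch(share=True)  # also create a temporary public share link
```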

### Telephone (Audio Only)

```py
stream.fastphone()
```
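
`fastphone()` needs your Hugging Face token; if it is not picked up from your environment (e.g. `HF_TOKEN`), you can pass it explicitly. A sketch, assuming the `token` parameter from the FastRTC docs:

```py
stream.fastphone(token="hf_...")  # placeholder; use your own token
```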

### FastAPI

```py
from fastapi import FastAPI
from fastapi.responses import HTMLResponse

app = FastAPI()
stream.mount(app)


# Optional: add your own routes
@app.get("/")
async def _():
    return HTMLResponse(content=open("index.html").read())


# Run with: uvicorn app:app --host 0.0.0.0 --port 8000
```