yellowcandle committed
Commit 52cfee9 · unverified · 1 Parent(s): dd4c06b

Add Whisper V3 Gradio demo


- Import gradio and transformers libraries
- Load Whisper V3 large model pipeline for automatic speech recognition
- Define transcribe_audio function to run pipeline on input audio
- Create gradio interface with audio input and text output
- Launch gradio demo
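The transcribe function described above forwards the audio to the pipeline and returns its result. A minimal sketch of that flow, using a stub in place of the real Whisper pipeline (the stub `fake_pipe` and its fixed return value are illustrative, not from the commit; the actual pipeline returns a dict of the form `{"text": "..."}`):

```python
def fake_pipe(audio):
    # Stand-in for pipeline("automatic-speech-recognition", ...):
    # the real pipeline returns a dict with a "text" field.
    return {"text": "hello world"}

def transcribe_audio(audio):
    # Unpacking "text" here would keep a Gradio text output clean;
    # the commit as written returns the whole dict instead.
    return fake_pipe(audio)["text"]

print(transcribe_audio("sample.wav"))  # → hello world
```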

Files changed (2)
  1. app.py +7 -3
  2. requirements.txt +2 -0
app.py CHANGED
@@ -1,7 +1,11 @@
 import gradio as gr
+# Use a pipeline as a high-level helper
+from transformers import pipeline
 
-def greet(name):
-    return "Hello " + name + "!!"
+pipe = pipeline("automatic-speech-recognition", model="openai/whisper-large-v3")
+
+def transcribe_audio(audio):
+    return pipe(audio)
 
-demo = gr.Interface(fn=greet, inputs="text", outputs="text")
+demo = gr.Interface(fn=transcribe_audio, inputs="audio", outputs="text")
 demo.launch()
requirements.txt ADDED
@@ -0,0 +1,2 @@
+gradio
+transformers
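One hedged note on the dependency list: a `transformers` pipeline also needs a deep-learning backend installed, typically PyTorch, which is not listed here. If the Space's image does not already provide it, the requirements file would likely need a third line (an assumption about the deployment environment, not part of this commit):

```
gradio
transformers
torch
```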