OpenAI's new Whisper "turbo": 8x faster, ~40% less VRAM, minimal loss in accuracy.
Run it locally in-browser for private transcriptions! Transcribe interviews, audio & video.
⚡️ 40 tokens/sec on my MacBook
Try it: webml-community/whisper-large-v3-turbo-webgpu
Model: https://huggingface.co/ylacombe/whisper-large-v3-turbo
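If you want to wire it into your own page instead of using the Space, here is a minimal sketch using Transformers.js with WebGPU. The package name (@huggingface/transformers), the ONNX model id (onnx-community/whisper-large-v3-turbo), and the pipeline options shown are assumptions for illustration, not necessarily what the demo Space uses.

```ts
// Minimal in-browser transcription sketch with Transformers.js (assumed setup).
// Everything runs client-side, so the audio never leaves the machine.
import { pipeline } from "@huggingface/transformers";

// Load the turbo checkpoint on WebGPU (model id and options are assumptions).
const transcriber = await pipeline(
  "automatic-speech-recognition",
  "onnx-community/whisper-large-v3-turbo",
  { device: "webgpu" },
);

// Transcribe an audio file or URL; chunking handles long interviews.
const result = await transcriber("interview.wav", {
  chunk_length_s: 30,
  return_timestamps: true,
});

console.log(result.text);
```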