
Laurent Mazare

lmz

AI & ML interests

None yet

Recent Activity

liked a model 1 day ago
nu-dialogue/j-moshi
liked a model 2 days ago
nu-dialogue/j-moshi-ext

Organizations

Whisper Distillation, Kyutai, Hugging Face Discord Community, kmhf, k

lmz's activity

New activity in kyutai/helium-1-preview-2b 12 days ago

fix architecture (2 comments)
#3 opened 12 days ago by TPM-28
New activity in kyutai/helium-1-preview-2b 12 days ago

fix title (1 comment)
#2 opened 12 days ago by eliebak
New activity in kyutai/helium-1-preview-2b 13 days ago
New activity in kyutai/mimi 13 days ago

Number of quantizers (1 comment)
#3 opened 4 months ago by mithileshvaidya
updated a model 19 days ago
reacted to reach-vb's post with 🔥 4 months ago
Less than two days ago Kyutai Labs open-sourced Moshi - a ~7.6B on-device speech-to-speech foundation model - and Mimi - a SoTA streaming speech codec! 🔥

The release includes:

1. Moshiko & Moshika - Moshi finetuned on synthetic data (CC-BY license) (kyutai/moshi-v01-release-66eaeaf3302bef6bd9ad7acd)
2. Mimi - Streaming Audio Codec, processes 24 kHz audio down to a 12.5 Hz representation with a bandwidth of 1.1 kbps (CC-BY license) (kyutai/mimi) - see the bitrate check after this list
3. Model checkpoints & Inference codebase written in Rust (Candle), PyTorch & MLX (Apache license) (https://github.com/kyutai-labs/moshi)
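
Those Mimi numbers are internally consistent. A quick sanity check, assuming the 8 residual codebooks with 2048 entries each that the released codec uses at this bitrate (an assumption; the post itself doesn't state them):

# Sanity check of Mimi's advertised ~1.1 kbps bandwidth. The codebook count
# and size below are assumed defaults of the released codec, not from the post.
import math

frame_rate_hz = 12.5   # frames per second at the 12.5 Hz representation
n_codebooks = 8        # residual quantizers assumed active at this bitrate
codebook_size = 2048   # entries per codebook -> log2(2048) = 11 bits per code

bits_per_second = frame_rate_hz * n_codebooks * math.log2(codebook_size)
print(bits_per_second)  # 1100.0 bits/s, i.e. ~1.1 kbps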

How does Moshi work?

1. Moshi processes two audio streams: one for itself and one for the user, with the user's stream coming from audio input and Moshi's stream generated by the model.

2. Along with these audio streams, Moshi predicts text tokens for its speech, enhancing its generation quality.

3. The model uses a small Depth Transformer for codebook dependencies and a large 7B parameter Temporal Transformer for temporal dependencies (sketched after this list).

4. The theoretical latency is 160ms, with a practical latency of around 200ms on an L4 GPU.
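
A minimal sketch of the two-level decomposition in point 3, in the spirit of an RQ-Transformer: a large causal transformer runs once per 12.5 Hz frame (1/12.5 Hz = 80 ms, so the 160 ms theoretical latency in point 4 is two such frames), and a small one predicts the frame's codebooks one at a time. All widths, depths, and the teacher-forced layout are illustrative assumptions, not the released architecture:

# Hedged sketch: large Temporal Transformer across frames, small Depth
# Transformer across the codebooks within a frame. Sizes are illustrative.
import torch
import torch.nn as nn

class TwoLevelDecoder(nn.Module):
    def __init__(self, n_codebooks=8, vocab=2048, d_temp=512, d_depth=256):
        super().__init__()
        # One embedding table per codebook; a frame's codebook embeddings are
        # summed into a single vector for the temporal model.
        self.embed = nn.ModuleList(
            nn.Embedding(vocab, d_temp) for _ in range(n_codebooks)
        )
        self.temporal = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_temp, 8, batch_first=True), num_layers=4
        )
        self.depth = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_depth, 4, batch_first=True), num_layers=2
        )
        self.proj = nn.Linear(d_temp, d_depth)
        self.depth_embed = nn.Embedding(vocab, d_depth)
        self.head = nn.Linear(d_depth, vocab)

    def forward(self, codes):
        # codes: (batch, time, n_codebooks) integer audio tokens.
        B, T, K = codes.shape
        x = sum(self.embed[k](codes[:, :, k]) for k in range(K))
        # Temporal dependencies: one causal pass over the frame sequence.
        ctx = self.temporal(
            x, mask=nn.Transformer.generate_square_subsequent_mask(T)
        )
        # Depth dependencies, teacher-forced: position 0 of each frame is the
        # temporal context, positions 1..K-1 embed that frame's previous codes.
        depth_in = torch.cat(
            [self.proj(ctx).unsqueeze(2), self.depth_embed(codes[:, :, :-1])],
            dim=2,
        ).reshape(B * T, K, -1)
        h = self.depth(
            depth_in, mask=nn.Transformer.generate_square_subsequent_mask(K)
        )
        return self.head(h).reshape(B, T, K, -1)  # logits per codebook

logits = TwoLevelDecoder()(torch.randint(0, 2048, (1, 10, 8)))
print(logits.shape)  # torch.Size([1, 10, 8, 2048])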

Model size & inference:

Moshiko/ka are 7.69B param models

bf16 ~16GB VRAM
8-bit ~8GB VRAM
4-bit ~4GB VRAM
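
A back-of-the-envelope check on those figures - weights only; activations, the KV cache, and Mimi add overhead on top, which is why the bf16 number rounds up:

# Weight memory ~= parameter count x bytes per parameter.
params = 7.69e9
for name, bytes_per_param in [("bf16", 2), ("8-bit", 1), ("4-bit", 0.5)]:
    print(f"{name}: ~{params * bytes_per_param / 1e9:.1f} GB of weights")
# bf16: ~15.4 GB, 8-bit: ~7.7 GB, 4-bit: ~3.8 GB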

You can run inference via Candle 🦀, PyTorch, or MLX, depending on your hardware.
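
On the PyTorch side, Mimi itself is also exposed through Hugging Face transformers. A minimal encode/decode round trip - assuming a transformers version with Mimi support (added around v4.45); check the kyutai/mimi model card for the current API:

# Hedged sketch: tokenize audio to Mimi codes and reconstruct it.
import numpy as np
import torch
from transformers import AutoFeatureExtractor, MimiModel

model = MimiModel.from_pretrained("kyutai/mimi")
feature_extractor = AutoFeatureExtractor.from_pretrained("kyutai/mimi")

# One second of silence at Mimi's 24 kHz input rate stands in for real audio.
audio = np.zeros(24000, dtype=np.float32)
inputs = feature_extractor(
    raw_audio=audio,
    sampling_rate=feature_extractor.sampling_rate,
    return_tensors="pt",
)

with torch.no_grad():
    codes = model.encode(inputs["input_values"]).audio_codes  # (batch, codebooks, frames)
    recon = model.decode(codes).audio_values                  # (batch, channels, samples)
print(codes.shape, recon.shape)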

The Kyutai team - @adefossez, @lmz, and colleagues - are cracked AF; they're bringing some serious firepower to the open source/science AI scene. Looking forward to what's next!
updated a Space 4 months ago
updated a model 6 months ago