Is SFT (non-RL) distillation really this good on a sub-100B model?
#2 opened by KrishnaKaasyap
I'm amazed to see that just SFT makes Llama 3.3 70B look like a SOTA model!
Imagine what Meta is cooking with 200k H100s for Llama 4!
Smol models using R1 reasoning chains are awesome!
Is the dataset used to distill these models available anywhere for replication? I couldn't find it in the official documentation!
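For anyone wanting to try replicating the recipe, here's a minimal sketch of what SFT on reasoning traces could look like with TRL's `SFTTrainer`. The dataset name and hyperparameters below are placeholders, since the actual distillation data hasn't been released:

```python
# A minimal sketch of SFT-only distillation on reasoning traces.
# Assumes a chat-formatted dataset of prompts paired with R1-style
# reasoning chains + answers; "your-org/r1-reasoning-traces" is a
# hypothetical placeholder, not the official (unreleased) data.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("your-org/r1-reasoning-traces", split="train")

training_args = SFTConfig(
    output_dir="llama-3.3-70b-r1-distill",
    num_train_epochs=2,                 # illustrative values only
    per_device_train_batch_size=1,
    gradient_accumulation_steps=16,
    learning_rate=1e-5,
    bf16=True,
)

trainer = SFTTrainer(
    model="meta-llama/Llama-3.3-70B-Instruct",  # the student model
    args=training_args,
    train_dataset=dataset,  # teacher (R1) outputs serve as targets
)
trainer.train()
```

The point is just that the student never sees an RL objective: it does plain next-token SFT on the teacher's reasoning chains.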
I wonder why they didn't distill Qwen 2.5 72B, since it's even better at reasoning tasks than Llama 3.3 70B. That would be really nice.
I guess it might be because of the license.