Hugging Face
Yu li (Yukkkop)
1 follower · 11 following
AI & ML interests
None yet
Recent Activity
- Upvoted a collection (FuseChat 3.0), about 7 hours ago
- Liked a model (arcee-ai/Virtuoso-Small-v2), about 10 hours ago
- Reacted to mkurman's post with ❤️, 1 day ago
Blurred-Thoughts Supervised Fine-Tuning (BT-SFT) 🤖

Can we teach a model to think completely on its own without reinforcement learning? Actually, yes. We can do straightforward supervised fine-tuning using a relatively simple trick: blurring a part of the CoT thoughts.

But why is this effective? We observed that models differ in their thinking processes, and fine-tuning one model on another model's thoughts (CoT) can be inefficient, often resulting in the model simply memorizing the reasoning rather than learning how to actually think.

I discovered that this process can still be efficient if we clearly indicate when the model should start and stop thinking, and uncover only a part of the CoT and the expected answer, blurring the rest of the CoT. This approach allows the model to learn only a portion of the thought process while still arriving at the expected answer.

To see this in action, check out my experimental BT-SFT of the meditsolutions/Llama-3.2-SUN-2.5B-chat model, fine-tuned on 151 million tokens from the Magpie-Align/Magpie-Reasoning-V2-250K-CoT-Deepseek-R1-Llama-70B dataset. Enjoy! 🚀

PS. If you were curious enough to read this, leave me a comment. It's always nice to chat with open-minded and intelligent people.
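The "blurring" idea described above can be sketched at the loss-masking level. The following is a minimal illustration, not the author's actual BT-SFT code: it assumes the sequence contains special start/stop-thinking marker tokens, and it uses the common convention (e.g. in Hugging Face `transformers` trainers) of setting a label to -100 to exclude that token from the loss. The function name, the marker-token setup, and the random span-selection strategy are all my assumptions.

```python
import random

IGNORE_INDEX = -100  # label value skipped by common LM loss implementations


def blur_cot_labels(input_ids, think_start_id, think_end_id,
                    blur_frac=0.5, seed=0):
    """Return a labels list in which a random fraction of the
    chain-of-thought tokens (those strictly between the start- and
    stop-thinking markers) is "blurred", i.e. masked out of the loss,
    while the markers and the answer tokens stay fully supervised.
    Hypothetical helper illustrating the idea; real BT-SFT may differ.
    """
    rng = random.Random(seed)
    labels = list(input_ids)

    # Locate the CoT span between the thinking markers (exclusive).
    start = input_ids.index(think_start_id) + 1
    end = input_ids.index(think_end_id)
    cot_positions = list(range(start, end))

    # Mask a random subset of CoT positions so the model only learns
    # part of the thought process, not the teacher's full reasoning.
    n_blur = int(len(cot_positions) * blur_frac)
    for pos in rng.sample(cot_positions, n_blur):
        labels[pos] = IGNORE_INDEX
    return labels


# Toy usage: 100 = <think>, 101 = </think>, 42 = the answer token.
ids = [1, 100, 10, 11, 12, 13, 101, 42, 2]
labels = blur_cot_labels(ids, think_start_id=100, think_end_id=101)
```

With `blur_frac=0.5`, half of the four CoT tokens here get label -100; the answer token and both markers remain supervised, which matches the post's point that the model must still arrive at the expected answer.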
Organizations
None yet
models (1)
- Yukkkop/Nud, updated Oct 5, 2024
datasets
None public yet