
alkinun

AtAndDev

AI & ML interests

LLMs, Alignment, Merging, Unsloth, DPO, SFT, ORPO, SPIN...


Organizations

ESPnet, CVPR Demo Track, BigScience Biomedical Datasets, ONNXConfig for all, Gradio-Themes-Party, video-p2p-library, Gradio-Blocks-Party, scikit-learn, lora concepts library, OpenBuddy Community, Open-Source AI Meetup, ECCV 2022, Kornia AI, Tune a video concepts library, SIGGRAPH 2022, Interspeech2022, Stable Diffusion concepts library, SIGGRAPH Asia 2022 Demos, Stable Diffusion Dreambooth Concepts Library, Musika, Blog-explorers, OpenSky, ICCV2023, ICML2023, huggingPartyParis, Multi🤖Transformers, Team Tonic, That Time I got Reincarnated as a Hugging Face Organization, ZeroGPU Explorers, Pirates Party for all software open source, MLX Community, recipe research, Narra, Social Post Explorers, Cognitive Computations, M4-ai, Spinner-GPT-4, Dev Mode Explorers, Stable Diffusion Community (Unofficial, Non-profit), Hugging Face Discord Community, Nerdy Face, OpenEndedLM, open/ acc, Data Is Better Together Contributor, None yet

AtAndDev's activity

posted an update 5 days ago
everywhere i go i see his face
reacted to prithivMLmods's post with 😎🔥 5 days ago
Deepswipe by
.
.
.
. Deepseek 🐬🗿

Everything is now in recovery. 📉📈
reacted to onekq's post with 👍 9 days ago
So 🐋DeepSeek🐋 has hit the mainstream media. But it has been a star in our little cult for at least six months. Its meteoric success was not overnight; it was two years in the making.

To trace their history, just look at their 🤗 repo: https://huggingface.co/deepseek-ai

* End of 2023: they launched their first model (pretrained by themselves) following the Llama 2 architecture
* June 2024: v2 (MoE architecture) surpassed Gemini 1.5, though still behind Mistral
* September 2024: v2.5 surpassed GPT-4o mini
* December 2024: v3 surpassed GPT-4o
* Now: R1 has surpassed o1

Most importantly, if you think DeepSeek's success is singular and unrivaled, that's WRONG. The following models are also near or at the o1 bar.

* Minimax-01
* Kimi k1.5
* Doubao 1.5 pro
replied to mitkox's post 9 days ago

I believe SGLang would be even faster, but I'm not sure whether it supports non-NVIDIA devices.

reacted to chansung's post with 🔥 11 days ago
A simple summary of DeepSeek-R1 from DeepSeek AI

The RL stage is very important.
↳ However, it is difficult to create a truly helpful AI for people solely through RL.
↳ So they applied a learning pipeline consisting of four stages: providing a good starting point, reasoning RL, SFT, and safety RL, and achieved performance comparable to o1.
↳ Simply fine-tuning other open models with the data generated by R1-Zero (distillation) resulted in performance comparable to o1-mini; a minimal sketch of that idea follows below.

Of course, this is just a brief overview and may not be of much help. All models are accessible on Hugging Face, and the paper can be read through the GitHub repository.


Model: https://huggingface.co/deepseek-ai
Paper: https://github.com/deepseek-ai/DeepSeek-R1
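
To make the distillation point above concrete, here is a minimal sketch of supervised fine-tuning a small open model on R1-generated reasoning traces using TRL's SFTTrainer. The dataset file, its field names, and the base model are hypothetical placeholders, not DeepSeek's actual setup.

```python
# Hypothetical sketch: distill R1-style reasoning into a small open model via SFT.
# File name, field names, and base model are placeholders, not DeepSeek's pipeline.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Assumed JSONL of R1-generated traces: {"prompt": ..., "r1_response": ...}
dataset = load_dataset("json", data_files="r1_traces.jsonl", split="train")

def to_text(example):
    # Concatenate the prompt and R1's full reasoning trace into one training string.
    return {"text": example["prompt"] + "\n" + example["r1_response"]}

dataset = dataset.map(to_text)

trainer = SFTTrainer(
    model="Qwen/Qwen2.5-7B",  # any small open base model works here
    train_dataset=dataset,
    args=SFTConfig(output_dir="r1-distilled"),
)
trainer.train()
```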
replied to nroggendorff's post 11 days ago
reacted to ezgikorkmaz's post with 👀🚀 11 days ago
reacted to sharpenb's post with 🚀 11 days ago
replied to sharpenb's post 11 days ago

That non-centered emoji...
But cool blog.

reacted to sometimesanotion's post with 👍🔥 12 days ago
I've managed a #1 average score of 41.22% for 14B-parameter models on the Open LLM Leaderboard. As of this writing, sometimesanotion/Lamarck-14B-v0.7 is #8 among all models up to 70B parameters.

It took a custom toolchain around Arcee AI's mergekit to manage the complex merges, gradients, and LoRAs required to make this happen. I really like seeing the features of many quality finetunes in one solid generalist model. A toy sketch of the basic merge operation follows below.
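
For readers unfamiliar with merging, here is a toy sketch of one operation mergekit automates: SLERP (spherical linear interpolation) between the weights of two checkpoints. This is an illustrative assumption about the technique, not sometimesanotion's actual toolchain or mergekit's internals.

```python
# Toy sketch of SLERP weight merging (illustration only, not mergekit internals).
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherically interpolate between two weight tensors at fraction t."""
    a_flat, b_flat = a.flatten().float(), b.flatten().float()
    a_dir = a_flat / (a_flat.norm() + eps)
    b_dir = b_flat / (b_flat.norm() + eps)
    # Angle between the two weight vectors on the unit sphere.
    omega = torch.acos(torch.clamp(torch.dot(a_dir, b_dir), -1.0, 1.0))
    if omega.abs() < eps:  # nearly parallel: fall back to linear interpolation
        merged = (1 - t) * a_flat + t * b_flat
    else:
        so = torch.sin(omega)
        merged = (torch.sin((1 - t) * omega) / so) * a_flat \
               + (torch.sin(t * omega) / so) * b_flat
    return merged.reshape(a.shape).to(a.dtype)

def merge_state_dicts(sd_a: dict, sd_b: dict, t: float = 0.5) -> dict:
    # Merge two same-architecture models parameter by parameter.
    return {name: slerp(t, sd_a[name], sd_b[name]) for name in sd_a}
```

Real recipes like Lamarck's layer gradients, per-layer interpolation weights, and LoRA extractions on top of primitives like this; managing that complexity is what mergekit configs and a custom toolchain are for.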
posted an update 12 days ago
Deepseek gang on fire fr fr
reacted to onekq's post with 🔥 12 days ago
This is historic. 🎉

DeepSeek 🐋R1🐋 surpassed OpenAI 📍o1📍 on the dual leaderboard. What a year for open source!

onekq-ai/WebApp1K-models-leaderboard
reacted to onekq's post with 🔥 12 days ago
πŸ‹DeepSeek πŸ‹ is the real OpenAI 😯
reacted to chansung's post with 🚀👍 12 days ago
reacted to chansung's post with πŸ‘ 12 days ago
A simple summary of Evolving Deeper LLM Thinking (Google DeepMind)

The process starts by posing a question.
1) The LLM generates initial responses.
2) These generated responses are evaluated according to specific criteria (program-based checker).
3) The LLM critiques the evaluated results.
4) The LLM refines the responses based on the evaluation, critique, and original responses.

The refined response is then fed back into step 2). If it meets the criteria, the process ends; otherwise, the algorithm generates more responses based on the refined ones (discarding some, keeping some, and potentially merging others). A minimal sketch of this loop appears below.

Through this process, the method demonstrated excellent performance on complex scheduling problems (travel planning, meeting scheduling, etc.). It's a viable approach for finding highly effective solutions in specific scenarios.

However, there are two major drawbacks:
🤔 An excessive number of API calls is required. (While the cost might not be very high, it leads to significant latency.)
🤔 The evaluator is program-based. (This limits its use as a general method. It could be adapted to use LLM-as-a-Judge, but that would introduce additional API costs for evaluation.)

https://arxiv.org/abs/2501.09891
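
Here is a minimal sketch of the generate-evaluate-critique-refine loop described in this post, assuming a generic `llm` callable and a `program_checker` function. Both names are hypothetical placeholders, and the paper's candidate selection and merging strategy is simplified away.

```python
# Minimal sketch of the generate-evaluate-critique-refine loop (not the
# paper's implementation; `llm` and `program_checker` are placeholders).
from typing import Callable, List, Optional, Tuple

def evolve_solutions(
    question: str,
    llm: Callable[[str], str],                           # prompt -> response
    program_checker: Callable[[str], Tuple[bool, str]],  # response -> (passes, feedback)
    num_candidates: int = 4,
    max_rounds: int = 8,
) -> Optional[str]:
    # 1) Generate initial candidate responses.
    candidates: List[str] = [llm(question) for _ in range(num_candidates)]

    for _ in range(max_rounds):
        refined: List[str] = []
        for response in candidates:
            # 2) Evaluate against the program-based checker.
            passes, feedback = program_checker(response)
            if passes:
                return response  # criteria met: stop.
            # 3) Ask the LLM to critique the evaluated result.
            critique = llm(
                f"Question: {question}\nResponse: {response}\n"
                f"Checker feedback: {feedback}\nCritique this response."
            )
            # 4) Refine using the evaluation, critique, and original response.
            refined.append(llm(
                f"Question: {question}\nResponse: {response}\n"
                f"Critique: {critique}\nWrite an improved response."
            ))
        # The paper also discards, keeps, and merges candidates here; simplified.
        candidates = refined

    return None  # no candidate met the criteria within the call budget
```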
reacted to JingzeShi's post with 🔥 12 days ago