Shyam Sunder Kumar

theainerd

AI & ML interests

Natural Language Processing

Recent Activity

Organizations

Neuropark · Speech Recognition Community Event Version 2 · Open-Source AI Meetup · Social Post Explorers · Hugging Face Discord Community

theainerd's activity

reacted to merve's post with 🚀 about 16 hours ago
smolagents can see 🔥
we just shipped vision support to smolagents 🤗 agentic computers FTW

you can now:
💻 let the agent fetch images dynamically (e.g. an agentic web browser)
📑 pass images when initializing the agent (e.g. chatting with documents, filling forms automatically, etc.)
with only a few lines of code changed! 🤯
you can use transformers models locally (like Qwen2VL) OR plug in your favorite multimodal inference provider (GPT-4o, Anthropic & co.) 🤠

read our blog http://hf.co/blog/smolagents-can-see
upvoted an article about 16 hours ago

We now support VLMs in smolagents!

reacted to chansung's post with 👍 5 days ago
A simple summary of Evolving Deeper LLM Thinking (Google DeepMind)

The process starts by posing a question.
1) The LLM generates initial responses.
2) These generated responses are evaluated according to specific criteria (program-based checker).
3) The LLM critiques the evaluated results.
4) The LLM refines the responses based on the evaluation, critique, and original responses.

The refined response is then fed back into step 2). If it meets the criteria, the process ends. Otherwise, the algorithm generates more responses based on the refined ones (with some being discarded, some remaining, and some responses potentially being merged).
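The loop above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: every function here is a hypothetical stand-in (in a real system, `generate`, `critique`, and `refine` would call an LLM, and `evaluate` would be the program-based checker), and the toy criterion is simply "the response must contain the word 'plan'".

```python
# Hypothetical sketch of the generate -> evaluate -> critique -> refine loop.

def generate(n=3):
    # Stand-in for the LLM generating initial responses.
    return ["draft response"] * n

def evaluate(response):
    # Program-based checker: returns the list of violated criteria.
    issues = []
    if "plan" not in response:
        issues.append("missing the word 'plan'")
    return issues

def critique(response, issues):
    # Stand-in for the LLM critiquing the evaluation results.
    return "Fix: " + "; ".join(issues)

def refine(response, issues, feedback):
    # Stand-in for the LLM refining the response based on the
    # evaluation, critique, and original response.
    return response + " with a plan"

def evolve(max_iters=5):
    responses = generate()
    for _ in range(max_iters):
        next_responses = []
        for r in responses:
            issues = evaluate(r)
            if not issues:  # meets all criteria: stop
                return r
            feedback = critique(r, issues)
            next_responses.append(refine(r, issues, feedback))
        # The paper discards, keeps, or merges refined responses here;
        # this sketch simply keeps them all.
        responses = next_responses
    return responses[0]

print(evolve())  # -> "draft response with a plan"
```

The nested loop also makes the first drawback below concrete: each iteration costs one checker call plus two LLM calls per surviving response.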

Through this process, the method demonstrated excellent performance on complex scheduling problems (travel planning, meeting scheduling, etc.). It's a viable way to find highly effective solutions in specific scenarios.

However, there are two major drawbacks:
🤔 An excessive number of API calls is required. (While the cost might not be very high, it leads to significant latency.)
🤔 The evaluator is program-based. (This limits its use as a general method. It could potentially be replaced with an LLM-as-Judge, but that would introduce additional API costs for evaluation.)

https://arxiv.org/abs/2501.09891
replied to chansung's post 6 days ago
reacted to chansung's post with 👍 6 days ago
A simple summary of DeepSeek-R1 from DeepSeek AI

The RL stage is very important.
↳ However, it is difficult to create a truly helpful AI for people through RL alone.
↳ So they applied a four-stage training pipeline, providing a good starting point, then reasoning RL, SFT, and safety RL, and achieved performance comparable to o1.
↳ Simply fine-tuning other open models on the data generated by R1 (distillation) resulted in performance comparable to o1-mini.

Of course, this is just a brief overview and may not be of much help. All models are accessible on Hugging Face, and the paper can be read through the GitHub repository.


Models: https://huggingface.co/deepseek-ai
Paper: https://github.com/deepseek-ai/DeepSeek-R1
reacted to onekq's post with 🔥 6 days ago
πŸ‹DeepSeek πŸ‹ is the real OpenAI 😯