Lewis Tunstall (lewtun)

AI & ML interests

LLMs, LLMs, LLMs

Organizations

Hugging Face, AutoNLP, Natural Language Processing with Transformers, BigScience Workshop, Hugging Face Internal Testing Organization, Ought, Hugging Face Course, Testing Benchmarks on the Hub, NLP en ES, GEM benchmark, SetFit, GEM benchmark submissions, Benchmarks Hosting, ALPS test, Evaluation datasets, Deep Learning for Particle Physicists, fast.ai community, trl internal testing, DreamBooth Hackathon, SomosNLP, Marsyas (Music Analysis, Retrieval and Synthesis for Audio Signals), ONNXConfig for all, HF Course Demos, How to teach Hugging Face?, Jet Universe, Evaluation on the Hub, The ML Landscape of Top Taggers, HuggingFaceM4, HF Canonical Model Maintainers, TRL, BigCode, Hugging Face H4, Inference Endpoints, Hugging Face OSS Metrics, BigCode Data, Reading Group, Hugging Face H4 Community, Hugging Face TB Research, Hugging Face Smol Cluster, Open LLM Leaderboard, EPFL LLM Team, H4 Alignment Handbook, h4-argilla-collab, ZeroGPU Explorers, Project-Numina, ORPO Explorers, Kato, Distillation Hugs, Hugging Face Discord Community, Data Agents, nltpt, IOPO Experiments, Hugging Face FineVideo, Reliable Agents, Hugging Face Science, HF CMU Collab

lewtun's activity

posted an update 9 days ago
We outperform Llama 70B with Llama 3B on hard math by scaling test-time compute 🔥

How? By combining step-wise reward models with tree search algorithms :)

We show that smol models can match or exceed the performance of their much larger siblings when given enough "time to think"

We're open-sourcing the full recipe and sharing a detailed blog post.

In our blog post we cover:

📈 Compute-optimal scaling: How we implemented DeepMind's recipe to boost the mathematical capabilities of open models at test time.

🎄 Diverse Verifier Tree Search (DVTS): An unpublished extension we developed to the verifier-guided tree search technique. This simple yet effective method improves diversity and delivers better performance, particularly at large test-time compute budgets (a minimal sketch of the underlying idea follows this list).

🧭 Search and Learn: A lightweight toolkit for implementing search strategies with LLMs, built for speed with vLLM.
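As a rough illustration of the verifier-scoring idea these methods build on (not the exact implementation from the blog post or repo), weighted best-of-N with a process reward model looks something like this; `generate` and `prm_score` are hypothetical stand-ins for your own model calls, e.g. vLLM generation and a step-wise reward model:

```python
# Hedged sketch of weighted best-of-N with a process reward model (PRM).
# `generate` and `prm_score` are placeholders, not part of the
# search-and-learn API.
from collections import defaultdict

def weighted_best_of_n(prompt, generate, prm_score, n=16):
    """Sample n candidate solutions and return the final answer whose
    candidates accumulate the highest total PRM score."""
    candidates = [generate(prompt) for _ in range(n)]  # each -> (solution_text, final_answer)
    totals = defaultdict(float)
    for text, answer in candidates:
        totals[answer] += prm_score(prompt, text)      # verifier-weighted vote
    return max(totals, key=totals.get)
```

Tree-search methods like DVTS refine this further by scoring intermediate steps and only expanding the most promising branches.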

Here are the links:

- Blog post: HuggingFaceH4/blogpost-scaling-test-time-compute

- Code: https://github.com/huggingface/search-and-learn

Enjoy!
reacted to julien-c's post with 🤗❤️🔥 14 days ago
After some heated discussion 🔥, we clarify our intent re. storage limits on the Hub

TL;DR:
- public storage is free and, barring blatant abuse, unlimited. We do ask that you consider upgrading to PRO and/or Enterprise Hub if possible
- private storage is paid above a significant free tier (1 TB if you have a paid account, 100 GB otherwise)

docs: https://huggingface.co/docs/hub/storage-limits

We optimize our infrastructure continuously to scale our storage for the coming years of growth in machine learning, to the benefit of the community 🔥

cc: @reach-vb @pierric @victor and the HF team
replied to dvilasuero's post 6 months ago

Welcome to the team @dvilasuero and Argilla! It's been really nice collaborating with you on various projects around LLM alignment and I'm excited to see what we'll build next together!

reacted to dvilasuero's post with 🤝❤️🤗🚀🔥 6 months ago
Today is a huge day in Argilla's history. We couldn't be more excited to share this with the community: we're joining Hugging Face!

We're embracing a larger mission, becoming part of a brilliant and kind team, and sharing a vision about the future of AI.

Over the past year, we've been collaborating with Hugging Face on countless projects: becoming a launch partner of Docker Spaces, empowering the community to clean Alpaca translations into Spanish and other languages, launching argilla/notus-7b-v1 building on Zephyr's learnings, running the Data is Better Together initiative with hundreds of community contributors, and releasing argilla/OpenHermesPreferences, one of the largest open preference-tuning datasets.

After more than 2,000 Slack messages and over 60 people collaborating for over a year, it already felt like we were part of the same team, pushing in the same direction. After a week of the smoothest transition you can imagine, we're now the same team.

To those of you who've been following us, this won't be a huge surprise, but it will be a big deal in the coming months. This acquisition means we'll double down on empowering the community to build and collaborate on high-quality datasets, we'll bring full support for multimodal datasets, and we'll be in a better place to collaborate with the Open Source AI community. For enterprises, this means that the Enterprise Hub will unlock highly requested features like single sign-on and integration with Inference Endpoints.

As a founder, I am proud of the Argilla team. We're now part of something bigger and a larger team, but with the same values, culture, and goals. Grateful to have shared this journey with my beloved co-founders Paco and Amélie.

Finally, huge thanks to the Chief Llama Officer @osanseviero for sparking this and being such a great partner during the acquisition process.

Would love to answer any questions you have, so feel free to add them below!
replied to BramVanroy's post 7 months ago

I am not aware of any public ablations that validate this, but I suspect it has become less important for chat models, where one cares more about performance under human evaluation than academic benchmarks like MMLU (which are OK for selecting base models, but less so for chat/instruct ones).

reacted to JustinLin610's post with 🚀🔥 8 months ago
Finally, Qwen1.5-110B is out! With weights and demo!

Blog: https://qwenlm.github.io/blog/qwen1.5-110b/
Demo: Qwen/Qwen1.5-110B-Chat-demo
Base: Qwen/Qwen1.5-110B
Chat: Qwen/Qwen1.5-110B-Chat

This model has some specific features:
* GQA (grouped-query attention)
* 32K token context length
* Multilingual support

We feel good about its performance on benchmarks, including those for base models and chat models, but we still need more of your testing and feedback to help us know its capabilities and limitations!

Additionally, the base model has not learned the ChatML tokens, so if you use the ChatML format with it, be careful!
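For the chat variants, the usual transformers chat-template flow applies; here is a minimal sketch (generic transformers usage, not Qwen-specific documentation):

```python
# Minimal sketch: ChatML-style prompting via the tokenizer's chat template.
# The chat model ships a template; the base model has NOT learned these
# tokens, hence the caveat above.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-110B-Chat")
messages = [{"role": "user", "content": "Hello!"}]
prompt = tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)  # ChatML markup: <|im_start|>user ... <|im_end|> etc.
```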

Enjoy and stay tuned for Qwen2!



reacted to Sentdex's post with 👍 8 months ago
Benchmarks!

I have lately been diving deep into the main benchmarks we all use to evaluate and compare models.

If you've never actually looked under the hood for how benchmarks work, check out the LM eval harness from EleutherAI: https://github.com/EleutherAI/lm-evaluation-harness
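If you want to poke at it yourself, here is a rough sketch of running the harness via its Python API (interface per recent versions of lm-eval; check the repo's README for the current one):

```python
# Hedged sketch: scoring a model with EleutherAI's lm-evaluation-harness
# (pip install lm-eval). The model and task are illustrative choices.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",                    # Hugging Face transformers backend
    model_args="pretrained=gpt2",  # any Hub model id works here
    tasks=["hellaswag"],
    batch_size=8,
)
print(results["results"]["hellaswag"])
```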

+ check out the benchmark datasets; you can find the ones for the LLM leaderboard on the About tab here: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard, then click a dataset and actually peek at the data that comprises these benchmarks.

It feels to me like benchmarks only represent a tiny portion of what we actually use and want LLMs for, and I doubt I'm alone in that sentiment.

Beyond this, the actual evaluations of responses from models are extremely strict and often rely on rudimentary NLP techniques, when at this point we have LLMs that are more than capable of evaluating and scoring responses.

It feels like we've made great strides in the quality of LLMs themselves, but almost no change in the quality of how we benchmark.

If you have any ideas for how benchmarks could be a better assessment of an LLM, or know of good research papers that tackle this challenge, please share!
reacted to VictorSanh's post with ❤️🚀🔥 8 months ago
Glad to see Idefics2 making its way into the awesome OpenVLM Leaderboard, which ranks VLMs. 🏆
2nd in its category (<10B parameters and open weights)!

While InternLM-XComposer2 uses proprietary data, Idefics2 is built solely using openly available data.

Leaderboard: opencompass/open_vlm_leaderboard
Model: HuggingFaceM4/idefics2-8b
posted an update 9 months ago
Introducing Zephyr 141B-A35B:

HuggingFaceH4/zephyr-orpo-141b-A35b-v0.1

Yesterday, Mistral released their latest base model (via magnet link of course 😅) and the community quickly converted it to transformers format and pushed it to the Hub: mistral-community/Mixtral-8x22B-v0.1

Early evals of this model looked extremely strong, so we teamed up with Argilla and KAIST AI to cook up a Zephyr recipe with a few new alignment techniques that came out recently:

๐Ÿง‘โ€๐Ÿณ Align the base model with Odds Ratio Preference Optimisation (ORPO). This novel algorithm developed by @JW17 and @nlee-208 and @j6mes and does not require an SFT step to achieve high performance and is thus much more computationally efficient than methods like DPO and PPO.

🦫 Use a brand-new dataset of 7k high-quality, multi-turn preferences developed by our friends at Argilla. To create this dataset, they took the excellent Capybara SFT dataset from @LDJnr LDJnr/Capybara and converted it into a preference dataset by augmenting the final turn with responses from new LLMs that were then ranked by GPT-4.

What we find especially neat about this approach is that training on 7k samples only takes ~1.3h on 4 H100 nodes, yet produces a model that is very strong on chat benchmarks like IFEval and BBH.
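In code, the training loop is essentially TRL's ORPOTrainer; here is a rough, hypothetical sketch (TRL 0.8-era API; the dataset name and hyperparameters are illustrative stand-ins, not the exact recipe, which lives in the alignment handbook):

```python
# Hedged sketch of ORPO training with TRL; the dataset name, model loading
# details, and hyperparameters are illustrative, not the Zephyr 141B recipe.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

model_id = "mistral-community/Mixtral-8x22B-v0.1"  # the base model from the post
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# A preference dataset with prompt/chosen/rejected pairs, as ORPO expects
# (assumed name for Argilla's 7k Capybara preference dataset).
dataset = load_dataset("argilla/distilabel-capybara-dpo-7k-binarized", split="train")

config = ORPOConfig(
    output_dir="zephyr-orpo",
    beta=0.1,  # weight of the odds-ratio term (lambda in the ORPO paper)
    num_train_epochs=1,
)
trainer = ORPOTrainer(model=model, args=config, train_dataset=dataset, tokenizer=tokenizer)
trainer.train()
```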

Kudos to @alvarobartt, @JW17, and @nlee-208 for this very nice and fast-paced collab!

For more details on the paper and dataset, check out our collection: HuggingFaceH4/zephyr-orpo-6617eba2c5c0e2cc3c151524
reacted to trisfromgoogle's post with ❤️🔥 9 months ago
Very excited to share the first two official Gemma variants from Google! Today at Google Cloud Next, we announced cutting-edge models for code and research!

First, google/codegemma-release-66152ac7b683e2667abdee11 - a new set of code-focused Gemma models at 2B and 7B, in both pretrained and instruction-tuned variants. These exhibit outstanding performance on academic benchmarks and (in my experience) real-life usage. Read more in the excellent Hugging Face blog: https://huggingface.co/blog/codegemma

Second, google/recurrentgemma-release-66152cbdd2d6619cb1665b7a, which is based on the outstanding Google DeepMind research in Griffin: https://arxiv.org/abs/2402.19427. RecurrentGemma is a research variant that enables higher throughput and vastly improved memory usage. We are excited about new architectures, especially in the lightweight Gemma sizes, where innovations like RecurrentGemma can scale modern AI to many more use cases.

For details on the launches of these models, check out our launch blog -- and please do not hesitate to send us feedback. We are excited to see what you build with CodeGemma and RecurrentGemma!

Huge thanks to the Hugging Face team for helping ensure that these models work flawlessly in the Hugging Face ecosystem at launch!