
BigLAM: BigScience Libraries, Archives and Museums
non-profit
AI & ML interests
🤗 Hugging Face x 🌸 BigScience initiative to create open source community resources for LAMs.
biglam's recent activity

davanstrien posted an update 5 days ago
Hacked together a way to log trl GRPO training completions to a 🤗 dataset repo. This allows you to:
- Track rewards from multiple reward functions
- Treat the completion and rewards from training as a "proper" dataset and do EDA
- Share results for open science
The implementation is super hacky, but I'm curious if people would find this useful.
To push completions to the Hub, you just need two extra parameters:
log_completions=True
log_completions_hub_repo='your-username/repo-name'
Example dataset: davanstrien/test-logs
Colab: https://colab.research.google.com/drive/1wzBFPVthRYYTp-mEYlznLg_e_0Za1M3g
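For anyone who wants to see how those parameters slot into a training run, here is a minimal sketch. Only log_completions and log_completions_hub_repo come from the post (and the hacked TRL fork it describes); the model, dataset, and toy reward function are illustrative placeholders.

```python
# Hedged sketch, assuming the hacked TRL fork from the post exposes both
# logging parameters on GRPOConfig. Model, dataset, and reward are placeholders.
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

def reward_len(completions, **kwargs):
    # Toy reward: prefer completions close to 50 characters.
    return [-abs(50 - len(c)) for c in completions]

dataset = load_dataset("trl-lib/tldr", split="train")

args = GRPOConfig(
    output_dir="grpo-logged",
    log_completions=True,                                # from the post
    log_completions_hub_repo="your-username/repo-name",  # from the post
)

trainer = GRPOTrainer(
    model="Qwen/Qwen2-0.5B-Instruct",
    reward_funcs=reward_len,
    args=args,
    train_dataset=dataset,
)
trainer.train()
```

With those two lines in the config, each batch of completions and its per-reward-function scores would land in the Hub dataset repo for later EDA.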

alielfilali01 posted an update 6 days ago
🚨 Arabic LLM Evaluation 🚨
A few models joined the inceptionai/AraGen-Leaderboard ranking today.
The new Mistral AI model, Saba, is quite impressive, landing in the top 10! Well done @arthurmensch and team.
Sadly, Mistral did not follow its usual open-weights strategy this time; we hope this changes soon and we get the model under a permissive license.
We also added other Mistral models, and apparently we have been sleeping on mistralai/Mistral-Large-Instruct-2411!
Another impressive model that joined the ranking today is ALLaM-AI/ALLaM-7B-Instruct-preview. After a long wait, ALLaM is finally here, and it is IMPRESSIVE given its size!
ALLaM is ranked on OALL/Open-Arabic-LLM-Leaderboard as well.

davanstrien posted an update 9 days ago
Dataset descriptions for trending Hugging Face datasets? Powered by a Smol model
davanstrien/Smol-Hub-tldr

davanstrien posted an update 11 days ago
How do you make 1M+ Hugging Face models & datasets more discoverable?
davanstrien/Smol-Hub-tldr!
I fine-tuned HuggingFaceTB/SmolLM2-360M to generate one-line summaries from a model or dataset README.
Its own self-description?
"A model for generating concise summaries of model & dataset cards from the Hugging Face Hub"
The goal? Make it easier to find the right models and datasets for your specific needs. It's already powering a semantic search for datasets Space.
It's still a WIP, but thanks to @loubnabnl, @anton-l, @eliebak, et al. for cooking up such a nice base model for fine-tuning small, efficient models for specific domains and tasks. 🙏
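As a quick illustration, generating a summary could look like the sketch below. The prompt format is an assumption here (a plain user message carrying the card text); check the model card for the exact template.

```python
# Hedged sketch: assumes davanstrien/Smol-Hub-tldr follows a standard chat
# template and takes the raw card text as the user message.
from transformers import pipeline

summarizer = pipeline("text-generation", model="davanstrien/Smol-Hub-tldr")

card_text = open("README.md").read()  # a model or dataset card
messages = [{"role": "user", "content": card_text}]

out = summarizer(messages, max_new_tokens=64)
# With chat-style input, generated_text holds the full message list;
# the last entry is the model's reply, i.e. the one-line summary.
print(out[0]["generated_text"][-1]["content"])
```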

davanstrien posted an update 12 days ago
Made some significant updates to my 🤗 semantic datasets search app. If you love falling into a wiki black hole, you might like this...
librarian-bots/huggingface-datasets-semantic-search

albertvillanova posted an update 21 days ago
🚀 Introducing @huggingface Open Deep-Research 💥
In just 24 hours, we built an open-source agent that:
✅ Autonomously browses the web
✅ Searches, scrolls & extracts info
✅ Downloads & manipulates files
✅ Runs calculations on data
55% on the GAIA validation set! Help us improve it! 💡
https://huggingface.co/blog/open-deep-research
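For a taste of the underlying stack, here is a pared-down sketch of a browsing agent built with smolagents, the library the project is built on. This is not the benchmark setup from the blog post, just an illustration; the model choice is a placeholder.

```python
# Hedged sketch: a minimal web-search agent with smolagents. The full
# open-deep-research agent (browser, file tools, etc.) lives in the
# smolagents repository examples.
from smolagents import CodeAgent, DuckDuckGoSearchTool, HfApiModel

model = HfApiModel("Qwen/Qwen2.5-Coder-32B-Instruct")  # placeholder model
agent = CodeAgent(tools=[DuckDuckGoSearchTool()], model=model)

# The agent writes and executes Python to search the web and compose an answer.
print(agent.run("How many studio albums did Thelonious Monk release?"))
```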

davanstrien posted an update 27 days ago
Why choose between strong LLM reasoning and efficient models?
Use DeepSeek to generate high-quality training data, then distil that knowledge into ModernBERT (answerdotai/ModernBERT-base) for fast, efficient classification.
Blog post: https://danielvanstrien.xyz/posts/2025/deepseek/distil-deepseek-modernbert.html
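The distillation step could look roughly like the sketch below: fine-tune ModernBERT as a sequence classifier on (text, label) pairs where the labels came from DeepSeek. The dataset repo name and the binary label set are hypothetical; the blog post documents the actual pipeline.

```python
# Hedged sketch of the distillation step. "your-username/deepseek-labelled-data"
# is a hypothetical dataset of texts with LLM-generated labels.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("answerdotai/ModernBERT-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "answerdotai/ModernBERT-base", num_labels=2)

ds = load_dataset("your-username/deepseek-labelled-data", split="train")
ds = ds.map(lambda x: tokenizer(x["text"], truncation=True), batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="modernbert-distilled", num_train_epochs=1),
    train_dataset=ds,
    tokenizer=tokenizer,  # enables padded batching via the default collator
)
trainer.train()
```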

davanstrien posted an update 29 days ago
Updated the ColPali Query Generator Space davanstrien/ColPali-Query-Generator to use Qwen/Qwen2.5-VL-7B-Instruct.
Given an input image, it generates several queries along with explanations to justify them. This approach can generate synthetic data for fine-tuning ColPali models.
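The Space can also be called programmatically, e.g. with gradio_client. The endpoint name and argument list below are guesses; the Space's "Use via API" panel shows the real signature.

```python
# Hedged sketch: querying the Space via gradio_client. "/predict" is a
# hypothetical endpoint name.
from gradio_client import Client, handle_file

client = Client("davanstrien/ColPali-Query-Generator")
result = client.predict(
    handle_file("page.png"),  # a document page image
    api_name="/predict",
)
print(result)
```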

davanstrien posted an update 29 days ago
🌍 Big step for multilingual AI data!
The Hugging Face community has rated educational content in languages spoken by 1.6 billion people! New additions:
• Japanese
• Italian
• Old High German
Learn more and contribute: https://huggingface.co/blog/davanstrien/fineweb2-community
These ratings can help enhance training data for major world languages.

storytracer authored a paper about 1 month ago


librarian-bot updated a dataset about 1 month ago
- Update README.md (#2, opened about 1 month ago by librarian-bot)
- Convert dataset to Parquet (#3, opened about 1 month ago by davanstrien)


davanstrien updated a dataset about 1 month ago
- Convert dataset to Parquet (#1, opened about 1 month ago by davanstrien)


davanstrien updated a dataset about 1 month ago
- Convert dataset to Parquet (#1, opened about 1 month ago by davanstrien)


davanstrien posted an update about 1 month ago
Introducing scandi-fine-web-cleaner (davanstrien/scandi-fine-web-cleaner), the first model trained on FineWeb-C community annotations!
FineWeb2 is a massive multilingual dataset for pre-training language models. Like any web-scale dataset, it contains low-quality content. How can we improve it?
Over the past months, an amazing community of 400+ annotators has been labelling content quality (using Argilla) across 23 languages through the FineWeb-C initiative.
Today, I'm happy to share the first classifier trained on this data.
🔍 What we've built:
- A lightweight classifier that efficiently removes low-quality content
- 90%+ precision demonstrated on Danish & Swedish
- Can process the 43M+ documents in Danish FineWeb2 with minimal compute
🌍 Why this matters: The approach can be reproduced for any of the 23 languages in FineWeb-C (data-is-better-together/fineweb-c). We can improve training data quality at scale without massive compute resources by starting with community annotations and training small, efficient classifiers.
Want to build a classifier for your language? Check out the full blog post with code examples and implementation details: https://danielvanstrien.xyz/posts/2025/FineWeb-c/scandinavian-content-filtering-fineweb.html
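As a starting point, reproducing this for another language could look like the sketch below. The config name, column names, label mapping, and base encoder (xlm-roberta-base is a placeholder multilingual choice) are assumptions; inspect the dataset and see the blog post for the actual recipe and hyperparameters.

```python
# Hedged sketch: train a small quality classifier from FineWeb-C annotations.
# Column names and the label mapping are assumptions; verify against the
# dataset viewer before using.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

ds = load_dataset("data-is-better-together/fineweb-c", "dan_Latn", split="train")

tokenizer = AutoTokenizer.from_pretrained("FacebookAI/xlm-roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "FacebookAI/xlm-roberta-base", num_labels=2)

def encode(example):
    enc = tokenizer(example["text"], truncation=True)
    # Assumed mapping: an annotator label of "None" (no educational value)
    # marks content the cleaner should filter out.
    enc["labels"] = int(example["educational_value_labels"][0] == "None")
    return enc

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="fineweb-c-cleaner", num_train_epochs=1),
    train_dataset=ds.map(encode),
    tokenizer=tokenizer,
)
trainer.train()
```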