We applied the same data-driven approach that led to SOTA English performance in FineWeb to thousands of languages.
FineWeb2 has 8TB of compressed text data and outperforms other multilingual datasets in our experiments.
The dataset is released under the permissive ODC-By 1.0 license, and the code to reproduce it and our evaluations is public.
We will announce a big community project very soon, and we are working on a blog post walking you through the entire dataset creation process. Stay tuned!
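In the meantime, if you want to poke at the data, here is a minimal sketch of streaming a slice of FineWeb2 with the datasets library. The repo id and the per-language config name below are assumptions, so check the dataset card for the exact values.

```python
# Minimal sketch: stream a few FineWeb2 documents without downloading the full dump.
# NOTE: the repo id and the language config name are assumptions; check the dataset card.
from datasets import load_dataset

fw2 = load_dataset(
    "HuggingFaceFW/fineweb-2",   # assumed repo id
    name="fra_Latn",             # assumed per-language config (French, Latin script)
    split="train",
    streaming=True,              # avoids materializing terabytes on disk
)

for i, doc in enumerate(fw2):
    print(doc["text"][:200])     # "text" column assumed, as in the original FineWeb
    if i == 2:
        break
```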
How do I test an LLM for my unique needs? If you work in finance, law, or medicine, generic benchmarks are not enough. This blog post uses Argilla, Distilabel, and Lighteval to generate an evaluation dataset and evaluate models.
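The core idea is simple enough to sketch without the tooling: build a small set of domain questions with reference answers, then score a model's answers against them. The snippet below is a toy illustration in plain Python with made-up data and a hypothetical `ask_model` stand-in; it is not the Argilla/Distilabel/Lighteval pipeline from the blog post.

```python
# Toy illustration of a domain-specific eval set and exact-match scoring.
# `ask_model` is a hypothetical stand-in for whatever model client you use;
# the blog post builds the dataset with Argilla/Distilabel and scores with Lighteval.

finance_evals = [
    {"question": "What does EBITDA stand for?",
     "reference": "earnings before interest, taxes, depreciation, and amortization"},
    {"question": "Is a bond's price above par when its coupon rate exceeds its yield? (yes/no)",
     "reference": "yes"},
]

def ask_model(question: str) -> str:
    # Replace with a real model call (an inference API, a local pipeline, etc.).
    return "yes"

def exact_match(prediction: str, reference: str) -> bool:
    return prediction.strip().lower() == reference.strip().lower()

score = sum(exact_match(ask_model(ex["question"]), ex["reference"]) for ex in finance_evals)
print(f"exact match: {score}/{len(finance_evals)}")
```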
[New crazy blog post alert] We are releasing an extensive blog post on the science of creating high-quality web-scale datasets, detailing all the steps and learnings behind our recent 15-trillion-token FineWeb release.
Inspired by the distill.pub interactive-graphics papers, we set out to write the most extensive, enjoyable, and in-depth tech report we could, so prepare for a 45-min read with interactive graphics and all.
And that's not all: in this article we also introduce FineWeb-Edu, a 1.3T-token filtered subset of Common Crawl containing only web pages with very high educational content. To our knowledge, FineWeb-Edu outperforms all openly released web-scale datasets by a significant margin on knowledge- and reasoning-intensive benchmarks like MMLU, ARC, and OpenBookQA.
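For a rough feel of what "very high educational content" filtering looks like in practice, here is a sketch that scores a page with the educational-quality classifier released alongside the dataset. The model id, the 0-5 regression scale, and the threshold are assumptions based on the release, so double-check the model card.

```python
# Sketch: score a web page's educational value with a small classifier,
# then keep it only if the score clears a threshold (FineWeb-Edu style).
# The model id, the 0-5 scale, and the threshold are assumptions; check the model card.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "HuggingFaceFW/fineweb-edu-classifier"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

text = "Photosynthesis converts light energy into chemical energy stored in glucose..."
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)

with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()  # single regression logit, roughly 0-5

print(f"educational score: {score:.2f}")
if score >= 3:   # example threshold; the dataset card documents the actual cutoff
    print("keep page")
```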
We also make a number of surprising observations on the "quality" of the internet itself, which may challenge some common assumptions about web data (not saying more, I'll let you draw your own conclusions ;)
The most exciting thing here? The mistralai/Mixtral-8x22B-Instruct-v0.1 model took first place among pretrained models with an impressive average score of 79.15! Not far behind, Mixtral-8x22B-v0.1 takes second place with an average score of 74.47. Well done, Mistral AI!
The second piece of news is that the CohereForAI/c4ai-command-r-plus model in 4-bit quantization got a great average score of 70.08. Cool stuff, Cohere! (I also have a screenshot for this one, don't miss it.)
The last piece of news might seem small but is still significant: the Leaderboard frontpage now supports Python 3.12.1. This means we're on our way to speeding up the Leaderboard's performance!
If you have any comments or suggestions, feel free to tag me on X (Twitter) as well, and I'll try to help: [at]ailozovskaya
In basic chatbots, errors are annoyances. In medical LLMs, errors can have life-threatening consequences.
It's therefore vital to benchmark and track advances in medical LLMs before even thinking about deployment.
This is why a small research team introduced a medical LLM leaderboard, to get reproducible and comparable results between LLMs, and allow everyone to follow advances in the field.
Contamination-free code evaluations with LiveCodeBench!
LiveCodeBench is a new leaderboard, which contains:
- complete code evaluations (code generation, self-repair, code execution, tests)
- my favorite feature: problem selection by publication date
This feature means you can get model scores averaged only over problems that are new and therefore outside the training data. This means... contamination-free code evals!
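The date-based selection is easy to picture with a small sketch: keep only problems published after the model's training cutoff before averaging, so nothing the model could have seen during training counts toward its score. This is plain Python with made-up data, not LiveCodeBench's own code.

```python
# Sketch of contamination-free averaging: only score problems published
# after the model's training cutoff. Problem names, dates, and results are made up.
from datetime import date

problems = [
    {"id": "two-sum-variant", "published": date(2023, 5, 1),  "solved": True},
    {"id": "graph-repair",    "published": date(2024, 2, 10), "solved": False},
    {"id": "interval-merge",  "published": date(2024, 4, 3),  "solved": True},
]

training_cutoff = date(2023, 12, 31)   # hypothetical model training cutoff

fresh = [p for p in problems if p["published"] > training_cutoff]
pass_rate = sum(p["solved"] for p in fresh) / len(fresh)
print(f"{len(fresh)} post-cutoff problems, pass rate {pass_rate:.0%}")
```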
Is it time for the open-source AI robot revolution?
With @haixuantao and @Leyo, we've been playing with a low-cost DJI robot controlled by three local open-source AI models (Whisper, Idefics2, Parler-TTS, all Apache 2.0) and orchestrated by dora-rs.
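If you're curious what the listen -> see -> speak loop looks like in code, here is a rough sketch built on transformers pipelines. It uses small stand-in checkpoints (Whisper base, BLIP captioning, Bark) instead of the exact models above, and it leaves out the dora-rs dataflow and the robot control entirely, so treat it as an illustration of the idea rather than the actual demo code.

```python
# Rough sketch of the listen -> see -> speak loop with transformers pipelines.
# Stand-in checkpoints are used here; the real demo runs Whisper, Idefics2, and
# Parler-TTS as separate nodes orchestrated by dora-rs, plus the robot control code.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="openai/whisper-base")
vlm = pipeline("image-to-text", model="Salesforce/blip-image-captioning-base")
tts = pipeline("text-to-speech", model="suno/bark-small")

# Placeholder input files: a recorded voice command and a camera frame.
command = asr("voice_command.wav")["text"]
caption = vlm("camera_frame.jpg")[0]["generated_text"]
speech = tts(f"You asked: {command}. I can see {caption}.")

print(command, "->", caption)
# speech["audio"] and speech["sampling_rate"] can be written to a wav file and played back.
```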
Evaluate your RL agents - who's best at Atari?
The new RL leaderboard evaluates agents in 87 possible environments (from Atari to motion control simulations, and more)!
When you submit your model, it's run and evaluated in real time - and the leaderboard displays small videos of the best model's run, which is super fun to watch!
Kudos to @qgallouedec for creating and maintaining the leaderboard! Let's find out which agent is the best at games!
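If you're wondering what the evaluation boils down to, here's a tiny sketch of scoring an agent by its mean episode return with gymnasium; a random policy on CartPole stands in for a real leaderboard submission.

```python
# Tiny sketch of how an agent gets a score: mean episodic return over N episodes.
# A random policy on CartPole stands in for a real leaderboard submission.
import gymnasium as gym
import numpy as np

env = gym.make("CartPole-v1")
returns = []

for _ in range(10):                                   # 10 evaluation episodes
    obs, info = env.reset()
    done, total = False, 0.0
    while not done:
        action = env.action_space.sample()            # replace with your trained policy
        obs, reward, terminated, truncated, info = env.step(action)
        total += reward
        done = terminated or truncated
    returns.append(total)

print(f"mean return: {np.mean(returns):.1f} +/- {np.std(returns):.1f}")
```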