Stefano Fiorucci

anakin87

AI & ML interests

Contributing to the Haystack LLM framework šŸ—ļø. Language Models: orchestration, post-training, synthetic data...


Organizations

deepset, Blog-explorers, ZeroGPU Explorers, Hugging Face Discord Community

Posts (13)

ššžš° šˆš­ššš„š¢ššš§ š’š¦ššš„š„ š‹ššš§š š®ššš šž šŒšØššžš„š¬: š†šžš¦š¦šš ššžšØš šžš§šžš¬š¢š¬ šœšØš„š„šžšœš­š¢šØš§ šŸ’ŽšŸŒšŸ‡®šŸ‡¹

I am happy to release two new language models for the Italian language!

šŸ’Ŗ Gemma 2 9B Neogenesis ITA
anakin87/gemma-2-9b-neogenesis-ita
Building on the impressive work by VAGO Solutions, I applied Direct Preference Optimization with a mix of Italian and English data.
Using Spectrum, I trained only 20% of the model's layers.
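If you're curious what Spectrum-style selective training looks like in practice, here's a minimal sketch of the idea (not the official Spectrum tool): freeze everything, then unfreeze only the parameter groups that Spectrum's signal-to-noise analysis flags. The base model id and layer patterns below are placeholders, not the exact ones from my run.

```python
# Minimal sketch of Spectrum-style selective training (not the official Spectrum tool):
# freeze every parameter, then unfreeze only the modules selected by the SNR analysis.
# Base model id and layer patterns are placeholders.
import re
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("google/gemma-2-9b-it")  # stand-in for the actual base checkpoint

# Hypothetical result of Spectrum's analysis: regexes of parameters to keep trainable (~20% of layers)
unfrozen_patterns = [
    r"model\.layers\.(2|5|11|17|23|29|35)\.mlp\..*",
    r"model\.layers\.(2|5|11|17|23|29|35)\.self_attn\..*",
]

for name, param in model.named_parameters():
    param.requires_grad = any(re.fullmatch(p, name) for p in unfrozen_patterns)

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"Trainable parameters: {trainable / total:.1%}")
```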

šŸ“Š Evaluated on the Open ITA LLM leaderboard (mii-llm/open_ita_llm_leaderboard), this model achieves strong performance.
To beat it on this benchmark, you'd need a 27B model šŸ˜Ž


šŸ¤ Gemma 2 2B Neogenesis ITA
anakin87/gemma-2-2b-neogenesis-ita
This smaller variant is fine-tuned from Google's original Gemma 2 2B it model.
Through a combination of Supervised Fine-Tuning and Direct Preference Optimization, I trained 25% of the layers using Spectrum.
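For a feel of the recipe, here's a rough sketch of the two-stage SFT → DPO pipeline with TRL. Dataset files and hyperparameters are illustrative placeholders (and older TRL versions use tokenizer= instead of processing_class=); see the Kaggle notebook for the real thing.

```python
# Rough sketch of the SFT -> DPO recipe with TRL; dataset files and hyperparameters
# are placeholders, not the ones from the actual training run.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import SFTTrainer, SFTConfig, DPOTrainer, DPOConfig

model = AutoModelForCausalLM.from_pretrained("google/gemma-2-2b-it")
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-2b-it")
# (Spectrum-style freezing of most layers, as in the sketch above, would be applied to `model` here.)

# Stage 1: Supervised Fine-Tuning on instruction data in conversational "messages" format
sft_dataset = load_dataset("json", data_files="sft_italian_mix.json", split="train")  # placeholder file
sft_trainer = SFTTrainer(
    model=model,
    args=SFTConfig(output_dir="gemma-2-2b-sft", num_train_epochs=1, learning_rate=5e-6),
    train_dataset=sft_dataset,
    processing_class=tokenizer,  # `tokenizer=` in older TRL versions
)
sft_trainer.train()

# Stage 2: Direct Preference Optimization on (prompt, chosen, rejected) triples
dpo_dataset = load_dataset("json", data_files="dpo_italian_mix.json", split="train")  # placeholder file
dpo_trainer = DPOTrainer(
    model=sft_trainer.model,  # start from the SFT checkpoint
    args=DPOConfig(output_dir="gemma-2-2b-neogenesis", num_train_epochs=1, learning_rate=5e-7, beta=0.1),
    train_dataset=dpo_dataset,
    processing_class=tokenizer,  # `tokenizer=` in older TRL versions
)
dpo_trainer.train()
```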

šŸ“ˆ Compared to the original model, it shows improved Italian proficiency, which is good for its small size.
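Here's a quick usage sketch with the transformers pipeline (recent versions accept chat messages directly); the prompt and generation settings are just examples.

```python
# Quick usage sketch: chatting with the 2B model via the transformers pipeline.
# Prompt and generation settings are illustrative.
import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="anakin87/gemma-2-2b-neogenesis-ita",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [{"role": "user", "content": "Spiegami in due frasi cos'ĆØ il post-training di un modello linguistico."}]
output = generator(messages, max_new_tokens=200)
print(output[0]["generated_text"][-1]["content"])  # assistant reply
```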


Both models were developed during the recent #gemma competition on Kaggle.
šŸ““ Training code: https://www.kaggle.com/code/anakin87/post-training-gemma-for-italian-and-beyond


šŸ™ Thanks @FinancialSupport and mii-llm for the help during evaluation.

Hey, it's been a while... I was busy participating in the šŸ’Ž š†šžš¦š¦šš šœšØš¦š©šžš­š¢š­š¢šØš§!

Here's the idea: Gemma open models have a large vocabulary (256K tokens), so improving them for a specific language or cultural context should be pretty affordable, with no need for continued pre-training.
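You can check this in a couple of lines (using the instruction-tuned 2B checkpoint as an example):

```python
# Quick check: Gemma 2's tokenizer has a ~256K-token vocabulary,
# so Italian text already tokenizes reasonably compactly out of the box.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-2b-it")
print(tokenizer.vocab_size)                           # 256000
print(tokenizer.tokenize("Perché il cielo ĆØ blu?"))   # few tokens per Italian word
```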

My submission: šŸ’ŽšŸŒšŸ‡®šŸ‡¹ ššžšØš šžš§šžš¬š¢š¬ - ššØš¬š­-š“š«ššš¢š§š¢š§š  š†šžš¦š¦šš šŸšØš« šˆš­ššš„š¢ššš§ ššš§š š›šžš²šØš§š
šŸ““ Kaggle notebook: https://www.kaggle.com/code/anakin87/post-training-gemma-for-italian-and-beyond

In this notebook, I show how I improved the performance of Gemma 2 2B on Italian via post-training.
I believe this method is adaptable to other languages and model sizes.

š˜’š˜¦š˜ŗ š˜šš˜µš˜¦š˜±š˜“
šŸ“Š Choose reference metrics
šŸ§‘ā€šŸ”¬ Data curation for Instruction Fine Tuning: identify existing datasets + generate synthetic data
šŸ‹ļøā€ā™‚ļø Efficient Instruction Fine Tuning with Spectrum
šŸ§‘ā€šŸ”¬ Data curation for Preference Tuning: identify existing datasets + generate synthetic data
šŸ‘šŸ‘Ž Efficient Direct Preference Optimization with Spectrum
šŸ“ˆ Evaluation
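As an example of the preference-data step above, here's a hedged sketch of one way to build synthetic (prompt, chosen, rejected) triples: sample the "rejected" answer from the model being improved and the "chosen" answer from a stronger teacher. Model ids and prompts are placeholders; the notebook's actual curation pipeline is more involved.

```python
# Hedged sketch of synthetic preference-pair generation for DPO:
# "rejected" comes from the model being improved, "chosen" from a stronger teacher.
# Model ids and prompts are placeholders; the real curation is more involved.
from datasets import Dataset
from transformers import pipeline

student = pipeline("text-generation", model="google/gemma-2-2b-it", device_map="auto")
teacher = pipeline("text-generation", model="google/gemma-2-9b-it", device_map="auto")

prompts = [
    "Scrivi una breve poesia sull'autunno.",
    "Spiega la fotosintesi a un bambino di otto anni.",
]

def generate(pipe, prompt):
    messages = [{"role": "user", "content": prompt}]
    out = pipe(messages, max_new_tokens=256)
    return out[0]["generated_text"][-1]["content"]

records = [
    {"prompt": p, "chosen": generate(teacher, p), "rejected": generate(student, p)}
    for p in prompts
]
preference_dataset = Dataset.from_list(records)  # prompt / chosen / rejected, ready for DPOTrainer
```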


šŸ¤— Hugging Face collection (with models and datasets): anakin87/gemma-neogenesis-67824b7bf13ac9cfe091fe2e

I'm also planning a šŸŽ Gemma Giveaway (on LinkedIn: https://www.linkedin.com/in/stefano-fiorucci) in the next few days, sharing the techniques, datasets, and models I used for my project... so stay tuned! šŸ“»