Jeff Boudier

jeffboudier

AI & ML interests

Hugging Face!

Organizations

Hugging Face, Renault Group, Intel, Spaces-explorers, Qualcomm, julsimon-test, AWS Inferentia and Trainium, Spotify, Amazon SageMaker Community, Hugging Face Infinity, Demo Corp, Habana AI, Hugging Face Optimum, Hugging Test Lab, WIP, Evaluation on the Hub, HuggingFaceM4, Hackathon Team 1, Open-Source AI Meetup, AMD, model-attribution-challenge-admin, model-attribution-challenge, Inference Endpoints, Hugging Face OSS Metrics, EU org, Enterprise Explorers, Optimum Nvidia, Social Post Explorers, Optimum-Intel, Hugging Face Machine Learning Optimization, Hugging Face Party @ PyTorch Conference, Google Cloud 🤝🏻 Hugging Face, Huggingface HUGS, Nerdy Face, open/ acc

jeffboudier's activity

reacted to burtenshaw's post with 🤗❤️ 6 days ago
People are flexing their end-of-year stats, so I made this app to show Hub stats in a tidy design!

Thanks @Ameeeee and @jfcalvo for the feature from Argilla!
burtenshaw/recap
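
If you want to pull the raw numbers behind such a recap yourself, here is a minimal sketch using huggingface_hub (the username is a placeholder, and summing downloads/likes is an illustrative approach, not necessarily what the app does):

```python
# Minimal sketch: tally a user's Hub stats with huggingface_hub.
# The username is a placeholder; swap in your own.
from huggingface_hub import HfApi

api = HfApi()
models = list(api.list_models(author="your-username"))  # placeholder username

total_downloads = sum(m.downloads or 0 for m in models)
total_likes = sum(m.likes or 0 for m in models)
print(f"{len(models)} models, {total_downloads} downloads, {total_likes} likes")
```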
reacted to julien-c's post with 🤗❤️ 14 days ago
After some heated discussion 🔥, we clarify our intent re. storage limits on the Hub

TL;DR:
- public storage is free and (barring blatant abuse) unlimited. We do ask that you consider upgrading to PRO and/or Enterprise Hub if possible
- private storage is paid above a significant free tier (1TB if you have a paid account, 100GB otherwise)

docs: https://huggingface.co/docs/hub/storage-limits

We optimize our infrastructure continuously to scale our storage for the coming years of growth in machine learning, to the benefit of the community 🔥

cc: @reach-vb @pierric @victor and the HF team
reacted to clem's post with 🔥 22 days ago
Six predictions for AI in 2025 (and a review of how my 2024 predictions turned out):

- There will be the first major public protest related to AI
- A big company will see its market cap divided by two or more because of AI
- At least 100,000 personal AI robots will be pre-ordered
- China will start to lead the AI race (as a consequence of leading the open-source AI race)
- There will be big breakthroughs in AI for biology and chemistry
- We will begin to see the economic and employment growth potential of AI, with 15M AI builders on Hugging Face

How my predictions for 2024 turned out:

- A hyped AI company will go bankrupt or get acquired for a ridiculously low price
✅ (Inflection, Adept AI, ...)

- Open-source LLMs will reach the level of the best closed-source LLMs
✅ with QwQ and dozens of others

- Big breakthroughs in AI for video, time-series, biology and chemistry
✅ for video 🔴 for time-series, biology and chemistry

- We will talk much more about the cost (monetary and environmental) of AI
✅ Monetary 🔴 Environmental (😢)

- A popular media product will be mostly AI-generated
✅ with NotebookLM by Google

- 10 million AI builders on Hugging Face, leading to no increase in unemployment
🔜 currently 7M AI builders on Hugging Face
reacted to andito's post with ❤️ 27 days ago
Let's go! We are releasing SmolVLM, a smol 2B VLM built for on-device inference that outperforms all models at similar GPU RAM usage and token throughput.

- SmolVLM generates tokens 7.5 to 16 times faster than Qwen2-VL! 🤯
- Other models at this size crash a laptop, but SmolVLM comfortably generates 17 tokens/sec on a MacBook! 🚀
- SmolVLM can be fine-tuned on a Google Colab! Or process millions of documents with a consumer GPU!
- SmolVLM even outperforms larger models in video benchmarks, despite not even being trained on videos!

Check out more!
Demo: HuggingFaceTB/SmolVLM
Blog: https://huggingface.co/blog/smolvlm
Model: HuggingFaceTB/SmolVLM-Instruct
Fine-tuning script: https://github.com/huggingface/smollm/blob/main/finetuning/Smol_VLM_FT.ipynb
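
For a quick local test, here is a minimal sketch of the standard transformers flow for SmolVLM-Instruct (assumes a transformers version with SmolVLM support; the image path is a placeholder):

```python
# Minimal sketch: image description with SmolVLM-Instruct.
# Assumes a transformers release that includes SmolVLM support.
import torch
from transformers import AutoProcessor, AutoModelForVision2Seq
from transformers.image_utils import load_image

model_id = "HuggingFaceTB/SmolVLM-Instruct"
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForVision2Seq.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

image = load_image("path/to/image.jpg")  # placeholder path

# Build a chat-style prompt with one image and one question.
messages = [{
    "role": "user",
    "content": [
        {"type": "image"},
        {"type": "text", "text": "Describe this image."},
    ],
}]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=prompt, images=[image], return_tensors="pt").to(model.device)

generated_ids = model.generate(**inputs, max_new_tokens=128)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```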
posted an update about 1 month ago
replied to clem's post 2 months ago

📆 Wed Oct 30th - 9am PT / 12pm ET / 18h CET
Can't wait!

reacted to clem's post with ❤️🤗🔥🚀 2 months ago
This is no Woodstock AI but will be fun nonetheless haha. I'll be hosting a live workshop with team members next week about the Enterprise Hugging Face Hub.

1,000 spots available, first-come first-served, with some surprises during the stream!

You can register and add to your calendar here: https://streamyard.com/watch/JS2jHsUP3NDM
reacted to victor's post with 🚀❤️🔥🤗 2 months ago
NEW - Inference Playground

Maybe, like me, you have always wanted a super easy way to compare llama3.2-1B vs. llama3.2-3B? Or the same model with different temperatures?

Trying and comparing warm Inference API models has never been easier!
Just go to https://hf.co/playground, set your token and you're ready to go.
We'll keep improving, feedback welcome 😊
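
If you prefer scripting the same comparison, here is a minimal sketch with huggingface_hub's InferenceClient; the model IDs and temperatures are illustrative choices:

```python
# Minimal sketch: compare two Inference API models on the same prompt.
# Model IDs and temperatures are illustrative, not a recommendation.
from huggingface_hub import InferenceClient

prompt = [{"role": "user", "content": "Explain beam search in one sentence."}]

for model_id in ["meta-llama/Llama-3.2-1B-Instruct", "meta-llama/Llama-3.2-3B-Instruct"]:
    client = InferenceClient(model_id)
    for temperature in (0.2, 0.9):
        out = client.chat_completion(prompt, max_tokens=64, temperature=temperature)
        print(f"{model_id} @ T={temperature}:\n{out.choices[0].message.content}\n")
```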
posted an update 2 months ago
posted an update 3 months ago
Inference Endpoints got a bunch of cool updates yesterday; here are my top 3.
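
For anyone new to the product, deploying a model to Inference Endpoints can also be scripted; below is a minimal sketch with huggingface_hub, where the endpoint name, model repo, and instance choices are all illustrative assumptions rather than any of yesterday's updates:

```python
# Minimal sketch: programmatically create an Inference Endpoint.
# All values (name, repo, instance type/size, region) are illustrative.
from huggingface_hub import create_inference_endpoint

endpoint = create_inference_endpoint(
    "my-demo-endpoint",                  # hypothetical endpoint name
    repository="openai-community/gpt2",  # example model repo
    framework="pytorch",
    task="text-generation",
    accelerator="cpu",
    vendor="aws",
    region="us-east-1",
    type="protected",
    instance_size="x2",
    instance_type="intel-icl",
)
endpoint.wait()  # block until the endpoint is running
print(endpoint.client.text_generation("The Hugging Face Hub is"))
endpoint.delete()  # clean up to stop billing
```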
reacted to m-ric's post with 🔥 3 months ago
🔥 Qwen releases their 2.5 family of models: New SOTA for all sizes up to 72B!

The Chinese LLM maker just dropped a flurry of different models, ensuring there will be a Qwen SOTA model for every application out there:
Qwen2.5: 0.5B, 1.5B, 3B, 7B, 14B, 32B, and 72B
Qwen2.5-Coder: 1.5B, 7B, and 32B on the way
Qwen2.5-Math: 1.5B, 7B, and 72B.

And they didn't sleep: the performance is top of the game for each weight category!

Key insights:

🌐 All models have 128k token context length

📚 Models pre-trained on 18T tokens, even more than the 15T of Llama-3

💪 The flagship Qwen2.5-72B is ~competitive with Llama-3.1-405B, and has a 3-5% margin on Llama-3.1-70B on most benchmarks.

🇫🇷 On top of this, it takes the #1 spot on multilingual tasks, so it might become my standard for French

💻 Qwen2.5-Coder is only 7B but beats competing models up to 33B (DeepSeek-Coder 33B-Instruct). Let's wait for their 32B to come out!

🧮 Qwen2.5-Math sets a new high in the ratio of MATH benchmark score to # of parameters. They trained it by "aggregating more high-quality mathematical data, particularly in Chinese, from web sources, books, and codes across multiple recall cycles."

📄 Technical report to be released "very soon"

🔓 All models have the most permissive license, Apache 2.0, except the 72B models, which have a custom license mentioning "you can use it for free EXCEPT if your product has over 100M users"

🤗 All models are available on the HF Hub! ➡️ Qwen/qwen25-66e81a666513e518adb90d9e
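
To try one locally, here is a minimal sketch of the standard transformers chat flow (the 0.5B variant and generation settings are illustrative choices):

```python
# Minimal sketch: chat with a small Qwen2.5 model via transformers.
# The 0.5B variant is chosen only to keep the example lightweight.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2.5-0.5B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Give me a one-line summary of the Qwen2.5 release."}]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(text, return_tensors="pt").to(model.device)

output_ids = model.generate(**inputs, max_new_tokens=64)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output_ids[0][inputs.input_ids.shape[1]:], skip_special_tokens=True))
```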