Ci Splunk PRO

Csplk

AI & ML interests

None yet

Recent Activity

Organizations

Blog-explorers, MetricLY, Hugging Face Discord Community

Csplk's activity

reacted to davidberenstein1957's post with ❤️ about 7 hours ago
🚀 Find banger tools for your smolagents!

I created the Tools gallery, which makes tools specifically developed by/for smolagents searchable and visible. This will help with:
- inspiration
- best practices
- finding cool tools

Space: davidberenstein1957/smolagents-and-tools
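
For anyone browsing the gallery, here is a minimal sketch of what using one of these Hub-hosted tools looks like with smolagents. The tool repo id below is a placeholder rather than a specific gallery entry, and `load_tool`/`CodeAgent`/`HfApiModel` reflect the smolagents API at the time of this post:

```python
from smolagents import CodeAgent, HfApiModel, load_tool

# Load a community tool published on the Hub (placeholder repo id).
image_tool = load_tool("some-user/text-to-image-tool", trust_remote_code=True)

# Hand the tool to a code agent backed by the HF Inference API.
agent = CodeAgent(tools=[image_tool], model=HfApiModel())
agent.run("Generate an image of a capybara wearing sunglasses.")
```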
reacted to hlarcher's post with 🔥 27 days ago
We are introducing multi-backend support in Hugging Face Text Generation Inference!
With the new TGI architecture we are now able to plug in new modeling backends to get the best performance for the selected model and available hardware. This first step will very soon be followed by the integration of new backends (TRT-LLM, llama.cpp, vLLM, Neuron and TPU).

We are polishing the TensorRT-LLM backend, which achieves impressive performance on NVIDIA GPUs, so stay tuned 🤗!

Check out the details: https://huggingface.co/blog/tgi-multi-backend
reacted to CultriX's post with ❤️ about 1 month ago
# Space for Multi-Agent Workflows using AutoGen

Hi all, I created this "AutoGen Multi-Agent Workflow" space that allows you to experiment with multi-agent workflows.

By default, it allows code generation with built-in quality control and automatic documentation generation. It achieves this by leveraging multiple AI agents working together to produce high-quality code snippets, ensuring they meet the specified requirements.

In addition to the default, the space allows users to set custom system messages for each assistant, potentially completely changing the workflow.

# Workflow Steps
1. User Input:
- The user defines a prompt, such as "Write a random password generator using python."
- Outcome: A clear task for the primary assistant to accomplish.

2. Primary Assistant Work:
- The primary assistant begins working on the provided prompt.
It generates an initial code snippet based on the user's request.
- Outcome: An initial proposal for the requested code.

3. Critic Feedback:
- The critic reviews the generated code and provides feedback or, if the output meets the criteria, broadcasts the APPROVED message.
(This process repeats until the output is APPROVED or 10 messages have been exchanged.)
- Outcome: A revised Python function that incorporates the critic's feedback.

4. Documentation Generation:
- Once the code is approved, it is passed to a documentation assistant.
The documentation assistant generates concise documentation for the final code.
- Outcome: Short documentation covering the function description, parameters, and return values.

Enjoy!
CultriX/AutoGen-MultiAgent-Example
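
For reference, the loop described above maps onto the standard AutoGen (pyautogen) two-agent pattern. The sketch below is a hedged approximation, not the Space's actual code: the agent names, model id, termination check, and message budget are illustrative assumptions.

```python
import autogen

# Placeholder LLM config; the Space lets you supply your own model and key.
llm_config = {"config_list": [{"model": "gpt-4o-mini", "api_key": "YOUR_API_KEY"}]}

coder = autogen.AssistantAgent(
    name="coder",
    system_message="Write Python code for the user's request; revise it when the critic gives feedback.",
    llm_config=llm_config,
    # Step 3: stop once the critic broadcasts APPROVED.
    is_termination_msg=lambda m: "APPROVED" in (m.get("content") or ""),
)

critic = autogen.AssistantAgent(
    name="critic",
    system_message="Review the code. Reply with the single word APPROVED if it meets the requirements, otherwise give concrete feedback.",
    llm_config=llm_config,
)

# Steps 1-3: the critic relays the user's prompt and the two agents iterate
# until APPROVED or the message budget is exhausted.
result = critic.initiate_chat(
    coder,
    message="Write a random password generator using python.",
    max_turns=10,
)

# Step 4: a separate assistant documents the approved snippet.
doc_writer = autogen.AssistantAgent(
    name="doc_writer",
    system_message="Write concise documentation (description, parameters, return values) for the given code.",
    llm_config=llm_config,
)
final_code = next(
    m["content"] for m in reversed(result.chat_history)
    if m.get("content") and "APPROVED" not in m["content"]
)
print(doc_writer.generate_reply(messages=[{"role": "user", "content": final_code}]))
```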
replied to singhsidhukuldeep's post about 1 month ago
reacted to cfahlgren1's post with 🔥 about 1 month ago
Wow, I just added Langfuse tracing to the Deepseek Artifacts app and it's really nice 🔥

It allows me to visualize and track more things along with the cfahlgren1/react-code-instructions dataset.

It was just added as a one-click Docker Space template, so it's super easy to self-host 💪
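
As a rough illustration of what adding tracing looks like in code (assuming the Langfuse v2 Python SDK and LANGFUSE_* environment variables are set; the function below is a hypothetical stand-in for the app's LLM call, not code from Deepseek Artifacts):

```python
from langfuse.decorators import observe

@observe()  # each call is recorded as a trace in the Langfuse UI
def generate_artifact(prompt: str) -> str:
    # stand-in for the real LLM call in the app
    return f"<generated React code for: {prompt}>"

if __name__ == "__main__":
    print(generate_artifact("Build a todo list component"))
```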
reacted to davidberenstein1957's post with ❤️ about 2 months ago
Introducing the Synthetic Data Generator, a user-friendly application that takes a no-code approach to creating custom datasets with Large Language Models (LLMs). The best part: A simple step-by-step process, making dataset creation a non-technical breeze, allowing anyone to create datasets and models in minutes and without any code.

Blog: https://huggingface.co/blog/synthetic-data-generator
Space: argilla/synthetic-data-generator
reacted to alielfilali01's post with ❤️ 2 months ago
Unpopular opinion: Open Source takes courage to do!

Not everyone is brave enough to release what they have done (the way they've done it) to the wild to be judged!
It really requires a high level of "knowing wth you are doing"! It's kind of a super power!

Cheers to the heroes here who see this!
reacted to dylanebert's post with 🔥 3 months ago
Generate meshes with AI locally in Blender

📢 New open-source release

meshgen, a local Blender integration of LLaMa-Mesh, is open source and available now 🤗

get started here: https://github.com/huggingface/meshgen
reacted to prithivMLmods's post with 👍 3 months ago
Weekend Dribble 📦🍺

Adapters for Product Ad Backdrops, Smooth Polaroids, Minimalist Sketch cards, Super Blends!!

๐ŸคDemo on: prithivMLmods/FLUX-LoRA-DLC

Stranger Zones :
๐Ÿ‘‰๐Ÿผ{ Super Blend } : strangerzonehf/Flux-Super-Blend-LoRA

๐Ÿ‘‰๐Ÿผ{ Product Concept Ad } : prithivMLmods/Flux-Product-Ad-Backdrop
๐Ÿ‘‰๐Ÿผ{ Frosted Mock-ups } : prithivMLmods/Flux.1-Dev-Frosted-Container-LoRA
๐Ÿ‘‰๐Ÿผ{ Polaroid Plus } : prithivMLmods/Flux-Polaroid-Plus
๐Ÿ‘‰๐Ÿผ{Sketch Cards} : prithivMLmods/Flux.1-Dev-Sketch-Card-LoRA

👉Stranger Zone: https://huggingface.co/strangerzonehf

👉Flux LoRA Collections: prithivMLmods/flux-lora-collections-66dd5908be2206cfaa8519be

.
.
.
@prithivMLmods 🤗
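
For anyone who wants to try one of these adapters outside the demo Space, a hedged sketch with diffusers follows; the FLUX.1-dev base model, prompt, and sampler settings are assumptions, so check each adapter's model card for its trigger words and recommended settings.

```python
import torch
from diffusers import FluxPipeline

# Load the FLUX.1-dev base model, then attach one of the adapters listed above.
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("prithivMLmods/Flux-Product-Ad-Backdrop")

# Illustrative prompt; the adapter's model card lists the real trigger phrase.
image = pipe(
    "Product Ad, a perfume bottle on a marble pedestal, soft studio backdrop",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("ad_backdrop.png")
```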
reacted to csabakecskemeti's post with 👍 3 months ago
Some time ago, I built a predictive LLM router that routes chat requests between small and large LLM models based on prompt classification. It dynamically selects the most suitable model depending on the complexity of the user input, ensuring optimal performance while maintaining conversation context. I also fine-tuned a RoBERTa model to use with the package, but you can plug and play any classifier of your choice.

Project's homepage:
https://devquasar.com/llm-predictive-router/
Pypi:
https://pypi.org/project/llm-predictive-router/
Model:
DevQuasar/roberta-prompt_classifier-v0.1
Training data:
DevQuasar/llm_router_dataset-synth
Git:
https://github.com/csabakecskemeti/llm_predictive_router_package

Feel free to check it out, and/or contribute.
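
The general idea is easy to prototype without the package: classify the prompt, then pick a model. Below is a minimal sketch using the transformers pipeline and the classifier named above (not the llm-predictive-router package's own API); the label names and routed model ids are assumptions, so check the model card and package docs for the real ones.

```python
from transformers import pipeline

# Prompt classifier from the post (fine-tuned RoBERTa).
classifier = pipeline(
    "text-classification",
    model="DevQuasar/roberta-prompt_classifier-v0.1",
)

# Hypothetical mapping from predicted class to chat model.
ROUTES = {
    "small_llm": "meta-llama/Llama-3.2-1B-Instruct",
    "large_llm": "meta-llama/Llama-3.1-70B-Instruct",
}

def route(prompt: str) -> str:
    label = classifier(prompt)[0]["label"]
    return ROUTES.get(label, ROUTES["large_llm"])  # fall back to the large model

print(route("What's the capital of France?"))
```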
replied to prithivMLmods's post 3 months ago

You really have been bringing the goodies lately (formerly too!), thanks!

reacted to prithivMLmods's post with ❤️ 3 months ago
Minimalistic Adapters 🎃

🚀Demo Here:
prithivMLmods/FLUX-LoRA-DLC

🚀Model:
{ Quote Tuner } : prithivMLmods/Flux.1-Dev-Quote-LoRA
{ Stamp Art } : prithivMLmods/Flux.1-Dev-Stamp-Art-LoRA
{ Hand Sticky } : prithivMLmods/Flux.1-Dev-Hand-Sticky-LoRA
{ Poster HQ } : prithivMLmods/Flux.1-Dev-Poster-HQ-LoRA
{ Ctoon Min } : prithivMLmods/Flux.1-Dev-Ctoon-LoRA

🚀Collection:
{ Flux LoRA Collection} : prithivMLmods/flux-lora-collections-66dd5908be2206cfaa8519be
{ LoRA Space Collection } : prithivMLmods/lora-space-collections-6714b72e0d49e1c97fbd6a32

🚀For More Visit
https://huggingface.co/strangerzonehf
.
.
.
🤗@prithivMLmods
reacted to merve's post with 🔥👀 3 months ago
OmniVision-968M: a new local VLM for edge devices, fast & small but performant
💨 a new vision language model with 9x fewer image tokens, super efficient
📖 aligned with DPO for reducing hallucinations
⚡️ Apache 2.0 license 🔥

Demo hf.co/spaces/NexaAIDev/omnivlm-dpo-demo
Model https://huggingface.co/NexaAIDev/omnivision-968M
reacted to cfahlgren1's post with 🔥 3 months ago
Why use Google Drive when you can have:

• Free storage with generous limits 🆓
• Dataset Viewer (Sorting, Filtering, FTS) 🔍
• Third Party Library Support
• SQL Console 🟧
• Security 🔒
• Community, Reach, and Visibility 📈

It's a no brainer!

Check out our post on what you get instantly out of the box when you create a dataset.
https://huggingface.co/blog/researcher-dataset-sharing
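
For anyone curious what "creating a dataset" amounts to in practice, here is a minimal sketch with the datasets library; the repo id and columns are placeholders, and you need to be logged in with huggingface-cli login or an HF_TOKEN:

```python
from datasets import Dataset

# Build a tiny dataset from plain Python data (placeholder columns).
ds = Dataset.from_dict({
    "prompt": ["Write a haiku about autumn", "Explain TCP slow start"],
    "response": ["...", "..."],
})

# One call uploads the data; the Dataset Viewer, SQL Console, etc. come for free.
ds.push_to_hub("your-username/my-first-dataset")
```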
reacted to merve's post with ❤️ 3 months ago
Amazing past days at open ML, it's raining coding models, let's have a recap 🌧️ Find all models and datasets here merve/nov-15-releases-67372d0ebdc354756a52ecd0

Models
💻 Coding: Qwen team released two Qwen2.5-Coder checkpoints of 32B and 7B. Infly released OpenCoder: 1.5B and 8B coding models with instruction SFT'd versions and their datasets! 💗

🖼️ Image/Video Gen: Alibaba vision lab released In-context LoRA -- 10 LoRA models on different themes based on Flux. Also Mochi, the SOTA video generation model with Apache 2.0 license, now comes natively supported in diffusers 👏

🖼️ VLMs/Multimodal: NexaAIDev released Omnivision 968M, a new vision language model aligned with DPO for reducing hallucinations, which also comes with GGUF ckpts 👏 Microsoft released LLM2CLIP, a new CLIP-like model with a longer context window allowing complex text inputs and better search

🎮 AGI?: Etched released Oasis 500M, a diffusion-based open world model that takes keyboard input and outputs gameplay 🤯

Datasets
Common Corpus: A text dataset with 2T tokens with permissive license for EN/FR on various sources: code, science, finance, culture 📖
reacted to chansung's post with 🔥 3 months ago
๐ŸŽ™๏ธ Listen to the audio "Podcast" of every single Hugging Face Daily Papers.

Now, "AI Paper Reviewer" project can automatically generates audio podcasts on any papers published on arXiv, and this is integrated into the GitHub Action pipeline. I sounds pretty similar to hashtag#NotebookLM in my opinion.

๐ŸŽ™๏ธ Try out yourself at https://deep-diver.github.io/ai-paper-reviewer/

This audio podcast is powered by Google technologies: 1) Google DeepMind Gemini 1.5 Flash model to generate scripts of a podcast, then 2) Google Cloud Vertex AI's Text to Speech model to synthesize the voice turning the scripts into the natural sounding voices (with latest addition of "Journey" voice style)

"AI Paper Reviewer" is also an open source project. Anyone can use it to build and own a personal blog on any papers of your interests. Hence, checkout the project repository below if you are interested in!
: https://github.com/deep-diver/paper-reviewer

This project is going to support other models including open weights soon for both text-based content generation and voice synthesis for the podcast. The only reason I chose Gemini model is that it offers a "free-tier" which is enough to shape up this projects with non-realtime batch generations. I'm excited to see how others will use this tool to explore the world of AI research, hence feel free to share your feedback and suggestions!
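
A hedged sketch of that two-step pipeline is below: Gemini 1.5 Flash drafts a script, then Google Cloud Text-to-Speech voices it. The SDK choice, prompt, and voice name are illustrative assumptions; the project's actual prompts, voices, and Vertex AI setup differ.

```python
import google.generativeai as genai
from google.cloud import texttospeech

# 1) Generate a podcast script with Gemini 1.5 Flash (placeholder API key/prompt).
genai.configure(api_key="YOUR_GEMINI_API_KEY")
model = genai.GenerativeModel("gemini-1.5-flash")
script = model.generate_content(
    "Write a short two-host podcast script summarizing this paper abstract: ..."
).text

# 2) Synthesize the script with Google Cloud Text-to-Speech (Journey-style voice
#    name is an assumption; pick any voice your project has access to).
tts = texttospeech.TextToSpeechClient()
response = tts.synthesize_speech(
    input=texttospeech.SynthesisInput(text=script),
    voice=texttospeech.VoiceSelectionParams(language_code="en-US", name="en-US-Journey-D"),
    audio_config=texttospeech.AudioConfig(audio_encoding=texttospeech.AudioEncoding.MP3),
)
with open("episode.mp3", "wb") as f:
    f.write(response.audio_content)
```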
reacted to abhishek's post with 🔥 3 months ago
INTRODUCING Hugging Face AutoTrain Client 🔥
Fine-tuning models got even easier!!!!
Now you can fine-tune SOTA models on all compatible dataset-model pairs on Hugging Face Hub using Python on Hugging Face Servers. Choose from a number of GPU flavors, millions of models and dataset pairs and 10+ tasks 🤗

To try it, install autotrain-advanced using pip. You can skip its dependencies by installing with --no-deps, but then you'd need to install some dependencies by hand.

"pip install autotrain-advanced"

Github repo: https://github.com/huggingface/autotrain-advanced
reacted to prithivMLmods's post with 🧠 3 months ago
Quintet Drop : : 🤗

{ Flux LoRA DLC ⛵ } : prithivMLmods/FLUX-LoRA-DLC

-- Purple Dreamy
{ pop of color } : prithivMLmods/Purple-Dreamy-Flux-LoRA

-- Golden Dust
{ shimmer contrast } : prithivMLmods/Golden-Dust-Flux-LoRA

-- Lime Green
{ depth to the composition } : prithivMLmods/Lime-Green-Flux-LoRA

-- Flare Strike
{ Fractured Line } : prithivMLmods/Fractured-Line-Flare

-- Orange Chroma
{ studio lighting } : prithivMLmods/Orange-Chroma-Flux-LoRA
.
.
.
{ collection } : prithivMLmods/flux-lora-collections-66dd5908be2206cfaa8519be

@prithivMLmods
replied to di-zhang-fdu's post 3 months ago

For users with Chinese IP addresses, consider adding this URL to the rules of your U.S. node, as the response headers from this site will report the user's physical location to GPT.

I'm interested in what this means; can you say more about this part on Chinese IPs?