Mariusz Kurman PRO
mkurman
AI & ML interests
AI Tech Lead | MD
Recent Activity
liked a model 3 days ago: HuggingFaceTB/SmolVLM-500M-Instruct
liked a model 3 days ago: HuggingFaceTB/SmolVLM-256M-Instruct
Organizations
mkurman's activity
reacted to kadirnar's post with 🔥 6 days ago
replied to their post 8 days ago
You can also experiment with these models using my Monte Carlo Tree Search generation pipeline:
https://github.com/mkurman/mcts-pytorch
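For context, here is a minimal sketch of the idea behind this kind of search-based generation: sample several candidate continuations and keep the one the model is most confident in. This is not the mcts-pytorch API itself; the model choice and the scoring heuristic are my own illustrative assumptions.

```python
# Minimal sketch of multi-path decoding with confidence scoring (NOT the
# actual mcts-pytorch API; model and scoring heuristic are illustrative).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mkurman/llama-3.2-MEDIT-3B-o1"  # one of the models mentioned in this feed
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

def best_of_n(prompt: str, n_paths: int = 4, max_new_tokens: int = 128) -> str:
    inputs = tok(prompt, return_tensors="pt")
    out = model.generate(
        **inputs,
        do_sample=True,
        temperature=0.8,
        num_return_sequences=n_paths,
        max_new_tokens=max_new_tokens,
        output_scores=True,
        return_dict_in_generate=True,
    )
    # Per-token log-probs of each sampled path.
    scores = model.compute_transition_scores(out.sequences, out.scores, normalize_logits=True)
    # Mean log-prob over generated tokens (masking padded positions); higher = more confident.
    finite = torch.isfinite(scores)
    path_scores = torch.where(finite, scores, torch.zeros_like(scores)).sum(-1) / finite.sum(-1)
    return tok.decode(out.sequences[path_scores.argmax()], skip_special_tokens=True)

print(best_of_n("Solve step by step: what is 17 * 23?"))
```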
posted an update 8 days ago
ReasonFlow 🧠
Are you fascinated by reasoning models? If so, you won't want to miss my latest project! I've implemented multiple-path generation to supercharge the reasoning capabilities of o1-like models. Explore how this work can elevate your model on complex reasoning tasks!
https://github.com/mkurman/ReasonFlow
Use it with:
mkurman/phi4-MedIT-10B-o1
- or -
mkurman/llama-3.2-MEDIT-3B-o1
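As one hedged illustration of what multiple-path generation can buy you (ReasonFlow's actual pipeline may differ; see the repo above), here is a self-consistency-style loop: sample several reasoning traces and take the majority final answer. The assumption that the answer sits on the last line of a trace is purely illustrative.

```python
# Self-consistency-style sketch of multiple-path reasoning; NOT the actual
# ReasonFlow implementation (see the repo above for the real pipeline).
from collections import Counter

import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="mkurman/llama-3.2-MEDIT-3B-o1",  # one of the models listed above
    torch_dtype=torch.bfloat16,
)

def self_consistent_answer(question: str, n_paths: int = 5) -> str:
    outputs = generator(
        question,
        do_sample=True,
        temperature=0.9,
        num_return_sequences=n_paths,
        max_new_tokens=256,
        return_full_text=False,
    )
    # Illustrative assumption: the last line of each trace holds the answer.
    answers = [o["generated_text"].strip().splitlines()[-1] for o in outputs]
    return Counter(answers).most_common(1)[0][0]  # majority vote across paths

print(self_consistent_answer("A train travels 120 km in 1.5 hours. What is its average speed?"))
```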
reacted to Jaward's post with 🔥 15 days ago
Huge AI win in medicine!
"Large language of life model" just dropped!!
Full paper: https://www.nature.com/articles/s41586-024-08391-z
"Large language of life model" just dropped!!
Full paper: https://www.nature.com/articles/s41586-024-08391-z
reacted to prithivMLmods's post with 🔥 20 days ago
Reasoning SmolLM2
🎯 Fine-tuning SmolLM2 on a lightweight synthetic reasoning dataset for reasoning-specific tasks. Future updates will focus on lightweight, blazing-fast reasoning models. Until then, check out the blog for fine-tuning details.
🔥 Blog: https://huggingface.co/blog/prithivMLmods/smollm2-ft
🖼️ Models:
+ SmolLM2-CoT-360M: prithivMLmods/SmolLM2-CoT-360M
+ Reasoning-SmolLM2-135M: prithivMLmods/Reasoning-SmolLM2-135M
+ SmolLM2-CoT-360M-GGUF: prithivMLmods/SmolLM2-CoT-360M-GGUF
🤗 Other Details:
+ Demo: prithivMLmods/SmolLM2-CoT-360M
+ Fine-tune notebook: prithivMLmods/SmolLM2-CoT-360M
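For anyone who wants the gist before reading the blog, a rough fine-tuning sketch with trl's SFTTrainer is below; the dataset, base model, and hyperparameters here are placeholders, not the recipe from the post.

```python
# Rough SFT sketch in the spirit of the blog above; dataset, base model, and
# hyperparameters are placeholder assumptions (see the blog for the real recipe).
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Placeholder chat-formatted data; the blog uses a synthetic reasoning dataset.
dataset = load_dataset("HuggingFaceTB/smoltalk", "all", split="train[:1%]")

trainer = SFTTrainer(
    model="HuggingFaceTB/SmolLM2-360M-Instruct",  # base model for a CoT fine-tune
    train_dataset=dataset,
    args=SFTConfig(output_dir="smollm2-cot", max_seq_length=2048, num_train_epochs=1),
)
trainer.train()
```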
reacted to openfree's post with 🔥 21 days ago
# 🧬 Protein Genesis AI: Design Proteins with Just a Prompt
## 🤔 Current Challenges in Protein Design
Traditional protein design faces critical barriers:
- 💰 High costs ($1M - $10M+) & long development cycles (2-3 years)
- 🔬 Complex equipment and expert knowledge required
- 📉 Low success rates (<10%)
- ⏰ Time-consuming experimental validation
## ✨ Our Solution: Protein Genesis AI
Transform protein design through simple natural language input:
"Design a protein that targets cancer cells"
"Create an enzyme that breaks down plastic"
### Key Features
- 🤖 AI-powered automated design
- 📊 Real-time analysis & optimization
- 🔬 Instant 3D visualization
- 💾 Immediate PDB file generation
## 🎯 Applications
### Medical & Industrial
- 🏥 Drug development
- 💉 Antibody design
- 🏭 Industrial enzymes
- ♻️ Environmental solutions
### Research & Education
- 🔬 Basic research
- 📚 Educational tools
- 🧫 Experimental design
- 📊 Data analysis
## 💫 Key Advantages
- 👨‍💻 No coding or technical expertise needed
- ⚡ Results in minutes (vs. years)
- 💰 90% cost reduction
- 🌐 Accessible anywhere
## 🏢 Who Needs This?
- 🏢 Biotech companies
- 🏥 Pharmaceutical research
- 🎓 Academic institutions
- 🧪 Research laboratories
## 🌟 Why It Matters
Protein Genesis AI democratizes protein design by transforming complex processes into simple text prompts. This breakthrough accelerates scientific discovery, potentially leading to faster drug development and innovative biotechnology solutions. The future of protein design starts with a simple prompt!
openfree/ProteinGenesis
reacted to singhsidhukuldeep's post with 👍 21 days ago
Exciting breakthrough in e-commerce recommendation systems!
Walmart Global Tech researchers have developed a novel Triple Modality Fusion (TMF) framework that revolutionizes how we make product recommendations.
>> Key Innovation
The framework ingeniously combines three distinct data types:
- Visual data to capture product aesthetics and context
- Textual information for detailed product features
- Graph data to understand complex user-item relationships
>> Technical Architecture
The system leverages a Large Language Model (Llama2-7B) as its backbone and introduces several sophisticated components:
Modality Fusion Module
- All-Modality Self-Attention (AMSA) for unified representation
- Cross-Modality Attention (CMA) mechanism for deep feature integration
- Custom FFN adapters to align different modality embeddings
Advanced Training Strategy
- Curriculum learning approach with three complexity levels
- Parameter-Efficient Fine-Tuning using LoRA
- Special token system for behavior and item representation
>> Real-World Impact
The results are remarkable:
- 38.25% improvement in Electronics recommendations
- 43.09% boost in Sports category accuracy
- Significantly higher human evaluation scores compared to traditional methods
Currently deployed in Walmart's production environment, this research demonstrates how combining multiple data modalities with advanced LLM architectures can dramatically improve recommendation accuracy and user satisfaction.
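To make the fusion-module vocabulary above concrete, here is an illustrative PyTorch sketch of cross-modality attention (CMA) feeding an all-modality self-attention (AMSA) and an FFN adapter; layer sizes and wiring are my assumptions, not Walmart's implementation.

```python
# Illustrative sketch of the TMF fusion-module ideas described above;
# dimensions and wiring are assumptions, not the paper's exact architecture.
import torch
import torch.nn as nn

class CrossModalityFusion(nn.Module):
    def __init__(self, dim: int = 512, heads: int = 8):
        super().__init__()
        self.cma = nn.MultiheadAttention(dim, heads, batch_first=True)   # CMA
        self.amsa = nn.MultiheadAttention(dim, heads, batch_first=True)  # AMSA
        self.adapter = nn.Sequential(  # FFN adapter into the LLM embedding space
            nn.Linear(dim, 4096), nn.GELU(), nn.Linear(4096, 4096)
        )

    def forward(self, visual, text, graph):
        # Cross-modality attention: let text tokens attend to visual features.
        text_vis, _ = self.cma(text, visual, visual)
        # All-modality self-attention over the concatenated token streams.
        fused = torch.cat([text_vis, graph], dim=1)
        fused, _ = self.amsa(fused, fused, fused)
        # Project fused tokens to the backbone LLM's hidden size (Llama2-7B: 4096).
        return self.adapter(fused)

# Shapes: (batch, tokens, dim) per modality.
fusion = CrossModalityFusion()
out = fusion(torch.randn(2, 16, 512), torch.randn(2, 32, 512), torch.randn(2, 8, 512))
print(out.shape)  # torch.Size([2, 40, 4096])
```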
reacted to Sri-Vigneshwar-DJ's post with 🔥 21 days ago
Combining smolagents with Anthropic's best practices simplifies building powerful AI agents:
1. Code-Based Agents: Write actions as Python code, reducing steps by 30%.
2. Prompt Chaining: Break tasks into sequential subtasks with validation gates.
3. Routing: Classify inputs and direct them to specialized handlers.
4. Fallback: Handle tasks even if classification fails.
https://huggingface.co/blog/Sri-Vigneshwar-DJ/building-effective-agents-with-anthropics-best-pra
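A hedged sketch of the routing-plus-fallback pattern with smolagents follows; the classifier prompt and the fallback rule are illustrative, and the exact model-call API may differ across smolagents versions.

```python
# Hedged sketch of routing + fallback with smolagents; classifier prompt and
# fallback logic are illustrative, not from the linked blog.
from smolagents import CodeAgent, HfApiModel

model = HfApiModel()  # defaults to a hosted inference model
coding_agent = CodeAgent(tools=[], model=model)
general_agent = CodeAgent(tools=[], model=model)

def route(task: str) -> str:
    # Routing: a cheap classification call decides which handler gets the task.
    label = model([{"role": "user",
                    "content": f"Classify as 'code' or 'general': {task}"}]).content
    # Fallback: default to the general agent if classification is unclear.
    agent = coding_agent if "code" in label.lower() else general_agent
    return agent.run(task)

print(route("Compute the 20th Fibonacci number."))
```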
reacted to ezgikorkmaz's post with 🔥 21 days ago
If you are interested in adversarial deep reinforcement learning, find the compact reading list below:
https://github.com/EzgiKorkmaz/adversarial-reinforcement-learning
posted an update 22 days ago
I kindly invite you to try my experimental Llama 3.2 3B with o1-like thinking.
It uses <Thought> sections only when needed, so don't be surprised when it skips them. It also has a minor bug that requires further fine-tuning (sometimes it starts with <|python_tag|> instead of <Thought>).
Enjoy!
Give it some likes to make me feel better and keep me motivated to continue!
mkurman/llama-3.2-MEDIT-3B-o1
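If you want to try it quickly, a minimal transformers quick-start might look like this (generation settings are my own defaults, not an official recommendation):

```python
# Quick-start sketch for trying the model; sampling settings are my own
# defaults, not an official recommendation.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mkurman/llama-3.2-MEDIT-3B-o1"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "A train travels 120 km in 1.5 h. Average speed?"}]
inputs = tok.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
out = model.generate(inputs, max_new_tokens=512, do_sample=True, temperature=0.7)
# Keep skip_special_tokens=False to see the <Thought> section (and to spot the
# occasional stray <|python_tag|> the author mentions above).
print(tok.decode(out[0][inputs.shape[-1]:], skip_special_tokens=False))
```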
reacted to reddgr's post with 👍 about 2 months ago
Thought it would only make sense to share this here. Lately, one of my favorite activities has been annotating prompts and putting them into datasets (reddgr/tl-test-learn-prompts, reddgr/rq-request-question-prompts, reddgr/nli-chatbot-prompt-categorization), which I then use to classify and select chatbot conversations for my website. It's quite fun to use this widget on lmsys/lmsys-chat-1m, but I also use it on my two years of talking to chatbots (soon to be a dataset, though a lot of web scraping and ETL work is left)... This one in the picture was probably one of the first prompts I wrote to an LLM:
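For readers curious how such prompt categorization can work, a hedged zero-shot NLI sketch is below; the model choice and the "test"/"learn" labels are my assumptions based on the dataset names, not reddgr's exact setup.

```python
# Hedged sketch of NLI-based prompt categorization like the widget described
# above; model and labels are assumptions inferred from the dataset names.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

prompt = "Write a haiku about autumn rain."
# "tl" presumably stands for test vs. learn, as in tl-test-learn-prompts.
result = classifier(prompt, candidate_labels=["test", "learn"])
print(result["labels"][0], result["scores"][0])
```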
posted an update about 2 months ago
How Do I Contribute (HDIC)
Exciting times to come! We are working on a layer "self-esteem" technique that scores each layer's contribution to the final prediction. For now, it unlocks a lot of knowledge already stored in the weights that we couldn't force the model to extract through further fine-tuning!
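My reading of "layer self-esteem", as a logit-lens-style sketch (the actual HDIC technique may well differ): project each layer's hidden state through the output head and score how strongly it already predicts the model's final token.

```python
# Logit-lens-style sketch of scoring each layer's contribution to the final
# prediction; this is my interpretation, NOT the actual HDIC technique.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tok("The capital of France is", return_tensors="pt")
with torch.no_grad():
    out = model(**inputs, output_hidden_states=True)

final_token = out.logits[0, -1].argmax()
for i, h in enumerate(out.hidden_states):
    # Project the layer's last hidden state through the final norm + LM head.
    logits = model.lm_head(model.transformer.ln_f(h[0, -1]))
    # "Self-esteem" proxy: probability the layer already assigns to the final answer.
    score = logits.softmax(-1)[final_token].item()
    print(f"layer {i:2d}: {score:.4f}")
```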
reacted to AdinaY's post with 🔥 about 2 months ago
HunyuanVideo 📹 The new open video generation model by Tencent!
👉 tencent/HunyuanVideo
zh-ai-community/video-models-666afd86cfa4e4dd1473b64c
✨ 13B parameters: probably the largest open video model to date
✨ Unified architecture for image & video generation
✨ Powered by advanced features: MLLM Text Encoder, 3D VAE, and Prompt Rewrite
✨ Delivers stunning visuals, diverse motion, and unparalleled stability
✨ Fully open with code & weights
reacted to singhsidhukuldeep's post with 🤗 about 2 months ago
Exciting breakthrough in Document AI! Researchers from UNC Chapel Hill and Bloomberg have developed M3DocRAG, a revolutionary framework for multi-modal document understanding.
The innovation lies in its ability to handle complex document scenarios that traditional systems struggle with:
- Process 40,000+ pages across 3,000+ documents
- Answer questions requiring information from multiple pages
- Understand visual elements like charts, tables, and figures
- Support both closed-domain (single document) and open-domain (multiple documents) queries
Under the hood, M3DocRAG operates through three sophisticated stages:
>> Document Embedding:
- Converts PDF pages to RGB images
- Uses ColPali to project both text queries and page images into a shared embedding space
- Creates dense visual embeddings for each page while maintaining visual information integrity
>> Page Retrieval:
- Employs MaxSim scoring to compute relevance between queries and pages
- Implements inverted file indexing (IVFFlat) for efficient search
- Reduces retrieval latency from 20s to under 2s when searching 40K+ pages
- Supports approximate nearest neighbor search via Faiss
>> Question Answering:
- Leverages Qwen2-VL 7B as the multi-modal language model
- Processes retrieved pages through a visual encoder
- Generates answers considering both textual and visual context
The results are impressive:
- State-of-the-art performance on MP-DocVQA benchmark
- Superior handling of non-text evidence compared to text-only systems
- Significantly better performance on multi-hop reasoning tasks
This is a game-changer for industries dealing with large document volumesโfinance, healthcare, and legal sectors can now process documents more efficiently while preserving crucial visual context.
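As a concrete illustration of the retrieval stage, here is a MaxSim sketch over ColPali-style multi-vector page embeddings; dimensions and data are dummy placeholders, and a real deployment would wrap this in a Faiss IVFFlat index as described above.

```python
# Illustrative MaxSim scoring over multi-vector page embeddings, as used by
# ColPali-style retrieval; dimensions and data are dummy placeholders.
import torch

def maxsim(query_emb: torch.Tensor, page_emb: torch.Tensor) -> torch.Tensor:
    """query_emb: (q_tokens, dim); page_emb: (pages, p_tokens, dim)."""
    # Similarity of every query token to every page patch/token...
    sim = torch.einsum("qd,npd->nqp", query_emb, page_emb)
    # ...take the best-matching patch per query token, then sum over the query.
    return sim.max(dim=-1).values.sum(dim=-1)  # shape: (pages,)

query = torch.nn.functional.normalize(torch.randn(12, 128), dim=-1)
pages = torch.nn.functional.normalize(torch.randn(1000, 196, 128), dim=-1)
scores = maxsim(query, pages)
print(scores.topk(4).indices)  # page IDs to pass to the Qwen2-VL answering stage
```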
reacted to cfahlgren1's post with 🔥 about 2 months ago
You can just ask things 🗣️
"show me messages in the coding category that are in the top 10% of reward model scores"
Download really high-quality instructions from the Llama 3.1 405B synthetic dataset 🔥
argilla/magpie-ultra-v1.0
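That natural-language question maps to a short DuckDB query over the dataset's Parquet files; the path pattern and column names below are assumptions, so check the dataset viewer for the real schema.

```python
# Hedged sketch of the same question as SQL over the dataset's Parquet files;
# the hf:// path pattern and the column names are assumptions.
import duckdb

con = duckdb.connect()
rows = con.sql("""
    SELECT instruction, score
    FROM 'hf://datasets/argilla/magpie-ultra-v1.0/**/*.parquet'
    WHERE category = 'coding'                       -- assumed column name
      AND score >= (SELECT quantile_cont(score, 0.9)  -- top 10% by reward score
                    FROM 'hf://datasets/argilla/magpie-ultra-v1.0/**/*.parquet')
    LIMIT 10
""").fetchall()
for r in rows:
    print(r)
```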
"show me messages in the coding category that are in the top 10% of reward model scores"
Download really high quality instructions from the Llama3.1 405B synthetic dataset ๐ฅ
argilla/magpie-ultra-v1.0
replied to their post about 2 months ago
That is an excellent question. I used to just google and search arXiv. Now I try Elicit, "talk" with papers, and listen to "podcasts" on NotebookLM.
replied to their post about 2 months ago
Thanks!
reacted to AdinaY's post with ❤️ about 2 months ago
The 2023 & 2024 top downloaded (all-time) open models on the Hub are both from the Chinese community!
2023: BGE base by BAAI
BAAI/bge-base-en-v1.5
2024: Qwen 2.5 by Alibaba Qwen
Qwen/Qwen2.5-1.5B-Instruct
Can't wait to see what incredible models the Chinese community will bring in 2025!
✨ Follow https://huggingface.co/zh-ai-community to get the latest updates from the Chinese community
✨ Explore the 2024 Year in Review: huggingface/open-source-ai-year-in-review-2024