Karthika Rajagopal R S

KarthikaRajagopal

AI & ML interests

NLP, reinforcement learning, Generative AI

Recent Activity

reacted to singhsidhukuldeep's post with 👀 about 24 hours ago

Organizations

Stanford AI, AI FILMS, MusicAI, BigScience Biomedical Datasets, lora concepts library, Open-Source AI Meetup, Keras Dreambooth Event, Stable Diffusion Dreambooth Concepts Library, LocalLLaMA, MLX Community, Paris AI Running Club, Stable Diffusion Community (Unofficial, Non-profit)

KarthikaRajagopal's activity

reacted to singhsidhukuldeep's post with 👀 about 24 hours ago
Excited to share groundbreaking research in Knowledge Graph-based Retrieval-Augmented Generation (KG-RAG)!

Researchers from the University of Science and Technology of China have developed FRAG - a novel flexible modular framework that revolutionizes how Large Language Models (LLMs) reason with knowledge graphs.

What makes FRAG special? It intelligently adapts retrieval strategies based on query complexity without requiring expensive KG fine-tuning. The framework uses a reasoning-aware module to classify queries as simple or complex, then applies tailored retrieval pipelines.

Under the hood (see the sketch after this list):
- For simple queries: Uses breadth-first search and ranking to efficiently find relevant paths
- For complex queries: Employs shortest path algorithms to minimize computational overhead
- Features a preprocessing-retrieval-postprocessing pipeline with flexible components
- Leverages traditional algorithms like PersonalizedPageRank for subgraph extraction
- Implements edge and path ranking models for precise information filtering
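
For a concrete picture of the routing idea, here is a minimal sketch in Python using networkx. The hop estimator, threshold, helper names, and toy graph are illustrative assumptions for demonstration, not FRAG's actual implementation:

```python
# A minimal, illustrative sketch of complexity-aware KG retrieval as
# described above. The hop estimate, threshold, and toy graph are
# assumptions; the ranking models are omitted.
import networkx as nx


def classify_query(estimated_hops: int, threshold: int = 2) -> str:
    # FRAG's reasoning-aware module predicts how many hops a query needs;
    # here we take that estimate as given and just apply a threshold.
    return "simple" if estimated_hops <= threshold else "complex"


def retrieve_paths(kg: nx.Graph, source: str, target: str, estimated_hops: int):
    if classify_query(estimated_hops) == "simple":
        # Simple queries: breadth-first enumeration of short paths,
        # which would then be ranked (ranking model omitted here).
        return list(nx.all_simple_paths(kg, source, target, cutoff=2))
    # Complex queries: shortest-path search keeps the retrieved paths,
    # and therefore the LLM's reasoning context, small.
    return [nx.shortest_path(kg, source, target)]


def extract_subgraph(kg: nx.Graph, seeds: list[str], top_k: int = 4) -> nx.Graph:
    # PersonalizedPageRank biased toward the query's seed entities,
    # used to carve out a candidate subgraph before path retrieval.
    scores = nx.pagerank(kg, personalization={s: 1.0 for s in seeds})
    keep = sorted(scores, key=scores.get, reverse=True)[:top_k]
    return kg.subgraph(keep)


# Toy knowledge graph: entities as nodes, relations as (unlabeled) edges.
kg = nx.Graph()
kg.add_edges_from([
    ("Paris", "France"), ("France", "Europe"),
    ("Paris", "Seine"), ("Seine", "France"),
])
sub = extract_subgraph(kg, seeds=["Paris"])
print(retrieve_paths(sub, "Paris", "France", estimated_hops=1))
print(retrieve_paths(sub, "Paris", "France", estimated_hops=3))
```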

The results are impressive - FRAG achieves state-of-the-art performance while maintaining high efficiency and low resource consumption. On benchmark datasets like WebQSP and CWQ, it outperforms existing approaches by significant margins.

Most importantly, FRAG maintains flexibility and modularity while improving retrieval quality - no expensive LLM fine-tuning required! This makes it highly practical for real-world applications.

This work represents a major step forward in making LLMs more reliable and capable of complex reasoning tasks. Looking forward to seeing how this technology evolves!
New activity in KarthikaRajagopal/fake_news.h5 about 2 months ago
New activity in KarthikaRajagopal/kaggle_fake_train about 2 months ago
reacted to sayakpaul's post with ❤️❤️ about 2 months ago
It's been a while since we shipped native quantization support in diffusers 🧨

We currently support bitsandbytes as the official backend, but using others like torchao is already very simple.

This post is just a reminder of what's possible (see the sketch after the docs link):

1. Loading a model with a quantization config
2. Saving a model with a quantization config
3. Loading a pre-quantized model
4. enable_model_cpu_offload()
5. Training and loading LoRAs into quantized checkpoints

Docs:
https://huggingface.co/docs/diffusers/main/en/quantization/bitsandbytes
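
As a quick illustration of items 1-3, here is a minimal sketch following the BitsAndBytesConfig API from the docs linked above; the checkpoint ID and output directory are only examples:

```python
# Minimal sketch of loading, saving, and reloading a quantized model with
# diffusers + bitsandbytes, per the docs linked above. The checkpoint ID
# and output directory are illustrative.
import torch
from diffusers import BitsAndBytesConfig, SD3Transformer2DModel

# 1. Load a model with a quantization config (4-bit NF4).
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
transformer = SD3Transformer2DModel.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers",
    subfolder="transformer",
    quantization_config=quant_config,
    torch_dtype=torch.bfloat16,
)

# 2. Save the model; the quantization config is serialized with it.
transformer.save_pretrained("sd3-transformer-nf4")

# 3. Load the pre-quantized checkpoint directly (no config needed).
transformer = SD3Transformer2DModel.from_pretrained("sd3-transformer-nf4")
```

For item 4, note that enable_model_cpu_offload() is called on the assembled pipeline rather than on the individual model.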
New activity in KarthikaRajagopal/kaggle_fake_train about 2 months ago

[bot] Conversion to Parquet

#1 opened about 2 months ago by parquet-converter