
Harpreet Sahota PRO

harpreetsahota

AI & ML interests

Deep learning, language models, prompt engineering, agents, multi-agent systems

Recent Activity

liked a model 3 days ago
facebook/sam2.1-hiera-tiny
liked a model 5 days ago
JeffreyXiang/TRELLIS-image-large
updated a dataset 6 days ago
Voxel51/fisheye8k

Organizations

AI Maker Space · Blog-explorers · DLD · Voxel51 · Social Post Explorers

harpreetsahota's activity

reacted to their post with 🔥🚀 7 months ago
posted an update 7 months ago
The Coachella of Computer Vision, CVPR, is right around the corner. In anticipation of the conference, I curated a dataset of the papers.

I'll have a technical blog post out tomorrow doing some analysis on the dataset, but I'm so hyped that I wanted to get it out to the community ASAP.

The dataset consists of the following fields:

- An image of the first page of the paper
- title: The title of the paper
- authors_list: The list of authors
- abstract: The abstract of the paper
- arxiv_link: Link to the paper on arXiv
- other_link: Link to the project page, if found
- category_name: The primary category of this paper, according to the [arXiv taxonomy](https://arxiv.org/category_taxonomy)
- all_categories: All categories this paper falls into, according to arXiv taxonomy
- keywords: Extracted using GPT-4o

Here's how I created the dataset 👇🏼

Generic code for building this dataset can be found [here](https://github.com/harpreetsahota204/CVPR-2024-Papers).

This dataset was built using the following steps (a rough code sketch follows the list):

- Scrape the CVPR 2024 website for accepted papers
- Use DuckDuckGo to search for a link to the paper's abstract on arXiv
- Use arxiv.py (a Python wrapper for the arXiv API) to extract the abstract and categories, and download the PDF for each paper
- Use pdf2image to save an image of each paper's first page
- Use GPT-4o to extract keywords from the abstract
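
Here's a minimal sketch of the per-paper arXiv step, assuming the arxiv.py and pdf2image packages; the helper below is illustrative, and the full pipeline lives in the repo linked above:

import arxiv  # arxiv.py, the Python wrapper for the arXiv API
from pdf2image import convert_from_path

def fetch_paper(arxiv_id: str) -> dict:
    # Look up the paper on arXiv and download its PDF
    paper = next(arxiv.Client().results(arxiv.Search(id_list=[arxiv_id])))
    pdf_path = paper.download_pdf()

    # Render only the first page of the PDF as the preview image
    first_page = convert_from_path(pdf_path, first_page=1, last_page=1)[0]
    first_page.save(f"{arxiv_id}.png")

    return {
        "title": paper.title,
        "authors_list": [author.name for author in paper.authors],
        "abstract": paper.summary,
        "category_name": paper.primary_category,
        "all_categories": paper.categories,
    }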

Voxel51/CVPR_2024_Papers
replied to jamarks's post 8 months ago
reacted to jamarks's post with 🤯🤗🔥🚀 8 months ago
FiftyOne Datasets <> Hugging Face Hub Integration!

As of yesterday's release of FiftyOne 0.23.8, the FiftyOne open source library for dataset curation and visualization is now integrated with the Hugging Face Hub!

You can now load Parquet datasets from the hub and have them converted directly into FiftyOne datasets. To load MNIST, for example:

pip install -U fiftyone


import fiftyone as fo
import fiftyone.utils.huggingface as fouh

# Load the MNIST Parquet files from the Hub and convert them into a
# FiftyOne dataset, parsing the "label" column as classifications
dataset = fouh.load_from_hub(
    "mnist",
    format="ParquetFilesDataset",
    classification_fields="label",
)

# Visualize the dataset in the FiftyOne App
session = fo.launch_app(dataset)


You can also load FiftyOne datasets directly from the hub. Here's how you load the first 1000 samples from the VisDrone dataset:

import fiftyone as fo
import fiftyone.utils.huggingface as fouh

# Load the first 1000 samples of VisDrone from the Hub
dataset = fouh.load_from_hub("jamarks/VisDrone2019-DET", max_samples=1000)

# Launch the App
session = fo.launch_app(dataset)


And tying it all together, you can push your FiftyOne datasets directly to the hub:

import fiftyone.zoo as foz
import fiftyone.utils.huggingface as fouh

# Load a small sample dataset from the FiftyOne zoo
dataset = foz.load_zoo_dataset("quickstart")

# Push it to the Hub under your namespace
fouh.push_to_hub(dataset, "my-dataset")


Major thanks to @tomaarsen @davanstrien @severo @osanseviero and @julien-c for helping to make this happen!!!

Full documentation and details here: https://docs.voxel51.com/integrations/huggingface.html#huggingface-hub
reacted to danielhanchen's post with ❤️ 10 months ago
Gemma QLoRA finetuning is now 2.4x faster and uses 58% less VRAM than FA2 through 🦥Unsloth! Had to rewrite our Cross Entropy Loss kernels to work on all vocab sizes, re-design our manual autograd engine to accept all activation functions, and more! I wrote all about our learnings in our blog post: https://unsloth.ai/blog/gemma.

I also have a Colab notebook with no OOMs that shows 2x faster inference for Gemma and how to merge and save to llama.cpp GGUF & vLLM: https://colab.research.google.com/drive/10NbwlsRChbma1v55m8LAPYG15uQv6HLo?usp=sharing

And I uploaded 4-bit pre-quantized versions of Gemma 2B and 7B: unsloth/gemma-7b-bnb-4bit and unsloth/gemma-2b-bnb-4bit

from unsloth import FastLanguageModel

# Load the base model and its tokenizer
model, tokenizer = FastLanguageModel.from_pretrained("unsloth/gemma-7b")

# Attach LoRA adapters for parameter-efficient finetuning
model = FastLanguageModel.get_peft_model(model)
reacted to their post with ❤️ 10 months ago
reacted to merve's post with 👍 10 months ago
I've tried DoRA (https://arxiv.org/abs/2402.09353) with SDXL using PEFT, and the outputs are quite detailed 🤩🌟
As usual, I trained on a LEGO dataset I compiled, and I compared the outputs with a previously trained pivotal-tuned model and, before that, the normal DreamBooth model 😊

Notebook by @linoyts https://colab.research.google.com/drive/134mt7bCMKtCYyYzETfEGKXT1J6J50ydT?usp=sharing
Integration to PEFT by @BenjaminB https://github.com/huggingface/peft/pull/1474 (more info in the PR)
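
If you want to try DoRA yourself, here's a minimal sketch, assuming a PEFT version that includes the PR above; the base model and target modules are illustrative:

from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

# Any base model works; GPT-2 keeps the example small
model = AutoModelForCausalLM.from_pretrained("gpt2")

config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["c_attn"],  # GPT-2's fused attention projection
    use_dora=True,              # enable weight-decomposed low-rank adaptation
)

model = get_peft_model(model, config)
model.print_trainable_parameters()
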
reacted to Wauplin's post with 👍🤝❤️ 10 months ago
🚀 Just released version 0.21.0 of the huggingface_hub Python library!

Exciting updates include:
🖇️ Dataclasses everywhere for improved developer experience!
💾 HfFileSystem optimizations!
🧩 PyTorchModelHubMixin now supports configs and safetensors!
✨ audio-to-audio supported in the InferenceClient!
📚 Translated docs in Simplified Chinese and French!
💔 Breaking changes: simplified API for listing models and datasets!

Check out the full release notes for more details: Wauplin/huggingface_hub#4 🤖💻
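
For a feel of the mixin update, here's a toy sketch; the model is hypothetical, and the config handling follows my reading of the release notes rather than code from this post:

import torch.nn as nn
from huggingface_hub import PyTorchModelHubMixin

class ToyModel(nn.Module, PyTorchModelHubMixin):
    def __init__(self, config: dict):
        super().__init__()
        self.linear = nn.Linear(config["hidden_size"], config["hidden_size"])

config = {"hidden_size": 128}
model = ToyModel(config)

# Weights are written as model.safetensors; the config dict lands in config.json
model.save_pretrained("toy-model", config=config)

# from_pretrained reads config.json back and passes it to __init__
reloaded = ToyModel.from_pretrained("toy-model")
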
reacted to clem's post with ❤️ 10 months ago
Terribly excited about open-source + on-device AI these days! Great to see @qualcomm release 80+ models optimized and curated for their devices and chips on HF: https://huggingface.co/qualcomm

posted an update 10 months ago
google/gemma-7b-it is super good!

I wasn't convinced at first, but after vibe-checking it...I'm quite impressed.

I've got a notebook here, which is kind of a framework for vibe-checking LLMs.

In this notebook, I take Gemma for a spin on a variety of prompts:
• [nonsensical tokens](harpreetsahota/diverse-token-sampler)
• [conversation where I try to get some PII](harpreetsahota/red-team-prompts-questions)
• [summarization ability](lighteval/summarization)
• [instruction following](harpreetsahota/Instruction-Following-Evaluation-for-Large-Language-Models)
• [chain of thought reasoning](ssbuild/alaca_chain-of-thought)

I then used LangChain evaluators (GPT-4 as judge) and tracked everything in LangSmith. I made public links to the traces where you can inspect the runs.

I hope you find this helpful, and I am certainly open to feedback, criticisms, or ways to improve.

Cheers!

You can find the notebook here: https://colab.research.google.com/drive/1RHzg0FD46kKbiGfTdZw9Fo-DqWzajuoi?usp=sharing
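
For the curious, here's a minimal sketch of the GPT-4-as-judge pattern using LangChain's criteria evaluator; the prompt and criterion below are illustrative, not taken from the notebook:

from langchain.evaluation import load_evaluator
from langchain_openai import ChatOpenAI

# GPT-4 acts as the judge (requires OPENAI_API_KEY in the environment)
judge = ChatOpenAI(model="gpt-4", temperature=0)
evaluator = load_evaluator("criteria", criteria="helpfulness", llm=judge)

result = evaluator.evaluate_strings(
    input="Summarize the plot of Hamlet in two sentences.",
    prediction="Hamlet is a prince who avenges his father's murder...",
)
print(result["score"], result["reasoning"])
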
reacted to philschmid's post with ❤️ 11 months ago
What's the best way to fine-tune open LLMs in 2024? Look no further! 👀 I am excited to share “How to Fine-Tune LLMs in 2024 with Hugging Face” using the latest research techniques, including Flash Attention, Q-LoRA, OpenAI dataset formats (messages), ChatML, Packing, all built with Hugging Face TRL. 🚀

It is created for consumer-size GPUs (24GB) covering the full end-to-end lifecycle with:
💡 Define and understand use cases for fine-tuning
🧑🏻‍💻 Setup of the development environment
🧮 Create and prepare dataset (OpenAI format)
🏋️‍♀️ Fine-tune LLM using TRL and the SFTTrainer
🥇 Test and evaluate the LLM
🚀 Deploy for production with TGI

👉 https://www.philschmid.de/fine-tune-llms-in-2024-with-trl

Coming soon: Advanced Guides for multi-GPU/multi-Node full fine-tuning and alignment using DPO & KTO. 🔜
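
For a flavor of the recipe, here's a minimal sketch of a QLoRA setup with TRL's SFTTrainer; the model, dataset, and hyperparameters are illustrative, and the exact SFTTrainer arguments vary across TRL versions:

import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig, TrainingArguments
from trl import SFTTrainer

model_id = "mistralai/Mistral-7B-v0.1"

# Load the base model in 4-bit so it fits on a 24GB consumer GPU
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=BitsAndBytesConfig(
        load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16
    ),
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=load_dataset("timdettmers/openassistant-guanaco", split="train"),
    dataset_text_field="text",
    max_seq_length=1024,
    packing=True,  # pack multiple short samples into each sequence
    peft_config=LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM"),
    args=TrainingArguments(output_dir="sft-out", per_device_train_batch_size=1),
)
trainer.train()
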
reacted to abidlabs's post with 🤗❤️ 11 months ago
๐„๐ฆ๐›๐ซ๐š๐œ๐ž๐ ๐›๐ฒ ๐‡๐ฎ๐ ๐ ๐ข๐ง๐  ๐…๐š๐œ๐ž: ๐ญ๐ก๐ž ๐ˆ๐ง๐ฌ๐ข๐๐ž ๐’๐ญ๐จ๐ซ๐ฒ ๐จ๐Ÿ ๐Ž๐ฎ๐ซ ๐’๐ญ๐š๐ซ๐ญ๐ฎ๐ฉโ€™๐ฌ ๐€๐œ๐ช๐ฎ๐ข๐ฌ๐ข๐ญ๐ข๐จ๐ง

In late 2021, our team of five engineers, scattered around the globe, signed the papers to shut down our startup, Gradio. For many founders, this would have been a moment of sadness or even bitter reflection.

But we were celebrating. We were getting acquired by Hugging Face!

We had been working very hard towards this acquisition, but for weeks, the acquisition had been blocked by a single investor. The more we pressed him, the more he buckled down, refusing to sign off on the acquisition. Until, unexpectedly, the investor conceded, allowing us to join Hugging Face.

For the first time since our acquisition, Iโ€™m writing down the story in detail, hoping that it may shed some light into the obscure world of startup acquisitions and what decisions founders can make to improve their odds for a successful acquisition.

To understand how we got acquired by Hugging Face, you need to know why we started Gradio.

An Idea from the Heart

Two years before the acquisition, in early 2019, I was working on a research project at Stanford. It was the third year of my PhD, and my labmates and I had trained a machine learning model that could predict patient biomarkers (such as whether patients had certain diseases or an implanted pacemaker) from an ultrasound image of their heart, as well as a cardiologist could.

Naturally, cardiologists were skeptical... read the rest of the story here: https://twitter.com/abidlabs/status/1745533306492588303
posted an update 11 months ago
โœŒ๐ŸผTwo new models dropped today ๐Ÿ‘‡๐Ÿฝ

1) 👩🏾‍💻 DeciCoder-6B

👉🏽 Supports 8 languages: C, C#, C++, Go, Rust, Python, Java, and JavaScript.

👉🏽 Released under the Apache 2.0 license

🥊 Punches above its weight class on HumanEval: beats out CodeGen 2.5 7B and StarCoder 7B on most supported languages, and has a 3-point lead over StarCoderBase 15.5B for Python

๐Ÿ’ป ๐‘ป๐’“๐’š ๐’Š๐’• ๐’๐’–๐’•:

๐Ÿƒ ๐Œ๐จ๐๐ž๐ฅ ๐‚๐š๐ซ๐: Deci/DeciCoder-6B

๐Ÿ““ ๐๐จ๐ญ๐ž๐›๐จ๐จ๐ค: https://colab.research.google.com/drive/1QRbuser0rfUiFmQbesQJLXVtBYZOlKpB

๐Ÿชง ๐‡๐ฎ๐ ๐ ๐ข๐ง๐ ๐…๐š๐œ๐ž ๐’๐ฉ๐š๐œ๐ž: Deci/DeciCoder-6B-Demo
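
As a quick, hedged sketch of running it with transformers (I'm assuming the checkpoint ships custom modeling code, hence trust_remote_code=True, and that a CUDA GPU is available):

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "Deci/DeciCoder-6B"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(
    checkpoint, torch_dtype=torch.bfloat16, trust_remote_code=True
).to("cuda")

# Complete a Python function signature
inputs = tokenizer("def fibonacci(n):", return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))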

2) 🎨 DeciDiffusion v2.0

👉🏽 Produces quality images on par with Stable Diffusion v1.5, but 2.6 times faster in 40% fewer iterations

👉🏽 Employs a smaller and faster U-Net component, which has 860 million parameters.

👉🏽 Uses an optimized scheduler, SqueezedDPM++, which cuts down the number of steps needed to generate a quality image from 16 to 10.

👉🏽 Released under the CreativeML Open RAIL++-M License.

๐Ÿ’ป ๐‘ป๐’“๐’š ๐’Š๐’• ๐’๐’–๐’•:

๐Ÿƒ ๐Œ๐จ๐๐ž๐ฅ ๐‚๐š๐ซ๐: Deci/DeciDiffusion-v2-0

๐Ÿ““ ๐๐จ๐ญ๐ž๐›๐จ๐จ๐ค: https://colab.research.google.com/drive/11Ui_KRtK2DkLHLrW0aa11MiDciW4dTuB

๐Ÿชง ๐‡๐ฎ๐ ๐ ๐ข๐ง๐ ๐…๐š๐œ๐ž ๐’๐ฉ๐š๐œ๐ž: Deci/DeciDiffusion-v2-0

Help support the projects by liking the model cards and the spaces!

Cheers and happy hacking!