YOLOv9: Learning What You Want to Learn Using Programmable Gradient Information
This is the model repository for YOLOv9, containing the following checkpoints:
- GELAN-C (based on GELAN, the lighter architecture introduced in the YOLOv9 paper)
- GELAN-E
- YOLOv9-C
- YOLOv9-E
How to Use
Clone the YOLOv9 repository.
git clone https://github.com/WongKinYiu/yolov9.git
cd yolov9
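Install the repository's dependencies (this assumes the repo ships the usual requirements.txt, as in YOLOv5-style codebases):
pip install -r requirements.txt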
Download the weights using hf_hub_download and load them with the helper functions from the YOLOv9 repository.
from huggingface_hub import hf_hub_download
hf_hub_download("merve/yolov9", filename="yolov9-c.pt", local_dir="./")
Load the model and run inference.
# make sure you have the following dependencies
import torch
import numpy as np
from models.common import DetectMultiBackend
from utils.general import non_max_suppression, scale_boxes
from utils.torch_utils import select_device, smart_inference_mode
from utils.augmentations import letterbox
import PIL.Image
@smart_inference_mode()
def predict(image_path, weights='yolov9-c.pt', imgsz=640, conf_thres=0.1, iou_thres=0.45):
    # Initialize device and model
    device = select_device('0')  # GPU 0; use select_device('cpu') to run on CPU
    model = DetectMultiBackend(weights=weights, device=device, fp16=False, data='data/coco.yaml')
    stride, names, pt = model.stride, model.names, model.pt

    # Load and preprocess the image
    image = np.array(PIL.Image.open(image_path))
    img = letterbox(image, imgsz, stride=stride, auto=True)[0]  # resize and pad to imgsz
    img = img.transpose(2, 0, 1)  # HWC to CHW (PIL already loads the image as RGB)
    img = np.ascontiguousarray(img)
    img = torch.from_numpy(img).to(device).float()
    img /= 255.0  # normalize pixel values to 0.0 - 1.0
    if img.ndimension() == 3:
        img = img.unsqueeze(0)  # add batch dimension

    # Inference
    pred = model(img, augment=False, visualize=False)

    # Apply NMS
    pred = non_max_suppression(pred[0][0], conf_thres, iou_thres, classes=None, max_det=1000)
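    # Rescale the boxes from the letterboxed size back to the original image size
    # (scale_boxes is imported above from utils.general)
    for det in pred:
        if len(det):
            det[:, :4] = scale_boxes(img.shape[2:], det[:, :4], image.shape).round()

    # return the NMS-filtered detections together with the class names
    return pred, names
A minimal call could then look like the following sketch; the image path is a placeholder, and each detection row is (x1, y1, x2, y2, confidence, class).
detections, class_names = predict("image.jpg", weights="yolov9-c.pt")
for det in detections:
    for *xyxy, conf, cls in det:
        print(class_names[int(cls)], float(conf), [float(x) for x in xyxy])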
Citation
@article{wang2024yolov9,
  title={{YOLOv9}: Learning What You Want to Learn Using Programmable Gradient Information},
  author={Wang, Chien-Yao and Liao, Hong-Yuan Mark},
  journal={arXiv preprint arXiv:2402.13616},
  year={2024}
}
The Colab notebook can be found here.