Introduction


SAIL-VL is a state-of-the-art vision-language model (VLM) developed by the Bytedance Douyin Content Team. The goal of SAIL-VL is to develop a high-performance vision-language model that is easy to deploy on mobile devices and remains accessible and affordable for a broad audience. Through careful tuning of data and training recipes, SAIL-VL demonstrates that even a small VLM can benefit significantly from data scaling. Our model outperforms Qwen2-VL, InternVL2, and even recent SoTA models of comparable size. Details and stronger models are coming soon.

In short, SAIL-VL is a foundational VLM for vision-language applications. We welcome you to explore its capabilities, and feel free to contact us with any questions or opportunities.

News 🚀🚀🚀

Model Card

Model Architecture:

| Architecture | ViT | LLM | Adapter | Token Merge | Resolution |
| --- | --- | --- | --- | --- | --- |
| 🤗SAIL-VL-2B | 🤗InternViT-300M | 🤗Qwen2.5-1.5B | 2-layer MLP | 2x2 | 448x448xN |
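
The table implies an InternVL-style connector: ViT patch tokens from each 448x448 tile are merged 2x2 and projected into the LLM embedding space by the 2-layer MLP adapter. The sketch below illustrates that connector under stated assumptions (a pixel-shuffle-style merge, a 1024-dim InternViT-300M hidden size, a 1536-dim Qwen2.5-1.5B hidden size); it is an illustration, not the released implementation.

import torch
import torch.nn as nn

VIT_DIM = 1024   # assumed InternViT-300M hidden size
LLM_DIM = 1536   # assumed Qwen2.5-1.5B hidden size

def merge_tokens_2x2(x):
    # x: [batch, h, w, c] grid of ViT patch tokens; fold each 2x2 window of
    # tokens into one token with 4x the channels (4x fewer tokens overall).
    b, h, w, c = x.shape
    x = x.reshape(b, h // 2, 2, w // 2, 2, c)
    x = x.permute(0, 1, 3, 2, 4, 5).reshape(b, h // 2, w // 2, 4 * c)
    return x

# the "2-layer MLP" adapter mapping merged vision tokens to LLM embeddings
adapter = nn.Sequential(
    nn.LayerNorm(4 * VIT_DIM),
    nn.Linear(4 * VIT_DIM, LLM_DIM),
    nn.GELU(),
    nn.Linear(LLM_DIM, LLM_DIM),
)

# a 448x448 tile with 14x14 patches gives a 32x32 grid of ViT tokens
vit_tokens = torch.randn(1, 32, 32, VIT_DIM)
merged = merge_tokens_2x2(vit_tokens)         # [1, 16, 16, 4096]
llm_tokens = adapter(merged.flatten(1, 2))    # [1, 256, 1536], 256 tokens per tile
print(llm_tokens.shape)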

Training Recipes Overview:

SAIL-VL benefits from high-quality data and carefully curated training recipes. We find that data quality, data quantity, and the design of the curriculum training pipeline are crucial for model performance. With a proper design and the right data, the model's capacity scales effectively with data expansion at every stage, leading to enhanced performance. More details will be released soon.

Evaluation

SAIL-VL not only outperforms the Qwen2-VL and InternVL2 series of models of comparable size, but is also competitive with recently released SoTA models such as Aquila-VL and InternVL-2.5.

Performance Overview:

The results are evaluated by our team with a VLMEvalKit variant.

Detail Evaluations:

| Benchmark | InternVL-2 | Qwen2-VL | Aquila-VL-2B | InternVL-2.5 | SAIL-VL-2B |
| --- | --- | --- | --- | --- | --- |
| OpenCompass Avg | 55.94 | 57.36 | 60.35 | 61.42 | 62.67 |
| Total Avg | 60.93 | 63.04 | 62.76 | 65.77 | 66.27 |
| GeneralQA Avg | 58.04 | 59.75 | 62.39 | 62.96 | 63.79 |
| OCR Avg | 74.50 | 75.80 | 71.78 | 76.80 | 78.19 |
| MMBench_DEV_CN_V11 | 69.2 | 69.5 | 73.61 | 71.44 | 72.06 |
| MMBench_DEV_EN_V11 | 71.36 | 71.28 | 75.93 | 74.61 | 76.63 |
| MathVista_MINI | 47.5 | 48.2 | 59.3 | 52 | 63.1 |
| MMStar | 49.87 | 46.67 | 55 | 53.4 | 56.73 |
| MMMU_VAL | 33.56 | 38.89 | 41.11 | 42 | 42.67 |
| MMVet | 40.83 | 48.3 | 43.85 | 61.38 | 46.88 |
| HallusionBench | 38.57 | 41.42 | 42.09 | 42.79 | 45.03 |
| AI2D_TEST | 74.22 | 73.35 | 75 | 74.9 | 77.69 |
| OCRBench | 78.3 | 78.6 | 77.2 | 80.2 | 83.2 |
| RealWorldQA | 57.25 | 62.61 | 63.92 | 61.05 | 63.14 |
| InfoVQA_VAL | 57.82 | 63.64 | 48.14 | 61.85 | 62.01 |
| ChartQA_TEST | 76.96 | 74.16 | 79.76 | 79.44 | 82.96 |
| LLaVA_Bench | 52.8 | 57.8 | 54 | 57.5 | 53.5 |
| MME | 66.98 | 69.55 | 64.74 | 75.25 | 71.51 |
| DocVQA_VAL | 86.23 | 85.38 | 74.31 | 87.67 | 86.06 |
| TextVQA_VAL | 73.48 | 79.66 | 76.27 | 76.76 | 77.21 |

Details for the average-performance metrics (a computation sketch follows this list):

  • OpenCompass-Avg includes publicly available validation sets from OpenCompass: AI2D_TEST, HallusionBench, MMBench_DEV_CN_V11, MMBench_DEV_EN_V11, MME, MMMU_DEV_VAL, MMStar, MMVet, and MathVista_MINI, evaluated by our team.

  • GeneralQA-Avg includes MMBench_DEV_CN_V11, MMBench_DEV_EN_V11, MME, MMMU, MMStar and RealWorldQA.

  • OCR-Avg includes AI2D_TEST, InfoVQA_VAL, ChartQA_TEST, DocVQA_VAL, OCRBench, TextVQA_VAL.
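
The GeneralQA and OCR averages are consistent with an unweighted mean of the per-benchmark scores. As a sanity check, the snippet below reproduces the SAIL-VL-2B GeneralQA-Avg and OCR-Avg from the detailed table above (OpenCompass-Avg uses MMMU_DEV_VAL rather than the table's MMMU_VAL, so it is not recomputed here).

from statistics import mean

# SAIL-VL-2B scores copied from the detailed evaluation table
sail_vl_2b = {
    'MMBench_DEV_CN_V11': 72.06, 'MMBench_DEV_EN_V11': 76.63, 'MME': 71.51,
    'MMMU_VAL': 42.67, 'MMStar': 56.73, 'RealWorldQA': 63.14,
    'AI2D_TEST': 77.69, 'InfoVQA_VAL': 62.01, 'ChartQA_TEST': 82.96,
    'DocVQA_VAL': 86.06, 'OCRBench': 83.2, 'TextVQA_VAL': 77.21,
}

general_qa = ['MMBench_DEV_CN_V11', 'MMBench_DEV_EN_V11', 'MME',
              'MMMU_VAL', 'MMStar', 'RealWorldQA']
ocr = ['AI2D_TEST', 'InfoVQA_VAL', 'ChartQA_TEST',
       'DocVQA_VAL', 'OCRBench', 'TextVQA_VAL']

print(f"GeneralQA-Avg: {mean(sail_vl_2b[b] for b in general_qa):.2f}")  # 63.79
print(f"OCR-Avg: {mean(sail_vl_2b[b] for b in ocr):.2f}")               # 78.19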

Demo Cases

We visualize some examples from LLaVA-Bench to show the capabilities of our model. Our model is able to give detailed and comprehensive answers to a variety of questions.

Each case below pairs a question with the model's answer (the source images are not shown here).
Compose an engaging travel blog post about a recent trip to this place, highlighting cultural experiences and must-see attractions, including both the attraction seen in the photo and other must-see attractions as well. Last summer, I had the incredible opportunity to visit Hawaii, and one of the most breathtaking experiences was visiting the famous Diamond Head crater. This iconic landmark is a must-see attraction in Hawaii, and I'm thrilled to share my journey with you. \n As I approached the crater, I was immediately struck by its sheer size and the dramatic landscape surrounding it. The crater itself is a vast, flat-topped mountain with a distinctive bowl shape, and it's surrounded by a rugged coastline. The crater floor is a lush green, contrasting beautifully with the brown and gray hues of the surrounding mountains. \n The most impressive aspect of the crater is the view it offers. From the top, you can see the entire island of Oahu, including the bustling city of Honolulu in the distance. The city's layout is a testament to Hawaii's unique blend of urban development and natural beauty. \n As I walked along the crater's rim, I was treated to breathtaking views of the ocean. The water was a stunning blue, with white-capped waves crashing against the shore. The contrast between the deep blue of the ocean and the lighter blue of the crater's edge was truly mesmerizing. \n One of the highlights of my visit was the crater's interior. While the top is flat, the interior is a steep, rocky mountain. It's a great spot for hiking and offers spectacular views of the surrounding landscape. \n In addition to Diamond Head, there are many other must-see attractions in Hawaii. One of my favorite places was the beautiful Kailua Beach, where I spent a relaxing afternoon swimming and sunbathing. The crystal-clear waters and pristine beaches make it a perfect spot for a day of relaxation. \n Another attraction I couldn't miss was the Pearl Harbor National Memorial. This site is a powerful reminder of Hawaii's history and the sacrifices made by its people during World War II. \n Overall, my trip to Hawaii was a truly unforgettable experience. From the stunning landscapes of Diamond Head to the vibrant cities of Oahu, Hawaii is a place that truly lives up to its nickname as the "Hawaii of the Pacific." I can't wait to return and explore more of this beautiful island paradise.
What is the name of the man in the photo? The man in the photo is Elon Musk. He is holding a small dog in his arms, which is a well-known meme featuring him. Elon Musk is a well-known entrepreneur and CEO of Tesla, SpaceX, and The Boring Company. He is often depicted in this pose, with his arms raised and a small dog in his hands, as a symbol of his love for animals and his playful personality.
Please explain the meme in detail. This meme features a French Bulldog lying on a blue wooden deck. The dog is positioned in the center of the image, with its head resting on the ground and its body stretched out. The meme text reads "MONDAY. JUST...MONDAY." in white capital letters. The text is arranged in two lines, with "MONDAY." at the top and "JUST...MONDAY." at the bottom. The text is slightly tilted to the right, adding a playful touch to the meme. The French Bulldog's relaxed posture and the text "JUST...MONDAY." suggest a sense of resignation or acceptance towards the start of the workweek. The dog's position on the deck, which is a common place for dogs to rest, further emphasizes the idea of a lazy start to the day. The blue wooden deck provides a contrasting background that makes the dog and the text stand out. The meme is a humorous take on the common phrase "just another Monday," which is often used to express the feeling of being tired or unenthusiastic about the start of the workweek. The French Bulldog's presence adds a touch of humor and relatability, as many people can relate to the idea of a dog being the first to wake up and start the day. The meme is a lighthearted way to acknowledge the start of the workweek and the often-sarcastic attitude that comes with it.
Discuss the historical impact and the significance of this painting in the art world. This painting, "Mona Lisa" by Leonardo da Vinci, is one of the most famous and recognizable works of art in the world. Created around 1503-1506, it is a portrait of Lisa Gherardini, the wife of Francesco del Giocondo, and is considered one of the greatest portraits in the history of art. \n The painting's historical impact is immense. It was created during a time of great cultural and artistic flourishing in Italy, a period known as the Renaissance. The Mona Lisa's enigmatic smile and the subtle, almost imperceptible smile on her face have captivated viewers for centuries, sparking countless interpretations and theories about her expression. This ambiguity has contributed to the painting's enduring popularity and its status as a symbol of mystery and intrigue. \n The significance of the Mona Lisa in the art world is multifaceted. It represents the pinnacle of Renaissance portraiture, showcasing the artist's mastery of technique, light, and shadow. The painting's realistic depiction of the sitter's features, combined with the subtle use of color and the soft, diffused lighting, demonstrates da Vinci's innovative approach to portraiture. \n The Mona Lisa's location in the Louvre in Paris, France, has made it one of the most visited artworks in the world. Its presence in the Louvre has contributed to its status as a cultural icon, symbolizing the power of art to transcend time and space. \n The painting's influence on art and culture is profound. It has inspired countless imitations and has been the subject of numerous analyses and interpretations. Its enigmatic smile has become a cultural reference point, appearing in literature, film, and popular culture. \n In conclusion, the Mona Lisa's historical impact and significance are immense. It represents the pinnacle of Renaissance portraiture, showcases da Vinci's innovative approach to art, and has had a profound influence on art and culture worldwide. Its enduring popularity and the countless interpretations it has inspired make it one of the most significant works of art in the history of the world.

How to Use

The basic usage and dynamic crop strategy of SAIL-VL follow InternVL2, so you can easily switch from InternVL-series models to ours. Here is a simple example of using our model:

Requirements (PyTorch, torchvision, and Pillow are assumed to be installed already):

pip3 install einops transformers timm

Code:

import numpy as np
import torch
import torchvision.transforms as T
from PIL import Image
from torchvision.transforms.functional import InterpolationMode
from transformers import AutoModel, AutoTokenizer

IMAGENET_MEAN = (0.485, 0.456, 0.406)
IMAGENET_STD = (0.229, 0.224, 0.225)

def build_transform(input_size):
    # ImageNet-style preprocessing: RGB conversion, bicubic resize to a square
    # tile, tensor conversion and normalization.
    MEAN, STD = IMAGENET_MEAN, IMAGENET_STD
    transform = T.Compose([
        T.Lambda(lambda img: img.convert('RGB') if img.mode != 'RGB' else img),
        T.Resize((input_size, input_size), interpolation=InterpolationMode.BICUBIC),
        T.ToTensor(),
        T.Normalize(mean=MEAN, std=STD)
    ])
    return transform

def find_closest_aspect_ratio(aspect_ratio, target_ratios, width, height, image_size):
    # Choose the tiling grid whose aspect ratio is closest to the image's;
    # ties are broken in favor of larger grids when the image area is big enough.
    best_ratio_diff = float('inf')
    best_ratio = (1, 1)
    area = width * height
    for ratio in target_ratios:
        target_aspect_ratio = ratio[0] / ratio[1]
        ratio_diff = abs(aspect_ratio - target_aspect_ratio)
        if ratio_diff < best_ratio_diff:
            best_ratio_diff = ratio_diff
            best_ratio = ratio
        elif ratio_diff == best_ratio_diff:
            if area > 0.5 * image_size * image_size * ratio[0] * ratio[1]:
                best_ratio = ratio
    return best_ratio

def dynamic_preprocess(image, min_num=1, max_num=10, image_size=448, use_thumbnail=False):
    # Tile the image into between `min_num` and `max_num` square crops using the
    # grid whose aspect ratio best matches the input image.
    orig_width, orig_height = image.size
    aspect_ratio = orig_width / orig_height

    # enumerate candidate tiling grids (i x j) with min_num <= i * j <= max_num
    target_ratios = set(
        (i, j) for n in range(min_num, max_num + 1) for i in range(1, n + 1) for j in range(1, n + 1) if
        i * j <= max_num and i * j >= min_num)
    target_ratios = sorted(target_ratios, key=lambda x: x[0] * x[1])

    # find the closest aspect ratio to the target
    target_aspect_ratio = find_closest_aspect_ratio(
        aspect_ratio, target_ratios, orig_width, orig_height, image_size)

    # calculate the target width and height
    target_width = image_size * target_aspect_ratio[0]
    target_height = image_size * target_aspect_ratio[1]
    blocks = target_aspect_ratio[0] * target_aspect_ratio[1]

    # resize the image
    resized_img = image.resize((target_width, target_height))
    processed_images = []
    for i in range(blocks):
        box = (
            (i % (target_width // image_size)) * image_size,
            (i // (target_width // image_size)) * image_size,
            ((i % (target_width // image_size)) + 1) * image_size,
            ((i // (target_width // image_size)) + 1) * image_size
        )
        # split the image
        split_img = resized_img.crop(box)
        processed_images.append(split_img)
    assert len(processed_images) == blocks
    if use_thumbnail and len(processed_images) != 1:
        thumbnail_img = image.resize((image_size, image_size))
        processed_images.append(thumbnail_img)
    return processed_images
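
# Example: a 448x896 portrait image (aspect ratio 0.5) matches the (1, 2) grid,
# so dynamic_preprocess returns two 448x448 crops plus, with use_thumbnail=True,
# one extra 448x448 global thumbnail (three tiles in total).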

def load_image(image_file, input_size=448, max_num=10):
    image = Image.open(image_file).convert('RGB')
    transform = build_transform(input_size=input_size)
    images = dynamic_preprocess(image, image_size=input_size, use_thumbnail=True, max_num=max_num)
    pixel_values = [transform(image) for image in images]
    pixel_values = torch.stack(pixel_values)
    return pixel_values
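
# Example: with input_size=448, max_num=10 and use_thumbnail=True, a 1344x896
# image matches the (3, 2) grid, so `pixel_values` has shape [7, 3, 448, 448]
# (six crops plus one thumbnail).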

path = "BytedanceDouyinContent/SAIL-VL-2B"
model = AutoModel.from_pretrained(
    path,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True).eval().cuda()
tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True, use_fast=False)

# set the max number of tiles in `max_num`
pixel_values = load_image('./test.png', max_num=10).to(torch.bfloat16).cuda()
generation_config = dict(max_new_tokens=1024, do_sample=True)

# pure-text conversation
question = 'Hello, who are you?'
response, history = model.chat(tokenizer, None, question, generation_config, history=None, return_history=True)
print(f'User: {question}\nAssistant: {response}')

question = 'Can you tell me a story?'
response, history = model.chat(tokenizer, None, question, generation_config, history=history, return_history=True)
print(f'User: {question}\nAssistant: {response}')

# single-image single-round conversation
question = '<image>\nPlease describe the image shortly.'
response = model.chat(tokenizer, pixel_values, question, generation_config)
print(f'User: {question}\nAssistant: {response}')

# single-image multi-round conversation
question = '<image>\nPlease describe the image in detail.'
response, history = model.chat(tokenizer, pixel_values, question, generation_config, history=None, return_history=True)
print(f'User: {question}\nAssistant: {response}')

question = 'Please write a poem according to the image.'
response, history = model.chat(tokenizer, pixel_values, question, generation_config, history=history, return_history=True)
print(f'User: {question}\nAssistant: {response}')
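
If SAIL-VL also inherits InternVL2's multi-image interface (not verified here), tiles from several images can be concatenated and separated with `num_patches_list`. The sketch below follows the InternVL2 calling convention; `num_patches_list`, the file names, and the prompt format are assumptions, not documented SAIL-VL features.

# multi-image conversation (InternVL2-style, assumed to carry over)
pixel_values1 = load_image('./image1.png', max_num=10).to(torch.bfloat16).cuda()
pixel_values2 = load_image('./image2.png', max_num=10).to(torch.bfloat16).cuda()
pixel_values = torch.cat((pixel_values1, pixel_values2), dim=0)
num_patches_list = [pixel_values1.size(0), pixel_values2.size(0)]

question = 'Image-1: <image>\nImage-2: <image>\nWhat are the differences between the two images?'
response = model.chat(tokenizer, pixel_values, question, generation_config,
                      num_patches_list=num_patches_list)
print(f'User: {question}\nAssistant: {response}')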

Acknowledgements

Our model is built upon numerous outstanding open-source projects, and we are grateful for their contributions. We extend special thanks to the InternVL team and Qwen team for their great base models, and to the BAAI team (Infinity-MM) for their generous release of data.

Citation

@misc{sailvl,
    title = {SAIL-VL: Scalable Vision Language Model Training with High Quality Data Curation},
    url = {https://huggingface.co/BytedanceDouyinContent/SAIL-VL-2B/},
    author = {Bytedance Douyin Content Team},
    month = {December},
    year = {2024}
}

License

This project is licensed under Apache License 2.0.

Contact

If you have any questions, please feel free to contact us: [email protected]
