---
inference: false
---

# LLaVA Model Card

## Model details

This is a fork of the original [liuhaotian/llava-v1.5-7b](https://huggingface.co/liuhaotian/llava-v1.5-7b). This repo adds `code/inference.py` and `code/requirements.txt` to provide a customized inference script and environment for SageMaker deployment.

**Model type:**
LLaVA is an open-source chatbot trained by fine-tuning LLaMA/Vicuna on GPT-generated multimodal instruction-following data. It is an auto-regressive language model based on the transformer architecture.

**Model date:**
LLaVA-v1.5-7B was trained in September 2023.

**Paper or resources for more information:**
https://llava-vl.github.io/

## How to Deploy on SageMaker

Following `deploy_llava.ipynb`, bundle the LLaVA model weights and code into a `model.tar.gz` and upload it to S3:

```python
import sagemaker
from sagemaker.s3 import S3Uploader

sess = sagemaker.Session()  # default SageMaker session

# upload model.tar.gz to S3
s3_model_uri = S3Uploader.upload(
    local_path="./model.tar.gz",
    desired_s3_uri=f"s3://{sess.default_bucket()}/llava-v1.5-7b",
)
print(f"model uploaded to: {s3_model_uri}")
```

Then use `HuggingFaceModel` to deploy a real-time inference endpoint on SageMaker:

```python
from sagemaker import get_execution_role
from sagemaker.huggingface.model import HuggingFaceModel

role = get_execution_role()  # IAM role with permissions to create an endpoint

# create the Hugging Face model class
huggingface_model = HuggingFaceModel(
    model_data=s3_model_uri,        # path to your model and script
    role=role,
    transformers_version="4.28.1",  # transformers version used
    pytorch_version="2.0.0",        # pytorch version used
    py_version="py310",             # python version used
    model_server_workers=1,
)

# deploy a real-time endpoint
predictor = huggingface_model.deploy(
    initial_instance_count=1,
    instance_type="ml.g5.xlarge",
)
```

Once the endpoint is in service, it can be called with `predictor.predict(...)`; a sketch is included at the end of this card.

## License

Llama 2 is licensed under the LLAMA 2 Community License, Copyright (c) Meta Platforms, Inc. All Rights Reserved.

**Where to send questions or comments about the model:**
https://github.com/haotian-liu/LLaVA/issues

## Intended use

**Primary intended uses:**
The primary use of LLaVA is research on large multimodal models and chatbots.

**Primary intended users:**
The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.

## Training dataset

- 558K filtered image-text pairs from LAION/CC/SBU, captioned by BLIP.
- 158K GPT-generated multimodal instruction-following data.
- 450K academic-task-oriented VQA data mixture.
- 40K ShareGPT data.

## Evaluation dataset

A collection of 12 benchmarks, including 5 academic VQA benchmarks and 7 recent benchmarks specifically proposed for instruction-following LMMs.
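## Example invocation

A minimal sketch of calling the deployed endpoint. The actual request schema is defined by the custom `code/inference.py` in `model.tar.gz`, so the payload keys below (`image`, `question`, `max_new_tokens`, `temperature`) and the example image URL are assumptions for illustration, not guaranteed fields:

```python
# hypothetical payload -- the real schema is whatever code/inference.py expects
data = {
    "image": "https://example.com/view.jpg",       # placeholder image URL
    "question": "Describe this image in detail.",  # user prompt
    "max_new_tokens": 1024,
    "temperature": 0.2,
}

# send the request to the real-time endpoint and print the generated text
output = predictor.predict(data)
print(output)

# clean up when finished to stop incurring charges
predictor.delete_model()
predictor.delete_endpoint()
```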