How to Effectively Run the Idefics 3 Model on AWS SageMaker for Inference

#15
by Mehyaar - opened

Hi everyone,

I’m currently working on a project where I need to run the Idefics 3 model for inference on a dataset of 19,000 images. I plan to use AWS SageMaker for this task.

Could anyone provide guidance on the following:

  1. Configuration: What are the best practices for configuring AWS SageMaker to handle such a large inference task efficiently?

  2. Instance Selection: Are there specific instance types or configurations that would be particularly suitable for running the Idefics 3 model on a dataset of this size?

  3. Performance Optimization: Any tips or considerations for optimizing performance and managing costs during this process?

  4. Integration: Are there any specific steps or scripts required to integrate and run the Idefics 3 model smoothly on SageMaker?
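
For question 4, here is the rough shape of deployment I currently have in mind. This is only a sketch: the container version, instance type, and model id below are placeholder assumptions on my side, not tested choices.

```python
# Rough sketch only: container version, instance type, and model id are placeholders.
import sagemaker
from sagemaker.huggingface import HuggingFaceModel, get_huggingface_llm_image_uri

role = sagemaker.get_execution_role()

# Hugging Face TGI inference container; the version here is an assumption on my part
image_uri = get_huggingface_llm_image_uri("huggingface", version="2.2.0")

model = HuggingFaceModel(
    role=role,
    image_uri=image_uri,
    env={
        "HF_MODEL_ID": "HuggingFaceM4/Idefics3-8B-Llama3",  # assumed checkpoint
        "SM_NUM_GPUS": "1",
    },
)

# Real-time endpoint on a single-GPU instance (placeholder choice)
predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.g5.2xlarge",
)
```

Happy to change this approach if a batch transform job or an async endpoint would be a better fit for a one-off pass over 19,000 images.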

Any insights or experiences you can share would be incredibly helpful!

Thank you in advance!

Best,
Mehyar

HuggingFaceM4 org

The most important parameter for you is size={"longest_edge": N*364} (detailed further in the model card), which sets the number of tokens used for each image and therefore the efficiency at inference.
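
As a rough illustration of what I mean (the checkpoint name and the value of N here are just placeholders; check the model card for the exact usage):

```python
# Illustrative sketch: checkpoint name and N are placeholders.
import torch
from transformers import AutoProcessor, AutoModelForVision2Seq

model_id = "HuggingFaceM4/Idefics3-8B-Llama3"  # assumed checkpoint

# Smaller N -> images resized to a shorter longest edge -> fewer image tokens
# per image -> cheaper and faster inference, at some cost in visual detail.
N = 2
processor = AutoProcessor.from_pretrained(model_id, size={"longest_edge": N * 364})

model = AutoModelForVision2Seq.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
```

Over 19,000 images, the choice of N is likely the biggest single lever on cost per image.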

Are there any specific steps required to run the Idefics model on SageMaker?

I've never used SageMaker, so I don't know, sorry.

Alright, thanks mate!

Mehyaar changed discussion status to closed
