Yet Another Smaller Version Request

#7
by chilegazelle - opened

Yet another request, in case the previous one gets closed. MiniMaxAI/MiniMax-VL-01 is undoubtedly the best VL model, but its size makes deployment really tough. It would be amazing to have a smaller distilled version, or at least a well-optimized quantized one, to make it more accessible while preserving its VL capabilities. Hoping this gets considered!

A good example of this is what DeepSeek did with DeepSeek-R1. They applied knowledge distillation to create smaller, more efficient versions while keeping strong reasoning capabilities. Something similar for MiniMax-VL-01 could make deployment much easier while maintaining its VL strengths.
