GPTQ failed submission request

#669
by AlanRobotics - opened

@AlanRobotics I'm not staff, but I couldn't find the model "multimodal" in your profile. So out of curiosity can you tell me why I can't find it? Was it deleted, renamed or something else? Thanks.

Open LLM Leaderboard org

Hi!
In the logs, we get the following error at generation time - are you sure your model is correctly implemented?

  File "python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "python3.10/site-packages/auto_gptq/nn_modules/qlinear/qlinear_cuda_old.py", line 221, in forward
    self.autogptq_cuda.vecquant4matmul_old(x, self.qweight, out, self.scales.float(), self.qzeros, self.group_size)
RuntimeError: Unrecognized tensor type ID: AutocastCUDA

Which version of auto_gptq are you using?
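For context, `RuntimeError: Unrecognized tensor type ID: AutocastCUDA` typically surfaces when a custom CUDA extension (like auto_gptq's old fused qlinear kernel) receives tensors while torch autocast is active, a dispatch path the extension doesn't handle. A minimal CPU-only sketch of how autocast changes the dtype seen by downstream ops (illustrative only; reproducing the actual failure needs a CUDA build of auto_gptq):

```python
import torch

# Under autocast, matmuls run in a reduced precision; a custom kernel that
# only expects plain CUDA float tensors can then fail at dispatch time.
a = torch.randn(4, 4)
b = torch.randn(4, 4)

with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    c = a @ b  # executed in bfloat16 inside the autocast region

print(c.dtype)
```

If that is the cause, running generation outside any `torch.autocast` context (or upgrading to an auto_gptq version whose kernels are autocast-aware) would be the direction to investigate.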

@Phil337 hi, model is private right now

@clefourrier Thanks for the answer. I believe it was auto-gptq==0.4.2, but when I quantized with later versions my jobs crashed too. What's your recommendation?
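As a quick way to confirm which auto_gptq version is actually installed in the failing environment (a sketch using the standard library; `package_version` is a helper name chosen here, not part of any library):

```python
from importlib import metadata

def package_version(name: str) -> str:
    """Return the installed version string for a package, or 'not installed'."""
    try:
        return metadata.version(name)
    except metadata.PackageNotFoundError:
        return "not installed"

# Distribution names are normalized, so "auto-gptq" matches the pip package.
print(package_version("auto-gptq"))
```

Equivalently, `pip show auto-gptq` from a shell reports the same information.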

Open LLM Leaderboard org
edited Apr 12

We should be using auto_gptq >= 0.4.2, so no mismatch there.

What do you get when you try inference with your models?

Open LLM Leaderboard org

Closing for inactivity

clefourrier changed discussion status to closed

Hello @clefourrier ! Would it be possible to also add the AWQ lib to your environment so that we can submit AWQ models?

Open LLM Leaderboard org

Hi! Yes, we can consider adding it as a feature - could you open a separate discussion so we can keep track of it? (it won't be for the next month though)
