Tensor size mismatch when loading the model on CPU

#2
by AMITA94 - opened

I wanted to try this model locally, so the only change I made was to the quantization config so it could run on CPU: I switched from `load_in_8bit` to `load_in_8bit_fp32_cpu_offload=True`.

After that, the following error occurs while the model is loading. Is there a way to fix this?

```
  File "inference.py", line 14, in <module>
    model = AutoModelForCausalLM.from_pretrained(
  File "/home/waiker/shyoon/venv/solar_translate/lib/python3.8/site-packages/transformers/models/auto/auto_factory.py", line 467, in from_pretrained
    return model_class.from_pretrained(
  File "/home/waiker/shyoon/venv/solar_translate/lib/python3.8/site-packages/transformers/modeling_utils.py", line 2777, in from_pretrained
    ) = cls._load_pretrained_model(
  File "/home/waiker/shyoon/venv/solar_translate/lib/python3.8/site-packages/transformers/modeling_utils.py", line 3118, in _load_pretrained_model
    new_error_msgs, offload_index, state_dict_index = _load_state_dict_into_meta_model(
  File "/home/waiker/shyoon/venv/solar_translate/lib/python3.8/site-packages/transformers/modeling_utils.py", line 702, in _load_state_dict_into_meta_model
    set_module_tensor_to_device(model, param_name, param_device, **set_module_kwargs)
  File "/home/waiker/shyoon/venv/solar_translate/lib/python3.8/site-packages/accelerate/utils/modeling.py", line 281, in set_module_tensor_to_device
    raise ValueError(
ValueError: Trying to set a tensor of shape torch.Size([1024, 8192]) in "weight" (which has shape torch.Size([8192, 8192])), this look incorrect.
```
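For reference, here is a minimal, simplified sketch (not the actual accelerate source) of the shape check inside `accelerate.utils.modeling.set_module_tensor_to_device` that raises this error: the tensor read from the checkpoint shard must match the shape of the parameter that `transformers` pre-created on the meta device, so a mismatch like `[1024, 8192]` vs `[8192, 8192]` usually means the config built a different architecture (or a differently packed quantized weight) than the one the checkpoint was saved from.

```python
# Hypothetical, simplified version of the check that produces the
# ValueError above. Function name and message are for illustration only.
def check_tensor_shape(expected_shape, loaded_shape, tensor_name="weight"):
    """Raise if a checkpoint tensor does not fit the model's parameter."""
    if expected_shape != loaded_shape:
        raise ValueError(
            f"Trying to set a tensor of shape {loaded_shape} in "
            f'"{tensor_name}" (which has shape {expected_shape}), '
            f"this looks incorrect."
        )


# The shapes reported in the traceback above:
try:
    check_tensor_shape((8192, 8192), (1024, 8192))
except ValueError as e:
    print(e)
```

This is only meant to show what the loader is complaining about, not how to fix it; the fix lies in making the quantization config consistent with the checkpoint rather than in the shape check itself.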
