Errors occur when running `model = transformers.AutoModel.from_pretrained("/my_path/model", trust_remote_code=True)`

#8
by finefine - opened

Env: Transformers version: 4.48.1 (current newest)

2 problems:

  1. A TypeError is raised: `no_grad.__init__() takes 1 positional argument but 2 were given`.

```
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
Cell In[2], line 1
----> 1 model = transformers.AutoModel.from_pretrained("/data/yy/llm/cde_model/cde-small-v2", trust_remote_code=True)

File ~/miniconda3/envs/yi_dev/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py:526, in _BaseAutoModelClass.from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
    523 if kwargs.get("quantization_config", None) is not None:
    524     _ = kwargs.pop("quantization_config")
--> 526 config, kwargs = AutoConfig.from_pretrained(
    527     pretrained_model_name_or_path,
    528     return_unused_kwargs=True,
    529     trust_remote_code=trust_remote_code,
    530     code_revision=code_revision,
    531     _commit_hash=commit_hash,
    532     **hub_kwargs,
    533     **kwargs,
    534 )
    536 # if torch_dtype=auto was passed here, ensure to pass it on
    537 if kwargs_orig.get("torch_dtype", None) == "auto":

File ~/miniconda3/envs/yi_dev/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py:1063, in AutoConfig.from_pretrained(cls, pretrained_model_name_or_path, **kwargs)
   1061 if has_remote_code and trust_remote_code:
   1062     class_ref = config_dict["auto_map"]["AutoConfig"]
-> 1063     config_class = get_class_from_dynamic_module(
   1064         class_ref, pretrained_model_name_or_path, code_revision=code_revision, **kwargs
   1065     )
   1066 if os.path.isdir(pretrained_model_name_or_path):
   1067     config_class.register_for_auto_class()

File ~/miniconda3/envs/yi_dev/lib/python3.10/site-packages/transformers/dynamic_module_utils.py:553, in get_class_from_dynamic_module(class_reference, pretrained_model_name_or_path, cache_dir, force_download, resume_download, proxies, token, revision, local_files_only, repo_type, code_revision, **kwargs)
    540 # And lastly we get the class inside our newly created module
    541 final_module = get_cached_module_file(
    542     repo_id,
    543     module_file + ".py",
   (...)
    551     repo_type=repo_type,
    552 )
--> 553 return get_class_in_module(class_name, final_module, force_reload=force_download)

File ~/miniconda3/envs/yi_dev/lib/python3.10/site-packages/transformers/dynamic_module_utils.py:250, in get_class_in_module(class_name, module_path, force_reload)
    248 # reload in both cases, unless the module is already imported and the hash hits
    249 if getattr(module, "transformers_module_hash", "") != module_hash:
--> 250     module_spec.loader.exec_module(module)
    251 module.transformers_module_hash = module_hash
    252 return getattr(module, class_name)

File <frozen importlib._bootstrap_external>:883, in exec_module(self, module)

File <frozen importlib._bootstrap>:241, in _call_with_frames_removed(f, *args, **kwds)

File ~/.cache/huggingface/modules/transformers_modules/cde-small-v2/model.py:269
    264 else:
    265     return t[min_row:max_row]
    268 @torch.no_grad
--> 269 def maxsim(
    270     X: torch.Tensor, y: torch.Tensor,
    271     maximize: bool, chunk_size: int = 8_000,
    272     debug_mem_usage: bool = False) -> torch.Tensor:
    273     device = X.device
    274     n_samples = X.shape[0]

TypeError: no_grad.__init__() takes 1 positional argument but 2 were given
```

The key issue occurs at line 268 of model.py:

```
268 @torch.no_grad
269 def maxsim(...
```
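For reference, here is a minimal sketch of the two decorator forms (`maxsim_stub` and the tensors are hypothetical, just to illustrate the failure mode). On older PyTorch, the bare form `@torch.no_grad` passes the decorated function straight into `no_grad.__init__`, which takes no arguments, hence the TypeError; the called form works everywhere:

```python
import torch

@torch.no_grad()  # parentheses create a no_grad instance that then wraps the function
def maxsim_stub(X: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    # stand-in body for the real maxsim; gradients are disabled inside
    return X @ y.T

X = torch.randn(4, 8, requires_grad=True)
y = torch.randn(3, 8, requires_grad=True)
out = maxsim_stub(X, y)
assert not out.requires_grad  # no_grad suppressed autograd tracking
```

Newer PyTorch (2.x) special-cases the parenthesis-free form, which is why the model code ships with the bare `@torch.no_grad`.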

If I change `@torch.no_grad` to `@torch.no_grad()`, following Copilot's suggestion, the second problem shows up:

  2. It forcibly downloads the answerdotai/ModernBERT-base model, and if my internet connection is bad, the download repeatedly fails.
    If both jxm/cde-small-v2 and answerdotai/ModernBERT-base are essential, where can I place a manually downloaded copy of answerdotai/ModernBERT-base?

I have tried placing the ModernBERT-base folder (with config.json etc.) as a subfolder of the cde-small-v2 folder, but that doesn't work.
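Not an official answer, but a workaround that often helps here: pre-fetch the backbone into the Hugging Face cache while you have connectivity, then run fully offline. This sketch assumes the remote model code loads `answerdotai/ModernBERT-base` by its Hub id, so a populated cache satisfies it:

```python
# Step 1 (run once, with connectivity): pull the backbone into the local
# Hugging Face cache (~/.cache/huggingface/hub by default).
from huggingface_hub import snapshot_download
snapshot_download("answerdotai/ModernBERT-base")

# Step 2 (separate run): set HF_HUB_OFFLINE=1 in the shell *before* launching
# Python, so huggingface_hub reads it at import time, e.g.
#   HF_HUB_OFFLINE=1 python my_script.py
# Every from_pretrained call, including the one the remote model code makes
# for answerdotai/ModernBERT-base, is then resolved from the cache.
import transformers
model = transformers.AutoModel.from_pretrained("/my_path/model", trust_remote_code=True)
```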

Owner

Hi @finefine! `torch.no_grad` is a decorator that can be applied to functions. It's not working properly for you, which makes me think something is wrong with your torch. Please update to the latest PyTorch and transformers installations and these problems should go away.
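A quick way to check whether the installed torch supports the bare-decorator form that model.py relies on (minimal sketch; `double` is just an example function):

```python
import torch
print(torch.__version__)

# Recent PyTorch (2.x) accepts the parenthesis-free decorator form used in
# model.py; older versions raise the exact TypeError from the report above.
@torch.no_grad
def double(x: torch.Tensor) -> torch.Tensor:
    return x * 2

print(double(torch.ones(2)))  # tensor([2., 2.])
```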

jxm changed discussion status to closed
