Ran into an issue while trying to sample more than one sequence

#27
by joeysss

RuntimeError: shape mismatch: value tensor of shape [5, 8, 192, 256] cannot be broadcast to indexing result of shape [1, 8, 192, 256]

I know this is caused by sampling, because when I changed "num_return_sequences" to 1 the error disappeared.
Is this a bug in the model code, or a bug on my end?
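For reference, here is a minimal sketch of the kind of call that triggers it; the checkpoint name, prompt, and generation parameters are assumptions, not my exact script:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-2-9b-it"  # assumed checkpoint, any gemma-2 variant behaves the same for me
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

inputs = tokenizer("Write a function that reverses a string.", return_tensors="pt").to(model.device)

# Works with num_return_sequences=1; raises the RuntimeError above with >1.
output = model.generate(
    **inputs,
    do_sample=True,
    num_return_sequences=5,
    max_new_tokens=256,
)
print(tokenizer.batch_decode(output, skip_special_tokens=True))
```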

Full traceback:

Traceback (most recent call last):
File "/data/home/xxx/anaconda3/envs/train/lib/python3.10/site-packages/torch/multiprocessing/spawn.py", line 75, in _wrap
fn(i, *args)
File "/data/home/xxx/paper_code/eval_scripts/open_source_coder_inference_mp.py", line 211, in inference_proc
raise e
File "/data/home/xxx/paper_code/eval_scripts/open_source_coder_inference_mp.py", line 186, in inference_proc
output = model.generate(
File "/data/home/xxx/anaconda3/envs/train/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/data/home/xxx/anaconda3/envs/train/lib/python3.10/site-packages/transformers/generation/utils.py", line 1914, in generate
result = self._sample(
File "/data/home/xxx/anaconda3/envs/train/lib/python3.10/site-packages/transformers/generation/utils.py", line 2651, in _sample
outputs = self(
File "/data/home/xxx/anaconda3/envs/train/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/data/home/xxx/anaconda3/envs/train/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
File "/data/home/xxx/anaconda3/envs/train/lib/python3.10/site-packages/transformers/models/gemma2/modeling_gemma2.py", line 1068, in forward
outputs = self.model(
File "/data/home/xxx/anaconda3/envs/train/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/data/home/xxx/anaconda3/envs/train/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
File "/data/home/xxx/anaconda3/envs/train/lib/python3.10/site-packages/transformers/models/gemma2/modeling_gemma2.py", line 908, in forward
layer_outputs = decoder_layer(
File "/data/home/xxx/anaconda3/envs/train/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/data/home/xxx/anaconda3/envs/train/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
File "/data/home/xxx/anaconda3/envs/train/lib/python3.10/site-packages/transformers/models/gemma2/modeling_gemma2.py", line 650, in forward
hidden_states, self_attn_weights, present_key_value = self.self_attn(
File "/data/home/xxx/anaconda3/envs/train/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/data/home/xxx/anaconda3/envs/train/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
File "/data/home/xxx/anaconda3/envs/train/lib/python3.10/site-packages/transformers/models/gemma2/modeling_gemma2.py", line 341, in forward
key_states, value_states = past_key_value.update(key_states, value_states, self.layer_idx, cache_kwargs)
File "/data/home/xxx/anaconda3/envs/train/lib/python3.10/site-packages/transformers/cache_utils.py", line 1071, in update
return update_fn(
File "/data/home/xxx/anaconda3/envs/train/lib/python3.10/site-packages/transformers/cache_utils.py", line 1046, in _static_update
k_out[:, :, cache_position] = key_states
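The failing line is the static KV-cache update: the cache tensor appears to have been allocated for the un-expanded batch, while sampling expands the key/value states by num_return_sequences. A minimal sketch of the same mismatch (batch size 1 and the cache length are assumptions; the head, sequence, and head-dim sizes are taken from the error message):

```python
import torch

# k_out mimics a pre-allocated static cache entry: (batch, kv_heads, max_cache_len, head_dim).
# Batch size 1 and max_cache_len 8192 are assumed; the other sizes match the error message.
k_out = torch.zeros(1, 8, 8192, 256)
key_states = torch.randn(5, 8, 192, 256)   # batch expanded to 5 by num_return_sequences
cache_position = torch.arange(192)

# Raises the same error as in the traceback:
# RuntimeError: shape mismatch: value tensor of shape [5, 8, 192, 256]
# cannot be broadcast to indexing result of shape [1, 8, 192, 256]
k_out[:, :, cache_position] = key_states
```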

Your issue seems similar to mine here: https://huggingface.co/google/gemma-2-9b-it/discussions/40#66bd81baac86b9411ec14281
Have you found a workaround?

@RaccoonOnion Not yet. I've simply given up on this model, since it was just one of my baseline candidates.

Google org

Hi @joeysss , could you please try again after updating transformers to the latest version with !pip install -U transformers, as mentioned by @RaccoonOnion in the linked discussion? If the issue still persists, please share reproducible code so we can replicate the error. Thank you.
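For anyone else landing here, a quick way to confirm the installed version after upgrading (a generic check, nothing specific to this repo):

```python
# After upgrading with: pip install -U transformers
import transformers
print(transformers.__version__)  # compare against the latest release on PyPI
```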

Hi @Renu11 , the issue no longer persists; it works fine now.
