repo | number | state | title | body | created_at | closed_at | comments
---|---|---|---|---|---|---|---
transformers | 24,703 | closed | Suppress warnings from LUKE for unexpected keys | # What does this PR do?
Suppress the warnings when instantiating the LUKE models by adding `_keys_to_ignore_on_load_unexpected`.
## Problem
Currently, when you instantiate certain LUKE models from the Hugging Face Hub, such as
```python
from transformers import AutoModel

model = AutoModel.from_pretrained("studio-ousia/mluke-base-lite")
```
you receive a warning indicating that a bunch of weights were not loaded.
```
Some weights of the model checkpoint at studio-ousia/mluke-base-lite were not used when initializing LukeModel: [
'luke.encoder.layer.0.attention.self.w2e_query.weight', 'luke.encoder.layer.0.attention.self.w2e_query.bias',
'luke.encoder.layer.0.attention.self.e2w_query.weight', 'luke.encoder.layer.0.attention.self.e2w_query.bias',
'luke.encoder.layer.0.attention.self.e2e_query.weight', 'luke.encoder.layer.0.attention.self.e2e_query.bias',
...]
```
This seems to depend on the logging settings and is observed in Google Colab notebooks.
https://colab.research.google.com/drive/1kYN3eGhx5tzEMnGkUz2jPsdmFyEBwxFA?usp=sharing
This behavior is expected since these weights are optional and only loaded when `use_entity_aware_attention` is set to `True`. However, it has caused confusion among users, as evidenced by the following issues:
https://github.com/studio-ousia/luke/issues/174
https://huggingface.co/studio-ousia/mluke-base/discussions/2#63be8cc6c26a8a4d713ee08a
## Solution
I added `_keys_to_ignore_on_load_unexpected` to `LukePreTrainedModel` to ignore these unexpected keys in the pretrained weights.
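Roughly, the change amounts to a class attribute on the base class. A sketch of what it looks like (the regex below is illustrative, based on the key names in the warning above; the exact patterns in the PR may differ):
```python
from transformers import LukeConfig, PreTrainedModel


class LukePreTrainedModel(PreTrainedModel):
    config_class = LukeConfig
    base_model_prefix = "luke"

    # Ignore the optional entity-aware attention projections when they are present in a
    # checkpoint but the model was instantiated with use_entity_aware_attention=False.
    _keys_to_ignore_on_load_unexpected = [
        r"encoder\.layer\.\d+\.attention\.self\.(w2e|e2w|e2e)_query\.(weight|bias)",
    ]
```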
| 07-07-2023 04:05:17 | 07-07-2023 04:05:17 | _The documentation is not available anymore as the PR was closed or merged._<|||||>I believe this should not be done this way. These keys should be used only if the default behavior in the modeling code will have different keys than the canonical (original) checkpoints on the Hub.
But before further discussion, let's check one thing first:
the config in `studio-ousia/mluke-base-lite` has
```json
"use_entity_aware_attention": true,
```
Are you sure this is the checkpoint that causes confusion ..?<|||||>~~My wording above is not precise. I will update that comment.~~
These keys should be used only if:
- a model loading from a checkpoint that is saved with `from_pretrained` (without changing the config during loading) will have some unexpected weight keys.
- a HF checkpoint is created that has some extra keys (in order to respect the original non-HF checkpoint) which is not really used in the model (and the HF modeling code is written to avoid having such un-used keys)
<|||||>I have run
```python
from transformers import AutoModel
model = AutoModel.from_pretrained("studio-ousia/mluke-base-lite")
```
but didn't receive any warning.<|||||>Thanks @ydshieh for taking a look at the PR!
> Are you sure this is the checkpoint that causes confusion ..?
When I look at the latest version of the config on the following models, I find `"use_entity_aware_attention": false`.
https://huggingface.co/studio-ousia/mluke-base-lite/blob/3775c9b1470636e206c38cbb1b964ba883421164/config.json#L33
> but didn't receive any warning.
The following Google Colab notebook shows the warning.
https://colab.research.google.com/drive/1kYN3eGhx5tzEMnGkUz2jPsdmFyEBwxFA?usp=sharing
Probably it depends on some logging settings given by the environment, but it does show the warnings in some cases.
<|||||>> These keys should be used only if:
> - a model loading from a checkpoint that is saved with from_pretrained (without changing the config during loading) will have some unexpected weight keys.
> - a HF checkpoint is created that has some extra keys (in order to respect the original non-HF checkpoint) which is not really used in the model (and the HF modeling code is written to avoid having such un-used keys)
I believe that this PR is similar to the second point mentioned above.
The HF checkpoint is derived from the original checkpoint generated by the [original repository](https://github.com/studio-ousia/luke). The checkpoint contains additional keys (`luke.encoder.layer.*.attention.self.*_query.*`), which are only utilized when the entity-aware attention mechanism is enabled during fine-tuning.
Entity-aware attention is an optional feature and is disabled by default, because that is the setting used in the [original paper](https://aclanthology.org/2022.acl-long.505/).
I would like to address the problem of the confusing and overwhelming warnings that appear even under the default behavior.
I would appreciate your further elaboration on why this cannot be addressed using `_keys_to_ignore_on_load_unexpected`, or any alternative solutions you might have in mind.<|||||>OK I see. We have to use `LukeForMaskedLM` or `AutoModelForMaskedLM` to see the warning.<|||||>We can't change these kinds of keys due to a Hub model repo author uploading problematic weights/config files.
You can ask the author to correct (cleanup) the model weights and re-upload.
If we change in the way like done in this PR, we won't have any warning when a real problem occurs, and the bugs won't be detected.<|||||>> The HF checkpoint is derived from the original checkpoint generated by the [original repository](https://github.com/studio-ousia/luke). The checkpoint contains additional keys (luke.encoder.layer.*.attention.self.*_query.*), which are only utilized when the entity-aware attention mechanism is enabled during fine-tuning.
I didn't check the original repo. (which is not me adding that model into `transformers`). But the Hub repo like [luke-base](https://huggingface.co/studio-ousia/luke-base/blob/main/config.json) has
```json
"use_entity_aware_attention": true,
```
Also, the default value in `LukeConfig.__init__` is `True.`<|||||>Let me share more context on this problem.
The weights uploaded on the HF repo are supposed to work either when `use_entity_aware_attention` is `True` or `False` and the config files just specify the default value.
The warnings are currently raised as expected, but I want to suppress them, since ignoring these weights is the correct behavior.
I am from the same group as the author of LukeModel, and the HF weights were uploaded by me, so I am sure that this follows the intention of the original model.
In summary, when some weights should be ignored as the correct behavior, what is the right way to handle that?<|||||>> If we change in the way like done in this PR, we won't have any warning when a real problem occurs, and the bugs won't be detected.
I understand that this is a risk, but couldn't that be mitigated by specifying the correct regex?<|||||>The problem here is the config and the model weight on the hub has inconsistent values. If the model is created with that value set to false, there would not have those extra keys in the model.
It is unclear how the Hub author ended up with such an inconsistency. The fix should happen there.
Hope this explanation makes things clear.
But thank you for your willingness to fix this and help make transformers better ❤️<|||||>I believe there is still some misunderstanding.
> The problem here is the config and the model weight on the hub has inconsistent values.
The inconsistency is intended, as having optional extra weights is part of the model's features.
Users can either choose to use the extra weights or not.
> If the model is created with that value set to false, there would not have those extra keys in the model.
Those extra keys (weights) are optional.
Even though the model has `use_entity_aware_attention=False` by default, we'd like to give users an option to enable `use_entity_aware_attention=True` to check the effect.<|||||>To be clearer, the extra weights are in this part.
https://github.com/huggingface/transformers/blob/abaca9f9432a84cfaa95531de4c72334f38a42f2/src/transformers/models/luke/modeling_luke.py#L523-L526
These weights are NOT used in pretraining time, but can be optionally introduced at the fine-tuning time.
For users to be able to freely choose between the options, the weights should include the extra weights but it causes unnecessary warnings when `use_entity_aware_attention = False`...<|||||>I apologize for any confusion caused by my previous explanation, but I would like to request @NielsRogge's opinion on how to handle these warnings. He helped introduce LUKE in transformers.<|||||>> These weights are NOT used in pretraining time,
So those weights are not even trained during pretraining time ..? I am a bit confused here. Or it's trained for Luke but not mLuke?
> These weights are NOT used in pretraining time, but can be optionally introduced at the fine-tuning time.
For users to be able to freely choose between the options, the weights should include the extra weights
In this case, the original model weights (the checkpoint on the Hub repo `studio-ousia/mluke-base-lite`) should not include those extra weights (which is the opposite currently), and config should have `use_entity_aware_attention=False` (which is currently).
- **When a user wants to fine-tune with the** `use_entity_aware_attention` **option**, they can load the checkpoint with this set to `True` **at runtime**: the model will then have these extra weights at runtime (but with a different warning saying some weights are randomly initialized), as in the sketch below.
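A minimal sketch of that runtime option; `use_entity_aware_attention` is forwarded to the config by `from_pretrained`, and the checkpoint name is the one discussed in this thread:
```python
from transformers import LukeModel

# Enable the optional entity-aware attention at load time by overriding the config value.
# The w2e/e2w/e2e query projections are then created; if they are absent from the
# checkpoint they are randomly initialized, which triggers the "newly initialized" warning.
model = LukeModel.from_pretrained(
    "studio-ousia/mluke-base-lite",
    use_entity_aware_attention=True,
)
```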
I am wondering what prevents you from removing those extra weights from `studio-ousia/mluke-base-lite` if they are never used.
<|||||>Thank you for your patience.
I know the model is doing something unusual...
#### What is entity-aware attention?
LUKE and mLUKE take word tokens as well as entity tokens.
At pretraining time, they undergo the computation of self attention (token-to-token attention) equally.
At fine-tuning time, we can optionally add entity-aware attention.
This mechanism uses different attention weights for word-to-word, word-to-entity, entity-to-word, and entity-to-entity tokens.
The weights for these different types of attention are initialized by **copying the token-to-token attention obtained during pretraining**.
This is done by the following lines of the conversion script.
https://github.com/huggingface/transformers/blob/abaca9f9432a84cfaa95531de4c72334f38a42f2/src/transformers/models/luke/convert_luke_original_pytorch_checkpoint_to_pytorch.py#L61-L67
So, the checkpoints include these copied weights regardless of whether users enable entity-aware attention at fine-tuning time.
Also this is the reason why we do not want to initialize the new weights randomly.
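For readers who do not want to open the link, the copying there looks roughly like this (paraphrased from the linked conversion script; `config` and `state_dict` are the script's own variables, and the exact code may differ slightly):
```python
# The entity-aware query projections are seeded in the converted checkpoint by copying
# the pretrained token-to-token query weights, layer by layer.
for layer_index in range(config.num_hidden_layers):
    for matrix_name in ["query.weight", "query.bias"]:
        prefix = f"encoder.layer.{layer_index}.attention.self."
        state_dict[prefix + "w2e_" + matrix_name] = state_dict[prefix + matrix_name]
        state_dict[prefix + "e2w_" + matrix_name] = state_dict[prefix + matrix_name]
        state_dict[prefix + "e2e_" + matrix_name] = state_dict[prefix + matrix_name]
```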
> So those weights are not even trained during pretraining time ..? I am a bit confused here. Or it's trained for Luke but not mLuke?
Both LUKE and mLUKE are pretrained without entity-aware attention, but they can still use entity-aware attention by initializing new weights with the corresponding pretrained ones.
#### Why is the default value of `use_entity_aware_attention` different in LUKE and mLUKE?
We set the default value to be consistent with the original papers that proposed each model.
[LUKE](https://aclanthology.org/2020.emnlp-main.523/) uses entity-aware attention because it performs better in monolingual settings, but [mLUKE](https://aclanthology.org/2022.acl-long.505/) does not as it did not give consistent gains in cross-lingual tasks.
> I am wondering what prevents you from removing those extra weights from studio-ousia/mluke-base-lite if they are never used.
Although we set the default value of `use_entity_aware_attention` to be `False` in `studio-ousia/mluke-base-lite`, we still want to allow users to try if entity-aware attention is useful in their own settings.
However as reported in the PR description, some users find the warning confusing...
So we would like to remove this confusion.
Perhaps there are alternative approaches to achieve this goal other than setting `_keys_to_ignore_on_load_unexpected` such as
- redefining the initialization behavior of `LukeModel` so that it copies the token-to-token attention weights when the entity-aware attention weights are missing from the checkpoint but `use_entity_aware_attention=True`. Then we can remove the copied weights from the checkpoints.
- adding more detailed warning messages on what the ignored weights mean.
I would greatly appreciate any advice!<|||||>Hi @ryokan0123. Thank you for the detailed information. Looking at the following 3 points you mentioned:
To make sure: those extra weights in `studio-ousia/mluke-base-lite` are neither pretrained (yes, as you mentioned) nor fine-tuned. If this is the case:
> 1. Both LUKE and mLUKE are pretrained without entity-aware attention
> 2. by initializing new weights with the corresponding pretrained ones.
> 3. (Although we set the default value of use_entity_aware_attention to be False ...) we still want to allow users to try if entity-aware attention is useful in their own settings.
what you described (point 3) could easily be achieved by a user just specifying `config.use_entity_aware_attention` at runtime - **this doesn't require the weights to be in the checkpoint**. It will just show a warning
```
Some weights of were not initialized from the model checkpoint at ... {pretrained_model_name_or_path} and are newly initialized ...
```
And this (different) warning makes sense and should be kept.
Let me know if you have further questions about the above suggested way to (optionally) use/enable non-trained `entity_aware_attention` weights.
<|||||>Yes, I know that is possible.
However, the important point is that **those new weights must be initialized by copying the weights obtained during pretraining**.
This is exactly what we want to do here.
By randomly initializing the new weights, the model performance would degrade as the model has to learn how to attend to other tokens from scratch in fine-tuning.
We cannot randomly initialize the new weights and that's why we copy the weights here.
https://github.com/huggingface/transformers/blob/abaca9f9432a84cfaa95531de4c72334f38a42f2/src/transformers/models/luke/convert_luke_original_pytorch_checkpoint_to_pytorch.py#L61-L67
So, to achieve this and suppress warnings, I think there are some options 🤔
- leave the copied weights in the checkpoint and set `_keys_to_ignore_on_load_unexpected` (this PR, an easy path)
- remove the copied weights from the checkpoint and override `init_weights` or `post_init` in `LukeModel` to include the copying operation (which needs a bit of work)<|||||>Ok, thank you for the detailed information. I finally understand why you need those weights in the checkpoint, as they are copied from trained weights.
I will have to think a bit more, but I feel the best option is to add an extra log message to explain the situation.
I will come back to you.<|||||>@sgugger @amyeroberts
Could you take a look at the following and see if you have any comments? I tried to make it short, but still needed to explain things.
Summary:
- In `studio-ousia/mluke-base-lite` (`LukeModel`, the checkpoint from the original author):
- the checkpoint contains some keys `w2e_query` etc. (for `entity_aware_attention`)
- the config has `entity_aware_attention=False`:
- `from_pretrained` gives `unexpected keys during loading` warning.
- `entity_aware_attention` is never used during pre-training
- the checkpoint contains those `w2e_query` weights **by copying weight values from other pre-trained weights**
- (so they still make some sense and might be helpful for fine-tuning)
- The model author wants to avoid the confusing warning (of unexpected keys).
Two suggested actions:
- (easy) add `_keys_to_ignore_on_load_unexpected` as done in this PR
- (more work)
- remove those `w2e_query` weights from the checkpoint `studio-ousia/mluke-base-lite`
- overwrite `from_pretrained` to copy some weight values to the target weights (at the end of `from_pretrained`) - when `config.use_entity_aware_attention=True` + a `w2e_query` key is found
- we will have a warning of `missing key` during loading, but we add an explanation to mention the weights being copied (a rough sketch of this option follows below)
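A very rough sketch of that second option (hypothetical code, not implemented anywhere; it also skips checking whether the keys were actually missing from the checkpoint):
```python
from transformers import PreTrainedModel


class LukePreTrainedModel(PreTrainedModel):
    @classmethod
    def from_pretrained(cls, pretrained_model_name_or_path, *args, **kwargs):
        model = super().from_pretrained(pretrained_model_name_or_path, *args, **kwargs)
        # Hypothetical post-loading step: when entity-aware attention is enabled, seed the
        # extra projections from the trained token-to-token query projection instead of
        # leaving them randomly initialized.
        if getattr(model.config, "use_entity_aware_attention", False):
            for name, module in model.named_modules():
                if name.endswith("attention.self") and hasattr(module, "w2e_query"):
                    for proj in ("w2e_query", "e2w_query", "e2e_query"):
                        getattr(module, proj).load_state_dict(module.query.state_dict())
        return model
```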
The second approach may not be worth the effort (too much work). The first one isn't really good as `_keys_to_ignore_on_load_unexpected` is not designed to be used for such a situation (IMO).
<|||||>Note that on main, the code sample provided at the beginning does not issue any warnings (just info logs) since the class used (LukeModel) is not the same as the class of the checkpoint (LukeForMaskedLM). It's only when loading a model with `LukeForMaskedLM` that the warning appears.
As for how to deal with this, the checkpoint mentioned does not use those extra weights (as seen [here](https://huggingface.co/studio-ousia/mluke-base-lite/blob/main/config.json#L33) in the config) so it should probably not have them in the state dict. You can use the `variant` parameter in `from_pretrained` to offer two different files for the weights if you wanted to make one version with the extra weights, for users who would like to continue fine-tuning with those extra weights. That weight file should be named `pytorch_model.<variant_name>.bin`.<|||||>I see, it seems the sample code only issues warnings on Colab notebooks.
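A sketch of how the `variant` suggestion could look from the user side; the variant name and the idea of pairing it with `use_entity_aware_attention=True` are illustrative assumptions, not something that exists on the Hub today:
```python
from transformers import LukeModel

# Default checkpoint: no extra entity-aware attention projections, no warning.
model = LukeModel.from_pretrained("studio-ousia/mluke-base-lite")

# Hypothetical variant that ships the copied projections for users who want to
# fine-tune with entity-aware attention; it would be stored on the Hub as
# pytorch_model.entity_aware.bin (the name is made up for illustration).
model = LukeModel.from_pretrained(
    "studio-ousia/mluke-base-lite",
    variant="entity_aware",
    use_entity_aware_attention=True,
)
```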
Apologies for the confusion.
Thank you, @sgugger, for the suggested solution. Using the variant parameter seems a better solution.
I would also appreciate @ydshieh taking the time to handle this PR!
I will consider the suggested solution, so I am closing this PR. |
transformers | 24,702 | open | bf16 with DeepSpeed stage 3 with CPU offload breaks LLaMA 13b+ training | TL;DR deepspeed stage 3 with cpu offload and bf16 breaks llama 13b+ when fine-tuning. The loss starts high and then immediately drops to 0 after the first step and learning rate stays 0 the entire time.
### System Info
- `transformers` version: 4.30.2
- Platform: Linux-3.10.0-1160.92.1.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.8.2
- Huggingface_hub version: 0.14.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 1.12.1+cu113 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Using 8 A100s in a training script.
- Using distributed or parallel set-up in script?: Using deepspeed stage 3 with CPU offload
### Who can help?
@sgugger @pacman100
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I'm running a really straightforward, bare-bones fine-tuning script to train llama 13b. The problem is, if I turn bf16 on, I run into the following problem:
```
0%| | 0/82 [00:00<?, ?it/s]
1%| | 1/82 [01:39<2:14:49, 99.87s/it]
{'loss': 8.2056, 'learning_rate': 0.0, 'epoch': 0.0}
1%| | 1/82 [01:39<2:14:49, 99.87s/it]
2%|โ | 2/82 [02:55<1:54:16, 85.70s/it]
{'loss': 0.0, 'learning_rate': 0.0, 'epoch': 0.0}
2%|โ | 2/82 [02:55<1:54:16, 85.70s/it]
4%|โ | 3/82 [04:10<1:46:23, 80.81s/it]
{'loss': 0.0, 'learning_rate': 0.0, 'epoch': 0.01}
4%|โ | 3/82 [04:10<1:46:23, 80.81s/it]
5%|โ | 4/82 [05:26<1:42:33, 78.90s/it]
{'loss': 0.0, 'learning_rate': 0.0, 'epoch': 0.01}
5%|โ | 4/82 [05:26<1:42:33, 78.90s/it]
6%|โ | 5/82 [06:40<1:38:56, 77.10s/it]
{'loss': 0.0, 'learning_rate': 0.0, 'epoch': 0.01}
```
This continues for the remainder of the training, with the loss and learning rate never changing. By the end, the model outputs gibberish.
Here is my launch script:
```bash
deepspeed train.py \
--model_name_or_path /home/ashaw8/compute/models/$MODEL_NAME \
--dataset_path datasets/$TOPIC/$MODEL_NAME \
--run_name $RUN_NAME \
--bf16 True \
--output_dir $OUTPUT_DIR \
--num_train_epochs 3 \
--per_device_train_batch_size 2 \
--per_device_eval_batch_size 2 \
--gradient_accumulation_steps 8 \
--evaluation_strategy "no" \
--save_strategy "no" \
--logging_strategy "steps" \
--logging_steps 1 \
--learning_rate 5e-6 \
--weight_decay 0. \
--warmup_ratio 0.03 \
--deepspeed ds_config.json \
--max_grad_norm 1.0 \
--tf32 False \
--report_to wandb
```
If I set bf16 to false, everything returns to normal and the training works fine, but then I cannot train the large models (e.g. 65b) because I can't reduce the model size with bf16. As far as I can tell, this issue has not been documented anywhere else.
A related issue seems to be documented here where similar problems occurred with fp16. I guess large losses were preventing the optimizer from stepping? The optimizer must be stepping in my case though, if the model is outputting gibberish by the end.
https://github.com/huggingface/transformers/issues/14531
Here is my actual python training script.
```python
from dataclasses import dataclass, field
from typing import Dict, Optional, Union
from transformers import (
TrainingArguments as HfTrainingArguments,
HfArgumentParser,
AutoConfig,
AutoTokenizer,
AutoModelForCausalLM,
Trainer,
DataCollatorForLanguageModeling,
)
from datasets import Dataset
@dataclass
class ModelArguments:
model_name_or_path: Optional[str] = field(
default="/mnt/pccfs2/backed_up/models/llama/hf/llama-7b-hf/"
)
@dataclass
class DataArguments:
dataset_path: str = field(
default="/mnt/pccfs2/backed_up/alexshaw/media-training/datasets/dataset",
metadata={"help": "Path to the training data."},
)
@dataclass
class TrainingArguments(HfTrainingArguments):
cache_dir: Optional[str] = field(default=None)
model_max_length: int = field(
default=512,
metadata={
"help": "Maximum sequence length. Sequences will be right padded (and possibly truncated)."
},
)
dataloader_num_workers: int = field(default=32)
if __name__ == "__main__":
parser = HfArgumentParser(
(ModelArguments, DataArguments, TrainingArguments) # type: ignore
)
model_args, data_args, training_args = parser.parse_args_into_dataclasses()
config = AutoConfig.from_pretrained(
model_args.model_name_or_path,
cache_dir=training_args.cache_dir,
)
tokenizer = AutoTokenizer.from_pretrained(
model_args.model_name_or_path,
cache_dir=training_args.cache_dir,
model_max_length=training_args.model_max_length,
padding_side="right",
use_fast=False,
)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(
model_args.model_name_or_path,
config=config,
cache_dir=training_args.cache_dir,
)
dataset = Dataset.load_from_disk(data_args.dataset_path)
data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)
trainer = Trainer(
model=model,
args=training_args,
train_dataset=dataset, # type: ignore
data_collator=data_collator,
)
trainer.train()
trainer.save_state()
trainer.save_model()
```
Additionally, here is my `ds_config.json`
```json
{
"bf16": {
"enabled": "auto"
},
"fp16": {
"enabled": "auto"
},
"optimizer": {
"type": "AdamW",
"params": {
"lr": "auto",
"betas": "auto",
"eps": "auto",
"weight_decay": "auto"
}
},
"scheduler": {
"type": "WarmupDecayLR",
"params": {
"total_num_steps": "auto",
"warmup_min_lr": "auto",
"warmup_max_lr": "auto",
"warmup_num_steps": "auto"
}
},
"zero_optimization": {
"stage": 3,
"offload_optimizer": {
"device": "cpu",
"pin_memory": true
},
"offload_param": {
"device": "cpu",
"pin_memory": true
},
"overlap_comm": true,
"contiguous_gradients": true,
"sub_group_size": 1e9,
"reduce_bucket_size": "auto",
"stage3_prefetch_bucket_size": "auto",
"stage3_param_persistence_threshold": "auto",
"stage3_max_live_parameters": 1e9,
"stage3_max_reuse_distance": 1e9,
"stage3_gather_16bit_weights_on_model_save": true
},
"gradient_accumulation_steps": "auto",
"gradient_clipping": "auto",
"steps_per_print": 5,
"train_batch_size": "auto",
"train_micro_batch_size_per_gpu": "auto",
"wall_clock_breakdown": false
}
```
### Expected behavior
The model should start with a higher loss and gradually decrease throughout training. The learning rate should rise to 5e-6 in a few steps. | 07-07-2023 03:09:00 | 07-07-2023 03:09:00 | Can you provide a minimal reproducer as I don't have access to `/mnt/pccfs2/backed_up/alexshaw/media-training/datasets/dataset`? A reproducer should be minimal and run without having us spend time changing and debugging things.
<|||||>Okay, I built this repo that reproduces the issue with as few dependencies as possible.
https://github.com/alexgshaw/simple-trainer
You should be able to clone the repo, pip install the requirements.txt and run bash `train.sh`
When I ran it, it reproduced the error I described above.
Note that I am running with Python version: 3.8.2 and cuda 11.4<|||||>I am running into a similar issue, are there any updates?
I'm using python 3.11 and cuda 11.8 |
transformers | 24,701 | open | In RWForCausalLM.prepare_inputs_for_generation, the past_key_values are always None. | ### System Info
- `transformers` version: 4.30.2
- Platform: Linux-5.15.0-1038-aws-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.1+cu118 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@ArthurZucker and @younesbelkada
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
model_name = "tiiuae/falcon-7b"
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
model_name, trust_remote_code=True, device_map="auto")
# encode context the generation is conditioned on
input_ids = tokenizer.encode('The new movie that got Oscar this year', return_tensors='pt')
# device
device = "cuda" if torch.cuda.is_available() else "cpu"
input_ids = input_ids.to(device)
# model = model.to(device)
# %% Greedy search
# generate text until the output length (which includes the context length) reaches 50
greedy_output = model.generate(input_ids, max_length=50)
print("\nOutput:\n" + 100 * '-')
print(tokenizer.decode(greedy_output[0], skip_special_tokens=True))
# Contrastive search
# activate beam search and early_stopping
output = model.generate(input_ids, penalty_alpha=0.01, top_k=4, max_length=50)
print("\nOutput:\n" + 100 * '-')
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
### Expected behavior
In `transformers/generation/utils.py#L2329`
`model_inputs = self.prepare_inputs_for_generation(input_ids, **model_kwargs)`
`RWForCausalLM.prepare_inputs_for_generation()` always returns `None` for `past_key_values`, so the result doesn't seem to utilize the kv_cache at all. On the other hand, in `RWForCausalLM.prepare_inputs_for_generation()` they do have tensor shape conversion code. Is this design, where `past_key_values` is always `None`, intentional?
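For context, a typical `prepare_inputs_for_generation` is expected to both forward the cache and trim the input when a cache is present, roughly like this generic sketch (not the actual RW/Falcon code):
```python
def prepare_inputs_for_generation(self, input_ids, past_key_values=None, **kwargs):
    # With a populated cache, only the newest token has to be fed through the model.
    if past_key_values is not None:
        input_ids = input_ids[:, -1:]
    return {
        "input_ids": input_ids,
        # Forwarding the cache here is what lets generate() reuse previous key/values;
        # returning None instead forces a full recompute at every step.
        "past_key_values": past_key_values,
        "use_cache": kwargs.get("use_cache"),
        "attention_mask": kwargs.get("attention_mask"),
    }
```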
Also, the output text is weird:
```
Output(greedy)
----------------------------------------------------------------------------------------------------
The new movie that got Oscar this year is a movie about a man who is a genius and a man who is a genius.
The movie is called "The Imitation Game" and it is about a man who is a genius and a
Output(contrastive with penalty_alpha=0.001)
----------------------------------------------------------------------------------------------------
The new movie that got Oscar this year is a (Source:
- (Source:
- (Source:
- (Source:
- (Source:
- (Source:
- (Source:
```
| 07-07-2023 02:23:06 | 07-07-2023 02:23:06 | Hi @KexinFeng
There is an ongoing work to port Falcon to transformers here: https://github.com/huggingface/transformers/pull/24523 looking at that PR I believe that your issue will be fixed once merged. cc @Rocketknight1 in case I missed something!<|||||>Sorry for the delay, and yes! There is an issue with the custom code version of Falcon, which means that frequently past_key_values are not actually used in generation. This results in much lower generation speed (~3X slower for short-medium sequences).
This issue will be fixed once we add Falcon as a full library model in `transformers`, and we're hoping to merge that PR extremely soon. |
transformers | 24,700 | closed | Pix2StructImageProcessor does not accept list of PIL Images | ### System Info
- `transformers` version: 4.30.2
- Platform: Linux-5.10.133+-x86_64-with-glibc2.10
- Python version: 3.8.13
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.1+cu117 (True)
### Who can help?
@younesbelkada
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Pix2StructImageProcessor does not work if I pass in a list of PIL images as input. It works after I uncomment line 373-379: https://github.com/huggingface/transformers/blob/66fd3a8d626a32989f4569260db32785c6cbf42a/src/transformers/models/pix2struct/image_processing_pix2struct.py#L373
### Expected behavior
According to the documentation, Pix2StructImageProcessor should be able to process list of PIL images. | 07-07-2023 00:47:31 | 07-07-2023 00:47:31 | Hi. Could you show us the full error log, please. Thanks.
cc @amyeroberts <|||||>Hi @LiJunnan1992
The script below seems to work on the main branch of transformers. Can you share a reproducible snippet? Thanks!
```python
import requests
from PIL import Image
from transformers import Pix2StructProcessor
processor = Pix2StructProcessor.from_pretrained("google/pix2struct-textcaps-base")
url = "https://www.ilankelman.org/stopsigns/australia.jpg"
images = [Image.open(requests.get(url, stream=True).raw) for _ in range(4)]
inputs = processor(images, return_tensors="pt")
```
This script works as well:
```python
import requests
from PIL import Image
from transformers import Pix2StructProcessor, Pix2StructImageProcessor
processor = Pix2StructProcessor.from_pretrained("google/pix2struct-textcaps-base")
image_processor = Pix2StructImageProcessor()
url = "https://www.ilankelman.org/stopsigns/australia.jpg"
images = [Image.open(requests.get(url, stream=True).raw) for _ in range(4)]
_ = processor(images, return_tensors="pt")
_ = image_processor(images, return_tensors="pt")
```<|||||>The main branch indeed works without error. Closing this issue. Thanks! |
transformers | 24,699 | closed | Bump scipy from 1.8.0 to 1.10.0 in /examples/research_projects/decision_transformer | Bumps [scipy](https://github.com/scipy/scipy) from 1.8.0 to 1.10.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/scipy/scipy/releases">scipy's releases</a>.</em></p>
<blockquote>
<h1>SciPy 1.10.0 Release Notes</h1>
<p>SciPy <code>1.10.0</code> is the culmination of <code>6</code> months of hard work. It contains
many new features, numerous bug-fixes, improved test coverage and better
documentation. There have been a number of deprecations and API changes
in this release, which are documented below. All users are encouraged to
upgrade to this release, as there are a large number of bug-fixes and
optimizations. Before upgrading, we recommend that users check that
their own code does not use deprecated SciPy functionality (to do so,
run your code with <code>python -Wd</code> and check for <code>DeprecationWarning</code> s).
Our development attention will now shift to bug-fix releases on the
1.10.x branch, and on adding new features on the main branch.</p>
<p>This release requires Python <code>3.8+</code> and NumPy <code>1.19.5</code> or greater.</p>
<p>For running on PyPy, PyPy3 <code>6.0+</code> is required.</p>
<h1>Highlights of this release</h1>
<ul>
<li>A new dedicated datasets submodule (<code>scipy.datasets</code>) has been added, and is
now preferred over usage of <code>scipy.misc</code> for dataset retrieval.</li>
<li>A new <code>scipy.interpolate.make_smoothing_spline</code> function was added. This
function constructs a smoothing cubic spline from noisy data, using the
generalized cross-validation (GCV) criterion to find the tradeoff between
smoothness and proximity to data points.</li>
<li><code>scipy.stats</code> has three new distributions, two new hypothesis tests, three
new sample statistics, a class for greater control over calculations
involving covariance matrices, and many other enhancements.</li>
</ul>
<h1>New features</h1>
<h1><code>scipy.datasets</code> introduction</h1>
<ul>
<li>A new dedicated <code>datasets</code> submodule has been added. The submodules
is meant for datasets that are relevant to other SciPy submodules ands
content (tutorials, examples, tests), as well as contain a curated
set of datasets that are of wider interest. As of this release, all
the datasets from <code>scipy.misc</code> have been added to <code>scipy.datasets</code>
(and deprecated in <code>scipy.misc</code>).</li>
<li>The submodule is based on <a href="https://www.fatiando.org/pooch/latest/">Pooch</a>
(a new optional dependency for SciPy), a Python package to simplify fetching
data files. This move will, in a subsequent release, facilitate SciPy
to trim down the sdist/wheel sizes, by decoupling the data files and
moving them out of the SciPy repository, hosting them externally and</li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/scipy/scipy/commit/dde50595862a4f9cede24b5d1c86935c30f1f88a"><code>dde5059</code></a> REL: 1.10.0 final [wheel build]</li>
<li><a href="https://github.com/scipy/scipy/commit/7856f281b016c585b82d03723c4494bcdbdcd4a5"><code>7856f28</code></a> Merge pull request <a href="https://redirect.github.com/scipy/scipy/issues/17696">#17696</a> from tylerjereddy/treddy_110_final_prep</li>
<li><a href="https://github.com/scipy/scipy/commit/205b6243c6d075d05695e7ac6d007e0f03bfbf42"><code>205b624</code></a> DOC: add missing author</li>
<li><a href="https://github.com/scipy/scipy/commit/1ab9f1b10145f0a974d5531700e72d1fb4229b76"><code>1ab9f1b</code></a> DOC: update 1.10.0 relnotes</li>
<li><a href="https://github.com/scipy/scipy/commit/ac2f45fbe1e39a8f52c1ea2e68764009f02973c0"><code>ac2f45f</code></a> MAINT: integrate._qmc_quad: mark as private with preceding underscore</li>
<li><a href="https://github.com/scipy/scipy/commit/3e0ae1a21f51ebee3a77733c42700d87a0c35d7d"><code>3e0ae1a</code></a> REV: integrate.qmc_quad: delay release to SciPy 1.11.0</li>
<li><a href="https://github.com/scipy/scipy/commit/34cdf05c86548de1c4ca1b2798cdc23885af807b"><code>34cdf05</code></a> MAINT: FFT pybind11 fixups</li>
<li><a href="https://github.com/scipy/scipy/commit/843500aabde17aaf1eec65c589d50bd12ee35039"><code>843500a</code></a> Merge pull request <a href="https://redirect.github.com/scipy/scipy/issues/17689">#17689</a> from mdhaber/gh17686</li>
<li><a href="https://github.com/scipy/scipy/commit/089924b61012a106ffa4f58939b0180124051a0b"><code>089924b</code></a> REL: integrate.qmc_quad: remove from release notes</li>
<li><a href="https://github.com/scipy/scipy/commit/3e47110f10e3267d228e9da84174f3cee325e7c3"><code>3e47110</code></a> REL: 1.10.0rc3 unreleased</li>
<li>Additional commits viewable in <a href="https://github.com/scipy/scipy/compare/v1.8.0...v1.10.0">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts).
</details> | 07-06-2023 23:10:07 | 07-06-2023 23:10:07 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24699). All of your documentation changes will be reflected on that endpoint.<|||||>OK, I won't notify you again about this release, but will get in touch when a new version is available. If you'd rather skip all updates until the next major or minor version, let me know by commenting `@dependabot ignore this major version` or `@dependabot ignore this minor version`.
If you change your mind, just re-open this PR and I'll resolve any conflicts on it.<|||||>@dependabot ignore this major version<|||||>OK, I won't notify you about version 1.x.x again, unless you re-open this PR. 😢 |
transformers | 24,698 | closed | Assertion `srcIndex < srcSelectDimSize` failed | Hi,
I am running medalpaca (but the error seems to come from llama) on 4 GPUs using device map="auto" and the SFTTrainer and want to prompt tune the model. I have written a custom Dataset class:
class DiagnosesDataset(torch.utils.data.Dataset):
def __init__(self, instances, tokenizer):
self.instances=instances
self.tokenizer=tokenizer
def __getitem__(self, idx):
item={}
prompt= self.instances["prompt"][idx]
labels = self.instances["label"][idx]
item=self.tokenize(prompt+labels)
tokenized_instruction=self.tokenize(prompt)
label_instruction=self.tokenizer(labels)
i=len(tokenized_instruction["input_ids"])
item["labels"][i:]=label_instruction["input_ids"]
return item
def tokenize(self, prompt):
result_prompt=self.tokenizer(prompt,
truncation=True,
max_length=2048,
padding=False,
return_tensors=None)
result_prompt["labels"]=[-100]*len(result_prompt["input_ids"])
return result_prompt
def __len__(self):
return len(self.instances)
The Training Arguments and Peft config:
training_arguments=TrainingArguments(
output_dir="./falcon_output_dir",
per_device_train_batch_size=4,
gradient_accumulation_steps=2,
optim="paged_adamw_32bit",
save_steps=100,
logging_steps=10,
learning_rate=2e-4,
max_steps=10000,
fp16=False,
bf16=False,
lr_scheduler_type="constant",
warmup_ratio=0.03,
group_by_length=True,
remove_unused_columns=False)
peft_config=LoraConfig(
lora_alpha=16,
lora_dropout=0.1,
r=4,
bias="none",
task_type=TaskType.CAUSAL_LM,
target_modules=["q_proj", "v_proj"])
The SFTTrainer I am using looks like this:
trainer=SFTTrainer(
model=model,
tokenizer=tokenizer,
train_dataset=dataset,
peft_config=peft_config,
packing=True,
args=training_arguments)
trainer.train()
However, when running the model, somewhere there seems to be an issue with some indices (https://discuss.pytorch.org/t/solved-assertion-srcindex-srcselectdimsize-failed-on-gpu-for-torch-cat/1804/27)
The error I am getting is this:
โญโโโโโโโโโโโโโโโโโโโโโ Traceback (most recent call last) โโโโโโโโโโโโโโโโโโโโโโโฎ
โ /home/students/kulcsar/Bachelor/for_dataset/10000_diagnoses/falcon_model_pef โ
โ t.py:544 in <module> โ
โ โ
โ 541 โ โ
โ 542 โ โ
โ 543 โ args=parser.parse_args() โ
โ โฑ 544 โ run() โ
โ 545 โ #main() โ
โ 546 โ โ
โ 547 โ #all_data, prompts, golds=preprocess("./dataset.pkl") โ
โ โ
โ /home/students/kulcsar/Bachelor/for_dataset/10000_diagnoses/falcon_model_pef โ
โ t.py:153 in run โ
โ โ
โ 150 โ โ packing=True, โ
โ 151 โ โ data_collator=DataCollatorForSeq2Seq(tokenizer, pad_to_multipl โ
โ 152 โ โ args=training_arguments) โ
โ โฑ 153 โ trainer.train() โ
โ 154 โ โ
โ 155 โ logging.info("Run Train loop") โ
โ 156 โ #model_updated=train(model, dataset, args.seed, args.batch_size, a โ
โ โ
โ /home/students/kulcsar/anaconda3/envs/software_bubble_updated_pytorch/lib/py โ
โ thon3.9/site-packages/transformers/trainer.py:1537 in train โ
โ โ
โ 1534 โ โ inner_training_loop = find_executable_batch_size( โ
โ 1535 โ โ โ self._inner_training_loop, self._train_batch_size, args.a โ
โ 1536 โ โ ) โ
โ โฑ 1537 โ โ return inner_training_loop( โ
โ 1538 โ โ โ args=args, โ
โ 1539 โ โ โ resume_from_checkpoint=resume_from_checkpoint, โ
โ 1540 โ โ โ trial=trial, โ
โ โ
โ /home/students/kulcsar/anaconda3/envs/software_bubble_updated_pytorch/lib/py โ
โ thon3.9/site-packages/transformers/trainer.py:1802 in _inner_training_loop โ
โ โ
โ 1799 โ โ โ โ โ self.control = self.callback_handler.on_step_begi โ
โ 1800 โ โ โ โ โ
โ 1801 โ โ โ โ with self.accelerator.accumulate(model): โ
โ โฑ 1802 โ โ โ โ โ tr_loss_step = self.training_step(model, inputs) โ
โ 1803 โ โ โ โ โ
โ 1804 โ โ โ โ if ( โ
โ 1805 โ โ โ โ โ args.logging_nan_inf_filter โ
โ โ
โ /home/students/kulcsar/anaconda3/envs/software_bubble_updated_pytorch/lib/py โ
โ thon3.9/site-packages/transformers/trainer.py:2647 in training_step โ
โ โ
โ 2644 โ โ โ return loss_mb.reduce_mean().detach().to(self.args.device โ
โ 2645 โ โ โ
โ 2646 โ โ with self.compute_loss_context_manager(): โ
โ โฑ 2647 โ โ โ loss = self.compute_loss(model, inputs) โ
โ 2648 โ โ โ
โ 2649 โ โ if self.args.n_gpu > 1: โ
โ 2650 โ โ โ loss = loss.mean() # mean() to average on multi-gpu para โ
โ โ
โ /home/students/kulcsar/anaconda3/envs/software_bubble_updated_pytorch/lib/py โ
โ thon3.9/site-packages/transformers/trainer.py:2672 in compute_loss โ
โ โ
โ 2669 โ โ โ labels = inputs.pop("labels") โ
โ 2670 โ โ else: โ
โ 2671 โ โ โ labels = None โ
โ โฑ 2672 โ โ outputs = model(**inputs) โ
โ 2673 โ โ # Save past state if it exists โ
โ 2674 โ โ # TODO: this needs to be fixed and made cleaner later. โ
โ 2675 โ โ if self.args.past_index >= 0: โ
โ โ
โ /home/students/kulcsar/anaconda3/envs/software_bubble_updated_pytorch/lib/py โ
โ thon3.9/site-packages/torch/nn/modules/module.py:1502 in _wrapped_call_impl โ
โ โ
โ 1499 โ โ if self._compiled_call_impl is not None: โ
โ 1500 โ โ โ return self._compiled_call_impl(*args, **kwargs) # type: โ
โ 1501 โ โ else: โ
โ โฑ 1502 โ โ โ return self._call_impl(*args, **kwargs) โ
โ 1503 โ โ
โ 1504 โ def _call_impl(self, *args, **kwargs): โ
โ 1505 โ โ forward_call = (self._slow_forward if torch._C._get_tracing_s โ
โ โ
โ /home/students/kulcsar/anaconda3/envs/software_bubble_updated_pytorch/lib/py โ
โ thon3.9/site-packages/torch/nn/modules/module.py:1511 in _call_impl โ
โ โ
โ 1508 โ โ if not (self._backward_hooks or self._backward_pre_hooks or s โ
โ 1509 โ โ โ โ or _global_backward_pre_hooks or _global_backward_hoo โ
โ 1510 โ โ โ โ or _global_forward_hooks or _global_forward_pre_hooks โ
โ โฑ 1511 โ โ โ return forward_call(*args, **kwargs) โ
โ 1512 โ โ # Do not call functions when jit is used โ
โ 1513 โ โ full_backward_hooks, non_full_backward_hooks = [], [] โ
โ 1514 โ โ backward_pre_hooks = [] โ
โ โ
โ /home/students/kulcsar/anaconda3/envs/software_bubble_updated_pytorch/lib/py โ
โ thon3.9/site-packages/peft/peft_model.py:739 in forward โ
โ โ
โ 736 โ ): โ
โ 737 โ โ peft_config = self.active_peft_config โ
โ 738 โ โ if not isinstance(peft_config, PromptLearningConfig): โ
โ โฑ 739 โ โ โ return self.base_model( โ
โ 740 โ โ โ โ input_ids=input_ids, โ
โ 741 โ โ โ โ attention_mask=attention_mask, โ
โ 742 โ โ โ โ inputs_embeds=inputs_embeds, โ
โ โ
โ /home/students/kulcsar/anaconda3/envs/software_bubble_updated_pytorch/lib/py โ
โ thon3.9/site-packages/torch/nn/modules/module.py:1502 in _wrapped_call_impl โ
โ โ
โ 1499 โ โ if self._compiled_call_impl is not None: โ
โ 1500 โ โ โ return self._compiled_call_impl(*args, **kwargs) # type: โ
โ 1501 โ โ else: โ
โ โฑ 1502 โ โ โ return self._call_impl(*args, **kwargs) โ
โ 1503 โ โ
โ 1504 โ def _call_impl(self, *args, **kwargs): โ
โ 1505 โ โ forward_call = (self._slow_forward if torch._C._get_tracing_s โ
โ โ
โ /home/students/kulcsar/anaconda3/envs/software_bubble_updated_pytorch/lib/py โ
โ thon3.9/site-packages/torch/nn/modules/module.py:1511 in _call_impl โ
โ โ
โ 1508 โ โ if not (self._backward_hooks or self._backward_pre_hooks or s โ
โ 1509 โ โ โ โ or _global_backward_pre_hooks or _global_backward_hoo โ
โ 1510 โ โ โ โ or _global_forward_hooks or _global_forward_pre_hooks โ
โ โฑ 1511 โ โ โ return forward_call(*args, **kwargs) โ
โ 1512 โ โ # Do not call functions when jit is used โ
โ 1513 โ โ full_backward_hooks, non_full_backward_hooks = [], [] โ
โ 1514 โ โ backward_pre_hooks = [] โ
โ โ
โ /home/students/kulcsar/anaconda3/envs/software_bubble_updated_pytorch/lib/py โ
โ thon3.9/site-packages/accelerate/hooks.py:165 in new_forward โ
โ โ
โ 162 โ โ โ with torch.no_grad(): โ
โ 163 โ โ โ โ output = old_forward(*args, **kwargs) โ
โ 164 โ โ else: โ
โ โฑ 165 โ โ โ output = old_forward(*args, **kwargs) โ
โ 166 โ โ return module._hf_hook.post_forward(module, output) โ
โ 167 โ โ
โ 168 โ module.forward = new_forward โ
โ โ
โ /home/students/kulcsar/anaconda3/envs/software_bubble_updated_pytorch/lib/py โ
โ thon3.9/site-packages/transformers/models/llama/modeling_llama.py:691 in โ
โ forward โ
โ โ
โ 688 โ โ return_dict = return_dict if return_dict is not None else self โ
โ 689 โ โ โ
โ 690 โ โ # decoder outputs consists of (dec_features, layer_state, dec_ โ
โ โฑ 691 โ โ outputs = self.model( โ
โ 692 โ โ โ input_ids=input_ids, โ
โ 693 โ โ โ attention_mask=attention_mask, โ
โ 694 โ โ โ position_ids=position_ids, โ
โ โ
โ /home/students/kulcsar/anaconda3/envs/software_bubble_updated_pytorch/lib/py โ
โ thon3.9/site-packages/torch/nn/modules/module.py:1502 in _wrapped_call_impl โ
โ โ
โ 1499 โ โ if self._compiled_call_impl is not None: โ
โ 1500 โ โ โ return self._compiled_call_impl(*args, **kwargs) # type: โ
โ 1501 โ โ else: โ
โ โฑ 1502 โ โ โ return self._call_impl(*args, **kwargs) โ
โ 1503 โ โ
โ 1504 โ def _call_impl(self, *args, **kwargs): โ
โ 1505 โ โ forward_call = (self._slow_forward if torch._C._get_tracing_s โ
โ โ
โ /home/students/kulcsar/anaconda3/envs/software_bubble_updated_pytorch/lib/py โ
โ thon3.9/site-packages/torch/nn/modules/module.py:1511 in _call_impl โ
โ โ
โ 1508 โ โ if not (self._backward_hooks or self._backward_pre_hooks or s โ
โ 1509 โ โ โ โ or _global_backward_pre_hooks or _global_backward_hoo โ
โ 1510 โ โ โ โ or _global_forward_hooks or _global_forward_pre_hooks โ
โ โฑ 1511 โ โ โ return forward_call(*args, **kwargs) โ
โ 1512 โ โ # Do not call functions when jit is used โ
โ 1513 โ โ full_backward_hooks, non_full_backward_hooks = [], [] โ
โ 1514 โ โ backward_pre_hooks = [] โ
โ โ
โ /home/students/kulcsar/anaconda3/envs/software_bubble_updated_pytorch/lib/py โ
โ thon3.9/site-packages/transformers/models/llama/modeling_llama.py:532 in โ
โ forward โ
โ โ
โ 529 โ โ โ position_ids = position_ids.view(-1, seq_length).long() โ
โ 530 โ โ โ
โ 531 โ โ if inputs_embeds is None: โ
โ โฑ 532 โ โ โ inputs_embeds = self.embed_tokens(input_ids) โ
โ 533 โ โ # embed positions โ
โ 534 โ โ if attention_mask is None: โ
โ 535 โ โ โ attention_mask = torch.ones( โ
โ โ
โ /home/students/kulcsar/anaconda3/envs/software_bubble_updated_pytorch/lib/py โ
โ thon3.9/site-packages/torch/nn/modules/module.py:1502 in _wrapped_call_impl โ
โ โ
โ 1499 โ โ if self._compiled_call_impl is not None: โ
โ 1500 โ โ โ return self._compiled_call_impl(*args, **kwargs) # type: โ
โ 1501 โ โ else: โ
โ โฑ 1502 โ โ โ return self._call_impl(*args, **kwargs) โ
โ 1503 โ โ
โ 1504 โ def _call_impl(self, *args, **kwargs): โ
โ 1505 โ โ forward_call = (self._slow_forward if torch._C._get_tracing_s โ
โ โ
โ /home/students/kulcsar/anaconda3/envs/software_bubble_updated_pytorch/lib/py โ
โ thon3.9/site-packages/torch/nn/modules/module.py:1511 in _call_impl โ
โ โ
โ 1508 โ โ if not (self._backward_hooks or self._backward_pre_hooks or s โ
โ 1509 โ โ โ โ or _global_backward_pre_hooks or _global_backward_hoo โ
โ 1510 โ โ โ โ or _global_forward_hooks or _global_forward_pre_hooks โ
โ โฑ 1511 โ โ โ return forward_call(*args, **kwargs) โ
โ 1512 โ โ # Do not call functions when jit is used โ
โ 1513 โ โ full_backward_hooks, non_full_backward_hooks = [], [] โ
โ 1514 โ โ backward_pre_hooks = [] โ
โ โ
โ /home/students/kulcsar/anaconda3/envs/software_bubble_updated_pytorch/lib/py โ
โ thon3.9/site-packages/accelerate/hooks.py:165 in new_forward โ
โ โ
โ 162 โ โ โ with torch.no_grad(): โ
โ 163 โ โ โ โ output = old_forward(*args, **kwargs) โ
โ 164 โ โ else: โ
โ โฑ 165 โ โ โ output = old_forward(*args, **kwargs) โ
โ 166 โ โ return module._hf_hook.post_forward(module, output) โ
โ 167 โ โ
โ 168 โ module.forward = new_forward โ
โ โ
โ /home/students/kulcsar/anaconda3/envs/software_bubble_updated_pytorch/lib/py โ
โ thon3.9/site-packages/torch/nn/modules/sparse.py:162 in forward โ
โ โ
โ 159 โ โ โ โ self.weight[self.padding_idx].fill_(0) โ
โ 160 โ โ
โ 161 โ def forward(self, input: Tensor) -> Tensor: โ
โ โฑ 162 โ โ return F.embedding( โ
โ 163 โ โ โ input, self.weight, self.padding_idx, self.max_norm, โ
โ 164 โ โ โ self.norm_type, self.scale_grad_by_freq, self.sparse) โ
โ 165 โ
โ โ
โ /home/students/kulcsar/anaconda3/envs/software_bubble_updated_pytorch/lib/py โ
โ thon3.9/site-packages/torch/nn/functional.py:2238 in embedding โ
โ โ
โ 2235 โ โ # torch.embedding_renorm_ โ
โ 2236 โ โ # remove once script supports set_grad_enabled โ
โ 2237 โ โ _no_grad_embedding_renorm_(weight, input, max_norm, norm_type โ
โ โฑ 2238 โ return torch.embedding(weight, input, padding_idx, scale_grad_by_ โ
โ 2239 โ
โ 2240 โ
โ 2241 def embedding_bag( โ
โฐโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโฏ
RuntimeError: CUDA error: device-side assert triggered
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
Does anyone have an idea, what might be the issue? Any help would be greatly appreciated!
| 07-06-2023 21:57:14 | 07-06-2023 21:57:14 | Hi @MaggieK410, thanks for reporting this issue.
This is typically caused by an indexing issue in the code.
Could you follow the issue template and:
* Provide information about the running environment: run `transformers-cli env` in the terminal and copy-paste the output
* Format the code examples. All code should be sandwiched between three backticks ` ``` all code goes here ``` `
* Could you also put the error message in code formatting please?
* Provide a checkpoint - which medalpaca model is being tested?
* Ensure the example code is runnable? `dataset` is not defined <|||||>Hi, thank you very much for getting back to me! I have made a mistake when initializing the tokenizer (I added tokens without resizing the embedding). As it is solved, I will close this issue. |
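For reference, the fix for that situation usually looks something like the following; `tokenizer` and `model` stand in for the objects created in the training script above, and the added pad token is only an example:
```python
# After adding new tokens, the embedding matrix must be resized to the new vocab size;
# otherwise token ids beyond the old vocab trigger the device-side index assertion above.
num_added = tokenizer.add_special_tokens({"pad_token": "[PAD]"})
if num_added > 0:
    model.resize_token_embeddings(len(tokenizer))
```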
transformers | 24,697 | open | `Trainer` class on Mac uses `accelerate` to incorrectly set MPS device | ### System Info
transformers==4.30.2
Mac 2019, Ventura 13.4
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
ISSUE: I am running a generic model training using Trainer on my mac, locally. My model is being moved to MPS, but my tensors are staying on CPU.
I can provide more details about my script, but I kinda expect that this is a general library problem. Here's the lines of code I discovered:
When the [accelerator is instantiated in the Trainer class](https://github.com/huggingface/transformers/blob/main/src/transformers/trainer.py#L3834-L3836), it doesn't get passed any user-specific arguments, like [this from TrainingArgs for e.g](https://github.com/huggingface/transformers/blob/main/src/transformers/training_args.py#L586-L587) to give the user control over which device to use. As a result, when running locally on Mac, Accelerate does a lot of inference about which device we want to use, and [moves the model to `self.device`](https://github.com/huggingface/accelerate/blob/main/src/accelerate/accelerator.py#L1289) in the non-distributed setting. I'm not sure yet how `self.device` is instantiated in Accelerate, however, `Trainer` doesn't natively move my data to `mps`, so my script is crashing.
### Expected behavior
Ideally, I have a flag I can pass into `Trainer` to help me not use MPS if I don't want to, and just stick with CPU. | 07-06-2023 19:22:06 | 07-06-2023 19:22:06 | EDIT:
Adding the flag `--no_cuda` in `TrainingArgs` takes care of this issue.
I suggest making it something like `--use_cpu` or `--no_cuda_or_mps`, because I totally didn't realize it could be used for this purpose and had to dive to the very bottom of the code-base to see.<|||||>I am not really an expert on this topic, but do you think #24660 will help?<|||||>If not, a reproducible script is indeed necessary, please.<|||||>I have a similar issue as the Trainer was automatically using the MPS backend and couldn't figure out a way of running on CPU. (The MPS backend is missing some operations, so not all models run!).
Using `no_cuda=True` in the `TrainerArgs` solved the issue! pretty unintuitive!<|||||>cc @SunMarc Maybe we could deprecate the `no_cuda` flag to replace it with `use_cpu`, which would be more intuitive?<|||||>Yes, we should do that since we will automatically set the device to `cuda` or `mps` if available. Furthermore, `use_mps_device` in `TrainingArgs` is also deprecated. I will open a PR for that. The other issue is that we don't dispatch the data in the right device. @muellerzr, I see that we don't move the `dataloader` to a specific device in [`get_train_dataloader`](https://github.com/huggingface/transformers/blob/main/src/transformers/trainer.py#L3834-L3836). Is this something we want to add ? I can open a PR for it if needed. <|||||>@SunMarc accelerate does this automatically in its dataloader/with the Accelerator, so this should be already happening. If not, it's something we need to fix in accelerate<|||||>There is also another issue that the default device is `mps` but the data is not moved to `mps`, so the Trainer raises an error, minimal code:
```python
from transformers import AutoTokenizer
from datasets import load_dataset
from transformers import AutoModelForCausalLM
from transformers import Trainer, TrainingArguments
model_checkpoint = "roneneldan/TinyStories-33M"
ds = load_dataset('MohamedRashad/characters_backstories')["train"]
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint, use_fast=True)
tokenizer.pad_token = tokenizer.eos_token
def tokenize_function(example):
merged = example["text"] + " " + example["target"]
batch = tokenizer(merged, padding='max_length', truncation=True, max_length=128)
batch["labels"] = batch["input_ids"].copy()
return batch
tokenized_dataset = ds.map(tokenize_function, remove_columns=["text", "target"])
model = AutoModelForCausalLM.from_pretrained(model_checkpoint);
training_args = TrainingArguments(
num_train_epochs=1,
output_dir=".",
# use_mps_device=True,
)
trainer = Trainer(
model=model,
args=training_args,
train_dataset=tokenized_dataset,
)
print(trainer.accelerator.device)
# device("mps")
# Let's train!
trainer.train()
```
You can solve the issue by explicitly using `use_mps_device=True` or `no_cuda=True` on the `TrainingArgs`
PS: I am on the latest `transformers`, `datasets` and `accelerate` (pip install -U ....)
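For anyone adapting the snippet above, the workaround is a single flag on the training arguments (on 4.30.x it is `no_cuda=True`; newer releases are moving to a clearer `use_cpu=True` spelling, as discussed below):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    num_train_epochs=1,
    output_dir=".",
    no_cuda=True,  # forces CPU even when an MPS (or CUDA) device is available
)
```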
<|||||>Hey @tcapelle, thanks for the snippet. It helps a lot to solve the issue. I was able to reproduce the bug on the latest version of `transformers`. This bug is fixed on the main branch of `transformers` that you can install with `pip install git+https://github.com/huggingface/transformers.git`. Let me know if it works on your side. |
transformers | 24,696 | closed | Removing unnecessary `device=device` in modeling_llama.py | Removing unnecessary `device=device`
# What does this PR do?
Removing unnecessary `device=device` in the second argument to `torch.full`.
`torch.full` expects a scalar for the second argument: https://pytorch.org/docs/stable/generated/torch.full.html
So if a device tensor is passed to it, the tensor needs to be synced and sent to CPU first. On TPU, this blocks the tracing of the current iteration that should be overlapped with graph execution of the previous iteration.
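A small illustration of the pattern being described (variable names and values are illustrative; the exact line in `modeling_llama.py` may differ slightly):
```python
import torch

tgt_len, dtype, device = 8, torch.float16, "cpu"  # illustrative values

# Before: the fill value is itself a tensor placed on `device`, so torch.full has to
# sync it back to the host to read it as a scalar, which blocks lazy tracing on TPU/XLA.
mask = torch.full(
    (tgt_len, tgt_len),
    torch.tensor(torch.finfo(dtype).min, device=device),
    device=device,
)

# After: keep the fill value as a host-side scalar; only the output mask lives on `device`.
mask = torch.full((tgt_len, tgt_len), torch.finfo(dtype).min, device=device)
```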
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| 07-06-2023 18:57:02 | 07-06-2023 18:57:02 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 24,695 | open | Time Series Transformer - Dynamic Categorical Features | ### Feature request
I would like to have a Dynamic Categorical Feature Embedding option in TimeSeriesTransformerConfig
### Motivation
I didn't see any option in the TimeSeriesTransformerConfig where I could define an embedding of a Dynamic Categorical Feature. I'm working with sales data and holiday is an important element of sales, so all of my models handle the holidays with a dynamic embedding. Is it the case in Time Series Transformer too, and I'm just missing something?
### Your contribution
Happy to help, but would need some guidance on how it's handled currently. | 07-06-2023 18:35:31 | 07-06-2023 18:35:31 | cc @kashif <|||||>@guyko81 yes sure! I would be happy to help you get this done. I never found a good example of dynamic categorical features, so if you have some sample example that would be really helpful.
We can assume that the dataset has a key e.g.
```py
dynamic_static_categorical = [ [0, 2, 555], [23, 5, 66], ... [33, 4, 54]]
```
where we have a list of categories for each time point; the length of this array will match the length of the target values array in the time dimension.
Next we will need to specify the number of dynamic cat. features (3) in the example above and the cardinalities and dims of the corresponding features:
```
dynamic_cat_card = [50, 10, 1000]
dynamic_cat_dimns = [12, 16, 32]
```
Once we have that done on the config side, we can just add a corresponding `nn.Embedding` for each feature and concatenate the outputs to the input vector. If you open a PR please CC me and then I can help out!
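A rough sketch of what that could look like (names follow the proposal above; shapes and values are made up for illustration):
```python
import torch
import torch.nn as nn

dynamic_cat_card = [50, 10, 1000]   # cardinality of each dynamic categorical feature
dynamic_cat_dims = [12, 16, 32]     # embedding dimension of each feature

embedders = nn.ModuleList(
    [nn.Embedding(card, dim) for card, dim in zip(dynamic_cat_card, dynamic_cat_dims)]
)

# (batch, time, num_features) of category indices, e.g. a holiday id per time step
feats = torch.randint(0, 10, (4, 24, 3))
embedded = torch.cat(
    [emb(feats[..., i]) for i, emb in enumerate(embedders)], dim=-1
)  # (4, 24, sum(dynamic_cat_dims)) -> ready to be concatenated with the other time features
```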
Thank you!
<|||||>@kashif I have created a pull request https://github.com/huggingface/transformers/pull/24712
Still need to test it first, but I wanted you to have a look |
transformers | 24,694 | open | Make correct padding for text generation with GPT-NEO | ### System Info
- `transformers` version: 4.28.1
- Platform: macOS-13.2.1-x86_64-i386-64bit
- Python version: 3.10.8
- Huggingface_hub version: 0.13.4
- Safetensors version: not installed
- PyTorch version (GPU?): 2.0.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
@ArthurZucker @younesbelkada @gante
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
In order to generate text sequences with `GPT-NEO`, I first load all the relevant components for sequence generation with `GPTNeoForCausalLM`.
```
from transformers import AutoTokenizer, GPTNeoForCausalLM
import torch
from torch.nn import functional as F
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-125m")
model = GPTNeoForCausalLM.from_pretrained("EleutherAI/gpt-neo-125m")
```
There are two ways how I can generate `input_ids` and `attention_mask`.
1. I take the standard approach without padding
```
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
```
2. I use padding instead
```
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = 'left'
tokenizer.truncation_side = 'left'
no_items_for_history = 30
inputs = tokenizer.encode_plus("Hello, my dog is cute", max_length=no_items_for_history, padding='max_length', truncation=True, return_tensors="pt")
```
Then, for both approaches, I iteratively loop through everything in order to generate the sequence one token at a time.
```
input_ids = inputs['input_ids']
attention_mask = inputs['attention_mask']
for i in range(10):
if i == 0:
outputs = model(input_ids=input_ids, attention_mask=attention_mask, labels=inputs["input_ids"])
else:
outputs = model(input_ids=new_input_ids, attention_mask=attention_mask, past_key_values=past_key_values)
loss = outputs.loss
logits = outputs.logits[:, -1, :]
logits = F.softmax(logits, dim=1)
topk_values, topk_indices = torch.topk(logits, 5)
inputs_in_topk = torch.multinomial(topk_values, num_samples=1, replacement=True)
new_input_ids = torch.gather(topk_indices, 1, inputs_in_topk)
past_key_values = outputs.past_key_values
attention_mask = torch.concat((attention_mask, torch.ones(1, 1).to(attention_mask.device)), dim=1)
input_ids = torch.concat((input_ids, new_input_ids), dim=1)
print(tokenizer.decode(input_ids.tolist()[0], skip_special_tokens=True))
```
### Expected behavior
**Here is the problem:**
The starting `input_ids` and `attention_mask` for the first approach look like:
```
input_ids = tensor([[15496, 11, 616, 3290, 318, 13779]])
attention_mask = tensor([[1, 1, 1, 1, 1, 1]])
```
The output looks very sensible:
```
Hello, my dog is cute! This post is about dogs and cats
```
However, for the second approach the starting `input_ids` and `attention_mask` look like
```
input_ids = tensor([[50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 15496, 11, 616, 3290, 318, 13779]])
attention_mask = tensor([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1]])
```
and it always generates nonsense like
```
Hello, my dog is cute pet is my pet pet pet is my dog is
```
**Question:** Do you know how to make it work with padding, i.e., the second approach?
| 07-06-2023 16:48:27 | 07-06-2023 16:48:27 | @mzamini92 Many thanks for getting back to me.
I know that the padding tokens should be ignored, when doing the generation. (However, it will be important for batch processing if there are multiple inputs).
What is strange is that, if I follow the approach with different models the output makes sense for both approaches, yet here the second approach is not working for gpt-neo-125m.<|||||>Hey @junoriosity ๐
> if I follow the approach with different models the output makes sense for both approaches, yet here the second approach is not working for gpt-neo-125m.
Masking is not a perfect operation, as it adds a very large negative number to the attention scores. While the impact of masked tokens is very very small, it still exists. In some cases, it may change an output token at generation time, which may derail (or improve!) the generation process.
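Schematically (an illustration of the idea, not the model's exact code):
```python
import torch

scores = torch.randn(1, 4, 6, 6)                       # (batch, heads, query, key) attention scores
attention_mask = torch.tensor([0, 0, 1, 1, 1, 1.0])    # two padding tokens on the left

# Padding is handled by adding a very large negative bias, not by removing the tokens
scores = scores + (1.0 - attention_mask) * torch.finfo(scores.dtype).min
probs = scores.softmax(dim=-1)                          # masked positions get ~0 weight, not exactly 0
```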
Have a go with other prompts and other model sizes for `GPTNeo`. Unless this phenomenon consistently happens, I'd say there is nothing to worry about :)<|||||>Hey @gante
I tried the second approach with `EleutherAI/gpt-neo-1.3B` and got
```
Hello, my dog is cute dog is cute is cute is cute is cute is
```
so no improvement ...<|||||>@junoriosity we would need a much larger sample size to conclude it is not working correctly. And we can only afford to look deeper into the issue after we confirm that it is indeed an issue :)<|||||>Hi @junoriosity
Is there any reason to force the padding_side to be `left` ? removing the lines
```python
tokenizer.padding_side = 'left'
tokenizer.truncation_side = 'left'
```
Leads to "better" output (the default `padding_side` is `right` for that model):
```bash
>>> Hello, my dog is cute a little dog. He is so cute cute cute
>>> Hello, my dog is cuteh! She, and I have been in a
```
Maybe there is something wrong in the way we compute the position ids. Consider the case where `padding_side=left` and assume your text has 10 tokens and you want to add padding tokens on the first 20 tokens.
Currently:
```python
if position_ids is None:
position_ids = torch.arange(past_length, input_shape[-1] + past_length, dtype=torch.long, device=device)
position_ids = position_ids.unsqueeze(0).view(-1, input_shape[-1])
```
The position ids will be computed as such regardless the index of the first non-padding token. Maybe this is the culprit @gante - similarly as https://github.com/huggingface/transformers/pull/22382<|||||>Hi @younesbelkada
even then there are repetitions etc.
I tried the same thing with the smallest OPT-125m model and it worked like a charm, as it did for other models. The only model that causes me trouble with this approach is GPT-Neo.
I use this padding-right strategy to align a batch of sequences to the right. I thought this makes most sense. So far at least it works quite nicely for all other models.
@gante 1.3 B is already the second largest, the largest being 2.7 B. Hence, there is no way to get much beyond that and I also doubt that just doubling the size will change much.<|||||>@younesbelkada oh definitely, the position ids should be computed from the attention mask in `prepare_inputs_for_generation` (like in the PR you linked)! That could be the cause for the mismatches<|||||>@gante @younesbelkada Okay, since I here from these things for the first time, could you
- tell me what it means
- how I could use it to solve the issue?<|||||>@junoriosity it appears there is no issue at all at the end. GPTNeo seems to already support the creation of correct position ids [here](https://github.com/huggingface/transformers/blob/main/src/transformers/models/gpt_neo/modeling_gpt_neo.py#L682-L700)
if you modify your script as follows:
```python
from transformers import AutoTokenizer, GPTNeoForCausalLM
import torch
from torch.nn import functional as F
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-125m")
model = GPTNeoForCausalLM.from_pretrained("EleutherAI/gpt-neo-125m")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = 'left'
tokenizer.truncation_side = 'left'
no_items_for_history = 30
inputs = tokenizer.encode_plus("Hello, my dog is cute", max_length=no_items_for_history, padding='max_length', truncation=True, return_tensors="pt")
input_ids = inputs['input_ids']
attention_mask = inputs['attention_mask']
outputs = model.generate(input_ids=input_ids, attention_mask=attention_mask, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
>>> Hello, my dog is cute and I'm going to give you some tips on how to get your dog to sleep.
I'm going to give you some tips on how to get your dog to sleep.
```
Now in your case you need to properly call `prepare_inputs_for_generation` as @gante suggested to create the correct position ids and pass it to your model during forward pass. Let me get back to you with the updated script and explanation<|||||>@younesbelkada You are right, that this is a very elegant solution. :)
However, I would like this "step-by-step" solution to extract some information about the state of the generation.
Hence, is there any possibility to do it that way? Again, for other models like OPT-125m this was no problem.
Going back to the explanation; let's first try to understand what is the purpose of `position_ids`. These ids indicates the model the positional information of the input tokens. This information is extremely important for the model to capture the positional information of the input tokens. Here: https://github.com/huggingface/transformers/blob/abaca9f9432a84cfaa95531de4c72334f38a42f2/src/transformers/models/gpt_neo/modeling_gpt_neo.py#L582 the model extracts the so called "positional embeddings" that are added later on together with the input embeddings, to produce the first hidden states here: https://github.com/huggingface/transformers/blob/abaca9f9432a84cfaa95531de4c72334f38a42f2/src/transformers/models/gpt_neo/modeling_gpt_neo.py#L583
As you can see above, if no `position_ids` is passed to the model, it will create a new one: https://github.com/huggingface/transformers/blob/abaca9f9432a84cfaa95531de4c72334f38a42f2/src/transformers/models/gpt_neo/modeling_gpt_neo.py#L551 with the `torch.arange(xxx)` method.
Blindly creating position ids like that can lead to silent bugs (as described on your issue) - for a classic input (no padding involved) there is no problem at all using `torch.arange(xxx)` as there is no special token we want the model to ignore during its forward pass.
Now assume your input is (consider `[PAD]` as the padding token produced by the tokenizer):
```python
"[PAD]ย [PAD] Hello my name is"
```
Therefore the (dummy) input ids would look like (assuming `0` is the pad token id):
```python
[ 0 , 0, 45, 32, 2, 86, ..]
```
If the position_ids are blindly created, it will result in the following :
```python
torch.Tensor([0, 1, 2, 3, 4, 5])
```
This is not correct and leads to wrong computations: the attention mask makes the model ignore the first two tokens, yet the first non-padding token gets position ID `2` when it should in fact be `0`. Therefore one always needs to compute the position ids separately before each generation step to handle corner cases such as the one you are facing.
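For the dummy example above, the usual trick is to derive the position ids from the attention mask instead (illustrative sketch):
```python
import torch

attention_mask = torch.tensor([[0, 0, 1, 1, 1, 1]])      # two left-padding tokens

naive_position_ids = torch.arange(6).unsqueeze(0)         # [[0, 1, 2, 3, 4, 5]] -> wrong with left padding

position_ids = attention_mask.long().cumsum(-1) - 1       # [[-1, -1, 0, 1, 2, 3]]
position_ids.masked_fill_(attention_mask == 0, 1)         # [[ 1,  1, 0, 1, 2, 3]]; pad positions are masked out anyway
```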
`.generate()` API does everything for you under the hood. Before each forward pass, it calls a method called `prepare_inputs_for_generation` that ideally handles all these scenarios: https://github.com/huggingface/transformers/blob/abaca9f9432a84cfaa95531de4c72334f38a42f2/src/transformers/generation/utils.py#L2359 including correctly shifting the position ids if any.
Going back to your case, the fix is to prepare the model's input before the generation step 1, then at each generation step iteratively call `model.prepare_inputs_for_generation()` with the correct arguments and correctly pass the produced `position_ids`
Changing the script to the one below:
<details><summary>Working script</summary>
```python
from transformers import AutoTokenizer, GPTNeoForCausalLM
import torch
from torch.nn import functional as F
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-125m")
model = GPTNeoForCausalLM.from_pretrained("EleutherAI/gpt-neo-125m")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = 'left'
tokenizer.truncation_side = 'left'
no_items_for_history = 30
inputs = tokenizer.encode_plus("Hello, my dog is cute", max_length=no_items_for_history, padding='max_length', truncation=True, return_tensors="pt")
input_ids = inputs['input_ids']
attention_mask = inputs['attention_mask']
position_ids = model.prepare_inputs_for_generation(input_ids, attention_mask=attention_mask, past_key_values=None, position_ids=None)["position_ids"]
for i in range(50):
if i == 0:
outputs = model(input_ids=input_ids, attention_mask=attention_mask, position_ids=position_ids)
past_key_values = None
else:
outputs = model(**next_stage_input)
loss = outputs.loss
logits = outputs.logits[:, -1, :]
logits = F.softmax(logits, dim=1)
topk_values, topk_indices = torch.topk(logits, 5)
inputs_in_topk = torch.multinomial(topk_values, num_samples=1, replacement=True)
new_input_ids = torch.gather(topk_indices, 1, inputs_in_topk)
past_key_values = outputs.past_key_values
attention_mask = torch.concat((attention_mask, torch.ones(1, 1).to(attention_mask.device)), dim=1)
input_ids = torch.concat((input_ids, new_input_ids), dim=1)
next_stage_input = model.prepare_inputs_for_generation(input_ids, attention_mask=attention_mask, past_key_values=None, position_ids=None)
print(tokenizer.decode(input_ids.tolist()[0], skip_special_tokens=True))
```
</details>
seems to produce correct output. Let us know with @gante if you have more questions
The reason it works correctly for OPT is that the positional embeddings are computed directly using the attention mask, which indicates where is the first non-padding token: https://github.com/huggingface/transformers/blob/abaca9f9432a84cfaa95531de4c72334f38a42f2/src/transformers/models/opt/modeling_opt.py#L653
Also make sure to use the latest version of `transformers`:
```bash
pip install --upgrade transformers
```<|||||>@younesbelkada @gante Wow, this makes things a lot better by now. ๐ค
However, please correct me if I am wrong, but we do not use `past_key_values`, which will force us to do an enormous amount of calculation again and again.
I tried some things to make use of it, but I did not succeed. Do you have an idea how to make the above code work while using `past_key_values` for speeding up the code?<|||||>@younesbelkada @gante I found how to solve it, you have to enter `position_id` into model, but this becomes
```
position_ids = position_ids[:, -1:] + 1
```
Then things work like a charm.
In any case, many thanks for all your support. Without your effort, this progress wouldn't have been possible. ๐ค<|||||>That's great to hear ! Thanks very much @junoriosity <|||||>Also I believe we should support this, same way as it was done [here](https://github.com/raghavanone/transformers/commit/3d42b725fe357d45fe4f745e1bf700a09f06c1cc). I'll open a PR for both as the changes were reverted because TF version were not updated! I'll take care of it ๐ <|||||>@ArthurZucker That is awesome. Just out of curiosity: How long do you think it will take until a new `transformer` version with the changes is realeased? :)<|||||>Oups as @younes mentioned, the automatic creation of position ids seems to be correct for GPTNeo (not for GPT2).
TLDR; position ids should be created and correctly support past key values and use of use_cache. If this is not, the case then should fix it! <|||||>@ArthurZucker Okay, could you perhaps outline with an example how you mean it? I am a bit lost due to my lack of experience for the specific requirements.<|||||>Hi @ArthurZucker could you perhaps get back to me on that matter? ๐ค
Personally, I would appreciate it a lot if I could handle GPTNeo just like the OPT models, as this would facilitate my life a lot.<|||||>Hey! really sorry had a bit of a sprint this week! Will get back to you soon ๐ <|||||>Hey @junoriosity, what I meant is that in this case, the problem does not seem to be from transformers:
https://github.com/huggingface/transformers/blob/f092997ca669750d4f32ada127b2624bd450aee5/src/transformers/models/gpt_neo/modeling_gpt_neo.py#L682-L707
In the above snippet, we can see that the attention mask is correctly taken into account to create the positional ids. I just needed to check whether this was the case or not.
Hope this answers your final questions! <|||||>@ArthurZucker So long story short:
I can proceed without position_ids for `OPT`, but have to implement it for `GPT-NEO`.
Is that a correct summary?<|||||>If what you are doing is:
> Then for both approaches, I iteratively loop through everything in order generate the sequence on token at a time.
then you just need to make sure your code is adapted yes.
The other solution is for us to include the correct position id creation in the Model class instead of when `prepare_inputs_for_generation`. This might be better, wdyt @gante (gpt2 also needs this as it was reverted since tf does not use it) <|||||>@junoriosity @ArthurZucker I'd favor adding it in `prepare_inputs_for_generation`.
(Adding it in the model class is more elegant, but most models do it in `prepare_inputs_for_generation`. Keeping the same structure makes maintenance easier :) )<|||||>@gante Terrific, I am always a bit impatient, but is there a realistic time range until when this would be included in the library? ๐ค<|||||>@junoriosity our bandwidth to retroactively add features/fix bugs runs short for the foreseeable months. My suggestion would be to have a go at it and open a PR :) <|||||>Sure, but for training could this not be a problem? ๐ |
transformers | 24,693 | closed | TF : tensor mismatch error in training with opus100 and t5-small | ### System Info
`transformers ==4.31.0.dev0`
`tensorflow-macos==2.10.0`
Hello there! ๐
Thanks for creating examples for the Translation task!
## Context
Im going through run_translation.py example modified with [opus100](https://huggingface.co/datasets/opus100) dataset.
Launching the script with flags listed below.
```
python train_model.py \
--model_name_or_path t5-small \
--do_train \
--do_eval \
--source_lang en \
--target_lang ro \
--source_prefix "translate English to Romanian: " \
--dataset_name opus100 \
--dataset_config_name en-ro \
--output_dir /tmp/tst-translation \
--per_device_train_batch_size=16 \
--per_device_eval_batch_size=16 \
--overwrite_output_dir
```
## Error
All dataset feature engineering seems to display well, It starts training but at some point, there is a **tensor mismatch** error in training.
```
Shape of tensor args_0 [16,128] is not compatible with expected shape [16,64].
[[{{node EnsureShape_1}}]]
[[MultiDeviceIteratorGetNextFromShard]]
[[RemoteCall]]
[[IteratorGetNext]] [Op:__inference_train_function_17297]
```
Any hints on how I should reshape this? At some point I thought it was something with preprocessing, but it starts training, so I'm a little bit confused... I also explored [wmt16](https://huggingface.co/datasets/wmt16) (example tested and working) during #24579, and when I go to the Hub it seems to have the same structure and partitions as opus100.
Thanks for the time dedicated to this!๐ and for the help!
Looking forward to get all this working, and share it in [PyCon Spain keynote](https://github.com/SoyGema/The-Lord-of-The-Words-The-two-frameworks#the-lord-of-the-words--the-two-frameworks) this year!
### Who can help?
@gante
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. Launch training with config
```
python train_model.py \
--model_name_or_path t5-small \
--do_train \
--do_eval \
--source_lang en \
--target_lang ro \
--source_prefix "translate English to Romanian: " \
--dataset_name opus100 \
--dataset_config_name en-ro \
--output_dir /tmp/tst-translation \
--per_device_train_batch_size=16 \
--per_device_eval_batch_size=16 \
--overwrite_output_dir
```
### Expected behavior
Training is not interrupted.
| 07-06-2023 16:47:43 | 07-06-2023 16:47:43 | This looks like a dataset issue, which is not in the scope of `transformers` GitHub pages.
However, if you can provide a full log error + the content of `train_model.py`, we might be able to have a quick look.<|||||>Hello there @ydshieh . Thanks for your time ๐๐
You can find full script [here](https://github.com/SoyGema/The-Lord-of-The-Words-The-two-frameworks/blob/main/src/models/train_model.py)
Full Log
```
07/06/2023 17:59:34 - INFO - __main__ - Training/evaluation parameters TFTrainingArguments(
_n_gpu=-1,
adafactor=False,
adam_beta1=0.9,
adam_beta2=0.999,
adam_epsilon=1e-08,
auto_find_batch_size=False,
bf16=False,
bf16_full_eval=False,
data_seed=None,
dataloader_drop_last=False,
dataloader_num_workers=0,
dataloader_pin_memory=True,
ddp_backend=None,
ddp_broadcast_buffers=None,
ddp_bucket_cap_mb=None,
ddp_find_unused_parameters=None,
ddp_timeout=1800,
debug=[],
deepspeed=None,
disable_tqdm=False,
do_eval=True,
do_predict=False,
do_train=True,
eval_accumulation_steps=None,
eval_delay=0,
eval_steps=None,
evaluation_strategy=no,
fp16=False,
fp16_backend=auto,
fp16_full_eval=False,
fp16_opt_level=O1,
fsdp=[],
fsdp_config={'fsdp_min_num_params': 0, 'xla': False, 'xla_fsdp_grad_ckpt': False},
fsdp_min_num_params=0,
fsdp_transformer_layer_cls_to_wrap=None,
full_determinism=False,
gcp_project=None,
gradient_accumulation_steps=1,
gradient_checkpointing=False,
greater_is_better=None,
group_by_length=False,
half_precision_backend=auto,
hub_model_id=None,
hub_private_repo=False,
hub_strategy=every_save,
hub_token=<HUB_TOKEN>,
ignore_data_skip=False,
include_inputs_for_metrics=False,
jit_mode_eval=False,
label_names=None,
label_smoothing_factor=0.0,
learning_rate=5e-05,
length_column_name=length,
load_best_model_at_end=False,
local_rank=-1,
log_level=passive,
log_level_replica=warning,
log_on_each_node=True,
logging_dir=/tmp/tst-translation/runs/Jul06_17-59-34_mbp-de-gema.lan,
logging_first_step=False,
logging_nan_inf_filter=True,
logging_steps=500,
logging_strategy=steps,
lr_scheduler_type=linear,
max_grad_norm=1.0,
max_steps=-1,
metric_for_best_model=None,
mp_parameters=,
no_cuda=False,
num_train_epochs=3.0,
optim=adamw_hf,
optim_args=None,
output_dir=/tmp/tst-translation,
overwrite_output_dir=True,
past_index=-1,
per_device_eval_batch_size=16,
per_device_train_batch_size=16,
poly_power=1.0,
prediction_loss_only=False,
push_to_hub=False,
push_to_hub_model_id=None,
push_to_hub_organization=None,
push_to_hub_token=<PUSH_TO_HUB_TOKEN>,
ray_scope=last,
remove_unused_columns=True,
report_to=['mlflow', 'tensorboard'],
resume_from_checkpoint=None,
run_name=/tmp/tst-translation,
save_on_each_node=False,
save_safetensors=False,
save_steps=500,
save_strategy=steps,
save_total_limit=None,
seed=42,
sharded_ddp=[],
skip_memory_metrics=True,
tf32=None,
torch_compile=False,
torch_compile_backend=None,
torch_compile_mode=None,
torchdynamo=None,
tpu_metrics_debug=False,
tpu_name=None,
tpu_num_cores=None,
tpu_zone=None,
use_ipex=False,
use_legacy_prediction_loop=False,
use_mps_device=False,
warmup_ratio=0.0,
warmup_steps=0,
weight_decay=0.0,
xla=False,
xpu_backend=None,
)
07/06/2023 17:59:35 - INFO - datasets.info - Loading Dataset Infos from /Users/gema/.cache/huggingface/modules/datasets_modules/datasets/opus100/256f3196b69901fb0c79810ef468e2c4ed84fbd563719920b1ff1fdc750f7704
07/06/2023 17:59:35 - INFO - datasets.builder - Overwrite dataset info from restored data version if exists.
07/06/2023 17:59:35 - INFO - datasets.info - Loading Dataset info from /Users/gema/.cache/huggingface/datasets/opus100/en-ro/0.0.0/256f3196b69901fb0c79810ef468e2c4ed84fbd563719920b1ff1fdc750f7704
07/06/2023 17:59:35 - WARNING - datasets.builder - Found cached dataset opus100 (/Users/gema/.cache/huggingface/datasets/opus100/en-ro/0.0.0/256f3196b69901fb0c79810ef468e2c4ed84fbd563719920b1ff1fdc750f7704)
07/06/2023 17:59:35 - INFO - datasets.info - Loading Dataset info from /Users/gema/.cache/huggingface/datasets/opus100/en-ro/0.0.0/256f3196b69901fb0c79810ef468e2c4ed84fbd563719920b1ff1fdc750f7704
100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 3/3 [00:00<00:00, 33.24it/s]
loading configuration file t5-small/config.json
Model config T5Config {
"_name_or_path": "t5-small",
"architectures": [
"T5ForConditionalGeneration"
],
"d_ff": 2048,
"d_kv": 64,
"d_model": 512,
"decoder_start_token_id": 0,
"dense_act_fn": "relu",
"dropout_rate": 0.1,
"eos_token_id": 1,
"feed_forward_proj": "relu",
"initializer_factor": 1.0,
"is_encoder_decoder": true,
"is_gated_act": false,
"layer_norm_epsilon": 1e-06,
"model_type": "t5",
"n_positions": 512,
"num_decoder_layers": 6,
"num_heads": 8,
"num_layers": 6,
"output_past": true,
"pad_token_id": 0,
"relative_attention_max_distance": 128,
"relative_attention_num_buckets": 32,
"task_specific_params": {
"summarization": {
"early_stopping": true,
"length_penalty": 2.0,
"max_length": 200,
"min_length": 30,
"no_repeat_ngram_size": 3,
"num_beams": 4,
"prefix": "summarize: "
},
"translation_en_to_de": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to German: "
},
"translation_en_to_fr": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to French: "
},
"translation_en_to_pt": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to Portuguese: "
},
"translation_en_to_ro": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to Romanian: "
}
},
"transformers_version": "4.31.0.dev0",
"use_cache": true,
"vocab_size": 32128
}
loading file spiece.model
loading file tokenizer.json
loading file added_tokens.json
loading file special_tokens_map.json
loading file tokenizer_config.json
07/06/2023 17:59:36 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /Users/gema/.cache/huggingface/datasets/opus100/en-ro/0.0.0/256f3196b69901fb0c79810ef468e2c4ed84fbd563719920b1ff1fdc750f7704/cache-107d5d31727344a2.arrow
Running tokenizer on validation dataset: 0%| | 0/2000 [00:00<?, ? examples/s]07/06/2023 17:59:36 - INFO - datasets.arrow_dataset - Caching processed dataset at /Users/gema/.cache/huggingface/datasets/opus100/en-ro/0.0.0/256f3196b69901fb0c79810ef468e2c4ed84fbd563719920b1ff1fdc750f7704/cache-e8cb6f4c7ff7ad3e.arrow
Tensorflow: setting up strategy
loading weights file t5-small/model.safetensors
Generate config GenerationConfig {
"_from_model_config": true,
"decoder_start_token_id": 0,
"eos_token_id": 1,
"pad_token_id": 0,
"transformers_version": "4.31.0.dev0"
}
Loaded 60,506,624 parameters in the TF 2.0 model.
All PyTorch model weights were used when initializing TFT5ForConditionalGeneration.
All the weights of TFT5ForConditionalGeneration were initialized from the PyTorch model.
If your task is similar to the task the model of the checkpoint was trained on, you can already use TFT5ForConditionalGeneration for predictions without further training.
You're using a T5TokenizerFast tokenizer. Please note that with a fast tokenizer, using the `__call__` method is faster than using a method to encode the text followed by a call to the `pad` method to get a padded encoding.
No loss specified in compile() - the model's internal loss computation will be used as the loss. Don't panic - this is a common way to train TensorFlow models in Transformers! To disable this behaviour please pass a loss argument, or explicitly pass `loss=None` if you do not want your model to compute a loss. You can also specify `loss='auto'` to get the internal loss without printing this info string.
07/06/2023 17:59:38 - INFO - __main__ - ***** Running training *****
07/06/2023 17:59:38 - INFO - __main__ - Num examples = 1000000
07/06/2023 17:59:38 - INFO - __main__ - Num Epochs = 3.0
07/06/2023 17:59:38 - INFO - __main__ - Instantaneous batch size per device = 16
07/06/2023 17:59:38 - INFO - __main__ - Total train batch size = 16
07/06/2023 17:59:38 - INFO - __main__ - Total optimization steps = 187500
2023-07-06 17:59:38.328410: W tensorflow/core/platform/profile_utils/cpu_utils.cc:128] Failed to get CPU frequency: 0 Hz
2023-07-06 17:59:38.353957: W tensorflow/core/framework/dataset.cc:769] Input of GeneratorDatasetOp::Dataset will not be optimized because the dataset does not implement the AsGraphDefInternal() method needed to apply optimizations.
Epoch 1/3
18/62500 [..............................] - ETA: 21:26:35 - loss: 2.2246Traceback (most recent call last):
File "/Users/gema/Documents/The-Lord-of-The-Words-The-two-frameworks/src/models/train_model.py", line 730, in <module>
main()
File "/Users/gema/Documents/The-Lord-of-The-Words-The-two-frameworks/src/models/train_model.py", line 683, in main
history = model.fit(tf_train_dataset, epochs=int(training_args.num_train_epochs), callbacks=callbacks)
File "/Users/gema/miniforge3/lib/python3.9/site-packages/keras/utils/traceback_utils.py", line 70, in error_handler
raise e.with_traceback(filtered_tb) from None
File "/Users/gema/miniforge3/lib/python3.9/site-packages/tensorflow/python/eager/execute.py", line 54, in quick_execute
tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
tensorflow.python.framework.errors_impl.InvalidArgumentError: Graph execution error:
Shape of tensor args_0 [16,128] is not compatible with expected shape [16,64].
[[{{node EnsureShape_1}}]]
[[MultiDeviceIteratorGetNextFromShard]]
[[RemoteCall]]
[[IteratorGetNext]] [Op:__inference_train_function_17297]
```
For the future, I will go with the tailored example for the [forum](https://discuss.huggingface.co/) and maybe shall be redirected there. Let me know if at some point this is a suitable issue for [datasets](https://github.com/huggingface/datasets) in this case. ๐งญ๐บ๏ธ
Thanks for the time dedicated to this, I really appreciate it, and my apologies for the inconvenience.<|||||>@Rocketknight1
Do you know why
```python
if "cols_to_retain" in list(inspect.signature(dataset._get_output_signature).parameters.keys()):
output_signature, _ = dataset._get_output_signature(
dataset,
batch_size=None,
collate_fn=collate_fn,
collate_fn_args=collate_fn_args,
cols_to_retain=model_inputs,
)
```
gives `output_signature`
```
{'input_ids': TensorSpec(shape=(None, None), dtype=tf.int64, name=None), 'attention_mask': TensorSpec(shape=(None, None), dtype=tf.int64, name=None), 'labels': TensorSpec(shape=(None, 64), dtype=tf.int64, name=None), 'decoder_input_ids': TensorSpec(shape=(None, 64), dtype=tf.int64, name=None)}
```
which has a fixed sequence length `64` in `labels` and `decoder_input_ids`?
FYI: the sequences in `dataset` have different lengths in each element.<|||||>@ydshieh We actually generate those shapes empirically by grabbing several batches from the dataset, which is not ideal but usually works. Do almost all samples from the dataset have a post-padding decoder_input_ids length of 64, but some don't? That might trigger this issue. If that turns out to be the case, let me know - I've been wary of that code for a while, so this might be a good time to try a fix!<|||||>Hello there. Thanks again for keeping this issue open. ๐
Managed to solve the issue.
I'm putting it here before closing. Hopefully this can shed some light on the question posted.
#### 1. Script [train_model.py ](https://github.com/SoyGema/The-Lord-of-The-Words-The-two-frameworks/blob/main/src/models/train_model.py#L418)
What I understand is that the `preprocess_function` , We call the [tokenizer](https://github.com/SoyGema/The-Lord-of-The-Words-The-two-frameworks/blob/de4a08eda2a2de2695f4e3ed12b571bdb3dc9a8f/src/models/train_model.py#L418), that is having the padding and the max length associated
1.a) Initially, I set `max_source_length`, which fixes the length **after** tokenization to 64. According to the docstring, longer sequences are _truncated_ and shorter ones are _padded_. IT TRAINS CORRECTLY. But then I thought that this could (please correct me if I'm wrong) cut longer sequences, so longer sentences would be truncated, which might hurt the model's understanding of context when translating longer sentences.
2.b ) Then I discovered [`pad_to_max_length`](https://github.com/SoyGema/The-Lord-of-The-Words-The-two-frameworks/blob/de4a08eda2a2de2695f4e3ed12b571bdb3dc9a8f/src/models/train_model.py#L183) . What Im assuming here is that it pads taking into account the max sequence length, so I tried to set it to `True` and `max_target_length ` to `None` . IT SEEMS TO BE TRAINING CORRECTLY as well. What Im understanding here is that Im padding WRT the max length.
Either way, I was able to TRAIN the model with these two options (a minimal sketch of both settings is shown below).
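Both boil down to how the tokenizer call in the preprocessing function is configured (placeholder inputs, illustrative only):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("t5-small")
inputs = ["translate English to Romanian: Hello world"]  # stand-in for the prefixed sources

max_source_length = 64      # 1.a) hard cap: longer sentences are truncated, shorter ones padded
pad_to_max_length = True    # 2.b) pad to max_length instead of dynamic per-batch padding

padding = "max_length" if pad_to_max_length else False
model_inputs = tokenizer(inputs, max_length=max_source_length, padding=padding, truncation=True)
print(len(model_inputs["input_ids"][0]))  # 64
```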
If anyone wants to keep this conversation going or clarify some wrong hypothesis I might have, please come by [#2](https://github.com/SoyGema/The-Lord-of-The-Words-The-two-frameworks/issues/2) ๐ as I don't consider it proper to keep this issue open here. ๐๐ค
Thanks @ydshieh & @Rocketknight1
|
transformers | 24,692 | closed | Breaking change in upcoming PyTorch version for weight norm and loading pretrained models | Probably to be fixed around here: https://github.com/huggingface/transformers/blob/bbf3090848cf0ceff98f9465691e9ecce63684a1/src/transformers/modeling_utils.py#L3016
See this issue on PyTorch:
https://github.com/pytorch/pytorch/issues/102999#issuecomment-1623975562
| 07-06-2023 16:37:35 | 07-06-2023 16:37:35 | Hi, what's your `transformers` version?
If you use a dev version with the commit of this PR #https://github.com/huggingface/transformers/pull/24030 included, it should be fine. Let me know if not, thanks. |
transformers | 24,691 | closed | Fix integration with Accelerate and failing test | # What does this PR do?
This PR brings back the logic when gathering and calculating metrics, which was borked with https://github.com/huggingface/transformers/pull/24028. Proof is the fact that tests now pass that were failing related to the Trainer
Fixes # (issue)
Solves https://github.com/huggingface/transformers/issues/24391
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
As Sylvain is on vacation, cc @amyeroberts
| 07-06-2023 16:23:46 | 07-06-2023 16:23:46 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@muellerzr I'm not sure if it's this or the previous PR that is causing the issue, but it still seems to be hanging on moving the loss tensor to the cpu.<|||||>@winglian can you provide a reproducer?<|||||>I'll have to distill something down later, but I can confirm the issue happens on multi-gpu, but when using single gpu, it seems to work properly.<|||||>A repr will definitely be needed here, because so far at least all the official example scripts don't hang for me. (Though there was something with gradient accumulation fixed with https://github.com/huggingface/transformers/pull/24756). Ping me once you have that and I can take a look. (Though sooner is better so I can try and have it for the next release ๐ ) |
transformers | 24,690 | closed | [DO NOT MERGE] Test PR for studying #24622 | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
--> | 07-06-2023 13:59:59 | 07-06-2023 13:59:59 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24690). All of your documentation changes will be reflected on that endpoint.<|||||>Thanks ๐ค |
transformers | 24,689 | closed | Avoid import `sentencepiece_model_pb2` in `utils.__init__.py` | # What does this PR do?
Otherwise, trying to import anything from `utils` will fail if protobuf is not installed.
More details in the comment. | 07-06-2023 13:14:04 | 07-06-2023 13:14:04 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 24,688 | open | is there any plan to add falcon to instructblip? | ### Model description
InstructBLIP seems to be really cool. Is there any possibility of adding Falcon to the pipeline in the future? Currently we have options for Flan-T5 and Vicuna. The problem is that Vicuna cannot be used commercially and Flan-T5's performance is poor; Vicuna is not that great either, so adding Falcon to the pipeline would massively boost the performance of InstructBLIP.
In case I want to add it myself, how can I do that? The code base seems to be really heavy.
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
https://huggingface.co/tiiuae
@NielsRogge @DanielHesslow @guipenedo @slippylolo @FalconLLM @mickbo32 @karnakar | 07-06-2023 13:10:46 | 07-06-2023 13:10:46 | cc @NielsRogge <|||||>However, falcon port is still in WIP
https://github.com/huggingface/transformers/pull/24523<|||||>thanks for replying @ydshieh , any idea when it'll come to the instructblip pipeline?, any tentative timeline for the same?
also, is there a way to add the falcon locally by our side to the pipeline in place of vicuna/flant-5?
plan is to plugin this falcon (https://huggingface.co/tiiuae/falcon-40b) to the instructblip pipeline somehow & then we can choose modeltype="falcon"
model, vis_processors, _ = load_model_and_preprocess(name="blip2_vicuna_instruct", model_type="vicuna7b", is_eval=True, device=device)
|
transformers | 24,687 | open | OSError: Error no file named pytorch_model.bin | ### System Info
transformers==4.30.2
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
(base) /mnt/workspace/lawyer-llama/demo> python demo_web.py --port 7863 --checkpoint /mnt/workspace/lawyer-llama/lawyer-llama-13b-beta1.0
Loading model...
Traceback (most recent call last):
File "/mnt/workspace/lawyer-llama/demo/demo_web.py", line 52, in
model = LlamaForCausalLM.from_pretrained(checkpoint, device_map="auto", torch_dtype=torch.float16)
File "/home/pai/lib/python3.9/site-packages/transformers/modeling_utils.py", line 2449, in from_pretrained
raise EnvironmentError(
OSError: Error no file named pytorch_model.bin, tf_model.h5, model.ckpt.index or flax_model.msgpack found in directory /mnt/workspace/lawyer-llama/lawyer-llama-13b-beta1.0.
transformers==4.30.2
### Expected behavior
help me๏ผ
(base) /mnt/workspace/lawyer-llama/demo> python demo_web.py --port 7863 --checkpoint /mnt/workspace/lawyer-llama/lawyer-llama-13b-beta1.0
Loading model...
Traceback (most recent call last):
File "/mnt/workspace/lawyer-llama/demo/demo_web.py", line 52, in
model = LlamaForCausalLM.from_pretrained(checkpoint, device_map="auto", torch_dtype=torch.float16)
File "/home/pai/lib/python3.9/site-packages/transformers/modeling_utils.py", line 2449, in from_pretrained
raise EnvironmentError(
OSError: Error no file named pytorch_model.bin, tf_model.h5, model.ckpt.index or flax_model.msgpack found in directory /mnt/workspace/lawyer-llama/lawyer-llama-13b-beta1.0.
transformers==4.30.2 | 07-06-2023 10:28:32 | 07-06-2023 10:28:32 | Hi @wangzff, thanks for raising this issue.
If you look at the contents of the checkpoint passed into the script, do you see the model weights? i.e. what is the output of `ls -al /mnt/workspace/lawyer-llama/lawyer-llama-13b-beta1.0`? |
transformers | 24,686 | closed | ๐ [i18n-KO] Updated Korean `serialization.md` | <!-- Please title the PR "๐ [i18n-KO] Translated `<your_file>.md` to Korean" -->
# What does this PR do?
Updated the `serialization.md` file for the Korean documentation.
Thank you in advance for your review.
Part of https://github.com/huggingface/transformers/issues/20179
<!-- A record is kept on the main issue! Please remove this comment when practicing on the PseudoLab repo. :smile: -->
## Before reviewing
- [x] Check for missing / redundant translations
- [x] Grammar check
- [x] Review or add new terms to the glossary
- [x] Check inline TOC (e.g. `[[lowercased-header]]`)
- [x] Check live-preview for gotchas
## Who can review? (Initial)
<!-- 1. Only reveal the comment below asking PseudoLab team members for a review after all of the checks above are complete! -->
Team PseudoLab, may you please review this PR? @0525hhgus, @KIHOON71, @sim-so, @gabrielwithappy, @HanNayeoniee, @wonhyeongseo, @jungnerd
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review? (Final)
<!-- 2. Only reveal the comment below asking Hugging Face staff for a review after the review with the PseudoLab team members is finished! -->
@sgugger, @ArthurZucker, @eunseojo May you please review this PR? | 07-06-2023 09:04:51 | 07-06-2023 09:04:51 | _The documentation is not available anymore as the PR was closed or merged._<|||||>์ง์์ ์ธ ๋ฒ์ญ ์์ ์์
๋งค์ฐ ๋ฉ์ง๋๋ค! ๋๋ถ์ ONNX ๋ฌธ์๋ฅผ ๊ผผ๊ผผํ ์ฝ์ ์ ์์์ต๋๋ค. ์์ ํ ๋ถ๋ถ์ ์์ด๋ณด์
๋๋ค!<|||||>@sgugger, @ArthurZucker, @eunseojo May you please review this PR?
The difference in length is due to an overhaul in the English document. I will try to use the same PR steps as https://github.com/huggingface/transformers/issues/20179#issuecomment-1528191933 for easier review next time.
Thank you so much for your support. I hope you have a great weekend! โค๏ธ |
transformers | 24,685 | open | How to get the last 4 Hidden states from the feature extraction pipeline | I have defined a pipeline for Feature extraction
```
# Create the pipeline
p = pipeline(
task="feature-extraction",
tokenizer="microsoft/biogpt",
model="microsoft/biogpt",
framework="pt",
device=0
)
bio_gpt = AutoModel.from_pretrained("microsoft/biogpt", output_hidden_states= True)
bio_gpt = bio_gpt.to(device)
```
and I want to extract the embeddings of the last token of the last hidden state, and the Average Pooling of the last 4 layers using the pipeline approach I am doing it like this
_Last token of the last hidden state:_
```
def extract_last_token(last_hidden_states):
last_hidden_states = np.array(last_hidden_states)
return last_hidden_states[:,-1,:]
# Process the data using the pipeline
results = p([row["text"] for _, row in df2.iterrows()])
# Extract the last token of the last hidden state
embeddings = [extract_last_token(hidden_state) for hidden_state in results]
# Create a DataFrame to store the results
df2["embeddings2"] = embeddings
```
_Average pooling of the last 4 layers:_
```
def mean_pooling(last_hidden_states, ):
last_4_layers = last_hidden_states[-4:] # Consider the last 4 layers
return np.mean(last_4_layers, axis=1)
# Process the data using the pipeline
results = p([row["text"] for _, row in df2.iterrows()])
features = np.squeeze(results)
print(features.shape)
# Perform mean pooling on the last hidden states
embeddings = [mean_pooling(hidden_state) for hidden_state in results]
# Create a DataFrame to store the results
df2["embeddings4"] = embeddings
```
The issues are:
1. When I extract the embeddings of the 4 last layers or the 12 last layers the embeddings are always the same

2. The embeddings of the last token of the last hidden state are different from the same embeddings using the "manual" method

Weardly in the above picture the 2 of the embeddings are the same but opposite row ids, this indicates another problem I don't see it if you can spot this I appreciate it.
Here is the code of how I did the manual version
```
output = bio_gpt(**model_inputs)
# Get the last state
last_state = output.last_hidden_state
cls_embeddings = last_state[:, -1, :]
# Print the last state
print(cls_embeddings)
# Assign cls_embeddings to "embeddings4" column in df2
df2["embeddings_manual"] = [cls_embeddings[i].cpu().detach().numpy() for i in range(len(df2))]
``` | 07-06-2023 08:45:08 | 07-06-2023 08:45:08 | Hi, could you also provide the data `df2` (or another version of it if privacy is concerned).
Thanks.<|||||>> Hi, could you also provide the data `df2` (or another version of it if privacy is concerned).
>
> Thanks.
Sure, its just a text and the label

<|||||>@Luke-4
Not as an image please. Make it something that can be used to run the code snippet directly ๐
<|||||>> @Luke-4
>
> Not as an image please. Make it something that can be used to run the code snippet directly ๐
here hope this works:
https://drive.google.com/drive/folders/186rEP0ZMYc3tjR_EKBYNhYEx9sUtmSTj?usp=sharing
I haven't looked at this in full detail. However, your input to `mean_pooling` (i.e. each element of the output from `p`) seems to have shape `[batch_dim, seq_len, hidden_dim]`. The `batch_dim` here is just `1`.
When you do `last_hidden_states[-4:]` inside `mean_pooling`, it is actually the same element as `last_hidden_states`, as you are taking the last 4 elements along the batch dimension (and not along the different layers!). When you do `np.mean(..., axis=1)`, it is actually the mean along the sequence dimension, and you get a shape of `[batch_dim=1, hidden_dim]`.
This doesn't correspond to what you describe, i.e. taking the mean over the last 4 layers.
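If the goal really is the mean of the last four layers, one way to get it from the model directly (mirroring the "manual" approach above; a sketch, not the pipeline) is:
```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/biogpt")
model = AutoModel.from_pretrained("microsoft/biogpt", output_hidden_states=True)

enc = tokenizer("some example text", return_tensors="pt")
with torch.no_grad():
    out = model(**enc)

stacked = torch.stack(out.hidden_states[-4:])     # (4, batch, seq_len, hidden_dim)
pooled = stacked.mean(dim=0)[:, -1, :]            # mean of the last 4 layers at the last token
```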
However, I am not sure if the `feature extraction pipeline` allow to get all hidden states (from all layers) - probably yes.
Could you verify first, please?
```python
a = p(["I love dog", "I love cat too", "I love cat that meow meow a lot"])
import numpy as np
print(len(a))
print(len(a[0]))
print(len(a[1]))
print(len(a[2]))
print(len(a[0][0]))
print(len(a[1][0]))
print(len(a[2][0]))
print(np.array(a[0][0]).shape)
print(np.array(a[1][0]).shape)
print(np.array(a[2][0]).shape)
```
gives
```bash
3
1
1
1
5
5
6
(5, 1024)
(5, 1024)
(12, 1024)
``` |
transformers | 24,684 | closed | [`T5`] Adding model_parallel = False to `T5ForQuestionAnswering` and `MT5ForQuestionAnswering` | # What does this PR do?
<!-- Remove if not applicable -->
Fixes https://github.com/huggingface/transformers/issues/24682 by adding `self.model_parallel = False`
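In practice the change boils down to a one-line flag in the model's `__init__` (sketched below, not the full diff):
```python
# Inside T5ForQuestionAnswering.__init__ (and the MT5 counterpart):
# the Trainer checks `model.is_parallelizable and model.model_parallel`,
# so the attribute has to exist even though model parallelism is deprecated.
self.model_parallel = False
```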
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@ArthurZucker and @younesbelkada
| 07-06-2023 07:56:34 | 07-06-2023 07:56:34 | Is there a good way to add a test for this? I wasn't sure where a test like this would be added. <|||||>> Is there a good way to add a test for this? I wasn't sure where a test like this would be added.
No need to add a test for this. We have `test_model_parallelization` which tests model parallelization (the opposite).
As we are dealing with some deprecated thing, it doesn't worth much time on it.
<|||||>Thanks! Would be nice if you can check this change works (i.e. fix the issue you opened in #24682) ๐ <|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>> Thanks! Would be nice if you can check this change works
Definitely! I just ran the code locally and it works. <|||||>Yes. It seems a bit strange indeed. So far all `is_parallelizable` is set at the `XXXPreTrainedModel` level.
I think it's fine as the only usage of `is_parallelizable` is here
```python
if hasattr(model, "is_parallelizable") and model.is_parallelizable and model.model_parallel:
self.is_model_parallel = True
else:
self.is_model_parallel = False
```
There shouldn't be too much confusion between `is_parallelizable = True` and always having `model_parallel = False` for just this single (and new) model class. |
transformers | 24,683 | closed | Model checkpoint twice as large when saved with safetensors | ### System Info
- `transformers` version: 4.30.2
- Platform: Linux-5.15.0-1037-gcp-x86_64-with-glibc2.31
- Python version: 3.9.16
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("facebook/opt-2.7b")
model.save_pretrained("opt_safetensor", safe_serialization=True)
```
The original pytorch_model.bin is 5.3GB and the new one is sharded:
model-00001-of-00002.safetensors 9.3GB
model-00002-of-00002.safetensors 601MB
### Expected behavior
Is that the expected behaviour? I would expect the weights to have roughly the same size when saved using safetensors. | 07-06-2023 07:50:51 | 07-06-2023 07:50:51 | Hi @lenbrocki
Could you provide a self-contained code snippet in the `Reproduction` section, please? Thank you.<|||||>I have updated the Reproduction section<|||||>Thanks @lenbrocki !
cc @Narsil <|||||>This model is saved in `float16`. `from_pretrained` will by default load it in `float32`.
`from_pretrained(..., torch_dtype=torch.float16)`
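i.e. a minimal end-to-end sketch of the fix, reusing the snippet from the issue:
```python
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("facebook/opt-2.7b", torch_dtype=torch.float16)
model.save_pretrained("opt_safetensor", safe_serialization=True)
# the saved safetensors weights now stay in float16, ~5.3GB in total instead of doubling
```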
Should fix it.<|||||>Yes, that fixed it. Thanks! |
transformers | 24,682 | closed | Unable to use Trainer with T5ForQuestionAnswering | ### System Info
- `transformers` version: 4.31.0.dev0
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.9.17
- Huggingface_hub version: 0.16.2
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
Tagging @sgugger since this is related to the Trainer and @ArthurZucker and @younesbelkada since it is also related to the text model T5ForQuestionAnswering.
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I first ran into this error when running the `examples/pytorch/question-answering/run_qa.py` script, but I am able to reproduce with the following minimal example:
```python
from transformers import AutoModelForQuestionAnswering, Trainer
model = AutoModelForQuestionAnswering.from_pretrained("sjrhuschlee/flan-t5-base-squad2")
trainer = Trainer(model=model)
```
This produces the error
```python
Traceback (most recent call last):
File "/Users/sebastianlee/miniconda3/envs/sjrl_transformers/lib/python3.9/site-packages/IPython/core/interactiveshell.py", line 3508, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-2-84527b5cd844>", line 3, in <module>
trainer = Trainer(model=model)
File "/Users/sebastianlee/Documents/code/sjrl_transformers/src/transformers/trainer.py", line 373, in __init__
if hasattr(model, "is_parallelizable") and model.is_parallelizable and model.model_parallel:
File "/Users/sebastianlee/miniconda3/envs/sjrl_transformers/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1614, in __getattr__
raise AttributeError("'{}' object has no attribute '{}'".format(
AttributeError: 'T5ForQuestionAnswering' object has no attribute 'model_parallel'
```
I believe this could be fixed by adding
```python
self.model_parallel = False
```
to the init method of T5ForQuestionAnswering. However, this model does not support parallelization so I wonder if it would be better to somehow update the Trainer or possibly remove the `is_parallelizable` attribute from T5ForQuestionAnswering.
### Expected behavior
For the Trainer to work with T5ForQuestionAnswering. | 07-06-2023 07:21:58 | 07-06-2023 07:21:58 | Hi @sjrl
Could you add `model.model_parallel = False` before the line `trainer = Trainer(model=model)`?
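i.e. a quick workaround sketch, reusing the snippet from the issue:
```python
from transformers import AutoModelForQuestionAnswering, Trainer

model = AutoModelForQuestionAnswering.from_pretrained("sjrhuschlee/flan-t5-base-squad2")
model.model_parallel = False  # attribute expected by Trainer's parallelism check
trainer = Trainer(model=model)
```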
`T5ForQuestionAnswering` is recently added, and it doesn't have `parallelize` or `deparallelize` as other T5 model classes.
Or you can follow the suggestion below to use `device_map`.
```
"`T5ForConditionalGeneration.parallelize` is deprecated and will be removed in v5 of Transformers, you"
" should load your model with `device_map='balanced'` in the call to `from_pretrained`. You can also"
" provide your own `device_map` but it needs to be a dictionary module_name to device, so for instance"
" {'encoder.block.0': 0, 'encoder.block.1': 1, ...}",
```<|||||>Hey @ydshieh, thanks for the feedback!
> T5ForQuestionAnswering is recently added, and it doesn't have parallelize or deparallelize as other T5 model classes.
Haha yes, maybe I should have specified that I was the one to recently add it. I opted to not add the `parallelize` or `deparallelize` classes since they would eventually be deprecated.
And my apologies, I should have been more clear with my error. I'm not trying to use the `parallelize` functionality at all, but since `T5ForQuestionAnswering` inherits from `T5PreTrainedModel` it automatically inherits the `is_parallelizable = True` class variable, which is causing the error to be thrown.
> Could you add model.model_parallel = False before the line trainer = Trainer(model=model)?
Definitely! I'll try this and get back to you. <|||||>Yep, we should probably add `is_parallelizable = False` in the class<|||||>> Yep, we should probably add is_parallelizable = False in the class
I can go ahead and open a PR for this. |
transformers | 24,681 | closed | LlamaTokenizer should be picklable | # What does this PR do?
Fixes `LlamaTokenizer` not being picklable, which causes an `OSError` when tokenizing with a Spark UDF.
Reference: #13577
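A quick way to exercise the fix (a sketch; the checkpoint name is only an example of a repo that ships a `LlamaTokenizer`):
```python
import pickle
from transformers import LlamaTokenizer

tokenizer = LlamaTokenizer.from_pretrained("huggyllama/llama-7b")
restored = pickle.loads(pickle.dumps(tokenizer))  # round-trips after the fix
print(restored("hello world")["input_ids"])
```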
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [X] Did you write any new necessary tests?
## Who can review?
@ArthurZucker
| 07-06-2023 06:32:17 | 07-06-2023 06:32:17 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 24,680 | closed | Add dropouts to GPT-NeoX | # What does this PR do?
The current GPT-NeoX modeling code does not contain dropouts as in [the original EleutherAI/gpt-neox code](https://github.com/EleutherAI/gpt-neox/blob/main/megatron/model), possibly because GPT-NeoX-20B (the first model the HF gpt-neox implementation was applied to) has all dropouts disabled.
However, EleutherAI/gpt-neox does provide dropouts in several places:
* post-word-embedding dropout, [reference](https://github.com/EleutherAI/gpt-neox/blob/2534e3d76e320aba095894e7dc2a4b416a1ac8df/megatron/model/word_embeddings.py#L156),
* attention score dropout, [reference](https://github.com/EleutherAI/gpt-neox/blob/2534e3d76e320aba095894e7dc2a4b416a1ac8df/megatron/model/transformer.py#L453C1-L453C1),
* post-attention dropout, [reference1](https://github.com/EleutherAI/gpt-neox/blob/2534e3d76e320aba095894e7dc2a4b416a1ac8df/megatron/model/transformer.py#L829-L834), [reference2](https://github.com/EleutherAI/gpt-neox/blob/2534e3d76e320aba095894e7dc2a4b416a1ac8df/megatron/model/transformer.py#L865-L870), [reference3](https://github.com/EleutherAI/gpt-neox/blob/2534e3d76e320aba095894e7dc2a4b416a1ac8df/megatron/model/transformer.py#L873-L880),
* post-mlp dropout, [reference1](https://github.com/EleutherAI/gpt-neox/blob/2534e3d76e320aba095894e7dc2a4b416a1ac8df/megatron/model/transformer.py#L839-L844), [reference2](https://github.com/EleutherAI/gpt-neox/blob/2534e3d76e320aba095894e7dc2a4b416a1ac8df/megatron/model/transformer.py#L893-L898).
These dropouts can be turned on and help produce better fine-tuning performance.
This PR adds corresponding dropouts to the HF gpt_neox implementation. Following the original EleutherAI code, dropout probabilities are controlled by two config arguments:
* attention_dropout, which controls the probability of the attention score dropout, [reference](https://github.com/EleutherAI/gpt-neox/blob/2534e3d76e320aba095894e7dc2a4b416a1ac8df/megatron/neox_arguments/neox_args.py#L911),
* hidden_dropout, which controls the probability of remaining dropouts, [reference](https://github.com/EleutherAI/gpt-neox/blob/2534e3d76e320aba095894e7dc2a4b416a1ac8df/megatron/neox_arguments/neox_args.py#L916).
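For illustration, once merged the dropouts could presumably be enabled through the config, e.g. (a sketch; the checkpoint is just an example of a gpt_neox model, and the argument names follow the config fields described above):
```python
from transformers import GPTNeoXConfig, GPTNeoXForCausalLM

config = GPTNeoXConfig.from_pretrained("EleutherAI/pythia-410m")
config.attention_dropout = 0.1  # attention-score dropout
config.hidden_dropout = 0.1     # post-embedding / post-attention / post-MLP dropout
model = GPTNeoXForCausalLM.from_pretrained("EleutherAI/pythia-410m", config=config)
```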
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Sorry that I am not sure whom to tag, so I am following the suggestion to tag text model people @ArthurZucker and @younesbelkada.
| 07-06-2023 06:26:17 | 07-06-2023 06:26:17 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Glad to make my first PR to transformers! |
transformers | 24,679 | closed | Custom vision encoder-decoder problem | ### Model description
I'm trying to make a custom vision encoder-decoder model.
I want to use a pre-trained encoder but train the decoder from scratch, so I cannot use `VisionEncoderDecoderModel.from_pretrained()`.
Specifically, I want to use a pre-trained `deit` model as the encoder and a custom-trained `Electra` as the decoder.
I wrote the code below. In the training step, there is no problem.
But I get an error which says the model has no attribute 'generate'. How can I implement or import the `generate` function?
```
import torch.nn as nn

from transformers import AutoModel, AutoModelForCausalLM, VisionEncoderDecoderConfig
# `shift_tokens_right` is assumed to come from an existing implementation, e.g.:
from transformers.models.bart.modeling_bart import shift_tokens_right


class CustomEncoderDecoderModel(nn.Module):
config_class = VisionEncoderDecoderConfig
def __init__(self, encoder_name, decoder_config,
config=None):
super(CustomEncoderDecoderModel, self).__init__()
self.encoder = AutoModel.from_pretrained(encoder_name)
self.decoder_config = decoder_config
self.decoder = AutoModelForCausalLM.from_config(self.decoder_config)
self.config = VisionEncoderDecoderConfig.from_encoder_decoder_configs(self.encoder.config, self.decoder.config)
self.criterion = nn.CrossEntropyLoss()
self.enc_to_dec_proj = nn.Linear(self.encoder.config.hidden_size, self.decoder.config.hidden_size)
def forward(self, pixel_values, labels, decoder_input_ids=None,
decoder_input_embeds=None,
decoder_attention_mask=None,
decoder_inputs_embeds=None,
past_key_values=None):
encoder_outputs = self.encoder(pixel_values,
output_attentions=True)
encoder_hidden_states = encoder_outputs[0]
encoder_attention_mask = None
if decoder_input_ids is None and decoder_input_embeds is None:
decoder_input_ids = shift_tokens_right(
labels, self.decoder.config.pad_token_id, decoder_start_token_id=2
)
if self.encoder.config.hidden_size != self.decoder.config.hidden_size:
encoder_hidden_states = self.enc_to_dec_proj(encoder_hidden_states)
decoder_outputs = self.decoder(
input_ids = decoder_input_ids,
attention_mask = decoder_attention_mask,
encoder_hidden_states=encoder_hidden_states,
encoder_attention_mask=encoder_attention_mask,
inputs_embeds=decoder_inputs_embeds,
output_attentions=True,
use_cache=True,
past_key_values=past_key_values,
)
logits = decoder_outputs[0]
loss = self.criterion(logits.reshape(-1, self.decoder.config.vocab_size), labels.reshape(-1))
return {'loss': loss, 'logits': logits,
'past_key_values': decoder_outputs.past_key_values,
'decoder_hidden_states': decoder_outputs.hidden_states,
'decoder_attentions': decoder_outputs.attentions,
'cross_attentions': decoder_outputs.cross_attentions,
'encoder_hidden_state': encoder_outputs.hidden_states,
'encoder_attentions': encoder_attention_mask,
'encoder_attentions': encoder_outputs.attentions,
}
```
### Open source status
- [ ] The model implementation is available
- [ ] The model weights are available
### Provide useful links for the implementation
_No response_ | 07-06-2023 05:58:08 | 07-06-2023 05:58:08 | Hi @kyle-bong
The `transformers` GitHub pages are reserved for issues or feature requests. The question here is out of scope, and the [Hugging Face Forums](https://discuss.huggingface.co/) are a better place for it.
--------------------------------------------------------------------------------------------
However: the decoder models in `transformers` inherit from `PreTrainedModel`, which is itself a subclass of `GenerationMixin`, and that's where `generate` is defined.
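One option that keeps `generate` available without writing a custom class is to build a `VisionEncoderDecoderModel` directly from an already-instantiated encoder and a from-scratch decoder; a rough, untested sketch (checkpoint and config values are only examples):
```python
from transformers import (
    AutoModel,
    AutoModelForCausalLM,
    ElectraConfig,
    VisionEncoderDecoderModel,
)

encoder = AutoModel.from_pretrained("facebook/deit-base-distilled-patch16-224")
decoder = AutoModelForCausalLM.from_config(
    ElectraConfig(is_decoder=True, add_cross_attention=True)
)
model = VisionEncoderDecoderModel(encoder=encoder, decoder=decoder)
# the constructor builds the joint config and adds the encoder-to-decoder
# projection when the hidden sizes differ; `generate` is inherited as usual
```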
Alternatively, you can probably do `class CustomEncoderDecoderModel(PreTrainedModel):` to keep your own class, but there may be something more needed to make it work. |
transformers | 24,678 | closed | [`MT5`] Fix CONFIG_MAPPING issue leading it to load umt5 class | # What does this PR do?
Addresses #24662 and one of our CI tests.
The issue stems from the `CONFIG_MAPPING`'s values being used as keys to index into the auto mappings.
There were two ways to fix this, either change our logic or just add a config.
For simplicity added a config. | 07-06-2023 01:18:24 | 07-06-2023 01:18:24 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Can confirm that:
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
model = AutoModelForSeq2SeqLM.from_pretrained('google/mt5-small')
print(type(model))
<class 'transformers.models.mt5.modeling_mt5.MT5ForConditionalGeneration'>
```
is back to normal. |
transformers | 24,677 | closed | Gradient clipping is no longer recommended? | ### System Info
Hi,
I just found that in the current examples (e.g., https://github.com/huggingface/transformers/blob/main/examples/pytorch/language-modeling/run_clm_no_trainer.py), gradient clipping is no longer applied. Is there any particular reason? Is it okay if we add a line to do gradient clipping myself?
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
N/A
### Expected behavior
N/A | 07-05-2023 23:18:52 | 07-05-2023 23:18:52 | You can definitely experiment with gradient clipping.
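For reference, it is typically a one-liner between the backward pass and the optimizer step; a minimal sketch in plain PyTorch (in an `accelerate`-based loop like `run_clm_no_trainer.py`, the equivalent call is typically `accelerator.clip_grad_norm_(...)`):
```python
import torch

model = torch.nn.Linear(4, 2)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)

loss = model(torch.randn(8, 4)).pow(2).mean()
loss.backward()
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)  # gradient clipping
optimizer.step()
optimizer.zero_grad()
```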
The `transformers` GitHub pages are reserved for issues or feature requests. The question here is out of scope, and the [Hugging Face Forums](https://discuss.huggingface.co/) are a better place if you have further questions on this topic. |
transformers | 24,676 | closed | TrainingArguments not working in transformers v 4.30 | ### System Info
Hi @sgugger
I was trying to implement the same code that is present in the tutorial "https://github.com/huggingface/notebooks/blob/main/examples/language_modeling.ipynb", but when executing the `TrainingArguments` function I get the error "ImportError: Using the `Trainer` with `PyTorch` requires `accelerate>=0.20.1`: Please run `pip install transformers[torch]` or `pip install accelerate -U`". Even after installing what was suggested, I am still facing the same problem. My previous code, which worked well a month ago, also fails at exactly the **TrainingArguments** call.
Attaching the image below
<img width="1346" alt="image" src="https://github.com/huggingface/transformers/assets/96924488/01819fec-1bae-4d45-bea7-fa0ab60d63db">
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
training_args = TrainingArguments(
f"{model_checkpoint}-wikitext2",
evaluation_strategy = "epoch",
learning_rate=2e-5,
weight_decay=0.01,
push_to_hub=True
)
From the causal language modeling task.
### Expected behavior
TrainingArguments should be working with no error. | 07-05-2023 23:02:27 | 07-05-2023 23:02:27 | From the discussion forum "https://discuss.huggingface.co/t/trainingargument-does-not-work-on-colab/43372" got the solution to use Transformers version 4.17 to make TrainingArguments work. Wanted to know why TrainingArguments not working in version 4.30?<|||||>After you `pip install accelerate -U`, did you restart the notebook? (It seems you are using colab notebook?)<|||||>Hi @ydshieh yes thanks that worked, yesterday I tried the same but not sure what changed but today it worked. Thanks for the comment. Yes, it is in colab notebook. |
transformers | 24,675 | closed | Bump grpcio from 1.44.0 to 1.53.0 in /examples/research_projects/decision_transformer | Bumps [grpcio](https://github.com/grpc/grpc) from 1.44.0 to 1.53.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/grpc/grpc/releases">grpcio's releases</a>.</em></p>
<blockquote>
<h2>Release v1.53.0</h2>
<p>This is release 1.53.0 (<a href="https://github.com/grpc/grpc/blob/master/doc/g_stands_for.md">glockenspiel</a>) of gRPC Core.</p>
<p>For gRPC documentation, see <a href="https://grpc.io/">grpc.io</a>. For previous releases, see <a href="https://github.com/grpc/grpc/releases">Releases</a>.</p>
<p>This release contains refinements, improvements, and bug fixes, with highlights listed below.</p>
<h2>Core</h2>
<ul>
<li>xDS: fix crash when removing the last endpoint from the last locality in weighted_target. (<a href="https://redirect.github.com/grpc/grpc/pull/32592">#32592</a>)</li>
<li>filter stack: pass peer name up via recv_initial_metadata batch. (<a href="https://redirect.github.com/grpc/grpc/pull/31933">#31933</a>)</li>
<li>[EventEngine] Add advice against blocking work in callbacks. (<a href="https://redirect.github.com/grpc/grpc/pull/32397">#32397</a>)</li>
<li>[http2] Dont drop connections on metadata limit exceeded. (<a href="https://redirect.github.com/grpc/grpc/pull/32309">#32309</a>)</li>
<li>xDS: reject aggregate cluster with empty cluster list. (<a href="https://redirect.github.com/grpc/grpc/pull/32238">#32238</a>)</li>
<li>Fix Python epoll1 Fork Support. (<a href="https://redirect.github.com/grpc/grpc/pull/32196">#32196</a>)</li>
<li>server: introduce ServerMetricRecorder API and move per-call reporting from a C++ interceptor to a C-core filter. (<a href="https://redirect.github.com/grpc/grpc/pull/32106">#32106</a>)</li>
<li>[EventEngine] Add invalid handle types to the public API. (<a href="https://redirect.github.com/grpc/grpc/pull/32202">#32202</a>)</li>
<li>[EventEngine] Refactoring the EventEngine Test Suite: Part 1. (<a href="https://redirect.github.com/grpc/grpc/pull/32127">#32127</a>)</li>
<li>xDS: fix WeightedClusters total weight handling. (<a href="https://redirect.github.com/grpc/grpc/pull/32134">#32134</a>)</li>
</ul>
<h2>C++</h2>
<ul>
<li>Update minimum MSVC version to 2019. (<a href="https://redirect.github.com/grpc/grpc/pull/32615">#32615</a>)</li>
<li>Use CMake variables for paths in pkg-config files. (<a href="https://redirect.github.com/grpc/grpc/pull/31671">#31671</a>)</li>
</ul>
<h2>C#</h2>
<ul>
<li>Grpc.Tools: Use x86 protoc binaries on arm64 Windows. (<a href="https://redirect.github.com/grpc/grpc/pull/32017">#32017</a>)</li>
</ul>
<h2>Python</h2>
<ul>
<li>Support python 3.11 on aarch64. (<a href="https://redirect.github.com/grpc/grpc/pull/32270">#32270</a>)</li>
<li>Include .pyi file. (<a href="https://redirect.github.com/grpc/grpc/pull/32268">#32268</a>)</li>
<li>De-experimentalize wait-for-ready. (<a href="https://redirect.github.com/grpc/grpc/pull/32143">#32143</a>)</li>
<li>De-experimentalize compression. (<a href="https://redirect.github.com/grpc/grpc/pull/32138">#32138</a>)</li>
</ul>
<h2>Ruby</h2>
<ul>
<li>[ruby]: add pre-compiled binaries for ruby 3.2; drop them for ruby 2.6. (<a href="https://redirect.github.com/grpc/grpc/pull/32089">#32089</a>)</li>
</ul>
<h2>Release v1.53.0-pre2</h2>
<p>This is a prerelease of gRPC Core 1.53.0 (glockenspiel).</p>
<p>For gRPC documentation, see <a href="https://grpc.io/">grpc.io</a>. For previous releases, see <a href="https://github.com/grpc/grpc/releases">Releases</a>.</p>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a href="https://github.com/grpc/grpc/blob/master/doc/grpc_release_schedule.md">grpcio's changelog</a>.</em></p>
<blockquote>
<h1>gRPC Release Schedule</h1>
<p>Below is the release schedule for gRPC <a href="https://github.com/grpc/grpc-java/releases">Java</a>, <a href="https://github.com/grpc/grpc-go/releases">Go</a> and <a href="https://github.com/grpc/grpc/releases">Core</a> and its dependent languages C++, C#, Objective-C, PHP, Python and Ruby.</p>
<p>Releases are scheduled every six weeks on Tuesdays on a best effort basis. In some unavoidable situations a release may be delayed or released early or a language may skip a release altogether and do the next release to catch up with other languages. See the past releases in the links above. A six-week cycle gives us a good balance between delivering new features/fixes quickly and keeping the release overhead low.</p>
<p>The gRPC release support policy can be found <a href="https://grpc.io/docs/what-is-grpc/faq/#how-long-are-grpc-releases-supported-for">here</a>.</p>
<p>Releases are cut from release branches. For Core and Java repos, the release branch is cut two weeks before the scheduled release date. For Go, the branch is cut just before the release. An RC (release candidate) is published for Core and its dependent languages just after the branch cut. This RC is later promoted to release version if no further changes are made to the release branch. We do our best to keep head of master branch stable at all times regardless of release schedule. Daily build packages from master branch for C#, PHP, Python, Ruby and Protoc plugins are published on <a href="https://packages.grpc.io/">packages.grpc.io</a>. If you depend on gRPC in production we recommend to set up your CI system to test the RCs and, if possible, the daily builds.</p>
<p>Names of gRPC releases are <a href="https://github.com/grpc/grpc/blob/master/doc/g_stands_for.md">here</a>.</p>
| Release | Scheduled Branch Cut | Scheduled Release Date |
|---------|----------------------|------------------------|
| v1.17.0 | Nov 19, 2018 | Dec 4, 2018 |
| v1.18.0 | Jan 2, 2019 | Jan 15, 2019 |
| v1.19.0 | Feb 12, 2019 | Feb 26, 2019 |
| v1.20.0 | Mar 26, 2019 | Apr 9, 2019 |
| v1.21.0 | May 7, 2019 | May 21, 2019 |
| v1.22.0 | Jun 18, 2019 | Jul 2, 2019 |
| v1.23.0 | Jul 30, 2019 | Aug 13, 2019 |
| v1.24.0 | Sept 10, 2019 | Sept 24, 2019 |
| v1.25.0 | Oct 22, 2019 | Nov 5, 2019 |
| v1.26.0 | Dec 3, 2019 | Dec 17, 2019 |
| v1.27.0 | Jan 14, 2020 | Jan 28, 2020 |
| v1.28.0 | Feb 25, 2020 | Mar 10, 2020 |
| v1.29.0 | Apr 7, 2020 | Apr 21, 2020 |
| v1.30.0 | May 19, 2020 | Jun 2, 2020 |
| v1.31.0 | Jul 14, 2020 | Jul 28, 2020 |
| v1.32.0 | Aug 25, 2020 | Sep 8, 2020 |
| v1.33.0 | Oct 6, 2020 | Oct 20, 2020 |
| v1.34.0 | Nov 17, 2020 | Dec 1, 2020 |
| v1.35.0 | Dec 29, 2020 | Jan 12, 2021 |
| v1.36.0 | Feb 9, 2021 | Feb 23, 2021 |
| v1.37.0 | Mar 23, 2021 | Apr 6, 2021 |
| v1.38.0 | May 4, 2021 | May 18, 2021 |
| v1.39.0 | Jun 15, 2021 | Jun 29, 2021 |
| v1.40.0 | Jul 27, 2021 | Aug 10, 2021 |
| v1.41.0 | Sep 7, 2021 | Sep 21, 2021 |
| v1.42.0 | Oct 19, 2021 | Nov 2, 2021 |
| v1.43.0 | Nov 30, 2021 | Dec 14, 2021 |
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/grpc/grpc/commit/358bfb581feeda5bf17dd3b96da1074d84a6ef8d"><code>358bfb5</code></a> Bump version to 1.53.0 (<a href="https://redirect.github.com/grpc/grpc/issues/32685">#32685</a>)</li>
<li><a href="https://github.com/grpc/grpc/commit/6e1ebe76d87a2e9b643c08b3e234d374edcd9e92"><code>6e1ebe7</code></a> Backport: Ensure compatibility with the new custom kokoro win2019 image (<a href="https://redirect.github.com/grpc/grpc/issues/326">#326</a>...</li>
<li><a href="https://github.com/grpc/grpc/commit/44a77f6e911b95e1bc2c909b348123b2da2c4375"><code>44a77f6</code></a> Backport 1.53: Update minimum MSVC version to 2019 (<a href="https://redirect.github.com/grpc/grpc/issues/32615">#32615</a>)</li>
<li><a href="https://github.com/grpc/grpc/commit/c11153cb4ef01ca5f83304b2e28edd0182b3c0d0"><code>c11153c</code></a> backport to 1.53: xDS: fix crash when removing the last endpoint from the las...</li>
<li><a href="https://github.com/grpc/grpc/commit/7c7712a6b08ebf1bdc18fc43dc871b47b3dffe97"><code>7c7712a</code></a> Bump version to 1.53.0-pre2. (<a href="https://redirect.github.com/grpc/grpc/issues/32545">#32545</a>)</li>
<li><a href="https://github.com/grpc/grpc/commit/a4017dc45e342064722a36181ed14e6d7b469d29"><code>a4017dc</code></a> backport to 1.53: [promises] Make Poll<T> its own type, not a variant<> (<a href="https://redirect.github.com/grpc/grpc/issues/32540">#32540</a>)</li>
<li><a href="https://github.com/grpc/grpc/commit/3f93c1667280e6f11a1eb35cccfb8c81c698bee5"><code>3f93c16</code></a> Fuzzer fix backport to v1.53 (<a href="https://redirect.github.com/grpc/grpc/issues/32511">#32511</a>)</li>
<li><a href="https://github.com/grpc/grpc/commit/5b244b25c2b87a85781ceeecd34ce0f8e8e7e840"><code>5b244b2</code></a> Bump release version to 1.53.0-pre1 (<a href="https://redirect.github.com/grpc/grpc/issues/32428">#32428</a>)</li>
<li><a href="https://github.com/grpc/grpc/commit/6589340efc39b87c94897d221eaf949213cdac87"><code>6589340</code></a> Bump core version 202302161703 (<a href="https://redirect.github.com/grpc/grpc/issues/32416">#32416</a>)</li>
<li><a href="https://github.com/grpc/grpc/commit/d49e1513063e6624e08eb6f59049596178a28783"><code>d49e151</code></a> [backoff] Add random early detection classifier (<a href="https://redirect.github.com/grpc/grpc/issues/32354">#32354</a>)</li>
<li>Additional commits viewable in <a href="https://github.com/grpc/grpc/compare/v1.44.0...v1.53.0">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts).
</details> | 07-05-2023 21:23:32 | 07-05-2023 21:23:32 | _The documentation is not available anymore as the PR was closed or merged._<|||||>OK, I won't notify you again about this release, but will get in touch when a new version is available. If you'd rather skip all updates until the next major or minor version, let me know by commenting `@dependabot ignore this major version` or `@dependabot ignore this minor version`.
If you change your mind, just re-open this PR and I'll resolve any conflicts on it. |
transformers | 24,674 | closed | Fix non-deterministic Megatron-LM checkpoint name | # What does this PR do?
`os.listdir`'s order is not deterministic, which is a problem when querying the first listed file as in the code (`os.listdir(...)[0]`).
This can return a checkpoint name such as `distrib_optim.pt`, which does not include desired information such as the saved arguments originally given to Megatron-LM.
Instead, we try out different file names used by Megatron-LM (`model_rng.pt` was mentioned in other parts of the script; I'm assuming this is for backward-compatibility).
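A rough sketch of the approach (a hypothetical helper; the probed names follow the file names used by Megatron-LM, e.g. `model_optim_rng.pt` and the older `model_rng.pt`):
```python
import os


def get_megatron_checkpoint_path(checkpoint_dir):
    # `os.listdir` order is arbitrary, so probe known Megatron-LM file names instead
    for name in ("model_optim_rng.pt", "model_rng.pt"):
        path = os.path.join(checkpoint_dir, name)
        if os.path.isfile(path):
            return path
    raise FileNotFoundError(f"No Megatron-LM checkpoint found in {checkpoint_dir}")
```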
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@pacman100 wrote most of the code in there and made a Twitter post about this functionality, hope you're the right person to tag. :) | 07-05-2023 21:21:31 | 07-05-2023 21:21:31 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks for the review! Actually that's only the case if `--use-distributed-optimizer` is not given! Otherwise an extra file called `distrib_optim.pt` is created on the most recent Megatron-LM commit. :)<|||||>>Otherwise an extra file called distrib_optim.pt is created on the most recent Megatron-LM commit. :)
Cool, thank you for the info! |
transformers | 24,673 | closed | Language Modeling on Already Tokenized Data | ### System Info
When I try to execute [`run_clm.py`](https://github.com/huggingface/transformers/blob/main/examples/pytorch/language-modeling/run_clm.py) from the language modeling examples, I am naturally asked to specify the `tokenizer_name`.
Yet, my data is already tokenized, i.e. my train and validation look (very crudely) like
```
0 3111 5100 2100 3100 6000
1000 4067 3031 3068 5141 3073
1000 3067 6031 3068 5141 3076
```
Thus, sequences on separate lines.
My question is: is there some kind of workaround such that I can train a model on this already tokenized data?
Any help would be great!
### Who can help?
@ArthurZucker
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Inside `transformers/examples/pytorch/language-modeling` create the folder `output` and the file `train.txt` (with some numbers in it, see above).
```
python run_clm.py --model_type gpt2 --output_dir output --do_train --train_file train.txt
```
It returns
```
ValueError: You are instantiating a new tokenizer from scratch. This is not
supported by this script.You can do it from another script, save it, and load
it from here, using --tokenizer_name.
```
### Expected behavior
I would expect/prefer that the output would be a warning specifying that a "passthrough" tokenizer will be used. | 07-05-2023 19:29:33 | 07-05-2023 19:29:33 | The example scripts serve as examples ๐ค . If you need some custom modification(s), go for it.
In your case, you can probably skip the tokenization block linked at the end of this comment (and other similar places). You will need to assign your own tokenized dataset(s) to a variable like `tokenized_datasets`, however.
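For example, a rough sketch of building `tokenized_datasets` straight from the whitespace-separated id file, so the tokenizer step can be dropped:
```python
from datasets import load_dataset

raw = load_dataset("text", data_files={"train": "train.txt"})
tokenized_datasets = raw.map(
    lambda ex: {"input_ids": [int(tok) for tok in ex["text"].split()]},
    remove_columns=["text"],
)
```
(The tokenization block being referred to is linked right below.)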
https://github.com/huggingface/transformers/blob/9a5d468ba0562e2d5edf9da787881fa227132bca/examples/pytorch/language-modeling/run_clm.py#L456-L470 |
transformers | 24,672 | closed | Remove WWT from README | Removes the line that presents Write With Transformer as the official demo for text generation as this hasn't been the case for a while. | 07-05-2023 18:03:28 | 07-05-2023 18:03:28 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 24,671 | open | Is there any plan to add kosmos-2 to the transformers. | ### Model description
Kosmos-2 is a grounded multimodal large language model, which integrates grounding and referring capabilities compared with Kosmos-1. The model can accept image regions selected by the user using bounding boxes as input, provide visual answers (i.e., bounding boxes), and ground the text output to the visual world.
**Is there any plan to add this model to Transformers?**
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
Code: https://github.com/microsoft/unilm/tree/master/kosmos-2
Paper: https://arxiv.org/abs/2306.14824
Weight: the checkpoint can be downloaded from [here](https://conversationhub.blob.core.windows.net/beit-share-public/kosmos-2/kosmos-2.pt?sv=2021-10-04&st=2023-06-08T11%3A16%3A02Z&se=2033-06-09T11%3A16%3A00Z&sr=c&sp=r&sig=N4pfCVmSeq4L4tS8QbrFVsX6f6q844eft8xSuXdxU48%3D)
VQA demo: [here](https://github.com/BIGBALLON/kosmos-2-gd) | 07-05-2023 17:27:59 | 07-05-2023 17:27:59 | Thank you for mentioning this :-). There is some early discussion within the team. I will come back to you once we have some decision.<|||||>This is tracked in PR #24709. (so far empty, but I will try to ๐ )<|||||>@ydshieh I'm very excited to hear this news. I sincerely appreciate your efforts.<|||||>any updates?<|||||>Still on it (slowly) ๐ค <|||||>Sure. Thank you. Appreciate those efforts.
<|||||>I just want to say a big thank you for your effort @ydshieh! Looking forward to it.<|||||>@Rajmehta123 @yolandalalala @vanpelt
This [project](https://github.com/BIGBALLON/kosmos-2-gd) can be provided for everyone to try, I hope it can help everyone<|||||>Very nice! @BIGBALLON Thanks a lot!<|||||>@ydshieh Thank you again for your great contribution!<|||||>Amazing! @BIGBALLON Thanks a lot!<|||||>Just want to give a update: I am almost done the coding - just need to put everything together to finalize.
(The model might ends up as a custom code on the Hub instead of directly available in `transformers` - I am not sure) |
transformers | 24,670 | open | Unable to Get Decoded Output from Whisper | ### System Info
- `transformers` version: 4.31.0.dev0
- Platform: Linux-5.15.107+-x86_64-with-glibc2.31
- Python version: 3.10.12
- Huggingface_hub version: 0.16.2
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.1+cu118 (True)
- Tensorflow version (GPU?): 2.12.0 (True)
- Flax version (CPU?/GPU?/TPU?): 0.6.11 (gpu)
- Jax version: 0.4.10
- JaxLib version: 0.4.10
- Using GPU in script?: Yes (nvidia a100-sxm4-40gb)
- Using distributed or parallel set-up in script?: parallel
### Who can help?
speech model: @sanchit-gandhi
tokenizer: @ArthurZucker
trainer: @sgugger
PyTorch: @sgugger
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
All preprocessing steps for the data were the same as the following notebook: https://colab.research.google.com/github/sanchit-gandhi/notebooks/blob/main/fine_tune_whisper.ipynb.
Training the data was able to yield results with proper metrics for WER, but using `trainer.evaluate()` led to an error as transcripts were unable to be generated.
```
dataset = dataset_dict['train']
dataset = dataset.train_test_split(test_size=0.25)
print(dataset)
DatasetDict({
train: Dataset({
features: ['audio', 'sentence'],
num_rows: 1750
})
test: Dataset({
features: ['audio', 'sentence'],
num_rows: 584
})
})
from transformers import Seq2SeqTrainingArguments
training_args = Seq2SeqTrainingArguments(
output_dir="./whisper-small-hi", # change to a repo name of your choice
per_device_train_batch_size=16,
gradient_accumulation_steps=1, # increase by 2x for every 2x decrease in batch size
learning_rate=1e-5,
warmup_steps=500,
max_steps=4000,
gradient_checkpointing=True,
fp16=True,
evaluation_strategy="steps",
per_device_eval_batch_size=8,
predict_with_generate=True,
generation_max_length=225,
save_steps=1000,
eval_steps=1000,
logging_steps=25,
report_to=["tensorboard"],
load_best_model_at_end=True,
metric_for_best_model="wer",
greater_is_better=False,
push_to_hub=True,
)
from transformers import Trainer
trainer = Trainer(
model=model,
args=training_args,
train_dataset=common_voice['train'],
eval_dataset=common_voice['test'],
data_collator=data_collator,
tokenizer=processor.feature_extractor,
compute_metrics=compute_metrics,
)
trainer.evaluate()
in <cell line: 1>:1
/usr/local/lib/python3.10/dist-packages/transformers/trainer.py:2945 in evaluate
    output = eval_loop(
/usr/local/lib/python3.10/dist-packages/transformers/trainer.py:3227 in evaluation_loop
    metrics = self.compute_metrics(EvalPrediction(predictions=all_preds, lab...
in compute_metrics:13
/usr/local/lib/python3.10/dist-packages/transformers/tokenization_utils_base.py:3490 in batch_decode
    return [
/usr/local/lib/python3.10/dist-packages/transformers/tokenization_utils_base.py:3491 in <listcomp>
    self.decode(
/usr/local/lib/python3.10/dist-packages/transformers/models/whisper/tokenization_whisper.py:592 in decode
    text = super().decode(
/usr/local/lib/python3.10/dist-packages/transformers/tokenization_utils_base.py:3530 in decode
    return self._decode(
/usr/local/lib/python3.10/dist-packages/transformers/models/whisper/tokenization_whisper.py:619 in _decode
    filtered_tokens = self.convert_ids_to_tokens(token_ids, skip_special_tokens=skip...
/usr/local/lib/python3.10/dist-packages/transformers/tokenization_utils.py:906 in convert_ids_to_tokens
    index = int(index)
TypeError: int() argument must be a string, a bytes-like object or a real number, not 'list'
```
### Expected behavior
I would expect `trainer.evaluate()` to return proper metrics (validation loss and WER) along with generated transcripts for each of the samples fed into the Whisper model. | 07-05-2023 16:22:51 | 07-05-2023 16:22:51 | Hey @as1078 - did you pre-process your inputs according to the function `prepare_dataset`? See https://huggingface.co/blog/fine-tune-whisper#prepare-data
I observe that your dataset has two columns present: `audio` and `sentence`. These are both columns corresponding to raw audio input data and raw target text data. As explained in the blog post / Colab, we need to pre-process the (audio, text) data to (log-mel spectrograms, token ids) respectively.
You should be able to run evaluation simply by pre-processing your dataset as-per the instructions provided and then passing the pre-processed dataset to the `trainer`.
If you want a more streamlined version of a Whisper evaluation script, I recommend you check out: https://github.com/huggingface/community-events/tree/main/whisper-fine-tuning-event#evaluation
You should just be able to specify your model id and dataset metadata and run evaluation directly<|||||>Yes, the data was preprocessed according to the `prepare_dataset` function. However, `trainer.evaluate()` still gave those errors. If I am running the script that you sent a link to, does the model checkpoint need to have been saved after training? Also, I have limited GPU access even with Google Colab Pro, so would saving checkpoints be a better way to save computational resources?<|||||>Hey @as1078 - could you provide an end-to-end reproducible code snippet to run your script? It would be helpful in checking that all the pre-processing steps have been applied correctly (the fact that we're seeing `audio` and `sentence` in your dataset means something has gone wrong!)
The script can either use a `model_id` for a checkpoint on the Hub (e.g. `"openai/whisper-small"` for the pre-trained small Whisper checkpoint, or the path to a locally saved checkpoint (e.g. if you set your save directory to `./my-model`, set `model_id=./my-model` in the training arguments)<|||||>Yes, I can. Here is the link to my colab file: https://colab.research.google.com/drive/10NaxWZtQgaYMN2fTnbRNNqV2baGkJVGG?usp=sharing.
The data directory is linked here: https://drive.google.com/drive/folders/1-3WqzbKH4ZFUm0r2rYwi2f7Wyw64bYa1?usp=sharing. Here is the link to the CSV file with the data files listed:
[audio_new.csv](https://github.com/huggingface/transformers/files/12023463/audio_new.csv)
<|||||>Hey @as1078 - thanks for sharing your script. It looks largely correct, but I can't run it since the data is saved in your Google Drive, so I can't link it to my Colab runtime without downloading it all. Could you perhaps load your dataset, and then push it to the Hub with:
```python
common_voice.push_to_hub("stuttering_asr")
```
This will create the dataset under your namespace, which will then allow me to run your script by streaming the data from the Hub.
If you're just interested in evaluation, there's a lightweight script [here](https://github.com/huggingface/community-events/tree/main/whisper-fine-tuning-event#evaluation) that you can use that will do all the pre-processing for you and not require the HF Trainer.<|||||>Yes, of course. The notebook has been updated with this: [https://colab.research.google.com/drive/10NaxWZtQgaYMN2fTnbRNNqV2baGkJVGG?usp=sharing](url). I can also try the evaluation script, but it would be great if you could take a look at the data and give me any feedback. Thanks so much for the help.<|||||>Hey @as1078 - I'm still not able to reproduce your script unfortunately. The dataset that you have pushed contains an `audio` column that is the absolute **path** to a local audio file, rather than an audio file itself. See the dataset viewer to inspect the first 100 examples: https://huggingface.co/datasets/amansingh203/stuttering_asr_dataset/viewer/amansingh203--stuttering_asr_dataset/train?row=0
Could you first load your dataset as an audio dataset, and then push it to the Hub? This way, the audio files will be pushed, and subsequently I'll be able to load them locally. You can follow these steps for doing so: https://huggingface.co/docs/datasets/audio_load
Once you've done this, simply push to Hub:
```python
dataset.push_to_hub("stuttering_asr")
```<|||||>Hi @sanchit-gandhi . The data was loaded into my Hugging Face data account (I pushed it to the Hub), where the audio is now stored. Let me know if you have any issues accessing it. I was able to resolve the issues when I ran `trainer.evaluate()` (I was not using a `Sequence2SequenceTrainer`). However, when I generate transcripts, some of them are not in English, even though the tokenizer is set to transcribe English. Was wondering if this was an issue with the code, or if the model needs more epochs to run.<|||||>Hey @as1078 - nice work on figuring out the issue!
> when I generate transcripts, some of them are not in English
Since you're fine-tuning on an English-only dataset, it makes sense to use an English-only checkpoint as your starting point. See the table [here](https://huggingface.co/blog/fine-tune-whisper#introduction) for details. If doing this, ensure that you **do not** specify the language or task arguments to the tokenizer and processor - these are not required for English-only fine-tuning!
In short, you can swap `openai/whisper-small` for `openai/whisper-small.en` everywhere in your script, and remove all the `language` and `task` arguments |
transformers | 24,669 | closed | Add Nucleotide Transformer notebooks and restructure notebook list | As the name suggests, this adds links to the recent Nucleotide Transformer notebooks in the main `transformers` docs! It also restructures the notebooks list - right now the `Other` list is just full of bio models, so I moved them into their own section. | 07-05-2023 16:04:32 | 07-05-2023 16:04:32 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 24,668 | open | updating _compute_mask_indices fn to work with torch compile | fixes #22849
The inplace operations are replaced with out-of-place ones to fix the torch compile computational graph breakage.
This change converts the numpy operations to torch operations in the `_compute_mask_indices` function.
`_compute_mask_indices` is used when SpecAugment is applied during wav2vec2 training.
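A rough sketch of an end-to-end check for this change (hypothetical; it simply compiles the forward pass of a randomly initialized model with SpecAugment enabled):
```python
import torch
from transformers import Wav2Vec2Config, Wav2Vec2ForCTC

config = Wav2Vec2Config(mask_time_prob=0.5, mask_time_length=2)
model = Wav2Vec2ForCTC(config).train()  # train mode so SpecAugment masking is applied
compiled = torch.compile(model)

inputs = torch.randn(2, 16000)
logits = compiled(inputs).logits
```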
@sanchit-gandhi | 07-05-2023 13:29:53 | 07-05-2023 13:29:53 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24668). All of your documentation changes will be reflected on that endpoint.<|||||>
> And also add a new test to check that compiling the forward call works when we have spec aug activated
We already have a test case ```test_mask_time_prob_ctc``` that check the forward call with spec aug activated.
Do you mean when using compile mode - we want to have a test case?<|||||>> Do you mean when using compile mode - we want to have a test case?
Yes please - one test to make sure this PR gives the expected behaviour would be grand!<|||||>I believe we just need an end-to-end test here and then we're good to go right @Kirandevraj? |
transformers | 24,667 | closed | Unpin `huggingface_hub` | # What does this PR do?
- As the release `0.16` is out today.
- Also, use `--upgrade-strategy eager` in `pip install` which is required to respect [this comment](https://github.com/huggingface/transformers/pull/24424#pullrequestreview-1493647494).
The default `-U` (which is associated with `only-if-needed`) won't upgrade to all available new versions. See [the doc](https://pip.pypa.io/en/stable/development/architecture/upgrade-options/#controlling-what-gets-installed):
> packages are only upgraded if they are named in the pip command or a requirement file (i.e, they are direct requirements), or an upgraded parent needs a later version of the dependency than is currently installed.
- Since some packages are upgraded, let's change the cache version number for the new versions could be included in the cache. | 07-05-2023 12:42:12 | 07-05-2023 12:42:12 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thank you @ydshieh ! |
transformers | 24,666 | closed | Whisper: fix prompted max length | # What does this PR do?
Fixes #24600
#23724 added the ability to guide generation with Whisper through `prompt_ids`. It was increasing the generation length by the length of the prompt -- these tokens were being hardcoded, and thus "not generated".
However, in the default case, we were already setting the generation length to the maximum allowed model length (see [model config](https://huggingface.co/openai/whisper-large-v2/blob/main/config.json#L42)). This increment was forcing us to go behind the maximum length and, because the model uses a `nn.Embedding` for the position embedding, indexing exceptions started popping up on long audio inputs :D
This PR modifies the length extension to what I believe was the author's original goal: only increment the length if `max_new_tokens` is passed. By default, this argument is not set and should correspond to the "new" (=non-prompt) generated tokens. | 07-05-2023 10:48:23 | 07-05-2023 10:48:23 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@amyeroberts @sanchit-gandhi
After the latest changes, a warning is emitted when we cross `config.max_position_embeddings` for the first time.
For instance, if you now run
```py
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("distilgpt2")
model = AutoModelForCausalLM.from_pretrained("distilgpt2").to("cuda")
inputs = tokenizer(["The quick brown"], return_tensors="pt").to("cuda")
# distilgpt2 has a maximum length of 1024
gen_out = model.generate(**inputs, do_sample=True, eos_token_id=-1, max_length=1025)
```
You'll see
```
This is a friendly reminder - the current text generation call will exceed the model's predefined maximum length (1024). Depending on the model, you may observe exceptions, performance degradation, or nothing at all.
```
(And if you set `max_length=1026`, you'll see the warning right before the exceptions. This is because we can technically generate `config.max_position_embeddings + 1` tokens even with restrictive position embeddings, although we shouldn't!) |
transformers | 24,665 | open | Add ELECTRA/DeBERTa v3 pretraining script (replaced token detection pretraining) | ### Feature request
It would be welcome to add a pretraining script for the replaced token detection task that [ELECTRA](https://github.com/google-research/electra) and, later, [DeBERTa v3](https://github.com/microsoft/DeBERTa/tree/master/experiments/language_model#pre-training-with-replaced-token-detection-task) were trained on.
Note that DeBERTa v3 models are actually of type [DeBERTa v2](https://huggingface.co/docs/transformers/model_doc/deberta-v2) under the hood according to [the config file](https://huggingface.co/microsoft/deberta-v3-large/blob/main/config.json#L2).
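For reference, the core of the replaced-token-detection objective is fairly small; a rough sketch of the training signal (not the full ELECTRA/DeBERTa v3 recipe, GDES is omitted, and the checkpoints are only for illustration):
```python
import torch
from transformers import ElectraForMaskedLM, ElectraForPreTraining, ElectraTokenizerFast

tokenizer = ElectraTokenizerFast.from_pretrained("google/electra-small-discriminator")
generator = ElectraForMaskedLM.from_pretrained("google/electra-small-generator")
discriminator = ElectraForPreTraining.from_pretrained("google/electra-small-discriminator")

inputs = tokenizer("the quick brown fox jumps over the lazy dog", return_tensors="pt")
input_ids = inputs["input_ids"]

# 1) mask a few positions and train the generator with the usual MLM loss
masked = torch.zeros_like(input_ids, dtype=torch.bool)
masked[0, [1, 4]] = True  # a couple of non-special positions, just for the sketch
gen_input = input_ids.masked_fill(masked, tokenizer.mask_token_id)
gen_labels = input_ids.masked_fill(~masked, -100)
gen_out = generator(input_ids=gen_input, attention_mask=inputs["attention_mask"], labels=gen_labels)

# 2) replace the masked positions with the generator's predictions
#    (the paper samples from the softmax; argmax keeps the sketch short)
with torch.no_grad():
    corrupted = torch.where(masked, gen_out.logits.argmax(dim=-1), input_ids)

# 3) the discriminator predicts, per token, whether it was replaced (the RTD loss)
rtd_labels = (corrupted != input_ids).long()
disc_out = discriminator(input_ids=corrupted, attention_mask=inputs["attention_mask"], labels=rtd_labels)

loss = gen_out.loss + disc_out.loss  # the paper up-weights the RTD term (lambda ~ 50)
```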
### Motivation
While DeBERTa v3 and especially ELECTRA are quite "old" in terms of LLM life spans, for completeness' sake it could be worthwhile to add a training example that is fully compatible with all the recent developments in the HF eco system (accelerate, peft, datasets, evaluate etc.).
### Your contribution
Depending on the interest and my own time I can either review or contribute to this as well. | 07-05-2023 09:58:14 | 07-05-2023 09:58:14 | Hey @BramVanroy really good idea!
I think a good start would be the codebase of the latest CamemBERTa model. It uses its own DeBERTa v3 pretraining code (modified from the ELECTRA implementation from NVIDIA). In general, DeBERTa v3 uses Gradient-Disentangled Embedding Sharing (GDES) in pretraining compared to v2, which is also implemented in the CamemBERTa repository.
Repo is here: https://github.com/WissamAntoun/CamemBERTa
@WissamAntoun is the first author of CamemBERTa paper and also active here :hugs: <|||||>Great find @stefan-it! I see that the code is modified from the [NVIDIA repo](https://github.com/NVIDIA/DeepLearningExamples/tree/master/TensorFlow2/LanguageModeling/ELECTRA). That's probably a great starting point. Personally, I'd also like to see a `torch` equivalent (which I might work on if no one else picks this up).<|||||>Hey @BramVanroy ,
regarding to ELECTRA and PyTorch I recently discovered this repo:
https://github.com/ficstamas/charmen-electra
It implements a kind of Charformer with ELECTRA, but ELECTRA pretraining is also supported. This could be also a good start and interesting for a PyTorch reference, I'm currently testing the ELECTRA Charformer approach :)
(/cc @ficstamas who is maintainer of that repo :hugs: )<|||||>Hey,
@stefan-it Thanks for the cc!
There is an [unofficial](https://github.com/richarddwang/electra_pytorch/tree/master) implementation of ELECTRA which can be a good starting point for you. I used this repository as a reference to make my own.
Also here is a more documented, stripped down version of [my implementation*](https://gist.github.com/ficstamas/263435c924abdd7f742d9925ab12b0d1) if you need it.
*In this example, I initialized it from a checkpoint, but you can initialize it however you like. |
transformers | 24,664 | closed | 🌐 [i18n-KO] Fixed Korean and English `quicktour.md` | <!-- Please use "🌐 [i18n-KO] Translated `<your_file>.md` to Korean" as the PR title -->
# What does this PR do?
Updated and fixed some issues on the `quicktour.md` file for the Korean and English documentation.
Thank you in advance for your review.
Part of https://github.com/huggingface/transformers/issues/20179
<!-- A record is kept on the main issue! If you practiced on the PseudoLab repo, it would be appreciated if you removed this. :smile: -->
## Before reviewing
- [x] Check for missing / redundant translations
- [x] Grammar Check
- [x] Review or Add new terms to glossary
- [x] Check Inline TOC (e.g. `[[lowercased-header]]`)
- [x] Check live-preview for gotchas
## Who can review? (Initial)
<!-- 1. Only reveal the review-request comment below to the PseudoLab team members after all the checkboxes above are complete! -->
Team PseudoLab, may you please review this PR? @0525hhgus, @KIHOON71, @sim-so, @gabrielwithappy, @HanNayeoniee, @wonhyeongseo, @jungnerd
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review? (Final)
<!-- 2. Only reveal the comment below, asking the Hugging Face staff for a review, after the review with the PseudoLab team members is finished! -->
@sgugger, @ArthurZucker, @eunseojo May you please review this PR? | 07-05-2023 08:44:45 | 07-05-2023 08:44:45 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@sgugger, @ArthurZucker, @eunseojo May you please review this PR?
Thank you so much for your support! |
transformers | 24,663 | closed | Fix `EncodecModelTest::test_multi_gpu_data_parallel_forward` | # What does this PR do?
`test_multi_gpu_data_parallel_forward` requires the batch size to be an even number if the batch dim is not at position 0 in the output shape. | 07-05-2023 08:12:40 | 07-05-2023 08:12:40 | _The documentation is not available anymore as the PR was closed or merged._ |
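For context only, and not the test's actual code: the constraint comes from how `torch.nn.DataParallel` scatters inputs and gathers outputs along dim 0, as this toy illustration shows.

```python
import torch

x = torch.randn(4, 10)  # batch of 4 splits evenly across 2 replicas
print([c.shape for c in torch.chunk(x, 2, dim=0)])  # [torch.Size([2, 10]), torch.Size([2, 10])]

y = torch.randn(3, 10)  # batch of 3 splits 2 + 1
print([c.shape for c in torch.chunk(y, 2, dim=0)])  # [torch.Size([2, 10]), torch.Size([1, 10])]

# Outputs are re-gathered by concatenating along dim 0, so outputs whose batch
# dimension is not first only line up cleanly when the per-replica split is even.
```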
transformers | 24,662 | closed | Loading mT5 checkpoint will load from UMT5 class | ### System Info
- `transformers` version: 4.31.0.dev0
- Platform: Linux-5.15.0-41-generic-x86_64-with-glibc2.31
- Python version: 3.9.16
- Huggingface_hub version: 0.14.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.0+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
@ArthurZucker
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```Python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
model = AutoModelForSeq2SeqLM.from_pretrained('google/mt5-small')
print(type(model))
#transformers.models.umt5.modeling_umt5.UMT5ForConditionalGeneration
```
### Expected behavior
@ArthurZucker Thank you for the recent integration of umT5. However, from the latest branch of transformers, loading normal mT5 will load from the UMT5 class. Of course this does not happen with 4.30.2. | 07-05-2023 08:12:32 | 07-05-2023 08:12:32 | cc @ArthurZucker <|||||>Hey! Indeed one of our CI tests is failing because of that. Looking into it now! <|||||>Yep, the issue is that in `CONFIG_MAPPING_NAMES` `umt5` maps to mt5 (since they have the same configuration file). This is messing with the overall mapping. A custom config has to be created, or we need to find a way to properly update the mapping! 😉 <|||||>Hmm. The values in `CONFIG_MAPPING(_NAMES)` are used as keys when creating `MODEL_MAPPING`. We should remove the entries of `umt5` in `CONFIG_MAPPING_NAMES` and other mappings.
Those models should be loaded in a non-auto way.
<|||||>We can't just remove every mapping, some of our checks and doc require them. Let's just add a config for UMT5. |
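As a sketch of the non-auto loading path mentioned above (assuming a transformers build recent enough to ship the UMT5 classes), the architecture class can be picked explicitly instead of going through `AutoModelForSeq2SeqLM`:

```python
from transformers import MT5ForConditionalGeneration, UMT5ForConditionalGeneration

# pick the architecture explicitly rather than relying on the auto mapping
mt5 = MT5ForConditionalGeneration.from_pretrained("google/mt5-small")
print(type(mt5).__name__)  # MT5ForConditionalGeneration

# uMT5 checkpoints go through their own class, e.g.
# umt5 = UMT5ForConditionalGeneration.from_pretrained("google/umt5-small")
```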
transformers | 24,661 | closed | Fix `VisionTextDualEncoderIntegrationTest` | # What does this PR do?
Need a tiny update in the test files after the PR #24585
So far, CI gets errors like
```bash
RuntimeError: Only Tensors of floating point and complex dtype can require gradients
``` | 07-05-2023 07:12:02 | 07-05-2023 07:12:02 | _The documentation is not available anymore as the PR was closed or merged._ |
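For context, that message is the stock PyTorch error raised when `requires_grad` is set on an integer tensor; this standalone illustration (not the test's code) shows where it comes from:

```python
import torch

torch.ones(2, 3, dtype=torch.float32, requires_grad=True)   # fine
try:
    torch.ones(2, 3, dtype=torch.long, requires_grad=True)  # integer dtype
except RuntimeError as err:
    print(err)  # Only Tensors of floating point and complex dtype can require gradients
```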
transformers | 24,660 | closed | Add is_torch_mps_available function to utils | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
Added MPS functionality for Apple silicon GPU acceleration.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| 07-05-2023 06:41:05 | 07-05-2023 06:41:05 | Hi @NripeshN,
Thanks for the PR! Could you please fill out the PR description, including the motivation for adding this function?
For the style and quality checks, you'll need to run `make style` at the top level of the repo and push any changes.
cc @ydshieh <|||||>Hi @NripeshN
Would this new `is_torch_mps_available` be used somewhere in `transformers`? Currently, this PR only adds the definition but not using it anywhere.<|||||>> Hi @NripeshN
>
> Would this new `is_torch_mps_available` be used somewhere in `transformers`? Currently, this PR only adds the definition but not using it anywhere.
I was planning on creating a new pull request where I'd be using this function in transformers. This function would provide GPU acceleration for apple silicon Macs.
<|||||>Hi @ydshieh,
I have used is_torch_mps_available in the latest push<|||||>_The documentation is not available anymore as the PR was closed or merged._ |
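Roughly, such a helper only needs to probe `torch.backends.mps`; the exact implementation that lands in `transformers.utils` may differ, so treat this as a sketch:

```python
import torch

def is_torch_mps_available() -> bool:
    # torch.backends.mps only exists from PyTorch 1.12 onwards
    if not hasattr(torch.backends, "mps"):
        return False
    return torch.backends.mps.is_available() and torch.backends.mps.is_built()
```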
transformers | 24,659 | open | Add HyenaDNA model | ### Model description
HyenaDNA is a long-range genomic foundation model pretrained on context lengths of up to 1 million tokens at single nucleotide resolution.
I would like to add this model to transformers.
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
Code: https://github.com/HazyResearch/hyena-dna
Weights: https://huggingface.co/LongSafari
Paper: https://arxiv.org/abs/2306.15794
cc @exnx | 07-05-2023 06:35:17 | 07-05-2023 06:35:17 | Hi @heytanay, thanks for opening this issue!
The easiest and recommended way to make a model available in `transformers` is to add the modeling code directly on the hub: https://huggingface.co/docs/transformers/custom_models
This means, once working, the model can be found and used immediately without having to go through the PR process. We find this is a lot quicker as the bar for adding code into the library is high due to the maintenance cost of every new model, and so reviews take quite a while.
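For anyone following along, using such a hub-hosted ("custom code") model later boils down to passing `trust_remote_code=True`. The repo id below is only illustrative; check the LongSafari org for the actual checkpoint names:

```python
from transformers import AutoModel, AutoTokenizer

repo_id = "LongSafari/hyenadna-tiny-1k-seqlen"  # hypothetical/illustrative id
model = AutoModel.from_pretrained(repo_id, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(repo_id, trust_remote_code=True)
```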
Let us know if you have any questions about how to add a model using this process. Looking forward to seeing this model in action! <|||||>Thanks for this @amyeroberts! I will proceed with that!<|||||>Hi, @heytanay we are also working on adding hyena models to transformers, how far along are you ?<|||||>@djaym7 As Amy mentioned, I won't be implementing the model directly in transformers and instead will be adding it directly to the hub. If you are doing it / already have done it, please go ahead! |
transformers | 24,658 | open | CUDA error: out of memory with zero3 offload | ### System Info
WSL2
- `transformers` version: 4.30.2
- Platform: Linux-5.15.90.1-microsoft-standard-WSL2-x86_64-with-glibc2.35
- Python version: 3.11.4
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help?
@pacman100
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. the example notebook `https://huggingface.co/ctheodoris/Geneformer/blob/main/examples/cell_classification.ipynb`
2. modify here `training_args_init = TrainingArguments(**training_args, deepspeed ='ds_config_zero3.json')`
3. the example dataset `https://huggingface.co/datasets/ctheodoris/Genecorpus-30M/tree/main/example_input_files/cell_classification/cell_type_annotation/cell_type_train_data.dataset`
ds_config_zero3.json:
This json worked well with transformers deepspeed test `deepspeed examples/pytorch/translation/run_translation.py --deepspeed tests/deepspeed/ds_config_zero3.json --model_name_or_path t5-small --output_dir output_dir --do_eval --max_eval_samples 50 --warmup_steps 50 --max_source_length 128 --val_max_target_length 128 --overwrite_output_dir --per_device_eval_batch_size 4 --predict_with_generate --dataset_config "ro-en" --fp16 --source_lang en --target_lang ro --dataset_name wmt16 --source_prefix "translate English to Romanian: "`
I also confirmed the CPU offload with `https://github.com/huggingface/transformers-bloom-inference` (the transfer from VRAM to CPU RAM)
```
{
"fp16": {
"enabled": "auto",
"loss_scale": 0,
"loss_scale_window": 1000,
"initial_scale_power": 16,
"hysteresis": 2,
"min_loss_scale": 1
},
"bf16": {
"enabled": "auto"
},
"optimizer": {
"type": "AdamW",
"params": {
"lr": "auto",
"betas": "auto",
"eps": "auto",
"weight_decay": "auto"
}
},
"scheduler": {
"type": "WarmupLR",
"params": {
"warmup_min_lr": "auto",
"warmup_max_lr": "auto",
"warmup_num_steps": "auto"
}
},
"zero_optimization": {
"stage": 3,
"offload_optimizer": {
"device": "cpu",
"pin_memory": true
},
"offload_param": {
"device": "cpu",
"pin_memory": true
},
"overlap_comm": true,
"contiguous_gradients": true,
"sub_group_size": 1e8,
"reduce_bucket_size": "auto",
"stage3_prefetch_bucket_size": "auto",
"stage3_param_persistence_threshold": "auto",
"stage3_max_live_parameters": 1e8,
"stage3_max_reuse_distance": 1e8,
"stage3_gather_16bit_weights_on_model_save": true
},
"gradient_accumulation_steps": "auto",
"gradient_clipping": "auto",
"steps_per_print": 2000,
"train_batch_size": "auto",
"train_micro_batch_size_per_gpu": "auto",
"wall_clock_breakdown": false
}
```
message and error:
```
DESKTOP-6FHRRIO:1110179:1110179 [0] NCCL INFO Bootstrap : Using eth0:172.31.110.212<0>
DESKTOP-6FHRRIO:1110179:1110179 [0] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so), using internal implementation
DESKTOP-6FHRRIO:1110179:1110179 [0] misc/cudawrap.cc:90 NCCL WARN Failed to find CUDA library in (null) (NCCL_CUDA_PATH=(null))
NCCL version 2.14.3+cuda11.7
DESKTOP-6FHRRIO:1110179:1110525 [0] NCCL INFO Failed to open libibverbs.so[.1]
DESKTOP-6FHRRIO:1110179:1110525 [0] NCCL INFO NET/Socket : Using [0]eth0:172.31.110.212<0>
DESKTOP-6FHRRIO:1110179:1110525 [0] NCCL INFO Using network Socket
DESKTOP-6FHRRIO:1110179:1110525 [0] NCCL INFO Channel 00/32 : 0
DESKTOP-6FHRRIO:1110179:1110525 [0] NCCL INFO Channel 01/32 : 0
DESKTOP-6FHRRIO:1110179:1110525 [0] NCCL INFO Channel 02/32 : 0
DESKTOP-6FHRRIO:1110179:1110525 [0] NCCL INFO Channel 03/32 : 0
DESKTOP-6FHRRIO:1110179:1110525 [0] NCCL INFO Channel 04/32 : 0
DESKTOP-6FHRRIO:1110179:1110525 [0] NCCL INFO Channel 05/32 : 0
DESKTOP-6FHRRIO:1110179:1110525 [0] NCCL INFO Channel 06/32 : 0
DESKTOP-6FHRRIO:1110179:1110525 [0] NCCL INFO Channel 07/32 : 0
DESKTOP-6FHRRIO:1110179:1110525 [0] NCCL INFO Channel 08/32 : 0
DESKTOP-6FHRRIO:1110179:1110525 [0] NCCL INFO Channel 09/32 : 0
DESKTOP-6FHRRIO:1110179:1110525 [0] NCCL INFO Channel 10/32 : 0
DESKTOP-6FHRRIO:1110179:1110525 [0] NCCL INFO Channel 11/32 : 0
DESKTOP-6FHRRIO:1110179:1110525 [0] NCCL INFO Channel 12/32 : 0
DESKTOP-6FHRRIO:1110179:1110525 [0] NCCL INFO Channel 13/32 : 0
DESKTOP-6FHRRIO:1110179:1110525 [0] NCCL INFO Channel 14/32 : 0
DESKTOP-6FHRRIO:1110179:1110525 [0] NCCL INFO Channel 15/32 : 0
DESKTOP-6FHRRIO:1110179:1110525 [0] NCCL INFO Channel 16/32 : 0
DESKTOP-6FHRRIO:1110179:1110525 [0] NCCL INFO Channel 17/32 : 0
DESKTOP-6FHRRIO:1110179:1110525 [0] NCCL INFO Channel 18/32 : 0
DESKTOP-6FHRRIO:1110179:1110525 [0] NCCL INFO Channel 19/32 : 0
DESKTOP-6FHRRIO:1110179:1110525 [0] NCCL INFO Channel 20/32 : 0
DESKTOP-6FHRRIO:1110179:1110525 [0] NCCL INFO Channel 21/32 : 0
DESKTOP-6FHRRIO:1110179:1110525 [0] NCCL INFO Channel 22/32 : 0
DESKTOP-6FHRRIO:1110179:1110525 [0] NCCL INFO Channel 23/32 : 0
DESKTOP-6FHRRIO:1110179:1110525 [0] NCCL INFO Channel 24/32 : 0
DESKTOP-6FHRRIO:1110179:1110525 [0] NCCL INFO Channel 25/32 : 0
DESKTOP-6FHRRIO:1110179:1110525 [0] NCCL INFO Channel 26/32 : 0
DESKTOP-6FHRRIO:1110179:1110525 [0] NCCL INFO Channel 27/32 : 0
DESKTOP-6FHRRIO:1110179:1110525 [0] NCCL INFO Channel 28/32 : 0
DESKTOP-6FHRRIO:1110179:1110525 [0] NCCL INFO Channel 29/32 : 0
DESKTOP-6FHRRIO:1110179:1110525 [0] NCCL INFO Channel 30/32 : 0
DESKTOP-6FHRRIO:1110179:1110525 [0] NCCL INFO Channel 31/32 : 0
DESKTOP-6FHRRIO:1110179:1110525 [0] NCCL INFO Trees [0] -1/-1/-1->0->-1 [1] -1/-1/-1->0->-1 [2] -1/-1/-1->0->-1 [3] -1/-1/-1->0->-1 [4] -1/-1/-1->0->-1 [5] -1/-1/-1->0->-1 [6] -1/-1/-1->0->-1 [7] -1/-1/-1->0->-1 [8] -1/-1/-1->0->-1 [9] -1/-1/-1->0->-1 [10] -1/-1/-1->0->-1 [11] -1/-1/-1->0->-1 [12] -1/-1/-1->0->-1 [13] -1/-1/-1->0->-1 [14] -1/-1/-1->0->-1 [15] -1/-1/-1->0->-1 [16] -1/-1/-1->0->-1 [17] -1/-1/-1->0->-1 [18] -1/-1/-1->0->-1 [19] -1/-1/-1->0->-1 [20] -1/-1/-1->0->-1 [21] -1/-1/-1->0->-1 [22] -1/-1/-1->0->-1 [23] -1/-1/-1->0->-1 [24] -1/-1/-1->0->-1 [25] -1/-1/-1->0->-1 [26] -1/-1/-1->0->-1 [27] -1/-1/-1->0->-1 [28] -1/-1/-1->0->-1 [29] -1/-1/-1->0->-1 [30] -1/-1/-1->0->-1 [31] -1/-1/-1->0->-1
DESKTOP-6FHRRIO:1110179:1110525 [0] NCCL INFO Connected all rings
DESKTOP-6FHRRIO:1110179:1110525 [0] NCCL INFO Connected all trees
DESKTOP-6FHRRIO:1110179:1110525 [0] NCCL INFO 32 coll channels, 32 p2p channels, 32 p2p channels per peer
DESKTOP-6FHRRIO:1110179:1110525 [0] NCCL INFO comm 0x560d4b0d62b0 rank 0 nranks 1 cudaDev 0 busId 3000 - Init COMPLETE
#################come on tensor([0., 0., 0., ..., 0., 0., 0.])
#################come on tensor([0., 0., 0., ..., 0., 0., 0.])
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
Cell In[9], line 1
----> 1 trainer.train()
File /home/pc/miniconda3/envs/Transformers/lib/python3.11/site-packages/transformers/trainer.py:1645, in Trainer.train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs)
1640 self.model_wrapped = self.model
1642 inner_training_loop = find_executable_batch_size(
1643 self._inner_training_loop, self._train_batch_size, args.auto_find_batch_size
1644 )
-> 1645 return inner_training_loop(
1646 args=args,
1647 resume_from_checkpoint=resume_from_checkpoint,
1648 trial=trial,
1649 ignore_keys_for_eval=ignore_keys_for_eval,
1650 )
File /home/pc/miniconda3/envs/Transformers/lib/python3.11/site-packages/transformers/trainer.py:1759, in Trainer._inner_training_loop(self, batch_size, args, resume_from_checkpoint, trial, ignore_keys_for_eval)
1756 model, self.optimizer = self.accelerator.prepare(self.model, self.optimizer)
1757 else:
1758 # to handle cases wherein we pass "DummyScheduler" such as when it is specified in DeepSpeed config.
-> 1759 model, self.optimizer, self.lr_scheduler = self.accelerator.prepare(
1760 self.model, self.optimizer, self.lr_scheduler
1761 )
1763 if self.is_fsdp_enabled:
1764 self.model = model
File /home/pc/miniconda3/envs/Transformers/lib/python3.11/site-packages/accelerate/accelerator.py:1178, in Accelerator.prepare(self, device_placement, *args)
1176 args = self._prepare_ipex(*args)
1177 if self.distributed_type == DistributedType.DEEPSPEED:
-> 1178 result = self._prepare_deepspeed(*args)
1179 elif self.distributed_type == DistributedType.MEGATRON_LM:
1180 result = self._prepare_megatron_lm(*args)
File /home/pc/miniconda3/envs/Transformers/lib/python3.11/site-packages/accelerate/accelerator.py:1505, in Accelerator._prepare_deepspeed(self, *args)
1502 if type(scheduler).__name__ in deepspeed.runtime.lr_schedules.VALID_LR_SCHEDULES:
1503 kwargs["lr_scheduler"] = scheduler
-> 1505 engine, optimizer, _, lr_scheduler = deepspeed.initialize(**kwargs)
1506 if optimizer is not None:
1507 optimizer = DeepSpeedOptimizerWrapper(optimizer)
File /home/pc/miniconda3/envs/Transformers/lib/python3.11/site-packages/deepspeed/__init__.py:165, in initialize(args, model, optimizer, model_parameters, training_data, lr_scheduler, mpu, dist_init_required, collate_fn, config, config_params)
153 engine = DeepSpeedHybridEngine(args=args,
154 model=model,
155 optimizer=optimizer,
(...)
162 config=config,
163 config_class=config_class)
164 else:
--> 165 engine = DeepSpeedEngine(args=args,
166 model=model,
167 optimizer=optimizer,
168 model_parameters=model_parameters,
169 training_data=training_data,
170 lr_scheduler=lr_scheduler,
171 mpu=mpu,
172 dist_init_required=dist_init_required,
173 collate_fn=collate_fn,
174 config=config,
175 config_class=config_class)
176 else:
177 assert mpu is None, "mpu must be None with pipeline parallelism"
File /home/pc/miniconda3/envs/Transformers/lib/python3.11/site-packages/deepspeed/runtime/engine.py:309, in DeepSpeedEngine.__init__(self, args, model, optimizer, model_parameters, training_data, lr_scheduler, mpu, dist_init_required, collate_fn, config, config_class, dont_change_device)
306 model_parameters = list(model_parameters)
308 if has_optimizer:
--> 309 self._configure_optimizer(optimizer, model_parameters)
310 self._configure_lr_scheduler(lr_scheduler)
311 self._report_progress(0)
File /home/pc/miniconda3/envs/Transformers/lib/python3.11/site-packages/deepspeed/runtime/engine.py:1184, in DeepSpeedEngine._configure_optimizer(self, client_optimizer, model_parameters)
1181 optimizer_wrapper = self._do_optimizer_sanity_check(basic_optimizer)
1183 if optimizer_wrapper == ZERO_OPTIMIZATION:
-> 1184 self.optimizer = self._configure_zero_optimizer(basic_optimizer)
1185 elif optimizer_wrapper == AMP:
1186 amp_params = self.amp_params()
File /home/pc/miniconda3/envs/Transformers/lib/python3.11/site-packages/deepspeed/runtime/engine.py:1474, in DeepSpeedEngine._configure_zero_optimizer(self, optimizer)
1472 log_dist(f'Creating {model_dtype} ZeRO stage {zero_stage} optimizer', ranks=[0])
1473 from deepspeed.runtime.zero.stage3 import DeepSpeedZeroOptimizer_Stage3
-> 1474 optimizer = DeepSpeedZeroOptimizer_Stage3(
1475 self.module,
1476 optimizer,
1477 timers=timers,
1478 ds_config=self.config,
1479 static_loss_scale=self.loss_scale(),
1480 dynamic_loss_scale=self.dynamic_loss_scale(),
1481 dynamic_loss_args=self.dynamic_loss_scale_args(),
1482 clip_grad=self.gradient_clipping(),
1483 contiguous_gradients=self.zero_contiguous_gradients(),
1484 reduce_bucket_size=self.zero_reduce_bucket_size(),
1485 prefetch_bucket_size=self.zero_prefetch_bucket_size(),
1486 max_reuse_distance=self.zero_max_reuse_distance(),
1487 max_live_parameters=self.zero_max_live_parameters(),
1488 param_persistence_threshold=self.zero_param_persistence_threshold(),
1489 model_persistence_threshold=self.zero_model_persistence_threshold(),
1490 dp_process_group=self.data_parallel_group,
1491 reduce_scatter=self.zero_reduce_scatter(),
1492 overlap_comm=self.zero_overlap_comm(),
1493 offload_optimizer_config=self.zero_offload_optimizer(),
1494 offload_param_config=self.zero_offload_param(),
1495 sub_group_size=self.zero_sub_group_size(),
1496 mpu=self.mpu,
1497 postscale_gradients=self.postscale_gradients(),
1498 gradient_predivide_factor=self.gradient_predivide_factor(),
1499 gradient_accumulation_steps=self.gradient_accumulation_steps(),
1500 aio_config=self.aio_config(),
1501 communication_data_type=self.communication_data_type)
1503 else:
1504 raise NotImplementedError("ZeRO stage {} not implemented".format(zero_stage))
File /home/pc/miniconda3/envs/Transformers/lib/python3.11/site-packages/deepspeed/runtime/zero/stage3.py:149, in DeepSpeedZeroOptimizer_Stage3.__init__(self, module, init_optimizer, timers, ds_config, static_loss_scale, dynamic_loss_scale, dynamic_loss_args, verbose, contiguous_gradients, reduce_bucket_size, prefetch_bucket_size, max_reuse_distance, max_live_parameters, param_persistence_threshold, model_persistence_threshold, dp_process_group, reduce_scatter, overlap_comm, offload_optimizer_config, offload_param_config, sub_group_size, mpu, clip_grad, communication_data_type, postscale_gradients, gradient_predivide_factor, gradient_accumulation_steps, elastic_checkpoint, aio_config)
146 self.params_in_nvme_and_cpu = False
147 self.max_params_in_cpu = 0
--> 149 self.parameter_offload = self.initialize_ds_offload(module=module,
150 timers=timers,
151 ds_config=ds_config,
152 overlap_comm=overlap_comm,
153 prefetch_bucket_size=prefetch_bucket_size,
154 max_reuse_distance=max_reuse_distance,
155 max_live_parameters=max_live_parameters,
156 param_persistence_threshold=param_persistence_threshold,
157 model_persistence_threshold=model_persistence_threshold,
158 offload_param_config=offload_param_config,
159 mpu=mpu)
161 self.persistent_parameters = self.parameter_offload.persistent_parameters
162 self._configure_offloading(offload_optimizer_config, offload_param_config)
File /home/pc/miniconda3/envs/Transformers/lib/python3.11/site-packages/deepspeed/runtime/zero/stage3.py:352, in DeepSpeedZeroOptimizer_Stage3.initialize_ds_offload(self, module, timers, ds_config, overlap_comm, prefetch_bucket_size, max_reuse_distance, max_live_parameters, param_persistence_threshold, model_persistence_threshold, offload_param_config, mpu)
338 def initialize_ds_offload(
339 self,
340 module,
(...)
350 mpu,
351 ):
--> 352 return DeepSpeedZeRoOffload(module=module,
353 timers=timers,
354 ds_config=ds_config,
355 overlap_comm=overlap_comm,
356 prefetch_bucket_size=prefetch_bucket_size,
357 max_reuse_distance=max_reuse_distance,
358 max_live_parameters=max_live_parameters,
359 param_persistence_threshold=param_persistence_threshold,
360 model_persistence_threshold=model_persistence_threshold,
361 offload_param_config=offload_param_config,
362 mpu=mpu)
File /home/pc/miniconda3/envs/Transformers/lib/python3.11/site-packages/deepspeed/runtime/zero/parameter_offload.py:229, in DeepSpeedZeRoOffload.__init__(self, module, timers, ds_config, overlap_comm, prefetch_bucket_size, max_reuse_distance, max_live_parameters, param_persistence_threshold, model_persistence_threshold, offload_param_config, mpu)
226 self.offload_device = offload_param_config.device
227 self.offload_param_pin_memory = offload_param_config.pin_memory
--> 229 self._convert_to_zero_parameters(ds_config, module, mpu)
231 for m in module.modules():
232 _init_external_params(m)
File /home/pc/miniconda3/envs/Transformers/lib/python3.11/site-packages/deepspeed/runtime/zero/parameter_offload.py:297, in DeepSpeedZeRoOffload._convert_to_zero_parameters(self, ds_config, module, mpu)
294 if mpu:
295 group = mpu.get_data_parallel_group()
--> 297 Init(module=module,
298 data_parallel_group=group,
299 dtype=self.dtype,
300 config_dict_or_path=ds_config,
301 remote_device=self.offload_device,
302 pin_memory=self.offload_param_pin_memory,
303 mpu=mpu)
File /home/pc/miniconda3/envs/Transformers/lib/python3.11/site-packages/deepspeed/runtime/zero/partition_parameters.py:782, in Init.__init__(self, module, data_parallel_group, mem_efficient_linear, remote_device, pin_memory, config_dict_or_path, config, enabled, dtype, mpu)
780 if module is not None:
781 assert isinstance(module, torch.nn.Module)
--> 782 self._convert_to_zero_parameters(module.parameters(recurse=True))
784 self.use_all_gather_into_tensor = dist.has_all_gather_into_tensor()
785 if not self.use_all_gather_into_tensor:
File /home/pc/miniconda3/envs/Transformers/lib/python3.11/site-packages/deepspeed/runtime/zero/partition_parameters.py:798, in Init._convert_to_zero_parameters(self, param_list)
796 continue
797 self._convert_to_deepspeed_param(param)
--> 798 param.partition()
File /home/pc/miniconda3/envs/Transformers/lib/python3.11/site-packages/deepspeed/runtime/zero/partition_parameters.py:966, in Init._convert_to_deepspeed_param.<locals>.partition(param_list, hierarchy, has_been_updated)
964 if param_list is None:
965 param_list = [cls]
--> 966 self._partition(param_list, has_been_updated=has_been_updated)
File /home/pc/miniconda3/envs/Transformers/lib/python3.11/site-packages/deepspeed/runtime/zero/partition_parameters.py:1104, in Init._partition(self, param_list, force, has_been_updated)
1100 def _partition(self, param_list, force=False, has_been_updated=False):
1101 for param in param_list:
1102 #print_rank_0(f"Before Partitioning Param {param.ds_id}")
1103 # self._param_status(param)
-> 1104 self._partition_param(param, has_been_updated=has_been_updated)
1105 param.ds_status = ZeroParamStatus.NOT_AVAILABLE
File /home/pc/miniconda3/envs/Transformers/lib/python3.11/site-packages/deepspeed/utils/nvtx.py:15, in instrument_w_nvtx.<locals>.wrapped_fn(*args, **kwargs)
13 def wrapped_fn(*args, **kwargs):
14 get_accelerator().range_push(func.__qualname__)
---> 15 ret_val = func(*args, **kwargs)
16 get_accelerator().range_pop()
17 return ret_val
File /home/pc/miniconda3/envs/Transformers/lib/python3.11/site-packages/deepspeed/runtime/zero/partition_parameters.py:1186, in Init._partition_param(self, param, buffer, has_been_updated)
1183 if start < param.ds_numel and end <= param.ds_numel:
1184 src_tensor = one_dim_param.narrow(0, start, partition_size)
-> 1186 param.ds_tensor.copy_(src_tensor)
1187 #partitioned_tensor = src_tensor.clone().detach().to(self.remote_device)
1188
1189 else:
1190 # partitioned_tensor = torch.zeros(partition_size,
1191 # dtype=param.dtype,
1192 # device=self.remote_device )
1194 if start < param.ds_numel:
RuntimeError: CUDA error: out of memory
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
```
I
### Expected behavior
I expect the CPU offload to work so I can use a larger batch number (current 2 working without `deepspeed ='ds_config_zero3.json'`). However, with DeepSpeed, even batch 1 did not work with the same error. | 07-05-2023 06:18:58 | 07-05-2023 06:18:58 | Hello, are you running the notebook as is? or are you running it as a script with distributed launcher such as `deepspeed`/`torchrun`/`accelerate launch`?
You can't run DeepSpeed in a notebook. You need to convert the notebook to a script and run the script via a distributed launcher similar to the translation example that you are running<|||||>Worked!
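As a concrete illustration of the advice above, here is a hypothetical invocation, assuming the notebook has been exported to `train.py` and `TrainingArguments(..., deepspeed='ds_config_zero3.json')` stays inside the script exactly as in the notebook:

```bash
deepspeed --num_gpus=1 train.py
# or, once `accelerate config` has been set up for DeepSpeed:
# accelerate launch train.py
```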
Thank you @pacman100<|||||>Hi @pacman100
I encountered another issue. The .py ran fine alone but with deepspeed it encountered `server socket has failed to bind to [::]:29500 (errno: 98 - Address already in use).` It could not be resolved by specifying a port `os.environ["MASTER_PORT"] = "9994"`. Was it because ray assigns multiple ports at the same time and I only have 1 GPU? Thank you.
```
from ray.air import session
def train(config):
# ...
session.report({"metric": metric}, checkpoint=checkpoint)
For more information please see https://docs.ray.io/en/latest/tune/api/trainable.html
warnings.warn(
== Status ==
Current time: 2023-07-05 16:13:18 (running for 00:00:00.64)
Using FIFO scheduling algorithm.
Logical resource usage: 0/48 CPUs, 0/1 GPUs
Result logdir: /root/ray_results/_objective_2023-07-05_16-13-18
Number of trials: 1/100 (1 PENDING)
+---------------------+----------+-------+-----------------+---------------------+--------------------+------------------------+---------+----------------+----------------+
| Trial name | status | loc | learning_rate | lr_scheduler_type | num_train_epochs | per_device_train_bat | seed | warmup_steps | weight_decay |
| | | | | | | ch_size | | | |
|---------------------+----------+-------+-----------------+---------------------+--------------------+------------------------+---------+----------------+----------------|
| _objective_fd2e0c55 | PENDING | | 4.36572e-06 | polynomial | 1 |
12 | 59.5864 | 1832.57 | 0.0619684 |
+---------------------+----------+-------+-----------------+---------------------+--------------------+------------------------+---------+----------------+----------------+
(pid=1172302) [2023-07-05 16:13:25,088] [INFO] [real_accelerator.py:110:get_accelerator] Setting ds_accelerator to cuda (auto detect)
(_objective pid=1172302) /home/pc/miniconda3/envs/Transformers/lib/python3.11/site-packages/loompy/bus_file.py:67: NumbaDeprecationWarning: The 'nopython' keyword argument was not supplied to the 'numba.jit' decorator. The implicit default value for this argument is currently False, but it will be changed to True in Numba 0.59.0. See https://numba.readthedocs.io/en/stable/reference/deprecation.html#deprecation-of-object-mode-fall-back-behaviour-when-using-jit for details.
(_objective pid=1172302) @jit
(_objective pid=1172302) /home/pc/miniconda3/envs/Transformers/lib/python3.11/site-packages/loompy/bus_file.py:84: NumbaDeprecationWarning: The 'nopython' keyword argument was not supplied to the 'numba.jit' decorator. The implicit default value for this argument is currently False, but it will be changed to True in Numba 0.59.0. See https://numba.readthedocs.io/en/stable/reference/deprecation.html#deprecation-of-object-mode-fall-back-behaviour-when-using-jit for details.
(_objective pid=1172302) @jit
(_objective pid=1172302) /home/pc/miniconda3/envs/Transformers/lib/python3.11/site-packages/loompy/bus_file.py:101: NumbaDeprecationWarning: The 'nopython' keyword argument was not supplied to the 'numba.jit' decorator. The implicit default value for this argument is currently False, but it will be changed to True in Numba 0.59.0. See https://numba.readthedocs.io/en/stable/reference/deprecation.html#deprecation-of-object-mode-fall-back-behaviour-when-using-jit for details.
(_objective pid=1172302) @jit
2023-07-05 16:13:34,406 ERROR tune_controller.py:873 -- Trial task failed for trial _objective_fd2e0c55
Traceback (most recent call last):
File "/home/pc/miniconda3/envs/Transformers/lib/python3.11/site-packages/ray/air/execution/_internal/event_manager.py", line 110, in resolve_future
result = ray.get(future)
^^^^^^^^^^^^^^^
File "/home/pc/miniconda3/envs/Transformers/lib/python3.11/site-packages/ray/_private/auto_init_hook.py", line 18, in auto_init_wrapper
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/home/pc/miniconda3/envs/Transformers/lib/python3.11/site-packages/ray/_private/client_mode_hook.py", line 103, in wrapper
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/pc/miniconda3/envs/Transformers/lib/python3.11/site-packages/ray/_private/worker.py", line 2540, in get
raise value.as_instanceof_cause()
ray.exceptions.RayTaskError(RuntimeError): ray::ImplicitFunc.train() (pid=1172302, ip=172.31.110.212, actor_id=ffa19b5f202ac72158b2946001000000, repr=_objective)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/pc/miniconda3/envs/Transformers/lib/python3.11/site-packages/ray/tune/trainable/trainable.py", line 389, in train
raise skipped from exception_cause(skipped)
File "/home/pc/miniconda3/envs/Transformers/lib/python3.11/site-packages/ray/tune/trainable/function_trainable.py", line 336, in entrypoint
return self._trainable_func(
^^^^^^^^^^^^^^^^^^^^^
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/pc/miniconda3/envs/Transformers/lib/python3.11/site-packages/ray/tune/trainable/function_trainable.py", line 653, in _trainable_func
output = fn()
^^^^
File "/home/pc/miniconda3/envs/Transformers/lib/python3.11/site-packages/transformers/integrations.py", line 357, in dynamic_modules_import_trainable
return trainable(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/pc/miniconda3/envs/Transformers/lib/python3.11/site-packages/ray/tune/trainable/util.py", line 324, in inner
return trainable(config, **fn_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/pc/miniconda3/envs/Transformers/lib/python3.11/site-packages/transformers/integrations.py", line 258, in _objective
local_trainer.train(resume_from_checkpoint=checkpoint, trial=trial)
File "/home/pc/miniconda3/envs/Transformers/lib/python3.11/site-packages/transformers/trainer.py", line 1614, in train self._hp_search_setup(trial)
File "/home/pc/miniconda3/envs/Transformers/lib/python3.11/site-packages/transformers/trainer.py", line 1330, in _hp_search_setup
self.create_accelerator_and_postprocess()
File "/home/pc/miniconda3/envs/Transformers/lib/python3.11/site-packages/transformers/trainer.py", line 3968, in create_accelerator_and_postprocess
self.accelerator = Accelerator(
^^^^^^^^^^^^
File "/home/pc/miniconda3/envs/Transformers/lib/python3.11/site-packages/accelerate/accelerator.py", line 345, in __init__
self.state = AcceleratorState(
^^^^^^^^^^^^^^^^^
File "/home/pc/miniconda3/envs/Transformers/lib/python3.11/site-packages/accelerate/state.py", line 680, in __init__
PartialState(cpu, **kwargs)
File "/home/pc/miniconda3/envs/Transformers/lib/python3.11/site-packages/accelerate/state.py", line 191, in __init__
torch.distributed.init_process_group(backend=self.backend, **kwargs)
File "/home/pc/miniconda3/envs/Transformers/lib/python3.11/site-packages/torch/distributed/distributed_c10d.py", line 900, in init_process_group
store, rank, world_size = next(rendezvous_iterator)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/pc/miniconda3/envs/Transformers/lib/python3.11/site-packages/torch/distributed/rendezvous.py", line 245, in _env_rendezvous_handler
store = _create_c10d_store(master_addr, master_port, rank, world_size, timeout)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/pc/miniconda3/envs/Transformers/lib/python3.11/site-packages/torch/distributed/rendezvous.py", line 176, in _create_c10d_store
return TCPStore(
^^^^^^^^^
RuntimeError: The server socket has failed to listen on any local network address. The server socket has failed to bind to [::]:29500 (errno: 98 - Address already in use). The server socket has failed to bind to DESKTOP-6FHRRIO:29500 (errno: 98 - Address already in use).
Result for _objective_fd2e0c55:
date: 2023-07-05_16-13-25
hostname: DESKTOP-6FHRRIO
node_ip: 172.31.110.212
pid: 1172302
timestamp: 1688544805
trial_id: fd2e0c55
(_objective pid=1172302) [W socket.cpp:426] [c10d] The server socket has failed to bind to [::]:29500 (errno: 98 - Address already in use).
(_objective pid=1172302) [W socket.cpp:426] [c10d] The server socket has failed to bind to DESKTOP-6FHRRIO:29500 (errno: 98 - Address already in use).
(_objective pid=1172302) [E socket.cpp:462] [c10d] The server socket has failed to listen on any local network address.
(pid=1172435) [2023-07-05 16:13:41,310] [INFO] [real_accelerator.py:110:get_accelerator] Setting ds_accelerator to cuda (auto detect)
(_objective pid=1172435) /home/pc/miniconda3/envs/Transformers/lib/python3.11/site-packages/loompy/bus_file.py:67: NumbaDeprecationWarning: The 'nopython' keyword argument was not supplied to the 'numba.jit' decorator. The implicit default value for this argument is currently False, but it will be changed to True in Numba 0.59.0. See https://numba.readthedocs.io/en/stable/reference/deprecation.html#deprecation-of-object-mode-fall-back-behaviour-when-using-jit for details.
(_objective pid=1172435) @jit
(_objective pid=1172435) /home/pc/miniconda3/envs/Transformers/lib/python3.11/site-packages/loompy/bus_file.py:84: NumbaDeprecationWarning: The 'nopython' keyword argument was not supplied to the 'numba.jit' decorator. The implicit default value for this argument is currently False, but it will be changed to True in Numba 0.59.0. See https://numba.readthedocs.io/en/stable/reference/deprecation.html#deprecation-of-object-mode-fall-back-behaviour-when-using-jit for details.
(_objective pid=1172435) @jit
(_objective pid=1172435) /home/pc/miniconda3/envs/Transformers/lib/python3.11/site-packages/loompy/bus_file.py:101: NumbaDeprecationWarning: The 'nopython' keyword argument was not supplied to the 'numba.jit' decorator. The implicit default value for this argument is currently False, but it will be changed to True in Numba 0.59.0. See https://numba.readthedocs.io/en/stable/reference/deprecation.html#deprecation-of-object-mode-fall-back-behaviour-when-using-jit for details.
(_objective pid=1172435) @jit
^C[2023-07-05 16:13:47,161] [INFO] [launch.py:315:sigkill_handler] Killing subprocess 1168420
*
```<|||||>@pchiang5 Perhaps the process has not been fully shut down, please use `ps -aux` to find the remaining process, then use `kill -9 process_pid` to kill them totally.<|||||>@2033329616 Thank you for your feedback. Yes, it shall be due to an open process not shut down and could be resolved by randomly assigning a new port before the previous call.
However, I found the incompatibility of ray tune + hyperopt with deepspeed launcher is the main issue: Without ray tune + hyperopt, it ran with successful CPU offload. With deepspeed and ray+hyperopt as below, the zero3 offload did not work because the amount of VRAM consumption was identical to that without deepspeed.
```
# create the trainer
trainer = Trainer(
model_init=model_init,
args=training_args_init,
data_collator=DataCollatorForCellClassification(),
train_dataset=organ_trainset,
eval_dataset=organ_evalset,
compute_metrics=compute_metrics,
callbacks = [EarlyStoppingCallback(early_stopping_patience=3)]
)
# specify raytune hyperparameter search space
ray_config = {
"num_train_epochs": tune.choice([epochs]),
"learning_rate": tune.loguniform(1e-6, 1e-3),
"weight_decay": tune.uniform(0.0, 0.3),
"lr_scheduler_type": tune.choice(["linear","cosine","polynomial"]),
"warmup_steps": tune.uniform(100, 2000),
"seed": tune.uniform(0,100),
"per_device_train_batch_size": tune.choice([geneformer_batch_size])
}
hyperopt_search = HyperOptSearch(
metric="eval_macro_f1", mode="max")
early_stop = {
"training_iteration": 10
}
# optimize hyperparameters
trainer.hyperparameter_search(
direction="maximize",
backend="ray",
resources_per_trial={"cpu":18,"gpu":1},
hp_space=lambda _: ray_config,
stop=early_stop,
search_alg=hyperopt_search,
n_trials=100, # number of trials
progress_reporter=tune.CLIReporter(max_report_frequency=600,
sort_by_metric=True,
max_progress_rows=100,
mode="max",
metric="eval_macro_f1",
metric_columns=["loss", "eval_loss", "eval_accuracy", "eval_macro_f1"])
```<|||||>Hi @pacman100,
> You can't run DeepSpeed in a Jupyter notebook. You need to convert the notebook to a script and run the script via a distributed launcher similar to the translation example that you are running.
I am also having a CUDA OOM error running DeepSpeed in a notebook on a single node with a single GPU (training [Segformer](https://huggingface.co/docs/transformers/model_doc/segformer) on a GPU with 8 GB RAM). I expected it to work, given [the deployment excerpt](https://huggingface.co/docs/transformers/main_classes/deepspeed#deployment-in-notebooks) from the docs. Why would you say so?
I injected the env variables as reported in the docs using stage 3 with CPU offloading, but still the error remains.
|
transformers | 24,657 | open | At least one model's inference seems to have broken from transformers 4.29.2 -> 4.30.* | ### System Info
- `transformers` version: 4.30.2
- Platform: Windows-10-10.0.19045-SP0
- Python version: 3.10.9
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 1.13.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: default setting ( I think it uses GPU )
- Using distributed or parallel set-up in script?: not sure what this is, but I think its N/A
### Who can help?
inference of the model [staka/fugumt-en-ja](https://huggingface.co/staka/fugumt-en-ja) using the "translation" pipeline has broken from 4.30.0 and above.
I don't know if this is expected, or if there are some new parameters I need to use, but using the default script from the readme no longer works. It results in gibberish. I have also confirmed that it works fine in 4.29.2.
I don't know what other models are affected.
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I have a slightly modified script to write the output to a txt file, since my windows commandline doesn't support Japanese, but I don't think that is relevant. Otherwise, the code is the same from the official readme of the model. Here is the code itself:
```
import pysbd
seg_en = pysbd.Segmenter(language="en", clean=False)
from transformers import pipeline
fugu_translator = pipeline('translation', model='staka/fugumt-en-ja')
txt = 'This is a cat. It is very cute.'
result = fugu_translator(seg_en.segment(txt))
print(result)
final = ''
for s in result:
final += s['translation_text']
with open('./tmp.txt', "w", encoding="utf-8") as f:
f.write(final)
```
in transformers 4.29.2 result is correct:
`ใใใฏ็ซใงใใใจใฆใๅฏๆใใงใใ`
in transformers 4.30.0 and above, result is gibberish:
`ใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใใๅฟ
่ฆใจใชใใพใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใไผใใ`
### Expected behavior
in transformers 4.29.2 result is correct:
`"ใใใฏ็ซใงใใใจใฆใๅฏๆใใงใใ"`
I expect the same behavior in 4.30.* and above. | 07-05-2023 05:19:53 | 07-05-2023 05:19:53 | Thanks for reporting. I confirm the issue could be reproduced.<|||||>cc @Narsil @ArthurZucker <|||||>I went forward to check which commit causing issue. It turns out to be
096f2cf12664bb7da41f89897d3a22966baee9b4
Tied weights load (#24310)
We will have to wait @sgugger to take a look.
(I can probably did what he has done for another model)<|||||>The fix will involve pushing a new model file to the Hub repo.
If you need to use this model/pipeline and can't stay with version `4.29`, I can help creating the new model file.
<|||||>cool, technically i can stay on 4.29 but its also nice to be able to do 4-bit inference by updating to 4.30.* . Maybe I can post on the model owners twitter to see if he wants to update his models.<|||||>The necessary step (required for some model after #24310) to update (some) model weights is only known by a few team member. I am not sure the repo. owner knows how to do it. I can open a Hub repo. PR however.<|||||>Ok I posted on his twitter and linked this thread. No idea if hes going to respond though.
He seems to have the most popular Japanese/English translation models on Hugging Face; ja-en and en-ja look like they got 9k-10k downloads in the past month, so I guess it would be good if they can be updated for the newest transformers.
So this is not a temporary issue? Basically any models affected will need to update, otherwise all future versions of transformers won't work with them? Can you not make the tie weights thing a parameter or something, or does that actually break other stuff?<|||||>I created branch for a temporary fix. You can use it as
```bash
pip uninstall transformers
pip install git+https://github.com/huggingface/transformers@temp_fix_marian#egg=transformers
```
with a slightly modified script
```python
def preprocess_state_dict_fn(state_dict):
state_dict["lm_head.weight"] = state_dict["model.encoder.embed_tokens.weight"]
return state_dict
model_kwargs = {"preprocess_state_dict_fn": preprocess_state_dict_fn}
import pysbd
seg_en = pysbd.Segmenter(language="en", clean=False)
from transformers import pipeline
fugu_translator = pipeline('translation', model='staka/fugumt-en-ja', model_kwargs=model_kwargs)
txt = 'This is a cat. It is very cute.'
result = fugu_translator(seg_en.segment(txt))
print(result)
final = ''
for s in result:
final += s['translation_text']
with open('./tmp.txt', "w", encoding="utf-8") as f:
f.write(final)
```<|||||>Please note this is not through a discussion with the team, and it's not clear yet how we will deal with this issue officially.
Let me know if the (temp) fix works.<|||||>ok I see. Thanks, the temp fix worked.<|||||>I was pinged, but I'm not sure why. Is this because this is related to weights tying ?
Anything I can do to help ?<|||||>@Narsil I don't know the technical details but the situation is that I found some models that were broken by presumably the weight tying, apparently this was also known as being related to marian or something like that.
ydshieh provided me with a workaround patch in huggingface that fixes the issue, but he doesn't know if that's going to make it into official releases.
The other alternative is that the affected model owners need to update their model weights.
Also, The creator of the model responded on twitter saying he wanted to fix the model so I think he might come to this issue. I'm not sure to what extent he knows english though.<|||||>@Narsil It's because this issue is shown with `pipeline` (that's why Amy pin you in the first place), but the root cause is the tie weights in `from_pretrained`.
No, you don't need to be involved :-)<|||||>The creator responded on twitter and said he'll try to fix the model: https://twitter.com/voleneko/status/1677545104037539841
by the way, as for the general issue of incompatibilities between versions, do you guys know if this is also the reason why tortoise-tts doesn't seem to work after 4.30.* also, or is that a separate issue?<|||||>@Disastorm
Could you provide a link to `tortoise-tts` (It is a HF hub repo. right?)
So far we only see this issue on (a few ) marian model (checkpoints). But it might affect a few other model classes. <|||||>I've used tortoise's own library, but inside the library they reference the huggingface repo https://huggingface.co/jbetker/tortoise-tts-v2 .
So I don't know if it's an issue with their library or not, but it does break starting from 4.30 while it works on 4.29.2.
Here is an issue from their github: https://github.com/neonbjb/tortoise-tts/issues/472
some kind of state dictionary errors related to gpt2 or something.
Here is their issue where they commit the solution ( forcing transformers==4.29.2 ): https://github.com/neonbjb/tortoise-tts/pull/508
<|||||>Hi @Disastorm
It would be super great if you could take a look at which model is no longer working in 4.30 ๐ <|||||>I really don't know that much about this stuff, but from what I can tell, the tortoise library uses the .pth models here (I'm not really sure what .pth models represent): https://huggingface.co/jbetker/tortoise-tts-v2/tree/main/.models
The specific file that has the above error is the autoregressive.pth.
The .pth file is being loaded by a custom torch.nn.module called UnifiedVoice in the tortoise repo.
This module inside of it has a huggingface GPT2Model inside of it that is initialized here https://github.com/neonbjb/tortoise-tts/blob/82724cca5427ddf1570256e616d56b0ebb93e668/tortoise/models/autoregressive.py#L231C45-L231C45
I don't know how the torch.nn.modules work but I believe in the end what may be happening is that this UnifiedVoice module is using the GPT2Model to "load_state_dict" on the autoregressive.pth file and thats where the difference between transformers 4.29.2 and 4.30.* is.<|||||>Tortoise is broken for 4.31.0 as well.
https://github.com/rsxdalv/tts-generation-webui/issues/106
https://github.com/neonbjb/tortoise-tts/issues/480<|||||>@rsxdalv
If you can **translate** the issue in `tortoise` to a code snippet that only involves `transformers` stuff, we are more than happy to take a look and help. We don't really know how `tortoise` things work, like `UnifiedVoice `, `autoregressive.pth` and what's the checkpoint being used.<|||||>> @rsxdalv
>
> If you can **translate** the issue in `tortoise` to a code snippet that only involves `transformers` stuff, we are more than happy to take a look and help. We don't really know how `tortoise` things work, like `UnifiedVoice `, `autoregressive.pth` and what's the checkpoint being used.
@sanchit-gandhi Just wanted to ask if perhaps you know the answer to this before I dig into it. |
transformers | 24,656 | open | discontinuity learning rate while resume from checkpoint | ### System Info
transformers 4.30.2
pytorch 2.0.1
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I use DeepSpeed stage 3 and the Hugging Face Trainer to resume from a past checkpoint (which finished running step 1000). My warm-up steps are 2000 and my total number of training epochs is 1. But when I resume from the checkpoint, the learning rate starts from scratch. I expect it to start from the learning rate at step 1000.
### Expected behavior
Thanks | 07-05-2023 03:32:53 | 07-05-2023 03:32:53 | Hi @jiangix-paper, thanks for raising this issue.
Without a code snippet that we can use to reproduce the issue on our end, more information about the running environment, e.g. deepspeed version and hardware (run `transformers-cli env` in the terminal and copy-paste the output), and more details about what's observed (specific numbers / outputs), it's not possible for us to help you. <|||||>@amyeroberts Sorry for the incomplete details. My deepspeed config file is as follows:

The deepspeed version is 0.9.0
Run "transformers-cli env", the output are as follows:

My training arguments are as follows:

First, I run the following code to get a deepspeed saved model:

The saved model files are as follows:


The loss are as follows:

But when I resume from the saved checkpoint using trainer.train(resume_from_checkpoint="xxx"), I expect the learning rate to continue from step 10 (1.4999e-05) and the loss to continue from there (10.4141). Instead, I found the learning rate starts from scratch.

Finally, I loaded the "zero_pp_rank_0_mp_rank_00_model_states.pt" in checkpoint 10 and found that the lr_scheduler entry is None. Although I do not define the lr_scheduler in the deepspeed config file, I do define it in the training arguments. Why is the lr_scheduler not saved?
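For reference, this is roughly how I inspected the checkpoint file (the path below is illustrative; the actual folder layout may differ):
```python
import torch

# load the DeepSpeed rank-0 model states file from the saved checkpoint
state = torch.load(
    "checkpoint-10/global_step10/zero_pp_rank_0_mp_rank_00_model_states.pt",  # illustrative path
    map_location="cpu",
)
print(sorted(state.keys()))
print(state["lr_scheduler"])  # prints None here, even though lr_scheduler_type is set to "cosine"
```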
Thanks a lot. If any other details are missing, please contact me.<|||||>Can you help me please? Thanks a lot. @ydshieh <|||||>@jiangix-paper I am not familiar with deepspeed, but I can tag someone on the team.
However, please don't upload screenshots as code snippets. Use text format (with good formatting too) so we can copy-paste.
Otherwise, consider using a Colab notebook.<|||||>Sorry for that. I will paste my code in text format.
My deepspeed config is :
```
{
"bf16": {
"enabled": "auto"
},
"zero_optimization": {
"stage": 3,
"overlap_comm": true,
"contiguous_gradients": true,
"sub_group_size": 1e9,
"reduce_bucket_size": "auto",
"stage3_prefetch_bucket_size": "auto",
"stage3_param_persistence_threshold": "auto",
"stage3_max_live_parameters": 1e9,
"stage3_max_reuse_distance": 1e9,
"stage3_gather_16bit_weights_on_model_save": true
},
"gradient_accumulation_steps": "auto",
"gradient_clipping": "auto",
"steps_per_print": 1,
"train_batch_size": "auto",
"train_micro_batch_size_per_gpu": "auto",
"wall_clock_breakdown": false
}
```
The training args are:
```
run_cmd="torchrun --master_addr localhost --nnodes 1 --nproc_per_node 8 --master_port 9001 \
pretrain.py \
--deepspeed ${deepspeed_config_file} \
--config_name ${llama_path} \
--tokenizer_name_or_path ${llama_path} \
--validation_split_percentage 0.000001 \
--per_device_train_batch_size 4 \
--per_device_eval_batch_size 4 \
--do_train \
--seed 2023 \
--num_train_epochs 1 \
--lr_scheduler_type cosine \
--learning_rate 0.00015 \
--max_grad_norm 1.0 \
--weight_decay 0.1 \
--warmup_ratio 0.01 \
--logging_strategy steps \
--logging_steps 1 \
--save_strategy steps \
--save_total_limit 100 \
--save_steps 1000 \
--bf16 True \
--tf32 True \
--optim adamw_apex_fused \
--adam_beta1 0.9 \
--adam_beta2 0.95 \
--report_to tensorboard \
--evaluation_strategy no \
--gradient_accumulation_steps 1 \
--preprocessing_num_workers 100 \
--block_size 2048 \
--output_dir ${output_dir} \
--overwrite_output_dir \
--ddp_timeout 360000 \
--logging_first_step True \
--torch_dtype bfloat16 \
--gradient_checkpointing True \
--ddp_find_unused_parameters False"
```
The pretrain.py code is:
```
trainer = Trainer(
model=model,
args=training_args,
train_dataset=train_dataset if training_args.do_train else None,
eval_dataset=eval_dataset if training_args.do_eval else None,
tokenizer=tokenizer,
data_collator=fault_tolerance_data_collator,
compute_metrics=compute_metrics if training_args.do_eval and not is_torch_tpu_available() else None,
preprocess_logits_for_metrics=preprocess_logits_for_metrics
if training_args.do_eval and not is_torch_tpu_available()
else None,
)
rank0_print('Start Training')
if training_args.do_train:
checkpoint = None
if training_args.resume_from_checkpoint is not None:
checkpoint = training_args.resume_from_checkpoint
elif last_checkpoint is not None:
checkpoint = last_checkpoint
train_result = trainer.train(resume_from_checkpoint=checkpoint)
trainer.save_model()
trainer.save_state()
```
Can you help me by tagging someone on your team? @ydshieh Thanks a lot<|||||>@jiangix-paper Thank you for updating.
- `pretrain.py` is not self-contained. Please include the necessary import statements and all the variable definitions that are used
- `${llama_path}` is missing: please specify it.
- datasets seem to be missing<|||||>But looking at
```
if training_args.resume_from_checkpoint is not None:
checkpoint = training_args.resume_from_checkpoint
elif last_checkpoint is not None:
checkpoint = last_checkpoint
train_result = trainer.train(resume_from_checkpoint=checkpoint)
```
Have you verified that `checkpoint` passed to `trainer.train` has the desired value?<|||||>> But looking at
>
> ```
> if training_args.resume_from_checkpoint is not None:
> checkpoint = training_args.resume_from_checkpoint
> elif last_checkpoint is not None:
> checkpoint = last_checkpoint
> train_result = trainer.train(resume_from_checkpoint=checkpoint)
> ```
>
> Have you verified that `checkpoint` passed to `trainer.train` has the desired value?
I have checked the checkpoint, and I find the lr_scheduler in checkpoint is None. But I specified lr_scheduler_type in the parameter settings as 'cosine'ใI do not know why it is not saved.<|||||>Nice! Would you like to fill more missing info. so we can take a look ๐ .
Probably this issue is not even with DeepSpeed (?) |
transformers | 24,655 | open | Add a mechanism to transform the forward pass on Flax models | ### Feature request
There should be some way to apply function transformations to Flax models, while not losing the ability to use things like generation utilities.
### Motivation
JAX's main idea is "composable transformations", but currently there's no good way to apply transformations to Flax models. Currently, to apply `my_cool_transformation` to a model, one needs to do something like:
```python
@my_cool_transformation
def wrapper(params, *args, **kwargs):
return model(*args, params=params, **kwargs)
```
This works fine for training loops and so on, but there doesn't seem to be a way to do this and still be able to use `.generate()`. The reason this would be beneficial is that one can implement things like quantization and LoRA as function transformations, so it would be cool to not lose generation support when doing so.
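For concreteness, a toy version of such a transformation could look like this (just a sketch; a real quantization or LoRA transform would of course do more than cast the parameters):
```python
import jax
import jax.numpy as jnp

def my_cool_transformation(fn):
    # toy transformation: run the wrapped forward pass with float params cast to bfloat16
    def transformed(params, *args, **kwargs):
        bf16_params = jax.tree_util.tree_map(
            lambda p: p.astype(jnp.bfloat16) if jnp.issubdtype(p.dtype, jnp.floating) else p,
            params,
        )
        return fn(bf16_params, *args, **kwargs)
    return transformed
```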
### Your contribution
I'd be willing to make a PR, but I think this would probably require some modification to the HuggingFace base classes for Flax models. | 07-05-2023 03:06:48 | 07-05-2023 03:06:48 | cc @gante @sanchit-gandhi <|||||>Hey @davisyoshida - that's a great point, would you want to add a composable transformation on top of the `transformers` Flax model (a standard Python class object), or the Flax nn.Module?
If it's the latter (which I believe is the more typical use case), you can first extract the Flax module from the Flax model:
```python
model = FlaxGPT2ForCausalLM.from_pretrained("gpt2") # standard python object
module = model.module #ย flax nn module
```
And then apply any composable transformations to this module (it behaves in the same way as a pure Flax module).
Note that the signature of the module is not the same as the model - this is something that will be addressed by #22499 / https://github.com/huggingface/transformers/pull/22866
You can read more about the `transformers` Flax design philosophy here: https://github.com/huggingface/transformers/tree/main/examples/research_projects/jax-projects#flax-models-in-transformers<|||||>The issue I'm getting at is that extracting the module like that makes you lose access to all the utilities on the model class. Here's an example:
```python
import jax
import jax.numpy as jnp
from transformers import AutoTokenizer, FlaxAutoModelForCausalLM
def wrap_model(model):
module = model.module
def inner(*args, **kwargs):
return module.apply(*args, **kwargs)
model.module = inner
model_name = 'gpt2'
model, params = FlaxAutoModelForCausalLM.from_pretrained(model_name, _do_init=False)
tokenizer = AutoTokenizer.from_pretrained(model_name)
params = jax.device_put(params, jax.devices('gpu')[0])
inputs = jnp.asarray(tokenizer.encode('Hello there.'))[None]
outputs = model.generate(inputs, params=params)
print(tokenizer.decode(outputs.sequences[0]))
# Already not very JAX-y since wrap_model mutates `model`
wrap_model(model)
# Crashes with error: AttributeError: can't set attribute 'module'
outputs = model.generate(inputs, params=params)
print(tokenizer.decode(outputs.sequences[0]))
# Ideal pure API:
# my_wrapped_model = wrap_the_model(model)
# my_wrapped_model.generate(inputs, params=params)
```
If you just extract the module, is there still some way to use generate?
<|||||>Sorry! `model.module` is a `property`, which just returns `model._module`:
https://github.com/huggingface/transformers/blob/495729427045c7a58e040fa9bf6df81c16f54208/src/transformers/modeling_flax_utils.py#L255-L257
You should be able to modify `model._module` to access the module class!
> If you just extract the module, is there still some way to use generate?
The generate method is tied to the `FlaxPreTrainedModel`, i.e. the Python class `model`: https://github.com/huggingface/transformers/blob/495729427045c7a58e040fa9bf6df81c16f54208/src/transformers/modeling_flax_utils.py#L158
So I don't think there's any way to generate with just the module that you extract from the model. What you could try is changing the module itself to **also** inherit from `FlaxGenerationMixin`, such that we can call `module.generate`. Note that we'll also have to implement methods like `prepare_inputs_for_generation`:
https://github.com/huggingface/transformers/blob/495729427045c7a58e040fa9bf6df81c16f54208/src/transformers/models/gpt2/modeling_flax_gpt2.py#L745
And `update_inputs_for_generation` for this to work:
https://github.com/huggingface/transformers/blob/495729427045c7a58e040fa9bf6df81c16f54208/src/transformers/models/gpt2/modeling_flax_gpt2.py#L766
Although I'm not sure whether this is possible since Flax modules are just data classes, so you'll have to experiment and see.
I think the easiest would be to define your `wrap_model` function such that it extracts the `_module`, then applies the composition as required, and finally sets the `model._module` attribute again (despite being not super JAX-y, I think this is the easiest way)<|||||>> and finally sets the model._module attribute again
Ah right I remember running into this when I tried to make generation from quantized models work. The problem is that it's expected that `module` be a proper Flax module, not just a function. Assigning to `_module` in my code above leads to this:
```python
transformers/models/gpt2/modeling_flax_gpt2.py", line 451, in init_cache
init_variables = self.module.init(
AttributeError: 'function' object has no attribute 'init'
```
You might think to try wrapping `_module`'s `__call__` method like this:
```python
def wrap_model(model):
call_fn = model.module.__call__
def inner(*args, **kwargs):
return call_fn(*args, **kwargs)
model._module.__call__ = inner
```
But using this (again in the original code I posted), gives the following:
```python
File "/home/davis/data/venvs/jax/lib/python3.10/site-packages/transformers/models/gpt2/modeling_flax_gpt2.py", line 749, in prepare_inputs_for_generation
past_key_values = self.init_cache(batch_size, max_length)
File "/home/davis/data/venvs/jax/lib/python3.10/site-packages/transformers/models/gpt2/modeling_flax_gpt2.py", line 451, in init_cache
init_variables = self.module.init(
line 8, in inner
return call_fn(*args, **kwargs)
File "/home/davis/data/venvs/jax/lib/python3.10/site-packages/transformers/models/gpt2/modeling_flax_gpt2.py", line 703, in __call__
outputs = self.transformer(
AttributeError: "FlaxGPT2LMHeadModule" object has no attribute "transformer". If "transformer" is defined in '.setup()', remember these fields are only accessible from inside 'init' or 'apply'.
```
So none of these methods seem to work. I think this should definitely be possible without needing to re-implement methods like `prepare_inputs_for_generation` which are already implemented on the model class.<|||||>Okay I think I figured out the right thing to do, you have to wrap the module's `apply()` method. I think that requires enough indirection that maybe providing a utility for it would be helpful, and it shouldn't be too hard to do.<|||||>Would you like to contribute such a utility @davisyoshida? Or update the docstrings with a note on how this could be done? Would be a nice addition to make it easier to build on top of `transformers` for JAX/Flax models <|||||>Is overwriting `apply()` on the module actually what you guys would like to do as the recommended solution? I think something a bit cleaner might be adding indirection in between the module and places where the model calls it. That way custom behavior could be inserted without needing to modify the Flax object. I'm not sure exactly what the best way to do that would be though.<|||||>What was the kind of utility you had in mind? Not sure I fully follow from your previous comment how this would look other than wrapping the `apply`? https://github.com/huggingface/transformers/issues/24655#issuecomment-1626189281
Perhaps we could go through one or two proposed solutions and discuss them here before proceeding with a PR? Would be great to discuss a bit how this would look before jumping into new code<|||||>Yeah so the simplest option is just something like:
```python
def wrap_apply(model, wrapper):
model.module.apply = wrapper(model.module.apply)
```
The downside is that it has side effects (although this is probably hard to avoid with the non-functional API HF went with), but more importantly you can't get the original behavior back (maybe you just want to apply the transformation for evaluation then get back to training).
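For example, a throwaway usage sketch (assuming the `wrap_apply` above behaves as intended; the wrapper here only counts forward passes, but it could equally apply a quantization-style transform):
```python
call_count = {"n": 0}

def counting_wrapper(apply_fn):
    # wraps the module's apply while keeping its signature unchanged
    def wrapped(*args, **kwargs):
        call_count["n"] += 1
        return apply_fn(*args, **kwargs)
    return wrapped

wrap_apply(model, counting_wrapper)
out = model.generate(inputs, params=params)  # generation now routes through the wrapped apply
print(call_count["n"])  # number of forward passes generate() triggered
```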
Another option would be something like:
```python
# On the model:
def set_apply_wrapper(self, wrapper=None):
self._apply_wrapper = wrapper
@property
def module(self):
# this proxy should wrap self._module and call
# wrapper(self._module.apply) whenever apply is accessed
return some_proxy_object
```
This way if you want to restore the model to its original state you can just set the wrapper to `None`.
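A rough sketch of what such a proxy could look like (names made up, purely illustrative):
```python
class ApplyProxy:
    """Forwards attribute access to the wrapped module, but hands back a
    wrapped version of `apply` when a wrapper has been set."""

    def __init__(self, module, wrapper=None):
        self._wrapped_module = module
        self._wrapper = wrapper

    def __getattr__(self, name):
        attr = getattr(self._wrapped_module, name)
        if name == "apply" and self._wrapper is not None:
            return self._wrapper(attr)
        return attr
```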
I think a more ambitious option which is (IMO) more in line with JAX's philosophy, would be to factor the generation utilities out into pure functions which accept whatever arguments they need (e.g. a callable which maps (params, *args, past_cache) -> logits, and one which initializes the cache), then relegate the mixins to just calling those external functions appropriately.
This would let people use the generation utilities much more flexibly.
<|||||>Thanks for the clear explanation - happy to proceed with a PR for design 2 if that works for you short-term? We can then assess how much additional benefit we'd get from a full JAX generate re-factor, since this would be a rather large undertaking as you've outlined.<|||||>Sounds good, I'm willing to put something together. It might be a month or two since I'm pretty slammed atm.<|||||>Perfect! We can also run it by the Flax authors since they're interested in having Transformers' Flax models work more seamlessly with the JAX/Flax libraries |
transformers | 24,654 | open | add CFG for .generate() | This commit [implements CFG](https://github.com/huggingface/transformers/issues/24536)
Fixes #24536 (I did not touch MusicGen)
Hope you enjoy it!
@sanchit-gandhi
@gante | 07-05-2023 01:09:54 | 07-05-2023 01:09:54 | @Vermeille -- @sanchit-gandhi raises good points about the attention mask and taking the first item of the batch in the unconditional logits. As it stands, it will only work with batch size = 1, and our logits processors should be flexible wrt batch size :) <|||||>good catch with the batch size! As for the attention mask, could you guide me to a working solution with that? I'm quite unfamiliar with huggingface tbh.<|||||>Tests are on the way.<|||||>All right. we only need to address use_cache / attention_mask.
* use_cache: currently, the forward passes take care of automatically appending to the negative prompt. I don't think such a thing happens with use_cache=False so I gotta do the concat myself. probably meaning I have to make two branches based on the value of use_cache?
* attention_mask: Does it even make sense then to read out.logits[:, -1]? is -1 a valid index if that position has an attention_mask of 0 due to padding? If so, then I will concat a valid id to padding and the attention_mask will be something like [1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1], won't that screw up positional encoding with the "empty" slots?
Basically, I think .generate() had to answer the same questions so you guys will be able to answer them quite easily. Also I will need your guidance as for the API design to integrate this seamlessly.<|||||>@gante I think we're good. The failure looks totally unrelated.<|||||>Indeed. no more failed test.<|||||>@Vermeille
> * use_cache: currently, the forward passes take care of automatically appending to the negative prompt. I don't think such a thing happens with use_cache=False so I gotta do the concat myself. probably meaning I have to make two branches based on the value of use_cache?
As I've replied in the dedicated thread, don't worry about the uncached case :) Make sure an exception is thrown, though!
EDIT: I see that you've handled the uncached case. In that case, since you've already written the code, you can leave it be :)
> * attention_mask: Does it even make sense then to read out.logits[:, -1]? is -1 a valid index if that position has an attention_mask of 0 due to padding? If so, then I will concat a valid id to padding and the attention_mask will be something like [1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1], won't that screw up positional encoding with the "empty" slots?
That's a non-issue: `.generate()` must always be used with left-padding, so you won't run into the case of picking a padded token with `-1` indexing ๐
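A quick illustration of the left-padding point (using the GPT-2 tokenizer only as an example):
```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
tok.pad_token = tok.eos_token
tok.padding_side = "left"  # generation expects left padding
batch = tok(["short prompt", "a somewhat longer prompt here"], padding=True, return_tensors="pt")
print(batch.input_ids[:, -1])  # the last position holds a real token for every row
print(batch.attention_mask)    # padding (zeros) only appears on the left
```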
<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24654). All of your documentation changes will be reflected on that endpoint.<|||||>Following @MPKonst remarks:
- the CFG Rescale "technique" has been removed as it is just a different parameterization of the guidance scale
- The final log_softmax has been removed, which corresponds to how the calculations were performed for the benchmarks anyway. It was introduced as later stage for the GPT4All experiments, as part of the CFG Rescale integration, and it seems it was not a good idea. <|||||>That's weird, the tests have been written for a while but did not show up in the PR. They do now.
I also addressed your latest comments.<|||||>I answered some comments but will finish the PR next week. I'm unavailable until then. Sorry for the delay.<|||||>@gante I need you to answer about the `model_kwargs` validation before I can submit a new version of the PR<|||||>That would be an amazing feature. Thanks for working on this @Vermeille
Fingers crossed it will get reviewed and accepted soon<|||||>+1 for this PR. I hope that it can be merged soon.<|||||>@Vermeille answered in the thread!
LMK if there is any other decision I can help with -- and tag me when you think the PR is in a finalized state, for a quick check and approval โ
<|||||>@gante looks like we're good now :)<|||||>(@sgugger this one possibly did not get through your notifications, gently pinging :) )<|||||>@Vermeille would you be able to retouch the tests? We can merge right after that change :) |
transformers | 24,653 | closed | Llama/GPTNeoX: add RoPE scaling | # What does this PR do?
This is an experimental PR for discussion, so we can decide whether to add this pattern.
## Context
In the past week, there have been several developments about scaling RoPE (Rotary Position Embeddings, i.e. Llama's position embeddings) so as to be able to extrapolate beyond 2048 tokens. Without any scaling and/or finetuning, the perplexity quickly explodes when we go beyond 2048 tokens. Here's the sequence of RoPE scaling improvements, announced mostly on Reddit:
1. Linear scaling -- Simply divide the position index by a scaling factor. Needs fine-tuning to observe the best results. Discussed in [this lmsys blog post](https://lmsys.org/blog/2023-06-29-longchat/). Credits to the reddit user `/u/kaiokendev`.
2. NTK-aware scaling -- proposed in [this reddit thread](https://www.reddit.com/r/LocalLLaMA/comments/14lz7j5/ntkaware_scaled_rope_allows_llama_models_to_have/). Scaling the RoPE Fourier space linearly is not optimal to evenly distribute information, so this can be seen as a improved linear scaling. Works okay without fine-tuning, but seems to benefit from it. Credits to the reddit user `/u/bloc97`. EDIT: following the comments in this thread, this technique will not be added!
3. Dynamic NTK scaling -- proposed in [this reddit thread](https://www.reddit.com/r/LocalLLaMA/comments/14mrgpr/dynamically_scaled_rope_further_increases/). It's a form of NTK-aware scaling that a) [works quite well without fine-tuning](https://preview.redd.it/2qdj7itsb39b1.png?width=662&format=png&auto=webp&v=enabled&s=f9b2f044f59fbad5ad51fefacda0b61f724f12f1); b) doesn't degrade the performance if the model is used with short sequences; c) gracefully scales to long sequences, under a fixed parameterization. Credits to the reddit user `/u/emozilla`.
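As a rough, self-contained illustration of how the linear and dynamic NTK strategies above differ (a sketch of the idea, not the exact code added in this PR):
```python
import torch

def rope_inv_freq(base, dim):
    # standard RoPE inverse frequencies: 1 / base^(2i / dim)
    return 1.0 / (base ** (torch.arange(0, dim, 2).float() / dim))

dim, base, max_pos, factor, seq_len = 128, 10000.0, 2048, 2.0, 4096

# linear scaling: keep the frequencies, divide the position index by the factor
positions = torch.arange(seq_len).float() / factor
linear_freqs = torch.outer(positions, rope_inv_freq(base, dim))

# dynamic NTK scaling: keep the positions, grow the base once the sequence
# exceeds the original training length (parameterization from the reddit proposal)
if seq_len > max_pos:
    base = base * ((factor * seq_len / max_pos) - (factor - 1)) ** (dim / (dim - 2))
dynamic_freqs = torch.outer(torch.arange(seq_len).float(), rope_inv_freq(base, dim))
```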
## Changes in the PR
The goal of this PR is to debate whether we want to include RoPE scaling support, with working code as reference. The field is evolving quite fast, so I've added it in a way that lets us quickly add new scaling strategies and keep surfing the wave ๐ Of course, the implementation itself is up for discussion! (An alternative implementation would be to have separate classes for the scalable RoPEs)
Pros:
- Flexible implementation that allows adding new scaling methods in minutes;
- Works quite well with pre-trained models (see example below), through dynamic NTK scaling;
- Supports strategies that are compatible with fine-tuning (it is unclear whether dynamic NTK works well with fine-tuning, and [it seems like Linear scaling is better after fine-tuning](https://www.reddit.com/r/LocalLLaMA/comments/14ojd7s/summary_post_for_higher_context_sizes_for_this/))
Cons:
- `rope_scaling` is a dictionary input, which is somewhat undesirable;
- additional if/else branches in RoPE
## Example
Consider the following prompt from a paper transcript, containing ~6k tokens:
<details>
<summary> prompt built from the transcript of https://arxiv.org/abs/2306.15595 </summary>
```py
prompt = '''
You are given this machine learning research paper, please read it carefully and answer the follow up question.
=== BEGIN ===
2306.15595v2 [cs.CL] 28 Jun 2023
arXiv
EXTENDING CONTEXT WINDOW OF LARGE LAN-
GUAGE MODELS VIA POSITION INTERPOLATION
Shouyuan Chen Sherman Wong Liangjian Chen Yuandong Tian
Meta Platforms Inc.
{chenshouyuan, shermanwong, cli, yuandong}@meta . com
1 INTRODUCTION
Large language models (LLMs) typically come with a pre-defined context window size. For exam-
ple, inputs to LLaMA models (Touvron et al., 2023) must be fewer than 2048 tokens. This pre-set
context window limit is frequently exceeded in applications such as conducting long conversations,
summarizing long documents, or executing long-term planning. For these applications, LLMs with
longer context windows are preferred. However, training an LLM from scratch with long context
windows requires significant investments. This naturally leads to a question: Can we extend the
context window of an existing pre-trained LLM?
One straightforward approach is to fine-tune an existing pre-trained Transformer with a longer con-
text window. However, empirically, we found that models trained this way adapt to long context
windows very slowly. After training for more than 10000 batches, the effective context window
saw a minimal increase, moving from 2048 to 2560 (Table 4). This suggests that such method is
inefficient for extending to substantially longer context windows.
While certain techniques such as ALiBi (Press et al., 2022) and LeX (Sun et al., 2022) enable length
extrapolation of Transformers, i.e. train on short context windows and inference on longer ones,
many existing pre-trained LLMs, including LLaMA (Touvron et al., 2023), use positional encodings
that have weak extrapolation properties (e.g., RoPE (Su et al., 2021)). Therefore, the applicability
of these techniques for extending the context window sizes of such LLMs remains limited.
In this work, we introduce Position Interpolation to enable context window extensions for certain
existing pre-trained LLMs, including LLaMA. The key idea is, instead of extrapolation, we directly
down-scale the position indices so that the maximum position index matches the previous context
window limit in the pre-training stage. See Figure 1 for an illustration. In other words, to accom-
modate more input tokens, we interpolate the position encodings at neighboring integer positions,
utilizing the fact that position encodings can be applied on non-integer positions, as opposed to
extrapolating outside the trained positions, which may lead to catastrophic values. We verify our
approach theoretically, by showing that the interpolated attention score has a much smaller upper
bound (~ 600x smaller in LLaMA 7B setting) than the extrapolated one, and is thus much more
stable. Therefore, interpolated position encodings are easier for the model to adapt.
Empirically, we found that Position Interpolation is highly effective and efficient, requiring only a
very short period of fine-tuning for the model to fully adapt to greatly extended context windows.
We present experimental results for extending the context window to up to 32768 from the initial
2048 across 7B to 65B LLaMA models using Position Interpolation. Our results show that
1. Position Interpolation can easily enable very long context windows (e.g. 32768), requiring
only fine-tuning for 1000 steps on the Pile (Gao et al., 2020) to achieve a good quality.
The cost of fine-tuning is negligible compared to the pre-training costs. This confirms
our hypothesis that it is relatively easy for the models to adapt to interpolated position
encodings.
2. Position Interpolation generates strong models that can effectively make use of much ex-
tended context window. We show that models extended by Position Interpolation enjoy
significant perplexity gains from greatly extended context windows for text modeling, and
we show that the perplexity reduces graceful with the enlargement of context windows.
We also applied Position Interpolation in a long text summarization task, and demonstrate
competitive performances.
3. Position Interpolation preserves model quality relatively well for tasks within its original
context window sizes. We present a variety of evaluation results for the extended LLaMA
models on the original LLaMA benchmark. Compared with original LLaMA models, the
extended LLLaM A models saw a minor degradation on several standard benchmarks within
a 2048 token limit.
Our results highlight the innate ability of Transformer models to โextrapolate to sequence lengths
longer than the ones encountered during trainingโ as hypothesized in the seminal work of Vaswani
et al. (2017). We reaffirm this hypothesis and suggest that the previously known weakness of ex-
trapolating to longer sequences for language modeling (Press et al., 2022) may be due to direct
extrapolation of positional encodings and it can be largely mitigated by interpolating position en-
codings instead.
Concurrent work. Right before our release, we are informed with a concurrent blogpost (Super-
HOT kaiokendev (2023)) that also interpolates positional encoding in RoPE to extend the context
window from 2K to 8K. Recently, open source community picks it up in Reddit post ! and Github
Issues 2, which shows that fine-tuning with LoRA (Hu et al., 2021) also seems to work well. Our
paper shows a full fine-tuning with up to 65B model work well with Position Interpolation, and we
also give theoretical explanations why interpolation achieves much more stable results than extrap-
olation, by showing that the upper bound of interplated attention score is much lower than that of
extrapolated ones.
2 METHOD
2.1 BACKGROUND: ROTARY POSITION EMBEDDING (ROPE)
Transformer models require explicit positional information to be injected, typically in the form of
positional encodings, to represent the order of inputs. We consider Rotary Position Embedding
(ROPE) (Su et al., 2021), which is the position encoding used in the LLLaMA model (Touvron et al.,
2023). Given a position index m โฌ [0, ยข) and an embedding vector x := [zg, 71,..., 241], Where
d is the dimension of the attention head, RoPE defines a vector-valued complex function f{x, m) as
follows
Using RoPE, the self-attention score
is only dependent on relative position m โ 7 through trigonometric functions. Here q and k are the
query and key vector for a specific attention head. At each layer, RoPE is applied on both query and
key embeddings for computing attention scores.
2.2 DIRECT EXTRAPOLATION
While the attention score in RoPE only depends on the relative positions, which is what we want,
its extrapolation performance is not great . In particular, when directly extending to larger context
windows unseen in the training, the perplexity may shoot up to very high numbers (i.e., > 10%),
comparable to untrained models.
Ideally, we want to see the model trained on a context window of size L = 2048 to still work
reasonably well on longer context window, but may not have the capability to leverage information
that appears beyond L. For example, to answer a question located at 3000, the model trained on
maximal window size of I = 2048 cannot leverage evidences provided at location 0, but still
can leverage the evidences provided at location 2900. In contrast, in reality we see catastrophic
behaviors, i.e., question at location 3000 cannot be answered correctly, even if the evidences are
located at location 2900.
What is the reason behind? How could this happen if the attention score a,,,โ,, decays as the relative
distance |m โ n/| increases, according to Section 3.4.3 of (Su et al., 2021), and content from very
far distances should not matter that much? It turns out that the upper bound derived in Section 3.4.3
of (Su et al., 2021) may be too loose: while it indeed decays with respect to |m โ nl, the bound
can still be quite large (i.e., the bound can be critically depends on the magnitude of v;) and thus
vacuous. In fact, if we treat all trigonometric functions as basis functions (i.e, ยข;(s) := #93), and
think about Eqn. 2 as basis expansion as the following:
where s is the positional span between a query and a key and h; := (ga; + igaj+1){k2j โ tk2j+1)
are complex coefficients depending on q and k (here the definition of h; is exactly the same as the
definition of k; in Sec 3.4.3 in RoPE (Su et al., 2021)). Now the the issue becomes clear: as shown
in Fig. 2, a, can be small in magnitude in the range of [0, 2048], but gives huge values out of the
region. The underlying reason is that the trigonometric family {ยข;} (with sufficiently large d) is
a universal approximator and can fit any arbitrary functions. Therefore, for a, there always exist
coefficients {h;} (i.e. key and query) that corresponds to small function values in [0, 2048] but
much larger in regions beyond.
2.3 PROPOSED APPROACH: POSITION INTERPOLATION (PI)
In Fig. 2, thanks to the smoothness of bases functions ยข; interpolation is much more stable and will
not lead to wild values. Therefore, instead of extrapolate the attention score in Eqn. 3 to s > L,
how about we define an attention score a{s) = a(Ls/Lโ) where Lโ is the longer context window?
Formally, we replace RoPE f by {โ defined as follows
We call this transformation on the position encoding Position Interpolation. In this step, we reduce
position indices from [0, L') to [0, L) to match the original range of indices before computing RoPE.
Consequently, as inputs to RoPE, the maximum relative distance between any two tokens has been
reduced from Iโ to L. Since we align the ranges of position indices and relative distances before
and after extension, we mitigate the effect on attention score computation due to context window
extensions, which can allow the model easier to adapt. To further demonstrate this is the case, in the
following theorem, we show that the interpolated attention score is well-behaved:
While there is no close form for B(s) := 4/21 |Ag41(s)|, numerically it is at least larger than d, and for many positional difference s, B(s) is much larger than d
(check Appendix B for the plot). Therefore, the interpolation bound is at least 2 - 294.73 ~ 600 x
smaller than the extrapolation bound, and thus the interpolated attention score is much more stable
than extrapolated one.
Notably, our method of rescaling of position indices does not introduce extra weight, or modify
the model architecture in any way. This makes it attractive in practical applications, since most
infrastructure and optimization for the original model can be reused after the extension.
Fine-tuning. We can further fine-tune the interpolated model using the next token prediction task
with interpolated position encodings on the extended context window size using a pre-training cor-
pus such as the Pile (Gao et al., 2020). In the next section, we show that our fine-tuning process
only needs tens to hundreds thousands of examples. We also find that the result of the fine-tuning
is not sensitive to the choice of examples. The reason may be that the model is only adapting to the
new context window during the fine-tuning phase, starting from a good initialization, as opposed to
acquiring new knowledge.
Other ways to reduce interpolation/extrapolation bound. From the expression of the interpola-
tion (Eqn. 5) and extrapolation bound (Eqn. 8), a common term is max; ||, which is the maximal
magnitude of query/key products. If we enforce a regularization on || during LLM training, it is
possible that the catastrophic extrapolation error can be mitigated or even resolved. In fact, if we
apply ridge regression with proper regularization to fit a curve in Fig. 2, the magnitude of extrapo-
lated a(s) when s > L can be comparable to that within [0, L]. To our knowledge, we are not aware
of existing LLM pre-training techniques that leverage this regularization and will leave it for future
work.
3 EXPERIMENTS
We show Position Interpolation can effectively extend context window up to 32 times of the original
size, and such extension can be done with only several hundreds of training steps. We show the
resulting models are strong LLMs with fully effective long context windows. We demonstrate its
performance in a number of tasks including language modeling, passkey retrieval, and long doc-
ument summarization. We also present benchmark results of the extended models on the original
LLaMA evaluation benchmarks.
3.1 SETUP
Model Variants. We extended the pre-trained 7B, 13B, 33B and 65B LLaMA models (Touvron
et al., 2023) to various context window of sizes up to 32768, using either direct fine-tuning or
Position Interpoloation method. Except for rescaling the position indices for models extended with
Position Interpolation, we did not modify LLaMA model architectures (Touvron et al., 2023) in any
ways.
Training Procedure. We fine-tune all model variants using the next token prediction objective. We
use AdamW (Loshchilov & Hutter, 2019) with 5; = 0.9 and 2 = 0.95. We use a linear learning
rate warmup of 20 steps starting from 10% of the maximum learning rate. For 7B and 13B models,
we set the learning rate to 2 x 1075 and for 33B and 65B models we set the learning rate to 1072. We
set the weight decay to zero. For extending 7B, 13B and 33B models to the 8192 context window
size, we use 32 A100 GPUs and 64 global batch size. For all other cases we use 128 A100 GPUs and
128 global batch size. We note that the main need of using more GPUs is memory limitation during
fine-tuning, and it is possible to use fewer GPUs in certain cases. We train all models using PyTorch
(Paszke et al., 2019) with Fully Sharded Data Parallel (Zhao et al., 2023) and Flash Attention (Dao
et al., 2022).
If not specified otherwise, for the Position Interpolation method, we fine-tune the models for 1000
steps. For the direct fine-tuning method, we use 10000 steps. We primarily fine-tune using the Pile
training dataset (Gao et al., 2020). In Section 3.4 we also compared fine-tuning performance on the
RedPajama dataset (Computer, 2023).
3.2 LONG SEQUENCE LANGUAGE MODELING
We evaluate the long sequence language modeling performance of our extended models and base-
lines on two datasets: book corpus (PG-19) (Rae et al., 2020) and cleaned Arxiv Math proof-pile
dataset (Azerbayev et al., 2022).
We use the test splits of PG19 (Rae et al., 2020) and proof-pile (Azerbayev et al., 2022). For PG19,
we use the whole test split consisting of 100 documents. For the proof-pile dataset, we use a random
subsample of 128 documents with at least 32768 SentencePiece (Kudo & Richardson, 2018) tokens
and truncate to the first 32768 tokens for each test document. We evaluate perplexity at various
context window size by using a sliding window approach following Press et al. (2022) with stride
S = 256.
In Table 1 and Table 2, we report the perplexity results for our models and baselines on the datasets.
From the results, we found that models extended with our method enjoy a significantly improved
perplexity from longer context window sizes. By increasing the context window size from 2048 to
16384, we observed -0.28 and -0.5 reductions of perplexity for extending LLaMA 7B models on
both datasets, -0.27 and -0.48 reductions for extending LL.aMA 13B models, and -0.14 and -0.42
reductions for extending LLaMA 33B models. For LLaMA 65B models, we observed -0.12 and
-0.3 reductions of perplexity by extending to the 8192 context window size.
In general, we observed a consistent trend of our models achieving better perplexity with longer
context windows. This indicates our models can effectively make use of the longer context windows
to better predict next tokens in language modeling tasks. Moreover, we found this trend extends to
32768 window size without diminishing on the PG19 dataset for LLaMA 7B and 13B models. This
indicates that our method may enable extension to even longer context windows.
In contrast, we observed that models extended via the direct fine-tuning method has shown regres-
sion (up to +0.48) or minor improvement (up to -0.12) on the perplexity at longer context windows.
This indicates that models extended this way have limited capability of making use of context win-
dows longer than their pre-trained settings.
We saw a minor degradation of the perplexity on the original context window of 2048 for our ex-
tended models in some cases. For example, on the Proof-pile dataset, we saw a degradation ranging
from 0.01 to 0.05 across all models with extended with Position Interpolation. A small degradation
of performance within original evaluation context window is expected since Position Interpolation
forces position encodings in original context window to reside in a much narrower region, which
may negatively affect the language modelโs performance. We present more benchmark results on
the original context window size in Section 3.4.
In Table 3 we report the relationship between perplexity and the number of fine-tuning steps for
LLaMA 7B model extending to 8192 and 16384 context window sizes using Position Interpolation
evaluated on the PG19 dataset. We can see without fine-tuning (at step 0) the model can exhibit
certain language modeling capability, as indicated by < 20 perplexity for extending to 8192 context
window (in contrast, the direct extrapolation method leads to > 10% perplexity). With fine-tuning,
we observed that the perplexity improves quickly. At 200 steps the models surpassed the original
modelโs perplexity on 2048 context window size, indicating the models gaining ability of effectively
using sequences longer than the pre-training settings for language modeling. At 1000 steps, we can
see the models have improved steadily and achieve a significantly better perplexity.
3.3 MEASURING EFFECTIVE CONTEXT WINDOW SIZE THROUGH PASSKEY RETRIEVAL
We study the effective context window size, i.e. the maximum distance of a token can effectively
attend to during inference, of our models after extension. To measure this, we follow a synthetic
evaluation task of passkey retrieval proposed by Mohtashami & Jaggi (2023). In this task, the models
are asked to recover a random passkey hidden in a long document. See Figure 3 for the format of
the document.
Given a language model, we estimate the upper and lower bounds of effective context windows as
follows. Suppose the random passkey is k tokens away from the end of the input. When a model
persistently fails to retrieve the correct passkey value across several independent attempts, it suggests
that the effective context window size of the model is less than k. Conversely, if a model consistently
succeeds in retrieving the correct passkey value, we deduce that the effective context window size
of the model is at least k.
We evaluate the 7B and 33B LLaMA model variants that are extended via Position Interpolation or
direct fine-tuning. For each model, we use 32 different &ยฃ uniformly spaced in the targeted context
window Lโ and run the above tests for 10 times for each k, where each time a random passkey of 5
random digits is used. In Table 4, we report kyax as a function of the number of fine-tuning steps,
We can see that models extended via Position Interpolation all successfully attain their desired ex-
tension objectives in terms of effective context window sizes, indicating by the effective context
window size reaching maximum kp, = L/, after merely fine-tuning for 200 steps, consistently
across both 7B and 33B model sizes and up to 32768 context windows. In contrast, LLLaMA models
that are extended via direct fine-tuning only saw a minimal increase of the effective context win-
dow size kay from 2048 to 2560, even after fine-tuning for more than 10000 steps, with no clear
indication of an acceleration in the increase of window size.
3.4 BENCHMARKS ON ORIGINAL CONTEXT WINDOW SIZE
We evaluate the models extended by Position Interpolation on several standard benchmark tasks
within the original context window size of 2048. The evaluation results are listed in Table 5. From
the results, we saw that models extended to 8192 produce comparable results on the original bench-
mark which is designed for a much smaller context window, with a degradation of up to 2% on
the benchmark tasks, for both 7B and 33B model sizes. Models extended to longer context win-
dows regressed more on the benchmarks, but still in reasonable ranges for most tasks. We also note
that the choice of fine-tuning datasets does not seem to lead significant difference in the benchmark
performances, which may be due to the limited number of fine-tuning steps used in our method.
The regression on benchmark tasks is consistent with our observation on perplexity regression in
Section 3.2.
3.5 LONG DOCUMENT SUMMARIZATION
In this task, we evaluate our modelsโ performance on the long document summarization task. In
particular, we consider the GovReport (Huang et al., 2021) dataset, which contains 17457 documents
for training and 972 documents for evaluation. Each document comes with a human generated
summary. We truncate all input documents to their first 15000 tokens.
We fine-tune the LL.aMA models extended with Position Interpolation with a context window of
16384. Note the rescaling of position indices are still required during this fine-tuning step. We first
Model Size Context Window Fine-tune on BoolQ PIQA Race-M Race-H WinoGrande
format the raw document using the prompt template in Figure 4, and then concatenate the prompt
with the ground-truth summary (truncate to 1000 tokens) associated with each document. We fine-
tune the model using the next token prediction task with the above setup for 10 epochs. The losses
from the input prompt proportion of training examples are excluded during our fine-tuning.
We use a generation temperature of 0.5 and top, = 0.95 as our inference parameter to generate a
summarization of each document in the test set. The final output is truncated at 1000 tokens. We
used the ROUGE-1/ROUGE-2/ROUGE-L scores (Lin, 2004) as the evaluation metrics to evaluate
the modelsโ outputs vs the ground-truth summaries.
In Table 6 we report our evaluation results. We have also included results from two baselines in
existing SCROLLS Leaderboard (Shaham et al., 2022; Ainslie et al., 2023). In general, we have
obtained competitive R1 score among other models with minimal tuning of hyper-parameters. This
result suggests our models with 16384 context window can effectively handle the long document
summarization task.
=== END OF FILE ===
'''
```
</details>
If we place it in the following example
```py
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("huggyllama/llama-7b")
model = AutoModelForCausalLM.from_pretrained(
"huggyllama/llama-7b",
load_in_8bit=True,
device_map="auto",
)
prompt = ...
question = "Question: What is the paper about?"
inputs = tokenizer(prompt + question, return_tensors="pt").to("cuda")
print(inputs.input_ids.shape)
gen_out = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.batch_decode(gen_out)[0])
```
we get:
```
Question: What is the paper about? a a a a a a a a a a a a b: a a a a a a: a in a a a a a a [(b a b. a [b [b [b. [b [b [( [( [( [( [( [( [b [(b [b [b
[b [(( [((: [(: [: [: [((((((0:(((((al:
```
However, if we add `rope_scaling={"type": "dynamic", "factor": 2.0}` in `from_pretrained`, we now get:
```
Question: What is the paper about?
Answer: The paper is about extending the context window of Transformer models.
Answer: The paper is about extending the context window of Transformer models.
Answer: The paper is about extending the context window of Transformer models.
Answer: The paper is about extending the context window of Transformer models.
Answer: The paper is about extending the context window of Transformer models.
Answer: The paper is about extending the context window of Transformer models.
Answer: The
```
Better generation parameterization can definitely be selected, but you get the idea -- with these changes, models with RoPE can handle much larger contexts right out of the box ๐ฅ
| 07-04-2023 17:36:12 | 07-04-2023 17:36:12 | (Of course, tests are missing. Proper validation of whether the feature is working as expected is also missing. I'll add them if we decide to move forward with this feature!)<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>Having this in transformers would be excellent!
I've uploaded a bunch of fp16 and GPTQ repos to HF using @jquesnelle 's [trust_remote_code Llama modelling patch](https://huggingface.co/emozilla/open_llama_7b-scaled/blob/main/modelling_llama.py) that implements RoPE using @kaiokendev's method, and I know there are quite a number of people using those already, and I've had a few requests to put out more. And even more are using RoPE outside of transformers via the ExLlama GPTQ implementation.
So there's a great deal of appetite for this feature amongst users, understandably.<|||||>Could this also be applied to [GPT-J models](https://github.com/huggingface/transformers/blob/main/src/transformers/models/gptj/modeling_gptj.py#L76)? <|||||>@versae yes, it can :) The code needs to be modified there as well, but the concept can be applied to any model with rotary position embeddings<|||||>Thank you for your work! Just letting you know that I've improved the NTK-aware method in this PR. https://github.com/jquesnelle/scaled-rope/pull/1 It decreases non-finetuned PPL even further (preliminary testing shows 4.9 -> 4.6 PPL at 8192 context size) and theoretically will significantly improve a finetune's convergence/stability compared to previous NTK-aware method.
Also because the alpha hyperparameter was difficult to use when predicting effective context size (alpha=4 was something close to ~6400 context size instead of 8192), that problem was fixed and it is now changed to a "scale" factor, which can be used the same way to the "scale" in linear RoPE scaling. (eg. for LLaMA scale=2 is 4096 and scale=4 is 8192)
I hope this improved method might be also considered one day as it is one more step towards extending context size for all LLMs! ๐<|||||>Hey @bloc97 @jquesnelle ๐
Looking at your recent PR ([this one](https://github.com/jquesnelle/scaled-rope/pull/1)) -- am I right in saying that
1. There is no way to parameterize the new class such that it is equivalent to the original NTK-aware scaling?
2. @bloc97's PR and @jquesnelle's dynamic implementation are slightly different, in the sense that @bloc97's targets a specific length (but can extrapolate) and @jquesnelle's dynamically adjusts to the maximum observed length?
3. Because @jquesnelle's implementation `base` may suddenly change due to a longer sequence, it is less friendly to fine-tune?
I'm trying to determine how to integrate and document the goodies, while keeping the diff size manageable ๐ค <|||||>The technique also seems to work out-of-the-box with GPTNeoX models ๐ฅ With the latest [commit](https://github.com/huggingface/transformers/pull/24653/commits/d7e763628dc0b4189402059bea2dd71b828ac18e), running the script below
```py
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-1.4b-deduped")
model = AutoModelForCausalLM.from_pretrained(
"EleutherAI/pythia-1.4b-deduped",
torch_dtype=torch.bfloat16,
device_map="auto",
rope_scaling={"type": "dynamic", "factor": 2.0},
)
prompt = ... # see PR header for the prompt, >5k tokens
question = "Question: What is the paper about?"
inputs = tokenizer(prompt + question, return_tensors="pt").to("cuda")
print(inputs.input_ids.shape)
gen_out = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.batch_decode(gen_out)[0])
```
gets us
```
Question: What is the paper about?
3.6 CONCLUSION
We have shown that Position Interpolation can extend the context window of pre-trained models to substantially longer context windows. We have
demonstrated that the models can be effectively extended to longer context windows, and
```
Without the `rope_scaling` argument, we get
```
Question: What is the paper about? The. The.
. The.
.
.
.
.
4.
. The. The. The. The. The. The. The.al.s. The. The.... a. The.
```
This is particularly amazing, since we're talking about a 1.4B model ๐
(cc @bloc97 @jquesnelle, you may be interested in this finding)<|||||>@amyeroberts @sgugger I'd like to request the review of you two, since this is an unusual post hoc modeling change PR.
A few key points:
1. The technique also works well with `gptneox`, see [this comment](https://github.com/huggingface/transformers/pull/24653/#issuecomment-1632954413) for a cool example on a 1.4B model
2. Adding the functionality to `gptneox` implied a minor modeling change -- the causal mask was limited to the original maximum sequence size, but there is no reason for that limitation. It's just a triangular matrix with ones.
3. Decided NOT to implement on `gptneox-japanese` and `esm`, the two other models with rotary embeddings. I'm not sure if their usage justifies the implementation cost (it takes some time to validate everything is working correctly, as there are variations in the expected usage), so I'd suggest letting demand speak for itself :)
4. RoPE scaling is parameterized by a `dict`, and not a `dataclass`. A `dataclass` would be better, as @sgugger suggested, but it complicates (de)serialization, needing extra code. I'd like to first work on the config file base class I've mentioned on slack, if you're okay with it -- it would make the new `dataclass` a ~50 line change, as opposed to a >200 one!
5. There are new scaling strategies in the works, as mentioned in the comments above, so we can quickly add them in follow up PRs if their results are superior. As it stands, we can already hack `llama` and `gptneox` beyond their original maximum length without fine-tuning ๐ฅ <|||||>(For brevity, I'll refer to the new NTK-By-Parts method as NTKv2)
NTKv2 is an improved version of NTK. We found that NTK did not perform well when fine-tuned; the reason for this was that the resulting embedding matrix still contained some extrapolated out-of-population values that model had not seen during training. Dynamic NTK hid this by continually scaling `base` so that you never actually got to this part of the embedding values.
NTKv2 is parameterized by `scale`, which has the same meaning as linear interpolation, e.g. you set it to `4` to target `8K` context length. We've found that this method, when fine-tuned, beats fine-tuned linear interpolation, which is to say it gives even better results than the recent [Meta](https://arxiv.org/abs//2306.15595) paper.
In the repository there is also now a Dynamic NTKv2, which is the same idea as the previous dynamic method, i.e. scale the hyperparameter relative to the ratio between the current context length and the model's original trained context length, while using the original embedding values when under the native pre-trained length. This also beats Dynamic NTK in the no-fine-tuning scenario.

In the above graph, [LLongMA](https://huggingface.co/conceptofmind) are the fine-tuned OpenLLaMA models we've released, trained on 1B extra tokens (v2 still in the process of training)
> 1. There is no way to parameterize the new class such that it is equivalent to the original NTK-aware scaling?
Unfortunately no. I understand these different methods can get unwieldy quickly, but NTKv2 appears to be strictly better than the original NTK -- I would potentially just advocate replacing the original NTK with this, but that could also be done in a follow-up PR too; the results that this gives you are already Very Good (TM).
FWIW the LLongMA models use the exact modeling code here to maintain compatibility without needing `trust_remote_code` if/when this PR gets merged ๐ <|||||>> Hey @bloc97 @jquesnelle ๐
>
> Looking at your recent PR ([this one](https://github.com/jquesnelle/scaled-rope/pull/1)) -- am I right in saying that
>
> 1. There is no way to parameterize the new class such that it is equivalent to the original NTK-aware scaling?
> 2. @bloc97's PR and @jquesnelle's dynamic implementation are slightly different, in the sense that @bloc97's targets a specific length (but can extrapolate) and @jquesnelle's dynamically adjusts to the maximum observed length?
> 3. Because @jquesnelle's implementation `base` may suddenly change due to a longer sequence, it is less friendly to fine-tune?
>
> I'm trying to determine how to integrate and document the goodies, while keeping the diff size manageable ๐ค
1. Unfortunately "NTK v1" was just not good for fine-tuning unless alpha is set correctly, so I think going forward people should strictly use "v2" for fine-tuning, and consider v1 to be only for inference. However, it is possible for me to parameterize the "v2" class so that you can make it equivalent to the original NTK scaling, but it would take additional effort that is probably best used elsewhere. There are only a few "NTK v1" finetunes out there.
2. For points 2 and 3, fine-tuning with the dynamic method will need additional consideration on the training side: because training happens on all the tokens at once, the dynamic method as implemented (for inference) will probably not be applied correctly. We are still working on the theoretical side of potentially training a dynamic model.<|||||>@bloc97 @jquesnelle thank you for your input -- and excited to hear about the performance of NTK-By-Parts!
Based on your comments, I will:
1 - Delete the `ntk` approach, as NTK-By-Parts is superior;
2 - Merge what I have now -- we are going to have a release early next week, so this would already be included in `v4.31`;
3 - Open a follow-up PR with NTK-By-Parts ๐ค Or, if you're interested in contributing with the technique, we'd highly appreciate it! Just let me know over the next days.
โ ๏ธ Note -- the latest commits have changed the structure of the modeling code from overloading the existing RoPE class to inheriting from the original implementation, so we don't risk ending up with a Frankenstein class as we add more strategies. The parameterization stayed nearly the same, so you probably only need to make minor adjustments to the model config files to load without `trust_remote_code`! (changed from `{"name": scaling type, "factor": scaling factor}` to {"type": scaling type, "factor": scaling factor}, as `name` is often attributed to an instance name in `transformers`)<|||||>Hi, I'm very glad to see that transformers supports RoPE scaling! Experiments show low ppl on long input sequences.
But in the current implementation, would there be a mismatch in generation? Here are my thoughts.
Since the `seq_len` increases during the generation, the base is scaled in every generation step with a different scaling factor. Since the history key_states are stored in the kv_cache, they are not scaled with the new base. The scaling only affects the state of the current token.
For example, if the input sequence is of length 2048, after generating the first token, the new input length is 2049, and we scale the base with `seq_len=2049`. After generating the second token, the new input length is 2050, and we scale the base with `seq_len=2050`. But during the generation, the kv_cache is used and thus the key_states before position 2049 are not scaled according to the new length.
Should all the key_states be scaled with the same base? Would it be a problem?
<|||||>> Since the `seq_len` increases during the generation, the base is scaled in every generation step with different scaling factor. Since the history key_states are store in the kv_cache , they are not scaled with the new base. The scaling only affects the state of the current token.
Note that this only happens in the dynamic method, not static scaling.
The RoPE embeddings are merged with the q_proj and k_proj (only k_proj is cached after the merge to be reused later), but interestingly, even if the k_proj is cached (thus not using the dynamic scaled RoPE embeddings correctly) the model works without problems. We are currently investigating the reason behind this, but the obvious main implication is that the q_proj is more important for RoPE than k_proj.
But yes, the correct way would be to cache k_proj before applying the RoPE embeddings, so the dynamic embeddings can be applied correctly each time the scale changes.
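For illustration only, here is a minimal, self-contained sketch of that idea — caching the *unrotated* keys and re-applying RoPE to the whole sequence with whatever base the dynamic rule currently gives. The tensor shapes, `scaling_factor=2`, `max_position_embeddings=2048`, and `head_dim=64` are assumptions for the example; this is not the actual transformers code path.
```python
import torch

def rotate_half(x):
    x1, x2 = x.chunk(2, dim=-1)
    return torch.cat((-x2, x1), dim=-1)

def apply_rope(x, base, positions):
    # x: (batch, heads, seq, head_dim); positions: (seq,)
    dim = x.shape[-1]
    inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2, dtype=torch.float32) / dim))
    freqs = torch.einsum("i,j->ij", positions.float(), inv_freq)
    emb = torch.cat((freqs, freqs), dim=-1)
    return x * emb.cos() + rotate_half(x) * emb.sin()

# The cache holds keys *before* RoPE, so a rescaled base can be applied
# consistently to every position instead of mixing bases across steps.
max_pos, scaling_factor, head_dim = 2048, 2.0, 64   # example values
cached_keys = torch.randn(1, 8, 2048, head_dim)     # unrotated keys from previous steps
new_key = torch.randn(1, 8, 1, head_dim)
all_keys = torch.cat([cached_keys, new_key], dim=2)

seq_len = all_keys.shape[2]
base = 10000.0
if seq_len > max_pos:  # dynamic NTK base rule discussed in this thread
    base = base * ((scaling_factor * seq_len / max_pos) - (scaling_factor - 1)) ** (head_dim / (head_dim - 2))
rotated_keys = apply_rope(all_keys, base, torch.arange(seq_len))
```
Whether that re-rotation is actually needed in practice is exactly the open question being discussed here.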
<|||||>
> Note that this only happens in the dynamic method, not static scaling. The RoPE embeddings are merged with the q_proj and k_proj (only k_proj is cached after the merge to be reused later), but interestingly, even if the k_proj is cached (thus not using the dynamic scaled RoPE embeddings correctly) the model works without problems. We are currently investigating the reason behind this, but the obvious main implication is that the q_proj is more important for RoPE than k_proj. But yes, the correct way would be to cache k_proj before applying the RoPE embeddings, so the dynamic embeddings can be applied correctly each time the scale changes.
Thank you for your comment.
We have also observed that there is no significant difference in whether key_states are stored before or after applying RoPE. However, I think more experiments are necessary to test this.
I implemented storing the KV cache before applying RoPE. Anyone interested in the implementation can refer to this [code](https://github.com/ymcui/Chinese-LLaMA-Alpaca/pull/743).
<|||||>> Hi, I'm very glad to see that transformers supports RoPE scaling! Experiments show low ppl on long input sequences.
>
> But in the current implementation, would there be a mismatch in generation? Here are my thoughts.
>
> Since the `seq_len` increases during the generation, the base is scaled in every generation step with different scaling factor. Since the history key_states are store in the kv_cache , they are not scaled with the new base. The scaling only affects the state of the current token.
>
> For example, if the input sequence is of length 2048, after generating the first token, the new input length is 2049, and we scale the base with `seq_len=2049`. After generating the second token, the new input length is 2050, and we scale the base with `seq_len=2050`. But during the generation, the kv_cache is used and thus the key_states before position 2049 are not scaled according to the new length.
>
> Should all the key_states be scaled with the same base? Would it be a problem?
I have a question similar to this. The graph showing dynamic scaling in this [reddit post](https://www.reddit.com/r/LocalLLaMA/comments/14mrgpr/dynamically_scaled_rope_further_increases/) shows that the perplexity of the model with dynamic scaling is the same as the model without scaling up to a 2048-token length (of course, this must be because the base value did not change before 2048 tokens).
This got me thinking, If I first generate with long context (say 4096 tokens), the base value would change accordingly (which is around 35000). Then, if I next generate with short context like 1024 context, the `sin_cache` and `cos_cache` will not be reverted back when the base value still 10000 hence the perplexity is raised. Should there be changed to `forward` call especially for dynamic scaled embeddings?<|||||>> This got me thinking, If I first generate with long context (say 4096 tokens), the base value would change accordingly (which is around 35000). Then, if I next generate with short context like 1024 context, the `sin_cache` and `cos_cache` will not be reverted back when the base value still 10000 hence the perplexity is raised. Should there be changed to `forward` call especially for dynamic scaled embeddings?
I have the same concern. In dynamic scaling, the sin and cos should probably not be cached. <|||||>Hi
I'm trying to test the NTK effect on my trained NeoX model, using dynamic NTK (https://www.reddit.com/r/LocalLLaMA/comments/14mrgpr/dynamically_scaled_rope_further_increases/). However, I find that the PPL oscillates. What is the reason for this?
<img width="1571" alt="image" src="https://github.com/huggingface/transformers/assets/21999339/88fdacc6-e986-4a66-8709-dd00742724ab">
Here is my test code, modified from https://huggingface.co/docs/transformers/perplexity .
```python
import json
from transformers import AutoModel, AutoTokenizer, AutoConfig
import torch
from tqdm import tqdm
import traceback

device = "cpu"
if torch.cuda.is_available():
    device = "cuda"

model_dir = "<path to the fine-tuned NeoX checkpoint>"  # placeholder: defined elsewhere in the original script

config = AutoConfig.from_pretrained(model_dir, trust_remote_code=True)
config.rope_scaling = {
    "type": "dynamic",
    "factor": 2,
}
model = AutoModel.from_pretrained(model_dir, config=config, trust_remote_code=True, torch_dtype=torch.float16)
model.eval()
model = model.to(device)
tokenizer = AutoTokenizer.from_pretrained(model_dir, trust_remote_code=True)

with torch.inference_mode():
    kv = {}
    try:
        for value in tqdm(range(32, 12000, 32)):
            max_length = stride = value
            with open("gov_report_test.json") as f:
                data = json.load(f)
            ppls = []
            for idx, line in enumerate(data):
                if idx >= 1:
                    break
                encodings = tokenizer(line, return_tensors="pt")
                seq_len = encodings.input_ids.size(1)
                nlls = []
                prev_end_loc = 0
                for begin_loc in range(0, seq_len, stride):
                    end_loc = min(begin_loc + max_length, seq_len)
                    trg_len = end_loc - prev_end_loc  # may be different from stride on last loop
                    input_ids = encodings.input_ids[:, begin_loc:end_loc].to(device)
                    target_ids = input_ids.clone()
                    target_ids[:, :-trg_len] = -100
                    outputs = model(input_ids, labels=target_ids)
                    # loss is calculated using CrossEntropyLoss which averages over valid labels
                    # N.B. the model only calculates loss over trg_len - 1 labels, because it internally shifts the labels
                    # to the left by 1.
                    neg_log_likelihood = outputs.loss
                    nlls.append(neg_log_likelihood)
                    prev_end_loc = end_loc
                    if end_loc == seq_len:
                        break
                ppl = torch.exp(torch.stack(nlls).mean())
                ppls.append(ppl)
            total_ppl = torch.stack(ppls).mean()
            kv[value] = total_ppl.item()
            print(value, total_ppl.item())
    except Exception as e:
        print(e)
        print(value, seq_len)
        print(traceback.format_exc())
```<|||||>@guozhiyao Nothing immediately comes to mind, it could be even a model "feature" (looking at the plot for the original model, which also has the periodicity).
Would you be able to a) run the same script for LLaMA and b) repeat your experiment using the script @jquesnelle used ([this one](https://github.com/jquesnelle/scaled-rope/blob/master/eval/perplexity.py))? a) should rule out model-specific issues and b) should rule out code-specific issues.
<|||||>> @guozhiyao Nothing immediately comes to mind, it could be even a model "feature" (looking at the plot for the original model, which also has the periodicity).
>
> Would you be able to a) run the same script for LLaMA and b) repeat your experiment using the script @jquesnelle used ([this one](https://github.com/jquesnelle/scaled-rope/blob/master/eval/perplexity.py))? a) should rule out model-specific issues and b) should rule out code-specific issues.
@gante Thanks a lot. It is solved by using the code.<|||||>> > This got me thinking, If I first generate with long context (say 4096 tokens), the base value would change accordingly (which is around 35000). Then, if I next generate with short context like 1024 context, the `sin_cache` and `cos_cache` will not be reverted back when the base value still 10000 hence the perplexity is raised. Should there be changed to `forward` call especially for dynamic scaled embeddings?
>
> I have the same concern. In the dynamic scaling, the sin and os may should not be cached
@airaria I had the same problem: not only `cos` and `sin`, but `inv_freq` should also not be cached. The `_set_cos_sin_cache` of `GPTNeoXDynamicNTKScalingRotaryEmbedding` can be changed to the following form, but the efficiency is not optimized.
```python
def _set_cos_sin_cache(self, seq_len, device):
    self.max_seq_len_cached = 0

    base = self.base
    if seq_len > self.max_position_embeddings:
        base = self.base * (
            (self.scaling_factor * seq_len / self.max_position_embeddings) - (self.scaling_factor - 1)
        ) ** (self.dim / (self.dim - 2))

    inv_freq = 1.0 / (base ** (torch.arange(0, self.dim, 2).float().to(device) / self.dim))
    self.register_buffer("inv_freq", inv_freq)

    t = torch.arange(seq_len, device=device, dtype=self.inv_freq.dtype)

    freqs = torch.einsum("i,j->ij", t, self.inv_freq)
    # Different from paper, but it uses a different permutation in order to obtain the same calculation
    emb = torch.cat((freqs, freqs), dim=-1)
    self.cos_cached = emb.cos()[None, None, :, :]
    self.sin_cached = emb.sin()[None, None, :, :]
```<|||||>> > > This got me thinking, If I first generate with long context (say 4096 tokens), the base value would change accordingly (which is around 35000). Then, if I next generate with short context like 1024 context, the `sin_cache` and `cos_cache` will not be reverted back when the base value still 10000 hence the perplexity is raised. Should there be changed to `forward` call especially for dynamic scaled embeddings?
> >
> >
> > I have the same concern. In the dynamic scaling, the sin and os may should not be cached
>
> @airaria I had the same problem, not only `cos` and `sin`, `inv_freq` also don't cache. The `_set_cos_sin_cache` of `GPTNeoXDynamicNTKScalingRotaryEmbedding` can be changed to the following form, but the efficiency is not optimized.
>
> ```
> def _set_cos_sin_cache(self, seq_len, device):
> self.max_seq_len_cached = 0
>
> base = self.base
> if seq_len > self.max_position_embeddings:
> base = self.base * (
> (self.scaling_factor * seq_len / self.max_position_embeddings) - (self.scaling_factor - 1)
> ) ** (self.dim / (self.dim - 2))
>
> inv_freq = 1.0 / (base ** (torch.arange(0, self.dim, 2).float().to(device) / self.dim))
> self.register_buffer("inv_freq", inv_freq)
>
> t = torch.arange(seq_len, device=device, dtype=self.inv_freq.dtype)
>
> freqs = torch.einsum("i,j->ij", t, self.inv_freq)
> # Different from paper, but it uses a different permutation in order to obtain the same calculation
> emb = torch.cat((freqs, freqs), dim=-1)
> self.cos_cached = emb.cos()[None, None, :, :]
> self.sin_cached = emb.sin()[None, None, :, :]
> ```
There is a precision difference between the `inv_freq` here and the `inv_freq` defined in `__init__`, and the reason has not been found. In order to ensure the same performance as the original when `seq_len <= self.max_position_embeddings`, it can only be modified to this form.
```python
def _set_cos_sin_cache(self, seq_len, device):
    self.max_seq_len_cached = 0

    if seq_len > self.max_position_embeddings:
        base = self.base * (
            (self.scaling_factor * seq_len / self.max_position_embeddings) - (self.scaling_factor - 1)
        ) ** (self.dim / (self.dim - 2))
        inv_freq = 1.0 / (base ** (torch.arange(0, self.dim, 2).float().to(device) / self.dim))
    else:
        inv_freq = self.inv_freq

    t = torch.arange(max(seq_len, self.max_position_embeddings), device=device, dtype=inv_freq.dtype)

    freqs = torch.einsum("i,j->ij", t, inv_freq)
    # Different from paper, but it uses a different permutation in order to obtain the same calculation
    emb = torch.cat((freqs, freqs), dim=-1)
    self.cos_cached = emb.cos()[None, None, :, :]
    self.sin_cached = emb.sin()[None, None, :, :]
``` |
transformers | 24,652 | open | fixing name position_embeddings to object_queries | # What does this PR do?
This PR refers to #19833, and it just updates some variable/docstring names. Quoting the issue: the paper mentions that the `position_embeddings` argument of the cross-attention layer is actually the input embeddings called `object queries`, and the `key_value_position_embeddings` is referred to as `spatial_position_embeddings`.
Reopening PR #23091
This PR is limited to DETR model.
### Notes
This is my first contribution, so I'm happy to adjust anything in this PR. I ran all tests and style checks, and everything passed except for one:
`make fixup`. I got the following output:

Reading the output, I assume it is about other files using classes from modeling_detr. I'll wait for updates. I will also wait for review regarding doc updates or more guidance.
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
https://github.com/huggingface/transformers/issues/19833
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@NielsRogge
@amyeroberts
| 07-04-2023 17:22:54 | 07-04-2023 17:22:54 | @Lorenzobattistela For the repo consistency and quality checks, you'll need to run `make fix-copies` and then `make style` and push any changes made <|||||>@amyeroberts Done, just updated with the changes for repo consistency and quality. I don't know why, but testing pipelines and torch tests are failling within the installation step (but I did not changed anything related to it), and the test_worflow also failed just for torch. I'll wait for next instructions. Thanks!<|||||>@Lorenzobattistela hmmmm, interesting. Could you try rebasing on main?
Some of the tests are failing because of the changes in this PR: https://app.circleci.com/pipelines/github/huggingface/transformers/67974/workflows/6a69bd9f-d35a-4964-868b-14fdd921d813/jobs/850696
Once these are resolved, ping me again and I can review :)<|||||>@amyeroberts Sorry for bothering, but I'm having a hard time with the circleCi testing. So, I'm having problems on repo consistency (as you mentioned before), but if I do run the script `make fix-copies` it change other models files (3 of them), and I think this would be scaping the Issue scope.
About the tests, I'm getting the following output:
```
FAILED tests/models/detr/test_modeling_detr.py::DetrModelTest::test_attention_outputs - RuntimeError: The size of tensor a (12) must match the size of tensor b (49) at non-singleton dimension 1
FAILED tests/models/detr/test_modeling_detr.py::DetrModelTest::test_determinism - RuntimeError: The size of tensor a (12) must match the size of tensor b (49) at non-singleton dimension 1
FAILED tests/models/detr/test_modeling_detr.py::DetrModelTest::test_detr_model - RuntimeError: The size of tensor a (12) must match the size of tensor b (49) at non-singleton dimension 1
FAILED tests/models/detr/test_modeling_detr.py::DetrModelTest::test_detr_no_timm_backbone - RuntimeError: The size of tensor a (12) must match the size of tensor b (49) at non-singleton dimension 1
FAILED tests/models/detr/test_modeling_detr.py::DetrModelTest::test_detr_object_detection_head_model - RuntimeError: The size of tensor a (12) must match the size of tensor b (49) at non-singleton dimension 1
FAILED tests/models/detr/test_modeling_detr.py::DetrModelTest::test_different_timm_backbone - RuntimeError: The size of tensor a (12) must match the size of tensor b (49) at non-singleton dimension 1
FAILED tests/models/detr/test_modeling_detr.py::DetrModelTest::test_feed_forward_chunking - RuntimeError: The size of tensor a (12) must match the size of tensor b (49) at non-singleton dimension 1
FAILED tests/models/detr/test_modeling_detr.py::DetrModelTest::test_greyscale_images - RuntimeError: The size of tensor a (12) must match the size of tensor b (49) at non-singleton dimension 1
FAILED tests/models/detr/test_modeling_detr.py::DetrModelTest::test_hidden_states_output - RuntimeError: The size of tensor a (12) must match the size of tensor b (49) at non-singleton dimension 1
FAILED tests/models/detr/test_modeling_detr.py::DetrModelTest::test_retain_grad_hidden_states_attentions - RuntimeError: The size of tensor a (12) must match the size of tensor b (49) at non-singleton dimension 1
FAILED tests/models/detr/test_modeling_detr.py::DetrModelTest::test_save_load - RuntimeError: The size of tensor a (12) must match the size of tensor b (49) at non-singleton dimension 1
FAILED tests/models/detr/test_modeling_detr.py::DetrModelTest::test_training - RuntimeError: The size of tensor a (12) must match the size of tensor b (49) at non-singleton dimension 1
=== 12 failed, 1419 passed, 2461 skipped, 144 warnings in 163.96s (0:02:43)
```
The funny thing is that I did not changed anything related to tensor sizes, since it was just naming convention<|||||>@Lorenzobattistela No worries, you're not bothering at all :)
> if I do run the script make fix-copies it change other models files (3 of them), and I think this would be scaping the Issue scope.
It's OK, we do want the changes made by `make fix-copies` included in this PR. `make fix-copies` makes sure that changes to the code are propagated across to all part of the codebase where the logic has been copied without the tedium or riskiness of doing it manually. This allows us to keep the one file per model pattern in the library.
> The funny thing is that I did not changed anything related to tensor sizes, since it was just naming convention
Hmmm, funny. It might be that there's a var somewhere still needing it's name changed, or it could be how the model's being called in the tests. I'd suggest picking just one test and run that with the debugger to find where the issue is coming from i.e.
```
pytest tests/models/detr/test_modeling_detr.py::DetrModelTest::test_attention_outputs --pdb
```
and comparing the tensor shapes with and without the changes in this PR to track where they're coming from.
<|||||>@amyeroberts Got it working! It was a problem with `make fix-copies`, so some other files had to change to keep consistency and pass up the tests. Now it's all set!<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24652). All of your documentation changes will be reflected on that endpoint.<|||||>@amyeroberts finished doing what was discussed. I think we can also think about refactoring and add it as a function, something like `check_kwargs()` , idk.
Because it was mostly duplicated across all files. What do you think about it?
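For what it's worth, a purely hypothetical sketch of such a helper — `check_kwargs` does not exist in transformers, this is only what the idea above could look like:
```python
import warnings

def check_kwargs(kwargs, old_name, new_name):
    """Map a deprecated kwarg to its new name, instead of repeating the same
    deprecation block in every model file touched by the rename."""
    if old_name in kwargs:
        if new_name in kwargs:
            raise ValueError(f"Cannot specify both `{old_name}` and `{new_name}`.")
        warnings.warn(f"`{old_name}` is deprecated, please use `{new_name}` instead.", FutureWarning)
        kwargs[new_name] = kwargs.pop(old_name)
    return kwargs
```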
Weird, the error on CI has nothing to do with the files changed; it's on another model.
transformers | 24,651 | closed | Update image_question_answering.py | In this modified version, the main changes are as follows:
The `encode` method now accepts a list of images and questions, and it returns a `DataLoader` object that batches the encoded inputs. This enables batch processing of multiple image and question pairs.
The `forward` method processes the inputs in batches using a `DataLoader` object. Each batch is sent to the device and processed by the model. The outputs are collected and concatenated along the batch dimension.
The `decode` method processes the outputs for each example in the batch and returns a list of answers.
The `description` and `inputs` sections are updated to reflect the changes and mention that the inputs should be provided as a list.
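As an illustration of the batched encode/forward/decode flow described above, a rough sketch is shown below. The checkpoint, auto classes, and function names are assumptions chosen for the example — the actual tool code in this PR may look different.
```python
import torch
from torch.utils.data import DataLoader, Dataset
from transformers import AutoProcessor, AutoModelForVisualQuestionAnswering

class EncodedVQADataset(Dataset):
    def __init__(self, encodings):
        self.encodings = encodings
    def __len__(self):
        return self.encodings["input_ids"].shape[0]
    def __getitem__(self, idx):
        return {k: v[idx] for k, v in self.encodings.items()}

checkpoint = "dandelin/vilt-b32-finetuned-vqa"  # assumption: a VQA checkpoint used only for illustration
processor = AutoProcessor.from_pretrained(checkpoint)
model = AutoModelForVisualQuestionAnswering.from_pretrained(checkpoint)

def answer_questions(images, questions, batch_size=8):
    # encode: lists of images/questions -> one padded batch, wrapped in a DataLoader
    encodings = processor(images=images, text=questions, padding=True, return_tensors="pt")
    loader = DataLoader(EncodedVQADataset(encodings), batch_size=batch_size)
    answers = []
    with torch.no_grad():
        for batch in loader:                 # forward: process batch by batch
            logits = model(**batch).logits
            for idx in logits.argmax(-1):    # decode: one answer per example
                answers.append(model.config.id2label[idx.item()])
    return answers
```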
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| 07-04-2023 17:20:11 | 07-04-2023 17:20:11 | cc @LysandreJik <|||||>Hey @mzamini92! We want these tools to be the simplest possible so that all agents can use them appropriately.
I recommend pushing your tool to the Hub instead, and replacing the existing ImQA tool with yours as explained in this guide: https://huggingface.co/docs/transformers/custom_tools#replacing-existing-tools |
transformers | 24,650 | closed | CLIP pooling is not compatible with adding new tokens | ### System Info
Feature request (Duplicate of #21029)
For textual inversion in diffusers, we are adding tokens that have a higher token id than the eos token. So when we get CLIP embeddings for textual inversion tokens, we need to change the pooling so it gets the eos token and not the argmax token.
Motivation
This is an issue that should be fixed as the clip embeddings won't work once we add more tokens to the tokenizer. This hasn't been a huge issue so far because most models use the hidden layers directly but [the new paper on SDXL](https://github.com/Stability-AI/generative-models/blob/main/assets/sdxl_report.pdf) also mentions using the pooled output now.
@ArthurZucker @younesbelkada
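To make the mismatch concrete, here is a small illustrative sketch. The first pooling line mirrors the argmax-based selection described above; the second is just one possible eos-based alternative, and the eos id and newly added token id are assumptions for the example.
```python
import torch

eos_token_id = 49407                                          # CLIP's usual <|endoftext|> id (assumption)
input_ids = torch.tensor([[49406, 320, 1125, 49407, 49408]])  # 49408 = a newly added (textual inversion) token
last_hidden_state = torch.randn(1, 5, 512)
batch_idx = torch.arange(input_ids.shape[0])

# argmax pooling: picks the position of the *largest* token id -> the new token (49408), not the eos
pooled_argmax = last_hidden_state[batch_idx, input_ids.argmax(dim=-1)]

# eos-based pooling: picks the first position equal to eos_token_id
pooled_eos = last_hidden_state[batch_idx, (input_ids == eos_token_id).int().argmax(dim=-1)]
```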
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. Add new token to tokenizer
2. Encode tokens with CLIPTextModel
3. Get pooled output
### Expected behavior
Pooled output considers the added token ids vs eos id instead of argmax | 07-04-2023 17:10:22 | 07-04-2023 17:10:22 | Are you taking this on @ydshieh ? ๐ <|||||>Not yet started, but self-assign so I won't forget. <|||||>@okaris
If I understand correctly, the goal is to use the (fixed) eos id, rather than using `argmax`. Is this right?<|||||>@patrickvonplaten We need your experiste on `diffusers` for this issue ๐ Thank you.<|||||>@ydshieh correct, because newly added tokens to the tokenizer take ids bigger than the eos id. The tokenizer config has the correct information but might not be readily available in the text model<|||||>Thanks @okaris
Yes, the (text) model config file has `eos_token_id` being `2` which is even worse situation. We can probably try to use `vocab_size - 1`, but I want to have some discussion with the team first to take action.<|||||>Oh yes, the text encoder has the wrong one. Length - 1 might not work if you are adding more tokens. The easy solution is to expose the eos id in the encoder model so it can be changed from the outside<|||||>This is indeed a problem here - @patil-suraj can you take a look here? <|||||>Once we add more tokens (so `vocab_size` will change), I think one way to replace the `argmax` is to use `vocab_size - config.num_extra_tokens - 1`, where `num_extra_tokens` is a new attribute added to the config (default value would be `0`).
Happy to see if there are better/clean solution ideas.<|||||>when adding a token, the input embeddings are (and must be) resized. that length could also be used<|||||>I might be wrong, but isn't the new length of the (resized) embedding layer just the new `vocab_size`?<|||||>Thanks a lot for the issue @okaris !
IMO, updating the `eos_token_id` in the config would be better than adding a new `config` attribute. As far as I can tell, this should not break anything because `config` is never really used for tokenization, the `config` is used to get the `eos_token_id` if we are doing generation, but the CLIP model is not used for generation and also the current `config.eos_token_id` is incorrect, so updating the `eos_token_id` should be safe. We can send a mass PR on the hub to update this (cc @patrickvonplaten )
What do you think @ydshieh @patrickvonplaten ?<|||||>> the CLIP model is not used for generation --> not break anything
Sound correct! Let's have some word from the core maintainers (@amyeroberts and @sgugger) however.
<|||||>Even if we don't use the `eos_token_id` from the config doesn't mean nobody else does!
That being said, as @patil-suraj points out, the `eos_token_id` is wrong. I don't think it could be meaningfully or correctly used anywhere so happy for it to be updated.
It makes me think we should add some tests to make sure the model and tokenizer mappings are aligned when added to the library - at least as an integration check for an example checkpoint. <|||||>FYI: the inconsistency between config and the tokenier/processor is (one of) the main reason we **had** trouble in pipeline testing (using tiny models). I had make some extra work to avoid this problem (in the context of creating tiny models)<|||||>Fixed by #24777
@okaris
Let me know if this works well in your case, thank you!<|||||>Thanks @ydshieh looks like it will work for me as well. |
transformers | 24,649 | closed | Update warning messages reffering to post_process_object_detection | # What does this PR do?
Noticed that `post_process` will be replaced by `post_process_object_detection` in v5.
However, the (old) `post_process` does not threshold the bounding box scores (it has the same effect if using `threshold=0`).
But the (new) `post_process_object_detection` has a threshold parameter which, depending on the model, has different default values.
When this change occurs, users will have fewer boxes detected if the default threshold of `post_process_object_detection` is not `0`.
This PR includes:
1) Mentioning the threshold in the existing warning messages of vision models, so that when users stop calling `post_process` and start calling `post_process_object_detection`, their results will not be affected.
2) Changing `owlvit.md`, as it was not making use of the (new) `post_process_object_detection`.
I searched for other .md files and docstrings that will be affected when `post_process` stops working, and noticed that only `owlvit.md` will produce wrong results if `post_process_object_detection` is not called with the correct threshold. All others (e.g. `modeling_conditional_detr.py`, `modeling_deformable_detr.py`, `modeling_deta.py`, `modeling_detr.py`, `zero_shot_object_detection.md`, etc.) already explicitly use a threshold and won't be affected.
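For reference, this is roughly how users can keep the old, un-thresholded behaviour with the new API by passing an explicit `threshold=0.0` (the checkpoint and image URL below are just examples):
```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, DetrForObjectDetection

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

processor = AutoImageProcessor.from_pretrained("facebook/detr-resnet-50")
model = DetrForObjectDetection.from_pretrained("facebook/detr-resnet-50")

inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)

target_sizes = torch.tensor([image.size[::-1]])
# threshold=0.0 keeps every predicted box, matching the deprecated `post_process`
results = processor.post_process_object_detection(outputs, threshold=0.0, target_sizes=target_sizes)[0]
```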
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@amyeroberts
| 07-04-2023 16:01:05 | 07-04-2023 16:01:05 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 24,648 | closed | Enable `conversational` pipeline for `GPTSw3Tokenizer` | # What does this PR do?
The `ConversationalPipeline` is great for easily running dialogue models, and also enables smooth interfaces in the associated Hugging Face Hub widget. These seem to require a `_build_conversation_input_ids` method on the associated tokenizer, however, which takes a `Conversation` object and encodes it into the chat format that the model was trained on.
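For context, here is a rough sketch of what such a method can look like for this model's chat format — the exact implementation in this PR may differ slightly, and the eos/bos joins and "User:"/"Bot:" prefixes follow the prompt-format discussion further down in this thread:
```python
# Sketch only, not necessarily the exact code merged in this PR.
def _build_conversation_input_ids(self, conversation) -> list:
    all_responses = [
        f"User: {text}" if is_user else f"Bot: {text}"
        for is_user, text in conversation.iter_texts()
    ]
    prompt = (
        f"{self.eos_token}{self.bos_token}"
        + f"{self.bos_token}".join(all_responses)
        + f"{self.bos_token}Bot:"
    )
    return self.encode(text=prompt)
```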
With this change, we can now easily use the GPT-SW3 models. Here's an example of asking a single question:
```python
from transformers import pipeline, Conversation
chatbot = pipeline(model="AI-Sweden-Models/gpt-sw3-20b-instruct")
conversation = chatbot(Conversation("Hvad hedder du?"))
output = conversation.generated_responses[-1]
print(output)
```
And here is an example with a never-ending multi-turn dialogue session:
```python
from transformers import pipeline, Conversation
chatbot = pipeline(model="AI-Sweden-Models/gpt-sw3-20b-instruct")
conversation = Conversation()
while True:
user_input = input('> ')
conversation.add_user_input(user_input)
conversation = chatbot(conversation)
output = conversation.generated_responses[-1]
print(output)
```
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@Narsil @ArthurZucker @YouJiacheng @ekgren | 07-04-2023 14:30:06 | 07-04-2023 14:30:06 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Hi @saattrupdan, thanks for this contribution and opening this PR.
As it stands, this isn't a change that we'd accept to be merged in. A few notes on why:
* The pipelines are a higher level abstraction than the tokenizers, and so shouldn't be imported into a tokenizer's module.
* The job of the tokenizer is to prepare raw text inputs for the model and decode its predicted tokens. `_build_conversation_input_ids` is higher level logic that belongs outside the class in e.g. a custom script.
* It's not necessary to add `load_in_4bit` to the pipeline - the model can be instantiated with `ModelClass.from_pretrained(checkpoint, load_in_4bit=True)` and then passed into the pipeline. We try to keep the number of arguments in our public APIs as small as possible.
* I think there might be a conflicting configuration, auto formatting from an IDE or different package version, but the line split changes in the PR shouldn't be there.
<|||||>> Hi @saattrupdan, thanks for this contribution and opening this PR.
Thanks for your review @amyeroberts!
> * The pipelines are a higher level abstraction than the tokenizers, and so shouldn't be imported into a tokenizer's module.
I've fixed this now, via @ArthurZucker's suggestion.
> * The job of the tokenizer is to prepare raw text inputs for the model and decode its predicted tokens. `_build_conversation_input_ids` is higher level logic that belongs outside the class in e.g. a custom script.
I'm a bit confused by this, as this method already exists for 9-10 tokenizers in the package (such as GPT2, Bloom, GPT-neox and more), and is also required by the conversational pipeline [here](https://github.com/huggingface/transformers/blob/469f4d0c29275473daf0627a0b26ec05256e47d2/src/transformers/pipelines/conversational.py#L256-L257).
> * It's not necessary to add `load_in_4bit` to the pipeline - the model can be instantiated with `ModelClass.from_pretrained(checkpoint, load_in_4bit=True)` and then passed into the pipeline. We try to keep the number of arguments in our public APIs as small as possible.
That's fair enough if that's a design goal, I've removed it now. I just liked the idea of being able to instantiate a pipeline without having to load in the model first ๐
> * I think there might be a conflicting configuration, auto formatting from an IDE or different package version, but the line split changes in the PR shouldn't be there.
Ah right, I just thought it was a mistake that an 88 character line limit wasn't enforced - I've reverted the changes back now I think!<|||||>@saattrupdan @ArthurZucker OK, my bad, I hadn't noticed the `_build_conversation_input_ids` before - happy for that to be added then :) <|||||>@ArthurZucker All formatting changes have been reversed now too ๐ <|||||>Really nice!
A quick comment from one of the developers of GPT-SW3, and the one responsible for the tokenization pipline.
Since there's a mismatch between the huggingface tokenizer and the sentencepiece tokenizer used during training, and how they treat special tokens, I'm a bit wary of this PR as it stands right now. To better match the training-procedure, each turn should be tokenized in isolation by the underlying sp_model, and joined with <bos>-tokens. This might result in the same thing, but I'm not 100% sure :sweat_smile:
<|||||>Regarding the special token issue, do you have small reproducer? I can have a look if needed! Currently working on our sentencepiece compatibility issues <|||||>> @Apsod Since there's a mismatch between the huggingface tokenizer and the sentencepiece tokenizer used during training, and how they treat special tokens, I'm a bit wary of this PR as it stands right now. To better match the training-procedure, each turn should be tokenized in isolation by the underlying sp_model, and joined with -tokens. This might result in the same thing, but I'm not 100% sure ๐
I just did some experiments to check this. The underlying sentencepiece model cannot deal with the special tokens, since these are dealt with by the `tokens_trie`, which is used in the `tokenize` method. Here's a sanity check:
```python
>>> tokenizer.tokens_trie.data
{'<': {'p': {'a': {'d': {'>': {'': 1}}}}, 's': {'>': {'': 1}}, 'u': {'n': {'k': {'>': {'': 1}}}}, '|': {'e': {'n': {'d': {'o': {'f': {'t': {'e': {'x': {'t': {'|': {'>': {'': 1}}}}}}}}}}}}}}
```
We see that it correctly deals with `<pad>`, `<s>`, `<unk>` and `<|endoftext|>` special tokens. The `encode` method uses the `encode_plus` method, which uses the `_encode_plus` method, which finally uses the `tokenize` method, so using `encode` should be fine here, I think.
Note that, in the `tokenize` method, after the special tokens have been removed using the `tokens_trie`, the underlying `_tokenize` method is used to do the actual tokenization, which is implemented in the `GPTSw3Tokenizer` as
```python
def _tokenize(self, text: str, **kwargs) -> List[str]:
text = self.preprocess_text(text)
return self.sp_model.encode(text, out_type=str)
```
If I replace the `self.encode` with `self.sp_model.encode` in the new function that's being added in this PR, then I end up with an incompatible tokenization:
```python
>>> tokenizer.sp_model.encode('<s>Hej med dig<|endoftext|>', out_type=str)
['โ<', 's', '>', 'Hej', 'โmed', 'โdig', '<', '|', 'end', 'of', 'text', '|', '>']
```
If I'm completely missing the point here, @Apsod, then please let me know ๐ <|||||>> If I replace the `self.encode` with `self.sp_model.encode` in the new function that's being added in this PR, then I end up with an incompatible tokenization:
>
> ```python
> >>> tokenizer.sp_model.encode('<s>Hej med dig<|endoftext|>', out_type=str)
> ['โ<', 's', '>', 'Hej', 'โmed', 'โdig', '<', '|', 'end', 'of', 'text', '|', '>']
> ```
This is an edge-case where the semantic discrepancy between sentencepiece and huggingface tokenization leads to different results.
If we encounter `<|endoftext|>` in text and tokenize it using sentencepiece (as was done during training), it would be tokenized as `<, |, end, of, text, |, >` and not as the special eos token, since in sentencepiece, special tokens are not textual and can never be produced by tokenizing text.
I think there are also differences in how sentencepiece treats the initial token after a special token (due to whitespace-prefix handling), which leads to a general mismatch between the tokenizers:
```
TEXT = """
<|endoftext|><s>
Hej
<s>
Hoj
""".strip()
print(tokenizer.decode(tokenizer.encode(TEXT)))
# will print out the following:
# <|endoftext|><s> Hej<s>Hoj
```
EDIT:
A simpler example of weird interactions between whitespace and special tokens:
```
TEXT = """ Hej <s>"""
print('"', TEXT, '"', sep='')
print('"', tokenizer.decode(tokenizer.encode(TEXT)), '"', sep='')
```
Results in:
```
" Hej <s>"
" Hej<s>"
```<|||||>@Apsod Thanks for the clarification. Just tried inspecting the result of using the `encode` method, and it removes some of the newline symbols. More specifically,
```python
prompt = "<|endoftext|><s>\nUser:\nJag tycker trรคd รคr fina\n<s>\nBot:\n"
```
is being tokenised as `[3, 2, 15088, 63458, 18, 3947, 1886, 7590, 377, 6173, 2, 22493, 63458, 18]`, which translates token-by-token to "<|endoftext|>\<s\>User:\nJag tycker trรคd รคr fina\<s\>Bot:\n". Notably, all newlines adjacent to a BOS token have been removed when encoded with this method.
I have been chatting to Amaru from the AI Sweden team (which might be you @Apsod? User names are always confusing!), and he said that they actually used multiple different prompts, sampled stochastically during training:
```
<eos><bos>{A}User:{B}{Query}{C}<bos>{A}Bot:{B}{Response}{C}...
A ~ ["\n", ""]
B ~ ["\n", " "]
C ~ ["\n", ""]
```
With this flexibility in mind, I propose that we change the above prompt to the following:
```python
prompt = "<|endoftext|><s>User: Jag tycker trรคd รคr fina<s>Bot: "
```
I compared the encodings of the `encode` and `sp_model.encode` methods, and they now yield equivalent tokens. Here's the code that I ran to check:
```python
all_responses_encoded = [self.sp_model.encode(response) for response in all_responses]
sp_encoded_prompt = [self.eos_token_id, self.bos_token_id]
for response in all_responses_encoded:
sp_encoded_prompt += response + [self.bos_token_id]
sp_encoded_prompt += self.sp_model.encode("Bot: ")
prompt = (
f"{self.eos_token}{self.bos_token}"
+ f"{self.bos_token}".join(all_responses)
+ f"{self.bos_token}Bot: "
)
hf_encoded_prompt = self.encode(text=prompt)
assert sp_encoded_prompt == hf_encoded_prompt
```
Another thing: I looked into the mysterious extra whitespace added during decoding, and found that it's all due to these two lines in the `GPTSw3Tokenizer.convert_tokens_to_string` method ([link](https://github.com/huggingface/transformers/blob/66a378429d0e085e4e72bc63a4147889a3b65a14/src/transformers/models/gpt_sw3/tokenization_gpt_sw3.py#L233-L234)):
```
if not prev_is_special:
    out_string += " "
```
Is there any reason for this, or should it just be removed to ensure that `tokenizer.decode(tokenizer.encode(doc)) == doc`?<|||||>Looks good to me!
The only outstanding issue then is special-token-injection, but I guess that is a more general HF-issue? <|||||>> Looks good to me! The only outstanding issue then is special-token-injection, but I guess that is a more general HF-issue?
@Apsod Great. I've changed the prompt now. I also added a TODO comment to clarify whether [these two lines](https://github.com/huggingface/transformers/blob/66a378429d0e085e4e72bc63a4147889a3b65a14/src/transformers/models/gpt_sw3/tokenization_gpt_sw3.py#L233-L234) are needed, as they break the decode(encode(doc)) == doc consistency. But that can be dealt with in another PR, if needed.<|||||>@amyeroberts @ArthurZucker I cannot seem to merge in this PR - do any of you need to re-approve it first?<|||||>@saattrupdan Yes, the branch is protected so that only certain people can merge. It also needs an approval from a core maintainer (me in this case :) )
Merging for you now. Thanks again for this contribution! <|||||>Also regarding why spaces before / after special tokens is eating in the slow version of transformers:
- `add_tokens` does not support changing `lstrip` and `rstrip` thus by default it will strip. A fix is on its way here #23909
- text after special tokens is not properly handled. This leads to addition of spaces. A fix is also on its way for T5 and Llama but should be pushed to all `spm` based models. #24622 |
transformers | 24,647 | closed | documentation_tests.txt - sort filenames alphabetically | # What does this PR do?
Reorganises the file names listed in `documentation_tests.txt` so that they are in alphabetical order. This is to address two things:
* Make it obvious where to add new files
* Make it easier to spot if certain files are missing. For example, I didn't notice until recently that modleing_imagegpt.py wasn't included (its config was).
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
| 07-04-2023 13:37:40 | 07-04-2023 13:37:40 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@ydshieh I added a quick check in `utils/check_doctest_list.py` - let me know what you think :)<|||||>Yes! Eventually, we can provide a fix option to modify the file to sort and deduplicate the lines. But again, the PR itself is complete and can be merged already. |
transformers | 24,646 | closed | TrainingArguments.report_to is not configured as documented | ### System Info
- `transformers` version: 4.30.2
- Platform: Linux-5.15.90.1-microsoft-standard-WSL2-x86_64-with-glibc2.17
- Python version: 3.8.16
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.1.0.dev20230616 (True)
- Tensorflow version (GPU?): 2.12.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Distributed?
### Who can help?
@stevhliu
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
https://github.com/wilke0818/i3_speech_emotion_recognition/ - this repo has code that exemplifies creating a Trainer using different training arguments with the default report_to value.
Generally, creating even a basic TrainingArguments instance, passing it to any Trainer instance, and then running trainer.remove_callback(WandbCallback()) will error, saying that the callback is not there. This is actually what I want; however, it goes against the documentation as written.
See documentation here: https://github.com/huggingface/transformers/blob/cd4584e3c809bb9e1392ccd3fe38b40daba5519a/src/transformers/training_args.py#L499
The actual default value here: https://github.com/huggingface/transformers/blob/cd4584e3c809bb9e1392ccd3fe38b40daba5519a/src/transformers/training_args.py#L1030
Which is then used in Trainer instantiation: https://github.com/huggingface/transformers/blob/cd4584e3c809bb9e1392ccd3fe38b40daba5519a/src/transformers/trainer.py#L539
Which finally gives us that no report_to's are used: https://github.com/huggingface/transformers/blob/cd4584e3c809bb9e1392ccd3fe38b40daba5519a/src/transformers/integrations.py#L1613
### Expected behavior
Based on the documentation I would expect that when setting up a trainer all installed Callback packages for reporting will be used unless the user specifies otherwise. | 07-04-2023 13:25:00 | 07-04-2023 13:25:00 | Hi!
The actual default value is `None`, but it is set to `all` in this case, so it still corresponds to the doc.
See
https://github.com/huggingface/transformers/blob/cd4584e3c809bb9e1392ccd3fe38b40daba5519a/src/transformers/training_args.py#L1422-L1428<|||||>Hi, I appreciate you taking the time! Not sure how I missed that piece of code. I also realized the reason my code wasn't working was related to how remove_callback works when sending an instance vs. the type. |
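For anyone landing here with the same question, two small notes in code form (sketched — adjust to your own setup):
```python
from transformers import TrainingArguments

# Option 1: turn all reporting integrations off up front.
args = TrainingArguments(output_dir="test_trainer", report_to="none")
print(args.report_to)  # []

# Option 2 (sketch): if the trainer is already built, remove the callback by *class*,
# e.g. `trainer.remove_callback(WandbCallback)` -- a freshly constructed
# `WandbCallback()` instance is not the object that was registered on the trainer.
```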
transformers | 24,645 | closed | [WIP] Add LaVIN | # What does this PR do?
Adds the LaVIN model from - https://arxiv.org/pdf/2305.15023.pdf <br>
Model description - LaVIN is a vision-language instructed model that is affordable to train (it was trained in a few hours on 8 A100 GPUs) with good performance on ScienceQA.
Fixes issue #23846
## Who can review?
Models:
@amyeroberts @ArthurZucker
**Draft** (Maintainers and reviewers can go through the PR as and when needed, I will ping the reviewers once the PR is ready. Guidance/Questions/Concerns related to the PR are always welcome.) | 07-04-2023 11:48:15 | 07-04-2023 11:48:15 | Hi @shauray8, thanks for opening this PR!
The easiest and recommended way to make a model available in `transformers` is to add the modeling code directly on the hub: https://huggingface.co/docs/transformers/custom_models
This means, once working, the model can be found and used immediately without having to go through the PR process. We find this is a lot quicker as the bar for adding code into the library is high due to the maintenance cost of every new model, and so reviews take quite a while.<|||||>Hi @amyeroberts, That makes sense, I have not seen a lot of people use this particular model. I'll make all the necessary changes and add it to the hub. But if there's anything I can help with to improve HuggingFace I'm more than happy to do it. |
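For anyone following the same route, the Hub path sketched in that guide boils down to something like this — the class names below are placeholders for the real LaVIN config/model classes:
```python
from transformers import PretrainedConfig, PreTrainedModel

class LavinConfig(PretrainedConfig):
    model_type = "lavin"

class LavinModel(PreTrainedModel):
    config_class = LavinConfig

    def __init__(self, config):
        super().__init__(config)
        # ... build the actual LaVIN modules here ...

# Register so AutoConfig/AutoModel can load the custom code from the Hub
LavinConfig.register_for_auto_class()
LavinModel.register_for_auto_class("AutoModel")
# model.push_to_hub("your-username/lavin")  # then: AutoModel.from_pretrained(..., trust_remote_code=True)
```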
transformers | 24,644 | closed | 'eos_token_id' for llama model.generate is not working | ### System Info
- `transformers` version: 4.30.2
- Platform: Linux-5.4.0-137-generic-x86_64-with-glibc2.31
- Python version: 3.10.0
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
import transformers, torch
weights_dir = "weights/recovered"
question = 'Hello, there!'
model = transformers.AutoModelForCausalLM.from_pretrained(weights_dir)
model = model.cuda()
print(model.config)
# LlamaConfig {
# "_name_or_path": "weights/recovered",
# "architectures": [
# "LlamaForCausalLM"
# ],
# "bos_token_id": 1,
# "eos_token_id": 2,
# "hidden_act": "silu",
# "hidden_size": 4096,
# "initializer_range": 0.02,
# "intermediate_size": 11008,
# "max_position_embeddings": 2048,
# "model_type": "llama",
# "num_attention_heads": 32,
# "num_hidden_layers": 32,
# "pad_token_id": 0,
# "rms_norm_eps": 1e-06,
# "tie_word_embeddings": false,
# "torch_dtype": "float32",
# "transformers_version": "4.30.2",
# "use_cache": true,
# "vocab_size": 32001
# }
tokenizer = transformers.AutoTokenizer.from_pretrained(weights_dir)
question_ids = tokenizer.encode(question + tokenizer.eos_token, return_tensors='pt')
question_ids = question_ids.cuda()
print(tokenizer.eos_token_id, tokenizer.bos_token_id, tokenizer.pad_token_id)
# 2, 1, 32000
print(question_ids)
# tensor([[ 1, 15043, 29892, 727, 29991, 829, 29879, 29958]],
#         device='cuda:0')
print(tokenizer.decode(question_ids[0]))
# <s> Hello, there!</s>
outputs = model.generate(
question_ids,
eos_token_id=2,
max_new_tokens=200,
num_beams=4,
num_return_sequences=2,
early_stopping=True
)
answer = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(answer)
# Hello, there!</s>
# Hello, there!</s>
# <s>Hello, there!</s>
```
No matter how I change the parameters of model.generate, it always ignores `</s>` as the ending token (id: 2).
In addition, `skip_special_tokens` in the tokenizer is not working either.
Where am I going wrong? Please help, many thanks!
### Expected behavior
The `model.generate` stop at the first time of `</s>` | 07-04-2023 10:57:12 | 07-04-2023 10:57:12 | cc @ArthurZucker <|||||>Hey! A few things to note:
- `LlamaTokenizerFast` (which you are using through the `AutoTokenizer` API) has been fixed here #24042, addressing the issue with special tokens being encode.
- You are not sharing any repo, so we can't reproduce potential bugs.
- `it always ignores the </s> as the ending token ` what does that mean? Does the generation not stop? Then have a look here #22794.
- `skip_special_tokens` will work if you have the correct version of LlamaTokenizer.
- If you wish to add the ending token in your prompt, set `add_eos_token` to `True`. It will be done automatically
Here is a working snippet:
```python
from transformers import LlamaTokenizer, AutoModelForCausalLM, AutoTokenizer
weights_dir = "huggyllama/llama-7b"
question = 'Hello, there!'
# if you want to add eos, set `add_eos_token=True`
tokenizer = LlamaTokenizer.from_pretrained(weights_dir, add_eos_token=True)
question_ids = tokenizer.encode(question, return_tensors='pt')
print(question_ids)
# tensor([[ 1, 15043, 29892, 727, 29991, 2]])
print( tokenizer.decode(question_ids[0], skip_special_tokens = True))
# 'Hello, there!'
# if you are not using the correct version of tokenizer, special tokens are wrong
tokenizer = AutoTokenizer.from_pretrained(weights_dir, add_eos_token=True)
print(tokenizer.is_fast)
# True
question_ids = tokenizer.encode('Hello, there!</s>', return_tensors='pt')
print(question_ids)
# tensor([[ 1, 15043, 29892, 727, 29991, 829, 29879, 29958, 2]])
question_ids = tokenizer.encode('Hello, there! </s>', return_tensors='pt')
# tensor([[ 1, 15043, 29892, 727, 29991, 2, 2]])
print(question_ids)
```<|||||>@ArthurZucker Many thanks! `add_eos_token=True` did the trick! |
transformers | 24,643 | open | "RuntimeError: 'weight' must be 2-D" training with DeepSpeed | ### System Info
- `transformers` version: 4.30.2
- Platform: Linux-5.19.0-46-generic-x86_64-with-glibc2.35
- Python version: 3.10.11
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
@pacman100 @sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
The dataset being used is my own dataset that is just a few hundred strings in a CSV file produced by pandas.
Running the following code
```Python
from transformers import GPTJForCausalLM, AutoTokenizer, Trainer, TrainingArguments, DataCollatorForLanguageModeling
import os
from torch.utils.data import Dataset
import pandas as pd
import evaluate
import numpy as np
import sklearn
import torch as nn
from transformers.trainer_pt_utils import get_parameter_names
model_name = "EleutherAI/gpt-j-6b"
d_type = "auto"
print("CUDA Available: "+ str(nn.cuda.is_available()))
print("CUDA Version: " + str(nn.version.cuda))
print("GPUs Available: "+ str(nn.cuda.device_count()))
def process_csv(filename, tknizer):
data = pd.read_csv(filename)
return tknizer(list(data["text"].values.flatten()), padding=True, truncation=True, return_tensors="pt")
tokenizer = AutoTokenizer.from_pretrained(model_name, torch_dtype=d_type)
collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)
tokenizer.pad_token = tokenizer.eos_token
class MyDataset(Dataset):
def __init__(self, tokenized_input):
self.tokenized_input = tokenized_input
def __getitem__(self, idx):
return {key: val[idx] for key, val in self.tokenized_input.items()}
def __len__(self):
return len(self.tokenized_input.input_ids)
metric = evaluate.load("accuracy")
def compute_metrics(eval_pred):
logits, labels = eval_pred
predictions = np.argmax(logits, axis=-1)
return metric.compute(predictions=predictions, references=labels)
train_data = MyDataset(process_csv("train_data.csv", tokenizer))
eval_data = MyDataset(process_csv("test_data.csv", tokenizer))
training_args = TrainingArguments(
output_dir="test_trainer",
deepspeed="deepSpeedCPU.json",
)
model = GPTJForCausalLM.from_pretrained(model_name, torch_dtype=d_type).cuda()
print("Total Memory: " + str(nn.cuda.get_device_properties(0).total_memory))
print("Reserved: " + str(nn.cuda.memory_reserved(0)))
print("Allocated: " + str(nn.cuda.memory_allocated(0)))
trainer = Trainer(
model=model,
args=training_args,
train_dataset=train_data,
eval_dataset=eval_data,
data_collator=collator,
compute_metrics=compute_metrics,
)
trainer.train()
```
using the following config file
```
{
"fp16": {
"enabled": "auto",
"loss_scale": 0,
"loss_scale_window": 1000,
"initial_scale_power": 16,
"hysteresis": 2,
"min_loss_scale": 1
},
"optimizer": {
"type": "AdamW",
"params": {
"lr": "auto",
"betas": "auto",
"eps": "auto",
"weight_decay": "auto"
}
},
"scheduler": {
"type": "WarmupLR",
"params": {
"warmup_min_lr": "auto",
"warmup_max_lr": "auto",
"warmup_num_steps": "auto"
}
},
"zero_optimization": {
"stage": 3,
"offload_optimizer": {
"device": "cpu",
"pin_memory": true
},
"offload_param": {
"device": "cpu",
"pin_memory": true
},
"overlap_comm": true,
"contiguous_gradients": true,
"sub_group_size": 1e9,
"reduce_bucket_size": "auto",
"stage3_prefetch_bucket_size": "auto",
"stage3_param_persistence_threshold": "auto",
"stage3_max_live_parameters": 1e9,
"stage3_max_reuse_distance": 1e9,
"stage3_gather_16bit_weights_on_model_save": true
},
"gradient_accumulation_steps": "auto",
"gradient_clipping": "auto",
"steps_per_print": 2000,
"train_batch_size": "auto",
"train_micro_batch_size_per_gpu": "auto",
"wall_clock_breakdown": false
}
```
Causes an error at trainer.train()
```
Traceback (most recent call last):
File "/home/augustus/ADAM/main2.py", line 82, in <module>
trainer.train()
File "/home/augustus/miniconda3/envs/adamTraining/lib/python3.10/site-packages/transformers/trainer.py", line 1645, in train
return inner_training_loop(
File "/home/augustus/miniconda3/envs/adamTraining/lib/python3.10/site-packages/transformers/trainer.py", line 1938, in _inner_training_loop
tr_loss_step = self.training_step(model, inputs)
File "/home/augustus/miniconda3/envs/adamTraining/lib/python3.10/site-packages/transformers/trainer.py", line 2759, in training_step
loss = self.compute_loss(model, inputs)
File "/home/augustus/miniconda3/envs/adamTraining/lib/python3.10/site-packages/transformers/trainer.py", line 2784, in compute_loss
outputs = model(**inputs)
File "/home/augustus/miniconda3/envs/adamTraining/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/home/augustus/miniconda3/envs/adamTraining/lib/python3.10/site-packages/transformers/models/gptj/modeling_gptj.py", line 854, in forward
transformer_outputs = self.transformer(
File "/home/augustus/miniconda3/envs/adamTraining/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/home/augustus/miniconda3/envs/adamTraining/lib/python3.10/site-packages/transformers/models/gptj/modeling_gptj.py", line 634, in forward
inputs_embeds = self.wte(input_ids)
File "/home/augustus/miniconda3/envs/adamTraining/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/home/augustus/miniconda3/envs/adamTraining/lib/python3.10/site-packages/torch/nn/modules/sparse.py", line 162, in forward
return F.embedding(
File "/home/augustus/miniconda3/envs/adamTraining/lib/python3.10/site-packages/torch/nn/functional.py", line 2210, in embedding
return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: 'weight' must be 2-D
```
### Expected behavior
I would expect training to begin or a more verbose error to help fix the issue (if possible to do so from my side) | 07-04-2023 10:08:50 | 07-04-2023 10:08:50 | Hi
While waiting for @pacman100's comment, you can check the shape of `self.wte`. It would also be a good idea to double-check whether the issue happens without DeepSpeed.
```
File "/home/augustus/miniconda3/envs/adamTraining/lib/python3.10/site-packages/transformers/models/gptj/modeling_gptj.py", line 634, in forward
inputs_embeds = self.wte(input_ids)
```<|||||>The issue does not happen without deepspeed, however we are unable to train without deepspeed due to not having much in the way of system resources.<|||||>DeepSpeed version and how are you launching the script?<|||||>Deepspeed 0.9.5, just launching it with ```python3 script.py```<|||||>Thought so, please use distributed launcher such as `torchrun`, `deepspeed` or `accelerate` when using DeepSpeed/DDP/FSDP or anytime you are doing distributed training.
Please refer:
1. https://huggingface.co/docs/transformers/main_classes/deepspeed#deployment-with-multiple-gpus
2. https://huggingface.co/docs/transformers/main/en/main_classes/trainer#using-accelerate-launcher-with-trainer
<|||||>that should resolve the issue
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>I also have the same problem, also with DeepSpeed stage 3 and the Trainer. @ZizoAdam did you solve the problem? |
transformers | 24,642 | open | openlm-research/open_llama_13b_easylm cannot be downloaded | ### System Info
transformers: 4.30.2.
Python: 3.9.17
OS: MacOS Ventura 13.3.1 (a)
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Code to reproduce:
```
model_id = "openlm-research/open_llama_13b_easylm"
model_name = model_id.split("/")[1]
model = pipeline(model=model_id)
model.save_pretrained(f"./models/{model_name}")
```
### Expected behavior
I expect the model to be downloadable locally for use in downstream NLP tasks. Noteworthy is that on the [website](https://huggingface.co/openlm-research/open_llama_13b_easylm) it is indicated that 0 people downloaded this model over the past month.
With the above script, I can easily fetch other models such as `"openlm-research/open_llama_13b"`:
```
model_id = "openlm-research/open_llama_13b"
model_name = model_id.split("/")[1]
model = pipeline(model=model_id)
model.save_pretrained(f"./models/{model_name}")
``` | 07-04-2023 09:36:10 | 07-04-2023 09:36:10 | Looking at
https://huggingface.co/openlm-research/open_llama_13b_easylm/tree/main
The file doesn't seem to be a torch `.bin` file.
However, https://huggingface.co/openlm-research/open_llama_13b/tree/main has those `.bin` files.
You will have to open an issue on that Hub repo. to discuss with the repo. owner.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 24,641 | closed | AssertionError: Dynamo only supports FSDP with use_orig_params=True | ### System Info
cuda 11.7
accelerate=0.21.0.dev0
transformers=4.31.0.dev0
torch=2.0.1
python=3.8
### Who can help?
@pacman100
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
### command:
accelerate launch --config_file accelerate_config.yaml --num_machines 7 --num_processes 28 --machine_rank $NODE_RANK --main_process_ip $MASTER_ADDR --main_process_port $MASTER_PORT ./trainer.py --model_name_or_path ".." --data_path ".." --per_device_train_batch_size 24 --per_device_eval_batch_size 24 --do_train --evaluation_strategy no --output_dir outputs --learning_rate 2e-5 --num_train_epochs 4 --lr_scheduler_type cosine --warmup_ratio 0.03 --weight_decay 0.0 --logging_steps 1 --save_strategy epoch --bf16 true --tf32 true --load_best_model_at_end false --model_max_length 2048 --gradient_checkpointing true --save_total_limit 1 --model_resume_from_checkpoint false --torch_compile true
### accelerate_config.yaml
```
compute_environment: LOCAL_MACHINE
distributed_type: FSDP
downcast_bf16: 'no'
fsdp_config:
fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP
fsdp_backward_prefetch_policy: BACKWARD_PRE
fsdp_forward_prefetch: false
fsdp_offload_params: false
fsdp_sharding_strategy: 1
fsdp_state_dict_type: FULL_STATE_DICT
fsdp_sync_module_states: true
fsdp_transformer_layer_cls_to_wrap: LlamaDecoderLayer
fsdp_use_orig_params: true
main_training_function: main
num_machines: 1
num_processes: 2
mixed_precision: bf16
rdzv_backend: static
tpu_env: []
tpu_use_cluster: false
tpu_use_sudo: false
use_cpu: false
```
### basic training code
```
def train():
print("Env Variables")
env_vars = os.environ
for key, value in env_vars.items():
print(key, "=", value)
parser = transformers.HfArgumentParser(
(ModelArguments, DataArguments, TrainingArguments))
model_args, data_args, training_args = parser.parse_args_into_dataclasses()
model = transformers.LlamaForCausalLM.from_pretrained(
model_args.model_name_or_path,
cache_dir=training_args.cache_dir,
)
tokenizer = transformers.AutoTokenizer.from_pretrained(
model_args.model_name_or_path,
cache_dir=training_args.cache_dir,
model_max_length=training_args.model_max_length,
padding_side="right"
)
if tokenizer.pad_token is None:
smart_tokenizer_and_embedding_resize(
tokenizer=tokenizer,
model=model,
)
data_module = make_hf_data_module(tokenizer=tokenizer,
data_args=data_args)
trainer = Trainer(model=model,
tokenizer=tokenizer,
args=training_args,
**data_module)
if model_args.model_resume_from_checkpoint:
trainer.train(resume_from_checkpoint=model_args.model_name_or_path)
else:
trainer.train()
trainer.save_state()
safe_save_model_for_hf_trainer(trainer=trainer,
output_dir=training_args.output_dir)
```
###Stacktrace
> You can suppress this exception and fall back to eager by setting:
> torch._dynamo.config.suppress_errors = True
> self.symbolic_locals = collections.OrderedDict(
> File "/opt/conda/envs/ptca/lib/python3.8/site-packages/torch/_dynamo/symbolic_convert.py", line 1673, in <genexpr>
> self.symbolic_locals = collections.OrderedDict(
> File "/opt/conda/envs/ptca/lib/python3.8/site-packages/torch/_dynamo/symbolic_convert.py", line 1673, in <genexpr>
> VariableBuilder(
> File "/opt/conda/envs/ptca/lib/python3.8/site-packages/torch/_dynamo/variables/builder.py", line 172, in __call__
> return self._wrap(value).clone(**self.options())
> File "/opt/conda/envs/ptca/lib/python3.8/site-packages/torch/_dynamo/variables/builder.py", line 248, in _wrap
> VariableBuilder(
> File "/opt/conda/envs/ptca/lib/python3.8/site-packages/torch/_dynamo/variables/builder.py", line 172, in __call__
> output = [
> File "/opt/conda/envs/ptca/lib/python3.8/site-packages/torch/_dynamo/variables/builder.py", line 249, in <listcomp>
> return self._wrap(value).clone(**self.options())
> File "/opt/conda/envs/ptca/lib/python3.8/site-packages/torch/_dynamo/variables/builder.py", line 248, in _wrap
> VariableBuilder(self.tx, GetItemSource(self.get_source(), i))(
> File "/opt/conda/envs/ptca/lib/python3.8/site-packages/torch/_dynamo/variables/builder.py", line 172, in __call__
> output = [
> File "/opt/conda/envs/ptca/lib/python3.8/site-packages/torch/_dynamo/variables/builder.py", line 249, in <listcomp>
> return self._wrap(value).clone(**self.options())
> File "/opt/conda/envs/ptca/lib/python3.8/site-packages/torch/_dynamo/variables/builder.py", line 345, in _wrap
> VariableBuilder(self.tx, GetItemSource(self.get_source(), i))(
> File "/opt/conda/envs/ptca/lib/python3.8/site-packages/torch/_dynamo/variables/builder.py", line 172, in __call__
> assert getattr(
> AssertionError: Dynamo only supports FSDP with use_orig_params=True
>
> Set torch._dynamo.config.verbose=True for more information
### Expected behavior
torch.compile works smoothly with FSDP | 07-04-2023 04:45:05 | 07-04-2023 04:45:05 | The command involves `accelerate` and `torch_compile` + the error involves `Dynamo`.
cc @fxmarty @pacman100 (maybe?)<|||||>Thank you for the issue, the above PR fixes it. |
transformers | 24,640 | open | 'DummyOptim' object has no attribute 'step' | ### System Info
- `transformers` version: 4.31.0.dev0
- Platform: Linux-5.4.0-124-generic-x86_64-with-glibc2.31
- Python version: 3.11.3
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: yes 4*A100 80GB
### Who can help?
_No response_
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
I am trying to train a model from the given [script](https://github.com/salesforce/CodeT5/blob/main/CodeT5%2B/instruct_tune_codet5p.py) in a single node multi-GPU setting
with DeepSpeed integration and am getting the error as given below
To reproduce one can download the script from [here](https://github.com/salesforce/CodeT5/blob/main/CodeT5%2B) along with the config file and try to run it with the [CodeAlpaca dataset](https://raw.githubusercontent.com/sahil280114/codealpaca/master/data/code_alpaca_20k.json)
```
{'batch_size_per_replica': 1,
'cache_data': 'cache_data/instructions',
'data_num': -1,
'deepspeed': 'deepspeed_config.json',
'epochs': 3,
'fp16': False,
'grad_acc_steps': 16,
'instruct_data_path': 'code_alpaca_20k.json',
'load': 'codet5p-16b',
'local_rank': -1,
'log_freq': 10,
'lr': 2e-05,
'lr_warmup_steps': 30,
'max_len': 512,
'save_dir': 'saved_models/instruct_codet5p_16b',
'save_freq': 500}
==> Loaded 20022 samples
Loading checkpoint shards: 100%|██████████| 5/5 [00:25<00:00, 5.15s/it]
==> Loaded model from codet5p-16b, model size 16493680640
Para before freezing: 16493680640, trainable para: 16494M
Para after freezing: 16493680640, trainable para: 462M
Starting main loop
0%| | 0/936 [00:00<?, ?it/s]/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/torch/nn/parallel/_functions.py:68: UserWarning: Was asked to gather along dimension 0, but all input tensors were scalars; will instead unsqueeze and return a vector.
warnings.warn('Was asked to gather along dimension 0, but all '
Traceback (most recent call last):
File "/root/Custom-LLM/CodeT5/CodeT5+/instruct_tune_codet5p.py", line 212, in <module>
main(args)
File "/root/Custom-LLM/CodeT5/CodeT5+/instruct_tune_codet5p.py", line 181, in main
run_training(args, model, train_data)
File "/root/Custom-LLM/CodeT5/CodeT5+/instruct_tune_codet5p.py", line 93, in run_training
trainer.train()
File "/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/transformers/trainer.py", line 1537, in train
return inner_training_loop(
^^^^^^^^^^^^^^^^^^^^
File "/root/anaconda3/envs/datachat_env/lib/python3.11/site-packages/transformers/trainer.py", line 1881, in _inner_training_loop
self.optimizer.step()
^^^^^^^^^^^^^^^^^^^
AttributeError: 'DummyOptim' object has no attribute 'step'
```
The same script works completely fine in a single-GPU setting. But when I switch to a multi-GPU setup I get this error
### Expected behavior
Expect the training to proceed using Deepspeed as in the case of a single-GPU set up | 07-03-2023 21:36:41 | 07-03-2023 21:36:41 | cc @pacman100 <|||||>Hello, please provide a minimal reproducible example that we can directly run. Providing links to scripts and dataset doesn't help and is very involved and time taking. <|||||>Also, please provide the accelerate and DeepSpeed versions, the launch command for the minimal example and the minimal example as mentioned above.<|||||>Code:
```
"""
Finetune CodeT5+ models on instruction tuning data
You can customize your own training data by following the HF dataset format to cache it to args.cache_data
Author: Yue Wang
Date: June 2023
"""
import os
import pprint
import argparse
import numpy as np
import copy
import torch
from datasets import load_dataset, load_from_disk
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, TrainingArguments, Trainer
PROMPT_DICT = {
"prompt_input": (
"Below is an instruction that describes a task, paired with an input that provides further context. "
"Write a response that appropriately completes the request.\n\n"
"### Instruction:\n{instruction}\n\n### Input:\n{input}\n\n### Response:"
),
"prompt_no_input": (
"Below is an instruction that describes a task. "
"Write a response that appropriately completes the request.\n\n"
"### Instruction:\n{instruction}\n\n### Response:"
),
}
def get_model_size(model):
model_parameters = filter(lambda p: p.requires_grad, model.parameters())
model_size = sum([np.prod(p.size()) for p in model_parameters])
return "{}M".format(round(model_size / 1e+6))
def freeze_decoder_except_xattn_codegen(model):
print(f'Para before freezing: {model.num_parameters()}, trainable para: {get_model_size(model)}')
for param in model.decoder.parameters():
param.requires_grad = False
num_decoder_layers = model.decoder.config.num_layers
for i in range(num_decoder_layers):
each_decoder_layer = model.decoder.transformer.h[i]
if hasattr(each_decoder_layer, 'crossattention'):
for param in each_decoder_layer.crossattention.parameters():
param.requires_grad = True
each_decoder_layer.crossattention.to(torch.float32)
if hasattr(each_decoder_layer, 'alpha_xattn'):
each_decoder_layer.alpha_xattn.requires_grad = True
print(f'Para after freezing: {model.num_parameters()}, trainable para: {get_model_size(model)}')
def run_training(args, model, train_data):
print(f"Starting main loop")
training_args = TrainingArguments(
#report_to='tensorboard',
output_dir=args.save_dir,
overwrite_output_dir=False,
do_train=True,
save_strategy='epoch',
num_train_epochs=args.epochs,
per_device_train_batch_size=args.batch_size_per_replica,
gradient_accumulation_steps=args.grad_acc_steps,
learning_rate=args.lr,
weight_decay=0.0,
warmup_steps=args.lr_warmup_steps,
logging_dir=args.save_dir,
logging_first_step=True,
logging_steps=args.log_freq,
save_total_limit=2,
dataloader_drop_last=True,
dataloader_num_workers=4,
local_rank=args.local_rank,
deepspeed=args.deepspeed,
fp16=args.fp16,
)
trainer = Trainer(
model=model,
args=training_args,
train_dataset=train_data,
)
trainer.train()
if args.local_rank in [0, -1]:
final_checkpoint_dir = os.path.join(args.save_dir, "final_checkpoint")
model.save_pretrained(final_checkpoint_dir)
print(f' ==> Finish training and save to {final_checkpoint_dir}')
def load_tokenize_data(args):
# Load and tokenize data
if os.path.exists(args.cache_data):
train_data = load_from_disk(args.cache_data)
print(f' ==> Loaded {len(train_data)} samples')
return train_data
else:
datasets = load_dataset('json', data_files=args.instruct_data_path)['train']
tokenizer = AutoTokenizer.from_pretrained(args.load)
def preprocess_function(examples):
prompt_input, prompt_no_input = PROMPT_DICT["prompt_input"], PROMPT_DICT["prompt_no_input"]
source = [prompt_input.format_map({'instruction': instruct, 'input': inp}) if inp != ''
else prompt_no_input.format_map({'instruction': instruct})
for instruct, inp in zip(examples["instruction"], examples["input"])]
target = [src + output + tokenizer.eos_token for src, output in zip(source, examples["output"])]
model_inputs = tokenizer(source, max_length=args.max_len, padding="max_length", truncation=True)
labels = tokenizer(target, max_length=args.max_len, padding="max_length", truncation=True)
model_inputs["decoder_input_ids"] = copy.deepcopy(labels["input_ids"])
# changing labels: convert all tokens in the duplicate prefix prompt and the padding part to -100
eos_token_id = tokenizer.eos_token_id
for x, y in zip(model_inputs["input_ids"], labels["input_ids"]):
label_prefix_len = x.index(eos_token_id) if eos_token_id in x else len(x)
y[:label_prefix_len] = [-100] * label_prefix_len
if eos_token_id in y:
pad_len = len(y) - y.index(eos_token_id) - 1
if pad_len > 0:
y[y.index(eos_token_id) + 1:] = [-100] * pad_len
# shift labels to the right as the decoder input and add decoder start token id
decoder_start_id = tokenizer.eos_token_id
for z in model_inputs["decoder_input_ids"]:
z[1:] = z[:-1]
z[0] = decoder_start_id
model_inputs["labels"] = copy.deepcopy(labels["input_ids"])
model_inputs["decoder_attention_mask"] = labels["attention_mask"]
return model_inputs
train_data = datasets.map(
preprocess_function,
batched=True,
remove_columns=datasets.column_names,
num_proc=64,
load_from_cache_file=False,
)
print(f' ==> Loaded {len(train_data)} samples')
train_data.save_to_disk(args.cache_data)
print(f' ==> Saved to {args.cache_data}')
return train_data
def main(args):
argsdict = vars(args)
print(pprint.pformat(argsdict))
# Save command to file
with open(os.path.join(args.save_dir, "command.txt"), 'w') as f:
f.write(pprint.pformat(argsdict))
# Load and tokenize data using the tokenizer from `args.load`. If the data is already cached, load it from there.
# You can customize this function to load your own data for any Seq2Seq LM tasks.
train_data = load_tokenize_data(args)
if args.data_num != -1:
train_data = train_data.select([i for i in range(args.data_num)])
# Load model from `args.load`
model = AutoModelForSeq2SeqLM.from_pretrained(args.load, torch_dtype=torch.float16,
low_cpu_mem_usage=True, trust_remote_code=True)
print(f" ==> Loaded model from {args.load}, model size {model.num_parameters()}")
#freeze_decoder_except_xattn_codegen(model)
run_training(args, model, train_data)
if __name__ == "__main__":
parser = argparse.ArgumentParser(description="CodeT5+ instruction tuning")
parser.add_argument('--data-num', default=-1, type=int)
parser.add_argument('--max-len', default=512, type=int)
parser.add_argument('--instruct-data-path', default='code_alpaca_20k.json', type=str)
parser.add_argument('--cache-data', default='cache_data/instructions', type=str)
parser.add_argument('--load', default='Salesforce/codet5p-16b', type=str)
# Training
parser.add_argument('--epochs', default=3, type=int)
parser.add_argument('--lr', default=2e-5, type=float)
parser.add_argument('--lr-warmup-steps', default=30, type=int)
parser.add_argument('--batch-size-per-replica', default=1, type=int)
parser.add_argument('--grad-acc-steps', default=16, type=int)
parser.add_argument('--local_rank', default=-1, type=int)
parser.add_argument('--deepspeed', default=None, type=str)
parser.add_argument('--fp16', default=False, action='store_true')
# Logging and stuff
parser.add_argument('--save-dir', default="saved_models/instruct_codet5p_16b", type=str)
parser.add_argument('--log-freq', default=10, type=int)
parser.add_argument('--save-freq', default=500, type=int)
args = parser.parse_args()
os.makedirs(args.save_dir, exist_ok=True)
main(args)
```
command:
```
deepspeed CodeT5+/instruct_tune_codet5p.py --load $MODEL --save-dir $SAVE_DIR --instruct-data-path code_alpaca_2k.json --fp16 --deepspeed ~/transformers/tests/deepspeed/ds_config_zero3.json
```
Output:

- `Accelerate` version: 0.21.0.dev0
- Platform: Linux-5.4.0-125-generic-x86_64-with-glibc2.31
- Python version: 3.10.11
- Numpy version: 1.24.4
- PyTorch version (GPU?): 2.0.1 (True)
- PyTorch XPU available: False
- System RAM: 503.55 GB
- GPU type: NVIDIA A100-SXM4-80GB
- `Accelerate` default config:
Not found
-
- `transformers` version: 4.31.0.dev0
- Platform: Linux-5.4.0-125-generic-x86_64-with-glibc2.31
- Python version: 3.10.11
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
```bash
--------------------------------------------------
DeepSpeed C++/CUDA extension op report
--------------------------------------------------
NOTE: Ops not installed will be just-in-time (JIT) compiled at
runtime if needed. Op compatibility means that your system
meet the required dependencies to JIT install the op.
--------------------------------------------------
JIT compiled ops requires ninja
ninja .................. [OKAY]
--------------------------------------------------
op name ................ installed .. compatible
--------------------------------------------------
[WARNING] async_io requires the dev libaio .so object and headers but these were not found.
[WARNING] async_io: please install the libaio-dev package with apt
[WARNING] If libaio is already installed (perhaps from source), try setting the CFLAGS and LDFLAGS environment variables to where it can be found.
async_io ............... [NO] ....... [NO]
cpu_adagrad ............ [NO] ....... [OKAY]
cpu_adam ............... [NO] ....... [OKAY]
fused_adam ............. [NO] ....... [OKAY]
fused_lamb ............. [NO] ....... [OKAY]
quantizer .............. [NO] ....... [OKAY]
random_ltd ............. [NO] ....... [OKAY]
[WARNING] sparse_attn requires a torch version >= 1.5 and < 2.0 but detected 2.0
[WARNING] using untested triton version (2.0.0), only 1.0.0 is known to be compatible
sparse_attn ............ [NO] ....... [NO]
spatial_inference ...... [NO] ....... [OKAY]
transformer ............ [NO] ....... [OKAY]
stochastic_transformer . [NO] ....... [OKAY]
transformer_inference .. [NO] ....... [OKAY]
--------------------------------------------------
DeepSpeed general environment info:
torch install path ............... ['/home/sourab/miniconda3/envs/hf/lib/python3.10/site-packages/torch']
torch version .................... 2.0.1
deepspeed install path ........... ['/home/sourab/miniconda3/envs/hf/lib/python3.10/site-packages/deepspeed']
deepspeed info ................... 0.9.5, unknown, unknown
torch cuda version ............... 11.8
torch hip version ................ None
nvcc version ..................... 11.8
deepspeed wheel compiled w. ...... torch 2.0, cuda 11.8
```
Therefore, unable to reproduce your error. <|||||>I see similar issue. <|||||>I'm also having this issue!
I believe it happens when you include an `"optimizer"` as part of your Deepspeed config. E.g.
```json
"optimizer": {
"type": "AdamW",
"params": {
"lr": "auto",
"betas": "auto",
"eps": "auto",
"weight_decay": "auto",
},
}
```
Then, `optimizer = DummyOptim(params=model_parameters)`.
https://github.com/huggingface/transformers/blob/e42587f596181396e1c4b63660abf0c736b10dae/src/transformers/deepspeed.py#L282-L288
And eventually `Trainer` tries to call `optimizer.step()`. However, `DummyOptim` doesn't have a `step()` function.
https://github.com/huggingface/accelerate/blob/8514c35192ac9762920f1ab052e5cea4c0e46eeb/src/accelerate/utils/deepspeed.py#L226-L246
I'm not sure what the appropriate fix is: should we not use `DummyOptim` at all or add a `step()` function (that does nothing?) to it?
P.S. the same problem applies to `"scheduler"`.
https://github.com/huggingface/transformers/blob/e42587f596181396e1c4b63660abf0c736b10dae/src/transformers/deepspeed.py#L303-L304
Please take a look, @pacman100 -- thanks!<|||||>@apoorvkh , post the `accelerator.prepare` they should be replaced with correct optimizer and scheduler from DeepSpeed and hence should not result in any issues. As shown above, unable to reproduce it. A minimal way to reproduce it would help me deep dive<|||||>See https://github.com/huggingface/transformers/blob/main/src/transformers/trainer.py#L1653-L1656 wherein it `accelerator.prepare` internally calls `deepspeed.initialize` and replace the dummy objects with appropriate ones returned by DeepSpeed.<|||||>Thanks, that was helpful! Using that information, we found that the cause was because `ACCELERATE_USE_DEEPSPEED=true` was not already being set internally. (If you set it manually, everything works as expected.)
We unfortunately can't very easily share a minimal reproducible example, but will continue to debug soon. |
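For reference, the manual workaround looks roughly like this — a sketch only: `ACCELERATE_USE_DEEPSPEED` is the Accelerate flag mentioned above, everything else is illustrative and depends on your own training script:
```python
import os

# Assumption based on the discussion above: with this flag set before the Trainer is built,
# Accelerate treats the run as a DeepSpeed run and swaps the DummyOptim/DummyScheduler
# placeholders for real DeepSpeed objects inside accelerator.prepare.
os.environ["ACCELERATE_USE_DEEPSPEED"] = "true"

from transformers import TrainingArguments, Trainer

training_args = TrainingArguments(
    output_dir="out",                      # illustrative values
    deepspeed="deepspeed_config.json",     # config containing "optimizer"/"scheduler" blocks
)
# trainer = Trainer(model=model, args=training_args, train_dataset=train_data)
# trainer.train()
```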
transformers | 24,639 | closed | Generate: force cache with `inputs_embeds` forwarding | # What does this PR do?
Fixes the issue raised in [this comment](https://github.com/huggingface/transformers/issues/23042#issuecomment-1618513599).
The issue and the solution is described in the comment added alongside the change :) | 07-03-2023 16:17:22 | 07-03-2023 16:17:22 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 24,638 | open | attention weight clipping | ### Feature request
If the attention weight overflows, probably when using float16 during mixed precision training, clip the weight to some configurable value.
### Motivation
I'm training a `gpt2` model with automatic mixed precision via `torch.amp.autocast` and I noticed I'm running into `nan` loss values during training. I tracked the source of the `nan` to a `softmax` computation where there's a single `inf` in the input to the `softmax`. The `inf` is coming from a matrix multiply of the query and key matrices to calculate attention weights, at this line: https://github.com/huggingface/transformers/blob/4b26a61631b8fd30f845cf08ebcc5ed65fe83c9b/src/transformers/models/gpt2/modeling_gpt2.py#L184.
Specifically the dot product of two vectors from query/key overflows the `float16` dtype.
### Your contribution
Would a simple `torch.clamp` call work/be correct? | 07-03-2023 15:16:40 | 07-03-2023 15:16:40 | Hi!
There are a few places in the library having something below
```python
# clamp inf values to enable fp16 training
if hidden_states.dtype == torch.float16:
max_dtype = torch.finfo(hidden_states.dtype).max
clamp_value = torch.where(torch.isinf(hidden_states).any(), max_dtype - 1000, max_dtype)
hidden_states = torch.clamp(hidden_states, min=-clamp_value, max=clamp_value)
```
You can try similar thing locally :-)<|||||>@ydshieh thank you for the example, that's effectively exactly what I'm looking for.
My only concern is that implementing this clamping locally is far more overhead for an end user than implementing it in the codebase. My reasoning for this is because this inf is not really returned outside of the model forward call as it is passed into a softmax in the GPT2Attention block, and ultimately the user only sees the nans propagated by the inf in the softmax. So in order to enable this clamping, a user would have to override the forward call of `GPT2Attention` and ultimately subclass `GPT2Attention`, `GPT2Block`, `GPT2Model`, and whichever class they're using which contains the base model (eg `GPT2LMHeadModel`).<|||||>I understand @StevenSong . But it's better for you to try locally first to see if it actually solves the issue, or something more have to be done (like same fix at different places).
In theory, this kind of overflow can happen anywhere. In practice, we probably just need to add clamping at one or two places.
Also I would need to discuss internally to see if we want to do this change for a long-existing model like `gpt2`.<|||||>Just for my debugging, I ended up just modifying the source file at `models/gpt2/modeling_gpt2.py` and inserting the below chunk between these lines: https://github.com/huggingface/transformers/blob/4b26a61631b8fd30f845cf08ebcc5ed65fe83c9b/src/transformers/models/gpt2/modeling_gpt2.py#L203-L205
`inf` clamping chunk, same as what was suggested above:
```python
if attn_weights.dtype == torch.float16:
max_dtype = torch.finfo(attn_weights.dtype).max
clamp_value = torch.where(torch.isinf(attn_weights).any(), max_dtype - 1000, max_dtype)
attn_weights = torch.clamp(attn_weights, min=-clamp_value, max=clamp_value)
```
and on my test case, I can confirm that this results in non-nan loss and non-nan logits. I can continue with my training loop with no errors from backprop and the next batch also returns successfully<|||||>Thanks!<|||||>Oh, @StevenSong
It's for the attention weights. I just remembered that for them, it's standard to cast to fp32 instead of staying in fp16 and using clamping.
Could you check whether the following works in your case? Thanks a lot!
```python
# upcast to fp32 if the weights are in fp16. Please see https://github.com/huggingface/transformers/pull/17437
if attn_weights.dtype == torch.float16:
attn_weights = nn.functional.softmax(attn_weights, dim=-1, dtype=torch.float32).to(torch.float16)
else:
attn_weights = nn.functional.softmax(attn_weights, dim=-1)
```
<|||||>Hi @ydshieh,
I did notice this upcasting was already being done implicitly when adding the `attention_mask` to `attn_weights` at this line:
https://github.com/huggingface/transformers/blob/4b26a61631b8fd30f845cf08ebcc5ed65fe83c9b/src/transformers/models/gpt2/modeling_gpt2.py#L207
As mentioned in the referenced PR (#17437), `attention_mask` is filled with very large negative values. edit: I guess this also depends on `attention_mask` being passed
But this is already far after the `inf` is introduced at the matmul of the query and key matrices together. I guess if those are cast to fp32, that would probably also fix it?<|||||>Yes, the large negative value depends on the dtype
https://github.com/huggingface/transformers/blob/4b26a61631b8fd30f845cf08ebcc5ed65fe83c9b/src/transformers/models/gpt2/modeling_gpt2.py#L822-L823.
If we upcast to fp32 as mentioned in my previous comment, it should be fine. This is done in a few models, like `opt` or `xglm`.
See also #17437<|||||>> Yes, the large negative value depends on the dtype
>
> https://github.com/huggingface/transformers/blob/4b26a61631b8fd30f845cf08ebcc5ed65fe83c9b/src/transformers/models/gpt2/modeling_gpt2.py#L822-L823
>
> .
Interesting, I'd have expected `attention_mask` to be `-65k` but I'm seeing `-3.4028e+38` as in fp32, yet my `attn_weights` are in fp16. Is this because of `autocast` only casting some but not all tensors ie query/key are cast to fp16 but the model dtype is still fp32?
> If we upcast to fp32 as mentioned in my previous comment, it should be fine. This is done in a few models, like `opt` or `xglm`.
>
> See also #17437
I agree, upcasting to fp32 should resolve the issue but I think it needs to be done earlier, at the level of query/key matrices. Otherwise the `inf` would just be upcast to fp32, no?<|||||>> Is this because of autocast only casting some but not all tensors ie query/key are cast to fp16 but the model dtype is still fp32?
I didn't use this before personally, but from torch doc
> where some operations use the torch.float32 (float) datatype and other operations use lower precision floating point datatype
it looks the same as you mentioned.
Regarding:
> it needs to be done earlier, at the level of query/key matrices. Otherwise the inf would just be upcast to fp32, no?
The `inf` you see in the query/key matrices might also be a consequence of the computation on the attention weights or their softmax in an earlier step. It's not easy to say for sure where the problem starts to accumulate (even if it doesn't cause a failure at that early stage).
Let's try the more standard approach, where people suggest using fp32 for the softmax (and let's upcast before this while adding the attention mask), and see how things go 🤗.
<|||||>Got to say my above comment is based on my experience on FP16, not with autocast. From your description, you mentioned the `attn_weights + attention_mask` would already be in FP32. It's good idea to double check this (what's the dtype of this step's output) and if the following softmax takes places in FP32 too.
However, it doesn't hurt to give it a try ๐ค <|||||>apologies for the late reply to this thread, here's what I've tried and what's worked/not worked (and by works, I mean if `inf` and `nan` no longer appear in `attn_weights` and loss is non-`nan`):
1. upcasting at the softmax call (see below for code): this does NOT work as `attn_weights.dtype` is no longer fp16 after adding `attention_mask` in fp32. so the `inf` is still passed to softmax and we get `nan`s.
```python
def _attn(self, query, key, value, attention_mask=None, head_mask=None):
[...]
if attention_mask is not None:
# Apply the attention mask
attn_weights = attn_weights + attention_mask
if attn_weights.dtype == torch.float16:
attn_weights = nn.functional.softmax(attn_weights, dim=-1, dtype=torch.float32).to(torch.float16)
else:
attn_weights = nn.functional.softmax(attn_weights, dim=-1)
```
2. explicitly upcasting before summing: this does NOT work as `attn_weights` already contains the `inf` and so is simply upcasting `inf` from fp16 to fp32.
```python
def _attn(self, query, key, value, attention_mask=None, head_mask=None):
[...]
attn_weights = attn_weights.to(torch.float32)
if attention_mask is not None:
# Apply the attention mask
attn_weights = attn_weights + attention_mask
attn_weights = nn.functional.softmax(attn_weights, dim=-1)
```
3. explicitly upcasting query and key matrices: this also does NOT work. this is because even though query and key are indeed in fp32, within the `autocast` context, `torch.matmul` downcasts back to fp16!
```python
def _attn(self, query, key, value, attention_mask=None, head_mask=None):
query = query.to(torch.float32)
key = key.to(torch.float32)
attn_weights = torch.matmul(query, key.transpose(-1, -2))
[...]
```
4. explicitly upcasting query and key with enforced fp32: the reason I'm so insistent on upcasting query and key is because I've already found that the line which produces the `inf` is the matmul between the query and key matrices, as I mentioned in the original post. However, as seen in the above attempt, the `autocast` context does not respect the explicit cast. So we need to disable the autocast context, if it exists ([see relevant docs](https://pytorch.org/docs/stable/notes/amp_examples.html#autocast-and-custom-autograd-functions)). Thus `attn_weights` is finally in fp32 as the product of two fp32 matrices.
```python
def _attn(self, query, key, value, attention_mask=None, head_mask=None):
query = query.to(torch.float32)
key = key.to(torch.float32)
if torch.is_autocast_enabled():
with torch.amp.autocast(device_type=query.device.type, enabled=False):
attn_weights = torch.matmul(query, key.transpose(-1, -2))
else:
attn_weights = torch.matmul(query, key.transpose(-1, -2))
```
I'd also put forth that upcasting the query and key vectors to fp32 is a generalizable solution, as `attn_weights` is then always fp32 and all subsequent operations with `attn_weights` in the attention block are also implicitly upcast to fp32. It can always be downcast later, in fact it seems like this was already considered in this same function at this line:
https://github.com/huggingface/transformers/blob/fe861e578f50dc9c06de33cd361d2f625017e624/src/transformers/models/gpt2/modeling_gpt2.py#L211-L213
<|||||>Is there a good way for me to share my example I've been debugging? I've pared it down to a single script with a specific batch which results in the `inf`/`nan`, the base model is on the hub already, and I'm specifically doing prompt tuning so there's only something like extra ~20K params for the example to work<|||||>@StevenSong
- thank you for the hard work on doing experiments! โค๏ธ
- You can post the script as in a comment, or maybe create a colab notebook and share with us ๐
Also, I haven't asked (I think) previously: could you share us the full error log (I know it's inf thing, but would like to see the log). Thank you ๐
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 24,637 | open | TFOPTForCausalLM Attention mask size mismatch exception | ### System Info
- `transformers` version: 4.30.2
- Platform: Linux-5.15.107+-x86_64-with-glibc2.31
- Python version: 3.10.12
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.1+cu118 (False)
- Tensorflow version (GPU?): 2.12.0 (False)
- Flax version (CPU?/GPU?/TPU?): 0.6.11 (cpu)
- Jax version: 0.4.10
- JaxLib version: 0.4.10
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I'm trying to write my own decoding logic so I can export to TFLite (the app runs decoding logic itself, calling into the tflite model with past_key_values and input_ids but the code for that is a little more involved)
I'm not sure if I'm missing something important here but I was able to successfully export Whisper before with this sort of pattern
I've reduced the problem to this example:
[Colab Link](https://colab.research.google.com/drive/1chUspU_RBkHuZ12Ls3FKdLusmYeoXZC_?usp=sharing)
```py
import tensorflow as tf
from transformers import AutoTokenizer, TFOPTForCausalLM, TFGPT2LMHeadModel
def decoding_example(model, tokenizer):
input_ids = tf.convert_to_tensor([[1]]) * int(tokenizer.bos_token_id)
outputs = model(input_ids, return_dict=True, use_cache=True, past_key_values=None)
past_key_values = outputs.past_key_values
max_new_tokens = 8
for i in range(max_new_tokens):
print(i)
decoded_next_token = 123 # just an example, this would depend on outputs.last_hidden_state
input_ids = tf.convert_to_tensor([[1]]) * decoded_next_token
outputs = model(input_ids, return_dict=True, use_cache=True, past_key_values=past_key_values)
past_key_values = outputs.past_key_values
print("Finished, all OK")
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-125m")
model = TFOPTForCausalLM.from_pretrained("facebook/opt-125m")
decoding_example(model, tokenizer) # fails
```
<details>
<summary>Output</summary>
```
0
---------------------------------------------------------------------------
InvalidArgumentError Traceback (most recent call last)
<ipython-input-5-07105bf5f115> in <cell line: 4>()
2 model = TFOPTForCausalLM.from_pretrained("facebook/opt-125m")
3
----> 4 decoding_example(model, tokenizer) # fails
9 frames
<ipython-input-3-94ad2e4e3e50> in decoding_example(model, tokenizer)
11 input_ids = tf.convert_to_tensor([[1]]) * decoded_next_token
12
---> 13 outputs = model(input_ids, return_dict=True, use_cache=True, past_key_values=past_key_values)
14 past_key_values = outputs.past_key_values
15
/usr/local/lib/python3.10/dist-packages/keras/utils/traceback_utils.py in error_handler(*args, **kwargs)
68 # To get the full stack trace, call:
69 # `tf.debugging.disable_traceback_filtering()`
---> 70 raise e.with_traceback(filtered_tb) from None
71 finally:
72 del filtered_tb
/usr/local/lib/python3.10/dist-packages/transformers/modeling_tf_utils.py in run_call_with_unpacked_inputs(self, *args, **kwargs)
440
441 unpacked_inputs = input_processing(func, config, **fn_args_and_kwargs)
--> 442 return func(self, **unpacked_inputs)
443
444 # Keras enforces the first layer argument to be passed, and checks it through `inspect.getfullargspec()`. This
/usr/local/lib/python3.10/dist-packages/transformers/models/opt/modeling_tf_opt.py in call(self, input_ids, past_key_values, attention_mask, position_ids, head_mask, inputs_embeds, labels, use_cache, output_attentions, output_hidden_states, return_dict, training, **kwargs)
956 return_dict = return_dict if return_dict is not None else self.config.use_return_dict
957
--> 958 outputs = self.model(
959 input_ids=input_ids,
960 past_key_values=past_key_values,
/usr/local/lib/python3.10/dist-packages/transformers/modeling_tf_utils.py in run_call_with_unpacked_inputs(self, *args, **kwargs)
440
441 unpacked_inputs = input_processing(func, config, **fn_args_and_kwargs)
--> 442 return func(self, **unpacked_inputs)
443
444 # Keras enforces the first layer argument to be passed, and checks it through `inspect.getfullargspec()`. This
/usr/local/lib/python3.10/dist-packages/transformers/models/opt/modeling_tf_opt.py in call(self, input_ids, attention_mask, head_mask, past_key_values, inputs_embeds, use_cache, output_attentions, output_hidden_states, return_dict, training, **kwargs)
730 return_dict = return_dict if return_dict is not None else self.config.use_return_dict
731
--> 732 outputs = self.decoder(
733 input_ids,
734 attention_mask=attention_mask,
/usr/local/lib/python3.10/dist-packages/transformers/modeling_tf_utils.py in run_call_with_unpacked_inputs(self, *args, **kwargs)
440
441 unpacked_inputs = input_processing(func, config, **fn_args_and_kwargs)
--> 442 return func(self, **unpacked_inputs)
443
444 # Keras enforces the first layer argument to be passed, and checks it through `inspect.getfullargspec()`. This
/usr/local/lib/python3.10/dist-packages/transformers/models/opt/modeling_tf_opt.py in call(self, input_ids, inputs_embeds, attention_mask, head_mask, past_key_values, use_cache, output_attentions, output_hidden_states, return_dict, training)
657 past_key_value = past_key_values[idx] if past_key_values is not None else None
658
--> 659 hidden_states, layer_self_attn, present_key_value = decoder_layer(
660 hidden_states,
661 attention_mask=attention_mask,
/usr/local/lib/python3.10/dist-packages/transformers/models/opt/modeling_tf_opt.py in call(self, hidden_states, attention_mask, layer_head_mask, past_key_value, training, output_attentions, use_cache)
323
324 # add present self-attn cache to positions 1,2 of present_key_value tuple
--> 325 hidden_states, self_attn_weights, present_key_value = self.self_attn(
326 hidden_states=hidden_states,
327 past_key_value=self_attn_past_key_value,
/usr/local/lib/python3.10/dist-packages/transformers/models/opt/modeling_tf_opt.py in call(self, hidden_states, key_value_states, past_key_value, attention_mask, layer_head_mask, training)
217
218 if attention_mask is not None:
--> 219 tf.debugging.assert_equal(
220 shape_list(attention_mask),
221 [bsz, 1, tgt_len, src_len],
InvalidArgumentError: Exception encountered when calling layer 'self_attn' (type TFOPTAttention).
Attention mask should be of size (1, 1, 0, 1), but is [1, 1, 1, 2]
Condition x == y did not hold.
Indices of first 2 different values:
[[2]
[3]]
Corresponding x values:
[1 2]
Corresponding y values:
[0 1]
First 3 elements of x:
[1 1 1]
First 3 elements of y:
[1 1 0]
Call arguments received by layer 'self_attn' (type TFOPTAttention):
โข hidden_states=tf.Tensor(shape=(1, 0, 768), dtype=float32)
โข key_value_states=None
โข past_key_value=('tf.Tensor(shape=(1, 12, 1, 64), dtype=float32)', 'tf.Tensor(shape=(1, 12, 1, 64), dtype=float32)')
โข attention_mask=tf.Tensor(shape=(1, 1, 1, 2), dtype=float32)
โข layer_head_mask=None
โข training=False
```
</details>
### Expected behavior
I expect it to work like it does with GPT2
```py
tokenizer = AutoTokenizer.from_pretrained("distilgpt2")
model = TFGPT2LMHeadModel.from_pretrained("distilgpt2")
decoding_example(model, tokenizer) # works
``` | 07-03-2023 14:55:51 | 07-03-2023 14:55:51 | cc @Rocketknight1 <|||||>Yep, something is clearly being mangled in here. The `hidden_states` shape of `(1, 0, 768)` is alarming - there's obviously some incorrect array slicing happening somewhere. I'll investigate as soon as I get a chance, but if you want to try taking a look before then, the relevant code is [all in this file](https://github.com/huggingface/transformers/blob/main/src/transformers/models/opt/modeling_tf_opt.py). If you want to try debugging it yourself, I'd advise:
1) Clone `transformers` yourself: `git clone https://github.com/huggingface/transformers.git`
2) Make an editable install from that local repo: `cd transformers && pip install -e .`
3) Start putting `breakpoint()` or tests in the `modeling_tf_opt.py` file and seeing if you can find where the arrays get sliced down to length `0`.
That's a lot of work, though - if you can wait, I'll get around to it in a few days!<|||||>Unfortunately, I didn't manage to finish this before a holiday due to some more Falcon chaos - cc @gante if you get a chance, and if not I can take it when I get back!
I identified the core problem as some confusion in the code about what the actual `seq_length` is. The first problem is [here](https://github.com/huggingface/transformers/blob/main/src/transformers/models/opt/modeling_tf_opt.py#L618) - it uses the sequence length from `input_ids` / `input_embeds` to build an `attention_mask` if one isn't provided, but the actual shape should be `(batch_size, seq_length + past_key_values_length)`, whereas this just builds one with shape `(batch_size, seq_length)`.
However, fixing this led to other problems - the expanded/combined attention mask code also gets a bit confused when `past_key_values` is present. I'm not sure why generation tests don't pick this up, but possibly they explicitly pass an attention mask and avoid the issue!
This attention mask expansion code has been copied all around the codebase - I encountered in in PyTorch Falcon and BLOOM recently, where it also caused some problems. This might be worth doing a repo-wide refactor at some point, as I think the code is unclear and the variable names can be confusing, probably because it started as encoder-decoder code and is now being used to manage attention over past key-values.<|||||>Unrelated to this issue but for tflite export I end up having to do something hacky anyway to pass a custom past_key_values_length value, since the shape is dynamic and code cannot depend on it during tflite export (`past_key_values[0][0].shape[2]` just resolves to None and causes an exception later on trying to use None as a number). It'd be nice if there was a built-in way to pass a past_key_values_length value<|||||>Hi @abb128, good point! That might be a sign that we should be using `tf.shape()` instead, which will correctly allow the dynamic shape to be compiled. I'll investigate while I'm fixing the rest of this.<|||||>@abb128 I've filed a patch - please try it and let me know if it works for you! |
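For anyone hitting this before the patch lands, a possible workaround in a manual decoding loop is to pass an explicit `attention_mask` that already covers the cached tokens — a rough, unverified sketch following the shape rule quoted above (`(batch_size, seq_length + past_key_values_length)`) and the `tf.shape()` suggestion, not the library fix itself:
```python
import tensorflow as tf

def decode_step(model, next_token_id, past_key_values):
    # one new token per step, batch size 1 (illustrative)
    input_ids = tf.constant([[next_token_id]], dtype=tf.int32)
    if past_key_values is None:
        past_length = tf.constant(0, dtype=tf.int32)
    else:
        # dynamic lookup (tf.shape, not .shape) so this also survives tracing / TFLite export
        past_length = tf.shape(past_key_values[0][0])[2]
    # mask covers the cached tokens plus the new token: (batch_size, past_length + 1)
    attention_mask = tf.ones(tf.stack([1, past_length + 1]), dtype=tf.int32)
    outputs = model(
        input_ids,
        attention_mask=attention_mask,
        past_key_values=past_key_values,
        use_cache=True,
        return_dict=True,
    )
    return outputs  # outputs.past_key_values feeds the next call
```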
transformers | 24,636 | closed | Fix audio feature extractor deps | # What does this PR do?
The PR #21998 refactored many of the audio feature extractors to use a numpy backend for log Mel feature extraction (as opposed to `torchaudio` as was done previously). However, some of the feature extractors still required the `"speech"` backend for import, which states `torchaudio` as its sole dependency:
https://github.com/huggingface/transformers/blob/6eedfa6dd15dc1e22a55ae036f681914e5a0d9a1/src/transformers/utils/import_utils.py#L648-L650
This PR updates these four feature extractors to no longer require `"speech"`, since they're now numpy only. | 07-03-2023 12:02:43 | 07-03-2023 12:02:43 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 24,635 | closed | Generate: multi-device support for contrastive search | # What does this PR do?
Fixes #24634
In multi-GPU settings, the past KV cache may be scattered across devices -- the cache corresponding to a layer sits on the same device as the layer itself, and different layers may sit on different devices.
In contrastive search, we must apply indexing operations on the past KV cache. The indexes are in a tensor, which sits on the same device as the model outputs by default. Applying these indexes on the past KV cache currently results in an exception if the model is split across devices (see the issue linked above).
This means we either move the indexing tensor to all possible devices or keep the tensor on CPU. Indexing is typically CPU-heavy on PyTorch, so the benchmarks on my end indicate that moving the indexing tensor to the CPU enables multi-device contrastive search without noticeable throughput degradation ๐ | 07-03-2023 10:55:15 | 07-03-2023 10:55:15 | _The documentation is not available anymore as the PR was closed or merged._<|||||>For future reference, here's the benchmark code:
```py
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
from tqdm import tqdm
# Other configuration options
DEVICE = "cuda:0"
NUM_RUNS = 10
MAX_NEW_TOKENS = 1000
TEXT_INPUT = "def sieve_of_eratosthenes():"
# Load the model and prepare generate args
repo_id = "huggyllama/llama-7b"
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto", load_in_4bit=True)
assistant_model = None
tokenizer = AutoTokenizer.from_pretrained(repo_id, use_fast=True)
model_inputs = tokenizer(TEXT_INPUT, return_tensors="pt").to(DEVICE)
generate_kwargs = {
"max_new_tokens": MAX_NEW_TOKENS,
"top_k": 10,
"penalty_alpha": 0.6,
}
# Warmup
print("Warming up...")
for _ in range(2):
gen_out = model.generate(**model_inputs, **generate_kwargs)
print("Done!")
# Measure OR Stream
def measure_generate(model, model_inputs, generate_kwargs):
start_event = torch.cuda.Event(enable_timing=True)
end_event = torch.cuda.Event(enable_timing=True)
torch.cuda.reset_peak_memory_stats(DEVICE)
torch.cuda.empty_cache()
torch.cuda.synchronize()
start_event.record()
for _ in tqdm(range(NUM_RUNS)):
gen_out = model.generate(**model_inputs, **generate_kwargs)
end_event.record()
torch.cuda.synchronize()
max_memory = torch.cuda.max_memory_allocated(DEVICE)
print("Max memory (MB): ", max_memory * 1e-6)
print("Throughput (tokens/sec): ", (NUM_RUNS * MAX_NEW_TOKENS) / (start_event.elapsed_time(end_event) * 1.0e-3))
measure_generate(model, model_inputs, generate_kwargs)
```
On my end, with a RTX3090, I get 150 tokens/s before and after these changes.<|||||>@gante Thanks for adding the script! โค๏ธ |
transformers | 24,634 | closed | .generate() supports contrastive-search on multi-device? | ### System Info
### script
```python
import torch
from transformers import AutoTokenizer
from transformers import AutoModelForCausalLM
checkpoint = "EleutherAI/polyglot-ko-12.8b"
tokenizer = AutoTokenizer.from_pretrained(
checkpoint,
padding_side="left",
pad_token_id=0,
)
model = AutoModelForCausalLM.from_pretrained(
checkpoint,
torch_dtype=torch.float16,
device_map="auto",
load_in_8bit=False,
pad_token_id=tokenizer.pad_token_id,
)
model.eval()
tokenized = tokenizer("hi there?", return_tensors='pt')
input_ids = tokenized.input_ids
attention_mask = tokenized.attention_mask
generated = model.generate(input_ids, penalty_alpha=0.6, top_k=4, max_length=512)
```
### faced messages
When I ran the above script, I got the following message.
```
โญโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ Traceback (most recent call last) โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโฎ
โ in <module>:6 โ
โ โ
โ 3 input_ids = tokenized.input_ids โ
โ 4 attention_mask = tokenized.attention_mask โ
โ 5 โ
โ โฑ 6 generated = model.generate(input_ids, penalty_alpha=0.6, top_k=4, max_length=512) โ
โ 7 โ
โ โ
โ /opt/conda/envs/py3.10/lib/python3.10/site-packages/torch/utils/_contextlib.py:115 in โ
โ decorate_context โ
โ โ
โ 112 โ @functools.wraps(func) โ
โ 113 โ def decorate_context(*args, **kwargs): โ
โ 114 โ โ with ctx_factory(): โ
โ โฑ 115 โ โ โ return func(*args, **kwargs) โ
โ 116 โ โ
โ 117 โ return decorate_context โ
โ 118 โ
โ โ
โ /opt/conda/envs/py3.10/lib/python3.10/site-packages/transformers/generation/utils.py:1544 in โ
โ generate โ
โ โ
โ 1541 โ โ โ if not model_kwargs["use_cache"]: โ
โ 1542 โ โ โ โ raise ValueError("Contrastive search requires `use_cache=True`") โ
โ 1543 โ โ โ โ
โ โฑ 1544 โ โ โ return self.contrastive_search( โ
โ 1545 โ โ โ โ input_ids, โ
โ 1546 โ โ โ โ top_k=generation_config.top_k, โ
โ 1547 โ โ โ โ penalty_alpha=generation_config.penalty_alpha, โ
โ โ
โ /opt/conda/envs/py3.10/lib/python3.10/site-packages/torch/utils/_contextlib.py:115 in โ
โ decorate_context โ
โ โ
โ 112 โ @functools.wraps(func) โ
โ 113 โ def decorate_context(*args, **kwargs): โ
โ 114 โ โ with ctx_factory(): โ
โ โฑ 115 โ โ โ return func(*args, **kwargs) โ
โ 116 โ โ
โ 117 โ return decorate_context โ
โ 118 โ
โ โ
โ /opt/conda/envs/py3.10/lib/python3.10/site-packages/transformers/generation/utils.py:2004 in โ
โ contrastive_search โ
โ โ
โ 2001 โ โ โ โ
โ 2002 โ โ โ logit_for_next_step = logits_processor(input_ids, logit_for_next_step) โ
โ 2003 โ โ โ logit_for_next_step = logits_warper(input_ids, logit_for_next_step) โ
โ โฑ 2004 โ โ โ next_probs = nn.functional.softmax(logit_for_next_step, dim=-1) โ
โ 2005 โ โ โ top_k_probs, top_k_ids = torch.topk(next_probs, dim=-1, k=top_k) โ
โ 2006 โ โ โ โ
โ 2007 โ โ โ # Store scores, attentions and hidden_states when required โ
โ โ
โ /opt/conda/envs/py3.10/lib/python3.10/site-packages/torch/nn/functional.py:1843 in softmax โ
โ โ
โ 1840 โ if dim is None: โ
โ 1841 โ โ dim = _get_softmax_dim("softmax", input.dim(), _stacklevel) โ
โ 1842 โ if dtype is None: โ
โ โฑ 1843 โ โ ret = input.softmax(dim) โ
โ 1844 โ else: โ
โ 1845 โ โ ret = input.softmax(dim, dtype=dtype) โ
โ 1846 โ return ret โ
โฐโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโฏ
RuntimeError: "softmax_lastdim_kernel_impl" not implemented for 'Half'
```
And then, I modified my script to move `input_ids` to `cuda:0` like this:
```
input_ids = tokenized.input_ids.to("cuda:0")
generated = model.generate(input_ids, penalty_alpha=0.6, top_k=4, max_length=512)
```
Finally I got the following message:
```
โญโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ Traceback (most recent call last) โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโฎ
โ in <module>:6 โ
โ โ
โ 3 input_ids = tokenized.input_ids.to("cuda:0") โ
โ 4 attention_mask = tokenized.attention_mask.to("cuda:0") โ
โ 5 โ
โ โฑ 6 generated = model.generate(input_ids, penalty_alpha=0.6, top_k=4, max_length=512) โ
โ 7 โ
โ โ
โ /opt/conda/envs/py3.10/lib/python3.10/site-packages/torch/utils/_contextlib.py:115 in โ
โ decorate_context โ
โ โ
โ 112 โ @functools.wraps(func) โ
โ 113 โ def decorate_context(*args, **kwargs): โ
โ 114 โ โ with ctx_factory(): โ
โ โฑ 115 โ โ โ return func(*args, **kwargs) โ
โ 116 โ โ
โ 117 โ return decorate_context โ
โ 118 โ
โ โ
โ /opt/conda/envs/py3.10/lib/python3.10/site-packages/transformers/generation/utils.py:1544 in โ
โ generate โ
โ โ
โ 1541 โ โ โ if not model_kwargs["use_cache"]: โ
โ 1542 โ โ โ โ raise ValueError("Contrastive search requires `use_cache=True`") โ
โ 1543 โ โ โ โ
โ โฑ 1544 โ โ โ return self.contrastive_search( โ
โ 1545 โ โ โ โ input_ids, โ
โ 1546 โ โ โ โ top_k=generation_config.top_k, โ
โ 1547 โ โ โ โ penalty_alpha=generation_config.penalty_alpha, โ
โ โ
โ /opt/conda/envs/py3.10/lib/python3.10/site-packages/torch/utils/_contextlib.py:115 in โ
โ decorate_context โ
โ โ
โ 112 โ @functools.wraps(func) โ
โ 113 โ def decorate_context(*args, **kwargs): โ
โ 114 โ โ with ctx_factory(): โ
โ โฑ 115 โ โ โ return func(*args, **kwargs) โ
โ 116 โ โ
โ 117 โ return decorate_context โ
โ 118 โ
โ โ
โ /opt/conda/envs/py3.10/lib/python3.10/site-packages/transformers/generation/utils.py:2076 in โ
โ contrastive_search โ
โ โ
โ 2073 โ โ โ โ # item is either the key or the value matrix โ
โ 2074 โ โ โ โ for item in layer: โ
โ 2075 โ โ โ โ โ item = torch.stack(torch.split(item, top_k, dim=0)) # [B, K, num_he โ
โ โฑ 2076 โ โ โ โ โ item = item[range(batch_size), selected_idx, ...] # [B, num_head, s โ
โ 2077 โ โ โ โ โ items += (item,) โ
โ 2078 โ โ โ โ new_key_values += (items,) โ
โ 2079 โ โ โ next_past_key_values = new_key_values โ
โฐโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโฏ
RuntimeError: indices should be either on cpu or on the same device as the indexed tensor (cuda:1)
```
### transformers-cli env
Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points.
- `transformers` version: 4.30.2
- Platform: Linux-4.19.93-1.nbp.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.10.11
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes, I'm using 2 P40 cores in my script.
- Using distributed or parallel set-up in script?: No, but I'm using accelerators's `device_map="auto"` option to automatically split model weights.
### Who can help?
@gante
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. Just run the following generating script on the env **multi-device assigned**.
```python
import torch
from transformers import AutoTokenizer
from transformers import AutoModelForCausalLM
checkpoint = "EleutherAI/polyglot-ko-12.8b"
tokenizer = AutoTokenizer.from_pretrained(
checkpoint,
padding_side="left",
pad_token_id=0,
)
model = AutoModelForCausalLM.from_pretrained(
checkpoint,
torch_dtype=torch.float16,
device_map="auto",
load_in_8bit=False,
pad_token_id=tokenizer.pad_token_id,
)
model.eval()
tokenized = tokenizer("hi there?", return_tensors='pt')
input_ids = tokenized.input_ids
attention_mask = tokenized.attention_mask
generated = model.generate(input_ids, penalty_alpha=0.6, top_k=4, max_length=512)
```
### Expected behavior
I want to obtain the generated text regardless of the outcome. | 07-03-2023 09:49:20 | 07-03-2023 09:49:20 | Hey @pfldy2850 ๐
I believe I know the solution to your issues, but I don't have a multi-gpu setup. I'm going to open a PR, and then ask you to double-check whether it works :)<|||||>@pfldy2850 would you be able to test using [this PR](https://github.com/huggingface/transformers/pull/24635)?<|||||>@gante
Wow! Your outstanding work has successfully resolved the issue. ๐
I have achieved the expected output that I was aiming for.
I would like to use these changes in production.
Could you please provide information on the release cycle of this repository?<|||||>@pfldy2850 awesome! The PR should be merged within 24 hours :)
You have two options, after the PR gets merged:
1. Wait for the next release, which will probably happen in two or three weeks
2. Install from `main` with `pip install --upgrade git+https://github.com/huggingface/transformers.git` OR replace the requirement on your `setup.py`/`requirements.txt` with `transformers @ git+https://github.com/huggingface/transformers.git` |
transformers | 24,633 | closed | Pin `Pillow` for now | # What does this PR do?
`Pillow 10.0.0` came out 2 days ago.
Our CI gets errors (via the usage of `detectron2`):
```bash
...
/usr/local/lib/python3.8/dist-packages/detectron2/data/transforms/transform.py:36: in <module>
...
> def __init__(self, src_rect, output_size, interp=Image.LINEAR, fill=0):
E AttributeError: module 'PIL.Image' has no attribute 'LINEAR'
```
This is due to the previous deprecation and its removal now:
```bash
<stdin>:1: DeprecationWarning: LINEAR is deprecated and will be removed in Pillow 10 (2023-07-01). Use BILINEAR or Resampling.BILINEAR instead.
```
This PR pins `Pillow` for now until `detectron2` fixes it.
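For reference, downstream code that still uses the removed alias can switch to the resampling enum; a small illustrative snippet (not part of this PR, since the offending call lives in `detectron2`):

```python
from PIL import Image

# Image.LINEAR was a deprecated alias and is removed in Pillow 10.0;
# Image.Resampling.BILINEAR (available since Pillow 9.1) is the supported spelling
img = Image.new("RGB", (32, 32))
resized = img.resize((64, 64), resample=Image.Resampling.BILINEAR)
```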
| 07-03-2023 07:26:23 | 07-03-2023 07:26:23 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Merge as the 2 failed tests seem flaky.
transformers | 24,632 | open | TrOCRProcessor.from_pretrained raise KeyError(key) | ### System Info
@amyeroberts
1. background
I fine-tuned a model of [microsoft/trocr-base-printed](https://huggingface.co/microsoft/trocr-base-printed)
the model is https://huggingface.co/hongyusir/trocr-base-printed_captcha_ocr
and loading my model raises KeyError(key)
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
2. the source code is:
```
import os, sys, itertools
os.environ['TOKENIZERS_PARALLELISM']='false'
import pandas as pd
from PIL import Image
import torch
from torch.utils.data import Dataset
import datasets
from datasets import load_dataset
import transformers
from transformers import Seq2SeqTrainingArguments, Seq2SeqTrainer
from transformers import VisionEncoderDecoderModel, TrOCRProcessor, default_data_collator
import evaluate
print("Python:".rjust(15), sys.version[0:6])
print("Pandas:".rjust(15), pd.__version__)
print("Datasets:".rjust(15), datasets.__version__)
print("Transformers:".rjust(15), transformers.__version__)
print("Torch:".rjust(15), torch.__version__)
print("load model")
processor = TrOCRProcessor.from_pretrained('hongyusir/trocr-base-printed_captcha_ocr')
model = VisionEncoderDecoderModel.from_pretrained('hongyusir/trocr-base-printed_captcha_ocr')
print("finish load model")
```
3. the output is:
```
Python: 3.8.10
Pandas: 2.0.3
Datasets: 2.13.1
Transformers: 4.30.2
Torch: 2.0.1+cu117
load model
Traceback (most recent call last):
File "x.py", line 27, in <module>
processor = TrOCRProcessor.from_pretrained('hongyusir/trocr-base-printed_captcha_ocr')
File "/home/pc/.local/lib/python3.8/site-packages/transformers/processing_utils.py", line 184, in from_pretrained
args = cls._get_arguments_from_pretrained(pretrained_model_name_or_path, **kwargs)
File "/home/pc/.local/lib/python3.8/site-packages/transformers/processing_utils.py", line 228, in _get_arguments_from_pretrained
args.append(attribute_class.from_pretrained(pretrained_model_name_or_path, **kwargs))
File "/home/pc/.local/lib/python3.8/site-packages/transformers/models/auto/tokenization_auto.py", line 707, in from_pretrained
tokenizer_class_py, tokenizer_class_fast = TOKENIZER_MAPPING[type(config)]
File "/home/pc/.local/lib/python3.8/site-packages/transformers/models/auto/auto_factory.py", line 665, in __getitem__
raise KeyError(key)
KeyError: <class 'transformers.models.vision_encoder_decoder.configuration_vision_encoder_decoder.VisionEncoderDecoderConfig'>
```
### Expected behavior
load model success | 07-03-2023 06:57:07 | 07-03-2023 06:57:07 | Hi @laizhenhai88
It seems you forgot to upload the tokenizer you loaded/used during training to your own model repo.
If you upload it, the issue you reported will disappear.
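For example, a minimal sketch of uploading the processor (which bundles the image processor and the tokenizer) to the same repo (the repo id below is just the one from this thread):

```python
from transformers import TrOCRProcessor

processor = TrOCRProcessor.from_pretrained("microsoft/trocr-base-printed")

# save it next to the fine-tuned weights ...
processor.save_pretrained("./trocr-base-printed_captcha_ocr")
# ... and/or push it to the Hub repo so that from_pretrained finds the tokenizer files there
processor.push_to_hub("hongyusir/trocr-base-printed_captcha_ocr")
```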
Let us know if you have further question.<|||||>> Hi @laizhenhai88
>
> It seems you forgot to upload the tokenizer you loaded/used during training to your own model repo. If you upload it, the issue you reported will disappear.
>
> Let us know if you have further question.
thanks!
my train code is
```
trainer = Seq2SeqTrainer(
model=model,
tokenizer=processor.feature_extractor,
args=args,
compute_metrics=compute_metrics,
train_dataset=train_ds,
eval_dataset=test_ds,
data_collator=default_data_collator
)
trainer.train()
trainer.save_model()
trainer.save_state()
trainer.evaluate()
```
maybe I need `trainer.push_to_hub("All Dunn!!!")` ?<|||||>Yes, but you already have a repo, so I assume you already used `push_to_hub`. I am not sure why you don't have tokenizer on the repo then.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 24,631 | open | Fine tunning Bloom model - Failed to import transformers.training_args | ### System Info
falcon-7b-instruct(url)
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
import torch
import transformers
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-7b-instruct")
# build the text-generation pipeline first (falcon-7b-instruct, as listed in the system info above)
pipeline = transformers.pipeline(
    "text-generation",
    model="tiiuae/falcon-7b-instruct",
    tokenizer=tokenizer,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
sequences = pipeline(
    "Write a poem about Valencia.",
    max_length=200,
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
```
### Expected behavior
Hi,
While running transformers API models on my local machine, I am facing this issue: Failed to import transformers.pipelines because of the following error (look up to see its traceback):
module 'numpy' has no attribute 'object'.
`np.object` was a deprecated alias for the builtin `object`. How to fix this? | 07-03-2023 06:21:31 | 07-03-2023 06:21:31 | Hi @seema-AIML
Could you post the full trace log, please. Thank you in advance.<|||||>from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m")
def tokenize_function(examples):
return tokenizer(examples["text"], padding="max_length", truncation=True)
tokenized_datasets = dataset.map(tokenize_function, batched=True)
from transformers import AutoModelForSequenceClassification
model = AutoModelForSequenceClassification.from_pretrained("bigscience/bloom-560m", num_labels=5)
from builtins import object
from transformers import TrainingArguments
training_args = TrainingArguments(output_dir="test_trainer")
While creating TrainingArguments getting below error
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
~\Anaconda3\lib\site-packages\transformers\utils\import_utils.py in _get_module(self, module_name)
1125 try:
-> 1126 return importlib.import_module("." + module_name, self.__name__)
1127 except Exception as e:
~\Anaconda3\lib\importlib\__init__.py in import_module(name, package)
126 level += 1
--> 127 return _bootstrap._gcd_import(name[level:], package, level)
128
~\Anaconda3\lib\importlib\_bootstrap.py in _gcd_import(name, package, level)
~\Anaconda3\lib\importlib\_bootstrap.py in _find_and_load(name, import_)
~\Anaconda3\lib\importlib\_bootstrap.py in _find_and_load_unlocked(name, import_)
~\Anaconda3\lib\importlib\_bootstrap.py in _load_unlocked(spec)
~\Anaconda3\lib\importlib\_bootstrap_external.py in exec_module(self, module)
~\Anaconda3\lib\importlib\_bootstrap.py in _call_with_frames_removed(f, *args, **kwds)
~\Anaconda3\lib\site-packages\transformers\training_args.py in <module>
29 from .debug_utils import DebugOption
---> 30 from .trainer_utils import (
31 EvaluationStrategy,
~\Anaconda3\lib\site-packages\transformers\trainer_utils.py in <module>
46 if is_tf_available():
---> 47 import tensorflow as tf
48
~\Anaconda3\lib\site-packages\tensorflow\__init__.py in <module>
40
---> 41 from tensorflow.python.tools import module_util as _module_util
42 from tensorflow.python.util.lazy_loader import LazyLoader as _LazyLoader
~\Anaconda3\lib\site-packages\tensorflow\python\__init__.py in <module>
45 # Bring in subpackages.
---> 46 from tensorflow.python import data
47 from tensorflow.python import distribute
~\Anaconda3\lib\site-packages\tensorflow\python\data\__init__.py in <module>
24 # pylint: disable=unused-import
---> 25 from tensorflow.python.data import experimental
26 from tensorflow.python.data.ops.dataset_ops import AUTOTUNE
~\Anaconda3\lib\site-packages\tensorflow\python\data\experimental\__init__.py in <module>
96 # pylint: disable=unused-import
---> 97 from tensorflow.python.data.experimental import service
98 from tensorflow.python.data.experimental.ops.batching import dense_to_ragged_batch
~\Anaconda3\lib\site-packages\tensorflow\python\data\experimental\service\__init__.py in <module>
352
--> 353 from tensorflow.python.data.experimental.ops.data_service_ops import distribute
354 from tensorflow.python.data.experimental.ops.data_service_ops import from_dataset_id
~\Anaconda3\lib\site-packages\tensorflow\python\data\experimental\ops\data_service_ops.py in <module>
25 from tensorflow.python.compat import compat
---> 26 from tensorflow.python.data.experimental.ops import compression_ops
27 from tensorflow.python.data.experimental.ops.distribute_options import AutoShardPolicy
~\Anaconda3\lib\site-packages\tensorflow\python\data\experimental\ops\compression_ops.py in <module>
19
---> 20 from tensorflow.python.data.util import structure
21 from tensorflow.python.ops import gen_experimental_dataset_ops as ged_ops
~\Anaconda3\lib\site-packages\tensorflow\python\data\util\structure.py in <module>
25
---> 26 from tensorflow.python.data.util import nest
27 from tensorflow.python.framework import composite_tensor
~\Anaconda3\lib\site-packages\tensorflow\python\data\util\nest.py in <module>
39
---> 40 from tensorflow.python.framework import sparse_tensor as _sparse_tensor
41 from tensorflow.python.util import _pywrap_utils
~\Anaconda3\lib\site-packages\tensorflow\python\framework\sparse_tensor.py in <module>
27 from tensorflow.python.framework import composite_tensor
---> 28 from tensorflow.python.framework import constant_op
29 from tensorflow.python.framework import dtypes
~\Anaconda3\lib\site-packages\tensorflow\python\framework\constant_op.py in <module>
28 from tensorflow.python.eager import context
---> 29 from tensorflow.python.eager import execute
30 from tensorflow.python.framework import dtypes
~\Anaconda3\lib\site-packages\tensorflow\python\eager\execute.py in <module>
26 from tensorflow.python.eager import core
---> 27 from tensorflow.python.framework import dtypes
28 from tensorflow.python.framework import ops
~\Anaconda3\lib\site-packages\tensorflow\python\framework\dtypes.py in <module>
584 types_pb2.DT_STRING:
--> 585 np.object,
586 types_pb2.DT_COMPLEX64:
~\Anaconda3\lib\site-packages\numpy\__init__.py in __getattr__(attr)
304 if attr in __former_attrs__:
--> 305 raise AttributeError(__former_attrs__[attr])
306
AttributeError: module 'numpy' has no attribute 'object'.
`np.object` was a deprecated alias for the builtin `object`. To avoid this error in existing code, use `object` by itself. Doing this will not modify any behavior and is safe.
The aliases was originally deprecated in NumPy 1.20; for more details and guidance see the original release note at:
https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
The above exception was the direct cause of the following exception:
__________________________________________________________________________________________________________________________
RuntimeError Traceback (most recent call last)
<ipython-input-10-fdfb390c11be> in <module>
1 from builtins import object
----> 2 from transformers import TrainingArguments
3
4 training_args = TrainingArguments(output_dir="test_trainer")
~\Anaconda3\lib\importlib\_bootstrap.py in _handle_fromlist(module, fromlist, import_, recursive)
~\Anaconda3\lib\site-packages\transformers\utils\import_utils.py in __getattr__(self, name)
1114 value = self._get_module(name)
1115 elif name in self._class_to_module.keys():
-> 1116 module = self._get_module(self._class_to_module[name])
1117 value = getattr(module, name)
1118 else:
~\Anaconda3\lib\site-packages\transformers\utils\import_utils.py in _get_module(self, module_name)
1126 return importlib.import_module("." + module_name, self.__name__)
1127 except Exception as e:
-> 1128 raise RuntimeError(
1129 f"Failed to import {self.__name__}.{module_name} because of the following error (look up to see its"
1130 f" traceback):\n{e}"
RuntimeError: Failed to import transformers.training_args because of the following error (look up to see its traceback):
module 'numpy' has no attribute 'object'.
`np.object` was a deprecated alias for the builtin `object`. To avoid this error in existing code, use `object` by itself. Doing this will not modify any behavior and is safe.
The aliases was originally deprecated in NumPy 1.20; for more details and guidance see the original release note at:
https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
____________________________________________________________________________________________________________________________
How to fix this?
<|||||>The error occurs in tensorflow file.
```bash
~\Anaconda3\lib\site-packages\tensorflow\python\framework\dtypes.py in
584 types_pb2.DT_STRING:
--> 585 np.object,
```
If you don't need tensorflow, the quick way to check is to uninstall tensorflow and see if the issue is resolved.
You can also try to create a new virtual environment, and install as `pip install transformers[torch]`.
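Something along these lines (shown for a Unix shell; the package names are just the common TensorFlow distributions):

```bash
# see whether any TensorFlow flavour is installed in the current environment
pip list | grep -i tensorflow
pip uninstall -y tensorflow tensorflow-cpu tensorflow-gpu

# or start from a clean virtual environment with the PyTorch extras only
python -m venv hf-env
source hf-env/bin/activate
pip install "transformers[torch]"
```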
<|||||>created new a new virtual environment, and installed transformers[torch]. Still getting same error.
I have not installed tensorflow in new virtual environment. when tried to uninstall tensorflow getting warning as WARNING: Skipping tensorflow as it is not installed.<|||||>Please provide the new full error log (the one that is run within the new environment).<|||||>---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
~\Anaconda3\lib\site-packages\transformers\utils\import_utils.py in _get_module(self, module_name)
1125 try:
-> 1126 return importlib.import_module("." + module_name, self.__name__)
1127 except Exception as e:
~\Anaconda3\lib\importlib\__init__.py in import_module(name, package)
126 level += 1
--> 127 return _bootstrap._gcd_import(name[level:], package, level)
128
~\Anaconda3\lib\importlib\_bootstrap.py in _gcd_import(name, package, level)
~\Anaconda3\lib\importlib\_bootstrap.py in _find_and_load(name, import_)
~\Anaconda3\lib\importlib\_bootstrap.py in _find_and_load_unlocked(name, import_)
~\Anaconda3\lib\importlib\_bootstrap.py in _load_unlocked(spec)
~\Anaconda3\lib\importlib\_bootstrap_external.py in exec_module(self, module)
~\Anaconda3\lib\importlib\_bootstrap.py in _call_with_frames_removed(f, *args, **kwds)
~\Anaconda3\lib\site-packages\transformers\training_args.py in <module>
29 from .debug_utils import DebugOption
---> 30 from .trainer_utils import (
31 EvaluationStrategy,
~\Anaconda3\lib\site-packages\transformers\trainer_utils.py in <module>
46 if is_tf_available():
---> 47 import tensorflow as tf
48
~\Anaconda3\lib\site-packages\tensorflow\__init__.py in <module>
40
---> 41 from tensorflow.python.tools import module_util as _module_util
42 from tensorflow.python.util.lazy_loader import LazyLoader as _LazyLoader
~\Anaconda3\lib\site-packages\tensorflow\python\__init__.py in <module>
45 # Bring in subpackages.
---> 46 from tensorflow.python import data
47 from tensorflow.python import distribute
~\Anaconda3\lib\site-packages\tensorflow\python\data\__init__.py in <module>
24 # pylint: disable=unused-import
---> 25 from tensorflow.python.data import experimental
26 from tensorflow.python.data.ops.dataset_ops import AUTOTUNE
~\Anaconda3\lib\site-packages\tensorflow\python\data\experimental\__init__.py in <module>
96 # pylint: disable=unused-import
---> 97 from tensorflow.python.data.experimental import service
98 from tensorflow.python.data.experimental.ops.batching import dense_to_ragged_batch
~\Anaconda3\lib\site-packages\tensorflow\python\data\experimental\service\__init__.py in <module>
352
--> 353 from tensorflow.python.data.experimental.ops.data_service_ops import distribute
354 from tensorflow.python.data.experimental.ops.data_service_ops import from_dataset_id
~\Anaconda3\lib\site-packages\tensorflow\python\data\experimental\ops\data_service_ops.py in <module>
25 from tensorflow.python.compat import compat
---> 26 from tensorflow.python.data.experimental.ops import compression_ops
27 from tensorflow.python.data.experimental.ops.distribute_options import AutoShardPolicy
~\Anaconda3\lib\site-packages\tensorflow\python\data\experimental\ops\compression_ops.py in <module>
19
---> 20 from tensorflow.python.data.util import structure
21 from tensorflow.python.ops import gen_experimental_dataset_ops as ged_ops
~\Anaconda3\lib\site-packages\tensorflow\python\data\util\structure.py in <module>
25
---> 26 from tensorflow.python.data.util import nest
27 from tensorflow.python.framework import composite_tensor
~\Anaconda3\lib\site-packages\tensorflow\python\data\util\nest.py in <module>
39
---> 40 from tensorflow.python.framework import sparse_tensor as _sparse_tensor
41 from tensorflow.python.util import _pywrap_utils
~\Anaconda3\lib\site-packages\tensorflow\python\framework\sparse_tensor.py in <module>
27 from tensorflow.python.framework import composite_tensor
---> 28 from tensorflow.python.framework import constant_op
29 from tensorflow.python.framework import dtypes
~\Anaconda3\lib\site-packages\tensorflow\python\framework\constant_op.py in <module>
28 from tensorflow.python.eager import context
---> 29 from tensorflow.python.eager import execute
30 from tensorflow.python.framework import dtypes
~\Anaconda3\lib\site-packages\tensorflow\python\eager\execute.py in <module>
26 from tensorflow.python.eager import core
---> 27 from tensorflow.python.framework import dtypes
28 from tensorflow.python.framework import ops
~\Anaconda3\lib\site-packages\tensorflow\python\framework\dtypes.py in <module>
584 types_pb2.DT_STRING:
--> 585 np.object,
586 types_pb2.DT_COMPLEX64:
~\Anaconda3\lib\site-packages\numpy\__init__.py in __getattr__(attr)
304 if attr in __former_attrs__:
--> 305 raise AttributeError(__former_attrs__[attr])
306
AttributeError: module 'numpy' has no attribute 'object'.
`np.object` was a deprecated alias for the builtin `object`. To avoid this error in existing code, use `object` by itself. Doing this will not modify any behavior and is safe.
The aliases was originally deprecated in NumPy 1.20; for more details and guidance see the original release note at:
https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
The above exception was the direct cause of the following exception:
RuntimeError Traceback (most recent call last)
<ipython-input-15-e0222726b472> in <module>
----> 1 from transformers import TrainingArguments
2
3 training_args = TrainingArguments(output_dir="test_trainer")
~\Anaconda3\lib\importlib\_bootstrap.py in _handle_fromlist(module, fromlist, import_, recursive)
~\Anaconda3\lib\site-packages\transformers\utils\import_utils.py in __getattr__(self, name)
1114 value = self._get_module(name)
1115 elif name in self._class_to_module.keys():
-> 1116 module = self._get_module(self._class_to_module[name])
1117 value = getattr(module, name)
1118 else:
~\Anaconda3\lib\site-packages\transformers\utils\import_utils.py in _get_module(self, module_name)
1126 return importlib.import_module("." + module_name, self.__name__)
1127 except Exception as e:
-> 1128 raise RuntimeError(
1129 f"Failed to import {self.__name__}.{module_name} because of the following error (look up to see its"
1130 f" traceback):\n{e}"
RuntimeError: Failed to import transformers.training_args because of the following error (look up to see its traceback):
module 'numpy' has no attribute 'object'.
`np.object` was a deprecated alias for the builtin `object`. To avoid this error in existing code, use `object` by itself. Doing this will not modify any behavior and is safe.
The aliases was originally deprecated in NumPy 1.20; for more details and guidance see the original release note at:
https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
Its same error<|||||>The error still shows `tensorflow` is in your environment.
Could you show us the results of `transformers-cli env`, `pip show tensorflow` and `pip show tensorflow-cpu`<|||||>result of transformers-cli env
- `transformers` version: 4.30.2
- Platform: Windows-10-10.0.19041-SP0
- Python version: 3.8.8
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.1+cpu (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
(hface) (base) D:\>pip show tensorflow
WARNING: Package(s) not found: tensorflow
(hface) (base) D:\>pip show tensorflow-cpu
WARNING: Package(s) not found: tensorflow-cpu
<|||||>Hmm. The TF detection logic is in the following block.
https://github.com/huggingface/transformers/blob/cd4584e3c809bb9e1392ccd3fe38b40daba5519a/src/transformers/utils/import_utils.py#L144-L183
You env. might still have something listed in
https://github.com/huggingface/transformers/blob/cd4584e3c809bb9e1392ccd3fe38b40daba5519a/src/transformers/utils/import_utils.py#L155-L165
You can either check each of them and uninstall if they appear. Otherwise much easier, you can try to set the env. varialbe `USE_TF` to `False`, either by `set USE_TF=0` or `export USE_TF=0`<|||||>I have set USE_TF = 0
%env USE_TF=0
from transformers import AutoTokenizer, BartForConditionalGeneration, Trainer, TrainingArguments
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")
training_args = TrainingArguments(
output_dir='./results', # output directory
num_train_epochs=3, # total number of training epochs
per_device_train_batch_size=16, # batch size per device during training
per_device_eval_batch_size=64, # batch size for evaluation
warmup_steps=500, # number of warmup steps for learning rate scheduler
weight_decay=0.01, # strength of weight decay
logging_dir='./logs', # directory for storing logs
logging_steps=10,
)
trainer = Trainer(
model=model,
args=training_args,
train_dataset=train_dataset,
eval_dataset=val_dataset
)
trainer.train()
Still same error
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-10-d3d13a2b0587> in <module>
17
18
---> 19 trainer = Trainer(
20 model=model, # the instantiated Transformers model to be trained
21 args=training_args, # training arguments, defined above
~\Anaconda3\lib\site-packages\transformers\trainer.py in __init__(self, model, args, data_collator, train_dataset, eval_dataset, tokenizer, model_init, compute_metrics, callbacks, optimizers, preprocess_logits_for_metrics)
517 default_callbacks = DEFAULT_CALLBACKS + get_reporting_integration_callbacks(self.args.report_to)
518 callbacks = default_callbacks if callbacks is None else default_callbacks + callbacks
--> 519 self.callback_handler = CallbackHandler(
520 callbacks, self.model, self.tokenizer, self.optimizer, self.lr_scheduler
521 )
~\Anaconda3\lib\site-packages\transformers\trainer_callback.py in __init__(self, callbacks, model, tokenizer, optimizer, lr_scheduler)
294 self.callbacks = []
295 for cb in callbacks:
--> 296 self.add_callback(cb)
297 self.model = model
298 self.tokenizer = tokenizer
~\Anaconda3\lib\site-packages\transformers\trainer_callback.py in add_callback(self, callback)
311
312 def add_callback(self, callback):
--> 313 cb = callback() if isinstance(callback, type) else callback
314 cb_class = callback if isinstance(callback, type) else callback.__class__
315 if cb_class in [c.__class__ for c in self.callbacks]:
~\Anaconda3\lib\site-packages\transformers\integrations.py in __init__(self)
926 if not is_mlflow_available():
927 raise RuntimeError("MLflowCallback requires mlflow to be installed. Run `pip install mlflow`.")
--> 928 import mlflow
929
930 self._MAX_PARAM_VAL_LENGTH = mlflow.utils.validation.MAX_PARAM_VAL_LENGTH
~\Anaconda3\lib\site-packages\mlflow\__init__.py in <module>
48 try:
49 # pylint: disable=unused-import
---> 50 import mlflow.catboost as catboost # noqa: E402
51 import mlflow.fastai as fastai # noqa: E402
52 import mlflow.gluon as gluon # noqa: E402
~\Anaconda3\lib\site-packages\mlflow\catboost.py in <module>
22
23 import mlflow
---> 24 from mlflow import pyfunc
25 from mlflow.models import Model, ModelInputExample
26 from mlflow.models.model import MLMODEL_FILE_NAME
~\Anaconda3\lib\site-packages\mlflow\pyfunc\__init__.py in <module>
217 from typing import Any, Union, List, Dict
218 import mlflow
--> 219 import mlflow.pyfunc.model
220 import mlflow.pyfunc.utils
221 from mlflow.models import Model, ModelSignature, ModelInputExample
~\Anaconda3\lib\site-packages\mlflow\pyfunc\model.py in <module>
15 import mlflow.utils
16 from mlflow.exceptions import MlflowException
---> 17 from mlflow.models import Model
18 from mlflow.models.model import MLMODEL_FILE_NAME
19 from mlflow.protos.databricks_pb2 import INVALID_PARAMETER_VALUE
~\Anaconda3\lib\site-packages\mlflow\models\__init__.py in <module>
24 from .model import Model
25 from .flavor_backend import FlavorBackend
---> 26 from .signature import ModelSignature, infer_signature
27 from .utils import ModelInputExample
28 from ..utils.environment import infer_pip_requirements
~\Anaconda3\lib\site-packages\mlflow\models\signature.py in <module>
10 import numpy as np
11
---> 12 from mlflow.types.schema import Schema
13 from mlflow.types.utils import _infer_schema
14
~\Anaconda3\lib\site-packages\mlflow\types\__init__.py in <module>
4 """
5
----> 6 from .schema import DataType, ColSpec, Schema, TensorSpec
7
8 __all__ = ["Schema", "ColSpec", "DataType", "TensorSpec"]
~\Anaconda3\lib\site-packages\mlflow\types\schema.py in <module>
18
19
---> 20 class DataType(Enum):
21 """
22 MLflow data types.
~\Anaconda3\lib\site-packages\mlflow\types\schema.py in DataType()
47 string = (6, np.dtype("str"), "StringType", _pandas_string_type())
48 """Text data."""
---> 49 binary = (7, np.dtype("bytes"), "BinaryType", np.object)
50 """Sequence of raw bytes."""
51 datetime = (8, np.dtype("datetime64"), "TimestampType")
~\Anaconda3\lib\site-packages\numpy\__init__.py in __getattr__(attr)
303
304 if attr in __former_attrs__:
--> 305 raise AttributeError(__former_attrs__[attr])
306
307 # Importing Tester requires importing all of UnitTest which is not a
AttributeError: module 'numpy' has no attribute 'object'.
`np.object` was a deprecated alias for the builtin `object`. To avoid this error in existing code, use `object` by itself. Doing this will not modify any behavior and is safe.
The aliases was originally deprecated in NumPy 1.20; for more details and guidance see the original release note at:
https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
โ<|||||>Try to set `report_to="none"` in `training_args = TrainingArguments`. Your environment has `mlflow` installed which might use some deprecated `numpy` code. Or you can upgrade your `mflow` versions.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 24,630 | open | Loading GPT-Neo-2.7B has error | ### System Info
transformers==4.28.1, torch==1.13.1, dgx-A100, python=3.8.15
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
model_name = "/models/gpt-neo-2.7B"
model = AutoModelForCausalLM.from_pretrained(model_name, low_cpu_mem_usage=True)
```
Then I get following bug:
```
OSError: Unable to load weights from pytorch checkpoint file for '/models/gpt-neo-2.7B/pytorch_model.bin' at '/models/gpt-neo-2.7B/pytorch_model.bin'. If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True.
```
However, for gpt-neo-125m and gpt-neo-1.3b, no bug occurs.
Could you please help me with this issue? Many thanks!
### Expected behavior
Load the model successfully. | 07-03-2023 03:30:30 | 07-03-2023 03:30:30 | Hi @YIYANGCAI
Do you have a checkpoint in your local machine with path `/models/gpt-neo-2.7B`?<|||||>yes, I pre-downloaded it at this path.<|||||>could you check if you have absolute path `/models/gpt-neo-2.7B` or you intend to use relative path?
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 24,629 | closed | [`MPT`] Add MosaicML's `MPT` model to transformers | # What does this PR do?
Fixes #23174
First questions:
- [ย ] Should we keep the nested config for attention and init configs? Pros: backward, Cons: not what we usually do, can't modify on the fly, harder to maintain
- [ ] Should we keep flash attention or go with better transformers
- [ ] Do we want 100% backward compatibility
# TODOS :
- [ ] Properly setup the config
- [ ] Write a mapping to go from original mosaicml config to new config (since attribute names have to be changed)
- [ ] Design tests, clone the repo to `hf-internal-testing` since at the end we intend to remove the code from the hub. Test attention patterns , flash and trition
- [x] One model on file.
# Notes :
Tokenizer is the same as GPTNeoX, only has a fast version, adds sentinel tokens. We don't really need a custom config for this and should just always have these in the tokenizer config.
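As a rough illustration of that idea (the sentinel-token names below are hypothetical placeholders, not necessarily the ones MPT ships with):

```python
from transformers import AutoTokenizer

# reuse the GPT-NeoX fast tokenizer and register the extra sentinel tokens via the tokenizer config
tok = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")
tok.add_special_tokens(
    {"additional_special_tokens": [f"<extra_id_{i}>" for i in range(3)]}  # hypothetical names
)
print(tok.additional_special_tokens)
```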
| 07-03-2023 02:49:47 | 07-03-2023 02:49:47 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@sgugger we think the PR is ready for a review ๐ ! The logits tests pass with tolerance `1e-12` between the model on the Hub and ours. There is nothing to do on the Hub as the current code is perfectly backward compatible with their config and weights.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, MptForCausalLM
model_id = "mosaicml/mpt-7b"
tok = AutoTokenizer.from_pretrained(model_id)
model = MptForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map={"":1}, load_in_4bit=True)
model_trust_remote_code = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map={"":0}, load_in_4bit=True, trust_remote_code=True)
outputs_transformers = model(torch.LongTensor([[1, 2, 3, 4, 5]]).to(1), output_hidden_states=True)
outputs_trust_remote_code = model_trust_remote_code(torch.LongTensor([[1, 2, 3, 4, 5]]).to(0))
print(torch.allclose(outputs_transformers.logits, outputs_trust_remote_code.logits.to(1), atol=1e-12, rtol=1e-12))
>>> True
```
Currently we don't support advanced features such as triton attention or custom init, hence we advise super users who want to benefit from these features to load the model with `trust_remote_code=True`
cc also @Narsil and @OlivierDehaene for TGI - I think things should work smoothly on your side<|||||>> cc also @Narsil and @OlivierDehaene for TGI - I think things should work smoothly on your side
MPT is already supported actually. (No triton, nor flash either, because of alibi) |
transformers | 24,628 | closed | [i18n-<languageCode>] Translating docs to <languageName> | <!--
Note: Please search to see if an issue already exists for the language you are trying to translate.
-->
Hi!
Let's bring the documentation to all the <languageName>-speaking community ๐ (currently 0 out of 267 complete)
Who would want to translate? Please follow the ๐ค [TRANSLATING guide](https://github.com/huggingface/transformers/blob/main/docs/TRANSLATING.md). Here is a list of the files ready for translation. Let us know in this issue if you'd like to translate any, and we'll add your name to the list.
Some notes:
* Please translate using an informal tone (imagine you are talking with a friend about transformers ๐ค).
* Please translate in a gender-neutral way.
* Add your translations to the folder called `<languageCode>` inside the [source folder](https://github.com/huggingface/transformers/tree/main/docs/source).
* Register your translation in `<languageCode>/_toctree.yml`; please follow the order of the [English version](https://github.com/huggingface/transformers/blob/main/docs/source/en/_toctree.yml).
* Once you're finished, open a pull request and tag this issue by including #issue-number in the description, where issue-number is the number of this issue. Please ping @ArthurZucker, @sgugger for review.
* ๐ If you'd like others to help you with the translation, you can also post in the ๐ค [forums](https://discuss.huggingface.co/).
## Get Started section
- [ ] [index.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/index.md) https://github.com/huggingface/transformers/pull/20180
- [ ] [quicktour.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/quicktour.md) (waiting for initial PR to go through)
- [ ] [installation.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/installation.md).
## Tutorial section
- [ ] [pipeline_tutorial.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/pipeline_tutorial.md)
- [ ] [autoclass_tutorial.md](https://github.com/huggingface/transformers/blob/master/docs/source/autoclass_tutorial.md)
- [ ] [preprocessing.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/preprocessing.md)
- [ ] [training.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/training.md)
- [ ] [accelerate.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/accelerate.md)
- [ ] [model_sharing.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_sharing.md)
- [ ] [multilingual.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/multilingual.md)
<!--
Keep on adding more as you go ๐ฅ
-->
| 07-03-2023 00:56:20 | 07-03-2023 00:56:20 | Hi @Everton-12, Could you make sure to fill in the template for this issue? At the moment there is no specified language. |
transformers | 24,627 | open | Create SECURITY.md | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| 07-02-2023 17:00:45 | 07-02-2023 17:00:45 | Hi @tarzzii, thanks for opening this PR.
Could you fill out the PR description please?
The third box was checked - but I can't see any link to the relevant discussion. Could you add that too please?
The final box was checked, but I do not see any tests<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 24,626 | closed | Question about using the Trainer | When the Trainer is instantiated, isn't the loaded model already passed in? Why can trainer.train(resume_from_checkpoint=checkpoint) still load the model from a saved checkpoint? | 07-02-2023 14:30:39 | 07-02-2023 14:30:39 | Hi @fxb392
It's mostly for loading the optimizer's scheduler and other states. But it's also convenient if you load a canonical model (say from the Hub) while instantiating a trainer but want to use other checkpoints.
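A small sketch of that flow (the model id and checkpoint path are only examples, and `train_dataset` is assumed to be prepared elsewhere):

```python
from transformers import AutoModelForSequenceClassification, Trainer, TrainingArguments

model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
args = TrainingArguments(output_dir="out")
trainer = Trainer(model=model, args=args, train_dataset=train_dataset)  # train_dataset defined elsewhere

# picks up the optimizer/scheduler states (and the weights) saved in that checkpoint folder
trainer.train(resume_from_checkpoint="out/checkpoint-500")
```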
You don'<|||||>OK, thanks for your guidance. Does this only mean the trainer can conveniently load any checkpoint for training?<|||||>Yes, but you have to be careful to load the checkpoint which is saved by a trainer that loaded the same model type and the same model configuration.<|||||>Okay, I understand, thank you again.
transformers | 24,625 | open | 🌐 [i18n-KO] Translated `model_summary.md` to Korean | <!-- Please make the PR title "🌐 [i18n-KO] Translated `<your_file>.md` to Korean" -->
# What does this PR do?
Translated the `model_summary.md` file of the documentation to Korean ๐
Thank you in advance for your review!
Part of https://github.com/huggingface/transformers/issues/20179
<!-- ๋ฉ์ธ ์ด์์ ๊ธฐ๋ก์ด ๋จ์์! ๊ฐ์ง์ฐ๊ตฌ์ ๋ฆฌํฌ๋ฅผ ์ฌ์ฉํด ์ฐ์ตํ์ค๋๋ ์ ๊ฑฐํด์ฃผ์๋ฉด ๊ฐ์ฌํ๊ฒ ์ต๋๋ค! :smile: -->
## Before reviewing
- [x] Check for missing / redundant translations (๋ฒ์ญ ๋๋ฝ/์ค๋ณต ๊ฒ์ฌ)
- [x] Grammar Check (๋ง์ถค๋ฒ ๊ฒ์ฌ)
- [x] Review or Add new terms to glossary (์ฉ์ด ํ์ธ ๋ฐ ์ถ๊ฐ)
- [x] Check Inline TOC (e.g. `[[lowercased-header]]`)
- [x] Check live-preview for gotchas (live-preview๋ก ์ ์์๋ ํ์ธ)
## Who can review? (Initial)
<!-- 1. Please reveal the comment below, which requests a review from the PseudoLab team members, only after all the checks above are complete! -->
<!-- Team PseudoLab, may you please review this PR? -->
@0525hhgus, @KIHOON71, @sim-so, @gabrielwithappy, @HanNayeoniee, @wonhyeongseo, @jungnerd
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review? (Final)
<!-- 2. ๊ฐ์ง์ฐ๊ตฌ์ ํ์๋ค๊ณผ ๋ฆฌ๋ทฐ๊ฐ ๋๋ ํ์๋ง ํ๊น
ํ์ด์ค ์ง์๋ค์๊ฒ ๋ฆฌ๋ทฐ ์์ฒญํ๋ ์๋ ์ฃผ์์ ๋
ธ์ถํด์ฃผ์ธ์! -->
<!-- May you please review this PR? -->
<!-- @sgugger, @ArthurZucker, @eunseojo --> | 07-02-2023 11:38:31 | 07-02-2023 11:38:31 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24625). All of your documentation changes will be reflected on that endpoint. |
transformers | 24,624 | closed | LlamaForCausalLM returning prompt without answer | ### System Info
transformers: 4.30.2
Python: 3.9.17
OS: MacOS 13.3.1 (a)
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I took the below code from the official documentation on the HuggingFace website: https://huggingface.co/openlm-research/open_llama_13b_easylm, and slightly adapted it to match my use case that is information extraction from unstructured text (named entity recognition) using LLMs.
```
import torch
from transformers import LlamaTokenizer, LlamaForCausalLM
model_path = 'openlm-research/open_llama_13b'
tokenizer = LlamaTokenizer.from_pretrained(model_path)
model = LlamaForCausalLM.from_pretrained(
model_path,
torch_dtype=torch.float16,
device_map='auto',
)
prompt = "What are the named entities in the following text: 'The Moon revolves around the Earth for over 4 billion years.'"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
generation_output = model.generate(
input_ids=input_ids, max_new_tokens=32
)
answer = tokenizer.decode(generation_output[0])
```
This code returns the following output:
`"<s>What are the named entities in the following text: 'The Moon revolves around the Earth for over 4 billion years.'?\nThe named entities in the following text are:\nThe Moon revolves around the Earth over 4 billion years.\nThe Moon revolves around the"`
which is not what I was hoping for. Interestingly, when I use the default example shown in the documentation, i.e.
`prompt = 'Q: What is the largest animal?\nA:'`
I get the following output:
`'<s>Q: What is the largest animal?\nA: A whale.\nQ: What is the largest animal?\nA: A whale.\nQ: What is the largest animal?\nA: A whale'`
which is slightly better, although I don't quite understand how to limit the engine not to keep repeating itself.
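(For reference, this kind of repetition is usually curbed with generation arguments; a hedged sketch with illustrative values, reusing the variables above:)

```python
generation_output = model.generate(
    input_ids=input_ids,
    max_new_tokens=32,
    do_sample=False,
    no_repeat_ngram_size=3,   # forbid repeating any 3-gram
    repetition_penalty=1.2,   # penalise tokens that already appeared
)
```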
### Expected behavior
The expected output would be something like:
`{"ASTRONOMICAL_NAME": "Moon", "ASTRONOMICAL_NAME": "Earth", "PERIOD": "4 billion years"}`
| 07-02-2023 11:02:35 | 07-02-2023 11:02:35 | Hi @leweex95, thanks for raising an issue!
This is a question best placed in our [forums](https://discuss.huggingface.co/). We try to reserve the github issues for feature requests and bug reports. |
transformers | 24,623 | closed | Hi | Hello | 07-02-2023 10:27:06 | 07-02-2023 10:27:06 | |
transformers | 24,622 | closed | [Patch-t5-tokenizer] Patches the changes on T5 to make sure previous behaviour is still valide for beginning of words | # What does this PR do?
There was a small typo that modified the behaviour in #24565; the tests were not able to catch it. #24569
When a sentence does not start with a space, a space was added.
Before:
```python
>>>tokenizer.tokenize("Hello <extra_id_0>")
['_', '_Hello', '<extra_id_0>']
```
After:
```python
>>>tokenizer.tokenize("Hello <extra_id_0>")
['_Hello', '<extra_id_0>']
```
# Big bug 35 models involved
Not only punctuation but anything after a special token is basically wrong... Let's ignore the fact that we also split when it's the beginning of a word (less important).
<img width="1020" alt="image" src="https://github.com/huggingface/transformers/assets/48595927/d805bd21-4f2a-411b-ad2b-754f4f69517c">
Tests were added as they were green before merging | 07-02-2023 03:55:17 | 07-02-2023 03:55:17 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Ran all the tests with `RUN_SLOW`, switch-ci is fixed<|||||>Until the added tokens are fixed, this will break the slow version that use extra ids, because by default we strip left and right..... So buggy <|||||>The bug has always been in T5, but since some models were trained with the bugged T5, we will let the user decide whether or not they incorparate the change |
transformers | 24,621 | closed | Pop | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 07-02-2023 01:08:09 | 07-02-2023 01:08:09 | |
transformers | 24,620 | closed | BART is not found 404 | Pages for BART models are not responding,
e.g.:
https://huggingface.co/facebook/bart-large-cnn
https://huggingface.co/facebook/bart-base | 07-01-2023 20:38:59 | 07-01-2023 20:38:59 | Back up now! ๐ค |
transformers | 24,619 | open | AutoTokenizer always tries to download from the hub even if the model is cached. Thus it fails to run in a Docker environment without SSL. | ### System Info
python=3.9
transformers=4.30.2
### Who can help?
@ArthurZucker
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. Run `AutoTokenizer.from_pretrained("path_to_cached_snapshot_directory")`
This will throw an SSL error because there is no internet connection.
Error:
requests.exceptions.SSLError: HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /xlm-roberta-large/resolve/main/tokenizer_config.json (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1129)')))
problem found in /transformers/utils/hub.py file:
Method def cached_file #line 300
Problem is possibly here:
```python
# line 401
if _commit_hash is not None and not force_download:
    # If the file is cached under that commit hash, we return it directly.
    resolved_file = try_to_load_from_cache(
        path_or_repo_id, full_filename, cache_dir=cache_dir, revision=_commit_hash, repo_type=repo_type
    )
    if resolved_file is not None:
        if resolved_file is not _CACHED_NO_EXIST:
            return resolved_file
        elif not _raise_exceptions_for_missing_entries:
            return None
        else:
            raise EnvironmentError(f"Could not locate {full_filename} inside {path_or_repo_id}.")
```
The script only tries to load from the cache if a `_commit_hash` is provided, which will not be the case in the example above.
I tried to solve this internally; this might help:
```python
# line 401
if not force_download:
    # If the file is cached under that commit hash, we return it directly.
    resolved_file = try_to_load_from_cache(
        path_or_repo_id, full_filename, cache_dir=cache_dir, revision=_commit_hash, repo_type=repo_type
    )
    if resolved_file is not None:
        if resolved_file is not _CACHED_NO_EXIST:
            return resolved_file
        elif not _raise_exceptions_for_missing_entries:
            return None
        elif _commit_hash is not None:
            raise EnvironmentError(f"Could not locate {full_filename} inside {path_or_repo_id}.")
```
### Expected behavior
It should not download an already cached file. | 07-01-2023 20:06:40 | 07-01-2023 20:06:40 | Hey! Thanks for reporting. Before diving a bit deeper, there is a `local_files_only` argument that you can set when calling `from_pretrained`, which activates the `offline mode`. You can also set it using `TRANSFORMERS_OFFLINE=1`. Can you try with this? It was designed for specific cases like this one! <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
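For reference, a minimal sketch of the workaround suggested above (the model id is taken from the traceback and is only illustrative; it assumes the files are already in the local cache):
```python
from transformers import AutoTokenizer

# With local_files_only=True the tokenizer is resolved from the local cache
# and no request to huggingface.co is attempted.
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-large", local_files_only=True)
```
Alternatively, the whole process can be run in offline mode by setting the environment variable `TRANSFORMERS_OFFLINE=1`.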
transformers | 24,618 | closed | precompiled_charsmap checking before adding to the normalizers' list for XLNetTokenizerFast conversion. | # What does this PR do?
This is a small change that checks the `precompiled_charsmap` during the conversion of a slow tokenizer to `XLNetTokenizerFast`. It checks whether the `precompiled_charsmap` is empty before initializing `normalizers.Precompiled` from the tokenizers library. If a [Sentencepiece](https://github.com/google/sentencepiece) tokenizer model is trained with the `identity` normalization rule, i.e. no normalization is applied, it fails to initialize an XLNetTokenizerFast, as discussed in issue #24616. This PR solves this issue.
Fixes #24616
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@ArthurZucker
| 07-01-2023 19:42:11 | 07-01-2023 19:42:11 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Hey @ArthurZucker I have incorporated the changes so that it supports all the models.
It is failing in one test case, and it seems the test case is not related to this PR. From the details I see that it fails due to the `module 'PIL.Image' has no attribute 'LINEAR'` error. Can it be related to the module or environment where the test is running? Do I need to work on this test case for this PR?<|||||>For the `test_exotic_models` tests, a fix pinning the Pillow version has now been merged into main. Could you rebase to include these and trigger a re-run of the CI?<|||||>Hey @ArthurZucker and @amyeroberts, all tests passed after the rebase. Can you have a look?<|||||>Perfect! Thanks for addressing this and contributing! 🤗
transformers | 24,617 | closed | Seems some Bart models from facebook are removed | ### System Info
No response.
### Who can help?
@ArthurZucker @YouJiacheng @sgugger @stevhliu @MKhalusova
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I found that some Bart models ([facebook/Bart-large](https://huggingface.co/facebook/bart-large), [facebook/Bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn), [facebook/Bart-base](https://huggingface.co/facebook/bart-base), ...) were removed from the hub recently. These models were previously referenced in the Bart documentation. Consequently, I recommend updating the sample scripts to use alternative models.
For example if I run the first [example](https://github.com/huggingface/transformers/blob/66ded238cd04e29ba98485984dd647e7d37d1603/docs/source/en/model_doc/bart.md?plain=1#L88-L101) in the Bart [doc page](https://huggingface.co/docs/transformers/model_doc/bart),
https://github.com/huggingface/transformers/blob/66ded238cd04e29ba98485984dd647e7d37d1603/docs/source/en/model_doc/bart.md?plain=1#L88-L101
it gives me the following error
```shell
โญโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ Traceback (most recent call last) โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโฎ
โ /opt/conda/lib/python3.7/site-packages/huggingface_hub/utils/_errors.py:259 in โ
โ hf_raise_for_status โ
โ โ
โ 256 โ </Tip> โ
โ 257 โ """ โ
โ 258 โ try: โ
โ โฑ 259 โ โ response.raise_for_status() โ
โ 260 โ except HTTPError as e: โ
โ 261 โ โ error_code = response.headers.get("X-Error-Code") โ
โ 262 โ
โ โ
โ /opt/conda/lib/python3.7/site-packages/requests/models.py:1021 in raise_for_status โ
โ โ
โ 1018 โ โ โ ) โ
โ 1019 โ โ โ
โ 1020 โ โ if http_error_msg: โ
โ โฑ 1021 โ โ โ raise HTTPError(http_error_msg, response=self) โ
โ 1022 โ โ
โ 1023 โ def close(self): โ
โ 1024 โ โ """Releases the connection back to the pool. Once this method has been โ
โฐโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโฏ
HTTPError: 401 Client Error: Unauthorized for url:
https://huggingface.co/facebook/bart-large/resolve/main/config.json
The above exception was the direct cause of the following exception:
โญโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ Traceback (most recent call last) โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโฎ
โ /opt/conda/lib/python3.7/site-packages/transformers/utils/hub.py:420 in cached_file โ
โ โ
โ 417 โ โ โ proxies=proxies, โ
โ 418 โ โ โ resume_download=resume_download, โ
โ 419 โ โ โ use_auth_token=use_auth_token, โ
โ โฑ 420 โ โ โ local_files_only=local_files_only, โ
โ 421 โ โ ) โ
โ 422 โ โ
โ 423 โ except RepositoryNotFoundError: โ
โ โ
โ /opt/conda/lib/python3.7/site-packages/huggingface_hub/utils/_validators.py:120 in _inner_fn โ
โ โ
โ 117 โ โ if check_use_auth_token: โ
โ 118 โ โ โ kwargs = smoothly_deprecate_use_auth_token(fn_name=fn.__name__, has_token=ha โ
โ 119 โ โ โ
โ โฑ 120 โ โ return fn(*args, **kwargs) โ
โ 121 โ โ
โ 122 โ return _inner_fn # type: ignore โ
โ 123 โ
โ โ
โ /opt/conda/lib/python3.7/site-packages/huggingface_hub/file_download.py:1170 in hf_hub_download โ
โ โ
โ 1167 โ โ โ โ โ url=url, โ
โ 1168 โ โ โ โ โ token=token, โ
โ 1169 โ โ โ โ โ proxies=proxies, โ
โ โฑ 1170 โ โ โ โ โ timeout=etag_timeout, โ
โ 1171 โ โ โ โ ) โ
โ 1172 โ โ โ except EntryNotFoundError as http_error: โ
โ 1173 โ โ โ โ # Cache the non-existence of the file and raise โ
โ โ
โ /opt/conda/lib/python3.7/site-packages/huggingface_hub/utils/_validators.py:120 in _inner_fn โ
โ โ
โ 117 โ โ if check_use_auth_token: โ
โ 118 โ โ โ kwargs = smoothly_deprecate_use_auth_token(fn_name=fn.__name__, has_token=ha โ
โ 119 โ โ โ
โ โฑ 120 โ โ return fn(*args, **kwargs) โ
โ 121 โ โ
โ 122 โ return _inner_fn # type: ignore โ
โ 123 โ
โ โ
โ /opt/conda/lib/python3.7/site-packages/huggingface_hub/file_download.py:1507 in โ
โ get_hf_file_metadata โ
โ โ
โ 1504 โ โ proxies=proxies, โ
โ 1505 โ โ timeout=timeout, โ
โ 1506 โ ) โ
โ โฑ 1507 โ hf_raise_for_status(r) โ
โ 1508 โ โ
โ 1509 โ # Return โ
โ 1510 โ return HfFileMetadata( โ
โ โ
โ /opt/conda/lib/python3.7/site-packages/huggingface_hub/utils/_errors.py:291 in โ
โ hf_raise_for_status โ
โ โ
โ 288 โ โ โ โ " `repo_type`.\nIf you are trying to access a private or gated repo," โ
โ 289 โ โ โ โ " make sure you are authenticated." โ
โ 290 โ โ โ ) โ
โ โฑ 291 โ โ โ raise RepositoryNotFoundError(message, response) from e โ
โ 292 โ โ โ
โ 293 โ โ elif response.status_code == 400: โ
โ 294 โ โ โ message = ( โ
โฐโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโฏ
RepositoryNotFoundError: 401 Client Error. (Request ID: Root=1-64a075a2-6fc983a27bdf3f765f2f8757)
Repository Not Found for url: https://huggingface.co/facebook/bart-large/resolve/main/config.json.
Please make sure you specified the correct `repo_id` and `repo_type`.
If you are trying to access a private or gated repo, make sure you are authenticated.
Invalid username or password.
During handling of the above exception, another exception occurred:
โญโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ Traceback (most recent call last) โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโฎ
โ in <module>:3 โ
โ โ
โ 1 from transformers import BartForConditionalGeneration, BartTokenizer โ
โ 2 โ
โ โฑ 3 model = BartForConditionalGeneration.from_pretrained("facebook/bart-large", forced_bos_t โ
โ 4 tok = BartTokenizer.from_pretrained("facebook/bart-large") โ
โ 5 example_english_phrase = "UN Chief Says There Is No <mask> in Syria" โ
โ 6 batch = tok(example_english_phrase, return_tensors="pt") โ
โ โ
โ /opt/conda/lib/python3.7/site-packages/transformers/modeling_utils.py:2282 in from_pretrained โ
โ โ
โ 2279 โ โ โ โ subfolder=subfolder, โ
โ 2280 โ โ โ โ _from_auto=from_auto_class, โ
โ 2281 โ โ โ โ _from_pipeline=from_pipeline, โ
โ โฑ 2282 โ โ โ โ **kwargs, โ
โ 2283 โ โ โ ) โ
โ 2284 โ โ else: โ
โ 2285 โ โ โ model_kwargs = kwargs โ
โ โ
โ /opt/conda/lib/python3.7/site-packages/transformers/configuration_utils.py:547 in โ
โ from_pretrained โ
โ โ
โ 544 โ โ assert config.output_attentions == True โ
โ 545 โ โ assert unused_kwargs == {"foo": False} โ
โ 546 โ โ ```""" โ
โ โฑ 547 โ โ config_dict, kwargs = cls.get_config_dict(pretrained_model_name_or_path, **kwarg โ
โ 548 โ โ if "model_type" in config_dict and hasattr(cls, "model_type") and config_dict["m โ
โ 549 โ โ โ logger.warning( โ
โ 550 โ โ โ โ f"You are using a model of type {config_dict['model_type']} to instantia โ
โ โ
โ /opt/conda/lib/python3.7/site-packages/transformers/configuration_utils.py:574 in โ
โ get_config_dict โ
โ โ
โ 571 โ โ """ โ
โ 572 โ โ original_kwargs = copy.deepcopy(kwargs) โ
โ 573 โ โ # Get config dict associated with the base config file โ
โ โฑ 574 โ โ config_dict, kwargs = cls._get_config_dict(pretrained_model_name_or_path, **kwar โ
โ 575 โ โ if "_commit_hash" in config_dict: โ
โ 576 โ โ โ original_kwargs["_commit_hash"] = config_dict["_commit_hash"] โ
โ 577 โ
โ โ
โ /opt/conda/lib/python3.7/site-packages/transformers/configuration_utils.py:641 in โ
โ _get_config_dict โ
โ โ
โ 638 โ โ โ โ โ user_agent=user_agent, โ
โ 639 โ โ โ โ โ revision=revision, โ
โ 640 โ โ โ โ โ subfolder=subfolder, โ
โ โฑ 641 โ โ โ โ โ _commit_hash=commit_hash, โ
โ 642 โ โ โ โ ) โ
โ 643 โ โ โ โ commit_hash = extract_commit_hash(resolved_config_file, commit_hash) โ
โ 644 โ โ โ except EnvironmentError: โ
โ โ
โ /opt/conda/lib/python3.7/site-packages/transformers/utils/hub.py:425 in cached_file โ
โ โ
โ 422 โ โ
โ 423 โ except RepositoryNotFoundError: โ
โ 424 โ โ raise EnvironmentError( โ
โ โฑ 425 โ โ โ f"{path_or_repo_id} is not a local folder and is not a valid model identifie โ
โ 426 โ โ โ "listed on 'https://huggingface.co/models'\nIf this is a private repository, โ
โ 427 โ โ โ "pass a token having permission to this repo with `use_auth_token` or log in โ
โ 428 โ โ โ "`huggingface-cli login` and pass `use_auth_token=True`." โ
โฐโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโฏ
OSError: facebook/bart-large is not a local folder and is not a valid model identifier listed on
'https://huggingface.co/models'
If this is a private repository, make sure to pass a token having permission to this repo with `use_auth_token` or
log in with `huggingface-cli login` and pass `use_auth_token=True`.
```
### Expected behavior
None output should be given. | 07-01-2023 19:20:56 | 07-01-2023 19:20:56 | They're back up now! ๐ค<|||||>Closing as this has been resolved! |
transformers | 24,616 | closed | XLNetTokenizerFast conversion fails with identity normalization in Sentencepiece tokenizer | ### System Info
- `transformers` version: 4.25.1
- Platform: Linux-5.19.0-46-generic-x86_64-with-glibc2.17
- Python version: 3.8.13
- Huggingface_hub version: 0.15.1
- PyTorch version (GPU?): 1.12.1+cu102 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): 0.6.3 (cpu)
- Jax version: 0.4.1
- JaxLib version: 0.4.1
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@ArthurZ
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I was trying to initialize an XLNetTokenizerFast tokenizer using a Sentencepiece tokenizer model. While training the Sentencepiece tokenizer, I used the `identity` normalization rule name as I did not want to normalize the texts. While initializing XLNetTokenizerFast using this Sentencepiece tokenizer, it fails and raises the following error:
```bash
Traceback (most recent call last):
File "xlnet_tok_test.py", line 10, in <module>
tokenizer = transformers.XLNetTokenizerFast(
File "/home/shahad/miniconda3/envs/gen/lib/python3.8/site-packages/transformers/models/xlnet/tokenization_xlnet_fast.py", line 150, in __init__
super().__init__(
File "/home/shahad/miniconda3/envs/gen/lib/python3.8/site-packages/transformers/tokenization_utils_fast.py", line 118, in __init__
fast_tokenizer = convert_slow_tokenizer(slow_tokenizer)
File "/home/shahad/miniconda3/envs/gen/lib/python3.8/site-packages/transformers/convert_slow_tokenizer.py", line 1162, in convert_slow_tokenizer
return converter_class(transformer_tokenizer).converted()
File "/home/shahad/miniconda3/envs/gen/lib/python3.8/site-packages/transformers/convert_slow_tokenizer.py", line 503, in converted
tokenizer.normalizer = self.normalizer(self.proto)
File "/home/shahad/miniconda3/envs/gen/lib/python3.8/site-packages/transformers/convert_slow_tokenizer.py", line 786, in normalizer
list_normalizers.append(normalizers.Precompiled(precompiled_charsmap))
Exception: Error while attempting to build Precompiled normalizer: Cannot parse precompiled_charsmap
```
However, I can successfully initialize XLNetTokenizerFast when the Sentencepiece tokenizer is trained with `nfkc` or the default `nmt_nfkc` normalization rule.
This bug can be reproduced using the following colab notebook:
https://colab.research.google.com/drive/1kj17NAP3xn22MEwp_96eNBLYg5d5np9u?usp=sharing
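A minimal local reproduction along the same lines (the file names and vocab size below are illustrative assumptions):
```python
import sentencepiece as spm
from transformers import XLNetTokenizerFast

# Train a SentencePiece model with the "identity" rule, so no normalization is
# applied and no precompiled_charsmap is stored in the model proto.
spm.SentencePieceTrainer.train(
    input="corpus.txt",          # illustrative path
    model_prefix="identity_sp",
    vocab_size=1000,
    normalization_rule_name="identity",
)

# Converting the slow tokenizer to a fast one then fails with
# "Cannot parse precompiled_charsmap".
tokenizer = XLNetTokenizerFast(vocab_file="identity_sp.model")
```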
### Expected behavior
The XLNetTokenizerFast should be initialized without any error. | 07-01-2023 19:11:05 | 07-01-2023 19:11:05 | To my mind, the bug can be fixed by checking the precompiled charsmap as in the following code snippet:
```python
precompiled_charsmap = proto.normalizer_spec.precompiled_charsmap
if precompiled_charsmap:
list_normalizers.append(normalizers.Precompiled(precompiled_charsmap))
```
I am creating a pull request with this checking. |
transformers | 24,615 | closed | Cannot load BART model | Trying to load the BART model as specified on the [website](https://huggingface.co/docs/transformers/model_doc/bart#mask-filling:~:text=Mask%20Filling-,The%20facebook,-/bart%2Dbase%20and) with the following code:
`model = BartForConditionalGeneration.from_pretrained("facebook/bart-large", forced_bos_token_id=0`
Error: facebook/bart-large is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models'
If this is a private repository, make sure to pass a token having permission to this repo with `use_auth_token` or log in with `huggingface-cli login` and pass `use_auth_token=True`.
@patrickvonplaten | 07-01-2023 18:57:07 | 07-01-2023 18:57:07 | It seems like Facebook just disappeared from HuggingFace. Still waiting for something back...<|||||>Yeah, no models & datasets visible.<|||||>https://huggingface.co/facebook/bart-large is back up now ๐ค <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 24,614 | closed | [DOC] Clarify relationship between load_best_model_at_end and save_total_limit | Clarify the relationship between `load_best_model_at_end` and `save_total_limit`. Hope this is clear.
As discussed on Slack @sgugger
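An illustrative sketch of the interaction being documented (all values are examples only):
```python
from transformers import TrainingArguments

# With both options set, the Trainer keeps the best checkpoint in addition to
# the most recent ones, so save_total_limit=1 can effectively retain two
# checkpoints: the latest one and the best one (when they differ).
args = TrainingArguments(
    output_dir="out",
    evaluation_strategy="steps",
    eval_steps=500,
    save_strategy="steps",
    save_steps=500,
    save_total_limit=1,
    load_best_model_at_end=True,
    metric_for_best_model="eval_loss",
)
```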
| 07-01-2023 17:46:04 | 07-01-2023 17:46:04 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Can you just do a quick rebase/merge on the main branch? I think the issue in the tests (due to a release of Pillow apparently) is fixed on main. |
transformers | 24,613 | open | Fine-tune T5 on SQuAD | ### System Info
I was trying to use the official command to evaluate T5 on SQuAD data, but where can I find the prediction file that contains the actual answer T5 generated?
python run_seq2seq_qa.py \
--model_name_or_path t5-small \
--dataset_name squad \
--context_column context \
--question_column question \
--answer_column answers \
--do_eval \
--max_seq_length 384 \
--doc_stride 128 \
--predict_with_generate \
--output_dir /tmp/debug_seq2seq_squad/
### Who can help?
@sgugger
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
NA
### Expected behavior
find the prediction file | 07-01-2023 13:30:07 | 07-01-2023 13:30:07 | I didn't verify manually, but I think you have to modify
https://github.com/huggingface/transformers/blob/fc7ce2ebc52eccd8158a7feeeee11eb44f964937/examples/pytorch/question-answering/run_seq2seq_qa.py#L695-L706
in order to save the prediction (generation).<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
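For reference, one possible (purely hypothetical) way to persist the generated answers, in the spirit of the comment above — the `formatted_predictions` name is assumed from the linked post-processing code, and the exact integration point may differ:
```python
import json
import os

def save_predictions(formatted_predictions, output_dir):
    # Dump the id -> generated answer pairs produced during post-processing.
    path = os.path.join(output_dir, "eval_predictions.json")
    with open(path, "w", encoding="utf-8") as f:
        json.dump(formatted_predictions, f, ensure_ascii=False, indent=2)
```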
transformers | 24,612 | open | ValueError: An instance of tokenizer class BioGptTokenizer cannot be converted in a Fast tokenizer instance. No converter was found. | ### System Info
ValueError: An instance of tokenizer class BioGptTokenizer cannot be converted in a Fast tokenizer instance. No converter was found.
I am using microsoft/biogpt with the token classification (NER) task script (https://github.com/huggingface/transformers/blob/main/examples/pytorch/token-classification/run_ner.py), which has a slow tokenizer, and the journey so far has been this:
1. Got an error in the above-mentioned script: "This example script only works for models that have a fast tokenizer."
2. Then went for the second option that is mentioned, which is to use the old script, in which I got a runtime error; I reported the issue and was told that I need to use the new run_ner.py.
3. I then tried the option to convert the slow tokenizer to a fast one, but now I am getting this error:
"ValueError: An instance of tokenizer class BioGptTokenizer cannot be converted
in a Fast tokenizer instance. No converter was found. Currently available
slow->fast convertors: ['AlbertTokenizer', 'BartTokenizer', 'BarthezTokenizer',
'BertTokenizer', 'BigBirdTokenizer', 'BlenderbotTokenizer',
'CamembertTokenizer', 'CLIPTokenizer', 'CodeGenTokenizer', 'ConvBertTokenizer',
'DebertaTokenizer', 'DebertaV2Tokenizer', 'DistilBertTokenizer',
'DPRReaderTokenizer', 'DPRQuestionEncoderTokenizer',
'DPRContextEncoderTokenizer', 'ElectraTokenizer', 'FNetTokenizer',
'FunnelTokenizer', 'GPT2Tokenizer', 'HerbertTokenizer', 'LayoutLMTokenizer',
'LayoutLMv2Tokenizer', 'LayoutLMv3Tokenizer', 'LayoutXLMTokenizer',
'LongformerTokenizer', 'LEDTokenizer', 'LxmertTokenizer', 'MarkupLMTokenizer',
'MBartTokenizer', 'MBart50Tokenizer', 'MPNetTokenizer', 'MobileBertTokenizer',
'MvpTokenizer', 'NllbTokenizer', 'OpenAIGPTTokenizer', 'PegasusTokenizer',
'RealmTokenizer', 'ReformerTokenizer', 'RemBertTokenizer', 'RetriBertTokenizer',
'RobertaTokenizer', 'RoFormerTokenizer', 'SqueezeBertTokenizer', 'T5Tokenizer',
'WhisperTokenizer', 'XLMRobertaTokenizer', 'XLNetTokenizer',
'SplinterTokenizer', 'XGLMTokenizer', 'LlamaTokenizer']"
I request the team to add BioGptTokenizer to the list.
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
%cd /content/transformers/examples/pytorch/token-classification
!python run_ner.py \
--tokenizer_name microsoft/biogpt\
--model_name_or_path microsoft/biogpt\
--train_file /content/TRAIN.json \
--validation_file /content/DEV.json \
--test_file /content/DEV.json \
--output_dir $checkpoint_dir \
--per_device_train_batch_size 4 \
--per_device_eval_batch_size 4 \
--do_train \
--do_eval \
--do_predict \
--num_train_epochs 4\
--evaluation_strategy epoch\
--task_name ner\
--overwrite_output_dir True\
--save_strategy epoch\
--ignore_mismatched_sizes=True
### Expected behavior
Successfully train after the conversion | 07-01-2023 11:21:08 | 07-01-2023 11:21:08 | Hi @TekeshwarHirwani, thanks for raising this issue.
Indeed, there doesn't exist a converter yet for this tokenizer. Would you like to add it? You can find examples of [converters here](https://github.com/huggingface/transformers/blob/main/src/transformers/convert_slow_tokenizer.py), and it will need to be added to the [SLOW_TO_FAST_CONVERTERS mapping](https://github.com/huggingface/transformers/blob/f4e4b4d0e2dc248433e808594f7595292037d891/src/transformers/convert_slow_tokenizer.py#L1230).
cc @Rocketknight1 @ArthurZucker <|||||>Also other models based on `moses` don't have a fast tokenizer version (`xlm`, `fsmt`, `flaubert` etc). It's probably because moses is already fast enough and `tokenizers` library is not really made for ruled base tokenization. Correct me if I am wrong @Narsil <|||||>Seems very correct. The reasons to skip `moses` are vague to me but indeed I'm not sure we should go down that path :).
<|||||>@TekeshwarHirwani A similar context was raised here: https://github.com/huggingface/transformers/pull/17254#issuecomment-1150669010
You may first create a slow-to-fast-converter that is similar to PhobertConverter/BertweetConverter from https://github.com/datquocnguyen/transformers/blob/main/src/transformers/convert_slow_tokenizer.py
Then you could create a BiogptTokenizerFast in the same manner to as PhobertTokenizerFast/BertweetTokenizerFast from https://github.com/datquocnguyen/transformers/blob/main/src/transformers/models/bertweet/tokenization_bertweet_fast.py
See more details [here](https://github.com/huggingface/transformers/pull/17254/files).<|||||>I am not really sure why `moses` was mentioned here @ArthurZucker @Narsil @amyeroberts
The reason why you'd have to **hack** the `tokenizers` to have a fast variant of such slow tokenizers for FlauBERT or BioGPT is that [many subwords appearing in the "merges" file do not appear in the "vocab" file as in CTRL, FlauBERT, BioGPT, PhoBERT and BERTweet and the like (i.e. slow tokenizers would convert those subwords into unkn_id during encoding), thus it is impossible to develop a fast tokenizer variant using documented approaches while keeping the same tokenization strategy](https://github.com/huggingface/transformers/pull/17254#issuecomment-1150669010).<|||||>Thanks for the link.
I'm confused in the thread you linked you say that fast tokenizers are possible: https://github.com/huggingface/transformers/pull/17254#issuecomment-1130248921 and I see one here: https://huggingface.co/vinai/bertweet-covid19-base-uncased/blob/main/tokenizer.json
This lead me to check BiotGPT **does** use moses: https://github.com/huggingface/transformers/blob/main/src/transformers/models/biogpt/tokenization_biogpt.py#L156-L162
Flaubert **uses it too**: https://github.com/huggingface/transformers/blob/main/src/transformers/models/flaubert/tokenization_flaubert.py#L312-L318
While CTRL **doesn't**: https://github.com/huggingface/transformers/blob/main/src/transformers/models/ctrl/tokenization_ctrl.py
And phobert indeed **doesn't** : https://github.com/huggingface/transformers/blob/main/src/transformers/models/phobert/tokenization_phobert.py
So phobert might be doable, but I'm not sure it's related to BioGPT
<|||||>@Narsil Please be aware of the difference between "subword" tokenization vs. "word" tokenization.
All the `tokenization_{model_name}.py` files you mentioned use "bpe" for `subword` tokenization, e.g. [https://github.com/huggingface/transformers/blob/2ab75add4b30c2fc44a8bf575156d448d9ed87a7/src/transformers/models/biogpt/tokenization_biogpt.py#L170](https://github.com/huggingface/transformers/blob/2ab75add4b30c2fc44a8bf575156d448d9ed87a7/src/transformers/models/biogpt/tokenization_biogpt.py#L170)
BiotGPT, Flaubert, CTRL, PhoBERT and BERTweet all have "merges" and "vocab" files for BPE-based subword tokenization (e.g. see https://huggingface.co/microsoft/biogpt/tree/main).
For BiotGPT and Flaubert, `mosestokenizer` is just a `word` tokenizer/normalizer, which can be used as an external preprocess w.r.t. a fast `subword` tokenization variant (likewise, to perform Vietnamese word segmentation before using PhobertTokenizerFast, or to perform Tweet normalization before using BertweetTokenizerFast).
PS: https://huggingface.co/vinai/bertweet-covid19-base-uncased/blob/main/tokenizer.json is just a saved output of the [convert_slow_tokenizer.py](https://github.com/datquocnguyen/transformers/blob/main/src/transformers/convert_slow_tokenizer.py) that takes "merges" and "vocab" files as input. |
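To make the suggested direction a bit more concrete, here is a very rough sketch of what a BPE-based converter could look like. The class layout follows `convert_slow_tokenizer.py`, the `encoder`/`bpe_ranks` attribute names are assumed from the slow `BioGptTokenizer` implementation, and the Moses pre-/post-processing as well as the merges-not-in-vocab problem discussed above are deliberately not handled here:
```python
from tokenizers import Tokenizer, decoders, pre_tokenizers
from tokenizers.models import BPE

from transformers.convert_slow_tokenizer import Converter


class BioGptConverter(Converter):  # hypothetical, not part of the library
    def converted(self) -> Tokenizer:
        vocab = self.original_tokenizer.encoder            # assumed attribute
        merges = list(self.original_tokenizer.bpe_ranks)   # assumed attribute
        tokenizer = Tokenizer(
            BPE(
                vocab=vocab,
                merges=merges,
                unk_token=str(self.original_tokenizer.unk_token),
                end_of_word_suffix="</w>",
            )
        )
        # Moses word tokenization has no equivalent in the tokenizers library,
        # so a plain whitespace split is used as a stand-in here.
        tokenizer.pre_tokenizer = pre_tokenizers.WhitespaceSplit()
        tokenizer.decoder = decoders.BPEDecoder(suffix="</w>")
        return tokenizer
```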
transformers | 24,611 | open | translate the English documentation into Chinese | # Aim
Aims to translate the English documentation into Chinese, making it easier for Chinese developers to read and reducing the difficulty of accessing the documentation for them.
zh_translate:
@chenglu
| 07-01-2023 09:18:19 | 07-01-2023 09:18:19 | Hi @liteli1987gmail, thanks for opening this PR and starting the Chinese translation effort!
Is there a corresponding github issue for this translation? We recommend opening an issue (following [this template](https://github.com/huggingface/transformers/blob/f4e4b4d0e2dc248433e808594f7595292037d891/.github/ISSUE_TEMPLATE/i18n.md#L4)) so that others can easily track progress and contribute.
As we see here, translating all of the pages at once creates a very large diff that isn't realistic for people to review. Could you instead have each of the pages listed in a checklist on the github issue, and then open a separate PR for each of those pages? <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 24,610 | closed | PreTrainedTokenizerFast - whitespace merge skipped | ### System Info
@ArthurZucker
Most likely I'm wrong, been digging through tokenization for 10 hours in a row and quite new to the topic.
Vocab: https://huggingface.co/tiiuae/falcon-40b/blob/main/tokenizer.json
String: `"Hello World"`
Two spaces are in the merge list right on top at line 19
" W" is in line 128
Running this through the tokenizer (`tokenizer = PreTrainedTokenizerFast(tokenizer_file='tokenizer.json')`)
Falcon IDs:
` [9856, 204, 2889]`
Falcon tokens:
` ['Hello', 'ฤ ', 'ฤ World']`
What I expected:
```
9856 -> 'Hello'
258 -> ' '
12670 -> 'World'
```
From my understanding the two whitespaces form a rank 19 merge (the 2nd lowest one next to 'o r' at 12)
I most likely just misunderstand a special rule in BPE in relation to white space characters
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
import sys
from transformers import PreTrainedTokenizerFast
# Load the tokenizer
tokenizer = PreTrainedTokenizerFast(tokenizer_file='tokenizer.json')
# Define the string to tokenize
text = "Hello World"
# Check if a command line argument was provided
if len(sys.argv) > 1:
text = sys.argv[1]
# Tokenize the string
output = tokenizer.encode(text)
# Print the token IDs
print("Falcon IDs:\n\t", output)
tokens = tokenizer.convert_ids_to_tokens(output)
print("Falcon tokens:\n\t", tokens)
```
### Expected behavior
The two spaces form a higher rank than space + W so I'd expect this outcome
9856 -> 'Hello'
258 -> ' '
12670 -> 'World' | 07-01-2023 04:12:58 | 07-01-2023 04:12:58 | The behavior switches to what I expected if you disable pre_tokenizer->use_regex which ignores the rank and contains quite a bit of english grammar rules.
Not sure if that regex snake should really be used by default, given the international use of tokenizers. (ironically TII has chosen it despite being in an arabic speaking country) |
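For anyone who wants to reproduce that comparison without editing tokenizer.json by hand, here is a sketch of swapping the pre-tokenizer on the loaded tokenizer; it assumes the file uses a single ByteLevel pre-tokenizer and may need adapting if it is a Sequence of pre-tokenizers:
```python
from tokenizers import pre_tokenizers
from transformers import PreTrainedTokenizerFast

tokenizer = PreTrainedTokenizerFast(tokenizer_file="tokenizer.json")

# Replace the ByteLevel pre-tokenizer with one that skips the GPT-2 style
# regex split, so merges are applied purely by rank across the whitespace.
tokenizer.backend_tokenizer.pre_tokenizer = pre_tokenizers.ByteLevel(
    add_prefix_space=False, use_regex=False
)

print(tokenizer.tokenize("Hello  World"))
```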
transformers | 24,609 | closed | Fix model referenced and results in documentation. Model mentioned was inaccessible | # What does this PR do?
This is a very small change on the documentation.
The mentioned model (`MariaK/detr-resnet-50_finetuned_cppe5`) was either removed or set to private. So I could not reproduce the shown example.
I basically reference the same model but from another user, which provides a slightly better result. That's why I also updated the metrics.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@amyeroberts @MKhalusova
| 07-01-2023 02:50:22 | 07-01-2023 02:50:22 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 24,608 | open | CUDA error: an illegal memory access was encountered | I encountered some errors when running run_speech_recognition_ctc_streaming.sh with `deepspeed` (`torchrun --nproc_per_node 1 ...`), and this issue consistently occurs with my custom corpora.
Does anyone have any ideas? (I can fine-tune successfully using the Common Voice corpus)
environment:
gpu number: 1
export CUDA_LAUNCH_BLOCKING=1
export TORCH_USE_CUDA_DSA=1
```
terminate called after throwing an instance of 'c10::Error'
what(): CUDA error: an illegal memory access was encountered
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
Exception raised from c10_cuda_check_implementation at ../c10/cuda/CUDAException.cpp:44 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7f7b400ef097 in /usr/local/lib/python3.10/dist-packages/torch/lib/libc10.so)
frame #1: c10::detail::torchCheckFail(char const*, char const*, unsigned int, std::string const&) + 0x64 (0x7f7b400aaa33 in /usr/local/lib/python3.10/dist-packages/torch/lib/libc10.so)
frame #2: c10::cuda::c10_cuda_check_implementation(int, char const*, char const*, int, bool) + 0x118 (0x7f7b4019d5a8 in /usr/local/lib/python3.10/dist-packages/torch/lib/libc10_cuda.so)
frame #3: <unknown function> + 0x1f3de (0x7f7b401663de in /usr/local/lib/python3.10/dist-packages/torch/lib/libc10_cuda.so)
frame #4: <unknown function> + 0x22650 (0x7f7b40169650 in /usr/local/lib/python3.10/dist-packages/torch/lib/libc10_cuda.so)
frame #5: <unknown function> + 0x22a35 (0x7f7b40169a35 in /usr/local/lib/python3.10/dist-packages/torch/lib/libc10_cuda.so)
frame #6: <unknown function> + 0x4ef710 (0x7f7af1667710 in /usr/local/lib/python3.10/dist-packages/torch/lib/libtorch_python.so)
frame #7: c10::TensorImpl::~TensorImpl() + 0x1e3 (0x7f7b400cc393 in /usr/local/lib/python3.10/dist-packages/torch/lib/libc10.so)
frame #8: c10::TensorImpl::~TensorImpl() + 0x9 (0x7f7b400cc529 in /usr/local/lib/python3.10/dist-packages/torch/lib/libc10.so)
frame #9: <unknown function> + 0x7761b8 (0x7f7af18ee1b8 in /usr/local/lib/python3.10/dist-packages/torch/lib/libtorch_python.so)
frame #10: THPVariable_subclass_dealloc(_object*) + 0x2c6 (0x7f7af18ee506 in /usr/local/lib/python3.10/dist-packages/torch/lib/libtorch_python.so)
frame #11: <unknown function> + 0x1388e1 (0x5580685a58e1 in /usr/bin/python3)
frame #12: <unknown function> + 0x1386dc (0x5580685a56dc in /usr/bin/python3)
frame #13: <unknown function> + 0x138787 (0x5580685a5787 in /usr/bin/python3)
frame #14: <unknown function> + 0x174ac1 (0x5580685e1ac1 in /usr/bin/python3)
frame #15: <unknown function> + 0x153090 (0x5580685c0090 in /usr/bin/python3)
frame #16: <unknown function> + 0x166918 (0x5580685d3918 in /usr/bin/python3)
frame #17: <unknown function> + 0x2593a7 (0x5580686c63a7 in /usr/bin/python3)
frame #18: <unknown function> + 0x17a7b0 (0x5580685e77b0 in /usr/bin/python3)
frame #19: <unknown function> + 0x25f5c1 (0x5580686cc5c1 in /usr/bin/python3)
frame #20: _PyEval_EvalFrameDefault + 0x7a99 (0x5580685b9b49 in /usr/bin/python3)
frame #21: <unknown function> + 0x16ac31 (0x5580685d7c31 in /usr/bin/python3)
frame #22: PyObject_Call + 0x122 (0x5580685d88e2 in /usr/bin/python3)
frame #23: <unknown function> + 0x27c30c (0x5580686e930c in /usr/bin/python3)
frame #24: _PyObject_MakeTpCall + 0x25b (0x5580685c04ab in /usr/bin/python3)
frame #25: _PyEval_EvalFrameDefault + 0x1a2f (0x5580685b3adf in /usr/bin/python3)
frame #26: <unknown function> + 0x16ac31 (0x5580685d7c31 in /usr/bin/python3)
frame #27: _PyEval_EvalFrameDefault + 0x1a2f (0x5580685b3adf in /usr/bin/python3)
frame #28: _PyFunction_Vectorcall + 0x7c (0x5580685ca1ec in /usr/bin/python3)
frame #29: _PyEval_EvalFrameDefault + 0x6d5 (0x5580685b2785 in /usr/bin/python3)
frame #30: <unknown function> + 0x141ed6 (0x5580685aeed6 in /usr/bin/python3)
frame #31: PyEval_EvalCode + 0x86 (0x5580686a5366 in /usr/bin/python3)
frame #32: <unknown function> + 0x265108 (0x5580686d2108 in /usr/bin/python3)
frame #33: <unknown function> + 0x25df5b (0x5580686caf5b in /usr/bin/python3)
frame #34: <unknown function> + 0x264e55 (0x5580686d1e55 in /usr/bin/python3)
frame #35: _PyRun_SimpleFileObject + 0x1a8 (0x5580686d1338 in /usr/bin/python3)
frame #36: _PyRun_AnyFileObject + 0x43 (0x5580686d1033 in /usr/bin/python3)
frame #37: Py_RunMain + 0x2be (0x5580686c22de in /usr/bin/python3)
frame #38: Py_BytesMain + 0x2d (0x55806869832d in /usr/bin/python3)
frame #39: <unknown function> + 0x29d90 (0x7f7b5c24ad90 in /lib/x86_64-linux-gnu/libc.so.6)
frame #40: __libc_start_main + 0x80 (0x7f7b5c24ae40 in /lib/x86_64-linux-gnu/libc.so.6)
frame #41: _start + 0x25 (0x558068698225 in /usr/bin/python3)
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: -6) local_rank: 0 (pid: 24134) of binary: /usr/bin/python3
```
Running `pip3 install numpy --pre torch torchvision torchaudio --force-reinstall --index-url https://download.pytorch.org/whl/nightly/cu117` doesn't solve my problem.
| 07-01-2023 02:01:42 | 07-01-2023 02:01:42 | What's your deepspeed version. Probably try to upgrade it and check again.
You can follow [this issue page](https://github.com/microsoft/DeepSpeed/issues/3373).
<|||||>ds_report
```
DeepSpeed general environment info:
torch install path ............... ['/home/ubuntu/.local/lib/python3.10/site-packages/torch']
torch version .................... 2.0.1+cu117
deepspeed install path ........... ['/usr/local/lib/python3.10/dist-packages/deepspeed']
deepspeed info ................... 0.9.5, unknown, unknown
torch cuda version ............... 11.7
torch hip version ................ None
nvcc version ..................... 11.5
deepspeed wheel compiled w. ...... torch 0.0, cuda 0.0
```
I have tried not using DeepSpeed, but the issue still occurs.<|||||>> I can fine-tune successfully using the Common Voice corpus
If the same way of launching the training works for one dataset but not for another, and we don't have access to the second dataset (your custom corpora), we won't be able to help unfortunately.<|||||>I think I can try debugging, but I don't have any ideas. Do you have any suggestions or directions?<|||||>This thread https://github.com/microsoft/DeepSpeed/issues/3373 is a better place. You can ask those people how they solved the issue.<|||||>ok, thanks<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 24,607 | closed | `logging_dir` is not being generated. | ### System Info
Hi. I'm using the Hugging Face model `bert-base` on a Cloud TPU, but it doesn't generate the `--logging_dir` folder that I expected.
### Who can help?
@sgugger
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
The script is at [here](https://github.com/GoogleCloudPlatform/ml-testing-accelerators/blob/master/tests/pytorch/nightly/hf-lm.libsonnet).
To reproduce, run these commands in a docker container on a TPU VM:
```
$ cd
$ git clone https://github.com/huggingface/transformers.git
$ cd transformers && pip install .
$ pip install datasets evaluate scikit-learn
$ python3 examples/pytorch/xla_spawn.py \
--num_cores 8 \
examples/pytorch/language-modeling/run_mlm.py \
--logging_dir ./tensorboard-metrics \
--cache_dir ./cache_dir \
--dataset_name wikitext \
--dataset_config_name wikitext-2-raw-v1 \
--do_train \
--do_eval \
--overwrite_output_dir \
--output_dir language-modeling \
--logging_steps 30 \
--save_steps 3000 \
--overwrite_cache \
--debug tpu_metrics_debug \
--model_type=bert \
--model_name_or_path bert-base-cased \
--num_train_epochs 1 \
--per_device_train_batch_size 16 \
--per_device_eval_batch_size 16
```
### Expected behavior
I expect a folder `tensorboard-metrics` to be generated in `~/transformers` but I couldn't find it. | 07-01-2023 00:25:02 | 07-01-2023 00:25:02 | Hi @vanbasten23,
Do you have tensorboard installed in your environment?
Could you share the running environment: run `transformers-cli env` in the terminal and copy-paste the output?<|||||>> Do you have tensorboard installed in your environment?
Thanks. It turns out that once I installed tensorboard, the folder was generated. Btw, do you know how transformers writes to tensorboard? I searched for things like "from torch.utils.tensorboard import SummaryWriter" in the transformers codebase but I couldn't find any reference.
<|||||>Writing logic is defined in the [TensorBoardCallback](https://github.com/huggingface/transformers/blob/7edc33ac7a2572698045fed3b5115bca23f40805/src/transformers/integrations.py#L573).
This is added by default as a reporting callback if tensorboard is in your environment [here](https://github.com/huggingface/transformers/blob/7edc33ac7a2572698045fed3b5115bca23f40805/src/transformers/trainer.py#L539). <|||||>Thanks a lot @amyeroberts ! |
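For reference, a minimal sketch of the pieces involved (values are illustrative): with tensorboard installed, the Trainer adds the TensorBoardCallback automatically and writes event files under `logging_dir`.
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="language-modeling",
    logging_dir="./tensorboard-metrics",
    logging_steps=30,
    report_to=["tensorboard"],  # explicit; the default picks it up when installed
)
```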
transformers | 24,606 | closed | RuntimeError: Could not infer dtype of NoneType | ### System Info
I was using the microsoft/biogpt and gpt2 models with the old run_ner.py script (https://github.com/huggingface/transformers/blob/main/examples/legacy/token-classification/run_ner.py), since both have slow tokenizers, and both ran into the same error:
RuntimeError: Could not infer dtype of NoneType
I used the same dataset given in the repo and only changed the model name.
### Who can help?
_No response_
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. change the model name to microsoft/biogpt
2. Follow the instruction given on the repo
### Expected behavior
it should train successfully | 06-30-2023 22:01:32 | 06-30-2023 22:01:32 | Hi @TekeshwarHirwani
The files under [transformers](https://github.com/huggingface/transformers/tree/main)/[examples](https://github.com/huggingface/transformers/tree/main/examples)/[legacy](https://github.com/huggingface/transformers/tree/main/examples/legacy) is no longer maintained: (from the README file)
> This folder contains examples which are not actively maintained (mostly contributed by the community).
> Using these examples together with a recent version of the library usually requires to make small (sometimes big) adaptations to get the scripts working.
You can use the files under [token-classification](https://github.com/huggingface/transformers/tree/main/examples/pytorch/token-classification) ๐ค <|||||>Thankyou for response, But the model I am using is microsoft/biogpt and it is having slow tokenizer, readme file of examples/tokenclassification/pytorch you have mentioned this :
Note: This script only works with models that have a fast tokenizer (backed by the ๐ค Tokenizers library) as it uses special features of those tokenizers. You can check if your favorite model has a fast tokenizer in [this table](https://huggingface.co/transformers/index.html#supported-frameworks), if it doesn't you can still use the old version of the script.
<|||||>Hi @TekeshwarHirwani
In this situation, you will have to dive into the error messages, set some breakpoints, investigate the values of some variables to figure out why there is a None value. And see if you can modify the `legacy` code to make it work.
You can try [Hugging Face Forums](https://discuss.huggingface.co/) to see if someone else had the same issues and if there are already some approaches.<|||||>Thanks |
transformers | 24,605 | closed | Fix model referenced and results in documentation. Model mentioned was inaccessible. | # What does this PR do?
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@amyeroberts @MKhalusova
This is a (really) small change on the documentation.
The mentioned model (`MariaK/detr-resnet-50_finetuned_cppe5`) was either removed or set to private. So I could not reproduce the shown example.
I basically reference the same model but from another user, which provides a slightly better result. That's why I also updated the metrics. | 06-30-2023 21:16:58 | 06-30-2023 21:16:58 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 24,604 | open | Contradictory information in documentation about the ability to push quantized models to hub | ### System Info
Using Google Colab and the main branch of the transformers library on GitHub.
### Who can help?
@sgugger @stevhliu @MKhalusova
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
The note at the end of the section [Load a large model in 4bit](https://huggingface.co/docs/transformers/main/main_classes/quantization#load-a-large-model-in-4bit) and [Load a large model in 8bit
](https://huggingface.co/docs/transformers/main/main_classes/quantization#load-a-large-model-in-8bit) suggests that it's not possible to push the quantized weights on the hub:
> Note that once a model has been loaded in 4-bit it is currently not possible to push the quantized weights on the Hub.
> Note that once a model has been loaded in 8-bit it is currently not possible to push the quantized weights on the Hub except if you use the latest transformers and bitsandbytes.
But the example in [Push quantized models on the ๐ค Hub
](https://huggingface.co/docs/transformers/main/main_classes/quantization#push-quantized-models-on-the-hub) suggests that it's possible to push quantized models to the hub.
Same is suggested in [Load a quantized model from the ๐ค Hub
](https://huggingface.co/docs/transformers/main/main_classes/quantization#load-a-quantized-model-from-the-hub)
Does it mean that push to hub is only supported for 8-bit quantized models when using the latest transformers and bitsandbytes but NOT for 4-bit models?
Or is it actually possible to push to hub for both 8-bit and 4-bit quantized models?
### Expected behavior
Can 4-bit and 8-bit quantized models be pushed to hub and be loaded from hub? | 06-30-2023 20:36:14 | 06-30-2023 20:36:14 | cc @younesbelkada <|||||>Hi @amdnsr
Thanks for the issue
as explained in the mentioned paragraphs, it is possible to push 8bit quantized weights only if you use the latest transformers + bitsandbytes. However, pushing 4bit weights is currently not supported<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
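For reference, a sketch of the 8-bit case that is supported (the model id and repo name are illustrative; recent transformers, accelerate and bitsandbytes are assumed):
```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-350m", load_in_8bit=True, device_map="auto"
)
# Pushing the 8-bit weights works with recent transformers + bitsandbytes;
# the same call for a model loaded with load_in_4bit=True is not supported.
model.push_to_hub("my-username/opt-350m-8bit")  # hypothetical repo name
```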