Columns: user (string, 3–28 chars) · created_at (timestamp[us]) · body (string, 1–173k chars) · issue_number (int64, 1–2.57k) · __index_level_0__ (int64, 0–8.05k)
qgallouedec
2025-01-08T14:38:01
Maybe we can add a comment here so that we don't revert the reversion in the future ;)
2,527
100
August-murr
2024-12-28T06:35:20
I recommend using GitHub Actions since they run the tests more reliably. Just enable it on your fork, push your changes, and it’ll automatically trigger the tests.
2,524
101
AMindToThink
2024-12-28T19:48:24
Does this mean that my environment is actually set up correctly?
2,524
102
AMindToThink
2024-12-29T03:05:06
Thank you, it took a while to figure out, but the tests that were triggered when I made an empty .py file in trl/trl worked. It's somewhat bothersome that it tries and fails to post the results to Slack (`Error: Need to provide at least one botToken or webhookUrl`), but the tests themselves pass. I would appreciate it if the [contributing](https://github.com/huggingface/trl/blob/main/CONTRIBUTING.md) document explained that the tests may not run properly locally and are auto-run by GitHub when changes are pushed to main. My workflow will be:

1. Make changes to a branch of my fork.
2. When I want to test, merge my branch into main.
3. GitHub will run the tests; they'll fail.
4. If on inspection the failure is because of the Slack upload attempt, then everything is fine.
5. If on inspection there was an error before the Slack upload attempt, then there's a problem with my code.
6. If my code is fine and my feature is ready, I can make a pull request.
2,524
103
qgallouedec
2024-12-29T10:19:28
Which tests fail locally?
2,524
104
AMindToThink
2024-12-30T19:00:10
Oddly, it says 6 failed when I only see 5. I'm on this commit:

```
commit aed5da580e9fcba6517460daf65106bc42fb6167 (upstream/main, origin/sac, sac)
Author: Quentin Gallouédec <[email protected]>
Date:   Sun Dec 22 12:44:07 2024 +0100

    📦 Packing documentation (#2503)
```

These are the failures:

```
[gw2] FAILED tests/test_dpo_trainer.py::DPOTrainerTester::test_dpo_lora_bf16_autocast_llama
[gw11] FAILED tests/test_gkd_trainer.py::GKDTrainerTester::test_gkd_trainer
[gw12] FAILED tests/test_callbacks.py::WinRateCallbackTester::test_basic
[gw11] FAILED tests/test_peft_models.py::PeftModelTester::test_create_bnb_peft_model_from_config
[gw15] FAILED tests/test_xpo_trainer.py::TestXPOTrainer::test_training_with_peft
================== 6 failed, 345 passed, 25 skipped, 242 warnings, 45 rerun in 113.62s (0:01:53) ===================
```
2,524
105
umbilnm
2024-12-27T09:11:21
Fixes #2400
2,521
106
umbilnm
2024-12-29T13:33:30
Hello @qgallouedec, can you merge, or is something else needed from me?
2,521
107
HuggingFaceDocBuilderDev
2025-01-08T14:38:13
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2521). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,521
108
HuggingFaceDocBuilderDev
2024-12-26T19:07:53
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2520). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,520
109
qgallouedec
2025-01-07T19:20:37
## Regression test

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer
import torch

model_id = "Qwen/Qwen2-0.5B-Instruct"
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, attn_implementation="flash_attention_2")
tokenizer = AutoTokenizer.from_pretrained(model_id)
dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train[:10%]")
# dataset = load_dataset("trl-internal-testing/zen", "standard_preference", split="train")
training_args = DPOConfig(output_dir="Qwen2-0.5B-DPO-no_pf", max_prompt_length=128, max_completion_length=128, logging_steps=10, padding_free=False)
trainer = DPOTrainer(model=model, args=training_args, train_dataset=dataset, processing_class=tokenizer)
trainer.train()
```

Is the new `padding_free=False` (`no_pf` in screenshot) equivalent to DPO on the current main branch (`main` in screenshot)? -> yes

<img width="2132" alt="Screenshot 2025-01-07 at 20 15 31" src="https://github.com/user-attachments/assets/f2019381-722d-46bb-8258-6a99b75861d8" />

Do the `padding_free=True` (`pf` in screenshot) results match the `padding_free=False` (`no_pf` in screenshot) results? -> Yes

<img width="2132" alt="Screenshot 2025-01-07 at 20 19 41" src="https://github.com/user-attachments/assets/4920afd1-92e8-415e-80fd-655604ef45bf" />

(note: the screenshots say "Gemma" but it's actually a Qwen model being trained)
2,520
110
oliveiraeliel
2024-12-28T02:19:22
Hi, I have the same question as you do. I think there must be some easy way to simply write a reward function as an `nn.Module`, so we don't have to refactor anything, but I haven't tried it yet. I also think that `PPOTrainer` should accept a `custom_get_reward_function` as an optional parameter. That way anyone could define their own reward function, and it would be a clean solution.
2,518
111
nityadav
2024-12-29T19:24:39
@yananchen1989 Thanks for posting this, as I was stuck with a similar issue (but for `OnlineDPOTrainer`). The easiest workaround for me was to subclass the trainer class (`OnlineDPOTrainer`) and override `training_step` with my custom `get_reward` logic, with the rest of the implementation staying the same as in the original method.
2,518
112
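A rough sketch of the subclassing workaround nityadav describes, assuming a recent TRL/transformers version (the `training_step` signature has changed across releases); the class name is made up:

```python
from trl import OnlineDPOTrainer


class CustomRewardOnlineDPOTrainer(OnlineDPOTrainer):
    """Hypothetical subclass that swaps in custom reward logic."""

    def training_step(self, model, inputs, num_items_in_batch=None):
        # In practice, copy the body of OnlineDPOTrainer.training_step here
        # and replace its get_reward(...) call with your own scoring function.
        # Delegating to super() just keeps this sketch runnable as-is.
        return super().training_step(model, inputs, num_items_in_batch)
```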
August-murr
2024-12-30T18:28:29
@yananchen1989 @oliveiraeliel @nityadav @hwhyyds @schmidtj3 This has been a recurring question, so before implementing a solution, I would like to ask you all for examples of when you would need this feature so that we can think of a good solution.
2,518
113
yananchen1989
2024-12-30T18:51:22
Correct me if I am wrong, but I would like to know the primary motivation for rewriting DPO from the older version to the current unified Trainer version. Maybe for better efficiency? I understand that recent TRL versions want to unify the pipeline in a neater, more organized manner across these different RL methods, where Trainer is the pivotal module: kick off `trainer.train()` and all is set. So for some methods like PPO, where a reward module is needed, it is passed directly into the trainer, while for, say, DPO and SFT there is no provision for a reward module. However, this can cause excessive encapsulation, since it is hard to modularize the reward module. The core reason is that in practical cases the reward module can be of any form, not just a single torch.nn module that scores the whole output. It may be a mixture, it may depend on external parameters or the prompt, and most importantly it may not be able to score the PPO trainer's outputs in batch mode. Anyway, the flexibility is significantly reduced. Although, as you know, the current unified pipeline is fine for other methods such as DPO, since they do not have the reward concern and the reward module is implicitly expressed within the algorithm. In my view, there is no need to rigidly force these RL methods into a unified training framework. Please advise.
2,518
114
August-murr
2024-12-31T06:52:17
Ultimately, TRL is a Hugging Face library built on top of Transformers and is part of the Hugging Face ecosystem. If the Trainer does limit flexibility, then Transformers will need to adapt; otherwise, we will have to maintain a much larger and more complex codebase. We'll come up with a way to add these features and prepare a PR soon!
2,518
115
August-murr
2024-12-31T06:52:44
@qgallouedec, do you want to comment?
2,518
116
qgallouedec
2024-12-31T07:30:41
Maybe having a `reward_func` arg of type `Callable` is an option. Alternatively, relaxing the type of `reward_model` to accept any `Callable` is also an option. But given that a custom reward func won't return the same type/shape as a proper `reward_model`, I'm a bit afraid it would require overcomplicated logic. In any case, I believe the best approach is to discuss around a PR, if anyone is willing to propose their approach.
2,518
117
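For concreteness, a hedged sketch of what a `Callable` reward function could look like under the option floated above; the name `reward_func` and its `(prompts, completions)` signature are assumptions, not an existing TRL API:

```python
import torch


def reward_func(prompts: list[str], completions: list[str]) -> torch.Tensor:
    # Any custom logic can go here: rule-based scoring, external services,
    # mixtures of models... This toy rule rewards short completions.
    return torch.tensor([float(len(c) < 200) for c in completions])


print(reward_func(["Hi"], ["Hello there!"]))  # tensor([1.])
```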
yananchen1989
2024-12-31T12:57:59
i hear u. thanks
2,518
118
qgallouedec
2025-01-07T09:22:08
So we were wrong in https://github.com/huggingface/trl/pull/2433?
2,516
119
dawidm
2025-01-07T12:51:04
> So we were wrong in #2433? Yes, it looks like it. I'm not sure why it seemed to work.
2,516
120
dawidm
2024-12-27T20:40:20
Update: this approach (PR #2516) introduces another problem, because incrementing `self.state.global_step` by more than 1 requires parameters like `logging_steps` to be divisible by the value of the increment. Solutions for this are:

1. Require `logging_steps` etc. to be divisible by `args.num_mini_batches * args.num_ppo_epochs`.
2. Change the convention for what a `step` is in RLOO - don't multiply `self.state.max_steps` by `args.num_mini_batches * args.num_ppo_epochs` (making a `step` the equivalent of an `episode`).

I prefer the second one because it's simpler, but I'd appreciate comments on this. I'll update the PR.

edit: 2. is also consistent with the documentation:

> episode: The current global step or episode count in the training process.
2,515
121
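A toy illustration of the divisibility problem described above, under the assumption that logging fires only when `global_step % logging_steps == 0`:

```python
# If global_step is incremented by k = num_mini_batches * num_ppo_epochs,
# only multiples of k are ever reached, so logging_steps values that are
# not multiples of k are silently skipped.
k = 4  # e.g. num_mini_batches=2, num_ppo_epochs=2
logging_steps = 10
reached = [s for s in range(0, 101, k) if s % logging_steps == 0]
print(reached)  # [0, 20, 40, 60, 80, 100] -- steps 10, 30, 50, ... never hit
```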
dawidm
2024-12-29T13:12:02
Of course there's also a third solution: update `global_step` after the actual optimizer step (inside the minibatch PPO loop), but then logging would also have to be moved there. This keeps the most "correct" (I think) convention of steps, but it requires the most changes.
2,515
122
qgallouedec
2025-01-08T14:11:41
Yes, option 2 probably makes more sense.
2,515
123
dawidm
2025-01-08T19:49:36
Sorry, I made a mistake saying that 2. would make a `step` the equivalent of an `episode`. The same goes for my PR #2531. But apart from this, both PRs are still valid and fix what they are supposed to. I've updated the PR for this issue, and this is how it looks with both PRs: PPO and RLOO follow the same convention for `steps` (not affected by `num_mini_batches` and `num_ppo_epochs`). `steps` are actually iterations of the main training loop (that is, episodes divided by global batch size). The actual number of episodes is logged correctly.
2,515
124
SwayamInSync
2024-12-21T19:58:52
This was encountered with `SFTTrainer`; if this is a general issue with `Trainer` from transformers, it can be relocated there.
2,514
125
HuggingFaceDocBuilderDev
2024-12-21T12:12:26
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2513). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,513
126
HuggingFaceDocBuilderDev
2024-12-21T00:10:35
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2512). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,512
127
HuggingFaceDocBuilderDev
2024-12-20T23:42:15
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2511). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,511
128
HuggingFaceDocBuilderDev
2024-12-20T21:43:27
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2510). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,510
129
HuggingFaceDocBuilderDev
2024-12-20T16:10:32
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2509). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,509
130
metric-space
2024-12-21T21:30:33
@aivolcano There is a notebook that is related to this. The updated notebook is here: https://github.com/huggingface/trl/blob/main/examples/notebooks/best_of_n.ipynb
2,508
131
aivolcano
2024-12-27T08:53:25
thank u so much
2,508
132
HuggingFaceDocBuilderDev
2024-12-20T11:30:43
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2507). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,507
133
Mecoli1219
2024-12-20T06:46:11
Wait for https://github.com/linkedin/Liger-Kernel/pull/492
2,506
134
HuggingFaceDocBuilderDev
2025-01-03T16:00:20
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2506). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,506
135
kashif
2025-01-03T19:54:41
needs: https://github.com/linkedin/Liger-Kernel/pull/510
2,506
136
metric-space
2024-12-21T21:33:11
@nguyenhoa-uit I can help out with this as this was code I wrote more than a year ago. Mind you, I'll be very very slow. Let me take a look
2,505
137
metric-space
2024-12-23T09:46:31
@nguyenhoa-uit could you try this bit : https://github.com/huggingface/trl/blob/main/trl/trainer/ddpo_config.py#L64 ?
2,505
138
nguyenhoa-uit
2024-12-25T02:18:37
> @nguyenhoa-uit could you try this bit : https://github.com/huggingface/trl/blob/main/trl/trainer/ddpo_config.py#L64 ?

When I used the checkpoint `resume_from` option in the config file, I ran it and hit a bug at https://github.com/huggingface/trl/blob/main/trl/trainer/ddpo_trainer.py#L541C20-L541C42. When I bypassed it with a try/except, it did not use the parameters from this checkpoint but the base model.
2,505
139
ggbetz
2024-12-20T15:19:13
It seems @philschmid has an implementation here: https://github.com/philschmid/deep-learning-pytorch-huggingface/blob/391f19ba06c128a2a290b3bdcb717ad6ff794fd7/training/scripts/run_sft.py#L54-L77 and the question is maybe just: what's the cleanest way to integrate this natively in trl?
2,504
140
anakin87
2024-12-21T16:25:29
This would be great and would prevent users from making mistakes in the manual implementation of this method: for example, [the code for integration with other libraries reported in the official repo](https://github.com/cognitivecomputations/spectrum?tab=readme-ov-file) has some problems. In contrast, the simple implementation in [my tutorial](https://huggingface.co/blog/anakin87/spectrum) and Philipp's code should be correct. BTW, Spectrum is quite agnostic with respect to the training method (SFT, DPO...): the [models by VAGO solutions](https://huggingface.co/VAGOsolutions) show that it works well for DPO too. LMK what's the best way to proceed and how to help with this integration.
2,504
141
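As a rough sketch of the core freeze/unfreeze step such an integration would need (not Philipp's or the repo's exact code), assuming the parameter-name patterns have already been produced by Spectrum's SNR analysis (normally read from the YAML it generates):

```python
import re

from transformers import AutoModelForCausalLM

# Made-up stand-in for the patterns Spectrum would select.
unfrozen_patterns = [r"model\.layers\.2[0-3]\..*mlp.*"]

model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2-0.5B-Instruct")
for name, param in model.named_parameters():
    # Train only parameters whose names match a selected pattern;
    # everything else stays frozen.
    param.requires_grad = any(re.match(p, name) for p in unfrozen_patterns)
```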
HuggingFaceDocBuilderDev
2024-12-19T10:50:45
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2503). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,503
142
HuggingFaceDocBuilderDev
2024-12-19T10:13:19
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2502). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,502
143
qgallouedec
2024-12-23T12:38:06
Can you screenshot a result?
2,501
144
HuggingFaceDocBuilderDev
2024-12-23T12:41:22
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2501). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,501
145
yaricom
2024-12-23T12:43:48
Sure, here is a screenshot from my account at Comet. <img width="2106" alt="Screenshot 2024-12-23 at 14 42 20" src="https://github.com/user-attachments/assets/69629fdb-77de-4a2d-b1d2-087889d96a4c" />
2,501
146
yaricom
2024-12-23T12:45:02
And this is a DataFrame encoded as CSV. [game_log.csv](https://github.com/user-attachments/files/18229453/game_log.csv)
2,501
147
yaricom
2024-12-23T13:08:10
The script I was using to test the DPO trainer integration:

```python
import os

from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

from trl import DPOConfig, DPOTrainer

os.environ["TOKENIZERS_PARALLELISM"] = "false"


def main():
    output_dir = "models/minimal/dpo_my"
    model_id = "trl-internal-testing/tiny-Qwen2ForCausalLM-2.5"
    # model_id = "Qwen/Qwen2-0.5B-Instruct"
    model = AutoModelForCausalLM.from_pretrained(model_id)
    ref_model = AutoModelForCausalLM.from_pretrained(model_id)
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    tokenizer.pad_token = tokenizer.eos_token
    training_args = DPOConfig(
        output_dir=output_dir,
        per_device_train_batch_size=2,
        max_steps=1,
        remove_unused_columns=False,
        gradient_accumulation_steps=8,
        precompute_ref_log_probs=False,
        learning_rate=5.0e-7,
        eval_strategy="steps",
        eval_steps=1,
        report_to="all",
        generate_during_eval=True,
        max_length=1024,
    )
    # dummy_dataset = load_dataset("trl-internal-testing/zen", "standard_preference")
    dummy_dataset = load_dataset("trl-lib/ultrafeedback_binarized", "default")
    dummy_dataset["train"] = dummy_dataset["train"].select(range(20))
    dummy_dataset["test"] = dummy_dataset["test"].select(range(40))
    trainer = DPOTrainer(
        model=model,
        ref_model=ref_model,
        args=training_args,
        processing_class=tokenizer,
        train_dataset=dummy_dataset["train"],
        eval_dataset=dummy_dataset["test"],
    )
    trainer.train()
    trainer.evaluate()


if __name__ == "__main__":
    main()
```

Do not forget to set the `COMET_API_KEY` environment variable when executing it.
2,501
148
asparius
2024-12-18T13:50:40
trl uses accelerate, which supports FSDP. However, there is no recommended FSDP config in the repo, unlike DeepSpeed, so you can refer to this [page](https://huggingface.co/docs/accelerate/en/usage_guides/fsdp) for FSDP. All in all, DPO and trl support FSDP, but not for online algos like PPO #1726.
2,500
149
yingtongxiong
2024-12-19T05:57:01
> trl uses accelerate, which supports FSDP. However, there is no recommended FSDP config in the repo, unlike DeepSpeed, so you can refer to this [page](https://huggingface.co/docs/accelerate/en/usage_guides/fsdp) for FSDP. All in all, DPO and trl support FSDP, but not for online algos like PPO #1726.

@asparius Thank you very much
2,500
150
HuggingFaceDocBuilderDev
2024-12-17T23:16:28
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2499). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,499
151
HuggingFaceDocBuilderDev
2024-12-17T19:12:30
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2498). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,498
152
qgallouedec
2024-12-17T22:30:39
Yeah! thanks @sergiopaniego 🤘
2,498
153
asparius
2024-12-18T14:14:35
This has been noted previously in #2281. I believe this was introduced in PPOv2, which was a replication of the OpenAI TL;DR paper that also contains this INVALID_LOGPROB=1.0; it does not break training because it cancels out in the KL reward. Perhaps @vwxyzjn can tell why this was used instead of a masked_mean version.
2,496
154
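A minimal numeric illustration of why a shared constant fill value cancels in the KL estimate (the tensors here are made up):

```python
import torch

INVALID_LOGPROB = 1.0  # constant used to fill masked positions

# Both the policy and reference logprobs get the same fill value at masked
# positions, so their difference -- the KL term -- is exactly 0 there.
logprob = torch.tensor([-0.5, -1.2, INVALID_LOGPROB])
ref_logprob = torch.tensor([-0.6, -1.0, INVALID_LOGPROB])
print(logprob - ref_logprob)  # tensor([ 0.1000, -0.2000,  0.0000])
```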
Mecoli1219
2024-12-20T05:30:02
Hi, I want to check that SimPO is in CPO instead of DPO, right?
2,495
155
qgallouedec
2024-12-20T11:01:35
> Hi, I want to check that SimPO is in CPO instead of DPO, right? Correct! Message modified
2,495
156
HuggingFaceDocBuilderDev
2024-12-17T08:16:37
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2494). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,494
157
qgallouedec
2024-12-17T11:19:51
Probably simpler:

```python
from huggingface_hub import ModelCard

model_card = ModelCard("""
---
tags: [trl]
---

# Some title
""")

if script_args.push_to_hub:
    model_card.push_to_hub(script_args.repo_id, repo_type="dataset")
```
2,491
158
August-murr
2024-12-17T12:15:50
Well, that's one way to overengineer it. I also opened an [issue on datasets](https://github.com/huggingface/datasets/issues/7336) to clarify. I assume the next step is to add this to all the dataset scripts.
2,491
159
qgallouedec
2024-12-17T13:14:11
Very good like this
2,491
160
HuggingFaceDocBuilderDev
2024-12-25T17:41:32
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2491). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,491
161
August-murr
2024-12-29T14:01:58
It doesn't add all the details requested in issue #2470, but it's an improvement.
2,491
162
qgallouedec
2024-12-16T12:14:28
Thanks for reporting, please provide a *minimal* code/steps to reproduce this.
2,490
163
sagie-dekel
2024-12-16T12:48:53
pipeline.zip (edit by maintainer: remove link) Thanks @qgallouedec. The attached files constitute a pipeline that uses the DPOTrainer with DeepSpeed. I am sorry that it isn't minimal, but I don't see an easy way to reproduce it. If you prefer, I can write out the main steps.
2,490
164
qgallouedec
2024-12-16T13:52:12
Sorry, but we don't use zip files. The easy way to produce an MRE is to go line by line: if the error remains when you remove a line, you can discard that line. When there is no line left to remove, you have your MRE.
2,490
165
qgallouedec
2024-12-16T11:04:09
Good point, given that for other trainers (like DPO), it's a truncation. In fact, the best thing would be to have a common behavior for all trainers (truncation), but the urgent thing is to clarify the documentation.
2,488
166
HuggingFaceDocBuilderDev
2024-12-16T09:16:23
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2487). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,487
167
Ciao-CA
2024-12-20T07:32:59
I have the same problem
2,486
168
karlcuinju
2025-01-02T03:11:41
Any solution now?
2,486
169
HuggingFaceDocBuilderDev
2024-12-15T19:39:31
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2485). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,485
170
HuggingFaceDocBuilderDev
2024-12-15T18:22:29
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2484). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,484
171
HuggingFaceDocBuilderDev
2024-12-15T16:35:22
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2483). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,483
172
HuggingFaceDocBuilderDev
2024-12-15T12:58:48
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2482). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,482
173
qgallouedec
2024-12-15T15:34:21
2 questions/remarks:

- Can you run a benchmark so that we can (1) quantify the improvement and (2) check that results with and without liger are the same?
- We could have an additional tag for the Hub when a model is trained with liger.
2,482
174
qgallouedec
2024-12-15T15:48:46
I think we should bump liger version to v0.5 (it doesn't include the loss before), see https://github.com/linkedin/Liger-Kernel/releases/tag/v0.5.0
2,482
175
kashif
2024-12-18T10:46:56
waiting on https://github.com/linkedin/Liger-Kernel/pull/486
2,482
176
kashif
2024-12-19T10:09:55
waiting on https://github.com/huggingface/trl/pull/2502
2,482
177
qgallouedec
2024-12-19T10:33:44
@kashif can you share the curves once it's ready?
2,482
178
kashif
2024-12-29T14:45:28
tests fail as they need: https://github.com/linkedin/Liger-Kernel/pull/503
2,482
179
kashif
2024-12-15T09:31:55
@hteague-qti so I wanted to get it working with this collator first and then come back and make it more general after that... so would you have a suggestion on what the next generalization could be? Make it work for the SFT default collator?
2,481
180
hteague-qti
2024-12-16T19:28:17
I was thinking it could be made completely independent of the collator. The first thing might be to warn users that even though they are providing a collator in the args, you are switching to a different one (for now). It seems to me that the trainer should not care about the data preprocessing or the collator, just the output logits, etc. Making it work with the default collator in SFT would be fine. This one is quite common for language: `DataCollatorForCompletionOnlyLM`.
2,481
181
hteague-qti
2024-12-19T21:39:17
btw, appreciate the response.
2,481
182
HuggingFaceDocBuilderDev
2024-12-14T21:45:10
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2480). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,480
183
August-murr
2024-12-14T18:57:56
Before adding it to all the trainers, what do you think of the overall structure? Is it okay to include the tools in each trainer configuration?
2,479
184
qgallouedec
2024-12-14T19:05:11
Thanks for this addition! Let's keep things as separate as possible and keep this PR for DPO only. The code as is looks good to me. The only question is: can this type (`Optional[list[Union[dict, Callable]]]`) be parsed? I'll try.
2,479
185
qgallouedec
2024-12-14T19:27:17
That's what I thought:

```python
from trl import DPOConfig, TrlParser

parser = TrlParser((DPOConfig,))
parser.parse_args_and_config()
```

```
$ python 2479.py --output_dir out --tools "{'type': 'function', 'function': {'name': 'multiply', 'description': 'A function that multiplies two numbers', 'parameters': {'type': 'object', 'properties': {'a': {'type': 'number', 'description': 'The first number to multiply'}, 'b': {'type': 'number', 'description': 'The second number to multiply'}}, 'required': ['a', 'b']}}}"
[...]
2479.py: error: argument --tools: invalid Union value: "{'type': 'function', 'function': {'name': 'multiply', 'description': 'A function that multiplies two numbers', 'parameters': {'type': 'object', 'properties': {'a': {'type': 'number', 'description': 'The first number to multiply'}, 'b': {'type': 'number', 'description': 'The second number to multiply'}}, 'required': ['a', 'b']}}}"
```

I'm not sure what the best way to handle it is right now; I'll sleep on it.
2,479
186
August-murr
2024-12-15T08:51:45
> Let's keep things as separate as possible, and keep this PR for DPO only.

A different PR for each trainer, then?

> can this type (`Optional[list[Union[dict, Callable]]]`) be parsed

Adding tools to the CLI would be quite complicated; it wouldn't be practical to pass all the tools through the CLI. My best guess is to read the functions from another source, like another script, if there's a request for it later.
2,479
187
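One hedged sketch of the "read the functions from another source" idea: resolve dotted paths given on the CLI into callables. The helper name and spec format are hypothetical:

```python
import importlib


def load_tools(specs: list[str]) -> list:
    """Resolve e.g. 'my_tools.multiply' into the function it names."""
    tools = []
    for spec in specs:
        module_name, func_name = spec.rsplit(".", 1)
        module = importlib.import_module(module_name)
        tools.append(getattr(module, func_name))
    return tools
```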
August-murr
2024-12-16T08:22:54
does this need anything else? test or docs?
2,479
188
August-murr
2024-12-25T13:16:53
I also wanted to add it to `SFTTrainer` but it doesn't use `maybe_apply_chat_template`
2,479
189
HuggingFaceDocBuilderDev
2024-12-13T20:46:51
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2476). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,476
190
HuggingFaceDocBuilderDev
2024-12-13T19:02:02
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2475). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,475
191
HuggingFaceDocBuilderDev
2024-12-13T17:43:29
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2474). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,474
192
qgallouedec
2025-01-07T10:19:07
I realized that the problem was in fact not confined to DPO but also affects other trainers. So I thought twice about the approach, and I think the right solution is actually to gather at the time these metrics are calculated. It's the most logical and it's what's done in the other trainers. Thanks for spotting the issue; I'll wait until the CI is green and merge.
2,474
193
kashif
2025-01-07T10:24:55
@zhc7 accelerate has a dedicated helper for metrics: https://huggingface.co/docs/accelerate/en/package_reference/accelerator#accelerate.Accelerator.gather_for_metrics which is what is recommended to be used
2,474
194
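A minimal usage sketch of the helper kashif links (the metric tensor here is made up):

```python
import torch
from accelerate import Accelerator

accelerator = Accelerator()
# Per-process metric values; gather_for_metrics collects them across all
# processes and drops samples duplicated by the distributed sampler.
local_rewards = torch.tensor([0.5, 0.7], device=accelerator.device)
all_rewards = accelerator.gather_for_metrics(local_rewards)
print(all_rewards.mean())
```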
qgallouedec
2025-01-07T13:46:31
Valid point! thanks @kashif. I generalised `gather_for_metrics` in 17383f9
2,474
195
asparius
2024-12-14T00:28:52
It utilizes `self.model`, which is defined in [this line](https://github.com/huggingface/trl/blob/6d4ed070f1f53a87fb3cff2eb82a56db093bccc6/trl/trainer/rloo_trainer.py#L162). This approach is also adopted in `PPOTrainer`. I believe this is a deliberate nomenclature choice, designed to remain consistent across various preference learning frameworks without introducing the complexity of aligning with the diverse terminologies used in academic papers.
2,472
196
qgallouedec
2024-12-13T16:33:05
Yes, that's a good point! All datasets in [hf.co/trl-lib](https://huggingface.co/trl-lib) are taken from an original dataset. We should at least indicate this dataset in the readme with something like:

```
This dataset is a processed version of [openbmb/UltraFeedback](https://huggingface.co/datasets/openbmb/UltraFeedback) with this [script](https://github.com/huggingface/trl/blob/main/examples/datasets/ultrafeedback.py).
```

To do this, we should add to all scripts in https://github.com/huggingface/trl/blob/main/examples/datasets a model card that we push, like in https://github.com/huggingface/trl/blob/179ba5367181d9bd4bdaec70d50789b09754d04a/scripts/generate_tiny_models.py#L69-L97

We could also add the type/format of the dataset with a link to the relevant section in this page of the documentation: https://huggingface.co/docs/trl/en/dataset_formats
2,470
197
qgallouedec
2024-12-13T16:44:51
What you're describing sounds closer to _padding-free_ than packing. We have a (currently draft) PR for this: #2437. Can you confirm that it is what you're describing?

---

At this point I'm not even sure that packing for DPO makes sense. How do you ensure that you have as many chosen as rejected? How do you ensure they match? How do you handle partial sequences?
2,469
198
zhc7
2024-12-13T17:16:15
Hi, thank you for your response. I looked into the link you provided, and I think we are talking about the same thing. I used the word "packing" from https://huggingface.co/blog/packing-with-FA2. The "packing" here actually means concatenating a fixed batch size of samples into one sequence and using `position_ids` to mark the boundaries, rather than packing to a fixed length, so there won't be the problems you mentioned. I've also briefly read the blog https://huggingface.co/blog/mayank-mishra/padding-free-transformer, and I think the ideas are the same, but I'm not sure how the latter is implemented. Maybe they are the same thing just with different names :) I briefly went through the PR; I see it is trying to add `position_ids` to the whole process, so I guess we are talking about the same thing.
2,469
199
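A tiny sketch of the shared idea, assuming a Flash Attention 2 model that infers sequence boundaries from restarting `position_ids`:

```python
import torch

# Three tokenized samples packed into a single row. position_ids restart at
# 0 at each sample boundary, which keeps attention from crossing between
# samples even though there are no padding tokens.
seqs = [[5, 6, 7], [8, 9], [10, 11, 12, 13]]
input_ids = torch.tensor([t for s in seqs for t in s]).unsqueeze(0)
position_ids = torch.tensor([p for s in seqs for p in range(len(s))]).unsqueeze(0)
print(position_ids)  # tensor([[0, 1, 2, 0, 1, 0, 1, 2, 3]])
```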