Safetensors · qwen2

linqq9 committed on · verified
Commit 530dfdc · 1 Parent(s): 0133e65

Update README.md

Files changed (1):
  1. README.md +1 -13
README.md CHANGED
@@ -579,16 +579,4 @@ inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, ret
 # tokenizer.eos_token_id is the id of <|EOT|> token
 outputs = model.generate(inputs, max_new_tokens=512, do_sample=False, num_return_sequences=1, eos_token_id=tokenizer.eos_token_id)
 print(tokenizer.decode(outputs[0][len(inputs[0]):], skip_special_tokens=True))
-~~~
-
-
-
-## References
-- 1.Yan F, Mao H, Ji C C-J, et al. Berkeley Function Calling Leaderboard.
-
-- 2. Abdelaziz I, Basu K, Agarwal M, et al. Granite-Function Calling Model: Introducing Function Calling Abilities via Multi-task Learning of Granular Tasks[J]. arXiv preprint arXiv:2407.00121, 2024.
-
-- 3. Wu M, Zhu T, Han H, et al. Seal-Tools: Self-Instruct Tool Learning Dataset for Agent Tuning and Detailed Benchmark[J]. arXiv preprint arXiv:2405.08355, 2024.
-
-
-Feel free to reach out for further clarifications or contributions!
+~~~
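
For context, the unchanged lines in this hunk are the tail of the README's inference example. Below is a minimal, self-contained sketch of that flow (build the prompt with the chat template, generate, then decode only the newly produced tokens). The model ID and the user message are placeholders assumed for illustration, not values taken from this repository.

```python
# Sketch of the generation flow shown in the diff's context lines.
# NOTE: "your-org/your-model" and the example message are placeholders,
# not values from this repository's README.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-org/your-model"  # placeholder checkpoint ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "What's the weather like in Berlin today?"}]

# Build the prompt with the model's chat template and move it to the model's device.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# tokenizer.eos_token_id is the id of the <|EOT|> token (per the README comment).
outputs = model.generate(
    inputs,
    max_new_tokens=512,
    do_sample=False,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
)

# Decode only the newly generated tokens, skipping special tokens.
print(tokenizer.decode(outputs[0][len(inputs[0]):], skip_special_tokens=True))
```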