manu committed on
Commit f2f108d
Parent(s): 0952a5d

Update README.md

Files changed (1):
  1. README.md +36 -0
README.md CHANGED
@@ -68,6 +68,42 @@ Our work can be cited as:

This model is a Chat model; that is, it is finetuned for chat interactions and works best with the provided template.

+ #### With pipeline
+
+ ```python
+ import torch
+ from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
+
+
+ model_name = "croissantllm/CroissantLLMChat-v0.1"
+ tokenizer = AutoTokenizer.from_pretrained(model_name)
+ model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16, device_map="auto")
+
+ messages = [
+     {"role": "user", "content": "Qui est le président français ?"},
+ ]
+
+ pipe = pipeline(
+     "text-generation",
+     model=model,
+     tokenizer=tokenizer,
+ )
+
+ generation_args = {
+     "max_new_tokens": 500,
+     "return_full_text": False,
+     "temperature": 0.0,  # ignored when do_sample=False (greedy decoding)
+     "do_sample": False,
+ }
+
+ output = pipe(messages, **generation_args)
+ print(output[0]['generated_text'])
+ ```
+
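As an aside: when `pipe` is given a list of chat messages like the one above, recent `transformers` versions format it with the model's chat template before generation. A minimal sketch for inspecting that formatted prompt, assuming the `tokenizer` and `messages` objects from the snippet above (this sketch is not part of the committed README):

```python
# Minimal sketch: inspect the prompt string the chat template produces.
# Assumes `tokenizer` and `messages` are defined as in the snippet above.
prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,              # return the formatted string rather than token ids
    add_generation_prompt=True,  # append the opener of the assistant turn
)
print(prompt)
```

For a ChatML-style template (which the `<|im_end|>` note below suggests this model uses), each turn should appear wrapped in `<|im_start|>` / `<|im_end|>` markers.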
+ #### With generate
+
+ This might require a stopping criterion on the <|im_end|> token.
+
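One possible implementation, as a minimal sketch against the `StoppingCriteria` interface in `transformers` (the `StopOnToken` class name and the commented-out `generate` call are illustrative additions, not part of the committed README):

```python
import torch
from transformers import StoppingCriteria, StoppingCriteriaList

class StopOnToken(StoppingCriteria):
    """Stop generation once the last generated token matches a given id (batch size 1)."""

    def __init__(self, stop_token_id: int):
        self.stop_token_id = stop_token_id

    def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs) -> bool:
        # input_ids holds the running sequence; check its most recent token.
        return input_ids[0, -1].item() == self.stop_token_id

# Assumes `tokenizer` and `model` from the snippets above.
im_end_id = tokenizer.convert_tokens_to_ids("<|im_end|>")
stopping_criteria = StoppingCriteriaList([StopOnToken(im_end_id)])
# outputs = model.generate(**inputs, stopping_criteria=stopping_criteria, max_new_tokens=500)
```

Passing `stopping_criteria` to `model.generate` then halts decoding at the first `<|im_end|>`.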
 ```python
 
 import torch