bol20162021 committed (verified)
Commit a185d32 · 1 parent: e2ed40c

Update README.md

Files changed (1): README.md (+14 −1)
README.md CHANGED
@@ -33,10 +33,23 @@ SambaLingo-Turkish-Chat is a human aligned chat model trained in Turkish and Eng
 ```python
 from transformers import AutoModelForCausalLM, AutoTokenizer
 
-tokenizer = AutoTokenizer.from_pretrained("sambanovasystems/SambaLingo-Turkish-Chat")
+tokenizer = AutoTokenizer.from_pretrained("sambanovasystems/SambaLingo-Turkish-Chat", use_fast=False)
 model = AutoModelForCausalLM.from_pretrained("sambanovasystems/SambaLingo-Turkish-Chat", device_map="auto", torch_dtype="auto")
 ```
 
+### Interacting with the model using a pipeline
+```python
+from transformers import pipeline
+
+pipe = pipeline("text-generation", model="sambanovasystems/SambaLingo-Turkish-Chat", device_map="auto", use_fast=False)
+messages = [
+    {"role": "user", "content": "YOUR_QUESTION"},
+]
+prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
+outputs = pipe(prompt)[0]
+generated_text = outputs["generated_text"]
+```
+
 ### Suggested Inference Parameters
 - Temperature: 0.8
 - Repetition penalty: 1.0
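
The suggested inference parameters map directly onto `transformers` generation kwargs, which the pipeline added in this commit accepts at call time. A minimal sketch of putting the two together, not part of the commit; `max_new_tokens` and the example question are illustrative assumptions:

```python
from transformers import pipeline

# Load the chat pipeline with the slow (SentencePiece) tokenizer, as in the README.
pipe = pipeline("text-generation", model="sambanovasystems/SambaLingo-Turkish-Chat",
                device_map="auto", use_fast=False)

# Build the prompt with the model's chat template.
messages = [{"role": "user", "content": "Merhaba, nasılsın?"}]  # illustrative question
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Apply the README's suggested inference parameters.
outputs = pipe(
    prompt,
    do_sample=True,          # temperature only takes effect when sampling
    temperature=0.8,         # suggested value
    repetition_penalty=1.0,  # suggested value
    max_new_tokens=256,      # assumption: not specified in the commit
)
print(outputs[0]["generated_text"])
```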
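For the direct `AutoModelForCausalLM` path from the first snippet, generation with the same suggested parameters would look roughly as follows; the chat-template tokenization and decode steps are a sketch assuming the standard `transformers` API, and `max_new_tokens` is again an assumption:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("sambanovasystems/SambaLingo-Turkish-Chat", use_fast=False)
model = AutoModelForCausalLM.from_pretrained(
    "sambanovasystems/SambaLingo-Turkish-Chat", device_map="auto", torch_dtype="auto")

messages = [{"role": "user", "content": "Merhaba, nasılsın?"}]  # illustrative question
# apply_chat_template can tokenize directly and return PyTorch tensors.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

with torch.no_grad():
    output_ids = model.generate(
        input_ids,
        do_sample=True,
        temperature=0.8,         # suggested value
        repetition_penalty=1.0,  # suggested value
        max_new_tokens=256,      # assumption: not specified in the commit
    )

# Decode only the tokens generated after the prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```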