malhajar committed on
Commit 6d5a877 · 1 Parent(s): 413820f

Update README.md

Files changed (1):
  1. README.md +6 -8
README.md CHANGED

@@ -8,7 +8,7 @@ language:
 # Model Card for Model ID
 
 <!-- Provide a quick summary of what the model is/does. -->
-malhajar/Llama-2-13b-chat-dolly-tr is a finetuned version of Llama-2-7b-hf using SFT Training.
+malhajar/Llama-2-13b-chat-dolly-tr is a finetuned version of Llama-2-13b-hf using SFT Training.
 This model can answer information in turkish language as it is finetuned on a turkish dataset specifically [`databricks-dolly-15k-tr`](https://huggingface.co/datasets/atasoglu/databricks-dolly-15k-tr)
 
 ![llama](./llama.png)
@@ -31,7 +31,7 @@ Use the code sample provided in the original post to interact with the model.
 ```python
 from transformers import AutoTokenizer,AutoModelForCausalLM
 
-model_id = "malhajar/Llama-2-13b-chat-dolly-tr"
+model_id = "malhajar/Llama-2-7b-chat-dolly-tr"
 model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
 device_map="auto",
 torch_dtype=torch.float16,
@@ -39,17 +39,15 @@ model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
 
 tokenizer = AutoTokenizer.from_pretrained(model_id)
 
-question: "what is the will to truth?"
+question: "Türkiyenin en büyük şehir nedir?"
 # For generating a response
 prompt = '''
-### Instruction:
-{question}
-
-### Response:'''
+<s>[INST] {question} [/INST]
+'''
 input_ids = tokenizer(prompt, return_tensors="pt").input_ids
 output = model.generate(inputs=input_ids,max_new_tokens=512,pad_token_id=tokenizer.eos_token_id,top_k=50, do_sample=True,repetition_penalty=1.3
 top_p=0.95)
 response = tokenizer.decode(output[0])
 
 print(response)
-```
+```
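
Even after this commit, the README's code sample is not runnable as written: `model_name_or_path` is never defined (the model is loaded via `model_id`), `torch` is used but never imported, `question:` is a bare annotation rather than an assignment, the `{question}` placeholder is never substituted into the prompt, and a comma is missing between `repetition_penalty=1.3` and `top_p=0.95` in `model.generate()`. Below is a minimal corrected sketch, assuming the `model_id` set by this commit; the f-string prompt, `.to(model.device)`, and `skip_special_tokens=True` are illustrative additions, not part of the committed README.

```python
# Corrected version of the README usage snippet (a sketch, not the committed code).
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "malhajar/Llama-2-7b-chat-dolly-tr"  # value set by this commit

# The committed snippet passes an undefined `model_name_or_path`; use model_id.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    torch_dtype=torch.float16,
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

question = "Türkiyenin en büyük şehir nedir?"  # "What is Turkey's largest city?"

# Llama-2 chat prompt format, as introduced by this commit. Note that the
# tokenizer already prepends a BOS token, so the literal "<s>" may be redundant.
prompt = f"<s>[INST] {question} [/INST]"

input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(model.device)
output = model.generate(
    inputs=input_ids,
    max_new_tokens=512,
    pad_token_id=tokenizer.eos_token_id,
    do_sample=True,
    top_k=50,
    top_p=0.95,
    repetition_penalty=1.3,  # the committed snippet is missing a comma here
)
response = tokenizer.decode(output[0], skip_special_tokens=True)
print(response)
```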