MISHANM committed
Commit ccef24d · verified · 1 Parent(s): 0175655

Update README.md

Files changed (1)
  1. README.md +18 -13
README.md CHANGED
@@ -3,23 +3,28 @@ base_model: meta-llama/Meta-Llama-3-8B-Instruct
 library_name: peft
 ---
 
-# MISHANM/Sindhi_text_generation_Llama3_8B_instruction
-
-This model is fine-tuned for the Sindhi language, capable of answering queries and translating text between English and Sindhi. It leverages advanced natural language processing techniques to provide accurate and context-aware responses.
+# MISHANM/Multilingual_Llama-3-8B-Instruct
+
+This model is fine-tuned for multiple languages, capable of answering queries and translating text from English into multiple languages. It leverages advanced natural language processing techniques to provide accurate and context-aware responses.
 
 
 ## Model Details
-1. Language: Sindhi
-2. Tasks: Question Answering, Translation (English to Sindhi)
-3. Base Model: meta-llama/Meta-Llama-3-8B-Instruct
+This model is based on meta-llama/Llama-3.2-3B-Instruct and has been LoRA fine-tuned on Indic datasets:
+1. Gujarati
+2. Kannada
+3. Hindi
+4. Odia
+5. Punjabi
+6. Bengali
+7. Tamil
+8. Telugu
 
 
 
 # Training Details
 
-The model is trained on approx. 29K instruction samples.
-1. GPUs: 4 x AMD Radeon PRO V620
+The model is trained on approx. 321K instruction samples.
+1. GPUs: 2 x AMD Instinct MI210 Accelerators
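Since the card keeps `library_name: peft` and the new Model Details describe a LoRA fine-tune, the repository may ship adapter weights rather than a merged checkpoint. A minimal loading sketch under that assumption (if the weights were merged before upload, the README's plain `AutoModelForCausalLM` call below is enough):

```python
# Minimal sketch: load the repo as a PEFT/LoRA adapter on top of its base model.
# Assumption: the repo contains an adapter_config.json; if the LoRA weights were
# merged before upload, AutoModelForCausalLM.from_pretrained(model_id) suffices.
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

model_id = "MISHANM/Multilingual_Llama-3-8B-Instruct"
model = AutoPeftModelForCausalLM.from_pretrained(model_id)  # base model resolved from adapter_config.json
tokenizer = AutoTokenizer.from_pretrained(model_id)
```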
 
@@ -34,7 +39,7 @@ from transformers import AutoModelForCausalLM, AutoTokenizer
 device = "cuda" if torch.cuda.is_available() else "cpu"
 
 # Load the fine-tuned model and tokenizer
-model_path = "MISHANM/Sindhi_text_generation_Llama3_8B_instruction"
+model_path = "MISHANM/Multilingual_Llama-3-8B-Instruct"
 model = AutoModelForCausalLM.from_pretrained(model_path)
 
 # Wrap the model with DataParallel if multiple GPUs are available
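The context line above references a DataParallel wrap whose body falls outside this hunk; a sketch of what that step typically looks like, reusing the snippet's own `model` and `device` names:

```python
import torch

# Replicate the model across all visible GPUs; with a single GPU (or CPU)
# the condition is false and the plain model is used as-is.
if torch.cuda.device_count() > 1:
    model = torch.nn.DataParallel(model)
model.to(device)

# Note: DataParallel hides the wrapped module, so later generation calls
# would need model.module.generate(...) instead of model.generate(...).
```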
@@ -53,7 +58,7 @@ def generate_text(prompt, max_length=1000, temperature=0.9):
     messages = [
         {
             "role": "system",
-            "content": "You are a Sindhi language expert and linguist; with the same knowledge, give answers in the Sindhi language.",
+            "content": "You are a language expert and linguist; with the same knowledge, give the response in ().",  # In place of "()", write the desired language for the response.
         },
         {"role": "user", "content": prompt}
     ]
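The `messages` list above is presumably converted into model inputs with the tokenizer's chat template; the call itself sits outside this hunk's context, but a sketch of the usual pattern:

```python
# Sketch: render the chat messages with the model's chat template and tokenize.
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,  # append the assistant header so the model starts its reply
    return_tensors="pt",
).to(device)
```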
@@ -69,7 +74,7 @@ def generate_text(prompt, max_length=1000, temperature=0.9):
     return tokenizer.decode(output[0], skip_special_tokens=True)
 
 # Example usage
-prompt = """Write a poem about LLMs."""
+prompt = """Write a story about LLMs."""
 translated_text = generate_text(prompt)
 print(translated_text)
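The `output` tensor decoded above comes from a `generate` call that this hunk does not show; a sketch consistent with `generate_text`'s signature (`max_length` and `temperature` are the function's own parameters; the sampling flag is an assumption):

```python
# Sketch: the generation step that yields `output` (elided from this hunk).
output = model.generate(
    inputs,
    max_new_tokens=max_length,   # generate_text's max_length parameter
    temperature=temperature,     # generate_text's temperature parameter
    do_sample=True,              # assumption: temperature only takes effect when sampling
)
```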
 
@@ -78,9 +83,9 @@ print(translated_text)
 
 ## Citation Information
 ```
-@misc{MISHANM/Sindhi_text_generation_Llama3_8B_instruction,
+@misc{MISHANM/Multilingual_Llama-3-8B-Instruct,
   author = {Mishan Maurya},
-  title = {Introducing Fine Tuned LLM for Sindhi Language},
+  title = {Introducing Fine Tuned LLM for Indic Languages},
   year = {2024},
   publisher = {Hugging Face},
   journal = {Hugging Face repository},
 