---
base_model: meta-llama/Meta-Llama-3-8B-Instruct
library_name: peft
datasets:
- Tensoic/Alpaca-Gujarati
- Tensoic/airoboros-3.2_kn
- ravithejads/samvaad-hi-filtered
- HydraIndicLM/hindi_alpaca_dolly_67k
- OdiaGenAI/Odia_Alpaca_instructions_52k
- OdiaGenAI/gpt-teacher-roleplay-odia-3k
- HydraIndicLM/punjabi_alpaca_52K
- HydraIndicLM/bengali_alpaca_dolly_67k
- abhinand/tamil-alpaca
- Telugu-LLM-Labs/telugu_alpaca_yahma_cleaned_filtered_romanized
---

# MISHANM/Multilingual_Llama-3-8B-Instruct

This model is fine-tuned for multiple Indic languages and is capable of answering queries and translating text from English into those languages. It leverages advanced natural language processing techniques to provide accurate and context-aware responses.


## Model Details
This model is based on meta-llama/Meta-Llama-3-8B-Instruct and has been LoRA fine-tuned on datasets covering the following languages (an adapter-loading sketch follows the list):
1. Gujarati
2. Kannada
3. Hindi
4. Odia
5. Punjabi
6. Bengali
7. Tamil
8. Telugu 
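
The `library_name: peft` metadata above indicates the model is published as a LoRA adapter. As a minimal sketch (not this card's official loading recipe), the adapter can be attached to the base model explicitly with `peft`; the model IDs are taken from this card, and the dtype choice is an assumption:

```python
# Sketch: attach the LoRA adapter to the base model with PEFT.
# Assumes the repo hosts a standard PEFT adapter; torch.bfloat16 is an
# illustrative choice, not a documented requirement.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Meta-Llama-3-8B-Instruct"
adapter_id = "MISHANM/Multilingual_Llama-3-8B-Instruct"

base_model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16)
model = PeftModel.from_pretrained(base_model, adapter_id)  # loads the LoRA weights
tokenizer = AutoTokenizer.from_pretrained(adapter_id)

# Optionally merge the adapter into the base weights for faster inference
model = model.merge_and_unload()
```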



## Training Details

The model was trained on approximately 321K instruction samples across the languages listed above. An illustrative LoRA configuration sketch follows the hardware note below.

- GPUs: 2 x AMD Instinct™ MI210 Accelerators
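
The exact fine-tuning hyperparameters are not published in this card. The sketch below only illustrates a `peft` LoRA setup for a model of this kind; every value (rank, alpha, dropout, target modules) is an assumption, not the actual training recipe:

```python
# Hedged sketch of a LoRA configuration for instruction tuning.
# All hyperparameter values here are illustrative assumptions.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base_model = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")

lora_config = LoraConfig(
    r=16,                          # assumed LoRA rank
    lora_alpha=32,                 # assumed scaling factor
    lora_dropout=0.05,             # assumed dropout
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed targets
    task_type="CAUSAL_LM",
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # LoRA trains only a small fraction of the weights
```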
  
   


## Inference with HuggingFace

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Set the device
device = "cuda" if torch.cuda.is_available() else "cpu"

# Load the fine-tuned model and tokenizer
model_path = "MISHANM/Multilingual_Llama-3-8B-Instruct"
model = AutoModelForCausalLM.from_pretrained(model_path)

# Wrap the model with DataParallel if multiple GPUs are available
if torch.cuda.device_count() > 1:
    print(f"Using {torch.cuda.device_count()} GPUs")
    model = torch.nn.DataParallel(model)

# Move the model to the appropriate device
model.to(device)

tokenizer = AutoTokenizer.from_pretrained(model_path)

# Function to generate text
def generate_text(prompt, max_new_tokens=1000, temperature=0.9):
    # Format the prompt according to the chat template
    messages = [
        {
            "role": "system",
            "content": "You are a language expert and linguist, with same knowledge give response in ().", #In place of "()" write your desired language in which response is required. ",
        },
        {"role": "user", "content": prompt}
    ]

    # Build the prompt string in the format used by this model card
    formatted_prompt = f"<|system|>{messages[0]['content']}<|user|>{messages[1]['content']}<|assistant|>"

    # Tokenize and generate output
    inputs = tokenizer(formatted_prompt, return_tensors="pt").to(device)
    # Unwrap DataParallel (when used) so .generate() is available
    gen_model = model.module if isinstance(model, torch.nn.DataParallel) else model
    output = gen_model.generate(
        **inputs, max_new_tokens=max_new_tokens, temperature=temperature, do_sample=True
    )
    return tokenizer.decode(output[0], skip_special_tokens=True)

# Example usage
prompt = """Write a story about LLM ."""
translated_text = generate_text(prompt)
print(translated_text)


```
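
A hypothetical follow-up call for translation, assuming the `()` placeholder in the system message above has been replaced with the desired target language (e.g. "Hindi"):

```python
# Hypothetical usage: with "()" in the system message set to "Hindi",
# ask the model to translate an English sentence.
prompt = "Translate the following sentence into Hindi: The weather is pleasant today."
print(generate_text(prompt))
```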

## Citation Information
```
@misc{MISHANM/Multilingual_Llama-3-8B-Instruct,
  author = {Mishan Maurya},
  title = {Introducing Fine Tuned LLM for Indic Languages},
  year = {2024},
  publisher = {Hugging Face},
  journal = {Hugging Face repository}
}
```


### Framework versions

- PEFT 0.12.0