---
base_model: unsloth/Meta-Llama-3.1-8B-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
---

# LLAMA-3.1 8B Chat Nuclear Model

- **Developed by:** inetnuc
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Meta-Llama-3.1-8B-bnb-4bit

This LLAMA-3.1 model was finetuned to enhance its text-generation capabilities on nuclear-related topics. Training was accelerated using [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library, achieving roughly 2x faster training.

## Finetuning Process
The model was finetuned using the Unsloth library, leveraging its efficient training capabilities. The process included the following steps:

1. **Data Preparation:** Loaded and preprocessed nuclear-related data.
2. **Model Loading:** Utilized `unsloth/Meta-Llama-3.1-8B-bnb-4bit` as the base model.
3. **LoRA Patching:** Applied LoRA (Low-Rank Adaptation) for efficient training.
4. **Training:** Finetuned the model using Hugging Face's TRL library with optimized hyperparameters.
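The data-preparation step (1) can be sketched as follows. This is a minimal illustration, not the author's exact pipeline: the field names (`instruction`, `response`) and the use of Llama 3.1's chat-template special tokens are assumptions.

```python
# Minimal sketch of data preparation: rendering instruction/response pairs
# into Llama 3.1's chat format. Field names and template are assumptions.

LLAMA31_TEMPLATE = (
    "<|begin_of_text|><|start_header_id|>user<|end_header_id|>\n\n"
    "{instruction}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
    "{response}<|eot_id|>"
)

def format_example(example: dict) -> dict:
    """Render one instruction/response pair as a single training string."""
    return {
        "text": LLAMA31_TEMPLATE.format(
            instruction=example["instruction"],
            response=example["response"],
        )
    }

# Example: one hypothetical record from a nuclear-domain dataset.
sample = {
    "instruction": "What is the IAEA approach for cyber security?",
    "": "",
}
sample["response"] = "Example answer text."
print(format_example(sample)["text"])
```

A `Dataset.map(format_example)` call over the raw records would then produce the `text` column that TRL's `SFTTrainer` consumes.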

## Model Details

- **Base Model:** `unsloth/Meta-Llama-3.1-8B-bnb-4bit`
- **Language:** English (`en`)
- **License:** Apache-2.0

## Author

**MUSTAFA UMUT OZBEK**

**https://www.linkedin.com/in/mustafaumutozbek/**
**https://x.com/m_umut_ozbek**


## Usage

### Loading the Model

You can load the model and tokenizer using the following code snippet:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the tokenizer and model
tokenizer = AutoTokenizer.from_pretrained("inetnuc/Llama-3.1-8B-bnb-4bit-chat-nuclear-lora")
model = AutoModelForCausalLM.from_pretrained("inetnuc/Llama-3.1-8B-bnb-4bit-chat-nuclear-lora")

# Example of generating text
inputs = tokenizer("what is the iaea approach for cyber security?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```