Daemontatox committed
Update README.md
README.md
CHANGED
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "Explain the Pythagorean theorem step-by-step:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Optimized Inference

Install the `transformers` and `text-generation-inference` libraries.
Deploy on servers or edge devices using quantized models for optimal performance.
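The quantized deployment mentioned above can be sketched with 4-bit loading via `bitsandbytes`. This is a configuration sketch, not this model's published setup: the repository id is a placeholder and the quantization settings are illustrative assumptions.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Illustrative 4-bit (NF4) quantization settings -- tune for your hardware.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model_name = "Daemontatox/<model-id>"  # placeholder; substitute the actual repo id
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    device_map="auto",  # requires the accelerate package
)
```

For server deployment, the `text-generation-inference` launcher exposes a comparable `--quantize` option; consult the TGI documentation for the exact flags supported by your version.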
## Training Data

The fine-tuning process utilized reasoning-specific datasets, including:

- **MATH Dataset:** Focused on logical and mathematical problems.
- **Custom Corpora:** Tailored datasets for multi-domain reasoning and structured problem-solving.
## Ethical Considerations

- **Bias Awareness:** The model reflects biases present in the training data. Users should carefully evaluate outputs in sensitive contexts.
- **Safe Deployment:** Not recommended for generating harmful or unethical content.
## Acknowledgments

This model was developed with contributions from Daemontatox and the Unsloth team, utilizing state-of-the-art techniques in fine-tuning and optimization.

For more information or collaboration inquiries, please contact:

- Author: Daemontatox
- GitHub: Daemontatox GitHub Profile
- Unsloth: Unsloth GitHub