prithivMLmods committed: Update README.md
library_name: transformers
tags:
- text-generation-inference
---

![fghfghf.png](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/zcI6XhYs9oKapZxWDkVQJ.png)
Calcium 20B is built on the Llama 3.1 collection of multilingual large language models (LLMs): pretrained and instruction-tuned generative models optimized for multilingual dialogue use cases that outperform many available open-source alternatives.
Model Architecture: Llama 3.1 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions are fine-tuned with supervised fine-tuning (SFT) and reinforcement learning from human feedback (RLHF) to align with human preferences for helpfulness and safety. Calcium 20B is additionally trained on synthetic reasoning datasets for mathematical reasoning and science-based problem solving, with a focus on following instructions and keywords embedded in the input.
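Since the card lists `library_name: transformers`, the model can presumably be loaded with the standard Transformers causal-LM API. The sketch below is an assumption, not an official snippet: the repository id `prithivMLmods/Calcium-20B` is inferred from the author and model name and may differ, and the prompt is only an illustrative math-reasoning example.

```python
# Minimal usage sketch with Hugging Face Transformers.
# Assumption: the repo id below is inferred from the card; replace it
# with the actual model id if it differs.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "prithivMLmods/Calcium-20B"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",   # shard across available GPUs; a 20B model needs ~40 GB in fp16
    torch_dtype="auto",  # use the dtype the checkpoint was saved in
)

# Instruction-style prompt, matching the card's focus on
# instruction-following and mathematical reasoning.
messages = [
    {"role": "user", "content": "Solve step by step: if 3x + 5 = 20, what is x?"},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

`device_map="auto"` requires the `accelerate` package; on a single machine without enough GPU memory, the model can also be loaded in a quantized form via `bitsandbytes`.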