parinitarahi committed
Update README.md

README.md CHANGED
@@ -56,8 +56,8 @@ To easily get started with the model, you can use our newly introduced ONNX Runt
 ## ONNX Models
 Here are some of the optimized configurations we have added:
 
-1. ONNX model for int4 CPU and Mobile: ONNX model for CPU and mobile using int4 quantization via RTN
-2. ONNX model for int4
+1. ONNX model for int4 CPU and Mobile: ONNX model for CPU and mobile using int4 quantization via RTN
+2. ONNX model for int4 GPU using quantization via RTN.
 
 
 ## Hardware Supported
@@ -73,10 +73,11 @@ Minimum Configuration Required:
 
 ## Model Description
 
-- **Developed by:**
+- **Developed by:** ONNX Runtime, Microsoft
 - **Model type:** ONNX
 - **Language(s) (NLP):** Python, C, C++
 - **License:** MIT
+- **License:** Use of Llama 3.2 is governed by the [Llama 3.2 Community License](https://github.com/meta-llama/llama-models/blob/main/models/llama3_2/LICENSE) (a custom, commercial license agreement).
 - **Model Description:** This is a conversion of the Llama 3.2 model for ONNX Runtime inference.
 - **Disclaimer:** The model is only an optimization of the base model; any risk associated with the model is the responsibility of the user of the model. Please verify and test for your scenarios. There may be a slight difference in output from the base model with the optimizations applied.
 
@@ -103,29 +104,7 @@ This joke plays on the double meaning of "make up." In science, atoms are the fu
 Up to 1.4X faster than llama.cpp on Standard F16s v2 (16 vcpus, 32 GiB memory).
 Up to 39X faster than PyTorch compile on Standard_ND96amsr_A100_v4.
 
-
 ## Base Model Information
-
-
-
-**Base Model Developer:** Meta
-
-**Base Model Architecture:** Llama 3.2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
-
-| | Training Data | Params | Input modalities | Output modalities | Context Length | GQA | Shared Embeddings | Token count | Knowledge cutoff |
-| :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- |
-| Llama 3.2 (text only) | A new mix of publicly available online data. | 1B (1.23B) | Multilingual Text | Multilingual Text and code | 128k | Yes | Yes | Up to 9T tokens | December 2023 |
-| | | 3B (3.21B) | Multilingual Text | Multilingual Text and code | | | | | |
-
-**Supported Languages:** English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai are officially supported. Llama 3.2 has been trained on a broader collection of languages than these 8 supported languages. Developers may fine-tune Llama 3.2 models for languages beyond these supported languages, provided they comply with the Llama 3.2 Community License and the Acceptable Use Policy. Developers are always expected to ensure that their deployments, including those that involve additional languages, are completed safely and responsibly.
-
-**Llama 3.2 Model Family:** Token counts refer to pretraining data only. All model versions use Grouped-Query Attention (GQA) for improved inference scalability.
-
-**Base Model Release Date:** Sept 25, 2024
-
-**Status:** This is a static model trained on an offline dataset. Future versions may be released that improve model capabilities and safety.
-
-**License:** Use of Llama 3.2 is governed by the [Llama 3.2 Community License](https://github.com/meta-llama/llama-models/blob/main/models/llama3_2/LICENSE) (a custom, commercial license agreement).
-
-"See Meta's model card at [Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct)
-for more information about the base model, including the base model's specific approach to responsible AI risks"
+See Meta's model card at
+[Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct)
+for more information about the base model, including the base model's specific approach to responsible AI risks.
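To make the commit's additions concrete: the int4 configurations listed in the first hunk are consumed through the ONNX Runtime generate() API referenced at the top of the README. Below is a minimal sketch, assuming the `onnxruntime-genai` Python package (a recent release; the generator API has shifted between versions) and a hypothetical local folder containing the downloaded CPU int4 model.

```python
# Minimal sketch: greedy decoding with the ONNX Runtime generate() API.
# Assumes `pip install onnxruntime-genai` (recent release) and that
# model_dir points at a downloaded int4 ONNX model folder.
import onnxruntime_genai as og

model_dir = "./llama-3.2-1b-instruct-onnx/cpu-int4-rtn"  # hypothetical path
model = og.Model(model_dir)
tokenizer = og.Tokenizer(model)

params = og.GeneratorParams(model)
params.set_search_options(max_length=256)

generator = og.Generator(model, params)
generator.append_tokens(tokenizer.encode("Tell me a joke about atoms."))
while not generator.is_done():
    generator.generate_next_token()  # one token per step

print(tokenizer.decode(generator.get_sequence(0)))
```

For the GPU int4 variant from the same hunk, the expectation is that only the model folder and a CUDA-capable build of the package change; that is an assumption of this sketch, not something the commit states.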