Anmol Dubey
committed on
Update README.md
README.md CHANGED
@@ -209,7 +209,7 @@ base_model: meta-llama/Llama-3.2-1B

# bobachicken/Llama-3.2-1B-Alpaca

-The Model [bobachicken/Llama-3.2-1B-Alpaca](https://huggingface.co/bobachicken/Llama-3.2-1B-Alpaca) was
+The model [bobachicken/Llama-3.2-1B-Alpaca](https://huggingface.co/bobachicken/Llama-3.2-1B-Alpaca) was finetuned on the dataset [bobachicken/alpaca-split](https://huggingface.co/datasets/bobachicken/alpaca-split), a split version of the Alpaca dataset [tatsu-lab/alpaca](https://huggingface.co/datasets/tatsu-lab/alpaca), with the [MLX framework](https://github.com/ml-explore) developed by Apple. Fine-tuning was performed with [DoRA](https://developer.nvidia.com/blog/introducing-dora-a-high-performing-alternative-to-lora-for-fine-tuning/) for a total of 2000 iterations.

## Use with mlx

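
For reference, a minimal sketch of loading the finetuned model with the mlx-lm Python API is shown below; the prompt and generation settings are illustrative assumptions and are not taken from the model card.

```python
# Minimal sketch: load the finetuned model with mlx-lm and run generation.
# The prompt and max_tokens value below are illustrative assumptions.
from mlx_lm import load, generate

model, tokenizer = load("bobachicken/Llama-3.2-1B-Alpaca")

prompt = "Give three tips for staying healthy."
response = generate(model, tokenizer, prompt=prompt, max_tokens=256, verbose=True)
print(response)
```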