Update README.md
---
base_model: unsloth/Llama-3.2-1B
language:
- en
- fi
license: llama3.2
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
datasets:
- wikimedia/wikipedia
---
Here's a "continued pre-trained" model using the [Finnish Wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia) dataset. I still don't understand why no one in Finland has figured out that they could just do continued pre-training on existing models that are already supported by every frontend. I've seen Japanese models perform pretty well with that kind of continued pre-training, yet Finnish models are still trained from scratch, and it shows: compared to Llama 3 or Gemma 2 they fall far behind, and they can't even match Mistral 7B, a model from last year. Just stop wasting money on training models from scratch; use these better models as a base and train them on all your closed-source data I don't have access to. Thank you.

Merged model: [mpasila/Llama-3.2-Finnish-Wikipedia-1B](https://huggingface.co/mpasila/Llama-3.2-Finnish-Wikipedia-1B)
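
To try the merged model, here's a minimal sketch using Transformers (my example, not part of the training setup; the Finnish prompt is just an illustration, and since this is a base model it does plain text completion rather than chat):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mpasila/Llama-3.2-Finnish-Wikipedia-1B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Plain completion: no chat template, just continue the prompt.
inputs = tokenizer("Suomi on maa, joka", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
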
Trained with regular LoRA (not quantized/QLoRA), with LoRA rank 128 and alpha set to 32. Trained for 1 epoch on an RTX 4090, which took about 12.5 hours.

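For reference, a rough sketch of what an equivalent Unsloth LoRA setup might look like. Only the base model, rank 128, alpha 32, and regular (non-4-bit) LoRA come from this card; the sequence length, target modules, and dataset snapshot are assumptions, not the actual settings used:

```python
from unsloth import FastLanguageModel
from datasets import load_dataset

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Llama-3.2-1B",
    max_seq_length=2048,   # assumption, not stated in this card
    load_in_4bit=False,    # regular LoRA, not QLoRA (stated in this card)
)

model = FastLanguageModel.get_peft_model(
    model,
    r=128,           # LoRA rank from this card
    lora_alpha=32,   # alpha from this card
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],  # typical choice, assumed
)

# Finnish Wikipedia via the wikimedia/wikipedia dataset (snapshot name assumed):
dataset = load_dataset("wikimedia/wikipedia", "20231101.fi", split="train")
```

Note that with alpha (32) much lower than rank (128), the standard LoRA scaling factor alpha/r works out to 0.25, which damps the adapter's contribution relative to the common alpha = rank setup.
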
# Uploaded Llama-3.2-Finnish-Wikipedia-LoRA-1B model

- **Developed by:** mpasila
- **License:** Llama 3.2 Community License Agreement
- **Finetuned from model:** unsloth/Llama-3.2-1B
|
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
|
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)