
Quantization made by Richard Erkhov.

Github

Discord

Request more models

Ahma-3B - GGUF

| Name | Quant method | Size |
|------|--------------|------|
| Ahma-3B.Q2_K.gguf | Q2_K | 2.0GB |
| Ahma-3B.IQ3_XS.gguf | IQ3_XS | 2.0GB |
| Ahma-3B.IQ3_S.gguf | IQ3_S | 2.0GB |
| Ahma-3B.Q3_K_S.gguf | Q3_K_S | 2.0GB |
| Ahma-3B.IQ3_M.gguf | IQ3_M | 2.07GB |
| Ahma-3B.Q3_K.gguf | Q3_K | 2.15GB |
| Ahma-3B.Q3_K_M.gguf | Q3_K_M | 2.15GB |
| Ahma-3B.Q3_K_L.gguf | Q3_K_L | 2.22GB |
| Ahma-3B.IQ4_XS.gguf | IQ4_XS | 2.02GB |
| Ahma-3B.Q4_0.gguf | Q4_0 | 2.0GB |
| Ahma-3B.IQ4_NL.gguf | IQ4_NL | 2.02GB |
| Ahma-3B.Q4_K_S.gguf | Q4_K_S | 2.41GB |
| Ahma-3B.Q4_K.gguf | Q4_K | 2.57GB |
| Ahma-3B.Q4_K_M.gguf | Q4_K_M | 2.57GB |
| Ahma-3B.Q4_1.gguf | Q4_1 | 2.2GB |
| Ahma-3B.Q5_0.gguf | Q5_0 | 2.4GB |
| Ahma-3B.Q5_K_S.gguf | Q5_K_S | 2.6GB |
| Ahma-3B.Q5_K.gguf | Q5_K | 2.74GB |
| Ahma-3B.Q5_K_M.gguf | Q5_K_M | 2.74GB |
| Ahma-3B.Q5_1.gguf | Q5_1 | 2.6GB |
| Ahma-3B.Q6_K.gguf | Q6_K | 3.6GB |
| Ahma-3B.Q8_0.gguf | Q8_0 | 3.6GB |
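
To try one of these quantized files locally, you can download a single GGUF file and load it with `llama-cpp-python`, as sketched below. The repository id used here is an assumption (check the actual name of this quantization repo), and any filename from the table above can be substituted.

```python
# Minimal sketch: run a quantized Ahma-3B GGUF file with llama-cpp-python.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="RichardErkhov/Finnish-NLP_-_Ahma-3B-gguf",  # assumed repo id, verify before use
    filename="Ahma-3B.Q4_K_M.gguf",                      # any file from the table above
)

llm = Llama(model_path=model_path, n_ctx=2048)  # the model was pretrained with a 2048-token context

# Instruction-following prompt format used in the second pretraining stage (see below).
prompt = (
    " [INST] <<SYS>>\nOlet tekoälyavustaja. Vastaat aina mahdollisimman avuliaasti.\n<</SYS>>\n\n"
    "Mitä hyötyjä pienet avoimen lähdekoodin kielimallit tuovat? [/INST] "
)
output = llm(prompt, max_tokens=256, temperature=0.6, repeat_penalty=1.2)
print(output["choices"][0]["text"])
```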

Original model description:

language:
- fi
license: apache-2.0
tags:
- finnish
- llama
datasets:
- Finnish-NLP/CulturaX_fi_cleaned
- Finnish-NLP/HPLT_1.2_fi_cleaned
- Finnish-NLP/wikipedia_20231101_fi_cleaned
- Finnish-NLP/Reddit_fi_2006_2022
- intfloat/multilingual_cc_news
inference: false
pipeline_tag: text-generation

Ahma-3B for Finnish

Ahma-3B is a 3B-parameter decoder-only transformer model based on Meta's Llama (v1) architecture, pretrained from scratch on the Finnish language. The original Llama model architecture was introduced in this paper and first released at this page.

What does Ahma mean? Ahma is the Finnish word for wolverine! In Finnish Lapland, wolverines are the biggest cause of reindeer damage.

There are two base Ahma models of different sizes, both pretrained from scratch for 139B tokens:

| Model | Context length | Layers | Dim | Heads | Params |
|-------|----------------|--------|-----|-------|--------|
| Ahma-3B | 2048 | 26 | 3200 | 32 | 3.6B |
| Ahma-7B | 2048 | 32 | 4096 | 32 | 7.0B |

And two instruct-tuned versions:

| Model | Context length | Layers | Dim | Heads | Params |
|-------|----------------|--------|-----|-------|--------|
| Ahma-3B-Instruct | 2048 | 26 | 3200 | 32 | 3.6B |
| Ahma-7B-Instruct | 2048 | 32 | 4096 | 32 | 7.0B |

Intended uses & limitations

This model was pretrained only in a self-supervised way, without any supervised training. You can use this model for text generation or fine-tune it for a downstream task. This model followed a 2-stage pretraining approach where single-turn instruction-following examples were mixed in with the other training data in the second stage (explained more later in this readme). Thanks to this approach, this pretrained model is already capable of instruction following, but you might get even better results if you specifically fine-tune it for instruction following or other use cases. For instruction-following fine-tuning, you should use the same prompt format showcased below.

How to use

Fine-tuning

We have now added a fine-tuning example notebook along with a video!
Notebook: https://huggingface.co/Finnish-NLP/Ahma-3B/blob/main/Finetune_Ahma_3B_example.ipynb
Video: https://www.youtube.com/watch?v=6mbgn9XzpS4
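
If you want a code-level starting point in addition to the notebook, below is a minimal parameter-efficient fine-tuning sketch using LoRA via the `peft` library. The hyperparameters, target module names, and the tiny placeholder dataset are illustrative assumptions, not the settings used in the example notebook.

```python
# Minimal LoRA fine-tuning sketch (assumed setup, not the notebook's exact recipe).
from datasets import Dataset
from peft import LoraConfig, get_peft_model
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_name = "Finnish-NLP/Ahma-3B"
tokenizer = AutoTokenizer.from_pretrained(model_name)  # always AutoTokenizer, not LlamaTokenizer
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token  # needed for padding during batching
model = AutoModelForCausalLM.from_pretrained(model_name)

# Attach LoRA adapters to the attention projections; r/alpha and the target
# module names are illustrative guesses for this Llama-style architecture.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

# Tiny placeholder dataset in the instruction prompt format shown in the
# Inference section below; replace with your own instruction-following data.
examples = [
    " [INST] <<SYS>>\nOlet tekoälyavustaja.\n<</SYS>>\n\nMikä on Suomen pääkaupunki? [/INST] Helsinki."
]
dataset = Dataset.from_dict({"text": examples})
dataset = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=2048),
    batched=True,
    remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="./ahma-3b-lora",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        learning_rate=2e-4,
        num_train_epochs=1,
        logging_steps=10,
    ),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),  # causal LM labels
)
trainer.train()
```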

Inference

If you want to use this model for instruction following, you need to use the same prompt format we used in the second stage of pretraining (basically the same format Meta used in their Llama 2 models). Note: do not use "LlamaTokenizer" from the transformers library; always use AutoTokenizer instead, or use the plain SentencePiece tokenizer. Here is an example using the instruction-following prompt format, with some generation arguments you can modify for your use:

from transformers import AutoTokenizer, AutoModelForCausalLM

system_prompt = "Olet tekoälyavustaja. Vastaat aina mahdollisimman avuliaasti. Vastauksesi eivät saa sisältää mitään haitallista, epäeettistä, rasistista, seksististä, vaarallista tai laitonta sisältöä. Jos kysymyksessä ei ole mitään järkeä tai se ei ole asiasisällöltään johdonmukainen, selitä miksi sen sijaan, että vastaisit jotain väärin. Jos et tiedä vastausta kysymykseen, älä kerro väärää tietoa."


def format_prompt(prompt: str) -> str:
    prompt = f" [INST] <<SYS>>\n{system_prompt.strip()}\n<</SYS>>\n\n{prompt.strip()} [/INST] "
    return prompt


tokenizer = AutoTokenizer.from_pretrained("Finnish-NLP/Ahma-3B")
model = AutoModelForCausalLM.from_pretrained("Finnish-NLP/Ahma-3B")
model = model.to("cuda")

# use the custom prompt format function or the chat template feature in the tokenizer to format your inputs

# prompt = format_prompt("Mitä hyötyjä pienet avoimen lähdekoodin kielimallit tuovat?")
# inputs = tokenizer(prompt, return_tensors="pt")

messages = [
    {
        "role": "system",
        "content": system_prompt,
    },
    {"role": "user", "content": "Mitä hyötyjä pienet avoimen lähdekoodin kielimallit tuovat?"},
]
inputs = tokenizer.apply_chat_template(
    messages, tokenize=True, add_generation_prompt=True, return_tensors="pt"
)
inputs = inputs.to("cuda")

generated_ids = model.generate(
    inputs,
    temperature=0.6,
    penalty_alpha=0.6,
    top_k=4,
    do_sample=True,
    repetition_penalty=1.2,
    min_length=5,
    max_length=2048,
)
generated_text = tokenizer.batch_decode(
    generated_ids, skip_special_tokens=False
)[0]

# Pienillä avoimen lähdekoodin kielimalleilla on lukuisia etuja, kuten parempi tarkkuus, nopeampi käsittelyaika ja parempi skaalautuvuus. Ne ovat myös usein edullisempia käyttää kuin kaupalliset mallit, joten ne ovat hyvä valinta pienemmille organisaatioille ja yksityishenkilöille, joilla on rajoitettu budjetti. Lisäksi ne voivat tarjota paremman joustavuuden ja mukauttamisen, koska käyttäjät voivat räätälöidä malleja vastaamaan omia tarpeitaan. Kaiken kaikkiaan pienet avoimen lähdekoodin kielimallit tarjoavat merkittäviä etuja, kuten paremman suorituskyvyn, paremman tarkkuuden, nopeamman käsittelyajan ja paremman skaalautuvuuden.

You may experiment with different system prompt instructions too if you like.

Limitations and bias

This model was trained only on Finnish texts, excluding code, so it should not be used for multilingual or code-generation use cases.

The training data used for this model contains a lot of content from the internet, which is far from neutral. Therefore, the model can have biased predictions. This bias will also affect all fine-tuned versions of this model.

To reduce toxic content, the training data was filtered with a toxicity classifier, but that cannot truly eliminate all toxic text.

Training data

This model was pretrained on the combination of 14 datasets:

Raw datasets were automatically cleaned to filter out bad-quality and non-Finnish examples. In addition, a perplexity score was calculated for all texts with a KenLM model trained only on very clean Finnish texts. This perplexity score can then be used to determine how "clean" the Finnish in a text is. To reduce toxic text, we used the Finnish toxicity classifier TurkuNLP/bert-large-finnish-cased-toxicity released by TurkuNLP to classify all text examples. The classified toxicity label scores can then be used to determine how toxic a text is.

All datasets were concatenated and the whole dataset was near-deduplicated using MinHashLSH from text-dedup. The 95th-percentile perplexity score was used as a filtering threshold to filter out the worst-quality 5% of texts. To reduce the amount of toxic content, the dataset was filtered to include only text examples with scores lower than 80% for the toxicity labels "label_identity_attack", "label_insult", "label_threat" and "label_severe_toxicity".
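
The sketch below is a rough, hypothetical illustration of that filtering logic (not the actual pipeline code): texts above the 95th-percentile KenLM perplexity are dropped, and the remaining texts must score below 0.8 on each of the listed toxicity labels. The record fields are made up for the example.

```python
import numpy as np

# Hypothetical records: each text with a precomputed KenLM perplexity and
# per-label scores from the TurkuNLP/bert-large-finnish-cased-toxicity classifier.
records = [
    {
        "text": "Esimerkkiteksti ...",
        "perplexity": 120.0,
        "toxicity": {
            "label_identity_attack": 0.01,
            "label_insult": 0.02,
            "label_threat": 0.00,
            "label_severe_toxicity": 0.00,
        },
    },
    # ... more records
]

# Keep the 95% of texts with the lowest (cleanest) perplexity.
perplexities = np.array([r["perplexity"] for r in records])
ppl_threshold = np.percentile(perplexities, 95)

TOXICITY_LABELS = [
    "label_identity_attack",
    "label_insult",
    "label_threat",
    "label_severe_toxicity",
]

def keep(record: dict) -> bool:
    if record["perplexity"] > ppl_threshold:
        return False  # the worst-quality 5% by perplexity is dropped
    # All listed toxicity label scores must stay below the 0.8 threshold.
    return all(record["toxicity"][label] < 0.8 for label in TOXICITY_LABELS)

filtered = [r for r in records if keep(r)]
```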

Finally, 20,000 text examples from each of the CulturaX, Wikipedia, Yle, STT, Suomi24, and Reddit datasets were randomly selected for the evaluation dataset.

The final training dataset had 23 billion words (counted with the regex "\w+") and the evaluation dataset had 23 million words. After tokenization, the training dataset had 41 billion tokens and the evaluation dataset had 40 million tokens. For the 2-stage pretraining, the training datasets were divided as follows:

The first stage:

| Dataset | Words | Ratio |
|---------|-------|-------|
| CulturaX | 12.820B | 59.88% |
| HPLT v1.2 | 5.034B | 23.51% |
| Suomi24 | 3.018B | 14.09% |
| Reddit | 0.141B | 0.66% |
| CC-News | 0.311B | 1.45% |
| FI news corpus | 0.004B | 0.02% |
| Project Lönnrot | 0.083B | 0.39% |
| TOTAL | 21.410B | 100.0% |

The second stage:

| Dataset | Words | Ratio |
|---------|-------|-------|
| CulturaX (cleaner sample using KenLM perplexity score) | 2.252B | 55.48% |
| Wikipedia | 0.095B | 2.34% |
| STT | 0.253B | 6.23% |
| Yle | 0.212B | 5.22% |
| Finnish parliament speeches | 0.021B | 0.52% |
| Finnish higher education public theses | 0.855B | 21.07% |
| Finnish instruction-following datasets (note: 2X upsampled) | 0.371B | 9.14% |
| TOTAL | 4.059B | 100.0% |

Training procedure

Preprocessing

Texts are tokenized with Byte Pair Encoding (BPE), using the implementation from SentencePiece, splitting all numbers into individual digits and using bytes to decompose unknown UTF-8 characters. The total vocabulary size is 64k tokens. Inputs are sequences of 2048 consecutive tokens. Texts are not lowercased, so this model is case-sensitive: it makes a difference between finnish and Finnish. Both BOS and EOS tokens were used in pretraining.
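
A quick way to see these tokenizer properties in practice is to inspect a few tokenizations directly, as in the sketch below; the exact token strings you get are not guaranteed, but digit splitting and case sensitivity should be visible.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Finnish-NLP/Ahma-3B")

print(tokenizer.vocab_size)                      # ~64k vocabulary
print(tokenizer.tokenize("12345"))               # numbers are split into individual digits
print(tokenizer.tokenize("finnish"))             # case-sensitive:
print(tokenizer.tokenize("Finnish"))             # ...these two tokenize differently
print(tokenizer.bos_token, tokenizer.eos_token)  # BOS and EOS tokens used in pretraining
```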

2-stage pretraining

The model was trained on a TPUv4-32 VM, sponsored by the Google TPU Research Cloud. Training was conducted with a slightly modified Jax/Flax based EasyLM framework, inspired by the OpenLLaMA project. The optimizer used was Lion.

The 2-stage pretraining approach was inspired by MiniCPM findings. For the first stage (85% of the entire training), we used noisier web-scraped datasets. For the second stage (15% of the entire training), we primarily used cleaner datasets and instruction-following datasets shuffled together, like in MiniCPM. The learning rate schedule for the 2-stage pretraining was Warmup-Stable-Decay (WSD). During the first stage, the learning rate schedule had a linear warmup for about 8 billion tokens to a peak learning rate of 1e-4 (note: with the Lion optimizer, the learning rate had to be about 10 times smaller than with the commonly used AdamW), followed by a stable phase where the rate of 1e-4 was kept constant. During the second stage, the learning rate schedule had a linear decay from 1e-4 to 1e-5 for the first 13 billion tokens, followed by a stable phase for the remaining tokens.
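
For illustration, the schedule described above can be written as a simple function of tokens seen; the sketch below uses the numbers from this paragraph and places the stage boundary at 118 billion tokens based on the token counts given below. It is a reconstruction for clarity, not the actual training code.

```python
# Sketch of the Warmup-Stable-Decay (WSD) learning rate schedule, using the
# token counts from this model card (all values in billions of tokens).
WARMUP_TOKENS = 8.0    # linear warmup to the peak rate
STAGE1_TOKENS = 118.0  # end of the first pretraining stage (assumed from the 118B/21B split)
DECAY_TOKENS = 13.0    # linear decay at the start of the second stage
PEAK_LR = 1e-4         # ~10x smaller than typical AdamW rates, because of Lion
FINAL_LR = 1e-5

def wsd_learning_rate(tokens_seen_billions: float) -> float:
    t = tokens_seen_billions
    if t < WARMUP_TOKENS:                    # linear warmup
        return PEAK_LR * t / WARMUP_TOKENS
    if t < STAGE1_TOKENS:                    # stable phase of stage 1
        return PEAK_LR
    if t < STAGE1_TOKENS + DECAY_TOKENS:     # linear decay from 1e-4 to 1e-5 in stage 2
        frac = (t - STAGE1_TOKENS) / DECAY_TOKENS
        return PEAK_LR + frac * (FINAL_LR - PEAK_LR)
    return FINAL_LR                          # stable tail of stage 2

print(wsd_learning_rate(4.0), wsd_learning_rate(60.0), wsd_learning_rate(125.0), wsd_learning_rate(138.0))
```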

In the first stage, the model was trained for 118 billion tokens, which is about three epochs of the first-stage training data, inspired by the findings of this paper. In the second stage, the model was trained for 21 billion tokens, which is about three epochs of the second-stage training data.

Thanks to the WSD learning rate schedule, you can more easily experiment with different first-stage model checkpoints. For example, you could apply the second-stage training on an earlier checkpoint or continue pretraining further before the second stage. Model checkpoints were pushed to this repository every 100,000 training steps (approximately 13 billion tokens).
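
If you want to experiment with an intermediate checkpoint, you could load it with the `revision` argument of `from_pretrained`, assuming the checkpoints are exposed as branches or tags in the repository; the revision name below is purely hypothetical, so check the repository for the real checkpoint identifiers.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# "step-800000" is an illustrative revision name only; replace it with an
# actual branch/tag from the Finnish-NLP/Ahma-3B repository.
tokenizer = AutoTokenizer.from_pretrained("Finnish-NLP/Ahma-3B")
model = AutoModelForCausalLM.from_pretrained("Finnish-NLP/Ahma-3B", revision="step-800000")
```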

Evaluation results

FIN-bench

This Ahma 3B base model was primarily evaluated using FIN-bench by TurkuNLP, and the same evaluation was carried out for other relevant Finnish models for comparison: FinGPT 8B by TurkuNLP, Viking 7B by TurkuNLP, SiloGen and HPLT, and Poro 34B by SiloGen, TurkuNLP and HPLT. Below are the results with 0-shot and 3-shot settings in FIN-bench.

0-shot results:

| Benchmark | Ahma 3B base (instruct prompt format) | Ahma 3B Instruct (instruct prompt format) | Ahma 7B base (instruct prompt format) | Ahma 7B Instruct (instruct prompt format) | FinGPT 8B | Viking 7B | Poro 34B (8bit quant) |
|-----------|------|------|-----|-----|------|------|------|
| Analogies | 50.77 | 48.46 | TBA | TBA | 49.23 | 40.00 | 54.62 |
| Arithmetic | 27.64 | 22.14 | TBA | TBA | 33.15 | 30.16 | 30.34 |
| Cause and Effect | 59.48 | 58.82 | TBA | TBA | 66.01 | 58.82 | 62.74 |
| Emotions | 36.25 | 28.12 | TBA | TBA | 22.50 | 26.25 | 35.63 |
| Empirical Judgements | 33.33 | 35.35 | TBA | TBA | 27.27 | 33.33 | 49.49 |
| General Knowledge | 44.29 | 48.57 | TBA | TBA | 40.00 | 24.29 | 51.43 |
| HHH Alignment | 42.09 | 41.66 | TBA | TBA | 41.81 | 42.51 | 42.92 |
| Intent Recognition | 24.42 | 26.16 | TBA | TBA | 17.49 | 22.40 | 68.35 |
| Misconceptions | 46.27 | 47.01 | TBA | TBA | 53.73 | 53.73 | 52.24 |
| Paraphrase | 59.50 | 73.00 | TBA | TBA | 51.00 | 50.00 | 51.00 |
| Sentence Ambiguity | 53.33 | 65.00 | TBA | TBA | 51.67 | 48.33 | 50.00 |
| Similarities Abstraction | 65.79 | 68.42 | TBA | TBA | 60.53 | 65.79 | 60.53 |
| Non-Arithmetic Average | 47.55 | 48.95 | TBA | TBA | 46.17 | 44.42 | 52.08 |
| Overall Average | 36.49 | 34.06 | TBA | TBA | 38.93 | 36.50 | 40.00 |

3-shot results:

| Benchmark | Ahma 3B base (instruct prompt format) | Ahma 3B Instruct (instruct prompt format) | Ahma 7B base (instruct prompt format) | Ahma 7B Instruct (instruct prompt format) | FinGPT 8B | Viking 7B | Poro 34B (8bit quant) |
|-----------|------|------|-----|-----|------|------|------|
| Analogies | 50.77 | 49.23 | TBA | TBA | 40.77 | 54.62 | 76.92 |
| Arithmetic | 38.38 | 43.89 | TBA | TBA | 43.63 | 45.78 | 53.68 |
| Cause and Effect | 60.78 | 64.71 | TBA | TBA | 64.05 | 58.17 | 67.32 |
| Emotions | 30.00 | 41.25 | TBA | TBA | 44.37 | 48.13 | 56.87 |
| Empirical Judgements | 46.46 | 44.44 | TBA | TBA | 32.32 | 43.43 | 63.64 |
| General Knowledge | 47.14 | 40.00 | TBA | TBA | 54.29 | 28.57 | 74.29 |
| HHH Alignment | 43.53 | 44.80 | TBA | TBA | 45.39 | 44.80 | 46.07 |
| Intent Recognition | 20.52 | 44.22 | TBA | TBA | 51.45 | 58.82 | 83.67 |
| Misconceptions | 50.75 | 52.24 | TBA | TBA | 52.99 | 46.27 | 52.99 |
| Paraphrase | 50.50 | 58.50 | TBA | TBA | 53.00 | 54.50 | 55.00 |
| Sentence Ambiguity | 53.33 | 48.33 | TBA | TBA | 51.67 | 53.33 | 66.67 |
| Similarities Abstraction | 69.74 | 72.37 | TBA | TBA | 64.47 | 73.68 | 75.00 |
| Non-Arithmetic Average | 48.48 | 51.49 | TBA | TBA | 51.19 | 50.94 | 61.96 |
| Overall Average | 42.87 | 47.27 | TBA | TBA | 46.99 | 48.07 | 57.36 |

As we can see, the Ahma 3B base model outperforms 2X larger models such as FinGPT 8B and Viking 7B, especially in non-arithmetic tasks in 0-shot usage. Even the 10X larger Poro 34B model, which is generally better, doesn't show a huge performance difference considering its size, and Ahma 3B actually surpasses it in some tasks. This result might be attributed to Ahma's 2-stage pretraining and the inclusion of instruction-following examples during the pretraining phase.

In the 3-shot setting, the Ahma 3B base model improves its overall non-arithmetic score only slightly. This relatively small gain from few-shot examples might be due to the use of the instruct prompt format and the fact that pretraining included only single-turn instruction-following examples rather than few-shot examples.

MTBench Finnish

This Ahma 3B base model was also evaluated using MTBench Finnish by LumiOpen, even though this Ahma model is not fine-tuned for chat. Since MTBench also evaluates multi-turn chats while Ahma base models were only pretrained with single-turn instruction-following examples, we report the MTBench Finnish results separately for single-turn and multi-turn evaluation examples. The presumably multi-turn results of the Poro 34B Chat model by SiloGen, TurkuNLP and HPLT are copied from its model card for comparison.

Single-turn results:

| Benchmark | Ahma 3B base (instruct prompt format) | Ahma 3B Instruct | Ahma 7B base (instruct prompt format) | Ahma 7B Instruct |
|-----------|------|------|-----|-----|
| Coding | 1.00 | 1.00 | TBA | TBA |
| Extraction | 2.00 | 1.30 | TBA | TBA |
| Humanities | 4.05 | 6.20 | TBA | TBA |
| Math | 3.00 | 3.20 | TBA | TBA |
| Reasoning | 2.90 | 4.60 | TBA | TBA |
| Roleplay | 4.80 | 6.50 | TBA | TBA |
| STEM | 5.10 | 5.95 | TBA | TBA |
| Writing | 6.60 | 9.00 | TBA | TBA |
| Overall Average | 3.68 | 4.72 | TBA | TBA |

Multi-turn results:

| Benchmark | Ahma 3B base (instruct prompt format) | Ahma 3B Instruct | Ahma 7B base (instruct prompt format) | Ahma 7B Instruct | Poro 34B Chat |
|-----------|------|------|-----|-----|------|
| Coding | 1.00 | 1.00 | TBA | TBA | 3.70 |
| Extraction | 1.55 | 1.15 | TBA | TBA | 6.37 |
| Humanities | 3.25 | 6.20 | TBA | TBA | 9.25 |
| Math | 2.20 | 2.70 | TBA | TBA | 1.20 |
| Reasoning | 2.45 | 3.50 | TBA | TBA | 4.35 |
| Roleplay | 4.90 | 6.40 | TBA | TBA | 7.35 |
| STEM | 4.20 | 4.78 | TBA | TBA | 7.80 |
| Writing | 3.80 | 6.65 | TBA | TBA | 8.50 |
| Overall Average | 2.92 | 4.05 | TBA | TBA | 6.06 |

As we can see, the Ahma 3B base model struggles with multi-turn examples, as expected, since it has only been pretrained with single-turn instruction-following examples. In addition, coding performance was expectedly poor because the Ahma 3B model was not trained on code data. Ahma 3B also tended to repeat the generated text in some evaluation examples, which affected the scoring. With a repetition penalty added to the evaluation script's generation settings, the scores already improved significantly, so in real-world use the Ahma 3B model should be run with better generation settings than those used in this benchmark.

Acknowledgements

This project would not have been possible without compute generously provided by Google through the TPU Research Cloud.

Team Members

Feel free to contact us for more details 🤗

