Falcon3
The Falcon3 family of Open Foundation Models is a set of pretrained and instruct LLMs ranging from 1B to 10B parameters.
Falcon3-3B-Instruct achieves strong results on reasoning, language understanding, instruction following, code, and mathematics tasks. It supports four languages (English, French, Spanish, Portuguese) and a context length of up to 32K tokens.
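The snippet below runs a chat-style generation with the transformers library: it loads the model and tokenizer, renders the conversation with the model's chat template, generates a reply, and decodes only the newly generated tokens.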
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "tiiuae/Falcon3-3B-Instruct"

# Load the model with automatic dtype selection and device placement.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "How many hours in one day?"
messages = [
    {"role": "system", "content": "You are a helpful friendly assistant Falcon3 from TII, try to follow instructions as much as possible."},
    {"role": "user", "content": prompt}
]

# Render the conversation with the chat template and append the
# generation prompt so the model answers as the assistant.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=1024
)
# Strip the prompt tokens so only the newly generated answer is decoded.
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
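For quick experiments, the same conversation can also be run through the high-level `pipeline` API, which applies the chat template internally. This is a minimal sketch, not part of the original card, and assumes a recent transformers release with chat-message support in text-generation pipelines:

```python
from transformers import pipeline

# Chat-aware text-generation pipeline (assumes a recent transformers release).
pipe = pipeline(
    "text-generation",
    model="tiiuae/Falcon3-3B-Instruct",
    torch_dtype="auto",
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a helpful friendly assistant Falcon3 from TII, try to follow instructions as much as possible."},
    {"role": "user", "content": "How many hours in one day?"},
]

# The pipeline returns the conversation with the assistant's reply appended.
out = pipe(messages, max_new_tokens=256)
print(out[0]["generated_text"][-1]["content"])
```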
The following table reports benchmarks from our internal evaluation pipeline (a sketch of approximating such scores with public tooling follows the table).
| Category | Benchmark | Llama-3.2-3B-Instruct | Qwen2.5-3B-Instruct | Nemotron-Mini-4B-Instruct | Falcon3-3B-Instruct |
|---|---|---|---|---|---|
| General | MMLU (5-shot) | 29.3 | 56.2 | 56.4 | 55.7 |
| | MMLU-PRO (5-shot) | 11.9 | 17.2 | 23.3 | 29.7 |
| | IFEval | 73.9 | 64.2 | 66.5 | 68.3 |
| Math | GSM8K (5-shot) | 68.5 | 58.5 | 46.9 | 71.9 |
| | GSM8K (8-shot, COT) | 74.5 | 64.0 | 46.5 | 71.6 |
| | MATH Lvl-5 (4-shot) | 2.4 | 0.0 | 0.0 | 19.9 |
| Reasoning | Arc Challenge (25-shot) | 38.9 | 50.0 | 51.2 | 58.5 |
| | GPQA (0-shot) | 28.1 | 29.2 | 27.0 | 29.6 |
| | GPQA (0-shot, COT) | 11.3 | 11.0 | 12.2 | 26.5 |
| | MUSR (0-shot) | 34.9 | 40.2 | 38.9 | 39.0 |
| | BBH (3-shot) | 33.1 | 44.1 | 38.1 | 45.4 |
| CommonSense Understanding | PIQA (0-shot) | 74.6 | 73.8 | 74.6 | 75.6 |
| | SciQ (0-shot) | 77.2 | 60.7 | 71.0 | 95.5 |
| | Winogrande (0-shot) | - | - | - | 65.0 |
| | OpenbookQA (0-shot) | 40.8 | 41.2 | 43.2 | 42.2 |
| Instruction following | MT-Bench (avg) | 7.1 | 8.0 | 6.7 | 7.2 |
| | Alpaca (WC) | 19.4 | 19.4 | 9.6 | 15.5 |
| Tool use | BFCL AST (avg) | 85.2 | 84.8 | 59.8 | 65.3 |
| Code | EvalPlus (0-shot) (avg) | 55.2 | 69.4 | 40.0 | 52.9 |
| | Multipl-E (0-shot) (avg) | 31.6 | 29.2 | 19.6 | 32.9 |
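The exact internal pipeline is not public. As an assumption, few-shot scores in this style (MMLU 5-shot, GSM8K 5-shot, etc.) are commonly approximated with EleutherAI's lm-evaluation-harness; the sketch below shows one such run and should not be expected to reproduce the table's numbers exactly:

```python
# Sketch only: assumes the lm-evaluation-harness package (pip install lm-eval);
# this is a common public approximation, not the card's internal pipeline.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=tiiuae/Falcon3-3B-Instruct,dtype=auto",
    tasks=["mmlu", "gsm8k"],
    num_fewshot=5,
    batch_size=8,
)
# Per-task metrics live under results["results"].
print(results["results"])
```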
Coming soon....
If the Falcon3 family of models was helpful to your work, feel free to cite it:
```bibtex
@misc{Falcon3,
  title = {The Falcon 3 Family of Open Models},
  url = {https://huggingface.co/blog/falcon3},
  author = {Falcon-LLM Team},
  month = {December},
  year = {2024}
}
```