library_name: transformers
tags:
- code
- math
license: apache-2.0
language:
- en
pipeline_tag: text-generation
Huginn-0125
This is Huginn, version 01/25. This is a latent recurrent-depth model with 3.5B parameters, trained for 800B tokens. This is a proof-of-concept model, but surprisingly capable in reasoning and code given its training budget and size. All details on this model can be found in the tech report: "Scaling up Test-Time Compute with Latent Reasoning: A Recurrent Depth Approach."
Downloading and Using the Model
Load the model like this:
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("tomg-group-umd/huginn-0125", torch_dtype=torch.bfloat16, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("tomg-group-umd/huginn-0125")
Fixed-Depth Usage
By providing the argument num_steps, the model will execute a pass with that amount of compute:
input_ids = tokenizer.encode("The capital of Westphalia is", return_tensors="pt", add_special_tokens=True).to(device)[:, :-1]
model.eval()
model.to(device)
model(input_ids, num_steps=32)
The model has about 1.5B parameters in non-recurrent code, 0.5B parameters in the embedding, and 1.5B recurrent parameters, so, as a guideline, the number of materialized parameters is num_steps * 1.5B + 2B. Playing with this parameter is what makes this model interesting (and different from fixed-depth transformers)!
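As a rough illustration of this guideline (a back-of-the-envelope sketch, not an exact parameter count):

# Rough materialized-parameter estimate from the guideline above (illustrative only)
def materialized_params(num_steps: int) -> float:
    return num_steps * 1.5e9 + 2e9

for steps in (4, 16, 32, 64):
    print(f"num_steps={steps:3d} -> ~{materialized_params(steps) / 1e9:.0f}B parameters' worth of compute")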
The model is trained to accept an arbitrary number of steps. However, using fewer than 4 steps will result in very coarse answers. If given enough context to reason about, benchmarks show the model improving up to around num_steps=64. Beyond that, more steps generally do not hurt, but we see no further improvements.
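If you want to see this effect yourself, a minimal sketch (reusing the model, tokenizer, and device from above, and assuming the forward pass returns a standard causal-LM output with logits) is to sweep num_steps on the same prompt and compare the greedy next token:

# Compare single forward passes at different recurrence depths (sketch)
prompt_ids = tokenizer.encode("The capital of Westphalia is", return_tensors="pt", add_special_tokens=True).to(device)[:, :-1]
with torch.no_grad():
    for steps in (1, 4, 16, 32, 64):
        logits = model(prompt_ids, num_steps=steps).logits
        next_token = logits[0, -1].argmax().item()
        print(steps, tokenizer.decode([next_token]))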
Inference
The model was trained with bfloat16-mixed precision, so we recommend using bfloat16 to run inference (or AMP bfloat16-mixed precision, if you really want). All benchmarks were evaluated in pure bfloat16.
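For example, if you prefer autocast-style mixed precision over casting the weights, a minimal sketch (assuming a CUDA device; pure bfloat16 as loaded above is the simpler default) looks like this:

# AMP-style inference: run the forward pass under bfloat16 autocast (sketch)
with torch.autocast(device_type="cuda", dtype=torch.bfloat16), torch.no_grad():
    outputs = model(input_ids, num_steps=32)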
Sampling
The model can be used like a normal HF model to generate text with KV-caching working as expected. You can provide num_steps directly to the generate call, for example:
from transformers import GenerationConfig

model.eval()
config = GenerationConfig(max_length=256, stop_strings=["<|end_text|>", "<|end_turn|>"],
                          use_cache=True,
                          do_sample=False, temperature=None, top_k=None, top_p=None, min_p=None,
                          return_dict_in_generate=True,
                          eos_token_id=65505, bos_token_id=65504, pad_token_id=65509)
input_ids = tokenizer.encode("The capital of Westphalia is", return_tensors="pt", add_special_tokens=True).to(device)[:, :-1]
outputs = model.generate(input_ids, config, tokenizer=tokenizer, num_steps=16)
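Since return_dict_in_generate=True is set, generate returns an output object rather than a bare tensor; assuming the standard Hugging Face generate output format, you can decode the result like this:

# outputs.sequences holds the prompt plus the generated continuation
print(tokenizer.decode(outputs.sequences[0], skip_special_tokens=False))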
Note: num_steps and other model arguments CANNOT be included in the GenerationConfig, as they will shadow model arguments at runtime.
Chat Templating
The model was not finetuned or post-trained, but, due to the inclusion of instruction data during pretraining, it natively understands its chat template. You can chat with the model like so:
messages = []
messages.append({"role": "system", "content": "You are a helpful assistant."})
messages.append({"role": "user", "content": "What do you think of Goethe's Faust?"})
formatted_messages = [{"role": "Huginn" if m["role"] == "assistant" else m["role"], "content": m["content"].strip()} for m in messages]
chat_input = tokenizer.apply_chat_template(formatted_messages, tokenize=False, add_generation_prompt=True)
print(chat_input)
input_ids = tokenizer.encode(chat_input, return_tensors="pt", add_special_tokens=False).to(device)
outputs = model.generate(input_ids, config, num_steps=64, tokenizer=tokenizer)
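To print only the assistant's reply, you can slice off the prompt tokens before decoding (a small usage sketch, again assuming the standard Hugging Face generate output format):

reply_ids = outputs.sequences[0, input_ids.shape[1]:]  # drop the prompt tokens
print(tokenizer.decode(reply_ids, skip_special_tokens=True))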
KV-cache Details
The model requires its own KV-cache implementation, HuginnDynamicCache; otherwise the KV-caches of later calls to the recurrent block will overwrite the earlier ones. This should be handled automatically by this implementation, but it may break with Hugging Face updates. If you do not use generate, but implement your own generation loop, use a pattern like this:
# first step: no cache yet
past_key_values = None
outputs = model(input_ids=input_ids, use_cache=True, past_key_values=past_key_values)
past_key_values = outputs.past_key_values  # should be an instance of HuginnDynamicCache
# next step: reuse the returned cache
outputs = model(input_ids=input_ids, use_cache=True, past_key_values=past_key_values)
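For instance, a minimal greedy decoding loop built on this pattern might look as follows. This is a sketch, not the reference implementation: it assumes the forward pass returns logits, that the cache handles sequence positions as in the standard HF pattern of feeding only the newest token, and it reuses the eos_token_id from the config above.

# Minimal greedy decoding with an explicit KV cache (sketch)
generated = input_ids
past_key_values = None
with torch.no_grad():
    for _ in range(32):
        if past_key_values is None:
            step_input = generated                  # first step: full prompt
        else:
            step_input = generated[:, -1:]          # later steps: only the newest token
        outputs = model(input_ids=step_input, use_cache=True, past_key_values=past_key_values, num_steps=32)
        past_key_values = outputs.past_key_values   # HuginnDynamicCache
        next_token = outputs.logits[:, -1].argmax(dim=-1, keepdim=True)
        generated = torch.cat([generated, next_token], dim=-1)
        if next_token.item() == 65505:              # eos_token_id used in the config above
            break
print(tokenizer.decode(generated[0]))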
Advanced Features
Per-Token Adaptive Compute
from transformers import DynamicCache

model.to(device=device, dtype=torch.bfloat16)
model.eval()
past_key_values = DynamicCache()
config = GenerationConfig(max_length=64, stop_strings=["<|end_text|>", "<|end_turn|>"],
                          use_cache=True, past_key_values=past_key_values,
                          do_sample=False, temperature=None, top_k=None, top_p=None, min_p=None,
                          return_dict_in_generate=True,
                          eos_token_id=65505, bos_token_id=65504, pad_token_id=65509)
# Note: num_steps and other model arguments CANNOT be included here, they will shadow model args at runtime
input_ids = tokenizer.encode("The capital of Westphalia is", return_tensors="pt", add_special_tokens=True).to(device)[:, :-1]
outputs = model.generate(input_ids, config, tokenizer=tokenizer)
KV-cache Sharing
Model Summary
The model is primarily structured around decoder-only transformer blocks. However, these blocks are structured into three functional groups: the prelude $P$, which embeds the input data into a latent space using multiple transformer layers, then the core recurrent block $R$, which is the central unit of recurrent computation modifying states $\mathbf{s} \in \mathbb{R}^{n \times h}$, and finally the coda $C$, which un-embeds from latent space using several layers and also contains the prediction head of the model.
Given a number of recurrent iterations $r$ and a sequence of input tokens $\mathbf{x} \in V^n$, these groups are used in the following way to produce output probabilities $\mathbf{p} \in \mathbb{R}^{n \times |V|}$:

$$\mathbf{e} = P(\mathbf{x})$$
$$\mathbf{s}_0 \sim \mathcal{N}(\mathbf{0}, \sigma^2 I_{n \cdot h})$$
$$\mathbf{s}_i = R(\mathbf{e}, \mathbf{s}_{i-1}) \quad \text{for } i \in \{1, \dots, r\}$$
$$\mathbf{p} = C(\mathbf{s}_r)$$

where $\sigma$ is the standard deviation of the initial random state. Given an initial random state $\mathbf{s}_0$, the model repeatedly applies the core block $R$, which accepts the latent state $\mathbf{s}_{i-1}$ and the embedded input $\mathbf{e}$ and outputs a new latent state $\mathbf{s}_i$. After finishing all iterations, the coda block processes the last state and produces the probabilities of the next token.
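In pseudo-PyTorch, the forward pass described by these equations could be sketched like this. It is purely illustrative: prelude, core_block, and coda are placeholder names, not the actual module names in the released code, and torch is assumed to be imported as above.

# Illustrative forward pass following the equations above (not the actual implementation)
def forward(x_tokens, r, prelude, core_block, coda, n, h, sigma):
    e = prelude(x_tokens)                             # e = P(x): embed tokens into latent space
    s = sigma * torch.randn(x_tokens.shape[0], n, h)  # s_0 ~ N(0, sigma^2 I): random initial state
    for _ in range(r):                                # r recurrent iterations
        s = core_block(e, s)                          # s_i = R(e, s_{i-1})
    return coda(s)                                    # p = C(s_r): un-embed and predict the next token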
Please refer to the tech report for performance on standard benchmarks.
Limitations
Our checkpoint is trained for only 47,000 steps on a broadly untested data mixture, and the learning rate is never cooled down from its peak. As an academic project, the model is trained only on publicly available data, and the 800B token count, while large in comparison to older fully open-source models such as the Pythia series, is small in comparison to modern open-source efforts such as OLMo, and tiny in comparison to the datasets used to train industrial open-weight models.
License
This model is released under the Apache 2.0 license.
Citation
@article{geiping2025scaling,
title={Scaling up Test-Time Compute with Latent Reasoning: A Recurrent Depth Approach},
author={Jonas Geiping and Sean McLeish and Neel Jain and John Kirchenbauer and Siddharth Singh and Brian R. Bartoldson and Bhavya Kailkhura and Abhinav Bhatele and Tom Goldstein},
year={2025},
eprint={2502.},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
Contact
Please feel free to contact us with any questions, or open a discussion thread on Hugging Face.