KARAKURI LM 32B Thinking 2501 Experimental

Model Details

Model Description

KARAKURI LM 32B Thinking 2501 Experimental is a 32.8B-parameter model fine-tuned from Qwen/Qwen2.5-32B and released with BF16 weights.

Usage

Run the model

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "karakuri-ai/karakuri-lm-32b-thinking-2501-exp"

# Load the model and tokenizer. device_map="auto" spreads the weights across
# available devices; torch_dtype="auto" keeps the native BF16 weights.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Build the prompt with the model's chat template.
messages = [
    {"role": "user", "content": "こんにちは。"}  # "Hello."
]
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

# Generate a response and decode only the newly generated tokens.
outputs = model.generate(input_ids, max_new_tokens=512)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:]))
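
The snippet above relies on the model's default decoding settings. If sampled outputs are preferred, the same generate call accepts the standard sampling parameters of the Transformers API; the values below are illustrative assumptions, not settings recommended by this card.

# Sampling-based variant (illustrative settings, not official recommendations).
outputs = model.generate(
    input_ids,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7,  # assumed value; tune for your use case
    top_p=0.9,        # assumed value; tune for your use case
)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:]))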

Training Details

Training Infrastructure

  • Hardware: The model was trained on 16 nodes of Amazon EC2 trn1.32xlarge instances (a rough accelerator count is sketched below).
  • Software: We used code based on neuronx-nemo-megatron.
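
For a rough sense of scale, the accelerator count implied by that setup can be tallied as follows. The per-instance figures (16 Trainium chips per trn1.32xlarge, 2 NeuronCores per chip) are assumptions based on AWS's published Trn1 instance specs, not details stated in this card.

# Back-of-the-envelope accelerator count for the training setup above.
# Per-instance figures are assumptions based on AWS Trn1 specs, not this card.
nodes = 16            # trn1.32xlarge instances
chips_per_node = 16   # Trainium chips per trn1.32xlarge (assumed)
cores_per_chip = 2    # NeuronCores per Trainium chip (assumed)

total_chips = nodes * chips_per_node        # 256 Trainium chips
total_cores = total_chips * cores_per_chip  # 512 NeuronCores
print(total_chips, total_cores)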

Acknowledgments

This work was supported by the Ministry of Economy, Trade and Industry (METI) and the New Energy and Industrial Technology Development Organization (NEDO) through the Generative AI Accelerator Challenge (GENIAC).

Citation

@misc{karakuri_lm_32b_thinking_2501_exp,
    author       = { {KARAKURI} {I}nc. },
    title        = { {KARAKURI} {LM} 32{B} {T}hinking 2501 {E}xperimental },
    year         = { 2025 },
    url          = { https://huggingface.co/karakuri-ai/karakuri-lm-32b-thinking-2501-exp },
    publisher    = { Hugging Face },
    journal      = { Hugging Face repository }
}