_______       ___      .___  ___.   ______   .______      .______    __    __  
|       \     /   \     |   \/   |  /  __  \  |   _  \     |   _  \  |  |  |  | 
|  .--.  |   /  ^  \    |  \  /  | |  |  |  | |  |_)  |    |  |_)  | |  |__|  | 
|  |  |  |  /  /_\  \   |  |\/|  | |  |  |  | |      /     |   ___/  |   __   | 
|  '--'  | /  _____  \  |  |  |  | |  `--'  | |  |\  \----.|  |      |  |  |  | 
|_______/ /__/     \__\ |__|  |__|  \______/  | _| `._____|| _|      |__|  |__| 
                                                                               

DA-MIXED-CEREBRAS

This is an experimental Danish language model trained with a mix of tokenization strategies, combining morphological and BPE approaches. It is built on the CerebrasGPT-111M architecture and explores how mixed tokenization affects Danish text generation.
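
A minimal usage sketch with Hugging Face transformers is shown below. It assumes the checkpoint is hosted as meelu/DA-MIXED-CEREBRAS and that the bundled tokenizer loads through AutoTokenizer; if the mixed (morphological + BPE) setup ships a custom tokenizer class, adjust the loading call accordingly.

```python
# Sketch: load the model and generate a short Danish continuation.
# model_id and the prompt text are illustrative assumptions, not part of the card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meelu/DA-MIXED-CEREBRAS"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16)

prompt = "Danmark er et land i Skandinavien, hvor"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```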

Model size: 111M params
Tensor type: FP16
Format: Safetensors
