---
license: apache-2.0
---

| Model Name | Parameters | Class | Ratio | Tokens | Batch Size (Tokens) | Training Loss |
| --- | --- | --- | --- | --- | --- | --- |
| GerbilLab/GerbilBlender-A-3.3m | 3.3m | A-Class | 20 | 60M | 65.5k | 6.7417 |

"Blender" models, inspired by UL2 pretraining, are trained equally on fill-in-the-middle, causal language modelling, and masked language modelling tasks. Special tokens for these models include:

```
'', '', '', '', '', '', ''

# Example fill-in-the-middle
' this is an for fill-in-the-middle example text <|endoftext|>'

# Example causal language modelling
' this is an example text for causal language modelling <|endoftext|>'

# Example masked language modelling
' this is an text for masked language modelling example <|endoftext|>'
```
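As a rough illustration of how the three training formats above can be constructed from a plain text, here is a minimal Python sketch. The token names (`<fim_prefix>`, `<fim_suffix>`, `<fim_middle>`, `<mask>`) are hypothetical placeholders chosen for readability; the model's actual special tokens are defined in its tokenizer configuration.

```python
import random

# Hypothetical token names for illustration only; the real special tokens
# come from the model's tokenizer config.
FIM_PREFIX, FIM_SUFFIX, FIM_MIDDLE = "<fim_prefix>", "<fim_suffix>", "<fim_middle>"
MASK = "<mask>"
EOT = "<|endoftext|>"

def make_fim(text: str, rng: random.Random) -> str:
    """Split text into prefix/middle/suffix, then reorder so the model
    sees prefix and suffix first and learns to generate the middle."""
    words = text.split()
    i = rng.randint(1, len(words) - 2)       # start of the middle span
    j = rng.randint(i + 1, len(words) - 1)   # end of the middle span
    prefix, middle, suffix = words[:i], words[i:j], words[j:]
    return (f"{FIM_PREFIX} {' '.join(prefix)} "
            f"{FIM_SUFFIX} {' '.join(suffix)} "
            f"{FIM_MIDDLE} {' '.join(middle)} {EOT}")

def make_causal(text: str) -> str:
    """Plain left-to-right language-modelling example."""
    return f"{text} {EOT}"

def make_masked(text: str, rng: random.Random, p: float = 0.15) -> str:
    """Replace a random subset of tokens with a mask token."""
    words = text.split()
    masked = [MASK if rng.random() < p else w for w in words]
    return f"{' '.join(masked)} {EOT}"

rng = random.Random(0)
sample = "this is an example text for pretraining"
print(make_fim(sample, rng))
print(make_causal(sample))
print(make_masked(sample, rng))
```

Mixing these three objectives in equal proportion during pretraining is the "Blender" recipe described above.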