---
license: apache-2.0
---

| Model Name | Parameters | Class | Ratio | Tokens | Batch Size (Tokens) | Training Loss |
| --- | --- | --- | --- | --- | --- | --- |
| GerbilLab/GerbilBlender-A-3.3m | 3.3m | A-Class | 20 | 60M | 65.5k | 6.7417 |

"Blender" models, inspired by UL2 pretraining, are trained on an equal mix of fill-in-the-middle, causal language modelling, and masked language modelling objectives. The special tokens used by these models are:

```
'<fitm_start>', '<multiple_tok_mask>', '<fitm_result>', '<causal>', '<mlm_start>', '<single_tok_mask>', '<mlm_end>'

# Example fill-in-the-middle
'<fitm_start> this is an <multiple_tok_mask> for fill-in-the-middle <fitm_result> example text <|endoftext|>'

# Example causal language modelling
'<causal> this is an example text for causal language modelling <|endoftext|>'

# Example masked language modelling
'<mlm_start> this is an <single_tok_mask> text for masked language modelling <mlm_end> example <|endoftext|>'
```
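As a minimal sketch, the three prompt formats above can be assembled with plain string formatting. The helper names below are illustrative only and are not part of any released API; they simply reproduce the token layouts shown in the examples.

```python
# Hypothetical helpers that build the three Blender training formats.
# Function names are illustrative, not from the model's codebase.

def format_fitm(prefix: str, suffix: str, middle: str) -> str:
    # Fill-in-the-middle: the masked span is moved after <fitm_result>.
    return (f"<fitm_start> {prefix} <multiple_tok_mask> {suffix} "
            f"<fitm_result> {middle} <|endoftext|>")

def format_causal(text: str) -> str:
    # Plain left-to-right causal language modelling.
    return f"<causal> {text} <|endoftext|>"

def format_mlm(masked_text: str, answer: str) -> str:
    # Masked language modelling: masked_text contains <single_tok_mask>;
    # the answer token is appended after <mlm_end>.
    return f"<mlm_start> {masked_text} <mlm_end> {answer} <|endoftext|>"

print(format_fitm("this is an", "for fill-in-the-middle", "example text"))
print(format_causal("this is an example text for causal language modelling"))
```

Formatting inputs this way lets a single model switch between the three objectives at inference time purely via the leading special token.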