tFINE-850m-24x24-1024ctx

T5 model pretrained with nanoT5 (a short usage sketch follows the list below):

  • ~850M parameters; 24 encoder layers, 24 decoder layers
  • SentencePiece tokenizer with a 48k vocab & byte-pair fallback
    • handles whitespace etc. correctly (unlike the original T5 tokenizer)
  • 1024-token context length during pretraining
  • relative_attention_num_buckets increased from 32 to 48 to support the longer context length
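
Loading uses the standard Hugging Face transformers seq2seq API. A minimal sketch, assuming the usual T5Config attribute names; the config checks and the round-trip example text are illustrative, not from the original card:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "pszemraj/tFINE-850m-24x24-1024ctx"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# config should match the list above: 24 encoder + 24 decoder layers,
# relative_attention_num_buckets=48
print(model.config.num_layers, model.config.num_decoder_layers)
print(model.config.relative_attention_num_buckets)

# whitespace survives a tokenize/decode round trip
# (the original T5 tokenizer would collapse the newline and indentation)
text = "def greet(name):\n    return f'hello, {name}!'"
ids = tokenizer(text).input_ids
print(tokenizer.decode(ids, skip_special_tokens=True))
```

As with other pretrained-only T5 checkpoints, this model is intended as a base for fine-tuning rather than for direct instruction-style generation.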

Experiment logs

Training consisted of two phases:

  • TODO
  • TODO