---
license: mit
datasets:
- omarmomen/babylm_10M
language:
- en
metrics:
- perplexity
library_name: transformers
pipeline_tag: fill-mask
---
|
# Model Card for omarmomen/sf_babylm_1
|
|
|
This model is part of the experiments in my master's thesis titled "Linguistic Structure Induction from Language Models" (https://arxiv.org/abs/2403.09714).
|
|
|
"omarmomen/sf_babylm_1" is the StructFormer (SF_m=0) referred to in Chapter 5 (p. 59), where the parser network is position ahead of all the attention blocks. |
|
|
|
The model is trained on the BabyLM 10M dataset, using a RobertaTokenizer pretrained on the same dataset with a 16K-token vocabulary (https://huggingface.co/omarmomen/babylm_bpe_tokenizer_16k).
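
## How to use

The model can be queried with the `transformers` fill-mask pipeline. The snippet below is a minimal sketch, assuming the model repository bundles its tokenizer and the custom StructFormer modeling code (hence `trust_remote_code=True`):

```python
from transformers import pipeline

# Minimal sketch: assumes the repo ships its tokenizer and the custom
# StructFormer modeling code, so trust_remote_code=True is required.
unmasker = pipeline(
    "fill-mask",
    model="omarmomen/sf_babylm_1",
    trust_remote_code=True,
)

# The RobertaTokenizer uses <mask> as its mask token.
print(unmasker("The children played in the <mask>."))
```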
|
|
|
Paper: https://arxiv.org/abs/2403.09714