
DictaBERT: A State-of-the-Art BERT Suite for Modern Hebrew

State-of-the-art language model for parsing Hebrew, released here.

This is the fine-tuned model for the joint parsing of the following tasks:

  • Prefix Segmentation
  • Morphological Disambiguation
  • Lexicographical Analysis (Lemmatization)
  • Syntactic Parsing (Dependency Tree)
  • Named-Entity Recognition

This model was initialized from dictabert-joint and fine-tuned on the Hebrew UD Treebank and NEMO corpora, to align the model's predictions with the tagging methodology of those corpora.

A live demo of the dictabert-joint model with instant visualization of the syntax tree can be found here.

For a faster model, you can use the equivalent bert-tiny model for this task here.

For the bert-base models for other tasks, see here.

For our most accurate model, built upon BERT-Large, see here.


The model currently supports 3 types of output:

  1. JSON: The model returns a JSON object for each sentence in the input, where for each sentence we have the sentence text, the NER entities, and the list of tokens. For each token we include the output from each of the tasks.

    model.predict(..., output_style='json')
    
  2. UD: The model returns the full UD output for each sentence, according to the style of the Hebrew UD Treebank.

    model.predict(..., output_style='ud')
    
  3. UD, in the style of IAHLT: The model returns the full UD output, with slight modifications to match the style of IAHLT. The differences are mostly in the granularity of some dependency relations, in how word suffixes are broken up, and in implicit definite articles. The underlying tagging behavior doesn't change.

    model.predict(..., output_style='iahlt_ud')
    

If you only need the output for one of the tasks, you can tell the model not to initialize some of the heads, for example:

model = AutoModel.from_pretrained('dicta-il/dictabert-parse', trust_remote_code=True, do_lex=False)

The full list of options is: do_lex, do_syntax, do_ner, do_prefix, do_morph.
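
For instance, a short sketch that loads the model with only the NER and prefix-segmentation heads (passing several of these flags together is assumed to behave like the single-flag example above):

from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('dicta-il/dictabert-parse')
# skip the lexical, syntax, and morphology heads; only NER and prefix segmentation are initialized
model = AutoModel.from_pretrained('dicta-il/dictabert-parse', trust_remote_code=True,
                                  do_lex=False, do_syntax=False, do_morph=False)
model.eval()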


Sample usage:

from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('dicta-il/dictabert-parse')
model = AutoModel.from_pretrained('dicta-il/dictabert-parse', trust_remote_code=True)

model.eval()

sentence = 'בשנת 1948 השלים אפרים קישון את לימודיו בפיסול מתכת ובתולדות האמנות והחל לפרסם מאמרים הומוריסטיים'
print(model.predict([sentence], tokenizer, output_style='json')) # see below for other return formats

Output:

[
  {
    "text": "בשנת 1948 השלים אפרים קישון את לימודיו בפיסול מתכת ובתולדות האמנות והחל לפרסם מאמרים הומוריסטיים",
    "tokens": [
      {
        "token": "בשנת",
        "offsets": {
          "start": 0,
          "end": 4
        },
        "syntax": {
          "word": "בשנת",
          "dep_head_idx": 2,
          "dep_func": "obl",
          "dep_head": "השלים"
        },
        "seg": [
          "ב",
          "שנת"
        ],
        "lex": "שנה",
        "morph": {
          "token": "בשנת",
          "pos": "NOUN",
          "feats": {
            "Gender": "Fem",
            "Number": "Sing"
          },
          "prefixes": [
            "ADP"
          ],
          "suffix": false
        }
      },
      {
        "token": "1948",
        "offsets": {
          "start": 5,
          "end": 9
        },
        "syntax": {
          "word": "1948",
          "dep_head_idx": 0,
          "dep_func": "compound:smixut",
          "dep_head": "בשנת"
        },
        "seg": [
          "1948"
        ],
        "lex": "1948",
        "morph": {
          "token": "1948",
          "pos": "NUM",
          "feats": {},
          "prefixes": [],
          "suffix": false
        }
      },
      {
        "token": "השלים",
        "offsets": {
          "start": 10,
          "end": 15
        },
        "syntax": {
          "word": "השלים",
          "dep_head_idx": -1,
          "dep_func": "root",
          "dep_head": "הומוריסטיים"
        },
        "seg": [
          "השלים"
        ],
        "lex": "השלים",
        "morph": {
          "token": "השלים",
          "pos": "VERB",
          "feats": {
            "Gender": "Masc",
            "Number": "Sing",
            "Person": "3",
            "Tense": "Past"
          },
          "prefixes": [],
          "suffix": false
        }
      },
      {
        "token": "אפרים",
        "offsets": {
          "start": 16,
          "end": 21
        },
        "syntax": {
          "word": "אפרים",
          "dep_head_idx": 2,
          "dep_func": "nsubj",
          "dep_head": "השלים"
        },
        "seg": [
          "אפרים"
        ],
        "lex": "אפרים",
        "morph": {
          "token": "אפרים",
          "pos": "PROPN",
          "feats": {},
          "prefixes": [],
          "suffix": false
        }
      },
      {
        "token": "קישון",
        "offsets": {
          "start": 22,
          "end": 27
        },
        "syntax": {
          "word": "קישון",
          "dep_head_idx": 3,
          "dep_func": "flat:name",
          "dep_head": "אפרים"
        },
        "seg": [
          "קישון"
        ],
        "lex": "קישון",
        "morph": {
          "token": "קישון",
          "pos": "PROPN",
          "feats": {},
          "prefixes": [],
          "suffix": false
        }
      },
      {
        "token": "את",
        "offsets": {
          "start": 28,
          "end": 30
        },
        "syntax": {
          "word": "את",
          "dep_head_idx": 6,
          "dep_func": "case:acc",
          "dep_head": "לימודיו"
        },
        "seg": [
          "את"
        ],
        "lex": "את",
        "morph": {
          "token": "את",
          "pos": "ADP",
          "feats": {},
          "prefixes": [],
          "suffix": false
        }
      },
      {
        "token": "לימודיו",
        "offsets": {
          "start": 31,
          "end": 38
        },
        "syntax": {
          "word": "לימודיו",
          "dep_head_idx": 2,
          "dep_func": "obj",
          "dep_head": "השלים"
        },
        "seg": [
          "לימודיו"
        ],
        "lex": "לימוד",
        "morph": {
          "token": "לימודיו",
          "pos": "NOUN",
          "feats": {
            "Gender": "Masc",
            "Number": "Plur"
          },
          "prefixes": [],
          "suffix": "ADP_PRON",
          "suffix_feats": {
            "Gender": "Masc",
            "Number": "Sing",
            "Person": "3"
          }
        }
      },
      {
        "token": "בפיסול",
        "offsets": {
          "start": 39,
          "end": 45
        },
        "syntax": {
          "word": "בפיסול",
          "dep_head_idx": 6,
          "dep_func": "nmod",
          "dep_head": "לימודיו"
        },
        "seg": [
          "ב",
          "פיסול"
        ],
        "lex": "פיסול",
        "morph": {
          "token": "בפיסול",
          "pos": "NOUN",
          "feats": {
            "Gender": "Masc",
            "Number": "Sing"
          },
          "prefixes": [
            "ADP"
          ],
          "suffix": false
        }
      },
      {
        "token": "מתכת",
        "offsets": {
          "start": 46,
          "end": 50
        },
        "syntax": {
          "word": "מתכת",
          "dep_head_idx": 7,
          "dep_func": "compound:smixut",
          "dep_head": "בפיסול"
        },
        "seg": [
          "מתכת"
        ],
        "lex": "מתכת",
        "morph": {
          "token": "מתכת",
          "pos": "NOUN",
          "feats": {
            "Gender": "Fem",
            "Number": "Sing"
          },
          "prefixes": [],
          "suffix": false
        }
      },
      {
        "token": "ובתולדות",
        "offsets": {
          "start": 51,
          "end": 59
        },
        "syntax": {
          "word": "ובתולדות",
          "dep_head_idx": 7,
          "dep_func": "conj",
          "dep_head": "בפיסול"
        },
        "seg": [
          "וב",
          "תולדות"
        ],
        "lex": "תולדה",
        "morph": {
          "token": "ובתולדות",
          "pos": "NOUN",
          "feats": {
            "Gender": "Fem",
            "Number": "Plur"
          },
          "prefixes": [
            "CCONJ",
            "ADP"
          ],
          "suffix": false
        }
      },
      {
        "token": "האמנות",
        "offsets": {
          "start": 60,
          "end": 66
        },
        "syntax": {
          "word": "האמנות",
          "dep_head_idx": 9,
          "dep_func": "compound:smixut",
          "dep_head": "ובתולדות"
        },
        "seg": [
          "ה",
          "אמנות"
        ],
        "lex": "אומנות",
        "morph": {
          "token": "האמנות",
          "pos": "NOUN",
          "feats": {
            "Gender": "Fem",
            "Number": "Sing"
          },
          "prefixes": [
            "DET"
          ],
          "suffix": false
        }
      },
      {
        "token": "והחל",
        "offsets": {
          "start": 67,
          "end": 71
        },
        "syntax": {
          "word": "והחל",
          "dep_head_idx": 2,
          "dep_func": "conj",
          "dep_head": "השלים"
        },
        "seg": [
          "ו",
          "החל"
        ],
        "lex": "החל",
        "morph": {
          "token": "והחל",
          "pos": "VERB",
          "feats": {
            "Gender": "Masc",
            "Number": "Sing",
            "Person": "3",
            "Tense": "Past"
          },
          "prefixes": [
            "CCONJ"
          ],
          "suffix": false
        }
      },
      {
        "token": "לפרסם",
        "offsets": {
          "start": 72,
          "end": 77
        },
        "syntax": {
          "word": "לפרסם",
          "dep_head_idx": 11,
          "dep_func": "xcomp",
          "dep_head": "והחל"
        },
        "seg": [
          "לפרסם"
        ],
        "lex": "פרסם",
        "morph": {
          "token": "לפרסם",
          "pos": "VERB",
          "feats": {},
          "prefixes": [],
          "suffix": false
        }
      },
      {
        "token": "מאמרים",
        "offsets": {
          "start": 78,
          "end": 84
        },
        "syntax": {
          "word": "מאמרים",
          "dep_head_idx": 12,
          "dep_func": "obj",
          "dep_head": "לפרסם"
        },
        "seg": [
          "מאמרים"
        ],
        "lex": "מאמר",
        "morph": {
          "token": "מאמרים",
          "pos": "NOUN",
          "feats": {
            "Gender": "Masc",
            "Number": "Plur"
          },
          "prefixes": [],
          "suffix": false
        }
      },
      {
        "token": "הומוריסטיים",
        "offsets": {
          "start": 85,
          "end": 96
        },
        "syntax": {
          "word": "הומוריסטיים",
          "dep_head_idx": 13,
          "dep_func": "amod",
          "dep_head": "מאמרים"
        },
        "seg": [
          "הומוריסטיים"
        ],
        "lex": "הומוריסטי",
        "morph": {
          "token": "הומוריסטיים",
          "pos": "ADJ",
          "feats": {
            "Gender": "Masc",
            "Number": "Plur"
          },
          "prefixes": [],
          "suffix": false
        }
      }
    ],
    "root_idx": 2,
    "ner_entities": [
      {
        "phrase": "אפרים קישון",
        "label": "PER",
        "start": 16,
        "end": 27,
        "token_start": 3,
        "token_end": 4
      }
    ]
  }
]
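
Since the JSON output is returned as plain Python data, it can be consumed directly. A minimal, illustrative sketch (reusing model, tokenizer, and sentence from the sample usage above) that prints each token's segmentation, lemma, part of speech, and dependency relation, followed by the recognized entities:

parsed = model.predict([sentence], tokenizer, output_style='json')

for sent in parsed:
    for tok in sent['tokens']:
        # per-token segmentation, lemma, part of speech, and dependency relation
        print(tok['token'], tok['seg'], tok['lex'], tok['morph']['pos'], tok['syntax']['dep_func'])
    for ent in sent['ner_entities']:
        # named entities with their label and character offsets
        print(ent['label'], ent['phrase'], ent['start'], ent['end'])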

You can also choose to get the response in UD format:

sentence = 'בשנת 1948 השלים אפרים קישון את לימודיו בפיסול מתכת ובתולדות האמנות והחל לפרסם מאמרים הומוריסטיים'
print(model.predict([sentence], tokenizer, output_style='ud')) 

Results:

[
  [
    "# sent_id = 1",
    "# text = בשנת 1948 השלים אפרים קישון את לימודיו בפיסול מתכת ובתולדות האמנות והחל לפרסם מאמרים הומוריסטיים",
    "1-2\tבשנת\t_\t_\t_\t_\t_\t_\t_\t_",
    "1\tב\tב\tADP\tADP\t_\t2\tcase\t_\t_",
    "2\tשנת\tשנה\tNOUN\tNOUN\tGender=Fem|Number=Sing\t4\tobl\t_\t_",
    "3\t1948\t1948\tNUM\tNUM\t\t2\tcompound:smixut\t_\t_",
    "4\tהשלים\tהשלים\tVERB\tVERB\tGender=Masc|Number=Sing|Person=3|Tense=Past\t0\troot\t_\t_",
    "5\tאפרים\tאפרים\tPROPN\tPROPN\t\t4\tnsubj\t_\t_",
    "6\tקישון\tקישון\tPROPN\tPROPN\t\t5\tflat:name\t_\t_",
    "7\tאת\tאת\tADP\tADP\t\t8\tcase:acc\t_\t_",
    "8-10\tלימודיו\t_\t_\t_\t_\t_\t_\t_\t_",
    "8\tלימוד_\tלימוד\tNOUN\tNOUN\tGender=Masc|Number=Plur\t4\tobj\t_\t_",
    "9\t_של_\tשל\tADP\tADP\t_\t10\tcase\t_\t_",
    "10\t_הוא\tהוא\tPRON\tPRON\tGender=Masc|Number=Sing|Person=3\t8\tnmod:poss\t_\t_",
    "11-12\tבפיסול\t_\t_\t_\t_\t_\t_\t_\t_",
    "11\tב\tב\tADP\tADP\t_\t12\tcase\t_\t_",
    "12\tפיסול\tפיסול\tNOUN\tNOUN\tGender=Masc|Number=Sing\t8\tnmod\t_\t_",
    "13\tמתכת\tמתכת\tNOUN\tNOUN\tGender=Fem|Number=Sing\t12\tcompound:smixut\t_\t_",
    "14-16\tובתולדות\t_\t_\t_\t_\t_\t_\t_\t_",
    "14\tו\tו\tCCONJ\tCCONJ\t_\t16\tcc\t_\t_",
    "15\tב\tב\tADP\tADP\t_\t16\tcase\t_\t_",
    "16\tתולדות\tתולדה\tNOUN\tNOUN\tGender=Fem|Number=Plur\t12\tconj\t_\t_",
    "17-18\tהאמנות\t_\t_\t_\t_\t_\t_\t_\t_",
    "17\tה\tה\tDET\tDET\t_\t18\tdet\t_\t_",
    "18\tאמנות\tאומנות\tNOUN\tNOUN\tGender=Fem|Number=Sing\t16\tcompound:smixut\t_\t_",
    "19-20\tוהחל\t_\t_\t_\t_\t_\t_\t_\t_",
    "19\tו\tו\tCCONJ\tCCONJ\t_\t20\tcc\t_\t_",
    "20\tהחל\tהחל\tVERB\tVERB\tGender=Masc|Number=Sing|Person=3|Tense=Past\t4\tconj\t_\t_",
    "21\tלפרסם\tפרסם\tVERB\tVERB\t\t20\txcomp\t_\t_",
    "22\tמאמרים\tמאמר\tNOUN\tNOUN\tGender=Masc|Number=Plur\t21\tobj\t_\t_",
    "23\tהומוריסטיים\tהומוריסטי\tADJ\tADJ\tGender=Masc|Number=Plur\t22\tamod\t_\t_"
  ]
]
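
Each sentence in the UD output is a list of CoNLL-U formatted lines, so it can be written straight to a standard .conllu file. A minimal sketch (the file name is illustrative; model, tokenizer, and sentence are reused from the sample usage above):

ud_sents = model.predict([sentence], tokenizer, output_style='ud')

with open('parsed.conllu', 'w', encoding='utf-8') as f:
    for sent_lines in ud_sents:
        f.write('\n'.join(sent_lines))
        f.write('\n\n')  # a blank line separates sentences in CoNLL-U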

Citation

If you use DictaBERT-parse in your research, please cite MRL Parsing Without Tears: The Case of Hebrew

BibTeX:

@misc{shmidman2024mrl,
      title={MRL Parsing Without Tears: The Case of Hebrew}, 
      author={Shaltiel Shmidman and Avi Shmidman and Moshe Koppel and Reut Tsarfaty},
      year={2024},
      eprint={2403.06970},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}

License


This work is licensed under a Creative Commons Attribution 4.0 International License.
