---
license: apache-2.0
base_model:
- KISTI-AI/Scideberta-full
library_name: transformers
tags:
- relation extraction
- nlp
model-index:
- name: iter-scierc-deberta-full
  results:
  - task:
      type: relation-extraction
    dataset:
      name: scierc
      type: scierc
    metrics:
    - name: F1
      type: f1
      value: 39.359
---

# ITER: Iterative Transformer-based Entity Recognition and Relation Extraction

This model checkpoint is part of the collection of models published alongside our paper ITER, [accepted at EMNLP 2024](https://aclanthology.org/2024.findings-emnlp.655/).
To ease reproducibility and enable open research, our source code has been published on [GitHub](https://github.com/fleonce/iter). This model achieved an F1 score of `39.359` on the `scierc` dataset.

### Using ITER in your code

First, install ITER in your preferred environment:

```shell
pip install git+https://github.com/fleonce/iter
```

To use our model, refer to the following code:

```python
from iter import ITERForRelationExtraction

model = ITERForRelationExtraction.from_pretrained("fleonce/iter-scierc-deberta-full")
tokenizer = model.tokenizer

encodings = tokenizer(
    "An art exhibit at the Hakawati Theatre in Arab east Jerusalem was a series of portraits of Palestinians killed in the rebellion .",
    return_tensors="pt",
)

generation_output = model.generate(
    encodings["input_ids"],
    attention_mask=encodings["attention_mask"],
)

# entities
print(generation_output.entities)

# relations between entities
print(generation_output.links)
```

A batched, GPU-ready variant of this example is sketched at the end of this card.

### Checkpoints

We publish checkpoints for the models performing best on the following datasets:

- **ACE05**:
  1. [fleonce/iter-ace05-deberta-large](https://huggingface.co/fleonce/iter-ace05-deberta-large)
- **CoNLL04**:
  1. [fleonce/iter-conll04-deberta-large](https://huggingface.co/fleonce/iter-conll04-deberta-large)
- **ADE**:
  1. [fleonce/iter-ade-deberta-large](https://huggingface.co/fleonce/iter-ade-deberta-large)
- **SciERC**:
  1. [fleonce/iter-scierc-deberta-large](https://huggingface.co/fleonce/iter-scierc-deberta-large)
  2. [fleonce/iter-scierc-scideberta-full](https://huggingface.co/fleonce/iter-scierc-scideberta-full)
- **CoNLL03**:
  1. [fleonce/iter-conll03-deberta-large](https://huggingface.co/fleonce/iter-conll03-deberta-large)
- **GENIA**:
  1. [fleonce/iter-genia-deberta-large](https://huggingface.co/fleonce/iter-genia-deberta-large)

### Reproducibility

For each dataset, we selected the best-performing checkpoint out of the 5 training runs we performed. This model was trained with the following hyperparameters:

- Seed: `3`
- Config: `scierc/d_ff_150`
- PyTorch `2.3.0` with CUDA `12.1` and precision `torch.bfloat16`
- GPU: `1 NVIDIA GeForce RTX 4090`

In our reproducibility tests, varying the GPU, the CUDA version, or the training precision resulted in slightly different final results.

To train this model, refer to the following command:

```shell
python3 train.py --dataset scierc/d_ff_150 --transformer KISTI-AI/Scideberta-full --use_bfloat16 --seed 3
```

### Citation

If you use ITER in your work, please cite our paper:

```bibtex
@inproceedings{citation}
```
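### Batched inference (sketch)

The usage example above processes a single sentence. The snippet below is a minimal sketch of batched inference on a GPU. Only `from_pretrained`, `model.tokenizer`, `generate`, `.entities`, and `.links` are taken from the example above; the `padding=True` argument and the `.to(device)` calls are assumptions, namely that `model.tokenizer` behaves like a standard Hugging Face tokenizer and that the model is an ordinary PyTorch module.

```python
import torch

from iter import ITERForRelationExtraction

# Same checkpoint as in the usage example above.
model = ITERForRelationExtraction.from_pretrained("fleonce/iter-scierc-deberta-full")
tokenizer = model.tokenizer

# Assumption: the model is a regular PyTorch module and can be moved to a GPU.
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)

sentences = [
    "An art exhibit at the Hakawati Theatre in Arab east Jerusalem was a series of portraits of Palestinians killed in the rebellion .",
    "We present a method for relation extraction based on a transformer encoder .",
]

# Assumption: model.tokenizer behaves like a standard Hugging Face tokenizer,
# so padding=True pads all sentences in the batch to a common length.
encodings = tokenizer(sentences, return_tensors="pt", padding=True)

generation_output = model.generate(
    encodings["input_ids"].to(device),
    attention_mask=encodings["attention_mask"].to(device),
)

# Entities and relations, as in the single-sentence example above.
print(generation_output.entities)
print(generation_output.links)
```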