ghh001 committed on
Commit
9190f8d
β€’
1 Parent(s): 6606b77

Update README.md

Files changed (1)
  1. README.md +200 -162
README.md CHANGED
@@ -8,62 +8,69 @@ language:
 ---
 
 
-# IEPile: Unearthing Large-Scale Schema-Based Information Extraction Corpus
-
 <p align="left">
 <b> English | <a href="https://huggingface.co/datasets/zjunlp/IEPILE/blob/main/README_ZH.md">Chinese</a> </b>
 </p>
 
 
-- [IEPile: Unearthing Large-Scale Schema-Based Information Extraction Corpus](#iepile-unearthing-large-scale-schema-based-information-extraction-corpus)
 - [🎯1.Introduction](#1introduction)
 - [📊2.Data](#2data)
-  - [2.1Construction of IEPILE](#21construction-of-iepile)
-  - [2.2Statistics of IEPILE](#22statistics-of-iepile)
-- [🚴3.Using IEPILE to Train Models](#3using-iepile-to-train-models)
   - [3.1Environment](#31environment)
-  - [3.2Download Data](#32download-data)
-  - [3.3Models](#33models)
   - [3.4LoRA Fine-tuning](#34lora-fine-tuning)
-    - [3.4.1Fine-tuning LLaMA2 with LoRA](#341fine-tuning-llama2-with-lora)
-    - [3.4.2LoRA Fine-tuning Baichuan2](#342lora-fine-tuning-baichuan2)
-    - [3.4.3LoRA Fine-tuning Other Models](#343lora-fine-tuning-other-models)
-  - [3.5Continued Model Training](#35continued-model-training)
-    - [3.5.1Training Data Conversion](#351training-data-conversion)
-    - [3.5.2Continued Training](#352continued-training)
-- [4.Prediction](#4prediction)
-  - [4.1Test Data Conversion](#41test-data-conversion)
-  - [4.2IE-Specific Model Prediction](#42ie-specific-model-prediction)
-  - [4.3Basic Model + LoRA Prediction](#43basic-model--lora-prediction)
-- [5.Evaluation](#5evaluation)
-- [6. Statement and License](#6-statement-and-license)
-- [7. Limitations](#7-limitations)
 
 
 ## 🎯1.Introduction
 
 
-**`IEPILE`** dataset download links: [Google Drive](https://drive.google.com/file/d/1jPdvXOTTxlAmHkn5XkeaaCFXQkYJk5Ng/view?usp=sharing) | [Hugging Face](https://huggingface.co/datasets/zjunlp/IEPILE)
 
 
 > Please be aware that the data contained in the dataset links provided above has already excluded any part related to the ACE2005 dataset. Should you require access to the unfiltered, complete dataset and have successfully obtained the necessary permissions, please do not hesitate to contact us via email at [email protected] or [email protected]. We will provide the complete dataset resources for your use.
 
 
-Model download links for **`LLaMA2-IEPILE`** | **`Baichuan2-IEPILE`** | **`KnowLM-IE-v2`**: [zjunlp/llama2-13b-iepile-lora](https://huggingface.co/zjunlp/llama2-13b-iepile-lora/tree/main) | [zjunlp/baichuan2-13b-iepile-lora](https://huggingface.co/zjunlp/baichuan2-13b-iepile-lora) | [zjunlp/KnowLM-IE-v2]()
-
-
-**Large Language Models (LLMs)** demonstrate remarkable potential across various domains, yet they exhibit a significant performance gap in **Information Extraction (IE)**. High-quality instruction data is key to enhancing the specific capabilities of LLMs, while current IE datasets tend to be small in scale, fragmented, and lacking in standardized schemas. To this end, we introduce **IEPILE**, a comprehensive bilingual (English and Chinese) IE instruction corpus containing approximately **0.32B** tokens. We construct IEPILE by collecting and cleaning 33 existing IE datasets, and introduce schema-based instruction generation to unearth a large-scale corpus. Experimental results on LLaMA and Baichuan demonstrate that IEPILE enhances the performance of LLMs on IE, especially zero-shot generalization. We open-source the resource and pre-trained models, hoping to provide valuable support to the NLP community.
-
 
 
 ![statistic](./assets/statistic.jpg)
 
 
-We collected a total of 15 English NER datasets, 3 Chinese NER datasets, 8 English RE datasets, 2 Chinese RE datasets, as well as 3 English EE datasets and 2 Chinese EE datasets. Figure 1 shows the statistics of these datasets, which cover a wide range of domains, including **general**, **medical**, **financial**, and more. We not only standardized the data format across tasks but also conducted a meticulous audit of each dataset, creating detailed **data records** that include quantity, domain, schema, and other important information.
 
 
-Based on **IEPILE**, we fine-tuned the `Baichuan2-13B-Chat` and `LLaMA2-13B-Chat` models using `LoRA`. The experimental results show that the fine-tuned `Baichuan2-IEPILE` and `LLaMA2-IEPILE` models not only achieve comparable results on fully supervised training sets but also see significant improvements in **zero-shot information extraction**.
 
 
 ![zero_en](./assets/zero_en.jpg)
@@ -86,13 +93,13 @@ Based on **IEPILE**, we fine-tuned the `Baichuan2-13B-Chat` and `LLaMA2-13B-Chat
 ## 📊2.Data
 
 
-### 2.1Construction of IEPILE
 
-We concentrate on schema-based IE, thus the construction of the schemas within the instructions is crucial: schemas reflect the specific extraction requirements and are dynamically variable. Previous approaches to building instructions from existing IE datasets often employ a rather coarse schema-processing strategy, using all schemas in a label set for instruction construction, which raises two potential issues:
 1. **Inconsistency in the number of schema queries within instruction between training and evaluation.** For example, the model's performance will decrease if it is trained on about 20 schema queries but tested with either 10 or 30, even if the training and evaluation schemas are similar in content.
 2. **Inadequate differentiation among schemas in the instructions.** For example, semantically similar schemas like "layoffs", "depart" and "dismissals" may present co-occurrence ambiguities that could confuse the LLMs. Such schemas should co-occur more frequently within the instruction.
 
-Therefore, we introduce the following solutions: 1) Hard Negative Schema; and 2) Batched Instruction Generation.
 
 
 ![iepile](./assets/iepile.jpg)
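The hard-negative idea (the collapsed details above define the final schema set as $L' = Pos\_L + Neg\_L$) can be sketched roughly as follows. This is an illustrative stand-in only: the dataset's actual construction selects semantically similar negatives, whereas this sketch samples them at random.

```python
import random

def build_schema_set(positive_schemas, label_set, k=2, seed=0):
    """Sketch of L' = Pos_L + Neg_L: the positive schemas plus k negative
    schemas drawn from the remaining labels. (Illustrative only; the real
    construction picks hard, i.e. semantically similar, negatives.)"""
    rng = random.Random(seed)
    candidates = [label for label in label_set if label not in positive_schemas]
    neg = rng.sample(candidates, min(k, len(candidates)))
    return list(positive_schemas) + neg

# Example: "layoffs" is the positive schema; confusable labels are candidates.
schema_set = build_schema_set(["layoffs"], ["layoffs", "depart", "dismissals", "found"], k=2)
```

The point of the negatives is that confusable schemas co-occur in one instruction, forcing the model to discriminate between them.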
@@ -114,52 +121,27 @@ Subsequently, we obtain the final schema set $L' = Pos\_L + Neg\_L$. We employ a
 </details>
 
 
-**Instruction Format**
-
-The **instruction** format of `IEPILE` adopts a JSON-like string structure, which is essentially a dictionary-type string composed of the following three main components:
-(1) **`'instruction'`**: the task description, outlining the execution goal of the instruction;
-(2) **`'schema'`**: a list of labels to be extracted, clearly indicating the key fields of the information to be extracted;
-(3) **`'input'`**: the source text from which information is extracted.
-
-Here is an example of an instruction for a NER task:
-```json
-{
-    "instruction": "You are an expert in named entity recognition. Please extract entities that match the schema definition from the input. Return an empty list if the entity type does not exist. Please respond in the format of a JSON string.",
-    "schema": ["location", "else", "organization", "person"],
-    "input": "The objective of the Basic Course on War is to provide for combatants of the EPR basic military knowledge for the armed conflict against the police and military apparatus of the bourgeoisie."
-}
-```
-
-Please note that the above dictionary must actually be serialized as a JSON string; it is shown here in dictionary form only for clarity.
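As the note says, what the model actually receives is the serialized string. A minimal sketch of that serialization (field contents taken from the NER example above):

```python
import json

# The dictionary from the NER example above; IEPILE stores it serialized.
record = {
    "instruction": "You are an expert in named entity recognition. Please extract entities that match the schema definition from the input. Return an empty list if the entity type does not exist. Please respond in the format of a JSON string.",
    "schema": ["location", "else", "organization", "person"],
    "input": "The objective of the Basic Course on War is to provide for combatants of the EPR basic military knowledge for the armed conflict against the police and military apparatus of the bourgeoisie.",
}

# Serialize to the JSON string that is fed to the model.
prompt = json.dumps(record, ensure_ascii=False)

# Round-tripping recovers the original dictionary.
assert json.loads(prompt) == record
```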
 
-<details>
-  <summary><b>More Tasks</b></summary>
-
-```json
-{
-    "instruction": "You are an expert in relationship extraction. Please extract relationship triples that match the schema definition from the input. Return an empty list for relationships that do not exist. Please respond in the format of a JSON string.",
-    "schema": ["children", "country capital", "country of administrative divisions", "company"],
-    "input": "Born on May 1 , 1927 , in Brichevo , Bessarabia in the present-day Republic of Moldova , Mr. Bertini emigrated to Palestine with his family as a child and pursued musical studies there , in Milan , and in Paris , where he worked with Nadia Boulanger and Arthur Honegger."
-}
-
-{
-    "instruction": "You are an expert in event extraction. Please extract events from the input that conform to the schema definition. Return an empty list for events that do not exist, and return NAN for arguments that do not exist. If an argument has multiple values, please return a list. Respond in the format of a JSON string.",
-    "schema": [{"event_type": "pardon", "trigger": true, "arguments": ["defendant"]}, {"event_type": "extradite", "trigger": true, "arguments": ["person", "agent", "destination", "origin"]}, {"event_type": "sue", "trigger": true, "arguments": ["place", "plaintiff"]}, {"event_type": "start organization", "trigger": true, "arguments": ["organization", "agent", "place"]}],
-    "input": "Ethical and legal issues in hiring Marinello"
-}
-```
-
-</details>
 
 
 The file [instruction.py](./ie2instruction/convert/utils/instruction.py) provides instructions for various tasks.
 
 
-
-
-### 2.2Statistics of IEPILE
-Based on the aforementioned methods, we obtain a high-quality information extraction instruction dataset, **`IEPILE`**. It contains over **2 million** instruction entries, each comprising `instruction` and `output` fields, which can be used directly for supervised fine-tuning of models. In terms of storage, IEPILE occupies about **3GB** of disk space and contains roughly **0.32B** tokens (using the Baichuan2 tokenizer).
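Statistics like the counts above can be tallied directly over records of this shape. A sketch under stated assumptions: the records below are toy stand-ins, and the whitespace split is only a crude proxy for the Baichuan2 tokenizer used in the paper.

```python
from collections import Counter

# Toy stand-ins for IEPILE records (the real files hold millions of these).
records = [
    {"task": "NER", "source": "CoNLL2003", "instruction": "{...}", "output": "{...}"},
    {"task": "NER", "source": "WikiANN", "instruction": "{...}", "output": "{...}"},
    {"task": "RE", "source": "NYT", "instruction": "{...}", "output": "{...}"},
]

task_counts = Counter(r["task"] for r in records)  # instances per task
approx_tokens = sum(len(r["instruction"].split()) + len(r["output"].split())
                    for r in records)              # crude whitespace token proxy
```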
-
 
 ```json
 {
@@ -170,8 +152,13 @@ Based on the aforementioned methods, we obtain a high-quality information extrac
 }
 ```
 
 <details>
-  <summary><b>More Tasks</b></summary>
 
 ```json
 {
@@ -192,57 +179,55 @@ Based on the aforementioned methods, we obtain a high-quality information extrac
 </details>
 
 
-Descriptions of the fields:
-
-|Field|Description|
-|:---:|:---:|
-|task|The task demonstrated by the instance, one of the five types (NER, RE, EE, EET, EEA).|
-|source|The dataset demonstrated by the instance.|
-|instruction|The instruction input to the model, processed into a JSON string by json.dumps; it includes the `"instruction"`, `"schema"`, and `"input"` fields.|
-|output|The model's output, formatted as the JSON string of a dictionary, where the key is the schema and the value is the extracted content.|
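Per the table above, both `instruction` and `output` are themselves JSON strings and must be decoded before use. A minimal sketch (record contents abridged and illustrative):

```python
import json

# An IEPILE-style record; the nested fields are stored as JSON strings.
record = {
    "task": "NER",
    "source": "CoNLL2003",
    "instruction": json.dumps({
        "instruction": "Please extract entities that match the schema definition from the input.",
        "schema": ["person", "location"],
        "input": "Robert Allenby ( Australia ) won at the first play-off hole.",
    }),
    "output": json.dumps({"person": ["Robert Allenby"], "location": ["Australia"]}),
}

inner = json.loads(record["instruction"])   # nested JSON string -> dict
extracted = json.loads(record["output"])    # schema -> extracted content
```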
 
 
-## 🚴3.Using IEPILE to Train Models
 
 ### 3.1Environment
 
 Before you begin, make sure to create an appropriate **virtual environment** following the instructions below:
 
 ```bash
-conda create -n IEPILE python=3.9   # Create a virtual environment
-conda activate IEPILE               # Activate the environment
 pip install -r requirements.txt     # Install dependencies
 ```
 
 
-### 3.2Download Data
 
-**`IEPILE`** dataset download links: [Google Drive](https://drive.google.com/file/d/1jPdvXOTTxlAmHkn5XkeaaCFXQkYJk5Ng/view?usp=sharing) | [Hugging Face](https://huggingface.co/datasets/zjunlp/IEPILE)
 
-```bash
-mkdir results
-mkdir lora
-mkdir data
 ```
 
-Data should be placed in the `./data` directory.
 
 
-### 3.3Models
 
-Here are some of the models supported by the code in this repository:
-["`llama`", "`alpaca`", "`vicuna`", "`zhixi`", "`falcon`", "`baichuan`", "`chatglm`", "`qwen`", "`moss`", "`openba`"]
 
-Model download links for **`LLaMA2-IEPILE`** | **`Baichuan2-IEPILE`** | **`KnowLM-IE-v2`**: [zjunlp/llama2-13b-iepile-lora](https://huggingface.co/zjunlp/llama2-13b-iepile-lora/tree/main) | [zjunlp/baichuan2-13b-iepile-lora](https://huggingface.co/zjunlp/baichuan2-13b-iepile-lora) | [zjunlp/KnowLM-IE-v2]()
 
 
-### 3.4LoRA Fine-tuning
 
 
-#### 3.4.1Fine-tuning LLaMA2 with LoRA
 
-> Important Note: All the commands below should be executed within the `IEPILE` directory. For example, to run the fine-tuning script, use `bash ft_scripts/fine_llama.bash`. Please ensure your current working directory is correct.
 
@@ -280,45 +265,43 @@ CUDA_VISIBLE_DEVICES="0,1,2,3,4,5,6,7" torchrun --nproc_per_node=8 --master_port
 --bf16
 ```
 
 
-* `--model_name`: Specifies the model name you wish to use. The current list of supported models includes: ["`llama`", "`alpaca`", "`vicuna`", "`zhixi`", "`falcon`", "`baichuan`", "`chatglm`", "`qwen`", "`moss`", "`openba`"]. Note that this parameter is distinct from `--model_name_or_path`.
-* `--model_name_or_path`: The path to the model weights. Please download the corresponding model from Hugging Face.
-* `--template`: The name of the template to use, e.g. `alpaca`, `baichuan`, `baichuan2`, `chatglm3`. Refer to [src/datamodule/template.py](./src/datamodule/template.py) for all supported template names; `alpaca` is the default.
-* `--train_file`, `--valid_file` (optional): The paths to the training and validation sets. If `valid_file` is not provided, `val_set_size` samples from `train_file` are held out as the validation set by default; you can change the size of the validation set by adjusting `val_set_size`. Note: the training, validation, and test files currently must be in JSON format.
-* `--output_dir`: The path for saving the weights after LoRA fine-tuning.
-* `--val_set_size`: The number of samples in the validation set; the default is 1000.
-* `--per_device_train_batch_size`, `--per_device_eval_batch_size`: The batch size per GPU device; 2-4 is recommended for an RTX 3090.
-* `--max_source_length`, `--max_target_length`, `--cutoff_len`: The maximum input length, maximum output length, and cutoff length. The cutoff length can simply be regarded as the maximum input length plus the maximum output length; set it appropriately for your needs and memory size.
 
 To learn more about parameter configuration, please refer to [src/utils/args](./src/args).
 
-The specific script for fine-tuning the LLaMA2 model can be found in [ft_scripts/fine_llama.bash](./ft_scripts/fine_llama.bash).
 
 
-#### 3.4.2LoRA Fine-tuning Baichuan2
 
-The specific script for fine-tuning the Baichuan2 model can be found in [ft_scripts/fine_baichuan.bash](./ft_scripts/fine_baichuan.bash).
 
 
-#### 3.4.3LoRA Fine-tuning Other Models
 
-To fine-tune other models, you only need to adjust the `--model_name` and `--template` parameters. For example, for the `alpaca` model, set `--model_name alpaca --template alpaca`; for the `chatglm3` model, set `--model_name chatglm --template chatglm3`.
 
 
-### 3.5Continued Model Training
 
-Although the `Baichuan2-IEPILE` and `LLaMA2-IEPILE` models have undergone extensive instruction fine-tuning on multiple general datasets and thus possess a degree of general information extraction capability, they may still exhibit limitations when processing data in specific domains (such as `law`, `education`, `science`, `telecommunications`). To address this, we recommend secondary training of these models on domain-specific data, which helps them adapt to the semantic and structural characteristics of the domain and significantly enhances their information extraction capability within it.
 
 
-#### 3.5.1Training Data Conversion
 
 First, it is necessary to **format the data** to include `instruction` and `output` fields. For this purpose, we provide the script [convert_func.py](./ie2instruction/convert_func.py), which can batch-convert data into a format that can be used directly by the model.
 
 
 > Before using the [convert_func.py](./ie2instruction/convert_func.py) script, please refer to the [data](./data) directory. It describes the data format required for each task: see `sample.json` for the format before conversion, `schema.json` for the organization of the schema, and `train.json` for the format after conversion.
 
 
 ```bash
 python ie2instruction/convert_func.py \
@@ -332,36 +315,75 @@ python ie2instruction/convert_func.py \
   --split train
 ```
 
-* `--language`: Supports two languages, `zh` (Chinese) and `en` (English); different instruction templates are used for each language.
-* `--task`: Currently supports five task types: ['RE', 'NER', 'EE', 'EET', 'EEA'].
-* `--split_num`: The maximum number of schemas in a single instruction. The default is 4; -1 means no splitting. Recommended values vary by task: NER: 6, RE: 4, EE: 4, EET: 4, EEA: 4.
-* `--random_sort`: Whether to randomly order the schemas in the instruction. The default is False, meaning schemas are sorted alphabetically.
-* `--split`: The type of data to construct, one of ['train', 'test']. `train` includes the converted `output`, while `test` includes the `label`.
 
-After conversion, the training data will have four fields: `task`, `source`, `instruction`, and `output`. Refer to [Statistics of IEPILE](./README.md#22statistics-of-iepile) for the format and function of each field.
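The effect of `--split_num` can be sketched as a simple chunking of the schema list, so that each generated instruction queries at most `split_num` schemas. This is a minimal illustration of the described behavior; the actual `convert_func.py` logic may differ in details.

```python
def split_schemas(schemas, split_num=4):
    """Chunk a schema list into groups of at most split_num; -1 means no splitting."""
    if split_num == -1:
        return [schemas]
    return [schemas[i:i + split_num] for i in range(0, len(schemas), split_num)]

# Five NER schemas with split_num=4 yield one instruction with four schemas
# and a second instruction with the remaining one.
chunks = split_schemas(["person", "organization", "location", "else", "misc"], 4)
```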
 
 
-#### 3.5.2Continued Training
 
-* If you are continuing training from the fine-tuned LoRA weights, simply set the `--checkpoint_dir` parameter to the path of the fine-tuned LoRA weights, for example `'zjunlp/llama2-13b-iepile-lora'`.
 
-* If you are continuing training from the fine-tuned model weights, simply set the `--model_name_or_path` parameter to the path of the fine-tuned model weights, for example `'zjunlp/KnowLM-IE-v2'`.
 
-The specific script for continuing training from the fine-tuned LoRA weights can be found in [ft_scripts/fine_continue.bash](./ft_scripts/fine_continue.bash).
 
 
-## 4.Prediction
 
-### 4.1Test Data Conversion
 
 
-First, it is necessary to **format the data** to include `instruction` and `label` fields. For this purpose, we provide the script [convert_func.py](./ie2instruction/convert_func.py), which can batch-convert data into a format that can be used directly by the model.
 
 
-> Before using the [convert_func.py](./ie2instruction/convert_func.py) script, please refer to the [data](./data) directory. It describes the data format required for each task: see `sample.json` for the format before conversion, `schema.json` for the organization of the schema, and `train.json` for the format after conversion.
 
 
 ```bash
 python ie2instruction/convert_func.py \
@@ -374,28 +396,24 @@ python ie2instruction/convert_func.py \
   --split test
 ```
 
 
-* `--language`: Supports two languages, `zh` (Chinese) and `en` (English); different instruction templates are used for each language.
-* `--task`: Currently supports five task types: ['RE', 'NER', 'EE', 'EET', 'EEA'].
-* `--split_num`: The maximum number of schemas in a single instruction. The default is 4; -1 means no splitting. Recommended values vary by task: NER: 6, RE: 4, EE: 4, EET: 4, EEA: 4.
-* `--random_sort`: Whether to randomly order the schemas in the instruction. The default is False, meaning schemas are sorted alphabetically.
-* `--split`: The type of data to construct, one of ['train', 'test']. `train` includes the converted `output`, while `test` includes the `label`.
 
-After conversion, the test data will have three fields: `id`, `instruction`, and `label`.
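The converted test shape can be sketched as follows. Only the three field names come from the text above; the `id` value and the record contents are hypothetical.

```python
import json

# Hypothetical converted test record; only the field names are documented.
test_record = {
    "id": "example-0001",  # hypothetical identifier
    "instruction": json.dumps({
        "instruction": "Please extract entities that match the schema definition from the input.",
        "schema": ["person"],
        "input": "Robert Allenby won the play-off.",
    }),
    "label": json.dumps({"person": ["Robert Allenby"]}),
}
```

Unlike training data, the gold answer lives in `label` rather than `output`, which is what the evaluation script compares predictions against.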
 
 
-### 4.2IE-Specific Model Prediction
 
 ```bash
 CUDA_VISIBLE_DEVICES=0 python src/inference.py \
   --stage sft \
-  --model_name_or_path 'zjunlp/KnowLM-IE-v2' \
-  --model_name 'baichuan' \
-  --template 'baichuan2' \
   --do_predict \
   --input_file 'data/input.json' \
-  --output_file 'results/KnowLM-IE-v2_output.json' \
   --output_dir 'lora/test' \
   --predict_with_generate \
   --max_source_length 512 \
@@ -403,25 +421,27 @@ CUDA_VISIBLE_DEVICES=0 python src/inference.py \
   --max_new_tokens 300
 ```
 
-* `--model_name`, `--template`, `--bf16`: These should be consistent with the settings used during training.
-* `--output_dir`: Can be set to any path; it is not meaningful for inference.
-* `--input_file`, `--output_file`: The paths to the input test file and the prediction output file.
-* `--max_source_length`, `--max_new_tokens`: The maximum input and output lengths; adjust them according to device capability.
 
 
-### 4.3Basic Model + LoRA Prediction
 
 ```bash
 CUDA_VISIBLE_DEVICES=0 python src/inference.py \
   --stage sft \
-  --model_name_or_path 'models/llama2-13B-Chat' \
-  --checkpoint_dir 'zjunlp/llama2-13b-iepile-lora' \
-  --model_name 'llama' \
-  --template 'llama2' \
   --do_predict \
   --input_file 'data/input.json' \
-  --output_file 'results/llama2-13b-iepile-lora_output.json' \
-  --finetuning_type lora \
   --output_dir 'lora/test' \
   --predict_with_generate \
   --max_source_length 512 \
@@ -429,11 +449,12 @@ CUDA_VISIBLE_DEVICES=0 python src/inference.py \
   --max_new_tokens 300
 ```
 
-* `--checkpoint_dir`: The path to the trained LoRA weights.
 
 
-## 5.Evaluation
 
 We provide scripts for evaluating the F1 scores for various tasks.
 
@@ -443,20 +464,37 @@ python ie2instruction/eval_func.py \
   --task NER
 ```
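The evaluation reports F1 scores. A minimal sketch of micro-F1 over extracted items, e.g. (mention, type) pairs for NER; the repository's `eval_func.py` is the authoritative implementation, and this stand-alone version is only for intuition.

```python
def micro_f1(gold, pred):
    """Micro F1 between gold and predicted item sets (exact match)."""
    gold, pred = set(gold), set(pred)
    tp = len(gold & pred)                                # true positives
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# One of two gold entities recovered: precision 1.0, recall 0.5.
score = micro_f1([("Australia", "location"), ("Spain", "location")],
                 [("Australia", "location")])
```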
 
-* `--task`: Currently supports five task types: ['RE', 'NER', 'EE', 'EET', 'EEA'].
 
 
-# 6. Statement and License
 
 We believe that annotated data contains the wisdom of humanity, and its existence is to promote the benefit of all humankind and help enhance our quality of life. We strongly urge all users not to use our corpus for any actions that may harm national or public security or violate legal regulations.
 We have done our best to ensure the quality and legality of the data provided. However, we also recognize that despite our efforts, there may still be some unforeseen issues, such as concerns about data protection and risks and problems caused by data misuse. We will not be responsible for these potential problems.
-For original data that is subject to usage permissions stricter than the [CC BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/deed.en) agreement, IEPILE will adhere to those stricter terms. In all other cases, our operations will be based on the [CC BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/deed.en) license agreement.
 
 
-# 7. Limitations
 
 From the data perspective, our study primarily focuses on schema-based IE, which limits our ability to generalize to human instructions that do not follow our specific format requirements.
 Additionally, we do not explore the field of Open Information Extraction (Open IE); however, if we removed the schema constraints, our dataset would be suitable for Open IE scenarios.
-Besides, IEPILE is confined to data in English and Chinese; in the future, we hope to include data in more languages.
 
-From the model perspective, due to computational resource limitations, we assessed only two models, Baichuan and LLaMA, along with some baseline models. Our dataset can be applied to any other large language models (LLMs), such as Qwen and ChatGLM.
 
 ---
 
 
 <p align="left">
 <b> English | <a href="https://huggingface.co/datasets/zjunlp/IEPILE/blob/main/README_ZH.md">Chinese</a> </b>
 </p>
 
 
+# IEPile: A Large-Scale Information Extraction Corpus
+
+This is the official repository for [IEPile: Unearthing Large-Scale Schema-Based Information Extraction Corpus](https://arxiv.org/abs/2402.14710).
+
+[**Datasets**](https://huggingface.co/datasets/zjunlp/iepile) |
+[**Paper**](https://huggingface.co/papers/2402.14710) |
+[**Usage**](./README.md#3using-iepile-to-train-models) |
+[**Limitations**](./README.md#8limitations) |
+[**Statement & License**](./README.md#7statement-and-license) |
+[**Citation**](./README.md#9cite)
+
+> Please note that IEPile may undergo **updates** (we will announce each release); we recommend using the most current version.
+
+
+- [IEPile: A Large-Scale Information Extraction Corpus](#iepile-a-large-scale-information-extraction-corpus)
 - [🎯1.Introduction](#1introduction)
 - [📊2.Data](#2data)
+  - [2.1Construction of IEPile](#21construction-of-iepile)
+  - [2.2Data Format of IEPile](#22data-format-of-iepile)
+- [🚴3.Using IEPile to Train Models](#3using-iepile-to-train-models)
   - [3.1Environment](#31environment)
+  - [3.2Download Data and Models](#32download-data-and-models)
   - [3.4LoRA Fine-tuning](#34lora-fine-tuning)
+- [4.Continued Training with In-Domain Data](#4continued-training-with-in-domain-data)
+  - [4.1Training Data Conversion](#41training-data-conversion)
+  - [4.2Continued Training](#42continued-training)
+- [5.Prediction](#5prediction)
+  - [5.1Test Data Conversion](#51test-data-conversion)
+  - [5.2Basic Model + LoRA Prediction](#52basic-model--lora-prediction)
+  - [5.3IE-Specific Model Prediction](#53ie-specific-model-prediction)
+- [6.Evaluation](#6evaluation)
+- [7.Statement and License](#7statement-and-license)
+- [8.Limitations](#8limitations)
+- [9.Cite](#9cite)
+- [10.Acknowledgements](#10acknowledgements)
 
 
 ## 🎯1.Introduction
 
 
+**`IEPile`** dataset download links: [Google Drive](https://drive.google.com/file/d/1jPdvXOTTxlAmHkn5XkeaaCFXQkYJk5Ng/view?usp=sharing) | [Hugging Face](https://huggingface.co/datasets/zjunlp/iepile)
 
 
 > Please be aware that the data contained in the dataset links provided above has already excluded any part related to the ACE2005 dataset. Should you require access to the unfiltered, complete dataset and have successfully obtained the necessary permissions, please do not hesitate to contact us via email at [email protected] or [email protected]. We will provide the complete dataset resources for your use.
 
 
+Model download links for **`LLaMA2-IEPile`** | **`Baichuan2-IEPile`** | **`KnowLM-IE-v2` (based on Baichuan2)**: [zjunlp/llama2-13b-iepile-lora](https://huggingface.co/zjunlp/llama2-13b-iepile-lora/tree/main) | [zjunlp/baichuan2-13b-iepile-lora](https://huggingface.co/zjunlp/baichuan2-13b-iepile-lora) | [zjunlp/KnowLM-IE-v2]()
 
 
 ![statistic](./assets/statistic.jpg)
 
 
+We have meticulously collected and cleaned existing Information Extraction (IE) datasets, integrating a total of 26 English IE datasets and 7 Chinese IE datasets. As shown in Figure 1, these datasets cover multiple domains, including **general**, **medical**, and **financial**.
 
+In this study, we adopt the proposed `schema-based batched instruction generation` method to create a large-scale, high-quality IE fine-tuning dataset named **IEPile**, containing approximately `0.32B` tokens.
 
+Based on **IEPile**, we fine-tuned the `Baichuan2-13B-Chat` and `LLaMA2-13B-Chat` models using the `LoRA` technique. Experiments demonstrate that the fine-tuned `Baichuan2-IEPile` and `LLaMA2-IEPile` models perform remarkably well on fully supervised training sets and achieve significant improvements on **zero-shot information extraction tasks**.
 
 
 ![zero_en](./assets/zero_en.jpg)
 
 ## 📊2.Data
 
 
+### 2.1Construction of IEPile
 
+We concentrate on instruction-based IE, thus the construction of the schemas within the instructions is crucial: schemas reflect the specific extraction requirements and are dynamically variable. Previous approaches to building instructions from existing IE datasets often employ a rather coarse schema-processing strategy, using all schemas in a label set for instruction construction, which raises two potential issues:
 1. **Inconsistency in the number of schema queries within instruction between training and evaluation.** For example, the model's performance will decrease if it is trained on about 20 schema queries but tested with either 10 or 30, even if the training and evaluation schemas are similar in content.
 2. **Inadequate differentiation among schemas in the instructions.** For example, semantically similar schemas like "layoffs", "depart" and "dismissals" may present co-occurrence ambiguities that could confuse the LLMs. Such schemas should co-occur more frequently within the instruction.
 
+Therefore, we introduce the following solutions: 1) Hard Negative Schema and 2) Batched Instruction Generation.
 
 
 ![iepile](./assets/iepile.jpg)
 
 </details>
 
 
+### 2.2Data Format of IEPile
 
+Each instance in `IEPile` contains four fields: `task`, `source`, `instruction`, and `output`. Below are the explanations for each field:
 
+| Field | Description |
+| :---: | :---: |
+| task | The task to which the instance belongs, one of the five types (`NER`, `RE`, `EE`, `EET`, `EEA`). |
+| source | The dataset to which the instance belongs. |
+| instruction | The instruction input to the model, processed into a JSON string via json.dumps; it contains three fields: `"instruction"`, `"schema"`, and `"input"`. |
+| output | The output, formatted as the JSON string of a dictionary, where the key is the schema and the value is the extracted content. |
 
+The **instruction** in `IEPile` adopts a JSON-like string structure, essentially a dictionary-type string composed of the following three main components:
+(1) **`'instruction'`**: the task description, outlining the task to be performed (one of `NER`, `RE`, `EE`, `EET`, `EEA`);
+(2) **`'schema'`**: a list of schemas to be extracted (`entity types`, `relation types`, `event types`);
+(3) **`'input'`**: the text from which information is to be extracted.
 
 
 The file [instruction.py](./ie2instruction/convert/utils/instruction.py) provides instructions for various tasks.
 
+Below is a **data example**:
 
 ```json
 {
 }
 ```
 
+The data instance belongs to the `NER` task and comes from the `CoNLL2003` dataset. The schema list to be extracted is ["`person`", "`organization`", "`else`", "`location`"], and the text to extract from is "*284 Robert Allenby ( Australia ) 69 71 71 73 , Miguel Angel Martin ( Spain ) 75 70 71 68 ( Allenby won at first play-off hole )*". The output is `{"person": ["Robert Allenby", "Allenby", "Miguel Angel Martin"], "organization": [], "else": [], "location": ["Australia", "Spain"]}`.
+
+> Note that the order of schemas in the output is consistent with the order in the instruction.
158
+
159
+

<details>
  <summary><b>More Task Examples</b></summary>

```json
{
}
```

</details>


## 🚴3.Using IEPile to Train Models
 

### 3.1Environment

Before you begin, make sure to create an appropriate **virtual environment** following the instructions below:

```bash
conda create -n IEPile python=3.9   # Create a virtual environment
conda activate IEPile               # Activate the environment
pip install -r requirements.txt    # Install dependencies
```

### 3.2Download Data and Models

**`IEPile`** dataset download links: [Google Drive](https://drive.google.com/file/d/1jPdvXOTTxlAmHkn5XkeaaCFXQkYJk5Ng/view?usp=sharing) | [Hugging Face](https://huggingface.co/datasets/zjunlp/IEPile)

```
IEPile
├── train.json    # Training set
└── dev.json      # Validation set
```

Here are some of the models supported by the code in this repository:
[[llama](https://huggingface.co/meta-llama), [alpaca](https://github.com/tloen/alpaca-lora), [vicuna](https://huggingface.co/lmsys), [zhixi](https://github.com/zjunlp/KnowLM), [falcon](https://huggingface.co/tiiuae), [baichuan](https://huggingface.co/baichuan-inc), [chatglm](https://huggingface.co/THUDM), [qwen](https://huggingface.co/Qwen), [moss](https://huggingface.co/fnlp), [openba](https://huggingface.co/OpenBA)]

Model download links for **`LLaMA2-IEPile`** | **`Baichuan2-IEPile`** | **`KnowLM-IE-v2`**: [zjunlp/llama2-13b-IEPile-lora](https://huggingface.co/zjunlp/llama2-13b-IEPile-lora/tree/main) | [zjunlp/baichuan2-13b-IEPile-lora](https://huggingface.co/zjunlp/baichuan2-13b-IEPile-lora) | [zjunlp/KnowLM-IE-v2]()

**`LLaMA2-IEPile`** and **`Baichuan2-IEPile`** are the two models presented in the IEPile paper, obtained by LoRA fine-tuning of `LLaMA2-13B-Chat` and `Baichuan2-13B-Chat`, respectively.

```bash
mkdir data      # Put data here
mkdir models    # Put base models here
mkdir results   # Put prediction results here
mkdir lora      # Put LoRA fine-tuning results here
```

Data should be placed in the `./data` directory.

### 3.4LoRA Fine-tuning

> Important note: all the commands below should be executed from within the `IEPile` directory. For example, to run the fine-tuning script, use: `bash ft_scripts/fine_llama.bash`. Please make sure your current working directory is correct.
 

```bash
  --bf16
```

* `model_name`: Specifies the **name of the model architecture** you want to use (the 7B, 13B, Base, and Chat variants share the same architecture). Currently supported models: ["`llama`", "`alpaca`", "`vicuna`", "`zhixi`", "`falcon`", "`baichuan`", "`chatglm`", "`qwen`", "`moss`", "`openba`"]. **Please note** that this parameter is distinct from `--model_name_or_path`.
* `model_name_or_path`: Model path; please download the corresponding model from [HuggingFace](https://huggingface.co/models).
* `template`: The **name of the template** used, such as `alpaca`, `baichuan`, `baichuan2`, `chatglm3`, etc. Refer to [src/datamodule/template.py](./src/datamodule/template.py) for all supported template names. The default is the `alpaca` template. **For `Chat` versions of models, use the matching template; `Base` versions can default to `alpaca`.**
* `train_file`, `valid_file (optional)`: The **file paths** of the training and validation sets. Note: currently only the **JSON format** is supported.
* `output_dir`: The **path for saving the weight parameters** after LoRA fine-tuning.
* `val_set_size`: The number of samples in the **validation set**; the default is 1000.
* `per_device_train_batch_size`, `per_device_eval_batch_size`: The `batch_size` on each GPU device; adjust according to memory size.
* `max_source_length`, `max_target_length`, `cutoff_len`: The maximum input and output lengths and the cutoff length; the cutoff length can simply be regarded as maximum input length + maximum output length. Set appropriate values according to your needs and memory size.
* `deepspeed`: Remove this option if your device resources are insufficient.

> Quantization can be performed by setting `bits` to 8 or 4.
 
 
 
 
 
 
 

To learn more about parameter configuration, please refer to [src/utils/args](./src/args).

The specific script for fine-tuning the `LLaMA2-13B-Chat` model can be found in [ft_scripts/fine_llama.bash](./ft_scripts/fine_llama.bash).

The specific script for fine-tuning the `Baichuan2-13B-Chat` model can be found in [ft_scripts/fine_baichuan.bash](./ft_scripts/fine_baichuan.bash).

## 4.Continued Training with In-Domain Data

Although `Baichuan2-IEPile` and `LLaMA2-IEPile` have undergone extensive instruction fine-tuning on multiple general datasets and therefore possess a degree of **general information extraction capability**, they may still show limitations when processing data in **specific domains** (such as `law`, `education`, `science`, `telecommunications`). To address this, we recommend **secondary training** of these models on domain-specific datasets. This helps the models adapt to the semantic and structural characteristics of the target domain and significantly enhances their **information extraction capability within that domain**.

### 4.1Training Data Conversion

First, the data needs to be **formatted** to include `instruction` and `output` fields. For this purpose, we provide the script [convert_func.py](./ie2instruction/convert_func.py), which can batch-convert data into a format directly usable by the model.

> Before using the [convert_func.py](./ie2instruction/convert_func.py) script, please refer to the [data](./data) directory, which provides detailed instructions on the data format required for each task. Refer to `sample.json` for the format of the data before conversion, `schema.json` for the organization of the schema, and `train.json` for the data format after conversion.

> Additionally, you can directly use the bilingual (Chinese and English) information extraction dataset [zjunlp/InstructIE](https://huggingface.co/datasets/zjunlp/InstructIE), which covers 12 themes such as characters, vehicles, works of art, natural science, man-made objects, and astronomical objects.
 

```bash
python ie2instruction/convert_func.py \
    --split train
```

* `language`: Supports two languages, `zh` (Chinese) and `en` (English); different instruction templates are used for each language.
* `task`: Currently supports five task types: ['`RE`', '`NER`', '`EE`', '`EET`', '`EEA`'].
* `split_num`: Defines the maximum number of schemas that can be included in a single instruction. The default value is 4; setting it to -1 means no splitting is done. The recommended number of splits varies by task: **6 for NER; 4 for RE, EE, EET, and EEA**.
* `random_sort`: Whether to randomize the order of schemas in the instructions. The default is False, meaning schemas are sorted alphabetically.
* `split`: Specifies the dataset type, either `train` or `test`.

The converted training data will contain four fields: `task`, `source`, `instruction`, `output`.
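The splitting behavior controlled by `split_num` can be sketched as follows (an illustrative reimplementation; `split_schemas` is a hypothetical helper, assuming the converter chunks the schema list in order):

```python
def split_schemas(schemas, split_num=4):
    """Chunk a schema list so that each instruction holds at most
    `split_num` schemas; -1 means no splitting (hypothetical sketch)."""
    if split_num == -1:
        return [schemas]
    return [schemas[i:i + split_num]
            for i in range(0, len(schemas), split_num)]

chunks = split_schemas(
    ["person", "organization", "else", "location", "misc", "title"],
    split_num=4)
print(chunks)  # [['person', 'organization', 'else', 'location'], ['misc', 'title']]
```

With `split_num=4`, a six-schema NER task would thus yield two instructions over the same input text.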

### 4.2Continued Training
 
 

```bash
output_dir='lora/llama2-13b-chat-v1-continue'
mkdir -p ${output_dir}
CUDA_VISIBLE_DEVICES="0,1,2,3,4,5,6,7" torchrun --nproc_per_node=8 --master_port=1287 src/test_finetune.py \
    --do_train --do_eval \
    --overwrite_output_dir \
    --model_name_or_path 'models/llama2-13B-Chat' \
    --checkpoint_dir 'zjunlp/llama2-13b-iepile-lora' \
    --stage 'sft' \
    --model_name 'llama' \
    --template 'llama2' \
    --train_file 'data/train.json' \
    --valid_file 'data/dev.json' \
    --output_dir=${output_dir} \
    --per_device_train_batch_size 24 \
    --per_device_eval_batch_size 24 \
    --gradient_accumulation_steps 4 \
    --preprocessing_num_workers 16 \
    --num_train_epochs 10 \
    --learning_rate 5e-5 \
    --max_grad_norm 0.5 \
    --optim "adamw_torch" \
    --max_source_length 400 \
    --cutoff_len 700 \
    --max_target_length 300 \
    --report_to tensorboard \
    --evaluation_strategy "epoch" \
    --save_strategy "epoch" \
    --save_total_limit 10 \
    --lora_r 64 \
    --lora_alpha 64 \
    --lora_dropout 0.05 \
    --bf16
```

* To continue training from previously fine-tuned LoRA weights, simply point the `--checkpoint_dir` parameter to the path of the LoRA weights, for example `'zjunlp/llama2-13b-iepile-lora'`.

> Quantization can be performed by setting `bits` to 8 or 4.

> Please note that when using **`LLaMA2-IEPile`** or **`Baichuan2-IEPile`**, keep both `lora_r` and `lora_alpha` at 64; we do not provide recommended settings for other parameters.

* To continue training from fine-tuned full model weights, simply set the `--model_name_or_path` parameter to the path of the weights, such as `'zjunlp/KnowLM-IE-v2'`, without setting `--checkpoint_dir`.

The script can be found at [ft_scripts/fine_continue.bash](./ft_scripts/fine_continue.bash).

## 5.Prediction

### 5.1Test Data Conversion

Before converting test data, please visit the [data](./data) directory to understand the data structure required for each task: 1) for the input data format, see `sample.json`; 2) for the schema format, refer to `schema.json`; 3) for the format of the converted data, refer to `train.json`. **Unlike training data, test data input does not need to include the annotation fields (`entity`, `relation`, `event`)**.
 

```bash
python ie2instruction/convert_func.py \
    --split test
```

When setting `split` to **test**, choose the number of schemas according to the task type: **6 is recommended for NER; 4 for RE, EE, EET, and EEA**. The converted test data will contain five fields: `id`, `task`, `source`, `instruction`, `label`.

The `label` field is used for subsequent evaluation. If the input data lacks the annotation fields (`entity`, `relation`, `event`), the converted test data will not contain the `label` field, which suits scenarios where no original annotated data is available.
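Downstream, the `id` field lets predictions be aligned with their gold `label` strings. A hypothetical sketch (not the repository's evaluation code), with invented sample instances:

```python
import json

# Hypothetical converted test instances; `label` appears only when the
# source data carried annotation fields (entity / relation / event).
test_data = [
    {"id": "0", "task": "NER", "source": "CoNLL2003",
     "instruction": "...", "label": '{"person": ["Allenby"]}'},
    {"id": "1", "task": "NER", "source": "CoNLL2003",
     "instruction": "..."},                      # unannotated: no label
]
# Hypothetical model predictions keyed by instance id.
predictions = {"0": '{"person": ["Allenby"]}', "1": '{"person": []}'}

# Only instances that kept a `label` field can be scored.
scorable = [(json.loads(predictions[ex["id"]]), json.loads(ex["label"]))
            for ex in test_data if "label" in ex]
print(len(scorable))  # 1
```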
 
 
 
 

### 5.2Basic Model + LoRA Prediction
 
 

```bash
CUDA_VISIBLE_DEVICES=0 python src/inference.py \
    --stage sft \
    --model_name_or_path 'models/llama2-13B-Chat' \
    --checkpoint_dir 'zjunlp/llama2-13b-IEPile-lora' \
    --model_name 'llama' \
    --template 'llama2' \
    --do_predict \
    --input_file 'data/input.json' \
    --output_file 'results/llama2-13b-IEPile-lora_output.json' \
    --finetuning_type lora \
    --output_dir 'lora/test' \
    --predict_with_generate \
    --max_source_length 512 \
    --max_new_tokens 300
```

* During inference, `model_name`, `template`, and `bf16` must match the settings used during training.
* `model_name_or_path`: Specify the path to the base model being used; it must match the corresponding LoRA model.
* `checkpoint_dir`: The path to the LoRA weight files.
* `output_dir`: This parameter has no effect during inference; any path can be specified.
* `input_file`, `output_file`: The input path of the test file and the output path of the prediction results, respectively.
* `max_source_length`, `max_new_tokens`: The maximum input length and the number of new tokens to generate; adjust according to device performance.

> Quantization can be performed by setting `bits` to 8 or 4.
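Since the model's prediction is itself a JSON string, a little defensive parsing helps when post-processing the `output_file`; `parse_output` below is a hypothetical helper, not part of the repository:

```python
import json

def parse_output(text):
    """Parse a model's JSON-string output into a dict; return {} when
    the generation is not valid JSON (e.g. a truncated generation)."""
    try:
        result = json.loads(text)
    except json.JSONDecodeError:
        return {}
    return result if isinstance(result, dict) else {}

good = parse_output('{"person": ["Robert Allenby"], "location": ["Australia"]}')
bad = parse_output('{"person": ["Robert Allenby"')  # truncated generation
print(good["person"], bad)  # ['Robert Allenby'] {}
```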

### 5.3IE-Specific Model Prediction

```bash
CUDA_VISIBLE_DEVICES=0 python src/inference.py \
    --stage sft \
    --model_name_or_path 'zjunlp/KnowLM-IE-v2' \
    --model_name 'baichuan' \
    --template 'baichuan2' \
    --do_predict \
    --input_file 'data/input.json' \
    --output_file 'results/KnowLM-IE-v2_output.json' \
    --output_dir 'lora/test' \
    --predict_with_generate \
    --max_source_length 512 \
    --max_new_tokens 300
```

* `model_name_or_path`: The path to the weights of the model specialized for Information Extraction (IE).

> Quantization can be performed by setting `bits` to 8 or 4.

## 6.Evaluation

  We provide scripts for evaluating the F1 scores for various tasks.

```bash
  --task NER
```

* `task`: Currently supports five task types: ['`RE`', '`NER`', '`EE`', '`EET`', '`EEA`'].
* You can set `sort_by` to `source` to calculate the F1 scores on each dataset separately.
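The metric itself can be sketched as follows (a minimal span-level F1 illustration over (type, mention) pairs, not the repository's evaluation script):

```python
def span_f1(pred, gold):
    """Micro F1 over (type, mention) pairs for one instance (sketch)."""
    pred_set, gold_set = set(pred), set(gold)
    tp = len(pred_set & gold_set)                      # true positives
    precision = tp / len(pred_set) if pred_set else 0.0
    recall = tp / len(gold_set) if gold_set else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

score = span_f1(
    [("person", "Allenby")],
    [("person", "Allenby"), ("location", "Spain")])
print(round(score, 3))  # 0.667
```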
 

## 7.Statement and License

We believe that annotated data contains the wisdom of humanity, and its existence is meant to promote the benefit of all humankind and help enhance our quality of life. We strongly urge all users not to use our corpus for any actions that may harm national or public security or violate legal regulations.
We have done our best to ensure the quality and legality of the data provided. However, we also recognize that, despite our efforts, unforeseen issues may still arise, such as concerns about data protection and risks caused by data misuse. We will not be responsible for these potential problems.
For original data that is subject to usage permissions stricter than the [CC BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/deed.en) agreement, IEPile will adhere to those stricter terms. In all other cases, our operations will be based on the [CC BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/deed.en) license agreement.
 

## 8.Limitations

From the data perspective, our study primarily focuses on schema-based IE, which limits our ability to generalize to human instructions that do not follow our specific format requirements.
Additionally, we do not explore the field of Open Information Extraction (Open IE); however, if we remove the schema constraints, our dataset would be suitable for Open IE scenarios.
Besides, IEPile is confined to data in English and Chinese; in the future, we hope to include data in more languages.

From the model perspective, due to computational resource limitations, our research only assessed two models, Baichuan and LLaMA, along with some baseline models. Our dataset can be applied to any other large language models (LLMs), such as Qwen, ChatGLM, and Gemma.

## 9.Cite

If you use IEPile or the code, please cite the paper:
## 10.Acknowledgements

We are very grateful for the inspiration provided by the [MathPile](mathpile) and [KnowledgePile](https://huggingface.co/datasets/Query-of-CC/Knowledge_Pile) projects. Special thanks are due to the builders and maintainers of the following datasets: [AnatEM](https://doi.org/10.1093/BIOINFORMATICS/BTT580), [BC2GM](https://link.springer.com/chapter/10.1007/978-3-030-68763-2_48), [BC4CHEMD](https://link.springer.com/chapter/10.1007/978-3-030-68763-2_48), [NCBI-Disease](https://linkinghub.elsevier.com/retrieve/pii/S1532046413001974), [BC5CDR](https://openreview.net/pdf?id=9EAQVEINuum), [HarveyNER](https://aclanthology.org/2022.naacl-main.243/), [CoNLL2003](https://aclanthology.org/W03-0419/), [GENIA](https://pubmed.ncbi.nlm.nih.gov/12855455/), [ACE2005](https://catalog.ldc.upenn.edu/LDC2006T06), [MIT Restaurant](https://ieeexplore.ieee.org/document/6639301), [MIT Movie](https://ieeexplore.ieee.org/document/6639301), [FabNER](https://link.springer.com/article/10.1007/s10845-021-01807-x), [MultiNERD](https://aclanthology.org/2022.findings-naacl.60/), [Ontonotes](https://aclanthology.org/N09-4006/), [FindVehicle](https://arxiv.org/abs/2304.10893), [CrossNER](https://ojs.aaai.org/index.php/AAAI/article/view/17587), [MSRA NER](https://aclanthology.org/W06-0115/), [Resume NER](https://aclanthology.org/P18-1144/), [CLUE NER](https://arxiv.org/abs/2001.04351), [Weibo NER](https://aclanthology.org/D15-1064/), [Boson](https://github.com/InsaneLife/ChineseNLPCorpus/tree/master/NER/boson), [ADE Corpus](https://jbiomedsem.biomedcentral.com/articles/10.1186/2041-1480-3-15), [GIDS](https://arxiv.org/abs/1804.06987), [CoNLL2004](https://aclanthology.org/W04-2412/), [SciERC](https://aclanthology.org/D18-1360/), [Semeval-RE](https://aclanthology.org/S10-1006/), [NYT11-HRL](https://ojs.aaai.org/index.php/AAAI/article/view/4688), [KBP37](https://arxiv.org/abs/1508.01006), [NYT](https://link.springer.com/chapter/10.1007/978-3-642-15939-8_10), [Wiki-ZSL](https://aclanthology.org/2021.naacl-main.272/), [FewRel](https://aclanthology.org/D18-1514/), [CMeIE](https://link.springer.com/chapter/10.1007/978-3-030-60450-9_22), [DuIE](https://link.springer.com/chapter/10.1007/978-3-030-32236-6_72), [COAE2016](https://github.com/Sewens/COAE2016), [IPRE](https://arxiv.org/abs/1907.12801), [SKE2020](https://aistudio.baidu.com/datasetdetail/177191), [CASIE](https://ojs.aaai.org/index.php/AAAI/article/view/6401), [PHEE](https://aclanthology.org/2022.emnlp-main.376/), [CrudeOilNews](https://aclanthology.org/2022.lrec-1.49/), [RAMS](https://aclanthology.org/2020.acl-main.718/), [WikiEvents](https://aclanthology.org/2021.naacl-main.69/), [DuEE](https://link.springer.com/chapter/10.1007/978-3-030-60457-8_44), [DuEE-Fin](https://link.springer.com/chapter/10.1007/978-3-031-17120-8_14), [FewFC](https://ojs.aaai.org/index.php/AAAI/article/view/17720), [CCF law](https://aistudio.baidu.com/projectdetail/4201483), and more. These datasets have significantly contributed to the advancement of this research. We are also grateful for the valuable contributions in the field of information extraction made by [InstructUIE](http://arxiv.org/abs/2304.08085) and [YAYI-UIE](http://arxiv.org/abs/2312.15548), both in terms of data and model innovation. Our research results have also benefited from their creativity and hard work. Additionally, our heartfelt thanks go to [hiyouga/LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory); our fine-tuning code implementation owes much to their work.
The assistance provided by these academic resources has been instrumental in the completion of our research, and for this we are deeply appreciative.