ghh001 committed on
Commit b979d00 • 1 Parent(s): 1b19400

Update README.md

Files changed (1)
  1. README.md +21 -315
README.md CHANGED
@@ -19,38 +19,27 @@ This is the official repository for [IEPile: Unearthing Large-Scale Schema-Based
 
 [**Datasets**](https://huggingface.co/datasets/zjunlp/iepie) |
 [**Paper**](https://huggingface.co/papers/2402.14710) |
- [**Usage**](./README.md#🚴3using-IEPile-to-train-models) |
- [**Limitations**](./README.md#7-limitations) |
- [**Statement & License**](./README.md#6-statement-and-license) |
- [**Citation**](./README.md#8-cite)
+ [**Usage**](./README.md#3using-IEPile-to-train-models) |
+ [**Limitations**](./README.md#5-limitations) |
+ [**Statement & License**](./README.md#4-statement-and-license) |
+ [**Citation**](./README.md#6-cite)
 
 > Please note that our IEPile may undergo **updates** (we will inform you upon their release). It is recommended to utilize the most current version.
 
 
 - [IEPile: A Large-Scale Information Extraction Corpus](#iepile-a-large-scale-information-extraction-corpus)
- - [🎯1.Introduction](#1introduction)
- - [📊2.Data](#2data)
+ - [1.Introduction](#1introduction)
+ - [2.Data](#2data)
 - [2.1Construction of IEPile](#21construction-of-iepile)
 - [2.2Data Format of IEPile](#22data-format-of-iepile)
- - [🚴3.Using IEPile to Train Models](#3using-iepile-to-train-models)
- - [3.1Environment](#31environment)
- - [3.2Download Data and Models](#32download-data-and-models)
- - [3.4LoRA Fine-tuning](#34lora-fine-tuning)
- - [4.Continued Training with In-Domain Data](#4continued-training-with-in-domain-data)
- - [4.1Training Data Conversion](#41training-data-conversion)
- - [4.2Continued Training](#42continued-training)
- - [5.Prediction](#5prediction)
- - [5.1Test Data Conversion](#51test-data-conversion)
- - [5.2Basic Model + LoRA Prediction](#52basic-model--lora-prediction)
- - [5.3IE-Specific Model Prediction](#53ie-specific-model-prediction)
- - [6.Evaluation](#6evaluation)
- - [7.Statement and License](#7statement-and-license)
- - [8.Limitations](#8limitations)
- - [9.Cite](#9cite)
- - [10.Acknowledgements](#10acknowledgements)
-
-
- ## 🎯1.Introduction
+ - [3.Using IEPile to Train Models](#3using-iepile-to-train-models)
+ - [4.Statement and License](#4statement-and-license)
+ - [5.Limitations](#5limitations)
+ - [6.Cite](#6cite)
+ - [7.Acknowledgements](#7acknowledgements)
+
+
+ ## 1.Introduction
 
 
  **`IEPile`** dataset download links: [Google Drive](https://drive.google.com/file/d/1jPdvXOTTxlAmHkn5XkeaaCFXQkYJk5Ng/view?usp=sharing) | [Hugging Face](https://huggingface.co/datasets/zjunlp/iepile)
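As a quick sanity check on the Hugging Face copy, the corpus can be loaded with the `datasets` library. This is a minimal sketch; it assumes the repository's `train.json`/`dev.json` are picked up as the default splits.

```python
# Minimal sketch: load IEPile from the Hugging Face Hub.
# Assumes train.json / dev.json are exposed as the default splits.
from datasets import load_dataset

ds = load_dataset("zjunlp/iepile")
print(ds)
```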
@@ -90,7 +79,7 @@ Based on **IEPile**, we fine-tuned the `Baichuan2-13B-Chat` and `LLaMA2-13B-Chat
 </details>
 
 
- ## 📊2.Data
+ ## 2.Data
 
 
  ### 2.1Construction of IEPile
@@ -180,296 +169,13 @@ The data instance belongs to the `NER` task, is part of the `CoNLL2003` dataset,
 
 
 
- ## 🚴3.Using IEPile to Train Models
-
- ### 3.1Environment
-
- Before you begin, make sure to create an appropriate **virtual environment** following the instructions below:
-
- ```bash
- conda create -n IEPile python=3.9   # Create a virtual environment
- conda activate IEPile               # Activate the environment
- pip install -r requirements.txt     # Install dependencies
- ```
-
-
- ### 3.2Download Data and Models
-
- **`IEPile`** dataset download links: [Google Drive](https://drive.google.com/file/d/1jPdvXOTTxlAmHkn5XkeaaCFXQkYJk5Ng/view?usp=sharing) | [Hugging Face](https://huggingface.co/datasets/zjunlp/IEPile)
-
-
- ```
- IEPile
- ├── train.json    # Training set
- └── dev.json      # Validation set
- ```
-
- Here are some of the models supported by the code in this repository:
- [[llama](https://huggingface.co/meta-llama), [alpaca](https://github.com/tloen/alpaca-lora), [vicuna](https://huggingface.co/lmsys), [zhixi](https://github.com/zjunlp/KnowLM), [falcon](https://huggingface.co/tiiuae), [baichuan](https://huggingface.co/baichuan-inc), [chatglm](https://huggingface.co/THUDM), [qwen](https://huggingface.co/Qwen), [moss](https://huggingface.co/fnlp), [openba](https://huggingface.co/OpenBA)]
-
-
- Model download links for **`LLaMA2-IEPile`** | **`Baichuan2-IEPile`** | **`KnowLM-IE-v2`**: [zjunlp/llama2-13b-IEPile-lora](https://huggingface.co/zjunlp/llama2-13b-IEPile-lora/tree/main) | [zjunlp/baichuan2-13b-IEPile-lora](https://huggingface.co/zjunlp/baichuan2-13b-IEPile-lora) | [zjunlp/KnowLM-IE-v2]()
-
-
- **`LLaMA2-IEPile`** and **`Baichuan2-IEPile`** are two models mentioned in the IEPile paper that were fine-tuned on `LLaMA2-13B-Chat` and `Baichuan2-13B-Chat` using LoRA.
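The published adapters can also be fetched programmatically; a minimal sketch using `huggingface_hub`, where the local directory is a placeholder:

```python
# Minimal sketch: download the released LoRA adapter from the Hub.
# The local_dir value is a placeholder.
from huggingface_hub import snapshot_download

adapter_dir = snapshot_download(
    repo_id="zjunlp/llama2-13b-IEPile-lora",
    local_dir="lora/llama2-13b-IEPile-lora",
)
print(adapter_dir)
```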
-
-
- ```bash
- mkdir data     # Put data here
- mkdir models   # Put base models here
- mkdir results  # Put prediction results here
- mkdir lora     # Put LoRA fine-tuning results here
- ```
-
- Data should be placed in the `./data` directory.
-
-
-
- ### 3.4LoRA Fine-tuning
-
- > Important note: all the commands below should be executed inside the `IEPile` directory. For example, to run the fine-tuning script, use `bash ft_scripts/fine_llama.bash`. Please make sure your current working directory is correct.
-
-
-
- ```bash
- output_dir='lora/llama2-13b-chat-v1'
- mkdir -p ${output_dir}
- CUDA_VISIBLE_DEVICES="0,1,2,3,4,5,6,7" torchrun --nproc_per_node=8 --master_port=1287 src/test_finetune.py \
- --do_train --do_eval \
- --overwrite_output_dir \
- --model_name_or_path 'models/llama2-13b-chat' \
- --stage 'sft' \
- --model_name 'llama' \
- --template 'llama2' \
- --train_file 'data/train.json' \
- --valid_file 'data/dev.json' \
- --output_dir=${output_dir} \
- --per_device_train_batch_size 24 \
- --per_device_eval_batch_size 24 \
- --gradient_accumulation_steps 4 \
- --preprocessing_num_workers 16 \
- --num_train_epochs 10 \
- --learning_rate 5e-5 \
- --max_grad_norm 0.5 \
- --optim "adamw_torch" \
- --max_source_length 400 \
- --cutoff_len 700 \
- --max_target_length 300 \
- --report_to tensorboard \
- --evaluation_strategy "epoch" \
- --save_strategy "epoch" \
- --save_total_limit 10 \
- --lora_r 16 \
- --lora_alpha 32 \
- --lora_dropout 0.05 \
- --bf16
- ```
-
- * `model_name`: Specifies the **name of the model architecture** you want to use (the 7B, 13B, Base, and Chat variants share the same architecture). Currently supported models include: ["`llama`", "`alpaca`", "`vicuna`", "`zhixi`", "`falcon`", "`baichuan`", "`chatglm`", "`qwen`", "`moss`", "`openba`"]. **Please note** that this parameter is distinct from `--model_name_or_path`.
- * `model_name_or_path`: Model path; please download the corresponding model from [HuggingFace](https://huggingface.co/models).
- * `template`: The **name of the template** used, such as `alpaca`, `baichuan`, `baichuan2`, `chatglm3`, etc. Refer to [src/datamodule/template.py](./src/datamodule/template.py) for all supported template names. The default is the `alpaca` template. **For `Chat` versions of models, use the matching template; `Base` versions can default to `alpaca`.**
- * `train_file`, `valid_file` (optional): The **file paths** of the training and validation sets. Note: currently only the **JSON format** is supported.
- * `output_dir`: The **path for saving the weight parameters** after LoRA fine-tuning.
- * `val_set_size`: The number of samples in the **validation set**; the default is 1000.
- * `per_device_train_batch_size`, `per_device_eval_batch_size`: The `batch_size` on each GPU device; adjust according to memory size.
- * `max_source_length`, `max_target_length`, `cutoff_len`: The maximum input length, maximum output length, and cutoff length; the cutoff length can simply be treated as maximum input length + maximum output length. Set appropriate values according to your needs and memory size.
- * `deepspeed`: Remove this option if device resources are insufficient.
-
- > Quantization can be performed by setting `bits` to 8 or 4.
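Once fine-tuning finishes, the saved adapter can be attached to the base model for a quick smoke test. The sketch below uses `peft` and is not part of the repository's scripts; both paths are placeholders matching the example above:

```python
# Minimal sketch: attach the LoRA adapter saved in output_dir to the base model.
# Not part of the repository's scripts; paths are placeholders.
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("models/llama2-13b-chat")
model = PeftModel.from_pretrained(base, "lora/llama2-13b-chat-v1")
model = model.merge_and_unload()  # optionally fold the LoRA weights into the base
```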
-
- To learn more about parameter configuration, please refer to [src/utils/args](./src/args).
-
- The specific script for fine-tuning the `LLaMA2-13B-Chat` model can be found in [ft_scripts/fine_llama.bash](./ft_scripts/fine_llama.bash).
-
-
- The specific script for fine-tuning the `Baichuan2-13B-Chat` model can be found in [ft_scripts/fine_baichuan.bash](./ft_scripts/fine_baichuan.bash).
-
-
-
-
- ## 4.Continued Training with In-Domain Data
-
- Although the `Baichuan2-IEPile` and `LLaMA2-IEPile` models have undergone extensive instruction fine-tuning on multiple general datasets and thus possess a degree of **general information extraction capability**, they may still exhibit certain limitations when processing data in **specific domains** (such as `law`, `education`, `science`, `telecommunications`). To address this challenge, it is recommended to conduct **secondary training** of these models on datasets specific to those domains. This helps the models better adapt to the semantic and structural characteristics of those domains, significantly enhancing their **information extraction capability within them**.
-
-
- ### 4.1Training Data Conversion
-
- First, it is necessary to **format the data** to include `instruction` and `output` fields. For this purpose, we provide the script [convert_func.py](./ie2instruction/convert_func.py), which can batch-convert data into a format that can be used directly by the model.
-
-
- > Before using the [convert_func.py](./ie2instruction/convert_func.py) script, please make sure to consult the [data](./data) directory, which provides detailed instructions on the data format required for each task. Refer to `sample.json` for the format of the data before conversion, `schema.json` for the organization of the schema, and `train.json` for the data format after conversion.
-
- > Additionally, you can directly use the bilingual (Chinese and English) information extraction dataset [zjunlp/InstructIE](https://huggingface.co/datasets/zjunlp/InstructIE), which covers 12 themes such as characters, vehicles, works of art, natural science, man-made objects, and astronomical objects.
-
-
- ```bash
- python ie2instruction/convert_func.py \
- --src_path data/NER/sample.json \
- --tgt_path data/NER/train.json \
- --schema_path data/NER/schema.json \
- --language zh \
- --task NER \
- --split_num 6 \
- --random_sort \
- --split train
- ```
-
-
- * `language`: Supports two languages, `zh` (Chinese) and `en` (English), with different instruction templates used for each.
- * `task`: Currently supports five types of tasks: ['`RE`', '`NER`', '`EE`', '`EET`', '`EEA`'].
- * `split_num`: The maximum number of schemas allowed in a single instruction. The default is 4, and -1 disables splitting. The recommended split number varies by task: **6 for NER, and 4 for RE, EE, EET, EEA**.
- * `random_sort`: Whether to randomize the order of schemas in the instructions. The default is False, meaning schemas are sorted alphabetically.
- * `split`: Specifies the type of dataset, with options `train` or `test`.
-
- The converted training data will contain four fields: `task`, `source`, `instruction`, `output`.
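For orientation, a converted record has roughly this shape (an illustrative sketch; the field values are invented rather than taken from the corpus):

```python
# Illustrative shape of one converted training record; values are invented.
record = {
    "task": "NER",
    "source": "CoNLL2003",
    "instruction": "...",  # instruction text with the schema list embedded
    "output": "...",       # expected extraction result, serialized as a string
}
```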
-
-
- ### 4.2Continued Training
-
-
- ```bash
- output_dir='lora/llama2-13b-chat-v1-continue'
- mkdir -p ${output_dir}
- CUDA_VISIBLE_DEVICES="0,1,2,3,4,5,6,7" torchrun --nproc_per_node=8 --master_port=1287 src/test_finetune.py \
- --do_train --do_eval \
- --overwrite_output_dir \
- --model_name_or_path 'models/llama2-13B-Chat' \
- --checkpoint_dir 'zjunlp/llama2-13b-iepile-lora' \
- --stage 'sft' \
- --model_name 'llama' \
- --template 'llama2' \
- --train_file 'data/train.json' \
- --valid_file 'data/dev.json' \
- --output_dir=${output_dir} \
- --per_device_train_batch_size 24 \
- --per_device_eval_batch_size 24 \
- --gradient_accumulation_steps 4 \
- --preprocessing_num_workers 16 \
- --num_train_epochs 10 \
- --learning_rate 5e-5 \
- --max_grad_norm 0.5 \
- --optim "adamw_torch" \
- --max_source_length 400 \
- --cutoff_len 700 \
- --max_target_length 300 \
- --report_to tensorboard \
- --evaluation_strategy "epoch" \
- --save_strategy "epoch" \
- --save_total_limit 10 \
- --lora_r 64 \
- --lora_alpha 64 \
- --lora_dropout 0.05 \
- --bf16
- ```
-
- * To continue training from fine-tuned LoRA weights, simply point the `--checkpoint_dir` parameter to the path of the LoRA weights, for example `'zjunlp/llama2-13b-iepile-lora'`.
-
- > Quantization can be performed by setting `bits` to 8 or 4.
-
-
- > Please note that when using **`LLaMA2-IEPile`** or **`Baichuan2-IEPile`**, keep both `lora_r` and `lora_alpha` at 64; we do not provide recommendations for other values of these parameters.
-
-
- * To continue training from fine-tuned model weights, set the `--model_name_or_path` parameter to the path of the weights, such as `'zjunlp/KnowLM-IE-v2'`, without setting `--checkpoint_dir`.
-
-
- The script can be found at [ft_scripts/fine_continue.bash](./ft_scripts/fine_continue.bash).
-
-
- ## 5.Prediction
-
- ### 5.1Test Data Conversion
-
-
- Before converting the test data, please visit the [data](./data) directory to understand the data structure required for each task: 1) for the input data format, see `sample.json`; 2) for the schema format, see `schema.json`; 3) for the format of the converted data, see `train.json`. **Unlike training data, the test data input does not need to include annotation fields (`entity`, `relation`, `event`)**.
-
-
- ```bash
- python ie2instruction/convert_func.py \
- --src_path data/NER/sample.json \
- --tgt_path data/NER/test.json \
- --schema_path data/NER/schema.json \
- --language zh \
- --task NER \
- --split_num 6 \
- --split test
- ```
-
- When setting `split` to **test**, choose the number of schemas according to the task type: **6 is recommended for NER, while 4 is recommended for RE, EE, EET, and EEA**. The converted test data will contain five fields: `id`, `task`, `source`, `instruction`, `label`.
-
- The `label` field will be used for subsequent evaluation. If the input data lacks the annotation fields (`entity`, `relation`, `event`), the converted test data will not contain the `label` field, which suits scenarios where no annotated data is available.
-
-
- ### 5.2Basic Model + LoRA Prediction
-
- ```bash
- CUDA_VISIBLE_DEVICES=0 python src/inference.py \
- --stage sft \
- --model_name_or_path 'models/llama2-13B-Chat' \
- --checkpoint_dir 'zjunlp/llama2-13b-IEPile-lora' \
- --model_name 'llama' \
- --template 'llama2' \
- --do_predict \
- --input_file 'data/input.json' \
- --output_file 'results/llama2-13b-IEPile-lora_output.json' \
- --finetuning_type lora \
- --output_dir 'lora/test' \
- --predict_with_generate \
- --max_source_length 512 \
- --bf16 \
- --max_new_tokens 300
- ```
-
- * During inference, `model_name`, `template`, and `bf16` must match the settings used during training.
- * `model_name_or_path`: The path to the base model being used; it must match the corresponding LoRA model.
- * `checkpoint_dir`: The path to the LoRA weight files.
- * `output_dir`: This parameter has no effect during inference; any path can be specified.
- * `input_file`, `output_file`: The input path for the test file and the output path for prediction results, respectively.
- * `max_source_length`, `max_new_tokens`: The maximum input length and the number of new tokens to generate; adjust according to device performance.
-
- > Quantization can be performed by setting `bits` to 8 or 4.
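Conceptually, the quantized variants correspond to loading the base model in 8- or 4-bit precision. A minimal sketch with `transformers` and `bitsandbytes`, not the repository's own flag handling:

```python
# Minimal sketch: load the base model in 4-bit precision.
# Conceptual counterpart of --bits 4; not the repository's own flag handling.
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

model = AutoModelForCausalLM.from_pretrained(
    "models/llama2-13B-Chat",
    quantization_config=BitsAndBytesConfig(load_in_4bit=True),
)
```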
-
-
- ### 5.3IE-Specific Model Prediction
-
- ```bash
- CUDA_VISIBLE_DEVICES=0 python src/inference.py \
- --stage sft \
- --model_name_or_path 'zjunlp/KnowLM-IE-v2' \
- --model_name 'baichuan' \
- --template 'baichuan2' \
- --do_predict \
- --input_file 'data/input.json' \
- --output_file 'results/KnowLM-IE-v2_output.json' \
- --output_dir 'lora/test' \
- --predict_with_generate \
- --max_source_length 512 \
- --bf16 \
- --max_new_tokens 300
- ```
-
- `model_name_or_path`: The path to the weights of the model specialized for Information Extraction (IE).
-
- > Quantization can be performed by setting `bits` to 8 or 4.
-
-
- ## 6.Evaluation
-
- We provide scripts for evaluating the F1 scores for various tasks.
-
- ```bash
- python ie2instruction/eval_func.py \
- --path1 data/NER/processed.json \
- --task NER
- ```
-
- * `task`: Currently supports five types of tasks: ['`RE`', '`NER`', '`EE`', '`EET`', '`EEA`'].
- * You can set `sort_by` to `source` to calculate the F1 scores on each dataset separately.
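To make the reported metric concrete, here is the span-level micro-F1 computation in miniature (illustrative only; the repository's `eval_func.py` handles task-specific matching and grouping):

```python
# Illustrative micro-F1 over gold vs. predicted extraction tuples; toy data.
def micro_f1(gold: set, pred: set) -> float:
    tp = len(gold & pred)
    p = tp / len(pred) if pred else 0.0
    r = tp / len(gold) if gold else 0.0
    return 2 * p * r / (p + r) if p + r else 0.0

print(micro_f1({("person", "Obama")}, {("person", "Obama"), ("location", "Ohio")}))  # ~0.67
```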
-
-
-
- ## 7.Statement and License
+ ## 3.Using IEPile to Train Models
+
+
+ Please visit our official GitHub repository for a comprehensive guide on training and inference with [IEPile](https://github.com/zjunlp/IEPile).
+
+
+ ## 4.Statement and License
 We believe that annotated data contains the wisdom of humanity, and its existence is to promote the benefit of all humankind and help enhance our quality of life. We strongly urge all users not to use our corpus for any actions that may harm national or public security or violate legal regulations.
 We have done our best to ensure the quality and legality of the data provided. However, we also recognize that despite our efforts, there may still be some unforeseen issues, such as concerns about data protection and risks and problems caused by data misuse. We will not be responsible for these potential problems.
 For original data that is subject to usage permissions stricter than the [CC BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/deed.en) agreement, IEPile will adhere to those stricter terms. In all other cases, our operations will be based on the [CC BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/deed.en) license agreement.
@@ -477,7 +183,7 @@ For original data that is subject to usage permissions stricter than the [CC BY-
 
 
 
- ## 8.Limitations
+ ## 5.Limitations
 
 From the data perspective, our study primarily focuses on schema-based IE, which limits our ability to generalize to human instructions that do not follow our specific format requirements.
 Additionally, we do not explore the field of Open Information Extraction (Open IE); however, if we remove schema constraints, our dataset would be suitable for Open IE scenarios.
@@ -487,13 +193,13 @@ From the model perspective, due to computational resource limitations, our resea
 
 
 
- ## 9.Cite
+ ## 6.Cite
 If you use the IEPile or the code, please cite the paper:
 
 
 
 
- ## 10.Acknowledgements
+ ## 7.Acknowledgements
  We are very grateful for the inspiration provided by the [MathPile](mathpile) and [KnowledgePile](https://huggingface.co/datasets/Query-of-CC/Knowledge_Pile) projects. Special thanks are due to the builders and maintainers of the following datasets: [AnatEM](https://doi.org/10.1093/BIOINFORMATICS/BTT580)、[BC2GM](https://link.springer.com/chapter/10.1007/978-3-030-68763-2_48)、[BC4CHEMD](https://link.springer.com/chapter/10.1007/978-3-030-68763-2_48)、[NCBI-Disease](https://linkinghub.elsevier.com/retrieve/pii/S1532046413001974)、[BC5CDR](https://openreview.net/pdf?id=9EAQVEINuum)、[HarveyNER](https://aclanthology.org/2022.naacl-main.243/)、[CoNLL2003](https://aclanthology.org/W03-0419/)、[GENIA](https://pubmed.ncbi.nlm.nih.gov/12855455/)、[ACE2005](https://catalog.ldc.upenn.edu/LDC2006T06)、[MIT Restaurant](https://ieeexplore.ieee.org/document/6639301)、[MIT Movie](https://ieeexplore.ieee.org/document/6639301)、[FabNER](https://link.springer.com/article/10.1007/s10845-021-01807-x)、[MultiNERD](https://aclanthology.org/2022.findings-naacl.60/)、[Ontonotes](https://aclanthology.org/N09-4006/)、[FindVehicle](https://arxiv.org/abs/2304.10893)、[CrossNER](https://ojs.aaai.org/index.php/AAAI/article/view/17587)、[MSRA NER](https://aclanthology.org/W06-0115/)、[Resume NER](https://aclanthology.org/P18-1144/)、[CLUE NER](https://arxiv.org/abs/2001.04351)、[Weibo NER](https://aclanthology.org/D15-1064/)、[Boson](https://github.com/InsaneLife/ChineseNLPCorpus/tree/master/NER/boson)、[ADE Corpus](https://jbiomedsem.biomedcentral.com/articles/10.1186/2041-1480-3-15)、[GIDS](https://arxiv.org/abs/1804.06987)、[CoNLL2004](https://aclanthology.org/W04-2412/)、[SciERC](https://aclanthology.org/D18-1360/)、[Semeval-RE](https://aclanthology.org/S10-1006/)、[NYT11-HRL](https://ojs.aaai.org/index.php/AAAI/article/view/4688)、[KBP37](https://arxiv.org/abs/1508.01006)、[NYT](https://link.springer.com/chapter/10.1007/978-3-642-15939-8_10)、[Wiki-ZSL](https://aclanthology.org/2021.naacl-main.272/)、[FewRel](https://aclanthology.org/D18-1514/)、[CMeIE](https://link.springer.com/chapter/10.1007/978-3-030-60450-9_22)、[DuIE](https://link.springer.com/chapter/10.1007/978-3-030-32236-6_72)、[COAE2016](https://github.com/Sewens/COAE2016)、[IPRE](https://arxiv.org/abs/1907.12801)、[SKE2020](https://aistudio.baidu.com/datasetdetail/177191)、[CASIE](https://ojs.aaai.org/index.php/AAAI/article/view/6401)、[PHEE](https://aclanthology.org/2022.emnlp-main.376/)、[CrudeOilNews](https://aclanthology.org/2022.lrec-1.49/)、[RAMS](https://aclanthology.org/2020.acl-main.718/)、[WikiEvents](https://aclanthology.org/2021.naacl-main.69/)、[DuEE](https://link.springer.com/chapter/10.1007/978-3-030-60457-8_44)、[DuEE-Fin](https://link.springer.com/chapter/10.1007/978-3-031-17120-8_14)、[FewFC](https://ojs.aaai.org/index.php/AAAI/article/view/17720)、[CCF law](https://aistudio.baidu.com/projectdetail/4201483), and more. These datasets have significantly contributed to the advancement of this research. We are also grateful for the valuable contributions in the field of information extraction made by [InstructUIE](http://arxiv.org/abs/2304.08085) and [YAYI-UIE](http://arxiv.org/abs/2312.15548), both in terms of data and model innovation. Our research results have benefitted from their creativity and hard work as well. Additionally, our heartfelt thanks go to [hiyouga/LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory); our fine-tuning code implementation owes much to their work. 
The assistance provided by these academic resources has been instrumental in the completion of our research, and for this, we are deeply appreciative.
 