
# B2NER
We present B2NERD, a cohesive and efficient dataset refined from 54 existing English and Chinese datasets that improves LLMs' generalization on the challenging Open NER task. Our B2NER models, trained on B2NERD, outperform GPT-4 by 6.8-12.0 F1 points and surpass previous methods on 3 out-of-domain benchmarks spanning 15 datasets and 6 languages.
- 📖 Paper: Beyond Boundaries: Learning a Universal Entity Taxonomy across Datasets and Languages for Open Named Entity Recognition
- 🎮 GitHub Repo: https://github.com/UmeanNever/B2NER
- 📀 Data: You can download the data from here (the B2NERD_data.zip in the "Files and versions" tab). See the Data section below for more information.
- 💾 Model (LoRA Adapters): See the 7B model and 20B model. You may refer to the GitHub repo for quick demo usage; a hedged loading sketch is also shown below.
See the GitHub repo for more information about data usage and this work.
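A minimal sketch of loading one of the released LoRA adapters with `transformers` and `peft` is given below. The repo ids are placeholders, not the actual release names; see the GitHub repo and the model pages linked above for the real base model and adapter checkpoints.

```python
# Sketch only: attach a B2NER LoRA adapter to its base LLM with peft.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "<base-llm-repo-id>"          # placeholder: the base LLM the adapter was trained on
adapter_id = "<b2ner-lora-adapter-id>"  # placeholder: the released 7B or 20B LoRA adapter

tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base_model = AutoModelForCausalLM.from_pretrained(base_id, trust_remote_code=True)
model = PeftModel.from_pretrained(base_model, adapter_id)  # load the LoRA weights on top
model.eval()
```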
## Data
One of the paper's core contributions is the construction of the B2NERD dataset. It is a cohesive and efficient collection refined from 54 English and Chinese datasets and designed for Open NER model training. The preprocessed test datasets (7 for Chinese NER and 7 for English NER) used for Open NER OOD evaluation in our paper are also included in the released dataset to facilitate convenient evaluation for future research.
We provide 3 versions of our dataset.

- **B2NERD** (Recommended): Contains ~52k samples from 54 Chinese or English datasets. This is the final version of our dataset, suitable for out-of-domain / zero-shot NER model training. It features standardized entity definitions and pruned, diverse data.
- **B2NERD_all**: Contains ~1.4M samples from 54 datasets. The full-data version of our dataset, suitable for in-domain supervised evaluation. It has standardized entity definitions but does not undergo any data selection or pruning.
- **B2NERD_raw**: The raw collected datasets with raw entity labels. It goes through basic format preprocessing but no further standardization.
You can download the data from here (the B2NERD_data.zip in the "Files and versions" tab) or from Google Drive. The current data is uploaded as a .zip for convenience. We are considering uploading raw data files for better preview.
Please ensure that you have the proper licenses to access the raw datasets in our collection.
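If you prefer to fetch the archive programmatically, a minimal sketch using `huggingface_hub` is shown below. The repo id (`Umean/B2NERD`) and filename (`B2NERD_data.zip`) are taken from this card; adjust the extraction path as you like.

```python
# Sketch: download B2NERD_data.zip from this dataset repo and unpack it.
# Assumes `huggingface_hub` is installed (pip install huggingface_hub).
import zipfile
from huggingface_hub import hf_hub_download

# Grab the zip from the "Files and versions" tab of this dataset repo.
zip_path = hf_hub_download(
    repo_id="Umean/B2NERD",
    filename="B2NERD_data.zip",
    repo_type="dataset",
)

# Unpack the archive into a local folder.
with zipfile.ZipFile(zip_path) as zf:
    zf.extractall("B2NERD_data")
```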
Below are the dataset statistics and source datasets for the B2NERD dataset.
| Split | Lang. | Datasets | Types | Num | Raw Num |
|---|---|---|---|---|---|
| Train | En | 19 | 119 | 25,403 | 838,648 |
| | Zh | 21 | 222 | 26,504 | 580,513 |
| | Total | 40 | 341 | 51,907 | 1,419,161 |
| Test | En | 7 | 85 | - | 6,466 |
| | Zh | 7 | 60 | - | 14,257 |
| | Total | 14 | 145 | - | 20,723 |


More information can be found in the Appendix of the paper.
## Cite
@inproceedings{yang-etal-2025-beyond,
title = "Beyond Boundaries: Learning a Universal Entity Taxonomy across Datasets and Languages for Open Named Entity Recognition",
author = "Yang, Yuming and
Zhao, Wantong and
Huang, Caishuang and
Ye, Junjie and
Wang, Xiao and
Zheng, Huiyuan and
Nan, Yang and
Wang, Yuran and
Xu, Xueying and
Huang, Kaixin and
Zhang, Yunke and
Gui, Tao and
Zhang, Qi and
Huang, Xuanjing",
editor = "Rambow, Owen and
Wanner, Leo and
Apidianaki, Marianna and
Al-Khalifa, Hend and
Eugenio, Barbara Di and
Schockaert, Steven",
booktitle = "Proceedings of the 31st International Conference on Computational Linguistics",
month = jan,
year = "2025",
address = "Abu Dhabi, UAE",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2025.coling-main.725/",
pages = "10902--10923"
}