---
dataset_info:
  features:
  - name: sentences
    list:
    - name: detokenized_text
      dtype: string
    - name: index
      dtype: int64
    - name: token_positions
      sequence:
        sequence: int64
    - name: tokens
      list:
      - name: index
        dtype: int64
      - name: text
        dtype: string
      - name: xpos
        dtype: string
  - name: coref_chains
    sequence:
      sequence:
        sequence: int64
  - name: id
    dtype: string
  splits:
  - name: train
    num_bytes: 41993013
    num_examples: 1345
  - name: validation
    num_bytes: 4236748
    num_examples: 135
  - name: test
    num_bytes: 4312728
    num_examples: 207
  download_size: 8195556
  dataset_size: 50542489
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
  - split: test
    path: data/test-*
---
This dataset is a detokenized version of the ECMT dataset, produced with the kiwipiepy library.
The script used to convert the dataset is available here: https://gist.github.com/ianporada/a246ebf59696c6e16e1bc1873bc182a4
Library versions used: kiwipiepy==0.20.3 and kiwipiepy_model==0.20.0
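
For reference, here is a minimal sketch of the kind of detokenization kiwipiepy performs, using its `Kiwi.join` API to join (form, tag) morpheme pairs back into surface text. The morpheme sequence below is illustrative only; the actual conversion logic is in the gist linked above.

```python
from kiwipiepy import Kiwi

kiwi = Kiwi()

# Illustrative morpheme sequence as (surface form, Sejong-style tag) pairs;
# the real conversion script reads these from the ECMT annotations.
morphs = [
    ("나", "NP"), ("는", "JX"),
    ("밥", "NNG"), ("을", "JKO"),
    ("먹", "VV"), ("었", "EP"), ("다", "EF"),
]

# Kiwi.join recombines the morphemes into a natural surface string,
# restoring spacing between words (expected output: 나는 밥을 먹었다).
text = kiwi.join(morphs)
print(text)
```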
The dataset schema is as follows:

{
    # the original document filename
    "id": str,
    # a list of sentences in the document
    "sentences": [
        {
            # the index of the sentence within the document
            "index": int,
            # a single string representing the text of the sentence (detokenized using kiwipiepy)
            "detokenized_text": str,
            # a list of token positions, one (start, end) pair per token;
            # the token at index i corresponds to the characters detokenized_text[start:end]
            "token_positions": [(int, int), ...],
            # the original values of each token from the dataset
            "tokens": [{"index": int, "text": str, "xpos": str}, ...],
        },
        ...
    ],
    # a list of coreference chains; each chain is a list of mentions, and each
    # mention is a triple [sentence_index, start_token_index, end_token_index]
    # where the token indices are inclusive indices within the given sentence
    "coref_chains": [[[int, int, int], ...], ...],
}
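
As a usage sketch, the snippet below loads the train split and prints the surface text of every mention in the first document, recovering character spans by combining the inclusive token indices from `coref_chains` with `token_positions`. The repository id is a placeholder; replace it with this dataset's actual path on the Hub.

```python
from datasets import load_dataset

# Placeholder repo id; substitute the actual dataset path on the Hub.
ds = load_dataset("user/ecmt-detokenized", split="train")

doc = ds[0]
sentences = doc["sentences"]

for chain in doc["coref_chains"]:
    for sent_idx, start_tok, end_tok in chain:
        sent = sentences[sent_idx]
        # Token indices are inclusive, so the mention runs from the first
        # character of token `start_tok` to the last character of token
        # `end_tok` within the detokenized sentence text.
        char_start = sent["token_positions"][start_tok][0]
        char_end = sent["token_positions"][end_tok][1]
        print(sent["detokenized_text"][char_start:char_end])
```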