load_dataset failed

#1 opened by swulling

```
from datasets import load_dataset
dataset = load_dataset("m-a-p/COIG-CQIA")
```

```
ValueError Traceback (most recent call last)
/usr/local/lib/python3.10/dist-packages/datasets/builder.py in _prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, job_id)
1931 )
-> 1932 writer.write_table(table)
1933 num_examples_progress_update += len(table)

8 frames
ValueError: Couldn't cast
output: string
instruction: string
task_name_in_eng: string
input: string
index: string
other: string
domain: list<item: string>
child 0, item: string
task_type: struct<major: list<item: string>, minor: list<item: string>>
child 0, major: list<item: string>
child 0, item: string
child 1, minor: list<item: string>
child 0, item: string
to
{'output': Value(dtype='string', id=None), 'instruction': Value(dtype='string', id=None), 'human_verified': Value(dtype='bool', id=None), 'copyright': Value(dtype='string', id=None), 'input': Value(dtype='string', id=None), 'domain': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'answer_from': Value(dtype='string', id=None), 'task_type': {'major': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'minor': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)}}
because column names don't match

The above exception was the direct cause of the following exception:

DatasetGenerationError Traceback (most recent call last)
/usr/local/lib/python3.10/dist-packages/datasets/builder.py in _prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, job_id)
1948 if isinstance(e, SchemaInferenceError) and e.context is not None:
1949 e = e.context
-> 1950 raise DatasetGenerationError("An error occurred while generating the dataset") from e
1951
1952 yield job_id, True, (total_num_examples, total_num_bytes, writer._features, num_shards, shard_lengths)

DatasetGenerationError: An error occurred while generating the dataset
```

I ran into the same problem; it seems to come from the data itself.

Multimodal Art Projection org
edited Dec 22, 2023

Thanks for reporting @swulling @wyxwangmed
It seems that the formats of some data instances are not uniform, and we are trying to fix them. I will let you know once there are any updates.

In the meantime, you can construct the format you need by accessing the full dataset through the link below.
https://huggingface.co/datasets/m-a-p/COIG-CQIA/blob/main/COIG-CQIA-full.jsonl
We added an extra key "file_name" in this file to indicate which part of the dataset each instance belongs to.
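
For anyone who wants to rebuild the splits from that file right away, something along these lines should work (a rough sketch: the `/resolve/` URL serves the raw file, and the exact values stored in "file_name" are an assumption here, so inspect a few rows and adjust the filter to whatever your copy actually contains):

```
from datasets import load_dataset

# Sketch: load the full JSONL with the generic "json" builder and slice it by
# the "file_name" key. The value format of "file_name" is an assumption --
# print a few rows first to see what it looks like.
full = load_dataset(
    "json",
    data_files="https://huggingface.co/datasets/m-a-p/COIG-CQIA/resolve/main/COIG-CQIA-full.jsonl",
    split="train",
)
print(full[0]["file_name"])

# e.g. keep everything except the wikihow subset
subset = full.filter(lambda ex: "wikihow" not in ex["file_name"])
```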

After several attempts, I found that loading COIG-CQIA/wikihow/wikihow_zh_sample_clean_v2.jsonl together with the other data triggers this error; skipping that file, or loading only that file, works without any problem.
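
One quick way to confirm which file is out of line (just a sketch, not part of the official loader) is to load each JSONL on its own and compare the inferred schemas:

```
from datasets import load_dataset

# Sketch: load suspect files individually and print the inferred columns, so a
# file with extra or missing fields stands out immediately.
for path in [
    "COIG-CQIA/ruozhiba/ruozhiba.jsonl",
    "COIG-CQIA/wikihow/wikihow_zh_sample_clean_v2.jsonl",
]:
    ds = load_dataset("json", data_files=path, split="train")
    print(path, sorted(ds.features))
```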

```
git lfs clone https://huggingface.co/datasets/m-a-p/COIG-CQIA
```

```
from datasets import load_dataset

dataset = load_dataset("json", data_files=[
    "COIG-CQIA/chinese_traditional/chengyu.jsonl",
    "COIG-CQIA/chinese_traditional/poem.jsonl",
    "COIG-CQIA/chinese_traditional/trad-multi-choice-100-2.jsonl",
    "COIG-CQIA/chinese_traditional/trad-multi-choice-100.jsonl",
    "COIG-CQIA/chinese_traditional/trad-multi-choice-40.jsonl",
    "COIG-CQIA/chinese_traditional/translate_classical_chinese.jsonl",
    "COIG-CQIA/coig_pc/coig_pc_core_sample.jsonl",
    "COIG-CQIA/douban/book_introduce.jsonl",
    "COIG-CQIA/douban/book_reviews.jsonl",
    "COIG-CQIA/douban/movie_anime_synopsis_more_prompt.jsonl",
    "COIG-CQIA/douban/movie_documentary_synopsis_more_prompt.jsonl",
    "COIG-CQIA/douban/movie_recommendation.jsonl",
    "COIG-CQIA/douban/movie_recommendation2.jsonl",
    "COIG-CQIA/douban/movie_reviews.jsonl",
    "COIG-CQIA/douban/movie_synopsis_more_prompt.jsonl",
    "COIG-CQIA/douban/tv_anime_synopsis_more_prompt.jsonl",
    "COIG-CQIA/douban/tv_documentary_synopsis_more_prompt.jsonl",
    "COIG-CQIA/douban/tv_recommendation.jsonl",
    "COIG-CQIA/douban/tv_recommendation2.jsonl",
    "COIG-CQIA/douban/tv_reviews.jsonl",
    "COIG-CQIA/douban/tv_synopsis_more_prompt.jsonl",
    "COIG-CQIA/exam/coig_exam_sampled_clean_v3.jsonl",
    "COIG-CQIA/exam/kaoyan.jsonl",
    "COIG-CQIA/exam/law_gee_exam_clean_v2.jsonl",
    "COIG-CQIA/finance/finance_coig_pc.jsonl",
    "COIG-CQIA/finance/mba_baike.jsonl",
    "COIG-CQIA/human_value/100poison.jsonl",
    "COIG-CQIA/human_value/coig_human_value_multi_choice.jsonl",
    "COIG-CQIA/logi_qa/logi-qa.jsonl",
    "COIG-CQIA/ruozhiba/ruozhiba.jsonl",
    "COIG-CQIA/segmentfault/segmentfault_upvote5_clean.jsonl",
    "COIG-CQIA/wiki/10why_final.jsonl",
    "COIG-CQIA/wiki/51zyzy.jsonl",
    "COIG-CQIA/wiki/agriculture.jsonl",
    "COIG-CQIA/wiki/baobao.jsonl",
    "COIG-CQIA/wiki/bkmy_medicine.jsonl",
    "COIG-CQIA/wiki/bkmy_symptom.jsonl",
    "COIG-CQIA/wiki/digital_wiki.jsonl",
    "COIG-CQIA/wiki/zgbk.jsonl",
    # "COIG-CQIA/wikihow/wikihow_zh_sample_clean_v2.jsonl", # 如果取消注释,则会报错
    "COIG-CQIA/xhs/xhs.jsonl",
    "COIG-CQIA/zhihu/zhihu_score8.0-8.5_clean_v2.jsonl",
    "COIG-CQIA/zhihu/zhihu_score8.5-9.0_clean_v3.jsonl",
    "COIG-CQIA/zhihu/zhihu_score9.0-10_clean_v10.jsonl",
])
```
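
If you still want the wikihow rows in the same Dataset, one hedged option is to load that file separately, align its columns with the rest, and concatenate. The extra/missing column handling below is an assumption based on the error message; check `ds.features` on your own copy before dropping anything:

```
from datasets import load_dataset, concatenate_datasets

main = dataset["train"]
wikihow = load_dataset(
    "json",
    data_files="COIG-CQIA/wikihow/wikihow_zh_sample_clean_v2.jsonl",
    split="train",
)

# Columns the wikihow file has but the others don't, and vice versa
# (assumption from the traceback: fields like task_name_in_eng/index/other are
# extra, while human_verified/copyright/answer_from are missing).
extra = [c for c in wikihow.column_names if c not in main.column_names]
missing = [c for c in main.column_names if c not in wikihow.column_names]

wikihow = wikihow.remove_columns(extra)
for col in missing:
    # Fill absent columns with None so both datasets share the same columns.
    wikihow = wikihow.add_column(col, [None] * len(wikihow))

# Cast to the main schema (null columns can be cast to any type), then merge.
wikihow = wikihow.cast(main.features)
combined = concatenate_datasets([main, wikihow])
```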
