Dataset Viewer issue: DatasetGenerationError

#4
by ksmehrab - opened
HDR Imageomics Institute org

The dataset viewer is not working.

Error details:

Error code:   DatasetGenerationError
Exception:    DatasetGenerationError
Message:      An error occurred while generating the dataset
Traceback:    Traceback (most recent call last):
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1995, in _prepare_split_single
                  for _, table in generator:
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/packaged_modules/csv/csv.py", line 193, in _generate_tables
                  csv_file_reader = pd.read_csv(file, iterator=True, dtype=dtype, **self.config.pd_read_csv_kwargs)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/streaming.py", line 75, in wrapper
                  return function(*args, download_config=download_config, **kwargs)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/utils/file_utils.py", line 1491, in xpandas_read_csv
                  return pd.read_csv(xopen(filepath_or_buffer, "rb", download_config=download_config), **kwargs)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/pandas/io/parsers/readers.py", line 1026, in read_csv
                  return _read(filepath_or_buffer, kwds)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/pandas/io/parsers/readers.py", line 620, in _read
                  parser = TextFileReader(filepath_or_buffer, **kwds)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/pandas/io/parsers/readers.py", line 1620, in __init__
                  self._engine = self._make_engine(f, self.engine)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/pandas/io/parsers/readers.py", line 1898, in _make_engine
                  return mapping[engine](f, **self.options)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/pandas/io/parsers/c_parser_wrapper.py", line 93, in __init__
                  self._reader = parsers.TextReader(src, **kwds)
                File "parsers.pyx", line 574, in pandas._libs.parsers.TextReader.__cinit__
                File "parsers.pyx", line 663, in pandas._libs.parsers.TextReader._get_header
                File "parsers.pyx", line 874, in pandas._libs.parsers.TextReader._tokenize_rows
                File "parsers.pyx", line 891, in pandas._libs.parsers.TextReader._check_tokenize_status
                File "parsers.pyx", line 2053, in pandas._libs.parsers.raise_parser_error
              UnicodeDecodeError: 'utf-8' codec can't decode byte 0x89 in position 0: invalid start byte
              
              The above exception was the direct cause of the following exception:
              
              Traceback (most recent call last):
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1323, in compute_config_parquet_and_info_response
                  parquet_operations = convert_to_parquet(builder)
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 938, in convert_to_parquet
                  builder.download_and_prepare(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1027, in download_and_prepare
                  self._download_and_prepare(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1122, in _download_and_prepare
                  self._prepare_split(split_generator, **prepare_split_kwargs)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1882, in _prepare_split
                  for job_id, done, content in self._prepare_split_single(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 2038, in _prepare_split_single
                  raise DatasetGenerationError("An error occurred while generating the dataset") from e
              datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset

cc @albertvillanova @lhoestq @severo.

The error is for the trait_segmentation config.
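For context, byte 0x89 at position 0 is the first byte of the PNG file signature, so the UnicodeDecodeError above strongly suggests the CSV parser is being handed an image file rather than a CSV. A minimal sketch (file paths hypothetical) for checking a suspect file before parsing:

```python
# 0x89 is the first byte of the 8-byte PNG signature; if a "CSV"
# starts with it, the file is almost certainly a PNG image.
PNG_MAGIC = b"\x89PNG\r\n\x1a\n"

def looks_like_png(path):
    """Return True if the file starts with the PNG signature."""
    with open(path, "rb") as f:
        return f.read(8) == PNG_MAGIC
```

This kind of check can help pinpoint which file in the repo the loader is mistakenly treating as CSV.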

Maybe the solution is to rename segmentation_data.csv to metadata.csv, following https://huggingface.co/docs/hub/datasets-image? cc @polinaeterna if you have more details.
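If that rename is the fix, a minimal sketch of doing it in a local checkout of the dataset repo (the directory layout and path below are assumptions):

```python
from pathlib import Path

def rename_to_metadata(config_dir, old_name="segmentation_data.csv"):
    """Rename a per-config CSV to the name the Hub image loader expects."""
    old = Path(config_dir) / old_name
    if old.exists():
        # The loader only picks up metadata files named metadata.csv.
        old.rename(Path(config_dir) / "metadata.csv")

# e.g., in a hypothetical local clone of the dataset repository:
rename_to_metadata("dataset_repo/trait_segmentation")
```

After committing and pushing the rename, the Viewer should re-run the dataset generation job.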

HDR Imageomics Institute org

@severo , is there a problem with subsets that are different types? We've been seeing these issues with metadata files across datasets that have a mix of text and image subsets (ex: NEON beetles). We really appreciate your help trying to solve these issues!

is there a problem with subsets that are different types?

No, that's fine. Splits within the same subset must share the same set of columns, but different subsets don't have to.
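Concretely, mixed-type subsets can be declared as separate configs in the README YAML, each with its own data files; a sketch (config names and paths here are illustrative, not taken from the actual repo):

```yaml
configs:
  - config_name: trait_segmentation
    data_files: "trait_segmentation/*"
  - config_name: text_subset
    data_files: "text_subset/*.csv"
```

Each config is loaded independently, so their column sets never need to match.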

HDR Imageomics Institute org

@severo , thanks for confirming.

This is a similar error to the NEON beetles dataset, so I'm wondering if @lhoestq has a suggestion (pinged on that issue)?

I confirm the Viewer only supports metadata CSV files if they are named metadata.csv.

Btw if you want to check locally if a certain structure works, you can do

from datasets import load_dataset

ds = load_dataset("path/to/local/dir")
print(ds["train"][0])  # load_dataset returns a DatasetDict, so index a split first
HDR Imageomics Institute org
edited Aug 23

I confirm the Viewer only supports metadata CSV files if they are named metadata.csv.

Btw if you want to check locally if a certain structure works, you can do

from datasets import load_dataset

ds = load_dataset("path/to/local/dir")
print(ds["train"][0])  # load_dataset returns a DatasetDict, so index a split first

Ah, so when pairing images with CSVs containing information about them, the file has to be named metadata.csv even with YAML configs? Is there any plan to make this more flexible for datasets with multiple subsets, to avoid putting a metadata.csv into each image directory, where it is not as easily accessed? Even allowing names ending in _metadata.csv or -metadata.csv would help a lot with flexibility of dataset structure.

Also, thanks for the clarity on checking locally @lhoestq .

Ah, so when pairing images with CSVs containing info about them it has to be named metadata.csv even with yaml configs?

Yes, correct!
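For reference, a minimal sketch of the per-subset layout the image loader expects: a metadata.csv (that exact name) whose file_name column holds paths to the images relative to the CSV. The directory and column names below are illustrative, not from the actual repo:

```python
import csv
from pathlib import Path

# Hypothetical subset directory:
# trait_segmentation/
# ├── metadata.csv        <- the required file name
# └── images/
#     └── 0001.png
root = Path("trait_segmentation")
(root / "images").mkdir(parents=True, exist_ok=True)
with open(root / "metadata.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["file_name", "trait"])        # "trait" is an example column
    writer.writerow(["images/0001.png", "dorsal_fin"])
```

Any additional columns in metadata.csv become extra features alongside the decoded image.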

Is there any plan to make this more flexible for datasets with multiple subsets to avoid the need to put a metadata.csv into each image directory where they are not as easily accessed?

That would be great! Would be happy to discuss this on GitHub: https://github.com/huggingface/datasets

HDR Imageomics Institute org

That would be great! Would be happy to discuss this on GitHub: https://github.com/huggingface/datasets

Thanks, @lhoestq . I've submitted an issue requesting such a feature and primed the discussion with my simplified suggested solution.
