
The HAM10000 dataset, a large collection of multi-source dermatoscopic images of common pigmented skin lesions

  • Original Paper and Dataset here
  • Kaggle dataset here

Introduction to the dataset

Training of neural networks for automated diagnosis of pigmented skin lesions is hampered by the small size and lack of diversity of available datasets of dermatoscopic images. We tackle this problem by releasing the HAM10000 ("Human Against Machine with 10000 training images") dataset. We collected dermatoscopic images from different populations, acquired and stored by different modalities. The final dataset consists of 10015 dermatoscopic images which can serve as a training set for academic machine learning purposes. Cases include a representative collection of all important diagnostic categories in the realm of pigmented lesions: actinic keratoses and intraepithelial carcinoma / Bowen's disease (akiec), basal cell carcinoma (bcc), benign keratosis-like lesions (solar lentigines / seborrheic keratoses and lichen planus-like keratoses, bkl), dermatofibroma (df), melanoma (mel), melanocytic nevi (nv) and vascular lesions (angiomas, angiokeratomas, pyogenic granulomas and hemorrhage, vasc).
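
For reference, here is a minimal sketch of how these seven diagnostic labels could be encoded with the `datasets` library's ClassLabel feature. The alphabetical label ordering is an assumption for illustration, not necessarily the ordering used in this repository.

```python
# Minimal sketch: encoding the seven HAM10000 diagnostic categories as a
# ClassLabel feature. The alphabetical ordering below is an assumption for
# illustration, not necessarily the ordering used by this dataset repository.
from datasets import ClassLabel

dx = ClassLabel(names=["akiec", "bcc", "bkl", "df", "mel", "nv", "vasc"])

print(dx.str2int("bkl"))  # -> 2 with this ordering
print(dx.int2str(4))      # -> "mel" with this ordering
```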

More than 50% of lesions are confirmed through histopathology (histo), the ground truth for the rest of the cases is either follow-up examination (follow_up), expert consensus (consensus), or confirmation by in-vivo confocal microscopy (confocal).
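
As an illustration, the snippet below tallies these confirmation types from the metadata file distributed with the original release; the file name `HAM10000_metadata.csv` and the `dx_type` column are assumptions based on that release, not files shipped in this repository.

```python
# Sketch: count how lesions were confirmed (histo, follow_up, consensus, confocal).
# Assumes the metadata CSV from the original HAM10000 release with a `dx_type`
# column; adjust the path and column name to match your local copy.
import pandas as pd

meta = pd.read_csv("HAM10000_metadata.csv")
print(meta["dx_type"].value_counts(normalize=True))  # share of each confirmation type
```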

The test set is not public, but the evaluation server remains running (see the challenge website). Any publications written using the HAM10000 data should be evaluated on the official test set hosted there, so that methods can be fairly compared.

  • The test site can be accessed here

Disclaimer and additional information

This is a contribution of open-sourced image data to Hugging Face. The images can be obtained from the links above.

The train/test split was created with stratified splitting by cancer/diagnosis type. The code used to stratify the dataset can be found on my GitHub here.
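
The author's stratification code is linked above; the snippet below is only a minimal sketch of a stratified split with scikit-learn, using a hypothetical metadata DataFrame and `dx` label column, not the author's actual script.

```python
# Minimal sketch of a stratified train/test split by diagnosis type using
# scikit-learn. The metadata file and `dx` column are hypothetical stand-ins
# for the HAM10000 metadata; this is not the author's original splitting script.
import pandas as pd
from sklearn.model_selection import train_test_split

meta = pd.read_csv("HAM10000_metadata.csv")  # assumed metadata file from the original release
train_df, test_df = train_test_split(
    meta,
    test_size=0.2,          # illustrative split ratio
    stratify=meta["dx"],    # preserve the class balance of each diagnosis in both splits
    random_state=42,
)
print(train_df["dx"].value_counts(normalize=True))
print(test_df["dx"].value_counts(normalize=True))
```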

I do not own any rights to the above images.

More Information needed
