
Dataset Card for Flickr

217,646,487 images with latitude/longitude coordinates from Flickr. This repo is a filtered version of https://huggingface.co/datasets/bigdata-pw/Flickr, keeping all rows that contain a valid latitude/longitude pair.
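A minimal loading sketch with the datasets library; the repo id below is a placeholder for this repository's actual id, and the latitude/longitude column names follow the filter query further down:

from datasets import load_dataset

# Placeholder repo id -- replace with this repository's actual id.
ds = load_dataset("<this-repo-id>", split="train", streaming=True)

# Peek at the first row's coordinates.
row = next(iter(ds))
print(row["latitude"], row["longitude"])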


Filter Process

It was harder than expected. The main problem is that the connection to the HF Hub occasionally breaks, and as far as I know DuckDB cannot recover from this gracefully, so the following one-shot query does not work reliably:

import duckdb
duckdb.sql("INSTALL httpfs; LOAD httpfs;")  # required extension
df = duckdb.sql("SELECT latitude, longitude FROM 'hf://datasets/bigdata-pw/Flickr/*.parquet' WHERE latitude IS NOT NULL AND longitude IS NOT NULL").df()

Instead, I used a more granular process to make sure I really got all the files.

import duckdb
import pandas as pd
from tqdm import tqdm
import os

# Create a directory to store the reduced parquet files
output_dir = "reduced_flickr_data"
if not os.path.exists(output_dir):
    os.makedirs(output_dir)

# Install and load the httpfs extension (only needs to be done once)
try:
    duckdb.sql("INSTALL httpfs; LOAD httpfs;")  # required extension
except Exception as e:
    print(f"httpfs most likely already installed/loaded: {e}") # most likely already installed

duckdb.sql("SET enable_progress_bar = false;")
# Get a list of already downloaded files to make the script idempotent
downloaded_files = set(os.listdir(output_dir))

for i in tqdm(range(0, 5150), desc="Downloading and processing files"):
    part_number = str(i).zfill(5)  # Pad with zeros to get 00000 format
    file_name = f"part-{part_number}.parquet"
    output_path = os.path.join(output_dir, file_name)

    # Skip if the file already exists
    if file_name in downloaded_files:
        #print(f"Skipping {file_name} (already downloaded)")  # Optional:  Uncomment for more verbose output.
        continue

    try:
        # Build the DuckDB query; keep only rows with a usable, non-zero lat/lon pair
        query = f"""
            SELECT *
            FROM 'hf://datasets/bigdata-pw/Flickr/{file_name}'
            WHERE latitude IS NOT NULL
                AND longitude IS NOT NULL
                AND (
                (
                    TRY_CAST(latitude AS DOUBLE) IS NOT NULL AND
                    TRY_CAST(longitude AS DOUBLE) IS NOT NULL AND
                    (TRY_CAST(latitude AS DOUBLE) != 0.0 OR TRY_CAST(longitude AS DOUBLE) != 0.0)
                )
                OR
                (
                    TRY_CAST(latitude AS VARCHAR) IS NOT NULL AND
                    TRY_CAST(longitude AS VARCHAR) IS NOT NULL AND
                    (latitude != '0' AND latitude != '0.0' AND longitude != '0' AND longitude != '0.0')
                )
                )
        """
        df = duckdb.sql(query).df()

        # Save the filtered rows for this part locally
        df.to_parquet(output_path)
        # print(f"saved part {part_number}")  # optional, for logging

    except Exception as e:
        print(f"Error processing {file_name}: {e}")
        continue  # Continue to the next iteration even if an error occurs

print("Finished processing all files.")

The first run took roughly 15 hours on my connection. When it finishes, a handful of files will have failed (fewer than 5 in my case). Rerun the script to pick up the missing files; the second pass finishes in about a minute.
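To confirm everything made it to disk, here is a quick local sanity check (a sketch, using the same output_dir as above; the 5150 count matches the loop range):

import os
import duckdb

output_dir = "reduced_flickr_data"

# Count how many of the 5150 part files are present locally.
present = [f for f in os.listdir(output_dir) if f.endswith(".parquet")]
print(f"{len(present)} / 5150 parts present")

# Total number of filtered rows across all local parts.
duckdb.sql(f"SELECT COUNT(*) AS rows FROM '{output_dir}/*.parquet'").show()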


Below is the rest of the original dataset card.


Dataset Details

Dataset Description

Approximately 5 billion images from Flickr. Entries include URLs to images at various resolutions and available metadata such as license, geolocation, dates, description and machine tags (camera info).

  • Curated by: hlky
  • License: Open Data Commons Attribution License (ODC-By) v1.0
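To see exactly which metadata fields each shard carries, one option is to inspect a single shard's schema, e.g. with DuckDB (a sketch; the part-00000 name follows the shard naming used in the filter script above):

import duckdb

duckdb.sql("INSTALL httpfs; LOAD httpfs;")

# Print the column names and types of one shard of the original dataset.
duckdb.sql(
    "DESCRIBE SELECT * FROM 'hf://datasets/bigdata-pw/Flickr/part-00000.parquet'"
).show()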

Citation Information

@misc{flickr,
  author = {hlky},
  title = {Flickr},
  year = {2024},
  publisher = {hlky},
  journal = {Hugging Face repository},
  howpublished = {\url{https://huggingface.co/datasets/bigdata-pw/Flickr}}
}

Attribution Information

Contains information from [Flickr](https://huggingface.co/datasets/bigdata-pw/Flickr) which is made available
under the [ODC Attribution License](https://opendatacommons.org/licenses/by/1-0/).