---
license: odc-by
pretty_name: Flickr
size_categories:
- 100M<n<1B
task_categories:
- image-classification
- image-to-text
- text-to-image
tags:
- geospatial
---
# Dataset Card for Flickr

217,646,487 images with latitude/longitude coordinates from Flickr. This repo is a filtered version of https://huggingface.co/datasets/bigdata-pw/Flickr containing only the rows with a valid latitude/longitude pair.


![image/png](https://cdn-uploads.huggingface.co/production/uploads/64c4da8719565937fb268b32/0aTQX7TCbRRvqrMnAubmz.png)
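
To sanity-check the filtered data directly from the Hub, a query in the same style as the snippets below can be used. This is only a minimal sketch; `<user>/<repo>` is a placeholder for this repository's id.

```python
import duckdb

duckdb.sql("INSTALL httpfs; LOAD httpfs;")  # required for hf:// paths

# NOTE: <user>/<repo> is a placeholder for this repository's id on the Hub.
row_count = duckdb.sql(
    "SELECT COUNT(*) FROM 'hf://datasets/<user>/<repo>/*.parquet'"
).fetchone()[0]
print(row_count)  # expected to be on the order of 217 million
```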

## Filter Process

This was harder than expected. The main problem is that the connection to the Hugging Face Hub occasionally breaks, and as far as I know DuckDB does not handle this gracefully, so the following one-shot query does not work reliably:

```python
import duckdb

duckdb.sql("INSTALL httpfs; LOAD httpfs;")  # required extension for hf:// paths
df = duckdb.sql(
    "SELECT latitude, longitude "
    "FROM 'hf://datasets/bigdata-pw/Flickr/*.parquet' "
    "WHERE latitude IS NOT NULL AND longitude IS NOT NULL"
).df()
```

Instead, I used a more granular, file-by-file process to make sure I really got all of the files:

```python
import duckdb
import pandas as pd
from tqdm import tqdm
import os

# Create a directory to store the reduced parquet files
output_dir = "reduced_flickr_data"
if not os.path.exists(output_dir):
    os.makedirs(output_dir)

# Install and load the httpfs extension (only needs to be done once)
try:
    duckdb.sql("INSTALL httpfs; LOAD httpfs;")  # required extension
except Exception as e:
    print(f"httpfs most likely already installed/loaded: {e}") # most likely already installed

duckdb.sql("SET enable_progress_bar = false;")
# Get a list of already downloaded files to make the script idempotent
downloaded_files = set(os.listdir(output_dir))

for i in tqdm(range(0, 5150), desc="Downloading and processing files"):
    part_number = str(i).zfill(5)  # Pad with zeros to get 00000 format
    file_name = f"part-{part_number}.parquet"
    output_path = os.path.join(output_dir, file_name)

    # Skip if the file already exists
    if file_name in downloaded_files:
        #print(f"Skipping {file_name} (already downloaded)")  # Optional:  Uncomment for more verbose output.
        continue

    try:
        # Construct the per-file DuckDB query (the progress bar was disabled above)
        query = f"""
            SELECT *
            FROM 'hf://datasets/bigdata-pw/Flickr/{file_name}'
            WHERE latitude IS NOT NULL
                AND longitude IS NOT NULL
                AND (
                    (
                        TRY_CAST(latitude AS DOUBLE) IS NOT NULL AND
                        TRY_CAST(longitude AS DOUBLE) IS NOT NULL AND
                        (TRY_CAST(latitude AS DOUBLE) != 0.0 OR TRY_CAST(longitude AS DOUBLE) != 0.0)
                    )
                    OR
                    (
                        TRY_CAST(latitude AS VARCHAR) IS NOT NULL AND
                        TRY_CAST(longitude AS VARCHAR) IS NOT NULL AND
                        (latitude != '0' AND latitude != '0.0' AND longitude != '0' AND longitude != '0.0')
                    )
                )
        """
        df = duckdb.sql(query).df()

        # Save the filtered subset to its own parquet file in the output directory
        df.to_parquet(output_path)
        #print(f"saved part {part_number}") # optional, for logging purposes
    except Exception as e:
        print(f"Error processing {file_name}: {e}")
        continue  # Continue to the next iteration even if an error occurs

print("Finished processing all files.")
```

The first run took roughly 15 hours on my connection. When it finishes, a handful of files will have failed (fewer than 5 in my case). Rerun the script to fetch the missing parts; since everything already downloaded is skipped, it finishes in about a minute.
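
If you want to check which parts are still missing before rerunning, a quick sketch like this works, using the same output directory and naming scheme as the script above:

```python
import os

output_dir = "reduced_flickr_data"
# The source dataset is split into part-00000.parquet ... part-05149.parquet
expected = {f"part-{str(i).zfill(5)}.parquet" for i in range(0, 5150)}
missing = sorted(expected - set(os.listdir(output_dir)))
print(f"{len(missing)} parts missing: {missing[:10]}")
```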

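Once every part is present, the reduced files can be queried locally as a single table. A minimal sketch that counts rows and checks the latitude range, assuming the `reduced_flickr_data` directory from above:

```python
import duckdb

# Glob over the local reduced files; no httpfs is needed since nothing is remote.
summary = duckdb.sql(
    "SELECT COUNT(*) AS rows, "
    "MIN(TRY_CAST(latitude AS DOUBLE)) AS min_lat, "
    "MAX(TRY_CAST(latitude AS DOUBLE)) AS max_lat "
    "FROM 'reduced_flickr_data/*.parquet'"
).df()
print(summary)
```
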
---

Below is the rest of the original dataset card.

---

## Dataset Details

### Dataset Description

Approximately 5 billion images from Flickr. Entries include URLs to images at various resolutions and available metadata such as license, geolocation, dates, description and machine tags (camera info).

- **Curated by:** hlky
- **License:** Open Data Commons Attribution License (ODC-By) v1.0

# Citation Information
```
@misc{flickr,
  author = {hlky},
  title = {Flickr},
  year = {2024},
  publisher = {hlky},
  journal = {Hugging Face repository},
  howpublished = {\url{https://huggingface.co/datasets/bigdata-pw/Flickr}}
}
```

## Attribution Information
```
Contains information from [Flickr](https://huggingface.co/datasets/bigdata-pw/Flickr) which is made available
under the [ODC Attribution License](https://opendatacommons.org/licenses/by/1-0/).
```