This view is limited to 50 files because it contains too many changes. See the raw diff for the remaining files.

Files changed (50)
  1. .gitattributes +0 -1
  2. README.md +105 -86
  3. Weather_Analogs/README.md +0 -36
  4. Weather_Analogs/weather_analog.zip +0 -3
  5. aviation_turbulence/{README.md → README.turbulence.txt} +0 -0
  6. aviation_turbulence/dataset.py +0 -63
  7. hurricane/2021.h5 +0 -3
  8. hurricane/2022.h5 +0 -3
  9. hurricane/best_track/ATL_hurricanes.txt +0 -0
  10. hurricane/best_track/PAC_hurricanes.txt +0 -0
  11. hurricane/best_track/data_description.pdf +0 -0
  12. hurricane/dataset.py +0 -63
  13. hurricane/variable_list.csv +0 -39
  14. long_term_precipitation_forecast/.gitattributes +0 -0
  15. long_term_precipitation_forecast/dataset.py +0 -99
  16. long_term_precipitation_forecast/test_data/.gitattributes +0 -0
  17. long_term_precipitation_forecast/test_data/daily_precip/2019/precip_20190101_00_00.nc +0 -3
  18. long_term_precipitation_forecast/test_data/daily_precip/2019/precip_20190102_00_00.nc +0 -3
  19. long_term_precipitation_forecast/test_data/daily_precip/2019/precip_20190103_00_00.nc +0 -3
  20. long_term_precipitation_forecast/test_data/daily_precip/2019/precip_20190104_00_00.nc +0 -3
  21. long_term_precipitation_forecast/test_data/daily_precip/2019/precip_20190105_00_00.nc +0 -3
  22. long_term_precipitation_forecast/test_data/daily_precip/2019/precip_20190106_00_00.nc +0 -3
  23. long_term_precipitation_forecast/test_data/daily_precip/2019/precip_20190107_00_00.nc +0 -3
  24. long_term_precipitation_forecast/test_data/daily_precip/2019/precip_20190108_00_00.nc +0 -3
  25. long_term_precipitation_forecast/test_data/daily_precip/2019/precip_20190109_00_00.nc +0 -3
  26. long_term_precipitation_forecast/test_data/daily_precip/2019/precip_20190110_00_00.nc +0 -3
  27. long_term_precipitation_forecast/test_data/daily_precip/2019/precip_20190111_00_00.nc +0 -3
  28. long_term_precipitation_forecast/test_data/daily_precip/2019/precip_20190112_00_00.nc +0 -3
  29. long_term_precipitation_forecast/test_data/daily_precip/2019/precip_20190113_00_00.nc +0 -3
  30. long_term_precipitation_forecast/test_data/daily_precip/2019/precip_20190114_00_00.nc +0 -3
  31. long_term_precipitation_forecast/test_data/daily_precip/2019/precip_20190115_00_00.nc +0 -3
  32. long_term_precipitation_forecast/test_data/daily_precip/2019/precip_20190116_00_00.nc +0 -3
  33. long_term_precipitation_forecast/test_data/daily_precip/2019/precip_20190117_00_00.nc +0 -3
  34. long_term_precipitation_forecast/test_data/daily_precip/2019/precip_20190118_00_00.nc +0 -3
  35. long_term_precipitation_forecast/test_data/daily_precip/2019/precip_20190119_00_00.nc +0 -3
  36. long_term_precipitation_forecast/test_data/daily_precip/2019/precip_20190120_00_00.nc +0 -3
  37. long_term_precipitation_forecast/test_data/daily_precip/2019/precip_20190121_00_00.nc +0 -3
  38. long_term_precipitation_forecast/test_data/daily_precip/2019/precip_20190122_00_00.nc +0 -3
  39. long_term_precipitation_forecast/test_data/daily_precip/2019/precip_20190123_00_00.nc +0 -3
  40. long_term_precipitation_forecast/test_data/daily_precip/2019/precip_20190124_00_00.nc +0 -3
  41. long_term_precipitation_forecast/test_data/daily_precip/2019/precip_20190125_00_00.nc +0 -3
  42. long_term_precipitation_forecast/test_data/daily_precip/2019/precip_20190126_00_00.nc +0 -3
  43. long_term_precipitation_forecast/test_data/daily_precip/2019/precip_20190127_00_00.nc +0 -3
  44. long_term_precipitation_forecast/test_data/daily_precip/2019/precip_20190128_00_00.nc +0 -3
  45. long_term_precipitation_forecast/test_data/daily_precip/2019/precip_20190129_00_00.nc +0 -3
  46. long_term_precipitation_forecast/test_data/daily_precip/2019/precip_20190130_00_00.nc +0 -3
  47. long_term_precipitation_forecast/test_data/daily_precip/2019/precip_20190131_00_00.nc +0 -3
  48. long_term_precipitation_forecast/test_data/daily_precip/2019/precip_20190201_00_00.nc +0 -3
  49. long_term_precipitation_forecast/test_data/daily_precip/2019/precip_20190202_00_00.nc +0 -3
  50. long_term_precipitation_forecast/test_data/daily_precip/2019/precip_20190203_00_00.nc +0 -3
.gitattributes CHANGED
@@ -57,4 +57,3 @@ aviation_turbulence/training_data_high_fl.nc filter=lfs diff=lfs merge=lfs -text
 aviation_turbulence/training_data_low_fl.nc filter=lfs diff=lfs merge=lfs -text
 aviation_turbulence/training_data_med_fl.nc filter=lfs diff=lfs merge=lfs -text
 long_term_precipitation_forecast/training_data/patmosx/patmosx_19900101_00_00.nc filter=lfs diff=lfs merge=lfs -text
-*.grib2 filter=lfs diff=lfs merge=lfs -text
README.md CHANGED
@@ -2,127 +2,146 @@
 license: mit
 ---
 
-# Dataset Card for WxC-Bench
-
-**WxC-Bench** primary goal is to provide a standardized benchmark for evaluating the performance of AI models in Atmospheric and Earth Sciences across various tasks.
+# Dataset Card for WINDSET
+
+WINDSET is the Weather Insights and Novel Data for Systematic Evaluation and Testing dataset.
+WINDSET's goal is to provide a simple standard for evaluating the performance of Atmospheric and Earth Science AI over a range of tasks.
 
 ## Dataset Details
-
-WxC-Bench contains datasets for six key tasks:
-1. **Nonlocal Parameterization of Gravity Wave Momentum Flux**
-2. **Prediction of Aviation Turbulence**
-3. **Identifying Weather Analogs**
-4. **Generation of Natural Language Weather Forecasts**
-5. **Long-Term Precipitation Forecasting**
-6. **Hurricane Track and Intensity Prediction**
+WINDSET contains data for 6 tasks:
+- Nonlocal paramterization of gravity wave momentum flux
+- Prediction of aviation turbulence
+- Identifying weather analogs
+- Generating natural language forecasts
+- Long-term precipitation forecasting
+- Hurricane track and intensity prediction
 
 ### Dataset Description
 
-#### 1. Nonlocal Parameterization of Gravity Wave Momentum Flux
-The input variables consist of three dynamic atmospheric variables (zonal and meridional winds and potential temperature), concatenated along the vertical dimension. The output variables are the zonal and meridional components of vertical momentum flux due to gravity waves.
-
-- **Curated by:** [Aman Gupta](https://www.github.com/amangupta2)
-<!-- - **License:** MIT License -->
-
-#### 2. Generation of Natural Language Weather Forecasts
-The dataset includes the HRRR re-analysis data paired with NOAA Storm Prediction Center daily reports for January 2017. This task aims to generate human-readable weather forecasts.
-
-- **Curated by:** [NASA IMPACT](https://www.github.com/nasa-impact)
-<!-- - **License:** MIT License -->
-
-#### 3. Long-Term Precipitation Forecasting
-This dataset contains daily global rainfall accumulation records and corresponding satellite observations. The goal is to predict rainfall up to 28 days in advance.
-
-- **Curated by:** [Simon Pfreundschuh](https://www.github.com/simonpf) (Colorado State University)
-
-#### 4. Aviation Turbulence Prediction
-Aimed at detecting turbulence conditions that impact aviation safety.
-
-- **Curated by:** [NASA IMPACT](https://www.github.com/nasa-impact)
-<!-- - **License:** MIT License -->
-
-#### 5. Hurricane Track and Intensity Prediction
-Provides HURDAT2 data for predicting hurricane paths and intensity changes.
-
-- **Curated by:** [NASA IMPACT](https://www.github.com/nasa-impact)
-<!-- - **License:** MIT License -->
-
-#### 6. Weather Analog Search
-Data to identify analog weather patterns for improved forecasting.
-
-- **Curated by:** [NASA IMPACT](https://www.github.com/nasa-impact)
-<!-- - **License:** MIT License -->
-
-### Dataset Sources
-
-#### Nonlocal Parameterization of Gravity Wave Momentum Flux
-Developed using ERA5 reanalysis data (top 15 pressure levels above 1 hPa are excluded). Inputs were coarsely grained from winds and temperatures on a 0.3° grid.
-
-#### Long-Term Precipitation Forecasting
-Precipitation data sources include the PERSIANN CDR dataset (until June 2020) and IMERG final daily product. Satellite observations are sourced from PATMOS-x, GridSat-B1, and SSMI(S) brightness temperatures CDRs, with baseline forecasts from ECMWF and the UK Met Office S2S database.
+<!-- Provide a longer summary of what this dataset is. -->
+
+- **Curated by:** [More Information Needed]
+- **Funded by [optional]:** [More Information Needed]
+- **Shared by [optional]:** [More Information Needed]
+- **Language(s) (NLP):** [More Information Needed]
+- **License:** MIT License
+
+### Dataset Sources [optional]
+
+<!-- Provide the basic links for the dataset. -->
+
+- **Repository:** [More Information Needed]
+- **Paper [optional]:** [More Information Needed]
+- **Demo [optional]:** [More Information Needed]
+
+## Uses
+
+<!-- Address questions around how the dataset is intended to be used. -->
+
+### Direct Use
+
+<!-- This section describes suitable use cases for the dataset. -->
+
+[More Information Needed]
+
+### Out-of-Scope Use
+
+<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
+
+[More Information Needed]
 
 ## Dataset Structure
 
-WxC-Bench datasets are organized by task directories:
-| WxC-Bench |
-|---------------------|
-| aviation_turbulence |
-| nonlocal_parameterization |
-| weather_analogs |
-| hurricane |
-| weather_forecast_discussion |
-| long_term_precipitation_forecast |
-
-Each directory contains datasets specific to the respective downstream tasks.
+<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
+
+[More Information Needed]
 
 ## Dataset Creation
 
 ### Curation Rationale
-The WxC-Bench dataset aims to create a unified standard for assessing AI models applied to complex meteorological and atmospheric science tasks.
+
+<!-- Motivation for the creation of this dataset. -->
+
+[More Information Needed]
 
 ### Source Data
 
-The datasets were created using multiple authoritative data sources, such as ERA5 reanalysis data, NOAA Storm Prediction Center reports, PERSIANN CDR, and IMERG products. Data processing involved spatial and temporal alignment, quality control, and variable normalization.
-
-## Citation
+<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
+
+#### Data Collection and Processing
+
+<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
+
+[More Information Needed]
+
+#### Who are the source data producers?
+
+<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
+
+[More Information Needed]
+
+### Annotations [optional]
+
+<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
+
+#### Annotation process
+
+<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
+
+[More Information Needed]
+
+#### Who are the annotators?
+
+<!-- This section describes the people or systems who created the annotations. -->
+
+[More Information Needed]
+
+#### Personal and Sensitive Information
+
+<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
+
+[More Information Needed]
+
+## Bias, Risks, and Limitations
+
+<!-- This section is meant to convey both technical and sociotechnical limitations. -->
+
+[More Information Needed]
+
+### Recommendations
+
+<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
+
+Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
+
+## Citation [optional]
+
+<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
 
 **BibTeX:**
 
-```
-@misc{shinde2024wxcbenchnoveldatasetweather,
-      title={WxC-Bench: A Novel Dataset for Weather and Climate Downstream Tasks},
-      author={Rajat Shinde and Christopher E. Phillips and Kumar Ankur and Aman Gupta and Simon Pfreundschuh and Sujit Roy and Sheyenne Kirkland and Vishal Gaur and Amy Lin and Aditi Sheshadri and Udaysankar Nair and Manil Maskey and Rahul Ramachandran},
-      year={2024},
-      eprint={2412.02780},
-      archivePrefix={arXiv},
-      primaryClass={cs.LG},
-      url={https://arxiv.org/abs/2412.02780},
-}
-```
-
-## Dataset Card Authors
-
-- Rajat Shinde
-- Christopher E. Phillips
-- Sujit Roy
-- Ankur Kumar
-- Aman Gupta
-- Simon Pfreundschuh
-- Sheyenne Kirkland
-- Vishal Gaur
-- Amy Lin
-- Aditi Sheshadri
-- Manil Maskey
-- Rahul Ramachandran
-
-## Dataset Card Contact
-
-For each task, please contact:
-
-- **Nonlocal Parameterization of Gravity Wave Momentum Flux:** [Aman Gupta](https://www.github.com/amangupta2)
-- **Aviation Turbulence Prediction:** [Christopher E. Phillips](https://www.github.com/sodoesaburningbus)
-- **Identifying Weather Analogs:** Christopher E. Phillips, Rajat Shinde
-- **Natural Language Weather Forecasts:** [Rajat Shinde](https://www.github.com/omshinde), Sujit Roy
-- **Long-Term Precipitation Forecasting:** [Simon Pfreundschuh](https://www.github.com/simonpf)
-- **Hurricane Track and Intensity Prediction:** [Ankur Kumar](https://www.github.com/ankurk017)
+[More Information Needed]
+
+**APA:**
+
+[More Information Needed]
+
+## Glossary [optional]
+
+<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
+
+[More Information Needed]
+
+## More Information [optional]
+
+[More Information Needed]
+
+## Dataset Card Authors [optional]
+
+[More Information Needed]
+
+## Dataset Card Contact
+
+[More Information Needed]
Weather_Analogs/README.md DELETED
@@ -1,36 +0,0 @@
-# Dataset Card for Weather Analog Search
-
-## Dataset Description
-This dataset contains processed MERRA2 (Modern-Era Retrospective analysis for Research and Applications, Version 2) weather data focused on Western Europe. It includes two key variables:
-- Sea Level Pressure (SLP)
-- 2-meter Temperature (T2M)
-
-The data covers a geographic region bounded by:
-- Longitude: -15° to 0°
-- Latitude: 42° to 58°
-
-## Time Coverage
-- Start Date: January 1, 2019
-- End Date: December 31, 2021
-- Temporal Resolution: Daily
-
-## Data Format
-- NetCDF4 files
-- Each file contains a single day of data
-- File naming convention: MERRA2_SLP_T2M_YYYYMMDD.nc
-
-## Variables
-- SLP: Sea Level Pressure
-- T2M: 2-meter Temperature
-
-## Geographic Coverage
-The dataset covers Western Europe with:
-- 25 longitude points (-15° to 0°)
-- 33 latitude points (42° to 58°)
-- Spatial resolution: ~0.625° x 0.5°
-
-## Data Source
-The data is derived from NASA's MERRA2 reanalysis dataset, processed to extract specific variables and geographic region of interest.
-
-## Intended Use
-This dataset is designed for weather analog search applications, allowing users to find historical weather patterns similar to current conditions in Western Europe.
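The deleted card above fixed a strict file naming convention (`MERRA2_SLP_T2M_YYYYMMDD.nc`). A minimal sketch of working with that convention; the helper function is hypothetical, not part of the dataset:

```python
from datetime import date

def parse_merra2_filename(name: str) -> date:
    """Extract the observation date from a MERRA2_SLP_T2M_YYYYMMDD.nc filename.

    `parse_merra2_filename` is an illustrative helper, not shipped with WINDSET.
    """
    stem = name.rsplit(".", 1)[0]   # drop the .nc extension
    datestr = stem.split("_")[-1]   # trailing YYYYMMDD token
    return date(int(datestr[:4]), int(datestr[4:6]), int(datestr[6:8]))

print(parse_merra2_filename("MERRA2_SLP_T2M_20190101.nc"))  # 2019-01-01
```

With daily files from 2019-01-01 through 2021-12-31, this parse is enough to index the archive by date without opening any NetCDF files.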
 
Weather_Analogs/weather_analog.zip DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:fad10c9ace377de7817ea1f61618f3632a375181cf8be342dcae75db702f7cc4
-size 42235592
 
 
 
 
aviation_turbulence/{README.md → README.turbulence.txt} RENAMED
File without changes
aviation_turbulence/dataset.py DELETED
@@ -1,63 +0,0 @@
-import os
-import datasets
-
-class AviationTurbulence(datasets.GeneratorBasedBuilder):
-    VERSION = datasets.Version("1.0.0")
-
-    def _info(self):
-        """
-        Defines the dataset metadata and feature structure.
-        """
-        return datasets.DatasetInfo(
-            description="Dataset containing .nc files for training.",
-            features=datasets.Features({
-                "file_path": datasets.Value("string"),  # Store file paths
-            }),
-            supervised_keys=None,  # Update if supervised task is defined
-            homepage="https://huggingface.co/datasets/nasa-impact/WINDSET/tree/main/aviation_turbulence",
-            license="MIT",
-        )
-
-    def _split_generators(self, dl_manager):
-        """
-        Define the dataset splits for train.
-        """
-        # Define the directory containing the dataset
-        data_dir = os.path.join(os.getcwd(), "aviation_turbulence")  # Update with the actual directory
-
-        # Get the directory for the train split (no validation or test splits)
-        train_dir = os.path.join(data_dir)
-
-        return [
-            datasets.SplitGenerator(
-                name=datasets.Split.TRAIN,
-                gen_kwargs={"split_dir": train_dir},
-            ),
-        ]
-
-    def _generate_data_from_files(self, data_dir):
-        """
-        Generate file paths for each .nc file in the directory.
-        """
-        example_id = 0
-
-        # Loop through the files in the directory
-        for nc_file in os.listdir(data_dir):
-            if nc_file.endswith(".nc"):
-                nc_file_path = os.path.join(data_dir, nc_file)
-
-                yield example_id, {
-                    "file_path": nc_file_path,
-                }
-                example_id += 1
-            else:
-                pass
-
-    def _generate_examples(self, split_dir):
-        """
-        Generates examples for the dataset from the split directory.
-        """
-        # Call the data generator to get the file paths
-        for example_id, example in self._generate_data_from_files(split_dir):
-            yield example_id, example
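The deleted loader above yielded one example per `.nc` file in a flat directory. The same enumeration pattern can be sketched without the `datasets` library; the function name and the throwaway directory below are illustrative only:

```python
import os
import tempfile

def iter_examples(data_dir: str, suffix: str = ".nc"):
    """Yield (example_id, {"file_path": ...}) pairs, mirroring the deleted
    _generate_data_from_files loop. Illustrative helper, not the real loader."""
    example_id = 0
    for name in sorted(os.listdir(data_dir)):  # sorted for deterministic order
        if name.endswith(suffix):
            yield example_id, {"file_path": os.path.join(data_dir, name)}
            example_id += 1

# Demonstrate on a throwaway directory with two matching files and one other.
with tempfile.TemporaryDirectory() as d:
    for name in ("a.nc", "b.nc", "notes.txt"):
        open(os.path.join(d, name), "w").close()
    examples = list(iter_examples(d))

print(len(examples))  # 2
```

Note the loader carried only file paths as features; any actual NetCDF decoding was left to downstream code.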
 
hurricane/2021.h5 DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:9597c2335ac6b317056d5944142b72872d623cc8bb4aea326962d4a5676756f2
-size 46017333248

hurricane/2022.h5 DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:4748f596595d38587b502fa3c2d2ee1f87f3bfeda585a0f34a8f738f17dbafaa
-size 46017333248
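The deleted `.h5` entries above are Git LFS pointer files (about 46 GB each of actual data), not the HDF5 payload itself. A small sketch of parsing such a pointer into its fields, following the key-value layout of the `git-lfs` spec v1 that the pointer references:

```python
# Pointer text reproduced from the hurricane/2021.h5 entry above.
POINTER = """version https://git-lfs.github.com/spec/v1
oid sha256:9597c2335ac6b317056d5944142b72872d623cc8bb4aea326962d4a5676756f2
size 46017333248
"""

def parse_lfs_pointer(text: str) -> dict:
    """Split a Git LFS pointer into a {key: value} dict, one key per line."""
    return dict(line.split(" ", 1) for line in text.strip().splitlines())

info = parse_lfs_pointer(POINTER)
print(info["size"])  # 46017333248
```

The `oid` field is the SHA-256 of the real file, which is what `git lfs pull` uses to fetch the payload from the LFS store.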
 
 
 
 
hurricane/best_track/ATL_hurricanes.txt DELETED
The diff for this file is too large to render. See raw diff
 
hurricane/best_track/PAC_hurricanes.txt DELETED
The diff for this file is too large to render. See raw diff
 
hurricane/best_track/data_description.pdf DELETED
Binary file (227 kB)
 
hurricane/dataset.py DELETED
@@ -1,63 +0,0 @@
-import os
-import datasets
-
-class HurricaneDetection(datasets.GeneratorBasedBuilder):
-    VERSION = datasets.Version("1.0.0")
-
-    def _info(self):
-        """
-        Defines the dataset metadata and feature structure.
-        """
-        return datasets.DatasetInfo(
-            description="Dataset containing .nc files for training.",
-            features=datasets.Features({
-                "file_path": datasets.Value("string"),  # Store file paths
-            }),
-            supervised_keys=None,  # Update if supervised task is defined
-            homepage="https://huggingface.co/datasets/nasa-impact/WINDSET/tree/main/hurricane",
-            license="MIT",
-        )
-
-    def _split_generators(self, dl_manager):
-        """
-        Define the dataset splits for train.
-        """
-        # Define the directory containing the dataset
-        data_dir = os.path.join(os.getcwd(), "hurricane")  # Update with the actual directory
-
-        # Get the directory for the train split (no validation or test splits)
-        train_dir = os.path.join(data_dir)
-
-        return [
-            datasets.SplitGenerator(
-                name=datasets.Split.TRAIN,
-                gen_kwargs={"split_dir": train_dir},
-            ),
-        ]
-
-    def _generate_data_from_files(self, data_dir):
-        """
-        Generate file paths for each .h5 file in the directory.
-        """
-        example_id = 0
-
-        # Loop through the files in the directory
-        for h5_file in os.listdir(data_dir):
-            if h5_file.endswith(".h5"):
-                h5_file_path = os.path.join(data_dir, h5_file)
-
-                yield example_id, {
-                    "file_path": h5_file_path,
-                }
-                example_id += 1
-            else:
-                pass
-
-    def _generate_examples(self, split_dir):
-        """
-        Generates examples for the dataset from the split directory.
-        """
-        # Call the data generator to get the file paths
-        for example_id, example in self._generate_data_from_files(split_dir):
-            yield example_id, example
 
hurricane/variable_list.csv DELETED
@@ -1,39 +0,0 @@
-,parameter,full_name,level,units
-0,U10m,Zonal Wind at 10m,Surface,m/s
-1,V10m,Meridional Wind at 10m,Surface,m/s
-2,T2m,Temperature at 2m,2 meters,K
-3,SLP,Sea Level Pressure,Sea Level,hPa
-4,QV2m,Specific Humidity at 2m,2 meters,kg/kg
-5,TQI,Total Precipitable Ice Water,Integrated,kg/m2
-6,TQL,Total Precipitable Liquid Water,Integrated,kg/m2
-7,TQV,Total Precipitable Water Vapor,Integrated,kg/m2
-8,U63,Zonal Wind at eta level 63,850 hPa,m/s
-9,U56,Zonal Wind at eta level 56,700 hPa,m/s
-10,U50,Zonal Wind at eta level 50,487 hPa,m/s
-11,U44,Zonal Wind at eta level 44,244 hPa,m/s
-12,U39,Zonal Wind at eta level 39,108 hPa,m/s
-13,V63,Meridional Wind at eta level 63,850 hPa,m/s
-14,V56,Meridional Wind at eta level 56,700 hPa,m/s
-15,V50,Meridional Wind at eta level 50,487 hPa,m/s
-16,V44,Meridional Wind at eta level 44,244 hPa,m/s
-17,V39,Meridional Wind at eta level 39,108 hPa,m/s
-18,T63,Temperature at eta level 63,850 hPa,K
-19,T56,Temperature at eta level 56,700 hPa,K
-20,T50,Temperature at eta level 50,487 hPa,K
-21,T44,Temperature at eta level 44,244 hPa,K
-22,T39,Temperature at eta level 39,108 hPa,K
-23,QV63,Specific Humidity at eta level 63,850 hPa,kg/kg
-24,QV56,Specific Humidity at eta level 56,700 hPa,kg/kg
-25,QV50,Specific Humidity at eta level 50,487 hPa,kg/kg
-26,QV44,Specific Humidity at eta level 44,244 hPa,kg/kg
-27,QV39,Specific Humidity at eta level 39,108 hPa,kg/kg
-28,OMEGA63,Vertical Velocity at eta level 63,850 hPa,Pa/s
-29,OMEGA56,Vertical Velocity at eta level 56,700 hPa,Pa/s
-30,OMEGA50,Vertical Velocity at eta level 50,487 hPa,Pa/s
-31,OMEGA44,Vertical Velocity at eta level 44,244 hPa,Pa/s
-32,OMEGA39,Vertical Velocity at eta level 39,108 hPa,Pa/s
-33,H63,Geopotential Height at eta level 63,850 hPa,m
-34,H56,Geopotential Height at eta level 56,700 hPa,m
-35,H50,Geopotential Height at eta level 50,487 hPa,m
-36,H44,Geopotential Height at eta level 44,244 hPa,m
-37,H39,Geopotential Height at eta level 39,108 hPa,m
 
long_term_precipitation_forecast/.gitattributes DELETED
The diff for this file is too large to render. See raw diff
 
long_term_precipitation_forecast/dataset.py DELETED
@@ -1,99 +0,0 @@
-import os
-import datasets
-
-class LongTermPrecipitationDataset(datasets.GeneratorBasedBuilder):
-    VERSION = datasets.Version("1.0.0")
-
-    def _info(self):
-        """
-        Defines the dataset metadata and feature structure.
-        """
-        return datasets.DatasetInfo(
-            description="Dataset containing .nc files per year for variables.",
-            features=datasets.Features({
-                "file_path": datasets.Value("string"),  # Store file paths
-                "year": datasets.Value("string"),  # Track year
-                "subfolder": datasets.Value("string")  # Track subfolder (sf1, sf2)
-            }),
-            supervised_keys=None,  # Update if supervised task is defined
-            homepage="https://huggingface.co/datasets/nasa-impact/WINDSET/tree/main/long_term_precipitation_forecast",
-            license="MIT",
-        )
-
-    def _split_generators(self, dl_manager):
-        """
-        Define the dataset splits for train, validation, and test.
-        """
-        # Define the directory containing the dataset
-        data_dir = os.path.join(os.getcwd(), "long_term_precipitation_forecast")
-
-        # Get the directories for each split
-        train_dir = os.path.join(data_dir, "training_data")
-        validation_dir = os.path.join(data_dir, "validation_data")
-        test_dir = os.path.join(data_dir, "test_data")
-
-        return [
-            datasets.SplitGenerator(
-                name=datasets.Split.TRAIN,
-                gen_kwargs={"split_dir": train_dir},
-            ),
-            datasets.SplitGenerator(
-                name=datasets.Split.VALIDATION,
-                gen_kwargs={"split_dir": validation_dir},
-            ),
-            datasets.SplitGenerator(
-                name=datasets.Split.TEST,
-                gen_kwargs={"split_dir": test_dir},
-            ),
-        ]
-
-    def _get_subfolders(self, base_dir):
-        """
-        Get all subfolders from the base directory.
-        """
-        return [os.path.join(base_dir, subfolder) for subfolder in os.listdir(base_dir) if os.path.isdir(os.path.join(base_dir, subfolder))]
-
-    def _get_year_folders(self, subfolder_dir):
-        """
-        Get all year folders inside a subfolder.
-        """
-        return [os.path.join(subfolder_dir, year_folder) for year_folder in os.listdir(subfolder_dir) if os.path.isdir(os.path.join(subfolder_dir, year_folder))]
-
-    def _generate_data_from_files(self, data_dir):
-        """
-        Generate file paths for each subfolder, year, and daily file.
-        """
-        example_id = 0
-
-        # Loop through subfolders
-        for subfolder in os.listdir(data_dir):
-            subfolder_path = os.path.join(data_dir, subfolder)
-
-            if os.path.isdir(subfolder_path):
-                # Loop through year folders inside the subfolder
-                for year_folder in os.listdir(subfolder_path):
-                    year_folder_path = os.path.join(subfolder_path, year_folder)
-
-                    if os.path.isdir(year_folder_path):
-                        # Loop through daily files inside the year folder
-                        for daily_file in os.listdir(year_folder_path):
-                            daily_file_path = os.path.join(year_folder_path, daily_file)
-
-                            if daily_file.endswith(".nc"):  # Only select NetCDF files
-                                # Yield file information for each data point
-                                yield example_id, {
-                                    "file_path": daily_file_path,
-                                    "year": year_folder,
-                                    "subfolder": subfolder,
-                                }
-                                example_id += 1
-                            else:
-                                raise FileNotFoundError(f"{daily_file_path} not found")
-
-    def _generate_examples(self, split_dir):
-        """
-        Generates examples for the dataset from the split directory.
-        """
-        # Call the data generator to get the file paths
-        for example_id, example in self._generate_data_from_files(split_dir):
-            yield example_id, example
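This deleted loader walked a `<split>/<subfolder>/<year>/*.nc` hierarchy by hand. The same traversal can be sketched with `pathlib` glob patterns; the directory and file names below are illustrative, not the real dataset layout:

```python
import pathlib
import tempfile

def nc_files_by_year(root: pathlib.Path) -> dict:
    """Map {(subfolder, year): [filenames]} for a <root>/<subfolder>/<year>/*.nc
    layout. Illustrative helper mirroring the deleted nested-loop traversal."""
    out = {}
    for f in sorted(root.glob("*/*/*.nc")):
        sub, year = f.parts[-3], f.parts[-2]  # subfolder and year directories
        out.setdefault((sub, year), []).append(f.name)
    return out

# Demonstrate on a throwaway tree shaped like the test_data split above.
with tempfile.TemporaryDirectory() as d:
    day = pathlib.Path(d) / "daily_precip" / "2019"
    day.mkdir(parents=True)
    (day / "precip_20190101_00_00.nc").touch()
    (day / "precip_20190102_00_00.nc").touch()
    layout = nc_files_by_year(pathlib.Path(d))

print(layout)
```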
 
long_term_precipitation_forecast/test_data/.gitattributes DELETED
The diff for this file is too large to render. See raw diff
 
long_term_precipitation_forecast/test_data/daily_precip/2019/precip_20190101_00_00.nc DELETED
@@ -1,3 +0,0 @@
1
- version https://git-lfs.github.com/spec/v1
2
- oid sha256:bc4af780640fdc4e129a8df246a89f2515c2b59d0c9e69782e09bd4a681c28d1
3
- size 841384
 
 
 
 
long_term_precipitation_forecast/test_data/daily_precip/2019/precip_20190102_00_00.nc DELETED
@@ -1,3 +0,0 @@
1
- version https://git-lfs.github.com/spec/v1
2
- oid sha256:a3ae15cc38a5d2e0d8ba58aa1dc658599dc6668513ba96283d494f6524858aee
3
- size 841384
 
 
 
 
long_term_precipitation_forecast/test_data/daily_precip/2019/precip_20190103_00_00.nc DELETED
@@ -1,3 +0,0 @@
1
- version https://git-lfs.github.com/spec/v1
2
- oid sha256:8e0fd0a0a56adbea95efdfe609f9ac08280fd0e4124ded19b3a347718d6e1eba
3
- size 841384
 
 
 
 
long_term_precipitation_forecast/test_data/daily_precip/2019/precip_20190104_00_00.nc DELETED
@@ -1,3 +0,0 @@
1
- version https://git-lfs.github.com/spec/v1
2
- oid sha256:3df2e0669d6c54ef4db2a5e89bf6c527bcca312081d8baa04960ca73e5eb8ae9
3
- size 841384
 
 
 
 
long_term_precipitation_forecast/test_data/daily_precip/2019/precip_20190105_00_00.nc DELETED
@@ -1,3 +0,0 @@
1
- version https://git-lfs.github.com/spec/v1
2
- oid sha256:9e47d6b304112845ae596cd5041f496548808100fd46f400ce46e14713fd13de
3
- size 841384
 
 
 
 
long_term_precipitation_forecast/test_data/daily_precip/2019/precip_20190106_00_00.nc DELETED
@@ -1,3 +0,0 @@
1
- version https://git-lfs.github.com/spec/v1
2
- oid sha256:5c71e2a3430da20162ac3f78fa1739160b11b0338150ddba6630b9e0b961c9f5
3
- size 841384
 
 
 
 
long_term_precipitation_forecast/test_data/daily_precip/2019/precip_20190107_00_00.nc DELETED
@@ -1,3 +0,0 @@
1
- version https://git-lfs.github.com/spec/v1
2
- oid sha256:6fc71735f1f95e4b97e8fb53c60390b23b35c714a9fa490716c97b039d0e0fec
3
- size 841384
 
 
 
 
long_term_precipitation_forecast/test_data/daily_precip/2019/precip_20190108_00_00.nc DELETED
@@ -1,3 +0,0 @@
1
- version https://git-lfs.github.com/spec/v1
2
- oid sha256:e68e6d86d23488a0a4d1087117407fce1d9272eec1b160a39a46b30e904d11eb
3
- size 841384
 
 
 
 
long_term_precipitation_forecast/test_data/daily_precip/2019/precip_20190109_00_00.nc DELETED
@@ -1,3 +0,0 @@
1
- version https://git-lfs.github.com/spec/v1
2
- oid sha256:b5d28117d865c6884fcb16bd2bb2f83ff7406b3c4da311a741a1c0c9b9351f68
3
- size 841384
 
 
 
 
long_term_precipitation_forecast/test_data/daily_precip/2019/precip_20190110_00_00.nc DELETED
@@ -1,3 +0,0 @@
1
- version https://git-lfs.github.com/spec/v1
2
- oid sha256:bafdc5ce330ceb0d911b32db971a0f7a650cc3bafb501ba6ae02966cb9ceb030
3
- size 841384
 
 
 
 
long_term_precipitation_forecast/test_data/daily_precip/2019/precip_20190111_00_00.nc DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:4567f5551d7af38642d95d353cd041a12b20bd9d56e0c1d3de718a3b2a932b81
- size 841384

long_term_precipitation_forecast/test_data/daily_precip/2019/precip_20190112_00_00.nc DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:d6d197396c54ffc0fe84bf9706094ce09bcb628a5dc9df500588462febc72afe
- size 841384

long_term_precipitation_forecast/test_data/daily_precip/2019/precip_20190113_00_00.nc DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:15bb0d3088fdbf99071453a10ca0dff66f4cfcb4f00a1a4f49f08d219f959959
- size 841384

long_term_precipitation_forecast/test_data/daily_precip/2019/precip_20190114_00_00.nc DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:a4d95770dbbd1f3cca38a91d119d486a6f5a25b1b7386badfbb608d35057f2d4
- size 841384

long_term_precipitation_forecast/test_data/daily_precip/2019/precip_20190115_00_00.nc DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:b18558a36a4fbda4359a9c91a3553c5971140831bb9b6508c46342441e634cdf
- size 841384

long_term_precipitation_forecast/test_data/daily_precip/2019/precip_20190116_00_00.nc DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:8add46a97d7bc3a85988112123ce6a285afb2c976be67d9a623a9869e4a9aead
- size 841384

long_term_precipitation_forecast/test_data/daily_precip/2019/precip_20190117_00_00.nc DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:fc71070214e6ebb7e093c4778e4fd8f336e0095d74959d816df347b88353bd21
- size 841384

long_term_precipitation_forecast/test_data/daily_precip/2019/precip_20190118_00_00.nc DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:a1e870fb63fcb9dbb3519bd379f1e3cb5533d3d8f785896abc574b5e4c9c99a9
- size 841384

long_term_precipitation_forecast/test_data/daily_precip/2019/precip_20190119_00_00.nc DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:75418179bcdd1f1c9f96faa95fa5cf53378de80f0fa967c218ea3cc5ad4b1853
- size 841384

long_term_precipitation_forecast/test_data/daily_precip/2019/precip_20190120_00_00.nc DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:522e5922434eef128b0184bfebefdc8145d4c40606176e690f6868ddbdc31e0a
- size 841384

long_term_precipitation_forecast/test_data/daily_precip/2019/precip_20190121_00_00.nc DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:5fff0795ca8cb6f4f404d42a196d403920e472316fc50b729c6004347242c263
- size 841384

long_term_precipitation_forecast/test_data/daily_precip/2019/precip_20190122_00_00.nc DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:88da0d37a3307713469f2b019704fdc4db80add6e5d39fc61e0625a500d0d239
- size 841384

long_term_precipitation_forecast/test_data/daily_precip/2019/precip_20190123_00_00.nc DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:817878342ec6c941356e7880ccf6303bd81d001555a46a0a80b463f2a80c2f70
- size 841384

long_term_precipitation_forecast/test_data/daily_precip/2019/precip_20190124_00_00.nc DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:6a77eacbd9d9ca012e94c6cebf7c20ed49b37957755dd16c723e6d65c288e2be
- size 841384

long_term_precipitation_forecast/test_data/daily_precip/2019/precip_20190125_00_00.nc DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:7d6c72c75d6f95a0a99ff252ebf8c35f2f27ffcb3301fd2e9c05186222535de4
- size 841384

long_term_precipitation_forecast/test_data/daily_precip/2019/precip_20190126_00_00.nc DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:e75a099754e25d92f811f1c1e39efb2fefc2106388ed4447d67fcba20dbc2131
- size 841384

long_term_precipitation_forecast/test_data/daily_precip/2019/precip_20190127_00_00.nc DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:75f1efa96cff33cd2c42f32a67d064d1c6957def593992ae6297e1880c1db11e
- size 841384

long_term_precipitation_forecast/test_data/daily_precip/2019/precip_20190128_00_00.nc DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:ad0d189f1c0de348d50c869c2b0f5d2f91addc546c27eb8ad8504a2d7c4340e2
- size 841384

long_term_precipitation_forecast/test_data/daily_precip/2019/precip_20190129_00_00.nc DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:b0ad5754a6c8e834e396090393ef7c53c7508fcd2e90164b8faebcd28a0568ff
- size 841384

long_term_precipitation_forecast/test_data/daily_precip/2019/precip_20190130_00_00.nc DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:04d50f2fe511d2f3ed2a8ece584c14c8ad98c284f79af4be318043123e0926e2
- size 841384

long_term_precipitation_forecast/test_data/daily_precip/2019/precip_20190131_00_00.nc DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:6535553c3342d7399b3bdc1f7fcd6bb4b860518433f162cde45c2ba0d5c88ac4
- size 841384

long_term_precipitation_forecast/test_data/daily_precip/2019/precip_20190201_00_00.nc DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:b0c4243ac4f930aac9c4ccdc4dda9c3ae82ae6c82501372cb85ce2aa5cd0d0d3
- size 841384

long_term_precipitation_forecast/test_data/daily_precip/2019/precip_20190202_00_00.nc DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:ee23a1079a38ccae3675b1efb651a5ca3f14787e065ecb283f7c739f6962ea68
- size 841384

long_term_precipitation_forecast/test_data/daily_precip/2019/precip_20190203_00_00.nc DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:d15d669c4040942ea63b2921aaa5aef6ec403a9c681469aaac099618513e3058
- size 841384