Modalities: Text
Formats: parquet
Languages: English
Size: < 1K
Libraries: Datasets, Dask
SivilTaram committed on
Commit
d5cf816
1 Parent(s): f603cad

Update README.md

Files changed (1)
  1. README.md +16 -2
README.md CHANGED
@@ -4,6 +4,20 @@ language:
 - en
 ---
 
-
 This repository contains a total of 483 tabular datasets with meaningful column names collected from OpenML, UCI, and Kaggle platforms. The last column of each dataset is the label column. For more details, please refer to our paper https://arxiv.org/abs/2305.09696.
-You can use the [code](https://github.com/ZhangTP1996/TapTap/blob/master/load_pretraining_datasets.py) to load all the datasets into a dictionary of pd.DataFrame.
+You can use the [code](https://github.com/ZhangTP1996/TapTap/blob/master/load_pretraining_datasets.py) to load all the datasets into a dictionary of pd.DataFrame.
+
+An example script can be found below:
+
+```python
+from datasets import load_dataset
+import pandas as pd
+import numpy as np
+
+data = {}
+dataset = load_dataset(path='ztphs980/taptap_datasets')
+dataset = dataset['train'].to_dict()
+for table_name, table in zip(dataset['dataset_name'], dataset['table']):
+    table = pd.DataFrame.from_dict(eval(table, {'nan': np.nan}))
+    data[table_name] = table
+```
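
Since the card notes that the last column of every table is the label, the sketch below shows one way to split a loaded table into features and label after running the script above; the helper name and the example table key are illustrative assumptions, not part of the commit.

```python
# Illustrative sketch (not part of the commit): split one loaded table into
# features X and label y, using the convention that the last column is the label.
import pandas as pd

def split_features_and_label(table: pd.DataFrame):
    """Return (X, y), where y is the last column of the table (the label)."""
    X = table.iloc[:, :-1]  # every column except the last one
    y = table.iloc[:, -1]   # last column holds the label, per the dataset card
    return X, y

# Example usage, assuming `data` was built by the loading script above
# ('some_table_name' is a placeholder for any key in the dictionary):
# X, y = split_features_and_label(data['some_table_name'])
```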