arxiv:2406.19380

TabReD: A Benchmark of Tabular Machine Learning in-the-Wild

Published on Jun 27
· Submitted by puhsu on Jul 4
#2 Paper of the day

Abstract

Benchmarks that closely reflect downstream application scenarios are essential for the streamlined adoption of new research in tabular machine learning (ML). In this work, we examine existing tabular benchmarks and find two common characteristics of industry-grade tabular data that are underrepresented in the datasets available to the academic community. First, tabular data often changes over time in real-world deployment scenarios. This impacts model performance and requires time-based train and test splits for correct model evaluation. Yet, existing academic tabular datasets often lack timestamp metadata to enable such evaluation. Second, a considerable portion of datasets in production settings stems from extensive data acquisition and feature engineering pipelines. For each specific dataset, this can have a different impact on the absolute and relative number of predictive, uninformative, and correlated features, which in turn can affect model selection. To fill the aforementioned gaps in academic benchmarks, we introduce TabReD -- a collection of eight industry-grade tabular datasets covering a wide range of domains from finance to food delivery services. We assess a large number of tabular ML models in the feature-rich, temporally evolving data setting facilitated by TabReD. We demonstrate that evaluation on time-based data splits leads to a different ranking of methods compared to evaluation on the random splits more common in academic benchmarks. Furthermore, on the TabReD datasets, MLP-like architectures and GBDT show the best results, while more sophisticated DL models are yet to prove their effectiveness.
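
To make the evaluation point above concrete, here is a minimal, illustrative sketch of the two split strategies the abstract contrasts. The dataframe, column names, and 80/20 ratio are hypothetical stand-ins, not details taken from the paper.

```python
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split

# Hypothetical tabular data with a timestamp column, standing in for a TabReD task.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "timestamp": pd.date_range("2023-01-01", periods=1_000, freq="h"),
    "feature": rng.normal(size=1_000),
    "target": rng.integers(0, 2, size=1_000),
})

# Random split, common in academic benchmarks: train and test are drawn
# from the same time period, so temporal shift stays hidden.
train_rand, test_rand = train_test_split(df, test_size=0.2, random_state=0)

# Time-based split, closer to deployment: train on the past, test on the future.
df = df.sort_values("timestamp")
cutoff = int(len(df) * 0.8)
train_time, test_time = df.iloc[:cutoff], df.iloc[cutoff:]
```

Under temporal shift, a model's score on `test_rand` can look noticeably better than its score on `test_time`, which is why the choice of split can reorder method rankings.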

Community

Paper author Paper submitter

Tabular tasks encountered in industrial ML settings are currently underrepresented in existing academic benchmarks. Datasets from Kaggle or industry applications are less common than standard UCI and OpenML datasets, like covertype or adult.

We aim to narrow the gap between industry tabular ML and academic tabular ML by introducing TabReD.

Paper author Paper submitter

The code and datasets are available at: https://github.com/yandex-research/tabred

Hi @puhsu, congrats on this work and thanks for submitting.

Would you be up for making the datasets available on the hub? Here's a guide on how to do that: https://huggingface.co/docs/datasets/loading. It can then be linked to this paper page as shown here: https://huggingface.co/docs/hub/en/datasets-cards#linking-a-paper

Paper author

Hi @nielsr thanks!

I'm not sure, really. We already host the datasets on Kaggle (as of now, some of them require a Kaggle account to use), and all the preprocessing scripts load data from the Kaggle datasets. I'm hesitant to have multiple sources for the data, as it might be confusing and harder to update or push fixes when the datasets live in multiple places.

But I'm open to the idea if it offers some additional benefits; I'm just not deeply familiar with the HF datasets ecosystem (e.g., is there support for tabular datasets, especially larger ones?). I was also thinking of potentially adding a leaderboard of some kind for the benchmark.
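
For reference, here is a minimal sketch of what hosting on the Hub could look like with the `datasets` library. The CSV file name and repo id below are hypothetical placeholders, not actual TabReD artifacts; larger tables can be consumed incrementally via `streaming=True`.

```python
import pandas as pd
from datasets import Dataset, load_dataset

# Hypothetical file and repo id, used purely for illustration.
df = pd.read_csv("tabred_example.csv")
Dataset.from_pandas(df).push_to_hub("yandex-research/tabred-example")

# Consumers could then load the data without a Kaggle account ...
ds = load_dataset("yandex-research/tabred-example", split="train")

# ... and stream larger datasets instead of downloading them in full.
ds_stream = load_dataset("yandex-research/tabred-example", split="train", streaming=True)
```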

