---
annotations_creators: []
language: en
task_categories: []
task_ids: []
pretty_name: cvpr2024_papers
tags:
- fiftyone
- image
batch_size: 100
dataset_summary: '
This is a [FiftyOne](https://github.com/voxel51/fiftyone) dataset with 2379 samples.
## Installation
If you haven''t already, install FiftyOne:
```bash
pip install -U fiftyone
```
## Usage
```python
import fiftyone as fo
import fiftyone.utils.huggingface as fouh
# Load the dataset
# Note: other available arguments include ''split'', ''max_samples'', etc
dataset = fouh.load_from_hub("Voxel51/CVPR_2024_Papers")
# Launch the App
session = fo.launch_app(dataset)
```
'
---
# Dataset Card for cvpr2024_papers
<!-- Provide a quick summary of the dataset. -->
This is a [FiftyOne](https://github.com/voxel51/fiftyone) dataset with 2379 samples.
The dataset consists of images of the first page for accepted papers to CVPR 2024, plus their abstract and other metadata.
![image/png](cvpr_papers_dataset.png)
## Installation
If you haven't already, install FiftyOne:
```bash
pip install -U fiftyone
```
## Usage
```python
import fiftyone as fo
import fiftyone.utils.huggingface as fouh
# Load the dataset
# Note: other available arguments include 'split', 'max_samples', etc
dataset = fouh.load_from_hub("Voxel51/CVPR_2024_Papers")
# Launch the App
session = fo.launch_app(dataset)
```
## Dataset Details
This is a dataset of the accepted papers for CVPR 2024.
The 2024 Conference on Computer Vision and Pattern Recognition (CVPR) received 11,532 valid paper submissions,
and only 2,719 were accepted, for an overall acceptance rate of about 23.6%.
However, this dataset only contains 2,379 papers, because that is how many we were able to (easily) find on arXiv.
### Dataset Description
- **Curated by:** [Harpreet Sahota, Hacker-in-Residence at Voxel51](https://huggingface.co/harpreetsahota)
- **Language(s) (NLP):** en
- **License:** [CC-BY-ND-4.0](https://spdx.org/licenses/CC-BY-ND-4.0)
## Uses
You can use this dataset to explore research trends at this year's CVPR, for example by aggregating over categories and keywords, and much more!
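As a minimal sketch of such an exploration, assuming the dataset has been loaded as in the Usage section above (the field names follow the Dataset Structure section below):
```python
import fiftyone.utils.huggingface as fouh

# Load the dataset from the Hugging Face Hub
dataset = fouh.load_from_hub("Voxel51/CVPR_2024_Papers")

# Count papers per primary arXiv category to get a quick view of research trends
category_counts = dataset.count_values("category_name")
print(sorted(category_counts.items(), key=lambda kv: -kv[1])[:10])
```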
## Dataset Structure
Each sample in the dataset consists of the following (see the access sketch after this list):
- An image of the first page of the paper
- `title`: The title of the paper
- `authors_list`: The list of authors
- `abstract`: The abstract of the paper
- `arxiv_link`: Link to the paper on arXiv
- `other_link`: Link to the project page, if found
- `category_name`: The primary category of this paper, according to the [arXiv taxonomy](https://arxiv.org/category_taxonomy)
- `all_categories`: All categories this paper falls into, according to the arXiv taxonomy
- `keywords`: Keywords extracted from the abstract using GPT-4o
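A minimal sketch of accessing these fields on individual samples and filtering by them, assuming the dataset has been loaded as in the Usage section (`"cs.CV"` is just an example category value):
```python
from fiftyone import ViewField as F
import fiftyone.utils.huggingface as fouh

dataset = fouh.load_from_hub("Voxel51/CVPR_2024_Papers")

# Inspect the fields on a single sample
sample = dataset.first()
print(sample["title"])
print(sample["authors_list"])
print(sample["arxiv_link"])

# Filter to papers whose primary arXiv category is cs.CV (example value)
cv_view = dataset.match(F("category_name") == "cs.CV")
print(len(cv_view))
```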
## Dataset Creation
Generic code for building this dataset can be found [here](https://github.com/harpreetsahota204/CVPR-2024-Papers).
This dataset was built using the following steps (a rough sketch of the pipeline follows the list):
- Scrape the CVPR 2024 website for accepted papers
- Use DuckDuckGo to search for a link to the paper's abstract on arXiv
- Use arXiv.py (a Python wrapper for the arXiv API) to extract the abstract and categories, and download the PDF for each paper
- Use pdf2image to save an image of each paper's first page
- Use GPT-4o to extract keywords from the abstract
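A rough sketch of how steps like these could look for a single paper, using the `duckduckgo_search`, `arxiv`, and `pdf2image` packages. The example title, query, and arXiv ID are placeholders, and this is not the exact code used to build the dataset; see the linked repo for the actual scripts:
```python
import arxiv
from pdf2image import convert_from_path  # requires poppler to be installed
from duckduckgo_search import DDGS  # assumed package for the DuckDuckGo search step

title = "Example CVPR 2024 Paper Title"  # would come from scraping the CVPR site

# Search DuckDuckGo for the paper's abstract page on arXiv (hypothetical query)
results = DDGS().text(f"{title} site:arxiv.org", max_results=5)
arxiv_url = results[0]["href"] if results else None

# Fetch metadata and the PDF with arxiv.py; the ID would be parsed from arxiv_url
arxiv_id = "2403.00000"  # placeholder
paper = next(arxiv.Client().results(arxiv.Search(id_list=[arxiv_id])))
abstract = paper.summary
primary_category = paper.primary_category
all_categories = paper.categories
pdf_path = paper.download_pdf(filename=f"{arxiv_id}.pdf")

# Render the first page of the PDF to an image with pdf2image
first_page = convert_from_path(pdf_path, first_page=1, last_page=1)[0]
first_page.save(f"{arxiv_id}.png")

# Keywords were extracted from the abstract with GPT-4o (API call omitted here)
```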