---
language:
- en
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
task_categories:
- feature-extraction
- sentence-similarity
pretty_name: Coco Captions
tags:
- sentence-transformers
dataset_info:
  config_name: pair
  features:
  - name: caption1
    dtype: string
  - name: caption2
    dtype: string
  splits:
  - name: train
    num_bytes: 46793540
    num_examples: 414010
  download_size: 23935511
  dataset_size: 46793540
configs:
- config_name: pair
  data_files:
  - split: train
    path: pair/train-*
---

# Dataset Card for Coco Captions

This dataset is a collection of caption pairs written for the same image, collected from the Coco dataset. See [Coco](https://cocodataset.org/) for additional information.
It can be used directly with Sentence Transformers to train embedding models.

Note that two captions for the same image do not necessarily have the same semantic meaning.
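
As a minimal sketch (not part of the original card), the `pair` subset can be plugged into the Sentence Transformers v3 training API with an in-batch negatives loss. The repository ID, base model, and settings below are illustrative assumptions, not recommendations from this card:

```python
from datasets import load_dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer
from sentence_transformers.losses import MultipleNegativesRankingLoss

# Assumption: the dataset is hosted at "sentence-transformers/coco-captions";
# adjust the repository ID if it lives elsewhere.
train_dataset = load_dataset("sentence-transformers/coco-captions", "pair", split="train")

# Any Sentence Transformers-compatible base model works; "microsoft/mpnet-base"
# is only an illustrative choice.
model = SentenceTransformer("microsoft/mpnet-base")

# MultipleNegativesRankingLoss treats (caption1, caption2) as a positive pair and
# uses the other captions in the batch as in-batch negatives.
loss = MultipleNegativesRankingLoss(model)

trainer = SentenceTransformerTrainer(
    model=model,
    train_dataset=train_dataset,
    loss=loss,
)
trainer.train()
```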

## Dataset Subsets

### `pair` subset

* Columns: "caption1", "caption2"
* Column types: `str`, `str`
* Examples:
    ```python
    {
      'caption1': 'A clock that blends in with the wall hangs in a bathroom. ',
      'caption2': 'A very clean and well decorated empty bathroom',
    }
    ```
* Collection strategy: Reading the Coco Captions data from [embedding-training-data](https://huggingface.co/datasets/sentence-transformers/embedding-training-data), which contains lists of duplicate captions per image. Every pair of adjacent captions is treated as a positive pair, plus the pair formed by the last and first caption, so e.g. 5 captions for one image result in 5 positive pairs (see the sketch below this list).
* Deduplicated: No
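
A minimal sketch of that pairing strategy; the function name and structure are illustrative, not the exact script used to build the dataset:

```python
def captions_to_pairs(captions):
    """Pair each caption with the next one, and close the loop with (last, first)."""
    pairs = []
    for i in range(len(captions)):
        pairs.append({
            "caption1": captions[i],
            "caption2": captions[(i + 1) % len(captions)],  # wraps last -> first
        })
    return pairs

# Five captions for one image yield five positive pairs:
# (c1, c2), (c2, c3), (c3, c4), (c4, c5), (c5, c1)
assert len(captions_to_pairs(["c1", "c2", "c3", "c4", "c5"])) == 5
```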