---
language:
- en
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
task_categories:
- feature-extraction
- sentence-similarity
pretty_name: Flickr30k Captions
tags:
- sentence-transformers
dataset_info:
  config_name: pair
  features:
  - name: caption1
    dtype: string
  - name: caption2
    dtype: string
  splits:
  - name: train
    num_bytes: 21319922
    num_examples: 158881
  download_size: 11450890
  dataset_size: 21319922
configs:
- config_name: pair
  data_files:
  - split: train
    path: pair/train-*
---
# Dataset Card for Flickr30k Captions
This dataset is a collection of pairs of captions written for the same image, collected from Flickr30k. See Flickr30k for additional information. This dataset can be used directly with Sentence Transformers to train embedding models.

Note that two captions for the same image do not necessarily share the same semantic meaning.
## Dataset Subsets
### pair subset
- Columns: "caption1", "caption2"
- Column types: `str`, `str`
- Examples:
  `{'caption1': 'A large structure has broken and is laying in a roadway.', 'caption2': 'A man stands on wooden supports and surveys damage.'}`
- Collection strategy: Reading the Flickr30k Captions dataset from embedding-training-data, which has lists of duplicate captions for each image. Every pair of adjacent captions is treated as a positive pair, plus the pair formed by the last and first captions; so, e.g., 5 captions for one image yield 5 positive pairs.
- Deduplicated: No
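The collection strategy above can be sketched as follows; `make_pairs` is a hypothetical helper, not part of the released pipeline, shown only to illustrate the adjacent-plus-wraparound pairing:

```python
def make_pairs(captions):
    """Pair each caption with the next one, then close the loop by
    pairing the last caption with the first, so n captions -> n pairs."""
    pairs = [(captions[i], captions[i + 1]) for i in range(len(captions) - 1)]
    pairs.append((captions[-1], captions[0]))
    return pairs


# 5 duplicate captions for one image produce 5 positive pairs.
example = make_pairs(["a", "b", "c", "d", "e"])
```

With 5 captions this yields the 4 adjacent pairs plus the wraparound pair, matching the 5-captions-to-5-pairs example in the card.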