Update README.md
README.md CHANGED
@@ -1,4 +1,16 @@
 ---
+language:
+- en
+multilinguality:
+- monolingual
+size_categories:
+- 100K<n<1M
+task_categories:
+- feature-extraction
+- sentence-similarity
+pretty_name: Flickr30k Captions
+tags:
+- sentence-transformers
 dataset_info:
   config_name: pair
   features:
@@ -18,3 +30,26 @@ configs:
   - split: train
     path: pair/train-*
 ---
+
+# Dataset Card for Flickr30k Captions
+
+This dataset is a collection of pairs of captions written for the same image, collected from Flickr30k. See [Flickr30k](https://shannon.cs.illinois.edu/DenotationGraph/) for additional information.
+This dataset can be used directly with Sentence Transformers to train embedding models.
+
+Note that two captions for the same image do not necessarily have the same semantic meaning.
+
+## Dataset Subsets
+
+### `pair` subset
+
+* Columns: "caption1", "caption2"
+* Column types: `str`, `str`
+* Examples:
+  ```python
+  {
+    'caption1': 'A large structure has broken and is laying in a roadway.',
+    'caption2': 'A man stands on wooden supports and surveys damage.',
+  }
+  ```
+* Collection strategy: Reading the Flickr30k Captions dataset from [embedding-training-data](https://huggingface.co/datasets/sentence-transformers/embedding-training-data).
+* Deduplicated: No