Update README.md
README.md
## Summary

SHP is a dataset of **385K aggregate human preferences** over Reddit comments in 18 different subject areas, from cooking to legal advice.
It is primarily intended to be used for training reward models for RLHF and automatic evaluation models for NLG.

Each example is a Reddit post and a pair of top-level comments for that post, where one comment is more preferred by Reddit users (in aggregate).
SHP exploits the fact that if comment A was written *after* comment B but has a higher score nonetheless, then A is definitively more preferred to B.
If A had been written before B, then we could not conclude this, since its higher score could have been the result of more visibility from being written first.
How is SHP different from [Anthropic's HH-RLHF dataset](https://huggingface.co/datasets/Anthropic/hh-rlhf)?

| Dataset           | Input                     | Output                      | No. Domains             | Data Format                          |
| ----------------- | ------------------------- | --------------------------- | ----------------------- | ------------------------------------ |
| SHP               | Reddit post and comments  | Aggregate Preference Label  | 18 (cooking, cars, ...) | Question/Answer + Assertion/Response |
| Anthropic/HH-RLHF | Dialogue history with LLM | Individual Preference Label | 2 (harmful, helpful)    | Multi-turn Dialogue                  |
## Data Structure

Here's an example from the `askculinary` training data:

```
{
    `post_id`: "qt3nxl",
    `domain`: "askculinary_train",
    `upvote_ratio`: 0.98,
    `history`: "What's the best way to disassemble raspberries? Like this, but down to the individual seeds: https:\/\/i.imgur.com\/Z0c6ZKE.jpg I've been pulling them apart with tweezers and it's really time consuming. I have about 10 pounds to get through this weekend.",
    `c_root_id_A`: "hkh25sc",
    `c_root_id_B`: "hkh25lp",
    `created_at_utc_A`: 1636822112,
    `created_at_utc_B`: 1636822110,
    `score_A`: 340,
    `score_B`: 166,
    `human_ref_A`: "Pectinex, perhaps? It's an enzyme that breaks down cellulose. With citrus, you let it sit in a dilute solution of pectinex overnight to break down the connective tissues. You end up with perfect citrus supremes. If you let the raspberries sit for a shorter time, I wonder if it would separate the seeds the same way...? Here's an example: https:\/\/www.chefsteps.com\/activities\/perfect-citrus-supreme",
    `human_ref_B`: "Raspberry juice will make a bright stain at first, but in a matter of weeks it will start to fade away to almost nothing. It is what is known in the natural dye world as a fugitive dye, it will fade even without washing or exposure to light. I hope she gets lots of nice photos of these stains on her dress, because soon that will be all she has left of them!",
    `labels`: 1,
    `seconds_difference`: 2.0,
    `score_ratio`: 2.0481927711
}
```
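Assuming the field semantics above (a `labels` value of 1 meaning `human_ref_A` is the preferred comment), a minimal sketch of turning one example into a (preferred, dispreferred) pair for reward-model training might look like this. The `to_preference_pair` helper is hypothetical, not part of the dataset, and the comment strings are truncated for brevity:

```python
# Sketch: convert one SHP example into a (preferred, dispreferred) pair.
# `to_preference_pair` is a hypothetical helper, not part of the dataset;
# it assumes `labels` == 1 means `human_ref_A` is the preferred comment.
example = {
    "post_id": "qt3nxl",
    "score_A": 340,
    "score_B": 166,
    "labels": 1,
    "human_ref_A": "Pectinex, perhaps? It's an enzyme that breaks down cellulose.",  # truncated
    "human_ref_B": "Raspberry juice will make a bright stain at first...",  # truncated
}

def to_preference_pair(ex):
    """Return (preferred, dispreferred) comment texts for one example."""
    if ex["labels"] == 1:
        return ex["human_ref_A"], ex["human_ref_B"]
    return ex["human_ref_B"], ex["human_ref_A"]

preferred, dispreferred = to_preference_pair(example)

# The `score_ratio` field is the higher score over the lower one:
score_ratio = round(example["score_A"] / example["score_B"], 10)  # 2.0481927711
```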
where the fields are:

- ```score_ratio```: the ratio score_A:score_B (will be >= 2) (float)
## Dataset Design

The data is sourced from Reddit, which is a public forum organized into topic-specific fora called *subreddits*.
For example, the `askculinary` subreddit is where users ask cooking-related questions and are answered by other users.
### Subreddit Selection

This may be due to the aggregate human preferences in SHP being more stable and easier to predict than the individual human preferences in the Anthropic data, as well as our strict data filtering described below.

SHP contains a train, validation, and test split for comments scraped from 18 different subreddits:
`askculinary`, `askhr`, `askdocs`, `askanthropology`, `asksciencefiction`, `askacademia`, `askengineers`, `legaladvice`, `explainlikeimfive`, `askbaking`, `askphysics`, `askscience`, `askphilosophy`, `askvet`, `changemyview`, `askcarguys`, `askhistorians`, `asksocialscience`.

We chose subreddits based on:
1. whether they were well-known (subscriber count >= 50K)
2. whether they were actively moderated
3. whether comments had to be rooted in some objectivity, instead of being entirely about personal experiences (e.g., `askscience` vs. `AskAmericans`)
88 |
+
|
89 |
+
The train/validation/test splits were created by splitting the post IDs of a subreddit in 90%/5%/5% proportions respectively, so that no post would appear in multiple splits.
|
90 |
+
Since different posts have different numbers of comments, the number of preferences in each split is not exactly 90%/5%/5%:
|
91 |
+
|
92 |
+
| subreddit | train | validation | test | total |
|
93 |
+
| ------------------ | -------: | ---------: | ---: | ----: |
|
94 |
+
| askhistorians | 3264 | 113 | 164 | 3541 |
|
95 |
+
| askvet | 3300 | 170 | 224 | 3694 |
|
96 |
+
| askscience | 13316 | 899 | 977 | 15192 |
|
97 |
+
| askphysics | 7364 | 409 | 587 | 8360 |
|
98 |
+
|
99 |
+
|
100 |
+
|
101 |
+
of
|
102 |
+
The input in SHP contains more [FLANT5-usable information](https://icml.cc/virtual/2022/oral/16634) about the preference label than in
|
Specifically, given a post P and two comments (A, B), we only included the preference A > B in the dataset if:

1. A was written *no earlier than* B.
2. Despite being written later, A has a score that is at least 2 times as high as B's.
3. Both comments have a score >= 2 and the post has a score >= 10.
4. The post is a self-post (i.e., a body of text and not a link to another page) made before 2023, was not edited, and is not NSFW (over 18).
5. Neither comment was made by a deleted user, a moderator, or the post creator. The post was not made by a deleted user or moderator.

Since comments made earlier get more visibility, the first condition is needed to ensure that A's higher score is not the result of a first-mover advantage.
Since the comment score is also a noisy estimate of the comment's utility, the second and third conditions were enforced to ensure that the preference is genuine.
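The numeric filtering conditions can be sketched as a single predicate. This is one illustrative reading, not the authors' code; the timestamp check follows the worked `askculinary` example, where the preferred comment A was written 2 seconds *after* B:

```python
# Illustrative predicate for filtering conditions 1-3 (not the authors' code).
# Timestamps are Unix seconds, as in the dataset example.
def keep_preference(t_A, t_B, score_A, score_B, post_score):
    """True iff the pair (A preferred to B) passes conditions 1-3."""
    no_first_mover = t_A >= t_B             # 1. A written no earlier than B
    strong_margin = score_A >= 2 * score_B  # 2. score ratio of at least 2
    min_support = score_A >= 2 and score_B >= 2 and post_score >= 10  # 3.
    return no_first_mover and strong_margin and min_support

# The askculinary example above passes (assuming a post score of at least 10):
ok = keep_preference(1636822112, 1636822110, 340, 166, post_score=10)
```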
## Files

## Disclaimer

Although we filtered out posts with NSFW (over 18) content, some of the data may contain discriminatory or harmful language.