clefourrier (HF staff) committed
Commit 8d442c9
Parent: 6644bd3

Update README.md

Files changed (1): README.md (+64, -4)
README.md CHANGED
@@ -44,6 +44,9 @@ dataset_info:
   - name: total_awards_received
     dtype: string
   splits:
+  - name: programming
+    num_bytes: 3466623746
+    num_examples: 7503347
   - name: tifu
     num_bytes: 4761338653
     num_examples: 12738669
@@ -188,12 +191,69 @@ dataset_info:
   - name: gardening
     num_bytes: 1825313940
     num_examples: 4568468
-  - name: programming
-    num_bytes: 3466623746
-    num_examples: 7503347
   download_size: 105790281180
   dataset_size: 246259893952
+annotations_creators:
+- no-annotation
+language:
+- en
+language_creators:
+- machine-generated
+license: []
+multilinguality:
+- monolingual
+pretty_name: Reddit comments
+size_categories:
+- 10B<n<100B
+source_datasets: []
+tags:
+- reddit
+- social-media
+task_categories:
+- text-generation
+task_ids:
+- dialogue-modeling
+- language-modeling
 ---
+
 # Dataset Card for "REDDIT_comments"
+## Dataset Description
+
+- **Homepage:**
+- **Paper:** https://arxiv.org/abs/2001.08435
+
+### Dataset Summary
+Comments from 50 high-quality subreddits, extracted from the Reddit PushShift data dumps (2006 to January 2023).
+
+### Supported Tasks
+These comments can be used for text generation and language modeling, as well as dialogue modeling.
+
+## Dataset Structure
+### Data Splits
+Each split corresponds to one subreddit from the following list: "tifu", "explainlikeimfive", "WritingPrompts", "changemyview", "LifeProTips", "todayilearned", "science", "askscience", "ifyoulikeblank", "Foodforthought", "IWantToLearn", "bestof", "IAmA", "socialskills", "relationship_advice", "philosophy", "YouShouldKnow", "history", "books", "Showerthoughts", "personalfinance", "buildapc", "EatCheapAndHealthy", "boardgames", "malefashionadvice", "femalefashionadvice", "scifi", "Fantasy", "Games", "bodyweightfitness", "SkincareAddiction", "podcasts", "suggestmeabook", "AskHistorians", "gaming", "DIY", "mildlyinteresting", "sports", "space", "gadgets", "Documentaries", "GetMotivated", "UpliftingNews", "technology", "Fitness", "travel", "lifehacks", "Damnthatsinteresting", "gardening", "programming"
+
+## Dataset Creation
+
+### Curation Rationale
+All information fields have been cast to string, as their format changes over time from one dump to the next. Only a reduced set of keys has been kept: "archived", "author", "author_fullname", "body", "comment_type", "controversiality", "created_utc", "edited", "gilded", "id", "link_id", "locked", "name", "parent_id", "permalink", "retrieved_on", "score", "subreddit", "subreddit_id", "subreddit_name_prefixed", "subreddit_type", "total_awards_received".
+
+### Source Data
+The [Reddit PushShift data dumps](https://files.pushshift.io/reddit/) are part of a data collection effort that crawls Reddit at regular intervals to extract and preserve all of its data.
+
+#### Initial Data Collection and Normalization
+See the paper.
+
+#### Who are the source language producers?
+Redditors are mostly young (65% under 30), male (70%), and American (50% of the site).
+
+### Personal and Sensitive Information
+The data contains Redditors' usernames associated with their content.
+
+## Considerations for Using the Data
+
+This dataset should be anonymized before any processing.
+Though the selected subreddits are considered to be of higher quality, their content can still reflect the biases and toxicity found across the internet.
+
+### Contributions
 
-[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+Thanks to [@clefourrier](https://github.com/clefourrier) for adding this dataset.
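
The metadata added in this commit can be sanity-checked locally: the card claims 50 subreddit splits, and each split's `num_bytes` / `num_examples` pair implies an average comment size. The sketch below verifies both from the numbers in this diff; the commented-out `load_dataset` call is a hypothetical usage example (the repository id is a placeholder, and streaming is assumed to avoid the ~105 GB download).

```python
# Sanity checks on the dataset card metadata added in this commit.
splits = [
    "tifu", "explainlikeimfive", "WritingPrompts", "changemyview",
    "LifeProTips", "todayilearned", "science", "askscience",
    "ifyoulikeblank", "Foodforthought", "IWantToLearn", "bestof",
    "IAmA", "socialskills", "relationship_advice", "philosophy",
    "YouShouldKnow", "history", "books", "Showerthoughts",
    "personalfinance", "buildapc", "EatCheapAndHealthy", "boardgames",
    "malefashionadvice", "femalefashionadvice", "scifi", "Fantasy",
    "Games", "bodyweightfitness", "SkincareAddiction", "podcasts",
    "suggestmeabook", "AskHistorians", "gaming", "DIY",
    "mildlyinteresting", "sports", "space", "gadgets", "Documentaries",
    "GetMotivated", "UpliftingNews", "technology", "Fitness", "travel",
    "lifehacks", "Damnthatsinteresting", "gardening", "programming",
]
assert len(splits) == 50  # matches "50 high-quality subreddits"

# Average comment size in bytes, from the num_bytes / num_examples
# figures that appear in this diff.
avg_programming = 3466623746 / 7503347   # roughly 462 bytes per comment
avg_tifu = 4761338653 / 12738669         # roughly 374 bytes per comment

# Hypothetical usage (placeholder repo id; streaming assumed so the
# full ~105 GB dump is not downloaded):
# from datasets import load_dataset
# ds = load_dataset("<namespace>/REDDIT_comments",
#                   split="programming", streaming=True)
```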