Datasets:

Modalities: Text
Formats: csv
ArXiv: arxiv:2004.12765
Libraries: Datasets, pandas
License: cc-by-2.0

liyucheng committed 2bb7d6b (1 parent: 127bea2): Update README.md

Files changed (1): README.md (+6 -6)
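The card lists csv as the data format and Datasets and pandas as supported libraries, so a minimal loading sketch is shown here; the repository id, split name, and column names are assumptions for illustration and are not taken from this page.

```python
# Hypothetical loading sketch for the ColBERT Humor dataset described in the card below.
# The repository id, split name, and column names are assumptions; check the card
# for the actual identifiers.
from datasets import load_dataset

ds = load_dataset("liyucheng/ColBERT_humor")  # hypothetical repo id
train = ds["train"]                           # assumed split name

# The card also lists pandas, so the split can be inspected as a DataFrame.
df = train.to_pandas()
print(df.head())   # expect a short text field and a binary humor label, per the summary
print(len(df))     # the card reports roughly 200k labeled examples in total
```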
README.md CHANGED
@@ -8,18 +8,18 @@ license: cc-by-2.0
 
 - **Paper:** [Colbert: Using bert sentence embedding for humor detection](https://arxiv.org/abs/2004.12765)
 
-### Dataset Summary
+## Dataset Summary
 
-Creative Language Toolkit ([CLTK](https://github.com/liyucheng09/cltk)) Metadata
+ColBERT Humor contains 200,000 labeled short texts, equally distributed between humorous and non-humorous content. The dataset was created to overcome the limitations of prior humor detection datasets, which were characterized by inconsistencies in text length, word count, and formality, making them easy to predict with simple models without truly understanding the nuances of humor. The two sources for this dataset are the News Category dataset, featuring 200k news headlines from the Huffington Post (2012-2018), and a collection of 231,657 Reddit jokes. The texts have been rigorously preprocessed to ensure syntactic similarity, requiring models to delve into the linguistic intricacies to distinguish humor, effectively providing a more complex and substantial platform for humor detection research.
+
+For the details of this dataset, we refer you to the original [paper](https://arxiv.org/abs/2004.12765).
+
+Metadata in Creative Language Toolkit ([CLTK](https://github.com/liyucheng09/cltk))
 - CL Type: Humor
 - Task Type: detection
 - Size: 200k
 - Created time: 2020
 
-ColBERT Humor contains 200,000 labeled short texts, equally distributed between humorous and non-humorous content. The dataset was created to overcome the limitations of prior humor detection datasets, which were characterized by inconsistencies in text length, word count, and formality, making them easy to predict with simple models without truly understanding the nuances of humor. The two sources for this dataset are the News Category dataset, featuring 200k news headlines from the Huffington Post (2012-2018), and a collection of 231,657 Reddit jokes. The texts have been rigorously preprocessed to ensure syntactic similarity, requiring models to delve into the linguistic intricacies to distinguish humor, effectively providing a more complex and substantial platform for humor detection research.
-
-For the details of this dataset, we refer you to the original [paper](https://arxiv.org/abs/2004.12765).
-
 ### Citation Information
 
 If you find this dataset helpful, please cite: