---
license: cc-by-2.0
---

# ColBERT_Humor

## Dataset Description

- **Paper:** [ColBERT: Using BERT sentence embedding for humor detection](https://arxiv.org/abs/2004.12765)

### Dataset Summary

Creative Language Toolkit (CLTK) Metadata
- CL Type: Humor
- Task Type: detection
- Size: 200k
- Created: 2020

ColBERT Humor contains 200,000 labeled short texts, equally split between humorous and non-humorous content. The dataset was created to overcome the limitations of prior humor detection datasets, whose inconsistencies in text length, word count, and formality made them easy to predict with simple models that never truly captured the nuances of humor. Its two sources are the News Category dataset, featuring 200k news headlines from the Huffington Post (2012-2018), and a collection of 231,657 Reddit jokes. The texts have been rigorously preprocessed to be syntactically similar, so models must engage with linguistic intricacies to distinguish humor, providing a more challenging benchmark for humor detection research.

For further details on this dataset, please refer to the original paper.
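Since the card lists csv and pandas, a minimal sketch of working with the data in pandas follows. The column names (`text`, `humor`) and the inline stand-in rows are assumptions for illustration, not confirmed by the card; swap in `pd.read_csv(...)` on the actual file to load the real data.

```python
import pandas as pd

# Hypothetical: real usage would be something like
#   df = pd.read_csv("path/to/colbert_humor.csv")
# Here we build a tiny stand-in frame with the assumed schema instead.
df = pd.DataFrame({
    "text": [
        "Why did the chicken cross the road? To get to the other side.",
        "Stock markets closed lower on Friday amid rate concerns.",
    ],
    "humor": [True, False],
})

# The dataset is described as equally split between humorous and
# non-humorous texts; a quick class-balance check:
counts = df["humor"].value_counts()
print(counts.to_dict())
```

On the full 200k-row dataset, the same `value_counts()` check should show the 100k/100k balance described above.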

### Citation Information

If you find this dataset helpful, please cite:

```
@article{annamoradnejad2020colbert,
  title={Colbert: Using bert sentence embedding for humor detection},
  author={Annamoradnejad, Issa and Zoghi, Gohar},
  journal={arXiv preprint arXiv:2004.12765},
  year={2020}
}
```

### Contributions

If you have any queries, please open an issue or direct them to [mail](mailto:[email protected]).