nicholasKluge committed on
Commit ee4aedb · verified · 1 Parent(s): c84d093

Update README.md

Files changed (1): README.md (+12 −9)
README.md CHANGED
@@ -68,12 +68,12 @@ size_categories:
 
 - **Homepage:** https://huggingface.co/datasets/TucanoBR/GigaVerbo-Text-Filter
 - **Repository:** https://huggingface.co/datasets/TucanoBR/GigaVerbo-Text-Filter
-- **Paper:** [Tucano: Advancing Neural Text Generation for Portuguese](https://arxiv.org/abs/xxxx.xxxxx)
+- **Paper:** [Tucano: Advancing Neural Text Generation for Portuguese](https://arxiv.org/abs/2411.07854)
 - **Point of Contact:** [Nk-correa](mailto:[email protected])
 
 ### Dataset Summary
 
-GigaVerbo Text-Filter is a dataset with 110,000 randomly selected samples from 9 subsets of [GigaVerbo](https://huggingface.co/datasets/TucanoBR/GigaVerbo) (i.e., specifically those that were not synthetic). This dataset was used to train the text-quality filters described in "_[Tucano: Advancing Neural Text Generation for Portuguese](https://arxiv.org/abs/xxxx.xxxxx)_". To create the text embeddings, we used [sentence-transformers/LaBSE](https://huggingface.co/sentence-transformers/LaBSE). All scores were generated by GPT-4o.
+GigaVerbo Text-Filter is a dataset with 110,000 randomly selected samples from 9 subsets of [GigaVerbo](https://huggingface.co/datasets/TucanoBR/GigaVerbo) (i.e., specifically those that were not synthetic). This dataset was used to train the text-quality filters described in "_[Tucano: Advancing Neural Text Generation for Portuguese](https://arxiv.org/abs/2411.07854)_". To create the text embeddings, we used [sentence-transformers/LaBSE](https://huggingface.co/sentence-transformers/LaBSE). All scores were generated by GPT-4o.
 
 ### Supported Tasks and Leaderboards
 
@@ -81,7 +81,7 @@ This dataset can be utilized for tasks involving text classification/regression
 
 ### Languages
 
-Portuguese.
+Portuguese
 
 ## Dataset Structure
 
@@ -123,7 +123,7 @@ dataset = load_dataset("TucanoBR/GigaVerbo-Text-Filter", split='train', streamin
 
 ### Curation Rationale
 
-This dataset was developed as part of the study "[Tucano: Advancing Neural Text Generation for Portuguese](https://arxiv.org/abs/xxxx.xxxxx)". In short, GigaVerbo Text-Filter is a dataset with 110,000 randomly selected samples from 9 subsets of [GigaVerbo](https://huggingface.co/datasets/TucanoBR/GigaVerbo).
+This dataset was developed as part of the study "[Tucano: Advancing Neural Text Generation for Portuguese](https://arxiv.org/abs/2411.07854)". In short, GigaVerbo Text-Filter is a dataset with 110,000 randomly selected samples from 9 subsets of [GigaVerbo](https://huggingface.co/datasets/TucanoBR/GigaVerbo).
 
 ### Source Data
 
@@ -205,11 +205,14 @@ The following datasets and respective licenses from GigaVerbo (only training spl
 
 ```latex
 
-@misc{correa24tucano,
-title = {{Tucano: Advancing Neural Text Generation for Portuguese}},
-author = {Corr{\^e}a, Nicholas Kluge and Sen, Aniket and Falk, Sophia and Fatimah, Shiza},
-journal={arXiv preprint arXiv:xxxx.xxxxx},
-year={2024}
+@misc{correa2024tucanoadvancingneuraltext,
+title={{Tucano: Advancing Neural Text Generation for Portuguese}},
+author={Corr{\^e}a, Nicholas Kluge and Sen, Aniket and Falk, Sophia and Fatimah, Shiza},
+year={2024},
+eprint={2411.07854},
+archivePrefix={arXiv},
+primaryClass={cs.CL},
+url={https://arxiv.org/abs/2411.07854},
 }
 
 ```
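One of the unchanged context lines in the diff shows the card loading the dataset with `load_dataset(..., streaming=True)`. A minimal sketch of that usage is below; note that the `score` field name and the 0.5 cutoff in the helper are illustrative assumptions for the kind of text-quality filtering the card describes, not the dataset's documented schema.

```python
def keep_high_quality(example, threshold=0.5):
    # Hypothetical quality filter: keep rows whose GPT-4o-assigned
    # score clears a chosen cutoff. The "score" field name and the
    # 0.5 threshold are assumptions, not taken from the dataset card.
    return example.get("score", 0.0) >= threshold

if __name__ == "__main__":
    # Requires the 🤗 Datasets library; streaming fetches the parquet
    # shards lazily instead of downloading the whole split up front.
    from datasets import load_dataset

    dataset = load_dataset(
        "TucanoBR/GigaVerbo-Text-Filter", split="train", streaming=True
    )
    for row in dataset.filter(keep_high_quality).take(3):
        print(row)
```

Streaming keeps memory use flat, which matters for a corpus-scale filter dataset; the same `filter` call works on the materialized (non-streaming) dataset as well.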